Bayesian PCA
Christopher M. Bishop
Microsoft Research
St. George House, 1 Guildhall Street
Cambridge CB2 3NH, U.K.
cmbishop@microsoft.com
Abstract
The technique of principal component analysis (PCA) has recently been
expressed as the maximum likelihood solution for a generative latent
variable model. In this paper we use this probabilistic reformulation
as the basis for a Bayesian treatment of PCA. Our key result is that the effective dimensionality of the latent space (equivalent to the number of
retained principal components) can be determined automatically as part
of the Bayesian inference procedure. An important application of this
framework is to mixtures of probabilistic PCA models, in which each
component can determine its own effective complexity.
1 Introduction
Principal component analysis (PCA) is a widely used technique for data analysis. Recently
Tipping and Bishop (1997b) showed that a specific form of generative latent variable model
has the property that its maximum likelihood solution extracts the principal sub-space of
the observed data set. This probabilistic reformulation of PCA permits many extensions
including a principled formulation of mixtures of principal component analyzers, as discussed by Tipping and Bishop (1997a).
A central issue in maximum likelihood (as well as conventional) PCA is the choice of
the number of principal components to be retained. This is particularly problematic in a
mixture modelling context since ideally we would like the components to have potentially
different dimensionalities. However, an exhaustive search over the choice of dimensionality
for each of the components in a mixture distribution can quickly become computationally
intractable. In this paper we develop a Bayesian treatment of PCA, and we show how this
leads to an automatic selection of the appropriate model dimensionality. Our approach
avoids a discrete model search, involving instead the use of continuous hyper-parameters
to determine an effective number of principal components.
2 Maximum Likelihood PCA
Consider a data set D of observed d-dimensional vectors D = {t_n} where n ∈ {1, ..., N}. Conventional principal component analysis is obtained by first computing the sample covariance matrix given by

$$ S = \frac{1}{N} \sum_{n=1}^{N} (t_n - \bar{t})(t_n - \bar{t})^{\mathrm{T}} \qquad (1) $$
where t̄ = N⁻¹ Σ_n t_n is the sample mean. Next the eigenvectors u_i and eigenvalues λ_i of S are found, where S u_i = λ_i u_i and i = 1, ..., d. The eigenvectors corresponding to the q largest eigenvalues (where q < d) are retained, and a reduced-dimensionality representation of the data set is defined by x_n = U_q^T (t_n - t̄), where U_q = (u_1, ..., u_q).
It is easily shown that PCA corresponds to the linear projection of a data set under which
the retained variance is a maximum, or equivalently the linear projection for which the
sum-of-squares reconstruction cost is minimized.
A significant limitation of conventional PCA is that it does not define a probability distribution. Recently, however, Tipping and Bishop (1997b) showed how PCA can be reformulated as the maximum likelihood solution of a specific latent variable model, as follows. We first introduce a q-dimensional latent variable x whose prior distribution is a zero-mean Gaussian p(x) = N(0, I_q), where I_q is the q-dimensional unit matrix. The observed variable t is then defined as a linear transformation of x with additive Gaussian noise, t = Wx + μ + ε, where W is a d × q matrix, μ is a d-dimensional vector and ε is a zero-mean Gaussian-distributed vector with covariance σ²I_d. Thus p(t|x) = N(Wx + μ, σ²I_d). The marginal
distribution of the observed variable is then given by the convolution of two Gaussians and
is itself Gaussian
$$ p(t) = \int p(t|x)\, p(x)\, dx = N(\mu, C) \qquad (2) $$
where the covariance matrix C = WWᵀ + σ²I_d. The model (2) represents a constrained Gaussian distribution governed by the parameters μ, W and σ².
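To make the generative model concrete, here is a minimal sampling sketch (ours, not part of the paper; all sizes and parameter values are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
d, q, N = 5, 2, 1000                     # illustrative dimensions and sample size
W = rng.normal(size=(d, q))              # assumed loading matrix
mu = np.zeros(d)
sigma = 0.1
X = rng.normal(size=(N, q))              # latent x ~ N(0, I_q)
T = X @ W.T + mu + sigma * rng.normal(size=(N, d))   # t = W x + mu + eps

# The empirical covariance approaches C = W W^T + sigma^2 I_d, as in (2):
C = W @ W.T + sigma**2 * np.eye(d)
print(np.abs(np.cov(T.T, bias=True) - C).max())      # small for large N
```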
The log probability of the parameters under the observed data set D is then given by

$$ L(\mu, W, \sigma^2) = -\frac{N}{2}\left\{ d \ln(2\pi) + \ln|C| + \mathrm{Tr}\left[C^{-1} S\right] \right\} \qquad (3) $$
where S is the sample covariance matrix given by (1). The maximum likelihood solution for μ is easily seen to be μ_ML = t̄. It was shown by Tipping and Bishop (1997b) that the stationary points of the log likelihood with respect to W satisfy
$$ W_{\mathrm{ML}} = U_q \left( \Lambda_q - \sigma^2 I_q \right)^{1/2} \qquad (4) $$
where the columns of U_q are eigenvectors of S, with corresponding eigenvalues in the diagonal matrix Λ_q. It was also shown that the maximum of the likelihood is achieved when the q largest eigenvalues are chosen, so that the columns of U_q correspond to the principal eigenvectors, with all other choices of eigenvalues corresponding to saddle points. The maximum likelihood solution for σ² is then given by
$$ \sigma^2_{\mathrm{ML}} = \frac{1}{d-q} \sum_{i=q+1}^{d} \lambda_i \qquad (5) $$
which has a natural interpretation as the average variance lost per discarded dimension. The density model (2) thus represents a probabilistic formulation of PCA. It is easily verified that conventional PCA is recovered in the limit σ² → 0.
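For illustration, the closed-form maximum likelihood fit of equations (1), (4) and (5) can be written in a few lines of Python/NumPy (our own sketch; the arbitrary rotation in the general solution is taken to be the identity, and all names are ours):

```python
import numpy as np

def ppca_ml(T, q):
    """Maximum likelihood probabilistic PCA (Tipping & Bishop, 1997b).
    T: (N, d) array of observations t_n; q: latent dimensionality, q < d."""
    N, d = T.shape
    mu = T.mean(axis=0)                        # mu_ML = sample mean t-bar
    S = (T - mu).T @ (T - mu) / N              # sample covariance, eq. (1)
    lam, U = np.linalg.eigh(S)                 # eigenvalues in ascending order
    lam, U = lam[::-1], U[:, ::-1]             # sort descending
    sigma2 = lam[q:].mean()                    # eq. (5): mean discarded variance
    W = U[:, :q] * np.sqrt(np.maximum(lam[:q] - sigma2, 0.0))   # eq. (4)
    return mu, W, sigma2
```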
Probabilistic PCA has been successfully applied to problems in data compression, density
estimation and data visualization, and has been extended to mixture and hierarchical mixture models. As with conventional PCA, however, the model itself provides no mechanism
for determining the value of the latent-space dimensionality q. For q = d - 1 the model
is equivalent to a full-covariance Gaussian distribution, while for q < d - 1 it represents
a constrained Gaussian in which the variance in the remaining d − q directions is modelled by the single parameter σ². Thus the choice of q corresponds to a problem in model
complexity optimization. If data is plentiful, then cross-validation to compare all possible
values of q offers a possible approach. However, this can quickly become intractable for
mixtures of probabilistic PCA models if we wish to allow each component to have its own
q value.
3 Bayesian PCA
The issue of model complexity can be handled naturally within a Bayesian paradigm.
Armed with the probabilistic reformulation of PCA defined in Section 2, a Bayesian treatment of PCA is obtained by first introducing a prior distribution p(μ, W, σ²) over the parameters of the model. The corresponding posterior distribution p(μ, W, σ² | D) is then
obtained by multiplying the prior by the likelihood function, whose logarithm is given by
(3), and normalizing. Finally, the predictive density is obtained by marginalizing over the
parameters, so that
$$ p(t|D) = \iiint p(t|\mu, W, \sigma^2)\, p(\mu, W, \sigma^2 | D)\, d\mu\, dW\, d\sigma^2 \qquad (6) $$
In order to implement this framework we must address two issues: (i) the choice of prior
distribution, and (ii) the formulation of a tractable algorithm. Our focus in this paper is on
the specific issue of controlling the effective dimensionality of the latent space (corresponding to the number of retained principal components). Furthermore, we seek to avoid discrete model selection and instead use continuous hyper-parameters to determine automatically an appropriate effective dimensionality for the latent space as part of the process of
Bayesian inference. This is achieved by introducing a hierarchical prior p(W|α) over the matrix W, governed by a q-dimensional vector of hyper-parameters α = {α_1, ..., α_q}.
The dimensionality of the latent space is set to its maximum possible value q = d - 1, and
each hyper-parameter controls one of the columns of the matrix W through a conditional
Gaussian distribution of the form
$$ p(W|\alpha) = \prod_{i=1}^{d-1} \left( \frac{\alpha_i}{2\pi} \right)^{d/2} \exp\left\{ -\tfrac{1}{2}\, \alpha_i \|w_i\|^2 \right\} \qquad (7) $$
where {Wi} are the columns of W. This form of prior is motivated by the framework
of automatic relevance determination (ARD) introduced in the context of neural networks
by Neal and MacKay (see MacKay, 1995). Each α_i controls the inverse variance of the corresponding w_i, so that if a particular α_i has a posterior distribution concentrated at large values, the corresponding w_i will tend to be small, and that direction in latent space will be effectively 'switched off'. The probabilistic structure of the model is displayed graphically in Figure 1.
In order to make use of this model in practice we must be able to marginalize over the
posterior distribution of W. Since this is analytically intractable we have developed three
alternative approaches based on (i) type-II maximum likelihood using a local Gaussian
approximation to a mode of the posterior distribution (MacKay, 1995), (ii) Markov chain
Monte Carlo using Gibbs sampling, and (iii) variational inference using a factorized approximation to the posterior distribution. Here we describe the first of these in more detail.
Figure 1: Representation of Bayesian PCA as a probabilistic graphical model showing the hierarchical prior over W governed by the vector of hyper-parameters α. The box denotes a 'plate' comprising a data set of N independent observations of the visible vector t_n (shown shaded) together with the corresponding hidden variables x_n.
The location W_MP of the mode can be found by maximizing the log posterior distribution given, from Bayes' theorem, by
$$ \ln p(W|D) = L - \frac{1}{2} \sum_{i=1}^{d-1} \alpha_i \|w_i\|^2 + \text{const.} \qquad (8) $$
where L is given by (3). For the purpose of controlling the effective dimensionality of the latent space, it is sufficient to treat μ, σ² and α as parameters whose values are to be estimated, rather than as random variables. In this case there is no need to introduce priors over these variables, and we can determine μ and σ² by maximum likelihood. To estimate α we use type-II maximum likelihood, corresponding to maximizing the marginal likelihood p(D|α) in which we have integrated over W using the quadratic approximation.
It is easily shown (Bishop, 1995) that this leads to a re-estimation formula for the hyper-parameters α_i of the form

$$ \alpha_i = \frac{\gamma_i}{\|w_i\|^2} \qquad (9) $$

where γ_i = d − α_i Tr_i(H⁻¹) is the effective number of parameters in w_i, H is the Hessian matrix given by the second derivatives of ln p(W|D) with respect to the elements of W (evaluated at W_MP), and Tr_i(·) denotes the trace of the sub-matrix corresponding to the vector w_i.
For the results presented in this paper, we make the further simplification of replacing γ_i in (9) by d, corresponding to the assumption that all model parameters are 'well-determined'. This significantly reduces the computational cost since it avoids evaluation and manipulation of the Hessian matrix. An additional consequence is that vectors w_i for which there is insufficient support from the data will be driven to zero, with the corresponding α_i → ∞, so that unused dimensions are switched off completely. We define the effective dimensionality of the model to be the number of vectors w_i whose values remain non-zero.
The solution for W_MP can be found efficiently using the EM algorithm, in which the E-step involves evaluation of the expected sufficient statistics of the latent-space posterior distribution, given by

$$ \langle x_n \rangle = M^{-1} W^{\mathrm{T}} (t_n - \mu) \qquad (10) $$

$$ \langle x_n x_n^{\mathrm{T}} \rangle = \sigma^2 M^{-1} + \langle x_n \rangle \langle x_n \rangle^{\mathrm{T}} \qquad (11) $$

where M = WᵀW + σ²I_q. The M-step involves updating the model parameters using

$$ \widetilde{W} = \left[ \sum_n (t_n - \mu) \langle x_n \rangle^{\mathrm{T}} \right] \left[ \sum_n \langle x_n x_n^{\mathrm{T}} \rangle + \sigma^2 A \right]^{-1} \qquad (12) $$

$$ \widetilde{\sigma}^2 = \frac{1}{Nd} \sum_{n=1}^{N} \left\{ \| t_n - \mu \|^2 - 2 \langle x_n \rangle^{\mathrm{T}} \widetilde{W}^{\mathrm{T}} (t_n - \mu) + \mathrm{Tr}\left[ \langle x_n x_n^{\mathrm{T}} \rangle \widetilde{W}^{\mathrm{T}} \widetilde{W} \right] \right\} \qquad (13) $$

where A = diag(α_i). Optimization of W and σ² is alternated with re-estimation of α, using (9) with γ_i = d, until all of the parameters satisfy a suitable convergence criterion.
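A minimal sketch of this EM procedure (ours, not the authors' code), using the γ_i = d simplification described above:

```python
import numpy as np

def bayesian_pca(T, n_iter=500, tol=1e-6):
    """Bayesian PCA by EM with ARD hyper-parameters, eqs. (9)-(13),
    with gamma_i = d so that alpha_i = d / ||w_i||^2."""
    N, d = T.shape
    q = d - 1                                   # maximum latent dimensionality
    mu = T.mean(axis=0)
    Tc = T - mu
    rng = np.random.default_rng(0)
    W = 0.1 * rng.normal(size=(d, q))
    sigma2, alpha = 1.0, np.ones(q)
    for _ in range(n_iter):
        # E-step, eqs. (10)-(11)
        M = W.T @ W + sigma2 * np.eye(q)
        Minv = np.linalg.inv(M)
        X = Tc @ W @ Minv                       # row n holds <x_n>
        Sxx = N * sigma2 * Minv + X.T @ X       # sum_n <x_n x_n^T>
        # M-step, eq. (12), with A = diag(alpha)
        W_new = (Tc.T @ X) @ np.linalg.inv(Sxx + sigma2 * np.diag(alpha))
        # M-step, eq. (13)
        sigma2_new = (np.sum(Tc**2) - 2.0 * np.sum((Tc @ W_new) * X)
                      + np.trace(Sxx @ W_new.T @ W_new)) / (N * d)
        # type-II ML re-estimation, eq. (9) with gamma_i = d
        alpha = d / np.maximum(np.sum(W_new**2, axis=0), 1e-12)
        done = np.max(np.abs(W_new - W)) < tol
        W, sigma2 = W_new, sigma2_new
        if done:
            break
    q_eff = int(np.sum(np.sum(W**2, axis=0) > 1e-8))   # surviving columns
    return mu, W, sigma2, alpha, q_eff
```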
As an illustration of the operation of this algorithm, we consider a data set consisting of 300
points in 10 dimensions, in which the data is drawn from a Gaussian distribution having
standard deviation 1.0 in 3 directions and standard deviation 0.5 in the remaining 7 directions. The result of fitting both maximum likelihood and Bayesian PCA models is shown
in Figure 2. In this case the Bayesian model has an effective dimensionality of q_eff = 3.
Figure 2: Hinton diagrams of the matrix W for a data set in 10 dimensions having m = 3 directions with larger variance than the remaining 7 directions. The left plot shows W from maximum likelihood PCA while the right plot shows W_MP from the Bayesian approach, showing how the model is able to discover the appropriate dimensionality by suppressing the 6 surplus degrees of freedom.
The effective dimensionality found by Bayesian PCA will be dependent on the number N of points in the data set. For N → ∞ we expect q_eff → d − 1, and in this limit the maximum likelihood framework and the Bayesian approach will give identical results. For finite data sets the effective dimensionality may be reduced, with degrees of freedom for which there is insufficient evidence in the data set being suppressed. The variance of the data in the remaining d − q_eff directions is then accounted for by the single degree of freedom defined by σ². This is illustrated by considering data in 10 dimensions generated from a Gaussian distribution with standard deviations given by {1.0, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1}. In Figure 3 we plot q_eff (averaged over 50 independent experiments) versus the number N of points in the data set.
These results indicate that Bayesian PCA is able to determine automatically a suitable
effective dimensionality q_eff for the principal component subspace, and therefore offers a
practical alternative to exhaustive comparison of dimensionalities using techniques such as
cross-validation. As an illustration of the generalization capability of the resulting model
we consider a data set of 20 points in 10 dimensions generated from a Gaussian distribution
having standard deviations in 5 directions given by (1.0, 0.8, 0.6, 0.4, 0.2) and standard
deviation 0.04 in the remaining 5 directions. We fit maximum likelihood PCA models to
this data having q values in the range 1-9 and compare their log likelihoods on both the
training data and on an independent test set, with the results (averaged over 10 independent
experiments) shown in Figure 4. Also shown are the corresponding results obtained from
Bayesian PCA.
Figure 3: Plot of the average effective dimensionality of the Bayesian PCA model versus the number N of data points for data in a 10-dimensional space.
Figure 4: Plot of the log likelihood for the training set (dashed curve) and the test set (solid curve)
for maximum likelihood PCA models having q values in the range 1-9, showing that the best generalization is achieved for q = 5 which corresponds to the number of directions of significant variance
in the data set. Also shown are the training (circle) and test (cross) results from a Bayesian PCA
model, plotted at the average effective q value given by q_eff = 5.2. We see that the Bayesian PCA
model automatically discovers the appropriate dimensionality for the principal component subspace,
and furthermore that it has a generalization performance which is close to that of the optimal fixed q
model.
4 Mixtures of Bayesian PCA Models
Given a probabilistic formulation of PCA it is straightforward to construct a mixture distribution comprising a linear superposition of principal component analyzers. In the case of
maximum likelihood PCA we have to choose both the number M of components and the
latent space dimensionality q for each component. For moderate numbers of components
and data spaces of several dimensions it quickly becomes intractable to explore the exponentially large number of combinations of q values for a given value of M. Here Bayesian
PCA offers a significant advantage in allowing the effective dimensionalities of the models
to be determined automatically.
As an illustration we consider a density estimation problem involving hand-written digits
from the CEDAR database. The data set comprises 8 x 8 scaled and smoothed gray-scale
images of the digits '2', '3' and '4', partitioned randomly into 1500 training, 900 validation
and 900 test points. For mixtures of maximum likelihood PCA the model parameters can be
determined using the EM algorithm in which the M-step uses (4) and (5), with eigenvectors and eigenvalues obtained from the weighted covariance matrices in which the weighting coefficients are the posterior probabilities for the components determined in the E-step. Since,
for maximum likelihood PCA, it is computationally impractical to explore independent q
values for each component we consider mixtures in which every component has the same
dimensionality. We therefore train mixtures having M ∈ {2, 4, 6, 8, 10, 12, 14, 16, 18} for all values q ∈ {2, 4, 8, 12, 16, 20, 25, 30, 40, 50}. In order to avoid singularities associated with the more complex models we omit any component from the mixture for which the value of σ² goes to zero during the optimization. The highest log likelihood on the validation set (−295) is obtained for M = 6 and q = 50.
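A sketch of this responsibility-weighted M-step (ours; the E-step that produces the responsibilities R is omitted, and the names are illustrative):

```python
import numpy as np

def mixture_ppca_m_step(T, R, q):
    """M-step for a mixture of probabilistic PCA models.
    T: (N, d) data; R: (N, K) posterior responsibilities from the E-step.
    Each component is refit via eqs. (4)-(5) on its weighted covariance."""
    N, d = T.shape
    params = []
    for k in range(R.shape[1]):
        r = R[:, k]
        Nk = r.sum()
        mu_k = (r[:, None] * T).sum(axis=0) / Nk
        Tc = T - mu_k
        S_k = (r[:, None] * Tc).T @ Tc / Nk      # weighted covariance
        lam, U = np.linalg.eigh(S_k)
        lam, U = lam[::-1], U[:, ::-1]
        sigma2 = lam[q:].mean()                   # eq. (5)
        W = U[:, :q] * np.sqrt(np.maximum(lam[:q] - sigma2, 0.0))  # eq. (4)
        params.append((Nk / N, mu_k, W, sigma2))  # mixing weight, mean, W, noise
    return params
```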
For mixtures of Bayesian PCA models we need only explore alternative values for M ,
which are taken from the same set as for the mixtures of maximum likelihood PCA. Again,
the best performance on the validation set (−293) is obtained for M = 6. The values of the
log likelihood for the test set were -295 (maximum likelihood PCA) and -293 (Bayesian
PCA). The mean vectors μ_i for each of the 6 components of the Bayesian PCA mixture
model are shown in Figure 5.
(Effective dimensionalities of the six components: 62, 54, 63, 60, 62, 59.)
Figure 5: The mean vectors for each of the 6 components in the Bayesian PCA mixture model,
displayed as an 8 x 8 image, together with the corresponding values of the effective dimensionality.
The Bayesian treatment of PCA discussed in this paper can be particularly advantageous
for small data sets in high dimensions as it can avoid the singularities associated with
maximum likelihood (or conventional) PCA by suppressing unwanted degrees of freedom
in the model. This is especially helpful in a mixture modelling context, since the effective
number of data points associated with specific 'clusters' can be small even when the total
number of data points appears to be large.
References
Bishop, C. M. (1995). Neural Networks for Pattern Recognition. Oxford University
Press.
MacKay, D. J. C. (1995). Probable networks and plausible predictions - a review of
practical Bayesian methods for supervised neural networks. Network: Computation
in Neural Systems 6 (3), 469-505.
Tipping, M. E. and C. M. Bishop (1997a). Mixtures of principal component analysers. In Proceedings IEE Fifth International Conference on Artificial Neural Networks, Cambridge, U.K., July, pp. 13-18.
Tipping, M. E. and C. M. Bishop (1997b). Probabilistic principal component analysis. Accepted for publication in the Journal of the Royal Statistical Society, B.
AN ANALOG SELF-ORGANIZING
NEURAL NETWORK CHIP
James R. Mann
MIT Lincoln Laboratory
244 Wood Street
Lexington, MA 02173-0073
Sheldon Gilbert
4421 West Estes
Lincolnwood, IL 60646
ABSTRACT
A design for a fully analog version of a self-organizing feature map neural
network has been completed. Several parts of this design are in fabrication.
The feature map algorithm was modified to accommodate circuit solutions
to the various computations required. Performance effects were measured
by simulating the design as part of a frontend for a speech recognition
system. Circuits are included to implement both activation computations and
weight adaptation, or learning. External access to the analog weight values is
provided to facilitate weight initialization, testing and static storage. This
fully analog implementation requires an order of magnitude less area than
a comparable digital/analog hybrid version developed earlier.
INTRODUCTION
This paper describes an analog version of a self-organizing feature map circuit. The design
implements Kohonen's self-organizing feature map algorithm [Kohonen, 1988] with some
modifications imposed by practical circuit limitations. The feature map algorithm automatically
adapts connection weights to nodes in the network such that each node comes to represent a
distinct class of features in the input space. The system also self-organizes such that neighboring
nodes become responsive to similar input classes. The prototype circuit was fabricated in two
parts (for testability); a 4 node, 4 input synaptic array, and a weight adaptation and refresh
circuit. A functional simulator was used to measure the effects of design constraints. This
simulator evolved with the design to the point that actual device characteristics and process
statistics were incorporated. The feature map simulator was used as a front-end processor to
a speech recognition system whose error rates were used to monitor the effects of parameter
changes on performance.
This design has evolved over the past two years from earlier experiments with a perceptron
classifier [Raffel, 1987] and an earlier version of a self-organizing feature map circuit [Mann,
1988]. The perceptron classifier used a connection matrix built with multiplying D/A converters
to perform the product operation for the sum-of-products computation common to all neural
network algorithms. The feature map circuit also used MDAC's to perform a more complicated
calculation to realize a squared Euclidean distance measure. The weights were also stored
digitally, but in a unary encoded format to simplify the weight adjustment operation. This circuit
contained all of the control necessary to perform weight adaptation, except for selecting a
maximum responder.
The new feature map circuit described in this paper replaces the digital weight storage with
dynamic analog charge storage on a capacitor. This paper will describe the circuitry and discuss
problems associated with this approach to neural network implementations.
Reprinted with permission of Lincoln Laboratory, Massachusetts Institute of Technology, Lexington, Massachusetts
ALGORITHM DESCRIPTION
The original Kohonen algorithm is based on a network topology such as shown in Figure 1. This
illustrates a linear array of nodes, consistent with the hardware implementation being described.
Each node in the circuit computes a level of activity [D_j(t)] which indicates the similarity between the current input vector [x_i(t)] and its respective weight vector [w_ij(t)]. Traditionally
this would be the squared Euclidean distance given by the activation equation in the figure. If
the inputs are normalized, a dot product operation can be substituted. The node most
representative of the current input will be the one with the minimum or maximum output
activity (classification), depending on which distance measure is used. The node number of the
min./max. responder [j*] then comes to represent that class of which the input is a member.
If the network is still in its learning phase, an adaptation process is invoked. This process
updates the weights of all the nodes lying within a prescribed neighborhood [NE_j*(t)] of the
selected node. The weights are adjusted such that the distance between the input and weight
vector is diminished. This is accomplished by decreasing the individual differences between each
component pair of the two vectors. The rate of learning is controlled by the gain term [α(t)].
Both the neighborhood and gain terms decrease during the learning process, stopping when the
gain term reaches 0.
The following strategy was selected for the circuit implementation. First, it was assumed that
inputs are normalized, thereby permitting the simpler dot product operation to be adopted.
Second, weight adjustments were reduced to a simple increment/decrement operation determined
by the sign of the difference between the components of the input and weight vector. Both of
these simplifications were tested in the simulations described earlier and had negligible effects
on overall performance as a speech vector quantizer. In addition, the prototype circuits of the
analog weight version of the feature map vector quantizer do not include either the max. picker
or the neighborhood operator. To date, a version of a max. picker has not yet been chosen,
though many forms exist. The neighborhood operator was included in the previous version of
this design, but was not repeated on this first pass.
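A sketch of the two simplified operations just described (ours, not the chip's control logic; the names are illustrative):

```python
import numpy as np

def classify(x, W):
    """With normalized inputs, winner selection uses the dot product
    (maximum response) in place of the squared Euclidean distance."""
    return int(np.argmax(W @ x))

def adapt(x, W, j_star, neighborhood, delta):
    """Sign-based weight adjustment: every weight of every node in the
    winner's neighborhood moves one quantization step `delta` toward
    the input, using only the sign of the difference."""
    for j in neighborhood(j_star):
        W[j] += delta * np.sign(x - W[j])
    return W
```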
HARDWARE DESCRIPTION
SYNAPTIC ARRAY
A transistor constitutes the basic synaptic connection used in this design. An analog input is
represented by a voltage v(Xi) on the drain of the transistor. The weight is stored as charge
q(Wij) on the gate of the transistor. If the gate voltage exceeds the maximum input voltage by
an amount greater than the transistor threshold voltage, the device will be operating in the
ohmic region. In this region the current [i(Dj)] through the transistor is proportional to the
product of the input and weight voltages. This effectively computes one contribution to the dot
product. By connecting many synapses to a single wire, current summing is performed, in
accordance with Kirchhoff's current law, producing the desired sum of products activity.
Figure 2 shows the transistor current as a function of the input and weight voltages. These
curves merely serve to demonstrate how a transistor operating in the ohmic region will
approximate a product operation.
As the input voltage begins to approach the saturation region of the transistor, the curves begin
to bend over. For use in competitive learning networks, like the feature map algorithm, it is
only important that the computation be monotonically increasing. These curves were the
characteristics of the computation used in the simulations. The absolute values given for output
current do not reflect those produced in the actual circuit.
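For intuition only, the ohmic-region behavior can be modelled with the standard square-law MOSFET equation; the device constants below are assumed, not values measured from the chip:

```python
def ohmic_current(v_ds, v_gs, v_t=1.0, k=100e-6):
    """Triode-region drain current I_d = k*((V_gs - V_t)*V_ds - V_ds**2/2).
    For small V_ds this is roughly proportional to the weight-input
    product (V_gs - V_t)*V_ds, and it is monotonic in both arguments
    while V_ds < V_gs - V_t, which is all a competitive network needs."""
    return k * ((v_gs - v_t) * v_ds - v_ds**2 / 2.0)

def node_activity(x_volts, w_volts):
    """Current summing on a shared wire (Kirchhoff's current law)
    implements the sum of products across one node's synapses."""
    return sum(ohmic_current(x, w) for x, w in zip(x_volts, w_volts))
```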
ACTIVATION:

$$ D_j(t) = \sum_{i=1}^{m} \left( x_i(t) - w_{ij}(t) \right)^2 $$

CLASSIFICATION:

$$ j^* = \arg\min_j D_j(t) $$

ADAPTATION:

$$ w_{ij}(t+1) = w_{ij}(t) + \alpha(t) \left[ x_i(t) - w_{ij}(t) \right], \qquad j \in NE_{j^*}(t) $$
Figure 1. Description of Kohonen's original feature map algorithm using a
linear array of nodes.
[Figure 2 plot: transistor output current versus input voltage V_ds from 0 to 2 V, one curve per weight voltage V_gs from 3.0 V to 5.0 V in 0.2 V steps.]
Figure 2. Typical I-V curves for a transistor operating in the ohmic region.
It should also be noted that there is no true zero weight; even the zero weight voltage
contributes to the output current. But again, in a competitive network, it is only important that
it contribute less than a higher weight value at that same input voltage.
In short, neither small non-linearities nor offsets interfere with circuit operation if the synapse
characteristic is monotonic with weight value and input.
SYSTEM
Figure 3 is a block diagram of the small four-node hardware prototype. The nodes are oriented
horizontally, their outputs identified as 10 through 13 along the right-hand edge, representing the
accumulated currents. The analog inputs [X3-XO] come in from the bottom and, traveling
vertically, make connections with each node at the boxes identified as synapses. Each synapse
performs its product operation between the analog weight stored at that node and the input
potential.
Along the top and left sides are the control circuits for accessing weight information. The two
storage registers associated with each synapse are the control signals used to select the reading
and writing of weights. Weights are accessed serially by connecting to a global read and write
wire, W- and W + respectively. Besides the need for modification, the weights also drift with
time, much like DRAM storage, and therefore must be refreshed periodically. This is also
performed by the adaptation circuit that will be presented separately.
Control is provided by having a single "1" bit circulating through the DRAM storage bits
associated with each synapse. This process goes on continuously in the background after being
initialized, in parallel with the activity calculations. If the circuit is not being trained, the
adaptation circuit continues to refresh the existing weights.
WEIGHT MODIFICATION & REFRESH
A complete synapse, along with the current to voltage conversion circuit used to read the weight
contents, is shown in Figure 4. The current synapse is approximately the size of two 6-transistor
static RAM bits. This approximation will be used to make synaptic population estimates from
current SRAM design experience. The six transistors along the top of the synapse circuit are
two, three-transistor dynamic RAM cells used to control access to weight contents. These are
represented in Figure 3 as the two storage elements associated with each synapse and are used
as descnbed earlier.
READING THE WEIGHT
The two serial, vertically oriented transistors in the synapse circuit are used to sense the stored
weight value. The bottom (sensing) transistor's channel is modulated by the charge stored on
the weight capacitor. The sensing transistor is selected through the binary state of the 3T
DRAM bit immediately above it. These two transistors used for reading the weight are
duplicated in the outpu~ circuit shown to the right of the synapse. The current produced in the
global read wire through the sensing transistor, is set up in the cascode current mirror
arrangement in the output circuit. A mirrored version of the current, leaving the right hand side
of the cascode mirror, is established in the duplicate transistor pair. The gate of this transistor
is controlled by the operational amplifier as shown, and must be equivalent to the weight value at
the connection being read, if the drains are both at the same potential. This is guaranteed by
the cascode mirror arrangement selected, and is set by the minus input to the amplifier.
WRITING THE WEIGHT
The lone horizontal transistor at the bottom right corner of the synapse circuit is the weight access transistor. This connects the global write wire [W+] to the weight capacitor [W_ij]. This
[Figure 3 diagram, 581 × 320 microns: the 4 × 4 synaptic array with row control input (ROW-CTRL-IN), global write/read wires W+ and W−, and analog inputs X3–X0.]
Figure 3. A block diagram of the 4 x 4 synaptic array integrated circuit.
[Figure 4 schematic: the synapse circuit (82 × 32 microns) with its read/write control signals (rd1, rd2, wr1, wr2) and the associated output circuit.]
Figure 4. Full synapse circuit. Activation transistor is at bottom central
position in the synapse circuit.
occurs whenever the DRAM element directly above it is holding a "1". When the access
transistor is off, current leakage takes place, causing the voltage on the capacitor to drift with
time.
There are two requirements on the weight drift for our application: that drift rates be as slow
as possible, and that they drift in a known direction, in our case, toward ground. This is true
because the refresh mechanism always raises the voltage to the top of a quantized voltage bin.
A cross-section of the access transistor in Figure 5 identifies the two major leakage components: reverse diode leakage to the grounded substrate (or p-well) [I_0], and subthreshold channel conduction to the global write wire [I_d]. The reverse diode leakage current is proportional to
the area of the diffusion while the channel conduction leakage is proportional to the channel
W/L ratio. Maintaining a negative voltage drift can be accomplished by sizing the devices such
that reverse diode leakage dominates the channel conduction. This however would degrade the
overall storage performance, and hence the minimum refresh cycle time. This can be relaxed by
the technique of holding the global write line at some low voltage during everything but write
cycles. This then makes the average voltage seen across the channel less than the minimum
weight voltage, always resulting in a net voltage drop.
Also, these leakage currents are exponentially dependent on temperature and can be decreased
by an order of magnitude with just 10's of degrees of cooling [Schwartz, 1988].
WEIGHT REPRESENTATION
Weights, while analog, are restricted to discrete voltages. This permits the stored voltage to drift
by a restricted amount (a bin), and still be refreshed to its original value. The drift rate just
discussed, combined with the bin size (determined by the levels of quantization (i.e. 'of bins) and
weight range (i.e. column height?, determines the refresh cycle time. The refresh cycle time,
in tum, determines how many synapses (or weights) can be served by a single adaptation circuit.
This means that doubling the range of the weight voltage would permit either doubling the
number of quantization levels or doubling the number of synapses served by one adaptation
circuit.
Weight adjustments during learning involve raising or lowering the current weight voltage to the
bins immediately above or below the current bin. This constitutes a digital increment or
decrement operation.
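A rough worked example of this budget (ours; the drift rate and per-weight refresh time are assumed for illustration, while the 2 V range and 128 levels match the test conditions reported later):

```python
weight_range_v = 2.0                    # Wmax - Wmin
levels = 128                            # quantization levels (bins)
bin_size_v = weight_range_v / levels    # ~15.6 mV per bin

drift_rate_v_per_s = 0.005              # assumed leakage-induced drift
refresh_cycle_s = bin_size_v / drift_rate_v_per_s    # ~3.1 s

per_weight_refresh_s = 10e-6            # assumed time to refresh one weight
synapses_per_circuit = int(refresh_cycle_s / per_weight_refresh_s)
print(bin_size_v, refresh_cycle_s, synapses_per_circuit)
```

Doubling the weight range doubles either `levels` or `synapses_per_circuit`, as stated above.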
ADAPTATION CIRCUITRY
Weight adjustments are made based upon a comparison between the current weight value and
the input voltage connected to that weight. But, as these two ranges are not coincident, the
comparison is made between two binary values produced by parallel flash A/D converters
[Brown, 1987]. The two opposing AID converters in Figure 6, produce a 1-of-N code, used in
the comparison. The converters are composed of two stages to conserve area. The first stage performs a coarse conversion which in turn drives the upper and lower rails of the second stage converter. The selection logic decides which of the voltages among those in the second stage weight conversion circuit to route back on the global write wire [W+].
This configuration provides an easy mechanism for setting the ranges on both the inputs and
weights. This is accomplished merely by setting the maximum and minimum voltages desired on the respective conversion circuits ([X_min, X_max], [W_min, W_max]).
TEST RESULTS
Both circuits were fabricated in MOSIS. The synaptic array was fabricated in a 3 micron 2 metal
CMOS process while the adaptation circuitry was fabricated in a similar 2 micron process. To
date, only the synaptic array has been tested. In these tests, the input was restricted to a 0 to 1
Figure 5. Cross-sectional view of a weight access transistor with leakage
currents.
[Figure 6 schematic: the two opposing A/D converters and the SELECT LOGIC block driving the global write wire W+.]
Figure 6. Block diagram of the weight adaptation and refresh circuit. Comparison of digital A/D outputs and new weight selection takes place in the box marked SELECT LOGIC.
V range while the weight range was 2 to 3 V. Most of these early tests were done with binary
weights, either 2 V or 3 V, corresponding to a "0" and a "1".
The synapses and associated control circuitry all work as expected. The circuit can be clocked
up to 7 MHz. The curves shown in Figure 7 display a typical neuron output during two modes
of operation: a set of four binary weights with all of the inputs swept together over their
operating range, and a single, constant input with its weight being swept through its operating
range.
The graphs in Figure 8 show the temporal behavior of the weight voltage stored at a single
synapse. On the left is plotted the output current to weight voltage, for converting between the two quantities. The right hand plot is the output current of the synapse plotted against time. If the weight voltage bin size is set to 15 mV (2 V range, 128 bins), a 3 to 4 second refresh cycle
time limit would be required. This is a very lenient constraint and may permit a much finer
quantization than expected.
The circuitry for reading the weights was tested and appears to be inoperative. The cascode
mirror requires a very high potential at the p-channel sources which causes the circuit to latch
up when the clocks are turned on. This circuit will be isolated and tested under static
conditions.
CONCLUSIONS
In summary, a design for an analog version of a self-organizing feature map has been completed
and prototype versions of the synaptic array and the adaptation circuitry have been fabricated.
The devices are still undergoing testing and characterization, but the basic DRAM control and
synaptic operation have been demonstrated. Simulations have provided the guidance on design
choices. These have been instrumental in providing information on effects due to quantization,
computational non-linearities, and process variations. The new design offers a significant
increase in density over a digital/analog hybrid approach. The 84 pin standard frame package
from MOSIS will accommodate more than 8000 synapses of from 6 to 8 bits accuracy. It
appears that control modifications may offer even greater densities in future versions.
This work was sponsored by the Department of the Air Force and the Defense Advanced Research Projects Agency; the views expressed are those of the authors and do not reflect the official policy or position of the U.S. Government.
REFERENCES
P. Brown, R. Millecchia and M. Stinely. Analog Memory for Continuous-Voltage, Discrete-Time Implementation of Neural Networks. Proc. IEEE Intl. Conf. on Neural Networks. 1987.
T. Kohonen. Self-Organization and Associative Memory. Springer-Verlag. 1988.
J. Mann, R. Lippmann, R. Berger and J. Raffel. A Self-Organizing Neural Net Chip. IEEE 1988 Custom Integrated Circuits Conference. pp. 103.1-103.5. 1988.
J. Raffel, J. Mann, R. Berger, A. Soares and S. Gilbert. A Generic Architecture for Wafer-Scale Neuromorphic Systems. Proc. IEEE Intl. Conf. on Neural Networks. 1987.
D. B. Schwartz and R. E. Howard. A Programmable Analog Neural Network Chip. IEEE 1988 Custom Integrated Circuits Conference. pp. 10.2.1-10.2.4. 1988.
Figure 7. a) plot of output current (I_j) as a function of input voltage (X_i) between 0 and 1 volt for 0 (top curve) to 4 (bottom curve) weights "ON". b) plot of output current (I_j) vs. input voltage (X_i) from 0 to 1 V for a weight voltage between 2 V (top) and 3 V (bottom) in 0.1 V steps.
Figure 8. a) plot of output current (I_out) vs. weight voltage. b) plot of output current as a function of time with W+ held at 0 V and the local weight initially set to 3 V.
Robust, Efficient, Globally-Optimized
Reinforcement Learning with the
Parti-Game Algorithm
Mohammad A. Al-Ansari and Ronald J. Williams
College of Computer Science, 161 CN
Northeastern University
Boston, MA 02115
alansar@ccs.neu.edu, rjw@ccs.neu.edu
Abstract
Parti-game (Moore 1994a; Moore 1994b; Moore and Atkeson 1995) is a
reinforcement learning (RL) algorithm that has a lot of promise in overcoming the curse of dimensionality that can plague RL algorithms when
applied to high-dimensional problems. In this paper we introduce modifications to the algorithm that further improve its performance and robustness. In addition, while parti-game solutions can be improved locally
by standard local path-improvement techniques, we introduce an add-on
algorithm in the same spirit as parti-game that instead tries to improve
solutions in a non-local manner.
1 INTRODUCTION
Parti-game operates on goal problems by dynamically partitioning the space into hyperrectangular cells of varying sizes, represented using a k-d tree data structure. It assumes
the existence of a pre-specified local controller that can be commanded to proceed from the
current state to a given state. The algorithm uses a game-theoretic approach to assign costs
to cells based on past experiences using a minimax algorithm. A cell's cost can be either
a finite positive integer or infinity. The former represents the number of cells that have to
be traveled through to get to the goal cell and the latter represents the belief that there is
no reliable way of getting from that cell to the goal. Cells with a cost of infinity are called
losing cells while others are called winning ones.
The algorithm starts out with one cell representing the entire space and another, contained
within it, representing the goal region. In a typical step, the local controller is commanded
to proceed to the center of the most promising neighboring cell. Upon entering a neighboring cell (whether the one aimed at or not), or upon failing to leave the current cell within
Figure 1: In these mazes, the agent is required to start from the point marked Start and reach the square goal cell.
a timeout period, the result of this attempt is added to the database of experiences the algorithm has collected, cell costs are recomputed based on the updated database, and the
process repeats. The costs are computed using a Dijkstra-like, one-pass minimax version
of dynamic programming. The algorithm terminates upon entering the goal cell.
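A sketch of this cost computation (ours; written as a plain fixed-point iteration rather than the authors' one-pass Dijkstra-like implementation):

```python
from math import inf

def compute_cell_costs(cells, aims, outcomes, goal):
    """Minimax cell costs: cost[c] = 1 + min over aimed-at neighbors n of
    the max cost over the outcomes previously observed when aiming from c
    at n; an untried aim is optimistically assumed to reach n.
    Cells left at infinity are the losing cells."""
    cost = {c: inf for c in cells}
    cost[goal] = 0
    changed = True
    while changed:
        changed = False
        for c in cells:
            if c == goal:
                continue
            best = inf
            for n in aims[c]:
                outs = outcomes.get((c, n)) or {n}   # optimism for untried aims
                best = min(best, max(cost[o] for o in outs))
            if best + 1 < cost[c]:
                cost[c] = best + 1
                changed = True
    return cost
```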
If at any point the algorithm determines that it can not proceed because the agent is in
a losing cell, each cell lying on the boundary between losing and winning cells is split
across the dimension in which it is largest and all experiences involving cells that are split
are discarded. Since parti-game assumes, in the absence of evidence to the contrary, that
from any given cell every neighboring cell is reachable, discarding experiences in this way
encourages exploration of the newly created cells.
2
PARTITIONING ONLY LOSING CELLS
The win-lose boundary mentioned above represents a barrier the algorithm perceives that
is preventing the agent from reaching the goal. The reason behind partitioning cells along
this boundary is to increase the resolution along these areas that are crucial to reaching the
goal and thus creating more regions along this boundary for the agent to try to get through.
By partitioning on both sides of the boundary, parti-game guarantees that neighboring cells
along the boundary remain close in size. Along with the strategy of aiming towards centers of neighboring cells, this produces pairings of winner-loser cells that form proposed
"corridors" for the agent to try to go through to penetrate the barrier it perceives.
In this section we investigate doing away with partitioning on the winning side, and only
partition losing cells. Because partitioning can only be triggered with the agent on the
losing side of the win-lose boundary, partitioning only losing cells would still give the
agent the same kind of access to the boundary through the newly formed cells. However,
this would result in a size disparity between winner- and loser-side cells and, thus, would
not produce the winner side of the pairings mentioned above. To produce a similar effect to
the pairings of parti-game, we change the aiming strategy of the algorithm. Under the new
strategy, when the agent decides to go from the cell it currently occupies to a neighboring
one, it aims towards the center point of the common surface between the two cells. While
this does not reproduce the same line of motion of the original aiming strategy exactly, it
achieves a very similar objective.
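For axis-aligned cells the new aim point is easy to compute; a sketch (ours):

```python
import numpy as np

def aim_point(cell_a, cell_b):
    """Center of the common surface between two abutting axis-aligned
    cells, each given as a (lo, hi) pair of per-dimension bound arrays.
    In the abutting dimension lo == hi (the shared face); in every other
    dimension the midpoint of the overlap interval is used."""
    lo = np.maximum(np.asarray(cell_a[0]), np.asarray(cell_b[0]))
    hi = np.minimum(np.asarray(cell_a[1]), np.asarray(cell_b[1]))
    return (lo + hi) / 2.0
```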
Parti-game's success in high-dimensional problems stems from its variable resolution strategy, which partitions finely only in regions where it is needed. By limiting partitioning to
losing cells only, we hope to increase the resolution in even fewer parts of the state space
and thereby make the algorithm even more efficient.
To compare the performance of parti-game to the modified algorithm, we applied both algorithms to the set of continuous mazes shown in Figure 1. For all maze problems we used
a simple local controller that can move directly toward the specified target state. We also
Figure 2: An ice puck on a hill. The puck can thrust horizontally to the left and to the right with a maximum force of 1 Newton. The state space is two-dimensional, consisting of the horizontal position and velocity. The agent starts at the position marked Start at velocity zero and its goal is to reach the position marked Goal at velocity zero. Maximum thrust is not adequate to get the puck up the ramp, so it has to learn to move to the left first to build up momentum.
Figure 3: A nine-degree-of-freedom, snake-like arm that moves in a plane and is fixed at one tip. The objective is to move the arm from the start configuration to the goal one, which requires curling and uncurling to avoid the barrier and the wall.
applied both algorithms to the non-linear dynamics problem of the ice puck on a hill, depicted in Figure 2, which has been studied extensively in reinforcement learning literature.
We used a local controller very similar to the one described in Moore and Atkeson (1995).
Finally, we applied the algorithm to the nine-degree-of-freedom planar robot introduced in
Moore and Atkeson (1995) and shown in Figure 3 and we used the same local controller
described there. Additional results on the Acrobot problem (Sutton and Barto 1998) were
not included here for space limitations but can be found in Al-Ansari and Williams (1998).
We applied both algorithms to each of these problems, in each case performing as many
trials as was needed for the solution to stabilize. The agent was placed back in the start
state at the end of each trial. In the puck problem, the agent was also reset to the start
state whenever it hit either of the barriers at the bottom and top of the slope. The results are
shown in Table 1. The table compares the number of trials needed, the number of partitions,
total number of steps taken in the world, and the length of the final trajectory.
The table shows that the new algorithm indeed resulted in fewer total partitions in all problems.
f-
"--t-""
? !
mtm
I
\
ft-
,
.
? "'" ?
,
(a)
?
,
1 1 .1
I
1\
~
? /1
I
~-
f-
?
c-
?
I
I
1\
f-
,
(b)
(e)
Figure 4: The final trial of applying the various algorithms to the maze in Figure 1(a). (a) parti-game. (b) parti-game with
partitioning only losing cells and (c) parti-game with partitioning only the largest losing cells.
Figure 5: Parti-game needed 1194 partitions to reach the goal in the maze of Figure 1(d).
It also improved in all problems in the number of trials required to stabilize. It
improved in all but one problem (maze d) in the length of the final trajectory; however, the
difference in length is very small. Finally, it resulted in fewer total steps taken in three of
the six problems, but the total steps taken increased in the remaining three.
To see the effect of the modification in detail, we show the result of applying parti-game and
the modified algorithm on the maze of Figure 1(a) in Figures 4(a) and 4(b), respectively.
We can see how areas with higher resolution are more localized in Figure 4(b).
3 BALANCED PARTITIONING
Upon close observation of Figure 4(a), we see that parti-game partitions very finely along
the right wall of the maze. This behavior is even more clearly seen in parti-game's solution
to the maze in Figure 1(d), which is a simple maze with a single barrier between the start
state and the goal. As we see in Table 1, parti-game has a very hard time reaching the goal
in this maze. Figure 5 shows the 1194 partitions that parti-game generated in trying to reach
the goal. We can see that partitioning along the barrier is very uneven, being extremely fine
near the goal and growing coarser as the distance from the goal increases. Putting higher
focus on places where the highest gain could be attained if a hole is found can be a desirable
feature, but what happens in cases like this one is obviously excessive.
One of the factors contributing to this problem of continuing to search at ever-higher resolutions in the part of the barrier nearest the goal is that any version of parti-game searches
for solutions using an implicit trade-off between the shortness of a potential solution path
and the resolution required to find this path. Only when the resolution becomes so fine
that the number of cells through which the agent would have to pass in this potential shortcut exceeds the number of cells to be traversed when traveling around the barrier is the
algorithm forced to look elsewhere for the actual opening.
A conceptually appealing way to bias this search is to maintain a more explicit coarse-to-fine search strategy. One way to do this is to try to keep the smallest cell size the algorithm
generates as large as possible. In addition to achieving the balance we are seeking, this
would tend to lower the total number of partitions and result in shallower tree structures
needed to represent the state space, which, in turn, results in higher efficiency.
To achieve these goals, we modified the algorithm from the previous section such that
whenever partitioning is required, instead of partitioning all losing cells, we only partition
those among them that are of maximum size. This has the effect of postponing splits that
would lower the minimum cell size as long as possible. The results of applying the modified
algorithm on the test problems are also shown in Table 1.
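A minimal Python sketch of this selection rule, under our own naming conventions, is:

```python
def cells_to_partition(losing_cells, size):
    """Select which losing cells to split under the modified rule.

    losing_cells : iterable of cell ids on the losing side
    size         : dict mapping cell id -> cell size (e.g. volume)
    Only losing cells of maximal size are split, which postpones any
    reduction of the minimum cell size for as long as possible.
    """
    losing_cells = list(losing_cells)
    if not losing_cells:
        return []
    s_max = max(size[c] for c in losing_cells)
    return [c for c in losing_cells if size[c] == s_max]
```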
Comparing the results of this version of the algorithm to those of partitioning all losing cells
on the win-lose boundary (Table 1) shows that this algorithm improves on parti-game's
performance even further.
Figure 6: The result of partitioning the largest cells on the losing side in the maze of Figure 1(d). Only two trials are required to
stabilize. The first requires 1304 steps and 21 partitions. The second trial adds no new partitions and produces a path of only 165
steps.
Problem         Algorithm                   Trials  Partitions  Total Steps  Final Trajectory Length
maze a          original parti-game             3         444        35131        279
                partition losing side           3         239        16652        256
                partition largest losing        3          27         1977        270
maze b          original parti-game             6          98         5180        183
                partition losing side           5          76         7187        175
                partition largest losing        6          76         5635        174
maze c          original parti-game             3         176         7768        416
                partition losing side           2         120        10429        165
                partition largest losing        2          96         6803        165
maze d          original parti-game             2        1194       553340        149
                partition losing side           2         350        18639        155
                partition largest losing        2          21         1469        165
puck            original parti-game             6          80         6764        240
                partition losing side           2          18         3237        151
                partition largest losing        2          18         3237        151
nine-joint arm  original parti-game            25         104         2970         58
                partition losing side          17          61         3041         56
                partition largest losing        7          37         2694        112

Table 1: Results of applying parti-game, parti-game with partitioning only losing cells, and parti-game with partitioning the largest
losing cells on three of the problem domains. Smaller numbers are better. (The best numbers were shown in bold in the original.)
It outperforms the above algorithm in four problems in the total number of
partitions required, while it ties it in the remaining two. It outperforms the above algorithm
in total steps taken in five problems and ties it in one. It improves in the number of trials
needed to stabilize in one problem, ties the above algorithm in four cases and ties parti-game in the remaining one. In the length of the final trajectory, partitioning the largest
losing cells does better in one case, ties partitioning only losing cells in two cases and does
worse in three. This latter result is due to the generally larger partition sizes that result from
the lower resolution that this algorithm produces. However, the increase in the number of
steps is very minimal in all but the nine-joint arm problem.
Figure 4(c) shows the result of applying the new algorithm to the maze of Figure 1(a). In
contrast to the other two algorithms depicted in the same figure, we can see that the new
algorithm partitions very uniformly around the barrier. In addition, it requires the fewest
number of partitions and total steps out of the three algorithms. Figure 6 shows that the new
algorithm vastly outperforms parti-game on the maze in Figure 1(d). Here, too, it partitions
very evenly around the barrier and finds the goal very quickly, requiring far fewer steps and
partitions.
4 GLOBAL PATH IMPROVEMENT
Parti-game does not claim to find optimal solutions. As we see in Figure 4, parti-game and
the two modified algorithms settle on the longer of the two possible routes to the goal in
this maze. In this section we investigate ways we could improve parti-game so that it could
find paths of optimal form. It is important to note that we are not seeking paths that are
optimal, since that is not possible to achieve using the cell shapes and aiming strategies
we are using here. By a path of optimal form we mean a path that could be continuously
deformed into an optimal path.
4.1 OTHER GRADIENTS
As mentioned above, parti-game partitions only when the agent has no winning cells to aim
for and the only cells partitioned are those that lie on the win-lose boundary. The win-lose
boundary falls on the gradient between finite- and infinite-cost cells and it appears when
the algorithm knows of no reliable way to get to the goal. Consistently partitioning along
this gradient guarantees that the algorithm will eventually find a path to the goal, if one
exists.
However, gradients across which the difference in cost is finite also exist in a state space
partitioned by parti-game (or any of the variants introduced in this paper). Like the win-lose boundary, these gradients are boundaries through which the agent does not believe
it can move directly. Although finding an opening in such a boundary is not essential to
reaching the goal, these boundaries do represent potential shortcuts that might improve the
agent's policy. Any gradient with a difference in cost of two or more is a location of such
a potentially useful shortcut.
Because such gradients appear throughout the space, we need to be selective about which
ones to partition along. There are many possible strategies one might consider using to incorporate these ideas into parti-game. For example, since parti-game focuses on the highest
gradients only, the first thing that comes to mind is to follow in parti-game's footsteps and
assign partitioning priorities to cells along gradients based on the differences in values
across those gradients. However, since the true cost function typically has discontinuities,
it is clear that the effect of such a strategy would be to continue refining the partitioning
indefinitely along such a discontinuity in a vain search for a nonexistent shortcut.
4.2 THE ALGORITHM
A much better idea is to try to pick cells to partition in a way that would achieve balanced
partitioning, following the rationale we introduced in section 3. Again, such a strategy
would result in a uniform coarse-to-fine search for better paths along those other gradients.
The following discussion could, in principle, apply to any of the three forms of parti-game
studied up to this point. Because of the superior behavior of the version where we partition
the largest cells on the losing side, this is the specific version we report on here, and we use
the term modified parti-game to refer to it.
The way we incorporated partitioning along other gradients is as follows. At the end of any
trial in which the agent is able to go from the start state to the goal without any unexpected
results of any of its aiming attempts, we partition the largest "losing cells" (i.e., higher-cost
cells) that fall on any gradient across which costs differ by more than one. Because data
about experiences involving cells that are partitioned is discarded, the next time modified
parti-game is run, the agent will try to go through the newly formed cells in search of a
shortcut.
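A sketch of this cell-selection step, with hypothetical data structures for cell costs, adjacency and sizes (the trial bookkeeping is omitted), could look like this:

```python
def global_improvement_splits(cost, neighbors, size):
    """Pick cells to split along finite-cost gradients.

    cost      : dict cell -> cost-to-goal (float('inf') for losing cells)
    neighbors : dict cell -> list of adjacent cells
    size      : dict cell -> cell size
    Returns the higher-cost cells of maximal size that lie on a
    gradient across which finite costs differ by two or more.
    """
    candidates = set()
    for c, nbrs in neighbors.items():
        for n in nbrs:
            if (cost[c] != float('inf') and cost[n] != float('inf')
                    and cost[c] - cost[n] >= 2):
                candidates.add(c)  # c is on the higher-cost side
    if not candidates:
        return []
    s_max = max(size[c] for c in candidates)
    return [c for c in candidates if size[c] == s_max]
```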
This algorithm amounts to simply running modified parti-game until a stable solution is reached.
Figure 7: The solution found by applying the global improvement algorithm on the maze of Figure 1(a). The solution proceeded
exactly like that of the algorithm of section 3 until the solution in Figure 4(c) was reached. After that, eight additional iterations
were needed to find the better trajectory, resulting in 22 additional partitions, for a total of 49.
At that point, it introduces new cells along some of the other gradients, and when
it is subsequently run, modified parti-game is applied again until stabilization is achieved,
and so on. The results of applying this algorithm to the maze of Figure 1(a) are shown in
Figure 7. As we can see, the algorithm finds the better solution by increasing the resolution
around the relevant part of the barrier above the start state.
In the absence of information about the form of the optimal trajectory, there is no natural
termination criterion for this algorithm. It is designed to be run continually in search of
better solutions. If, however, the form of the optimal solution is known in advance, the
extra partitioning could be turned off after such a solution is found.
5 CONCLUSIONS
In this paper we have presented three successive modifications to parti-game. The combination of the first two appears to improve its robustness and efficiency, sometimes dramatically, and generally yields better solutions. The third provides a novel way of performing
non-local search for higher quality solutions that are closer to optimal.
Acknowledgments
Mohammad Al-Ansari acknowledges the continued support of King Saud University,
Riyadh, Saudi Arabia and the Saudi Arabian Cultural Mission to the U.S.A.
References
Al-Ansari, M. A. and R. J. Williams (1998). Modifying the parti-game algorithm for increased robustness, higher efficiency and better policies. Technical Report NU-CCS98-13, College of Computer Science, Northeastern University, Boston, MA.
Moore, A. (1994a). Variable resolution reinforcement learning. In Proceedings of the
Eighth Yale Workshop on Adaptive and Learning Systems. Center for Systems Science,
Yale University.
Moore, A. W. (1994b). The parti-game algorithm for variable resolution reinforcement
learning in multidimensional state spaces. In Proceedings of Neural Information Processing Systems Conference 6. Morgan Kaufmann.
Moore, A. W. and C. G. Atkeson (1995). The parti-game algorithm for variable resolution
reinforcement learning in multidimensional state-spaces. Machine Learning 21.
Sutton, R. S. and A. G. Barto (1998). Reinforcement Learning: An Introduction. MIT Press.
603 | 1,551 | Reinforcement Learning for Trading
John Moody and Matthew Saffell*
Oregon Graduate Institute, CSE Dept.
P.O. Box 91000, Portland, OR 97291-1000
{moody, saffell }@cse.ogi.edu
Abstract
We propose to train trading systems by optimizing financial objective functions via reinforcement learning. The performance functions that we consider are profit or wealth, the Sharpe ratio and
our recently proposed differential Sharpe ratio for online learning. In Moody & Wu (1997), we presented empirical results that
demonstrate the advantages of reinforcement learning relative to
supervised learning . Here we extend our previous work to compare Q-Learning to our Recurrent Reinforcement Learning (RRL)
algorithm. We provide new simulation results that demonstrate
the presence of predictability in the monthly S&P 500 Stock Index
for the 25 year period 1970 through 1994, as well as a sensitivity
analysis that provides economic insight into the trader's structure.
1 Introduction: Reinforcement Learning for Trading
The investor's or trader's ultimate goal is to optimize some relevant measure of
trading system performance , such as profit , economic utility or risk-adjusted return. In this paper , we propose to use recurrent reinforcement learning to directly
optimize such trading system performance functions , and we compare two different reinforcement learning methods. The first, Recurrent Reinforcement Learning,
uses immediate rewards to train the trading systems , while the second (Q-Learning
(Watkins 1989)) approximates discounted future rewards. These methodologies can
be applied to optimizing systems designed to trade a single security or to trade portfolios . In addition , we propose a novel value function for risk-adjusted return that
enables learning to be done online: the differential Sharpe ratio.
Trading system profits depend upon sequences of interdependent decisions, and are
thus path-dependent. Optimal trading decisions when the effects of transactions
costs, market impact and taxes are included require knowledge of the current system
state. In Moody, Wu, Liao & Saffell (1998), we demonstrate that reinforcement
learning provides a more elegant and effective means for training trading systems
when transaction costs are included , than do more standard supervised approaches.
* The authors are also with Nonlinear Prediction Systems.
Though much theoretical progress has been made in recent years in the area of reinforcement learning, there have been relatively few successful, practical applications
of the techniques. Notable examples include Neurogammon (Tesauro 1989), the
asset trader of Neuneier (1996), an elevator scheduler (Crites & Barto 1996) and a
space-shuttle payload scheduler (Zhang & Dietterich 1996).
In this paper we present results for reinforcement learning trading systems that
outperform the S&P 500 Stock Index over a 25-year test period, thus demonstrating
the presence of predictable structure in US stock prices. The reinforcement learning
algorithms compared here include our new recurrent reinforcement learning (RRL)
method (Moody & Wu 1997, Moody et al. 1998) and Q-Learning (Watkins 1989).
2 Trading Systems and Financial Performance Functions

2.1 Structure, Profit and Wealth for Trading Systems
We consider performance functions for systems that trade a single¹ security with price series $z_t$. The trader is assumed to take only long, neutral or short positions $F_t \in \{-1, 0, 1\}$ of constant magnitude. The constant magnitude assumption can be easily relaxed to enable better risk control. The position $F_t$ is established or maintained at the end of each time interval $t$, and is re-assessed at the end of period $t + 1$. A trade is thus possible at the end of each time period, although nonzero trading costs will discourage excessive trading. A trading system return $R_t$ is realized at the end of the time interval $(t-1, t]$ and includes the profit or loss resulting from the position $F_{t-1}$ held during that interval and any transaction cost incurred at time $t$ due to a difference in the positions $F_{t-1}$ and $F_t$.
In order to properly incorporate the effects of transactions costs, market impact and
taxes in a trader's decision making, the trader must have internal state information
and must therefore be recurrent. An example of a single asset trading system that takes into account transactions costs and market impact has the following decision function: $F_t = F(\theta_t; F_{t-1}, I_t)$ with $I_t = \{z_t, z_{t-1}, z_{t-2}, \ldots; y_t, y_{t-1}, y_{t-2}, \ldots\}$, where $\theta_t$ denotes the (learned) system parameters at time $t$ and $I_t$ denotes the information set at time $t$, which includes present and past values of the price series $z_t$ and an arbitrary number of other external variables denoted $y_t$.
Trading systems can be optimized by maximizing performance functions $U(\cdot)$ such as profit, wealth, utility functions of wealth or performance ratios like the Sharpe ratio. The simplest and most natural performance function for a risk-insensitive trader is profit. The transactions cost rate is denoted $\delta$.
Additive profits are appropriate to consider if each trade is for a fixed number of shares or contracts of security $z_t$. This is often the case, for example, when trading small futures accounts or when trading standard US$ FX contracts in dollar-denominated foreign currencies. With the definitions $r_t = z_t - z_{t-1}$ and $r^f_t = z^f_t - z^f_{t-1}$ for the price returns of a risky (traded) asset and a risk-free asset (like T-Bills) respectively, the additive profit accumulated over $T$ time periods with trading position size $\mu > 0$ is then defined as:

$$P_T = \sum_{t=1}^{T} R_t = \mu \sum_{t=1}^{T} \left\{ r^f_t + F_{t-1}\,(r_t - r^f_t) - \delta\,|F_t - F_{t-1}| \right\} \qquad (1)$$

¹ See Moody et al. (1998) for a detailed discussion of multiple asset portfolios.
with $P_0 = 0$ and typically $F_T = F_0 = 0$. Equation (1) holds for continuous quantities also. The wealth is defined as $W_T = W_0 + P_T$.
Multiplicative profits are appropriate when a fixed fraction of accumulated wealth $\nu > 0$ is invested in each long or short trade. Here, $r_t = (z_t / z_{t-1} - 1)$ and $r^f_t = (z^f_t / z^f_{t-1} - 1)$. If no short sales are allowed and the leverage factor is set fixed at $\nu = 1$, the wealth at time $T$ is:

$$W_T = W_0 \prod_{t=1}^{T} \{1 + R_t\} = W_0 \prod_{t=1}^{T} \left\{ 1 + (1 - F_{t-1})\, r^f_t + F_{t-1}\, r_t \right\} \left\{ 1 - \delta\, |F_t - F_{t-1}| \right\} \qquad (2)$$
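As an illustration, the following Python sketch transcribes Equations (1) and (2) directly; the array layout, function names and default cost rate are our own assumptions, not the authors' code:

```python
import numpy as np

def additive_profit(F, r, rf, mu=1.0, delta=0.005):
    """Additive profit P_T of Eq. (1). F[t] in {-1, 0, 1} is the position
    F_t; r and rf are the risky and risk-free return series."""
    F = np.asarray(F, dtype=float)
    Fprev = np.concatenate(([0.0], F[:-1]))   # F_0 = 0
    R = mu * (rf + Fprev * (r - rf) - delta * np.abs(F - Fprev))
    return R.sum()

def multiplicative_wealth(F, r, rf, W0=1.0, delta=0.005):
    """Wealth W_T of Eq. (2) with leverage nu = 1 and no short sales."""
    F = np.asarray(F, dtype=float)
    Fprev = np.concatenate(([0.0], F[:-1]))
    growth = (1.0 + (1.0 - Fprev) * rf + Fprev * r) \
             * (1.0 - delta * np.abs(F - Fprev))
    return W0 * np.prod(growth)
```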
2.2 The Differential Sharpe Ratio for On-line Learning
Rather than maximizing profits, most modern fund managers attempt to maximize
risk-adjusted return as advocated by Modern Portfolio Theory. The Sharpe ratio is
the most widely-used measure of risk-adjusted return (Sharpe 1966). Denoting as
before the trading system returns for period $t$ (including transactions costs) as $R_t$, the Sharpe ratio is defined to be

$$S_T = \frac{\text{Average}(R_t)}{\text{Standard Deviation}(R_t)} \qquad (3)$$

where the average and standard deviation are estimated for periods $t = 1, \ldots, T$.
Proper on-line learning requires that we compute the influence on the Sharpe ratio of the return at time $t$. To accomplish this, we have derived a new objective function called the differential Sharpe ratio for on-line optimization of trading system performance (Moody et al. 1998). It is obtained by considering exponential moving averages of the returns and standard deviation of returns in (3), and expanding to first order in the decay rate $\eta$: $S_t \approx S_{t-1} + \eta \frac{dS_t}{d\eta}\big|_{\eta=0} + O(\eta^2)$. Noting that only the first order term in this expansion depends upon the return $R_t$ at time $t$, we define the differential Sharpe ratio as:

$$D_t \equiv \frac{dS_t}{d\eta} = \frac{B_{t-1}\,\Delta A_t - \tfrac{1}{2} A_{t-1}\,\Delta B_t}{\left(B_{t-1} - A_{t-1}^2\right)^{3/2}} \qquad (4)$$
where the quantities $A_t$ and $B_t$ are exponential moving estimates of the first and second moments of $R_t$:

$$A_t = A_{t-1} + \eta\,\Delta A_t = A_{t-1} + \eta\,(R_t - A_{t-1})$$
$$B_t = B_{t-1} + \eta\,\Delta B_t = B_{t-1} + \eta\,(R_t^2 - B_{t-1}) \qquad (5)$$
Treating $A_{t-1}$ and $B_{t-1}$ as numerical constants, note that $\eta$ in the update equations controls the magnitude of the influence of the return $R_t$ on the Sharpe ratio $S_t$. Hence, the differential Sharpe ratio represents the influence of the trading return $R_t$ realized at time $t$ on $S_t$.
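A minimal on-line implementation of the updates (4)-(5), assuming the form of (4) reconstructed above and our own variable names, might look like:

```python
def differential_sharpe(R, A, B, eta=0.01, eps=1e-12):
    """One on-line update of the differential Sharpe ratio.

    R    : trading return realized at time t
    A, B : exponential moving estimates of the first two moments
    Returns (D_t, A_new, B_new). The small denominator guard eps
    is our addition for numerical safety.
    """
    dA = R - A                       # Delta A_t of Eq. (5)
    dB = R * R - B                   # Delta B_t of Eq. (5)
    D = (B * dA - 0.5 * A * dB) / ((B - A * A) ** 1.5 + eps)
    return D, A + eta * dA, B + eta * dB
```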
3 Reinforcement Learning for Trading Systems
The goal in using reinforcement learning to adjust the parameters of a system is
to maximize the expected payoff or reward that is generated due to the actions
of the system. This is accomplished through trial and error exploration of the
environment. The system receives a reinforcement signal from its environment (a
reward) that provides information on whether its actions are good or bad. The
performance function at time $T$ can be expressed as a function of the sequence of trading returns: $U_T = U(R_1, R_2, \ldots, R_T)$.
Given a trading system model $F_t(\theta)$, the goal is to adjust the parameters $\theta$ in order to maximize $U_T$. This maximization for a complete sequence of $T$ trades can be done off-line using dynamic programming or batch versions of recurrent reinforcement learning algorithms. Here we do the optimization on-line using a reinforcement learning technique. This reinforcement learning algorithm is based on stochastic gradient ascent. The gradient of $U_T$ with respect to the parameters $\theta$ of the system after a sequence of $T$ trades is
$$\frac{dU_T(\theta)}{d\theta} = \sum_{t=1}^{T} \frac{dU_T}{dR_t} \left\{ \frac{dR_t}{dF_t}\,\frac{dF_t}{d\theta} + \frac{dR_t}{dF_{t-1}}\,\frac{dF_{t-1}}{d\theta} \right\} \qquad (6)$$
A simple on-line stochastic optimization can be obtained by considering only the term in (6) that depends on the most recently realized return $R_t$ during a forward pass through the data:

$$\frac{dU_t(\theta)}{d\theta} = \frac{dU_t}{dR_t} \left\{ \frac{dR_t}{dF_t}\,\frac{dF_t}{d\theta} + \frac{dR_t}{dF_{t-1}}\,\frac{dF_{t-1}}{d\theta} \right\} \qquad (7)$$
The parameters are then updated on-line using $\Delta\theta_t = \rho\, dU_t(\theta_t)/d\theta_t$. Because of the
recurrent structure of the problem (necessary when transaction costs are included),
we use a reinforcement learning algorithm based on real-time recurrent learning
(Williams & Zipser 1989). This approach, which we call recurrent reinforcement
learning (RRL), is described in (Moody & Wu 1997, Moody et al. 1998) along with
extensive simulation results.
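To make the recurrent gradient concrete, here is a sketch of one RRL update step for a simple trader with a tanh output; the continuous-valued position, the parameterization and the fixed learning rate are our own illustrative assumptions rather than the authors' exact model, and dU_dR would be, for example, the derivative of the differential Sharpe ratio (4) with respect to $R_t$:

```python
import numpy as np

def rrl_online_update(theta, x_t, F_prev, dF_prev, dU_dR, r_t, rf_t,
                      mu=1.0, delta=0.005, rho=0.01):
    """One stochastic gradient step of Eq. (7) for F_t = tanh(theta . u_t)
    with u_t = [inputs, F_{t-1}, 1]; dF_prev is dF_{t-1}/dtheta."""
    u = np.concatenate([x_t, [F_prev, 1.0]])
    F = np.tanh(theta @ u)
    # dF_t/dtheta includes the recurrent path through F_{t-1} (RTRL style)
    dF = (1.0 - F**2) * (u + theta[-2] * dF_prev)
    # R_t = mu*(rf + F_{t-1}(r - rf) - delta*|F_t - F_{t-1}|)
    sgn = np.sign(F - F_prev)
    dR_dF = -mu * delta * sgn                         # dR_t/dF_t
    dR_dFprev = mu * (r_t - rf_t) + mu * delta * sgn  # dR_t/dF_{t-1}
    grad = dU_dR * (dR_dF * dF + dR_dFprev * dF_prev)
    return theta + rho * grad, F, dF
```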
4 Empirical Results: S&P 500 / TBill Asset Allocation
A long/short trading system is trained on monthly S&P 500 stock index and 3-month TBill data to maximize the differential Sharpe ratio. The S&P 500 target
series is the total return index computed by reinvesting dividends. The 84 input
series used in the trading systems include both financial and macroeconomic data.
All data are obtained from Citibase, and the macroeconomic series are lagged by
one month to reflect reporting delays.
A total of 45 years of monthly data are used, from January 1950 through December
1994. The first 20 years of data are used only for the initial training of the system.
The test period is the 25 year period from January 1970 through December 1994.
The experimental results for the 25 year test period are true ex ante simulated
trading results.
For each year during 1970 through 1994, the system is trained on a moving window
of the previous 20 years of data. For 1970, the system is initialized with random
parameters. For the 24 subsequent years, the previously learned parameters are
used to initialize the training. In this way, the system is able to adapt to changing
market and economic conditions. Within the moving training window, the "RRL"
systems use the first 10 years for stochastic optimization of system parameters, and
the subsequent 10 years for validating early stopping of training. The networks
are linear, and are regularized using quadratic weight decay during training with a
regularization parameter of 0.01. The "Qtrader" systems use a bootstrap sample
of the 20 year training window for training, and the final 10 years of the training
window are used for validating early stopping of training. The networks are two-layer feedforward networks with 30 tanh units in the hidden layer.
4.1 Experimental Results
The left panel in Figure 1 shows box plots summarizing the test performance for
the full 25 year test period of the trading systems with various realizations of the
initial system parameters over 30 trials for the "RRL" system, and 10 trials for
the "Qtrader" system². The transaction cost is set at 0.5%. Profits are reinvested
during trading, and multiplicative profits are used when calculating the wealth. The
notches in the box plots indicate robust estimates of the 95% confidence intervals
on the hypothesis that the median is equal to the performance of the buy and hold
strategy. The horizontal lines show the performance of the "RRL" voting, "Qtrader"
voting and buy and hold strategies for the same test period. The annualized monthly
Sharpe ratios of the buy and hold strategy, the "Qtrader" voting strategy and the
"RRL" voting strategy are 0.34, 0.63 and 0.83 respectively. The Sharpe ratios
calculated here are for the returns in excess of the 3-month treasury bill rate.
The right panel of Figure 1 shows results for following the strategy of taking positions based on a majority vote of the ensembles of trading systems compared with
the buy and hold strategy. We can see that the trading systems go short the S&P
500 during critical periods, such as the oil price shock of 1974, the tight money
periods of the early 1980's, the market correction of 1984 and the 1987 crash. This
ability to take advantage of high treasury bill rates or to avoid periods of substantial
stock market loss is the major factor in the long term success of these trading models. One exception is that the "RRL" trading system remains long during the 1991
stock market correction associated with the Persian Gulf war, though the "Qtrader"
system does identify the correction. On the whole though, the "Qtrader" system
trades much more frequently than the "RRL" system, and in the end does not
perform as well on this data set.
From these results we find that both trading systems outperform the buy and hold
strategy, as measured by both accumulated wealth and Sharpe ratio. These differences are statistically significant and support the proposition that there is predictability in the U.S. stock and treasury bill markets during the 25 year period
1970 through 1994. A more detailed presentation of the "RRL" results appears in
(Moody et al. 1998).
4.2 Gaining Economic Insight Through Sensitivity Analysis
A sensitivity analysis of the "RRL" systems was performed in an attempt to determine on which economic factors the traders are basing their decisions. Figure 2
shows the absolute normalized sensitivities for 3 of the more salient input series as
a function of time, averaged over the 30 members of the "RRL" committee. The
sensitivity of input $i$ is defined as:

$$S_i = \left| \frac{dF}{dx_i} \right| \Big/ \max_j \left| \frac{dF}{dx_j} \right| \qquad (8)$$

where $F$ is the unthresholded trading output and $x_i$ denotes input $i$.
² Ten trials were done for the "Qtrader" system due to the amount of computation required in training the systems.
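Computing (8) is straightforward once the output gradient is available; a minimal sketch:

```python
import numpy as np

def normalized_sensitivities(dF_dx):
    """Eq. (8): absolute sensitivities of the unthresholded output F,
    normalized by the largest one. dF_dx is the gradient dF/dx at one
    time step; averaging over the committee is done outside."""
    s = np.abs(np.asarray(dF_dx))
    return s / s.max()
```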
Figure 1: Test results for ensembles of simulations using the S&P 500 stock index and 3-month Treasury Bill data over the 1970-1994 time period. The solid
curves correspond to the "RRL" voting system performance, dashed curves to the
"Qtrader" voting system and the dashed and dotted curves indicate the buy and
hold performance. The boxplots in (a) show the performance for the ensembles
of "RRL" and "Qtrader" trading systems. The horizontal lines indicate the performance of the voting systems and the buy and hold strategy. Both systems
significantly outperform the buy and hold strategy. (b) shows the equity curves
associated with the voting systems and the buy and hold strategy, as well as the
voting trading signals produced by the systems. In both cases, the traders avoid
the dramatic losses that the buy and hold strategy incurred during 1974 and 1987.
The time-varying sensitivities in Figure 2 emphasize the nonstationarity of economic
relationships. For example, the yield curve slope (which measures inflation expectations) is found to be a very important factor in the 1970's, while trends in long term
interest rates (measured by the 6 month difference in the AAA bond yield) becomes
more important in the 1980's, and trends in short term interest rates (measured by
the 6 month difference in the treasury bill yield) dominate in the early 1990's.
5
Conclusions and Extensions
In this paper, we have trained trading systems via reinforcement learning to optimize
financial objective functions including our differential Sharpe ratio for online learning. We have also provided results that demonstrate the presence of predictability
in the monthly S&P 500 Stock Index for the 25 year period 1970 through 1994.
We have previously shown with extensive simulation results (Moody & Wu
1997, Moody et al. 1998) that the "RRL" trading system significantly outperforms
systems trained using supervised methods for traders of both single securities and
portfolios. The superiority of reinforcement learning over supervised learning is
most striking when state-dependent transaction costs are taken into account. Here,
we present results for asset allocation systems trained using two different reinforcement learning algorithms on a real, economic dataset. We find that the "Qtrader"
system does not perform as well as the "RRL" system on the S&P 500 / TBill asset
allocation problem, possibly due to its more frequent trading. This effect deserves
further exploration. In general, we find that Q-learning can suffer from the curse of
dimensionality and is more difficult to use than our RRL approach.
Finally, we apply sensitivity analysis to the trading systems, and find that certain
interest rate variables have an influential role in making asset allocation decisions.
923
Reinforcement Learningfor Trading
S",sltivity Analysis: A....g. on RRL Commill ??
,',
"
,- "\
0.9
,- -..
,
0.8
,I
,
I
~f07
j
,I
:
! i
\
,
I
I
,
I
,
"
GO.6 ?
,
,,
jos!
iO.4
I
,,
I
1
I
\
1
03
0.2
I
'
Ir--------'...!.'-----,
VI.1d Curv. Slop.
6 Month Dill. In AM Bond yield
6 Month Dill. In TBIU Vieid
1975
1980
0.,.
1985
1990
1995
Figure 2: Sensitivity traces for three of the inputs to the "RRL" trading system
averaged over the ensemble of traders. The nonstationary relationships typical among economic variables are evident from the time-varying sensitivities.
We also find that these influences exhibit nonstationarity over time.
Acknowledgements
We gratefully acknowledge support for this work from Nonlinear Prediction Systems and
from DARPA under contract DAAH01-96-C-R026 and AASERT grant DAAH04-95-10485.
References
Crites, R. H. & Barto, A. G. (1996), Improving elevator performance using reinforcement
learning, in D. S. Touretzky, M. C. Mozer & M. E. Hasselmo, eds, 'Advances in NIPS',
Vol. 8, pp. 1017-1023.
Moody, J. & Wu, L. (1997), Optimization of trading systems and portfolios, in Y. Abu-Mostafa, A. N. Refenes & A. S. Weigend, eds, 'Decision Technologies for Financial
Engineering', World Scientific, London, pp. 23-35. This is a slightly revised version
of the original paper that appeared in the NNCM*96 Conference Record, published
by Caltech, Pasadena, 1996.
Moody, J., Wu, L., Liao, Y. & Saffell, M. (1998), 'Performance functions and reinforcement
learning for trading systems and portfolios', Journal of Forecasting 17, 441-470.
Neuneier, R. (1996), Optimal asset allocation using adaptive dynamic programming, in
D. S. Touretzky, M. C. Mozer & M. E. Hasselmo, eds, 'Advances in NIPS', Vol. 8,
pp. 952-958.
Sharpe, W. F. (1966), 'Mutual fund performance', Journal of Business pp. 119-138.
Tesauro, G. (1989), 'Neurogammon wins the computer olympiad', Neural Computation
1, 321-323.
Watkins, C. J. C. H. (1989), Learning with Delayed Rewards, PhD thesis, Cambridge
University, Psychology Department.
Williams, R. J. & Zipser, D. (1989), 'A learning algorithm for continually running fully
recurrent neural networks', Neural Computation 1, 270-280.
Zhang, W. & Dietterich, T. G. (1996), High-performance job-shop scheduling with a timedelay td(A) network, in D. S. Touretzky, M. C. Mozer & M. E. Hasselmo, eds, 'Advances in NIPS', Vol. 8, pp. 1024-1030.
604 | 1,552 | Visualizing Group Structure*
Marcus Held, Jan Puzicha, and Joachim M. Buhmann
Institut für Informatik III,
Römerstraße 164, D-53117 Bonn, Germany
email: {held,jan,jb}@cs.uni-bonn.de
WWW: http://www-dbv.cs.uni-bonn.de
Abstract
Cluster analysis is a fundamental principle in exploratory data
analysis, providing the user with a description of the group structure of given data. A key problem in this context is the interpretation and visualization of clustering solutions in high-dimensional
or abstract data spaces. In particular, probabilistic descriptions
of the group structure, essential to capture inter-cluster relationships, are hardly assessable by simple inspection of the probabilistic
assignment variables. We present a novel approach to the visualization of group structure. It is based on a statistical model of the
object assignments which have been observed or estimated by a
probabilistic clustering procedure. The objects or data points are
embedded in a low dimensional Euclidean space by approximating
the observed data statistics with a Gaussian mixture model. The
algorithm provides a new approach to the visualization of the inherent structure for a broad variety of data types, e.g. histogram data,
proximity data and co-occurrence data. To demonstrate the power
of the approach, histograms of textured images are visualized as an
example of a large-scale data mining application.
1 Introduction
Clustering and visualization are key issues in exploratory data analysis and are
fundamental principles of many unsupervised learning schemes. For a given data
set, the aim of any clustering approach is to extract a description of the inherent
group structure. The object space is partitioned into groups where each partition
* This work has been supported by the German Research Foundation (DFG) under grant #BU 914/3-1, by the German Israel Foundation for Science and Research Development (GIF) under grant #1-0403-001.06/95 and by the Federal Ministry for Education, Science and Technology (BMBF #01 M 3021 A/4).
is as homogeneous as possible and two partitions are maximally heterogeneous. For
several reasons it is useful to deal with probabilistic partitioning approaches:
1. The data generation process itself might be stochastic, resulting in overlapping partitions. Thus, a probabilistic group description is adequate and
provides additional information about the inter-cluster relations.
2. The number of clusters might be chosen too large. Forcing the algorithm
to a hard clustering solution creates artificial structure not supported by
the data. On the other hand , superfluous clusters can be identified by a
probabilistic group description .
3. There exists theoretical and empirical evidence that probabilistic assignments avoid over-fitting phenomena [7].
Several well-known clustering schemes result in fuzzy cluster assignments: For the
most common type of vector-valued data, heuristic fuzzy clustering methods were
suggested [4, 5]. In a more principled way, deterministic annealing algorithms provide fuzzy clustering solutions for a given cost function with a rigorous statistical
foundation and have been developed for vectorial [9], proximity [6] and histogram
data [8]. In mixture model approaches the assignments of objects to groups are
interpreted as missing data. Its conditional expectations given the data and the
estimated cluster parameters are computed during the E-step in the corresponding
EM-algorithm and can be understood as assignment probabilities.
The aim of this contribution is to develop a generic framework to visualize such
probabilities as distances in a low dimensional Euclidean space. Especially in high
dimensional or abstract object spaces, the interpretation of fuzzy group structure is
rather difficult, as humans do not perform very well in interpreting probabilities. It
is, therefore , a key issue to make an interpretation of the cluster structure more feasible. In contrast to multidimensional scaling (MDS), where objects are embedded
in low dimensional Euclidean spaces by preserving the original inter object distances
[3], our approach yields a mixture model in low dimensions , where the probabilities
for assigning objects to clusters are maximally preserved. The proposed approach
is similar in spirit to data visualization methods like projection pursuit clustering,
GTM [1], simultaneous clustering and embedding [6], and hierarchical latent variable models [2]. It also aims at visualizing high dimensional data. But while the
other methods try to model the data itself by a low dimensional generator model,
we seek to model the inferred probabilistic grouping structure. As a consequence,
the framework is generic in the sense that it is applicable to any probabilistic or
fuzzy group description.
The key idea is to interpret a given probabilistic group description as an observation of an underlying random process. We estimate a low-dimensional statistical model by maximum likelihood inference which provides the visualization. To our knowledge the proposed algorithm provides the first solution to the visualization of distributional data, where the observations of an object consist of a histogram of measured features. Such data is common in data mining applications like image retrieval where image similarity is often based on histograms of color or texture features. Moreover, our method is applicable to proximity and co-occurrence data.
2 Visualizing Probabilistic Group Structure
Let a set of $N$ (abstract) objects $\mathcal{O} = \{o_1, \ldots, o_N\}$ be given which have been partitioned into $K$ groups or clusters. Let the fuzzy assignment of object $o_i$ to cluster $C_\nu$ be given by $q_{i\nu} \in [0,1]$, where we assume $\sum_{\nu=1}^{K} q_{i\nu} = 1$ to enable a probabilistic interpretation. We assume that there exists an underlying "true" assignment of objects to clusters which we encode by Boolean variables $M_{i\nu}$ denoting whether object $o_i$ belongs to (has been generated by) cluster $C_\nu$. We thus interpret $q_{i\nu}$ as an empirical estimate of the probability $P(M_{i\nu} = 1)$. For notational simplicity, we summarize the assignment variables in matrices $Q = (q_{i\nu})$ and $M = (M_{i\nu})$.
The key idea for visualizing group structure is to exploit a low-dimensional statistical model which "explains" the observed $q_{i\nu}$. The parameters are estimated by maximum likelihood inference and provide a natural data visualization. Gaussian mixture models in low dimensions (typically $d = 2$ or $d = 3$) are often appropriate, but the scheme could be easily extended to other classes, e.g. hierarchical models. To define the Gaussian mixture model, we first introduce a set of prototypes $\mathcal{Y} = \{y_1, \ldots, y_K\} \subset \mathbb{R}^d$ representing the $K$ clusters, and a set of vector-valued object parameters $\mathcal{X} = \{x_1, \ldots, x_N\} \subset \mathbb{R}^d$. To model the assignment probabilities, the prototypes $\mathcal{Y}$ and the data points $\mathcal{X}$ are chosen such that the resulting assignment probabilities are maximally similar to the given frequencies $Q$. For the Gaussian mixture model we have

$$m_{i\nu} \equiv P(M_{i\nu} = 1 \,|\, \mathcal{X}, \mathcal{Y}) = \frac{\exp\left(-\beta \|x_i - y_\nu\|^2\right)}{\sum_{\mu=1}^{K} \exp\left(-\beta \|x_i - y_\mu\|^2\right)} \qquad (1)$$
Note that the probability distribution is invariant under translation and rotation of the complete parameter sets $\mathcal{X}$, $\mathcal{Y}$. In addition, the scale parameter $\beta$ could be dropped since a change of $\beta$ only results in a rescaling of the prototypes $\mathcal{Y}$ and the data points $\mathcal{X}$. For the observation $Q$ the log-likelihood is given by¹
$$\mathcal{L}_Q(\mathcal{X}, \mathcal{Y}) = \sum_{i=1}^{N} \sum_{\nu=1}^{K} q_{i\nu} \log m_{i\nu} \qquad (2)$$
It is worth noting that when the $q_{i\nu} = \langle M_{i\nu} \rangle_{P^{\mathrm{true}}}$ are estimates obtained by a factorial distribution, i.e. $P^{\mathrm{true}}(M) = \prod_i \sum_\nu M_{i\nu} q_{i\nu}$, then maximizing (2) is identical to minimizing the Kullback-Leibler (KL-)divergence $D_{KL}(P^{\mathrm{true}} \| P) = \sum_M P^{\mathrm{true}} \log (P^{\mathrm{true}}/P)$. In that case the similarity to the recent approach of Hofmann et al. [6], proposed as the minimization of $D_{KL}(P \| P^{\mathrm{true}})$, becomes apparent. Compared to [6] the roles of $P$ and $P^{\mathrm{true}}$ are interchanged. From an information-theoretic viewpoint $D_{KL}(P^{\mathrm{true}} \| P)$ is a better choice as it quantifies the coding inefficiency of assuming the distribution $P$ when the true distribution is $P^{\mathrm{true}}$. Note that the choice of the KL-divergence as a distortion measure for distributions follows intrinsically from the likelihood principle. Maximum likelihood estimates are derived by differentiation:
$$\frac{\partial \mathcal{L}_Q}{\partial x_i} = \sum_{\nu=1}^{K} \frac{q_{i\nu}}{m_{i\nu}}\, \frac{\partial m_{i\nu}}{\partial x_i} = -2\beta \sum_{\nu=1}^{K} q_{i\nu} \left( \sum_{\mu=1}^{K} m_{i\mu}\, y_\mu - y_\nu \right) \qquad (3)$$

$$\frac{\partial \mathcal{L}_Q}{\partial y_\alpha} = \sum_{i=1}^{N} \sum_{\nu=1}^{K} \frac{q_{i\nu}}{m_{i\nu}}\, \frac{\partial m_{i\nu}}{\partial y_\alpha} = -2\beta \sum_{i=1}^{N} \sum_{\nu=1}^{K} q_{i\nu}\, (m_{i\alpha} - \delta_{\alpha\nu})\, (x_i - y_\alpha) = -2\beta \sum_{i=1}^{N} (m_{i\alpha} - q_{i\alpha})\, (x_i - y_\alpha) \qquad (4)$$
The gradients can be used for any gradient descent scheme. In the experiments,
we used (3)-(4) in conjunction with a simple gradient descent technique, which has
¹ Here, it is implicitly assumed that all $q_{i\nu}$ have been estimated based on the same amount of information.
Figure 1: Visualization of two-dimensional artificial data. Original data generated by the mixture model with $\beta = 1.0$ and 5 prototypes. Crosses denote the data points $x_i$, circles the prototypes $y_\alpha$. The embedding prototypes are plotted as squares, while the embedding data points are diamonds. The contours are given by $f(x) = \max_\alpha \left( \exp(-\beta \|x - y_\alpha\|^2) / \sum_{\mu=1}^{K} \exp(-\beta \|x - y_\mu\|^2) \right)$. For visualization purposes the embedding is translated and rotated into the correct position.
been observed to be efficient and reliable up to a few hundred objects. From (4) an explicit formula for the prototypes may be recovered:

$$y_\alpha = \frac{\sum_{i=1}^{N} (m_{i\alpha} - q_{i\alpha})\, x_i}{\sum_{i=1}^{N} (m_{i\alpha} - q_{i\alpha})} \qquad (5)$$
which can be interpreted as an alternative centroid rule. The position of the prototypes is dominated by objects with a large deviation between modeled and measured
assignment probabilities. Note that (5) should not be used as an iterative equation
as the corresponding fixed point is not contractive.
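Putting the pieces together, a minimal gradient-based implementation of the embedding, transcribing (1)-(4) with our own initialization and step-size choices (no step-size control or restarts), might read:

```python
import numpy as np

def embed_group_structure(Q, d=2, beta=1.0, lr=0.05, iters=2000,
                          rng=np.random):
    """Gradient ascent on the log-likelihood (2) using (3) and (4).

    Q : (N, K) matrix of observed assignment probabilities q_iv.
    Returns embedded points X (N x d) and prototypes Y (K x d).
    """
    N, K = Q.shape
    X = rng.randn(N, d) * 0.1
    Y = rng.randn(K, d) * 0.1
    for _ in range(iters):
        D = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)   # N x K
        E = np.exp(-beta * (D - D.min(1, keepdims=True)))    # stable softmax
        M = E / E.sum(1, keepdims=True)                      # m_iv, Eq. (1)
        # dL/dx_i, Eq. (3): -2*beta*sum_v q_iv (sum_u m_iu y_u - y_v)
        gX = -2 * beta * (Q.sum(1, keepdims=True) * (M @ Y) - Q @ Y)
        # dL/dy_a, Eq. (4): -2*beta*sum_i (m_ia - q_ia)(x_i - y_a)
        diff = M - Q
        gY = -2 * beta * (diff.T @ X - diff.sum(0)[:, None] * Y)
        X += lr * gX
        Y += lr * gY
    return X, Y
```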
3 Results
As a first experiment we discuss the approximately recoverable case, where we sample from (1) to generate artificial two-dimensional data and infer the positions of
the sample points and of the prototypes by the visualizing group structure approach
(see Fig. 1). Due to iso-contour lines in the generator density and in the visualization density not all data positions are recovered exactly. We like to emphasize that the complete information available on the grouping structure of the data is preserved, since the mean KL-divergence is quite small ($\approx 2.10 \cdot 10^{-5}$). It is worth mentioning that the rank-order of the assignments of objects $i$ to clusters $\alpha$ is
completely preserved.
For many image retrieval systems image similarity has been defined as similarity of
occurring feature coefficients, e.g. colors or texture features. In [7], a novel statistical mixture model for distributional data, the probabilistic histogram clustering
(ACM), has been proposed which we applied to extract the group structure inherent in image databases based on histograms of textured image features. The ACM
explains the observed data by the generative model:
Figure 2: Embedding of the VisTex database with MDS.
1. select an object $o_i \in \mathcal{O}$ with probability $p_i$,
2. choose a cluster $C_\alpha$ according to the cluster membership $M_{i\alpha}$ of $o_i$,
3. sample a feature $v_j \in \mathcal{V}$ from the cluster-conditional distribution $q_{j|\alpha}$.

This generative model is formalized by

$$P(o_i, v_j \,|\, M, p, q) = p_i \sum_{\alpha=1}^{K} M_{i\alpha}\, q_{j|\alpha} \qquad (6)$$
The parameters are estimated by maximum likelihood inference. The assignments $M_{i\alpha}$ are treated as unobserved data in an (annealed) EM procedure, which provides a probabilistic group description. For the details we refer to [7].
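For illustration, sampling (object, feature) pairs from the generative model (6) can be sketched as follows; the parameter layout is our own assumption, and the paper fits these parameters by (annealed) EM rather than sampling from them:

```python
import numpy as np

def sample_acm(p, M, q, n_samples, rng=np.random.default_rng(0)):
    """Draw (object, feature) pairs from the ACM model of Eq. (6).

    p : (N,)   object probabilities p_i
    M : (N, K) Boolean cluster memberships M_iv (one-hot rows)
    q : (K, V) cluster-conditional feature distributions q_{j|v}
    """
    N, K = M.shape
    pairs = []
    for _ in range(n_samples):
        i = rng.choice(N, p=p)                    # 1. pick object o_i
        v = rng.choice(K, p=M[i] / M[i].sum())    # 2. cluster via M_iv
        j = rng.choice(q.shape[1], p=q[v])        # 3. sample feature v_j
        pairs.append((i, j))
    return pairs
```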
In the experiments, texture features are extracted by a bank of 12 Gabor filters
with 3 scales and 4 orientations. Different Gabor channels are assumed to be independently distributed, which results in a concatenated histogram of the empirically
measured channel distributions. Each channel was discretized into 40 bins resulting
in a 480 dimensional histogram representing one image. For the experiments two
different databases were used.
In Fig. 3 a probabilistic K = 10 cluster solution with 160 images containing different
textures taken from the Brodatz album is visualized. The clustering algorithm
produces 8 well separated clusters, while the two clusters in the mid region exhibit
substantial overlap. A close inspection of these two clusters indicates that the
fuzziness of the assignments in this area is plausible as the textures in this area
have similar frequency components in common.
The result for a more complex database of 220 textured images taken from the MIT
VisTex image database with a large range of uniformly and non-uniformly textured
images is depicted in Fig. 4. This plot indicates that the proposed approach provides
a structured view on image databases. Especially the upper left cluster yields some
insight in the clustering solution, as this cluster consists of a large range of nonuniformly textured images, enabling the user to decide that a higher number of
clusters might yield a better solution. The visualization approach fits naturally in
an interactive scenario, where the user can choose interactively data points to focus
his examination on certain areas of interest in the clustering solution.
For comparison, we present in Fig. 2 a multidimensional scaling (Sammon's mapping
[3]) solution for the VisTex database. A detailed inspection of this plot indicates that the embedding is locally quite satisfactory, while no global structure of the
database is visible. This is explained by the fact that Sammon's mapping only
tries to preserve the object distances, while our novel approach first extracts group
structure in a high dimensional feature space and then embeds this group structure
in a low dimensional Euclidean space. While MDS completely neglects the grouping
structure we do not care for the exact inter object distances.
4 Conclusion
In this contribution, a generic framework for the low-dimensional visualization of
probabilistic group structure was presented. The effectiveness of this approach was
demonstrated by experiments on artificial data as well as on databases of textured
images. While we have focused on histogram data, the generality of the approach
makes it feasible to visualize a broad range of different data types, e.g. vectorial,
proximity or co-occurrence data. Thus, it is useful in a broad variety of applications,
ranging from image or document retrieval tasks, the analysis of marketing data to
the inspection of protein data. We believe that this technique provides the user
substantial insight in the validity of clustering solutions making the inspection and
interpretation of large databases more practicable.
A natural extension of the proposed approach leads to the visualization of hierarchical cluster structures by a hierarchy of visualization plots.
References
[1] C.M. Bishop, M. Svensen, and C.K.I. Williams. GTM: the generative topographic mapping. Neural Computation, 10(1):215-234, 1998.
[2] C.M. Bishop and M.E. Tipping. A hierarchical latent variable model for data visualization. Technical Report NCRG/96/028, Neural Computing Research Group, Dept. of Computer Science & Applied Mathematics, Aston University, 1998.
[3] T.F. Cox and M.A.A. Cox. Multidimensional Scaling, volume 59 of Monographs on Statistics and Applied Probability. Chapman & Hall, London, New York, 1994.
[4] J.C. Dunn. A fuzzy relative of the ISODATA process and its use in detecting well-separated clusters. Journal of Cybernetics, 3:32-57, 1975.
[5] I. Gath and A. Geva. Unsupervised optimal fuzzy clustering. IEEE Transactions on Pattern Analysis and Machine Intelligence, 11:773-781, 1989.
[6] T. Hofmann and J.M. Buhmann. Pairwise data clustering by deterministic annealing. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(1):1-25, 1997.
[7] T. Hofmann, J. Puzicha, and M.I. Jordan. Learning from dyadic data. In Advances in Neural Information Processing Systems 11. MIT Press, 1999.
[8] F.C.N. Pereira, N.Z. Tishby, and L. Lee. Distributional clustering of English words. In 30th Annual Meeting of the Association for Computational Linguistics, Columbus, Ohio, pages 183-190, 1993.
[9] K. Rose, E. Gurewitz, and G.C. Fox. A deterministic annealing approach to clustering. Pattern Recognition Letters, 11(9):589-594, September 1990.
Figure 3: Visualization of a probabilistic grouping structure inferred for a database
of 160 Brodatz textures. A mean KL-divergence of 0.031 is obtained.
Figure 4: Visualization of a probabilistic grouping structure inferred for 220 images
of the VisTex database. A mean KL-divergence of 0.0018 is obtained.
605 | 1,553 | Direct Optimization of Margins Improves
Generalization in Combined Classifiers
Llew Mason, Peter Bartlett, Jonathan Baxter
Department of Systems Engineering
Australian National University, Canberra, ACT 0200, Australia
{lmason, bartlett, jon }@syseng.anu.edu.au
Abstract
[Figure: Cumulative training margin distributions for AdaBoost versus our "Direct Optimization Of Margins" (DOOM) algorithm. The dark curve is AdaBoost, the light curve is DOOM. DOOM sacrifices significant training error for improved test error (horizontal marks on the margin = 0 line). Horizontal axis: margin, from −1 to 1.]
1
Introduction
Many learning algorithms for pattern classification minimize some cost function of
the training data, with the aim of minimizing error (the probability of misclassifying
an example). One example of such a cost function is simply the classifier's error
on the training data. Recent results have examined alternative cost functions that
provide better error estimates in some cases. For example, results in [Bar98] show that the error of a sigmoid network classifier f(·) is no more than the sample average of the cost function sgn(θ − yf(x)) (which takes value 1 when yf(x) is no more than θ and 0 otherwise) plus a complexity penalty term that scales as ‖w‖₁/θ, where (x, y) ∈ X × {±1} is a labelled training example, and ‖w‖₁ is the sum of the magnitudes of the output node weights. The quantity yf(x) is the margin of the real-valued function f, and reflects the extent to which f(x) agrees with the label y ∈ {±1}. By minimizing squared error, neural network learning algorithms implicitly
maximize margins, which may explain their good generalization performance.
More recently, Schapire et al [SFBL98] have shown a similar result for convex combinations of classifiers, such as those produced by boosting algorithms. They show
that, with high probability over m random examples, every convex combination of classifiers from some finite class H has error satisfying

    Pr[yf(x) ≤ 0] ≤ E_S[sgn(θ − yf(x))] + O( (1/√m) · sqrt( (log m · log|H|)/θ² + log(1/δ) ) )    (1)

for all θ > 0, where E_S denotes the average over the sample S.
One way to think of these results is as a technique for adjusting the effective complexity of the function class by adjusting θ. Large values of θ correspond to low complexity and small values to high complexity. If the learning algorithm were to optimize the parametrized cost function E_S sgn(θ − yf(x)) for large values of θ, it would not be able to make fine distinctions between different functions in the class, and so the effective complexity of the class would be reduced. The second term in the error bounds (the regularization term involving the complexity parameter θ and the size of the base hypothesis class H) would be correspondingly reduced. In both the neural network and boosting settings, the learning algorithms do not directly minimize these cost functions; we use different values of the complexity parameter in the cost functions only in explaining their generalization performance.
In this paper, we address the question: what are suitable cost functions for convex combinations of classifiers? In the next section, we give general conditions on
parametrized families of cost functions that ensure that they can be used to give error bounds for convex combinations of classifiers. In the remainder of the paper, we
investigate learning algorithms that choose the convex coefficients of a combined
classifier by minimizing a suitable family of piecewise linear cost functions using
gradient descent. Even when the base hypotheses are chosen by the AdaBoost algorithm, and we only use the new cost functions to adjust the convex coefficients,
we obtained an improvement on the test error of AdaBoost in all but one of the
UC Irvine data sets we used. Margin distribution plots show that in many cases the
algorithm achieves these lower errors by sacrificing training error, in the interests
of reducing the new cost function.
2
Theory
In this section, we derive an error bound that generalizes the result for convex
combinations of classifiers described in the previous section. The result involves a
family of margin cost functions (functions mapping from the interval [-1, 1] to ~+),
indexed by an integer-valued complexity parameter N, which measures the resolution at which we examine the margins. The following definition gives conditions on
the margin cost functions that relate the complexity N to the amount by which the
margin cost function is larger than the function sgn( -yf(x)). The particular form
of this definition is not important. In particular, the functions lit N are only used in
the analysis in this section, and will not concern us later in the paper.
Definition 1. A family {C_N : N ∈ ℕ} of margin cost functions is B-admissible for B ≥ 0 if for all N ∈ ℕ there is an interval Y ⊂ ℝ of length no more than B and a function Ψ_N : [−1, 1] → Y that satisfies

    sgn(−α) ≤ E_{Z∼Q_{N,α}}(Ψ_N(Z)) ≤ C_N(α)

for all α ∈ [−1, 1], where E_{Z∼Q_{N,α}}(·) denotes the expectation when Z is chosen randomly as Z = (1/N) Σ_{i=1}^N Z_i with Z_i ∈ {−1, 1} and Pr(Z_i = 1) = (1 + α)/2.
As an example, let C_N(α) = sgn(θ − α) + c, for θ = 1/√N and some constant c. This is a B-admissible family of margin cost functions, for suitably large B. (This is exhibited by the functions Ψ_N(α) = sgn(θ/2 − α) + c/2; the proof involves Chernoff bounds.) Clearly, for larger values of N, the cost functions C_N are closer to the threshold function sgn(−α). Inequality (1) is implied by the following theorem. In this theorem, co(H) is the set of convex combinations of functions from H. A similar proof gives the same result with VCdim(H) ln m replacing ln|H|.
Theorem 2. For any B-admissible family {C_N : N ∈ ℕ} of margin cost functions, any finite hypothesis class H and any distribution P on X × {−1, 1}, with probability at least 1 − δ over a random sample S of m labelled examples chosen according to P, every N and every f in co(H) satisfies

    Pr[yf(x) ≤ 0] ≤ E_S[C_N(yf(x))] + sqrt( (B²/(2m)) (N ln|H| + ln(N(N+1)/δ)) ).

Proof. Fix N and f ∈ co(H), and suppose that f = Σ_i α_i h_i for h_i ∈ H.
Define co_N(H) = {(1/N) Σ_{j=1}^N h_j : h_j ∈ H}, and notice that |co_N(H)| ≤ |H|^N. As in the proof of (1) in [SFBL98], we show using the probabilistic method that there is a function g in co_N(H) that closely approximates f. Let Q be the distribution on co_N(H) corresponding to the average of N independent draws from {h_i} according to the distribution {α_i}, and let Q_{N,α} be the distribution given in Definition 1. Then for any fixed pair x, y, when g is chosen according to Q the distribution of yg(x) is Q_{N,yf(x)}. Now, fix the function Ψ_N implied by the B-admissibility condition. By the definition of B-admissibility,

    E_{g∼Q} E_P[Ψ_N(yg(x))] = E_P E_{Z∼Q_{N,yf(x)}}[Ψ_N(Z)] ≥ E_P sgn(−yf(x)) = P[yf(x) ≤ 0].

Similarly, E_S[C_N(yf(x))] ≥ E_{g∼Q} E_S[Ψ_N(yg(x))]. Hence, if Pr[yf(x) ≤ 0] − E_S[C_N(yf(x))] ≥ ε_N, then E_{g∼Q}[E_P[Ψ_N(yg(x))] − E_S[Ψ_N(yg(x))]] ≥ ε_N. Thus,

    Pr[∃f ∈ co(H) : Pr[yf(x) ≤ 0] ≥ E_S[C_N(yf(x))] + ε_N]
      ≤ Pr[∃g ∈ co_N(H) : E_P[Ψ_N(yg(x))] ≥ E_S[Ψ_N(yg(x))] + ε_N]
      ≤ |H|^N exp(−2mε_N²/B²),

where the last inequality follows from the union bound and Hoeffding's inequality. Setting this probability to δ_N = δ/(N(N+1)), solving for ε_N, and summing over values of N completes the proof, since Σ_{N∈ℕ} δ_N = δ. □
0
For the best bounds, we want WN to satisfy EZ~QN." [w N(Z)] 2 sgn( -0), but with
the difference EZ~QN , ,, [WN(Z) - sgn(-a)] as small as possible for a E [-1 , 1].
One approach would be to minimize the expectation of this difference, for 0 chosen
uniformly in [-1,1]. However, this yields a non-monotone solution for CN(o).
Figure la illustrates an example of a monotone B-admissible family; it shows the
cost functions CN(a) = EZ~QN,,, WN(Z), for N = 20,50 and 200, where WN(O) =
sgn(y'210gN/N - a) + I/N.
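Since Z is a rescaled binomial variable, the expectation C_N(α) = E_{Z∼Q_{N,α}} Ψ_N(Z) can be evaluated exactly. The sketch below reproduces the family of Figure 1a under the stated Ψ_N; the 0/1 convention for sgn follows the text, and the printed values are purely illustrative.

```python
import math
from math import comb

def cost_CN(alpha, N):
    """C_N(alpha) with Psi_N(z) = sgn(theta - z) + 1/N, theta = sqrt(2 log N / N),
    where sgn(u) is the 0/1 step (1 when u >= 0)."""
    theta = math.sqrt(2.0 * math.log(N) / N)
    p = (1.0 + alpha) / 2.0                     # Pr(Z_i = +1)
    total = 0.0
    for k in range(N + 1):                      # k = number of +1 draws
        z = 2.0 * k / N - 1.0                   # value of Z = (1/N) sum Z_i
        psi = (1.0 if z <= theta else 0.0) + 1.0 / N
        total += comb(N, k) * p**k * (1 - p)**(N - k) * psi
    return total

print([round(cost_CN(a, 50), 3) for a in (-0.5, 0.0, 0.5)])
```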
3
Algorithm
We now consider how to select convex coefficients w₁, …, w_T for a sequence of {−1, 1} classifiers h₁, …, h_T so that the combined classifier f(x) = Σ_{t=1}^T w_t h_t(x) has small error. In the experiments we used the hypotheses provided by AdaBoost. (The aim was to investigate how useful the error estimates provided by the cost functions of the previous section are.)
If we take Theorem 2 at face value and ignore log terms, the best error bound is obtained if the weights w₁, …, w_T and the complexity N are chosen to minimize
Figure 1: (a) The margin cost functions C_N(α), for N = 20, 50 and 200, compared to the function sgn(−α). Larger values of N correspond to closer approximations to sgn(−α). (b) Piecewise linear upper bounds on the functions C_N(α), and the function sgn(−α).
(1/m) Σ_{i=1}^m C_N(y_i f(x_i)) + K√(N/m), where K is a constant and {C_N} is a family of B-admissible cost functions. Although Theorem 2 provides an expression for the constant K, in practical problems this will almost certainly be an overestimate and so our penalty for even moderately complex models will be too great. To solve this problem, instead of optimizing the average cost of the margins plus a penalty term over all values of the parameter θ, we estimated the optimal value of θ using a cross-validation set. That is, for fixed values of θ in a discrete but fairly dense set we selected weights optimizing the average cost (1/m) Σ_{i=1}^m C_θ(y_i f(x_i)) and then chose the solution with smallest error on an independent cross-validation set.
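A minimal sketch of this selection loop follows; fit_weights and val_error are hypothetical callables standing in for the cost minimization and the validation-error computation, neither of which is specified here.

```python
import numpy as np

def select_theta(thetas, fit_weights, val_error):
    """Pick the (theta, w) pair with smallest validation error."""
    best = None
    for theta in thetas:
        w = fit_weights(theta)      # minimize (1/m) sum_i C_theta(y_i f(x_i))
        err = val_error(w)
        if best is None or err < best[0]:
            best = (err, theta, w)
    return best

# usage sketch: select_theta(np.linspace(0.05, 0.95, 19), fit_weights, val_error)
```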
We considered the use of the cost functions plotted in Figure 1a, but the existence of flat regions caused difficulties for gradient descent approaches. Instead we adopted a piecewise linear family of cost functions C_θ that are linear in the intervals [−1, −θ], [−θ, θ], and [θ, 1], and pass through the points (−1, 1.2), (−θ, 0.1), (θ, 0.1), and (1, 0), for θ ∈ (0, 1). The numbers were chosen to ensure the C_θ are upper bounds on the cost functions of Figure 1a (see Figure 1b). Note that θ plays the role of a complexity parameter, except that in this case smaller values of θ correspond to higher complexity classes.
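For concreteness, a sketch of this cost family is given below; the four knot points follow the text, and the implementation is simply linear interpolation between them.

```python
import numpy as np

def pl_cost(margin, theta):
    """Piecewise linear C_theta through (-1, 1.2), (-theta, 0.1),
    (theta, 0.1), (1, 0); linear on the three stated intervals."""
    xs = np.array([-1.0, -theta, theta, 1.0])
    ys = np.array([1.2, 0.1, 0.1, 0.0])
    return np.interp(np.clip(margin, -1.0, 1.0), xs, ys)

print(pl_cost(np.array([-0.9, -0.2, 0.3, 0.95]), theta=0.4))
```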
Even with the restriction to piecewise linear cost functions, the problem of minimizing (1/m) Σ_{i=1}^m C_θ(y_i f(x_i)) is still hard. Fortunately, the nature of this cost function makes it possible to find successful heuristics (which is why we chose it). The algorithm we have devised to optimize the C_θ family of cost functions is called Direct Optimization Of Margins (DOOM). (The pseudo-code of the algorithm is given in the full version [MBB98].) DOOM is basically a form of gradient descent, with two complications: it takes account of the fact that the cost function is not differentiable at −θ and θ, and it ensures that the weight vector lies on the unit ball in l₁. In order to avoid problems with local minima we actually allow the weight vector to lie within the l₁-ball throughout optimization rather than on the l₁-ball. If the weight vector reaches the surface of the l₁-ball and the update direction points out of the l₁-ball, it is projected back to the surface of the l₁-ball.
!
Observe that the gradient of
2::1 CO(yi!(Xi)) is a constant function of the
weights W = (WI, ... , WT) provided no example (Xi, Yi) "crosses" one of the discontinuities at 0 or 0 (Le. provided the margin yi!(Xi) does not cross 0 or 0).
Hence, the central operation of DOOM is to step in the negative gradient direction
until an example's margin hits one of the discontinuities (projecting where necessary to ensure the weight vector lies within the h ball). At this point the gradient
vector becomes multi-valued (generally two-valued but it can be more). Each of the
possible gradient directions is then tested by taking a small step in that direction (a
L. Mason. P L. Bartlett and J. Baxter
292
random subset of the gradient directions is chosen if there are too many of them).
If none of the directions lead to a decrease in the cost, the examples whose margins
lie on discontinuities of the cost function are added to a constraint set E. In subsequent iterations the same stepping procedure above is followed except that the
direction step is modified to ensure that the examples in E do not move (Le. they
remain on the discontinuity points of C(J). That is, the weight vector moves within
the subspace defined by the examples in E. If no progress is made in any iteration,
the constraint set E is reset to zero. If still no progress is made the procedure
terminates.
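The following is a heavily simplified sketch of one such descent step, not the pseudo-code of [MBB98]: it uses a numerical gradient in place of the multi-valued analytic one, and a crude radial rescaling in place of DOOM's exact handling of the l1-ball and the constraint set E. The matrix H with H[i, t] = y_i h_t(x_i) is an assumed input.

```python
import numpy as np

def pl_cost(margin, theta):
    xs = np.array([-1.0, -theta, theta, 1.0])
    ys = np.array([1.2, 0.1, 0.1, 0.0])
    return np.interp(np.clip(margin, -1.0, 1.0), xs, ys)

def doom_like_step(w, H, theta, lr=1e-2, eps=1e-4):
    """One gradient step on the average piecewise-linear cost, kept in the l1-ball."""
    avg_cost = lambda v: pl_cost(H @ v, theta).mean()
    grad = np.zeros_like(w)
    for t in range(len(w)):                 # numerical gradient, for clarity only
        e = np.zeros_like(w); e[t] = eps
        grad[t] = (avg_cost(w + e) - avg_cost(w - e)) / (2 * eps)
    w_new = w - lr * grad
    l1 = np.abs(w_new).sum()
    if l1 > 1.0:                            # crude rescaling into the l1-ball,
        w_new = w_new / l1                  # not DOOM's exact projection
    return w_new
```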
4
Experiments
We used the following two-class problems from the UC Irvine database [CBM98]:
Cleveland Heart Disease, Credit Application, German, Glass, Ionosphere, King
Rook vs King Pawn, Pima Indians Diabetes, Sonar, Tic-Tac-Toe, and Wisconsin
Breast Cancer. For the sake of simplicity we did not consider multi-class problems. Each data set was randomly separated into train, test and validation sets,
with the test and validation sets being equal in size. This was repeated 10 times
independently and the results were averaged.
Each experiment consisted of the following steps.
First, AdaBoost was run on the training data to produce a sequence of base classifiers and their corresponding weights. In all of the experiments the base classifiers were axis-orthogonal hyperplanes (also known as decision stumps); this choice ensured that the complexity of the class of base classifiers was constant. Boosting was halted when adding a new classifier failed to decrease the error on the validation set. DOOM was then run on the classifiers produced by AdaBoost for a large range of θ values and 1000 random initial weight vectors for each value of θ. The weight vector (and θ value) with minimum misclassification on the validation set was chosen as the final solution.

[Figure 2: Relative improvement of DOOM over AdaBoost for all examined datasets. Horizontal axis: AdaBoost test error (%).]
advantage from using more than a single classifier. In these datasets, the number
of classifiers was chosen so that the validation error was reasonably stable.
35
x:
~
,
30
...~
.~
A comparison between the test errors generated by AdaBoost and DOOM is shown
in Figure 2. In only one data set did DOOM produce a classifier which performed
worse than AdaBoost in terms of test error; for most data sets DOOM's test error
was a significant improvement over AdaBoost's.
Figure 3 shows cumulative training margin distribution graphs for four of the
datasets for both AdaBoost and DOOM (with optimal θ chosen by cross-validation).
For a given margin the value on the curve corresponds to the proportion of training
examples with margin no more than this value. The test errors for both algorithms
are also shown for comparison, as short horizontal lines on the vertical axis.
The margin distributions show that the value of the minimum training margin has
no real impact on generalization performance. (See also [Bre97] and [GS98].) As
40
..................... . ... . .................................................... . ........
100 ..........................
Wisconsion Breast Cancer
Credit Application
so
30
~
~
!
1!
:? 20
"E
"
u
JI
10
o
-I
100
-U.S -0.6 -U.4 ?0.2
0
0.2
!:
.."
~
u"
60
40
20
o+-~~--~~~~--~~~~
0.4
0.6
0.8
I
..?...........................
-I
100
0
0.2
0.4
0.6
0.8
I
0
0.2
0.4
0.6
0.8
I
.........................?....................
Ionosphere
""1:
-O.S -U.6 -U.4 -0.2
Sonar
60
]
~ 40
u"
20
20
?1
-0.8 -U.6 -U.4 -0.2
0
Margin
0.2 0.4
0.6
0.8
I
_~
-I
.....- ............-.
-0.8 -0.6 -U.4 -U.2
Margin
Figure 3: Cumulative training margin distributions for four datasets. The dark curve is AdaBoost, the light curve is DOOM with θ selected by cross-validation. The test errors for both algorithms are marked on the vertical axis at margin 0.
can be seen in Figure 3 (Credit Application and Sonar data sets), the generalization performance of the combined classifier produced by DOOM can be as good
as or better than that of the classifier produced by AdaBoost, despite having dramatically worse minimum training margin. Conversely, Figure 3 (Ionosphere data
set) shows that improved generalization performance can be associated with an
improved minimum margin.
The margin distributions also show that there is a balance to be found between training error and complexity (as measured by θ). DOOM is willing to sacrifice training error in order to reduce complexity and thereby obtain a better margin distribution. For instance, in Figure 3 (Sonar data set), DOOM's training error is over 20% while AdaBoost's is 0%, but DOOM's test error is 5% less than that of AdaBoost's. The reason for this success can be seen in Figure 4, which illustrates the changes in the cost function, training error, and test error as a function of θ. The optimal complexity for this data set is low (corresponding to a large optimal θ). In this case, a reduction in complexity is more important to generalization error than a reduction in training error.
5
Conclusion
In this paper we have addressed the question: what are suitable cost functions for convex combinations of base hypotheses? For general families of cost functions that are functions of the margin of a sample, we proved (Theorem 2) that the error of a convex combination is no more than the sample average of the cost function plus a regularization term involving the complexity of the cost function and the size of the base hypothesis class.
We constructed a piecewise linear family of cost functions satisfying the conditions of
Theorem 2 and presented a heuristic algorithm (DOOM) for optimizing the sample
[Figure 4 plots; legend: AdaBoost train, AdaBoost test, DOOM train, DOOM test.]
Figure 4: Sonar data set. Left: plot of the cost (1/m) Σ_{i=1}^m C_θ(y_i f(x_i)) against θ for AdaBoost and DOOM. Right: plot of training and test error against θ.
average of the cost.
We ran experiments on several of the datasets in the UC Irvine database, in which
AdaBoost was used to generate a set of base classifiers and then DOOM was used
to find the optimal convex combination of those classifiers. In all but one case
the convex combination generated by DOOM had lower test error than AdaBoost's
combination. Margin distribution plots show that in many cases DOOM achieves
these lower test errors by sacrificing training error, in the interests of reducing the
new cost function. The margin plots also show that the size of the minimum margin
is not relevant to generalization performance.
Acknowledgments
Thanks to Yoav Freund, Wee Sun Lee and Rob Schapire for helpful comments and
suggestions. This research was supported in part by a grant from the Australian Research Council. Jonathan Baxter was supported by an Australian Research Council
Fellowship and Llew Mason was supported by an Australian Postgraduate Award.
References
[Bar98] P.L. Bartlett. The sample complexity of pattern classification with neural networks: the size of the weights is more important than the size of the network. IEEE Transactions on Information Theory, 44(2):525-536, 1998.
[Bre97] L. Breiman. Prediction games and arcing algorithms. Technical Report 504, Department of Statistics, University of California, Berkeley, 1997.
[CBM98] C. Blake, E. Keogh, and C.J. Merz. UCI repository of machine learning databases, 1998. http://www.ics.uci.edu/~mlearn/MLRepository.html.
[GS98] A. Grove and D. Schuurmans. Boosting in the limit: Maximizing the margin of learned ensembles. In Proceedings of the Fifteenth National Conference on Artificial Intelligence, pages 692-699, 1998.
[MBB98] L. Mason, P.L. Bartlett, and J. Baxter. Improved generalization through explicit optimization of margins. Technical report, Department of Systems Engineering, Australian National University, 1998. (Available from http://syseng.anu.edu.au/lsg.)
[SFBL98] R.E. Schapire, Y. Freund, P.L. Bartlett, and W.S. Lee. Boosting the margin: a new explanation for the effectiveness of voting methods. Annals of Statistics, to appear, 1998.
added:1 gradient:8 qep:1 subspace:1 parametrized:2 me:1 extent:1 reason:1 length:1 code:1 minimizing:3 balance:1 pima:2 relate:1 acknow:1 negative:1 upper:2 vertical:2 datasets:6 finite:2 descent:3 pair:1 california:1 distinction:1 learned:1 boost:1 discontinuity:4 address:1 able:1 pattern:2 oj:2 explanation:1 ia:1 suitable:3 misclassification:1 difficulty:1 axis:3 relative:1 wisconsin:2 freund:2 admissibility:2 suggestion:1 versus:1 validation:9 cancer:3 supported:3 last:1 allow:1 explaining:1 face:1 correspondingly:1 taking:1 curve:5 cumulative:3 qn:7 made:2 projected:1 transaction:1 ignore:1 implicitly:1 overfitting:1 rook:1 summing:1 xi:7 sonar:5 why:1 nature:1 reasonably:1 aboost:1 schuurmans:1 complex:1 did:2 dense:1 repeated:1 canberra:1 en:5 explicit:1 lie:4 ib:1 admissible:5 theorem:7 mason:6 ionosphere:3 concern:1 postgraduate:1 adding:1 gained:1 ci:1 magnitude:1 te:2 illustrates:2 anu:2 margin:41 simply:1 likely:1 ez:4 failed:1 corresponds:1 satisfies:2 marked:1 king:2 labelled:2 hard:1 change:1 except:2 reducing:2 uniformly:1 wt:3 called:1 pas:1 e:8 la:2 merz:1 select:1 mark:1 jonathan:2 indian:2 c9:1 tested:1 |
606 | 1,554 | Neuronal Regulation Implements
Efficient Synaptic Pruning
Gal Chechik and Isaac Meilijson
School of Mathematical Sciences
Tel Aviv University, Tel Aviv 69978, Israel
ggal@math.tau.ac.il isaco@math.tau.ac.il
Eytan Ruppin
Schools of Medicine and Mathematical Sciences
Tel Aviv University, Tel Aviv 69978, Israel
ruppin@math.tau.ac.il
Abstract
Human and animal studies show that the mammalian brain undergoes massive synaptic pruning during childhood, removing about half of the synapses until puberty. We have previously shown that maintaining network memory performance while synapses are deleted requires that synapses are properly modified and pruned, removing the weaker synapses. We now show that neuronal regulation, a mechanism recently observed to maintain the average neuronal input field, results in weight-dependent synaptic modification. Under the correct range of the degradation dimension and synaptic upper bound, neuronal regulation removes the weaker synapses and judiciously modifies the remaining synapses. It implements near optimal synaptic modification, and maintains the memory performance of a network undergoing massive synaptic pruning. Thus, this paper shows that in addition to the known effects of Hebbian changes, neuronal regulation may play an important role in the self-organization of brain networks during development.
1
Introduction
This paper studies one of the fundamental puzzles in brain development: the massive synaptic pruning observed in mammals during childhood, removing more than half of the synapses until puberty (see [1] for review). This phenomenon is observed in various areas of the brain in both animal and human studies. How can the brain function after such massive synaptic elimination? What could be the computational advantage of such a seemingly wasteful developmental strategy? In
previous work [2], we have shown that synaptic overgrowth followed by judicious pruning during development improves the performance of an associative memory network with limited synaptic resources, thus suggesting a new computational explanation for synaptic pruning in childhood. The optimal pruning strategy was found to require that synapses are deleted according to their efficacy, removing the weaker synapses first.
But is there a mechanism that can implement these theoretically-derived synaptic pruning strategies in a biologically plausible manner? To answer this question, we focus here on studying the role of neuronal regulation (NR), a mechanism operating to maintain the homeostasis of the neuron's membrane potential. NR has been recently identified experimentally by [3], who showed that neurons both up-regulate and down-regulate the efficacy of their incoming excitatory synapses in a multiplicative manner, maintaining their membrane potential around a baseline level. Independently, [4] have studied NR theoretically, showing that it can efficiently maintain the memory performance of networks undergoing synaptic degradation. Both [3] and [4] have hypothesized that NR may lead to synaptic pruning during development.
In this paper we show that this hypothesis is both computationally feasible and
biologically plausible by studying the modification of synaptic values resulting from
the operation of NR. Our work thus gives a possible account for the way brain
networks maintain their performance while undergoing massive synaptic pruning.
2
The Model
NR-driven synaptic modification (NRSM) results from two concomitant processes:
synaptic degradation (which is the inevitable consequence of synaptic turnover
[5]), and neuronal regulation (NR) operating to compensate for the degradation.
We therefore model NRSM by a sequence of degradation-strengthening steps. At
each time step, synaptic degradation stochastically reduces the synaptic strength W_t (W_t > 0) to W'_{t+1} by

    W'_{t+1} = W_t − (W_t)^α η_t ,   η ∼ N(μ_η, σ_η)    (1)

where η is a noise term with positive mean and the power α defines the degradation dimension parameter, chosen in the range [0, 1]. Neuronal regulation is modeled by letting the post-synaptic neuron multiplicatively strengthen all its synapses by a common factor so as to restore its original input field,

    W_{t+1} = W'_{t+1} · I_i^0 / I_i^t    (2)

where I_i^t is the input field of neuron i at time t and I_i^0 its original (baseline) value. The excitatory synaptic efficacies are assumed to have a viability lower bound B⁻, below which a synapse degenerates and vanishes, and a soft upper bound B⁺, beyond which a synapse is strongly degraded, reflecting their maximal efficacy.
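A minimal sketch of one degradation-strengthening step for a single neuron is given below, under stated assumptions: parameters mirror Figure 1, the soft upper bound is approximated by hard clipping, and the presynaptic activity vector x is an assumed input.

```python
import numpy as np

rng = np.random.default_rng(0)

def nr_step(w, x, alpha=0.8, b_minus=1e-5, b_plus=18.0):
    """One degradation (Eq. 1) plus neuronal-regulation (Eq. 2) step."""
    eta = rng.normal(0.05, 0.05, size=w.shape)
    w_deg = w - (w ** alpha) * eta                # Eq. (1): stochastic degradation
    field_before = w @ x                          # input field before degradation
    field_after = w_deg @ x                       # assumed to stay positive here
    w_new = w_deg * (field_before / field_after)  # Eq. (2): multiplicative NR
    w_new[w_new < b_minus] = 0.0                  # synapse vanishes below B-
    return np.minimum(w_new, b_plus)              # hard stand-in for the soft B+
```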
To study the above process in a network, a model incorporating a segregation between inhibitory and excitatory neurons (i.e. obeying Dale's law) is required. To generate this essential segregation, we modify the standard low-activity associative memory model proposed by [6] by adding a small positive term to the synaptic learning rule. In this model, M memories are stored in an excitatory N-neuron network forming attractors of the network dynamics. The synaptic efficacy W_ij between the jth (pre-synaptic) neuron and the ith (post-synaptic) neuron is
    W_ij = Σ_{μ=1}^{M} [(ξ_i^μ − p)(ξ_j^μ − p) + a] ,   1 ≤ i ≠ j ≤ N    (3)
where {ξ^μ}_{μ=1}^M are {0, 1} memory patterns with coding level p (fraction of firing neurons), and a is some positive constant¹. The updating rule for the state X_i^t of the ith neuron at time t is

    X_i^{t+1} = θ(f_i^t) ,   f_i^t = (1/N) Σ_{j=1}^N g(W_ij) X_j^t − (I/N) Σ_{j=1}^N X_j^t − T ,   θ(f) = (1 + sign(f))/2    (4)

where T is the neuronal threshold, and I is the inhibition strength. g is a general modification function over the excitatory synapses, which is either derived explicitly (see Section 4) or determined implicitly by the operation of NRSM. If g is linear and I = Ma, the model reduces to the original model described by [6]. The overlap m^μ (or similarity) between the network's activity pattern X and the memory ξ^μ serves to measure memory performance (retrieval acuity), and is defined as m^μ = (1/(N p (1 − p))) Σ_{j=1}^N (ξ_j^μ − p) X_j.
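A small illustrative sketch of these dynamics and of the overlap measure follows; the sizes, parameters and the single stored pattern are stand-ins, not an experiment from the paper.

```python
import numpy as np

def update(X, W, I, T, g=lambda w: w):
    """One parallel step of Eq. (4) with modification function g."""
    N = len(X)
    field = (g(W) @ X) / N - I * X.sum() / N - T
    return (np.sign(field) + 1) / 2             # theta(f) = (1 + sign(f)) / 2

def overlap(X, xi, p):
    """m^mu, normalized to 1 for perfect retrieval."""
    N = len(X)
    return np.dot(xi - p, X) / (N * p * (1 - p))

rng = np.random.default_rng(0)
N, p = 400, 0.1
xi = (rng.random(N) < p).astype(float)          # one {0,1} pattern, coding level p
W = np.outer(xi - p, xi - p) + 0.01             # Eq. (3) with M = 1, a = 0.01
np.fill_diagonal(W, 0.0)
X1 = update(xi.copy(), W, I=0.01, T=0.0)
print("overlap:", overlap(X1, xi, p))
```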
N euronally Regulated Synaptic Modification
NRSM was studied by simulating the degradation-strengthening sequence in a network in which memory patterns were stored according to Eq.3. Figure la plots a
typical distribution of synaptic values as traced along a sequence of degradationstrengthening steps (Eq. 1,2) . As evident, the synaptic values diverge: some of the
weights are strengthened and lie close to the upper synaptic bounds, while the other
synapses degenerate and vanish. Using probabilistic considerations, it can be shown
that the synaptic distribution converge to a meta-stable state where it remains for
long waiting times. Figure Ib describes the metastable synaptic distribution as
calculated for different 0 values .
Evolving distribution of synaptic efficacies
a. Simulation results
b. Numerical results
10000
CJ)
Q)
CJ)
c..
1.0
/1, 5000
r
r
r
I
I
I
r
\1000
I
0.8
ctl
c::
>CJ)
r
1400
-....
I
_._. Alpha=O.O
- - - Alpha=O.5
Alpha=O.9
~.6
'(j)
c::
~0.4
0
I
i
i
Q)
.0
E
0.2
:::l
c::
0.0
/
/
..
//
0
Figure 1: Distribution of synaptic strengths following a degradation-strengthening process. a) Synaptic distribution after 0, 200, 400, 1000 and 5000 degradation-strengthening steps of a 400-neuron network with 1000 stored memory patterns. α = 0.8, p = 0.1, B⁻ = 10⁻⁵, B⁺ = 18 and η ∼ N(0.05, 0.05). Qualitatively similar results were obtained for a wide range of simulation parameters. b) The synaptic distribution of the remaining synapses at the meta-stable state was calculated as the main eigenvector of the transition probability matrix.
[Figure 2 plots: a) NRSM functions at the metastable state (horizontal axis: original synaptic strength); b) NR modification versus random deletion (horizontal axis: network's connectivity).]
Figure 2: a) NRSM functions at the metastable state for different α values. Results were obtained in a 400-neuron network after performing 5000 degradation-strengthening steps. Parameter values are as in Figure 1, except B⁺ = 12. b) Performance of NR modification and random deletion. The retrieval acuity of 200 memories stored in a network of 800 neurons is portrayed as a function of network connectivity, as the network undergoes continuous pruning until NR reaches the metastable state. α = 0, B⁺ = 7.5, p = 0.1, m₀ = 0.80, a = 0.01, T = 0.35, B⁻ = 10⁻⁵ and η ∼ N(0.01, 0.01).
To further investigate which synapses are strengthened and which are pruned, we study the resulting synaptic modification function. Figure 2a plots the value of synaptic efficacy at the metastable state as a function of the initial synaptic efficacy, for different values of the degradation dimension α. As observed, a sigmoidal dependency is obtained, where the slope of the sigmoid strongly depends on the degradation dimension. In the two limit cases, additive degradation (α = 0) results in a step function at the metastable state, while multiplicative degradation (α = 1) results in random diffusion of the synaptic weights toward a memoryless mean value. Different values of α and B⁺ result in different levels of synaptic pruning: when the synaptic upper bound B⁺ is high, the surviving synapses assume high values, leading to massive pruning to maintain the neuronal input field, which in turn reduces the network's performance. Low B⁺ values lead to high connectivity, but limit synapses to a small set of possible values, again reducing memory performance. Our simulations show that optimal memory retrieval is obtained for B⁺ values that lead to deletion levels of 40%-60%, in which NR indeed maintains the network performance. Figure 2b traces the average retrieval acuity of a network throughout the operation of NR, versus a network subject to random deletion at the same pruning levels. While the retrieval of a randomly pruned network collapses already at low deletion levels of about 20%, a network undergoing NR performs well even at high deletion levels.
4
Optimal Modification In Excitatory-Inhibitory Networks
To obtain a comparative yardstick to evaluate the efficiency of NR as a selective pruning mechanism, we derive optimal modification functions maximizing memory performance in our excitatory-inhibitory model and compare them to the NRSM functions.
We study general synaptic modification functions, which prune some of the synapses
and possibly modify the rest, while satisfying global constraints on synapses such
as the number or total strength of the synapses. These constraints reflect the
observation that synaptic activity is strongly correlated with energy consumption
in the brain [7], and synaptic resources may hence be inherently limited in the adult
brain.
We evaluate the impact of these functions on the network's retrieval performance by deriving their effect on the signal-to-noise ratio (S/N) of the neuron's input field (Eqs. 3, 4), known to be the primary determinant of retrieval capacity ([8]). This analysis is conducted in a similar manner to [2]; in the resulting S/N expression, z ∼ N(0, 1) and g is the modification function of Eq. 4, now explicitly applied to the synapses. To derive optimal synaptic modification functions with
limited synaptic resources, we consider g functions that zero all synapses except those in some set A, and keep the integral

    ∫_A g^k(z) φ(z) dz ,   k = 0, 1, … ;   g(z) = 0 ∀ z ∉ A    (6)
limited. We then maximize the S/N under this constraint using the Lagrange method. Our results show that without any synaptic constraints the optimal function is the identity function; that is, the original Hebbian rule is optimal. When the number of synapses is restricted (k = 0), the optimal modification function is a linear function of all the remaining synapses
    g(W) = a W − a μ + b ,   where the constants a and b are determined by the moments ∫_A φ(z) dz, ∫_A z φ(z) dz and ∫_A z² φ(z) dz, together with E(W) and V(W), and μ = E(W)    (7)
for any deletion set A. To find the synapses that should be deleted, we have numerically searched for a deletion set maximizing S/N while limiting g(W) to positive values (as required by the segregation between excitatory and inhibitory neurons). The results show that weak-synapses pruning, a modification strategy that removes the weakest synapses and modifies the rest according to Eq. 7, is optimal at deletion levels above 50%. For lower deletion levels, the above g function fails to satisfy the positivity constraint for any set A. When the positivity constraint is ignored, S/N is maximized if the weights closest to the mean are deleted and the remaining synapses are modified according to Eq. 7. We name this strategy mean-synapses pruning. Figure 3 plots the memory capacity under weak-synapses pruning (compared with random deletion and mean-synapses pruning), showing that pruning the weak synapses performs at least near-optimally for lower deletion levels
as well. Even more interesting, under the correct parameter values weak-synapses pruning results in a modification function that has a similar form to the NR-driven modification function studied in the previous section: both strategies remove the weakest synapses and linearly modify the remaining synapses in a similar manner.
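A sketch of the weak-synapses pruning strategy follows; the slope and offset of the linear modification are left as free parameters, since computing them as in Eq. 7 requires the moments over the retained set A.

```python
import numpy as np

def weak_synapse_prune(W, deletion_level, a=1.0, b=0.0):
    """Zero the weakest fraction of synapses, linearly modify the rest."""
    thresh = np.quantile(W, deletion_level)
    return np.where(W >= thresh, a * W + b, 0.0)

rng = np.random.default_rng(0)
W = rng.gamma(2.0, 1.0, size=(100, 100))        # toy positive weight matrix
print((weak_synapse_prune(W, 0.6) == 0).mean()) # ~60% of synapses removed
```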
In the case of limited overall synaptic strength (k > 0 in Eq. 6), the optimal g satisfies

    z − 2γ₁ [g(z) − E(g(z))] − γ₂ k g(z)^{k−1} = 0    (8)

and thus for k = 1 and k = 2 the optimal modification function is again linear. For k > 2 a sublinear modification function is optimal, where g is a function of z^{1/(k−1)},
[Figure 3 plots: Capacity of different synaptic modification functions g(w). a) analysis results; b) simulation results.]
Figure 3: Comparison between performance of different modification strategies as a function of the deletion level (percentage of synapses pruned). Capacity is measured as the number of patterns that can be stored in the network (N = 2000) and be recalled almost correctly (m > 0.95) from a degraded pattern (m₀ = 0.80).
and is thus unbounded for all k. Therefore, in our model, bounds on the synaptic efficacies are not dictated by the optimization process. Their computational advantage arises from their effect on preserving memory capacity in the face of ongoing synaptic pruning.
5
Discussion
By studying NR-driven synaptic modification in the framework of associative memory networks, we show that NR prunes the weaker synapses and modifies the remaining synapses in a sigmoidal manner. The critical variables that govern the pruning process are the degradation dimension and the upper synaptic bound. Our results show that, in the correct range of these parameters, NR implements a near optimal strategy, maximizing memory capacity at the sparse connectivity levels observed in the brain.
A fundamental requirement of central nervous system development is that the system should continuously function, while undergoing major structural and functional developmental changes. It has been proposed that a major functional role
of neuronal down-regulation during early infancy is to maintain neuronal activity
at its baseline levels while facing continuous increase in the number and efficacy
of synapses [3]. Focusing on up-regulation, our work shows that NR has another
important interesting effect: that of modifying and pruning synapses in a continuously optimal manner. Neuronally regulated synaptic modifications may play the
same role also in the peripheral nervous system: It was recently shown that in the
neuro-muscular junction the muscle regulates its incoming synapses in a way similar to NR [9]. Our analysis suggests this process may be the underlying cause for
the finding that synapses in the neuro-muscular junction are either strengthened or
pruned according to their initial efficacy [10].
The significance of our work goes beyond understanding synaptic organization and
remodeling in the associative memory models studied in this paper. Our analysis
bears relevance to two other fundamental paradigms: hetero-associative memory and self-organizing maps, which share the same basic synaptic structure of storing associations between sets of patterns via a Hebbian learning rule.
Combining the investigation of a biologically identified mechanism with the analytic study of performance optimization in neural network models, this paper shows the biologically plausible and beneficial role of weight-dependent synaptic pruning. Thus, in addition to the known effects of Hebbian learning, neuronal regulation may play an important role in the self-organization of brain networks during development.
References
[1] G.M. Innocenti. Exuberant development of connections and its possible permissive role in cortical evolution. Trends in Neurosciences, 18:397-402, 1995.
[2] G. Chechik, I. Meilijson, and E. Ruppin. Synaptic pruning during development: A computational account. Neural Computation, in press, 1998.
[3] G.G. Turrigiano, K. Leslie, N. Desai, and S.B. Nelson. Activity dependent scaling of quantal amplitude in neocortical pyramidal neurons. Nature, 391(6670):892-896, 1998.
[4] D. Horn, N. Levy, and E. Ruppin. Synaptic maintenance via neuronal regulation. Neural Computation, 10(1):1-18, 1998.
[5] J.R. Wolff, R. Laskawi, W.B. Spatz, and M. Missler. Structural dynamics of synapses and synaptic components. Behavioral Brain Research, 66(1-2):13-20, 1995.
[6] M.V. Tsodyks and M. Feigel'man. Enhanced storage capacity in neural networks with low activity level. Europhysics Letters, 6:101-105, 1988.
[7] P.E. Roland. Brain Activation. Wiley-Liss, 1993.
[8] I. Meilijson and E. Ruppin. Optimal firing in sparsely-connected low-activity attractor networks. Biological Cybernetics, 74:479-485, 1996.
[9] G.W. Davis and C.S. Goodman. Synapse-specific control of synaptic efficacy at the terminals of a single neuron. Nature, 392(6671):82-86, 1998.
[10] H. Colman, J. Nabekura, and J.W. Lichtman. Alterations in synaptic strength preceding axon withdrawal. Science, 275(5298):356-361, 1997.
607 | 1,555 | Computation of Smooth Optical Flow in a
Feedback Connected Analog Network
Alan Stocker *
Institute of Neuroinformatics
University and ETH Zürich
Winterthurerstrasse 190
8057 Zürich, Switzerland
Rodney Douglas
Institute of Neuroinformatics
University and ETH Zürich
Winterthurerstrasse 190
8057 Zürich, Switzerland
Abstract
In 1986, Tanner and Mead [1] implemented an interesting constraint satisfaction circuit for global motion sensing in a VLSI. We report here a
new and improved aVLSI implementation that provides smooth optical
flow as well as global motion in a two dimensional visual field. The computation of optical flow is an ill-posed problem, which expresses itself as
the aperture problem. However, the optical flow can be estimated by the
use of regularization methods, in which additional constraints are introduced in terms of a global energy functional that must be minimized. We show how the algorithmic constraints of Horn and Schunck [2] on computing smooth optical flow can be mapped onto the physical constraints of an equivalent electronic network.
1 Motivation
The perception of apparent motion is crucial for navigation. Knowledge of local motion of
the environment relative to the observer simplifies the calculation of important tasks such as
time-to-contact or focus-of-expansion. There are several methods to compute optical flow.
They have the common problem that their computational load is large. This is a severe
disadvantage for autonomous agents, whose computational power is restricted by energy,
size and weight. Here we show how the global regularization approach which is necessary
to solve for the ill-posed nature of computing optical flow, can be formulated as a local
feedback constraint, and implemented as a physical analog device that is computationally
efficient.
* correspondence to: alan@ini.phys.ethz.ch
2 Smooth Optical Flow
Horn and Schunck [2] defined optical flow in relation to the spatial and temporal changes
in image brightness. Their model assumes that the total image brightness E(x, y, t) does
not change over time:
$$\frac{d}{dt} E(x, y, t) = 0. \tag{1}$$
Expanding equation (1) according to the chain rule of differentiation leads to
$$F \equiv \frac{\partial E}{\partial x}\,u + \frac{\partial E}{\partial y}\,v + \frac{\partial E}{\partial t} = 0, \tag{2}$$
where u = dx / dt and v = dy / dt represent the two components of the local optical flow
vector.
Since there is one equation for two unknowns at each spatial location, the problem is
ill-posed, and there are an infinite number of possible solutions lying on the constraint
line for every location (x, y). However, by introducing an additional constraint the problem can be regularized and a unique solution can be found.
For example, Horn and Schunck require the optical flow field to be smooth. As a measure
of smoothness they choose the squares of the spatial derivatives of the flow vectors,
$$S^2 = \left(\frac{\partial u}{\partial x}\right)^2 + \left(\frac{\partial u}{\partial y}\right)^2 + \left(\frac{\partial v}{\partial x}\right)^2 + \left(\frac{\partial v}{\partial y}\right)^2. \tag{3}$$
One can also view this constraint as introducing a priori knowledge: the closer two points
are in the image space the more likely they belong to the projection of the same object. Under the assumption of rigid objects undergoing translational motion, this constraint implies
that the points have the same, or at least very similar motion vectors. This assumption is
obviously not valid at boundaries of moving objects, and so this algorithm fails to detect
motion discontinuities [3].
The computation of smooth optical flow can now be formulated as the minimization problem of a global energy functional,
$$\iint \left(F^2 + \lambda S^2\right)\, dx\, dy \;\rightarrow\; \min, \tag{4}$$
with F and S² as in equations (2) and (3) respectively. Thus, we exactly apply the approach
of standard regularization theory [4]:
Ax = y                    (y: data)
x = A⁻¹y                  (inverse problem, ill-posed)
‖Ax − y‖ + λ‖Px‖ = min    (regularization)
The regularization parameter, λ, controls the degree of smoothing of the solution and its closeness to the data. The norm, ‖·‖, is quadratic. A difference in our case is that A is not constant but depends on the data. However, if we consider motion on a discrete time-axis and look at snapshots rather than continuously changing images, A is quasi-stationary.¹ The energy functional (4) is convex and so, a simple numerical technique
like gradient descent would be able to find the global minimum. To compute optical flow
while preserving motion discontinuities one can modify the energy functional to include
a binary line process that prevents smoothing over discontinuities [4]. However, such a functional will not be convex. Gradient descent methods would probably fail to find the
global amongst all local minima and other methods have to be applied.
1 In the aVLSI implementation this requires a much shorter settling time constant for the network
than the brightness changes in the image.
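As a toy numerical illustration of this Tikhonov-style scheme (our sketch, with A a random nearly singular matrix and the stabilizer P taken as the identity):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(20, 20))
A[:, -1] = A[:, 0] + 1e-6          # make A nearly singular (ill-posed)
x_true = rng.normal(size=20)
y = A @ x_true + 0.01 * rng.normal(size=20)   # noisy data

for lam in (0.0, 1e-3, 1e-1):
    # closed-form minimizer of ||Ax - y||^2 + lam * ||x||^2
    x_hat = np.linalg.solve(A.T @ A + lam * np.eye(20), A.T @ y)
    print(f"lambda={lam:g}  error={np.linalg.norm(x_hat - x_true):.2f}")
# Without regularization the nearly singular direction amplifies the noise;
# a small lambda trades a little bias for a far better-conditioned solution.
```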
3 A Physical Analog Model
3.1 Continuous space
Standard regularization problems can be mapped onto electronic networks consisting of
conductances and capacitors [5]. Hutchinson et al. [6] showed how resistive networks can
be used to compute optical flow, and Poggio et al. [7] introduced electronic network solutions for second-order-derivative optic flow computation. However, these proposed network architectures all require complicated and sometimes negative conductances, although Harris et al. [8] independently outlined an approach similar to the one proposed in this paper. Furthermore, such networks were not implemented practically, whereas our implementation with constant nearest-neighbor conductances is intuitive and straightforward.
Consider equation (4):
$$L = L(u, v, \nabla u, \nabla v, x, y).$$
The Lagrange function L is sufficiently regular (L ∈ C²), and thus it follows from the calculus of variations that the solution of equation (4) also satisfies the linear Euler-Lagrange equations
$$\lambda \nabla^2 u - E_x (E_x u + E_y v + E_t) = 0$$
$$\lambda \nabla^2 v - E_y (E_x u + E_y v + E_t) = 0. \tag{5}$$
(5)
The Euler-Lagrange equations are only necessary conditions for equation (4). The sufficient condition for solutions of equations (5) to be a weak minimum is the strong Legendrecondition, that is
and
L'ilu'ilu > 0
L'ilv'ilv > 0,
which is easily shown to be true.
3.2 Discrete Space - Mapping to Resistive Network
By using a discrete five-point approximation of the Laplacian ∇² on a regular grid, equations (5) can be rewritten as
$$\lambda (u_{i+1,j} + u_{i-1,j} + u_{i,j+1} + u_{i,j-1} - 4u_{i,j}) - E_{x_{i,j}} (E_{x_{i,j}} u_{i,j} + E_{y_{i,j}} v_{i,j} + E_{t_{i,j}}) = 0$$
$$\lambda (v_{i+1,j} + v_{i-1,j} + v_{i,j+1} + v_{i,j-1} - 4v_{i,j}) - E_{y_{i,j}} (E_{x_{i,j}} u_{i,j} + E_{y_{i,j}} v_{i,j} + E_{t_{i,j}}) = 0, \tag{6}$$
where i and j are the indices for the sampling nodes. Consider a single node of the resistive
network shown in Figure 1:
Figure 1: Single node of a resistive network.
From Kirchhoff's law it follows that
$$C \frac{dV_{i,j}}{dt} = G(V_{i+1,j} + V_{i-1,j} + V_{i,j+1} + V_{i,j-1} - 4V_{i,j}) + I_{in_{i,j}} \tag{7}$$
709
Computation of Optical Flow in an Analog Network
where V_{i,j} represents the voltage and I_{in_{i,j}} the input current. G is the conductance between
two neighboring nodes and C the node capacitance.
In steady state, equation (7) becomes
$$G(V_{i+1,j} + V_{i-1,j} + V_{i,j+1} + V_{i,j-1} - 4V_{i,j}) + I_{in_{i,j}} = 0. \tag{8}$$
The analogy with equations (6) is obvious:
$$G \;\leftrightarrow\; \lambda$$
$$I_{u_{in\,i,j}} \;\leftrightarrow\; -E_{x_{i,j}} (E_{x_{i,j}} u_{i,j} + E_{y_{i,j}} v_{i,j} + E_{t_{i,j}})$$
$$I_{v_{in\,i,j}} \;\leftrightarrow\; -E_{y_{i,j}} (E_{x_{i,j}} u_{i,j} + E_{y_{i,j}} v_{i,j} + E_{t_{i,j}}) \tag{9}$$
To create the full system we use two parallel resistive networks in which the node voltages
u_{i,j} and v_{i,j} represent the two components of the optical flow vector u and v. The input currents I_{u_{in i,j}} and I_{v_{in i,j}} are computed by a negative recurrent feedback loop modulated by the input data, which are the spatial and temporal intensity gradients.
Notice that the input currents are proportional to the deviation from the local brightness constraint: the less the local optical flow solution fits the data, the higher the current I_{in i,j} will be to correct the solution, and vice versa.
Stability and convergence of the network are guaranteed by Maxwell's minimum power principle [4, 9].
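The relaxation that the network performs can be sketched in software. The following is a minimal Jacobi iteration of equations (6), written by us for illustration (periodic boundaries via np.roll are an assumption made for brevity):

```python
import numpy as np

def smooth_optical_flow(Ex, Ey, Et, lam=10.0, n_iters=500):
    """Relax the discrete Euler-Lagrange equations (6) to steady state.

    Ex, Ey, Et: 2D arrays of spatial and temporal brightness gradients.
    lam: smoothness weight, playing the role of the network conductance G.
    """
    u = np.zeros_like(Ex)
    v = np.zeros_like(Ex)
    neighbors = lambda f: (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                           np.roll(f, 1, 1) + np.roll(f, -1, 1))
    for _ in range(n_iters):
        u_bar = neighbors(u) / 4.0    # resistive coupling to the neighbors
        v_bar = neighbors(v) / 4.0
        # solving eq. (6) per node with the neighbors held fixed gives the
        # classical Horn-Schunck update:
        F = (Ex * u_bar + Ey * v_bar + Et) / (4.0 * lam + Ex**2 + Ey**2)
        u = u_bar - Ex * F
        v = v_bar - Ey * F
    return u, v
```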
4 The Smooth Optical Flow Chip
4.1 Implementation
Figure 2: A single motion cell within the three-layer network. For simplicity only one
resistive network is shown.
The circuitry consists of three functional layers (Figure 2). The input layer includes an
array of adaptive photoreceptors [10] and provides the derivatives of the image brightness
to the second layer. The spatial gradients are the first-order linear approximation obtained
by subtracting the two neighboring photoreceptor outputs. The second layer computes the
input current to the third layer according to equations (9). Finally these currents are fed
into the two resistive networks that report the optical flow components.
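A software analogue of this first layer (ours; the chip uses neighboring-photoreceptor differences, while here simple finite differences on two consecutive frames stand in for them):

```python
import numpy as np

def brightness_gradients(frame0, frame1):
    """First-order estimates of Ex, Ey, Et from two consecutive frames."""
    E = 0.5 * (frame0 + frame1)                    # brightness between frames
    Ex = 0.5 * (np.roll(E, -1, axis=1) - np.roll(E, 1, axis=1))
    Ey = 0.5 * (np.roll(E, -1, axis=0) - np.roll(E, 1, axis=0))
    Et = frame1 - frame0                           # temporal derivative
    return Ex, Ey, Et
```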
The schematics of the core of a single motion cell are drawn in Figure 3. The photoreceptor
and the temporal differentiator are not shown, nor is the other half of the circuitry that computes the y-component of the flow vector.
A few remarks are appropriate here: First, the two components of the optical flow vector
have to be able to take on positive and negative values with respect to some reference potential. Therefore, a symmetrical circuit scheme is applied where the positive and negative
(reference voltage) values are carried on separate signal lines. Thus, the actual value is
encoded as the difference of the two potentials.
Figure 3: Cell core schematics; only the circuitry related to the computation of the
x-component of the flow vector is shown.
Second, the limited linear range of the Gilbert multipliers leads to a narrow span of flow velocities that can be computed reliably. However, the tuning can be such that the operational
range is either at high or very low velocities. Newer implementations are using modified
multipliers with a larger linear range.
Third, consider a single motion cell (Figure 2). In principle, this cell would be able to satisfy the local constraint perfectly. In practice (see Figure 3), the finite output impedance of
the p-type Gilbert multiplier slightly degrades this ideal solution by imposing an effective
conductance G_load. Thus, a constant voltage on the capacitor representing a non-zero motion signal requires a net output current of the multiplier to maintain it. This requirement
has two interesting consequences:
i) The reported optical flow is dependent on the spatial gradients (contrast). A single uncoupled cell according to Figure 2 has a steady state solution with
$$u_{i,j} \approx \frac{-E_{t_{i,j}} E_{x_{i,j}}}{G_{load} + E_{x_{i,j}}^2 + E_{y_{i,j}}^2} \qquad \text{and} \qquad v_{i,j} \approx \frac{-E_{t_{i,j}} E_{y_{i,j}}}{G_{load} + E_{x_{i,j}}^2 + E_{y_{i,j}}^2},$$
respectively. For the same object speed, the chip reports higher velocity signals for higher
spatial gradients. Preferably, G_load should be as low as possible to minimize its influence
on the solution.
ii) On the other hand, the locally ill-posed problem is now well-posed because G_load imposes a second constraint. Thus, the chip behaves sensibly in the case of low-contrast input (small gradients), reporting zero motion where otherwise unreliable high values
would occur. This is convenient because the signal-to-noise ratio at low contrast is very
poor. Furthermore, a single cell is forced to report the velocity on the constraint line with
smallest absolute value, which is normal to the spatial gradient. That means that the chip
reports normal flow when there is no neighbor connection. Since there is a trade-off between the robustness of the optical flow computation and a low conductance G_load, the follower-connected transconductance amplifier in our implementation allows us to control G_load above its small intrinsic value.
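Consequence i) can be made concrete with a small numerical sketch (ours) of the uncoupled steady-state solution above, showing how the reported speed depends on contrast for a fixed true velocity:

```python
def uncoupled_cell_flow(Ex, Ey, Et, G_load=0.05):
    """Steady-state flow of a single uncoupled motion cell."""
    denom = G_load + Ex**2 + Ey**2
    return -Et * Ex / denom, -Et * Ey / denom

# A vertical edge moving right at unit speed, so Et = -Ex * u_true:
for contrast in (0.1, 0.5, 1.0):
    Ex, Ey, Et = contrast, 0.0, -contrast * 1.0
    u, v = uncoupled_cell_flow(Ex, Ey, Et)
    print(f"contrast={contrast:.1f}  reported u={u:.3f}")
# The reported u approaches the true speed only once Ex**2 >> G_load.
```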
4.2 Results
The results reported below were obtained from a MOSIS tinychip containing a 7×7 array of motion cells, each 325×325 λ² in size. The chip was fabricated in 1.2 μm technology at AMI.
Figure 4: Smooth optical flow response of the chip to a left-upwards moving edge.
a: photoreceptor output, the arrow indicates the actual motion direction. b: weak coupling
(small conductance G). c: strong coupling.
Figure 5: Response of the optical flow chip to a plaid stimulus moving towards the left:
a: photoreceptor output; b shows the normal flow computation with disabled coupling
between the motion cells in the network while in c the coupling strength is at maximum.
The chip is able to compute smooth optical flow in a qualitative manner. The smoothness
can be set by adjusting the coupling conductances (Figure 4). Figure 5b presents the normal flow computation that occurs when the coupling between the motion cells is disabled.
The limited resolution of this prototype chip together with the small size of the stimulus
leads to a noisy response. However, it is clear that the chip perceives the two gratings as
separate moving objects with motion normal to their edge orientation. When the network
conductance is set very high, the chip performs a collective computation solving the aperture problem under the assumption of single object motion. Figure 5c shows how the chip
can compute the correct motion of a plaid pattern.
5 Conclusion
We have presented here an aVLSI implementation of a network that computes 2D smooth
optical flow. The strength of the resistive coupling can be varied continuously to obtain
different degrees of smoothing, from a purely local up to a single global motion signal. The
chip ideally computes smooth optical flow in the classical definition of Horn and Schunck.
Instead of using negative and complex conductances we implemented a network solution
where each motion cell is performing a local constraint satisfaction task in a recurrent
negative feedback loop.
It is significant that the solution of a global energy minimization task can be achieved
within a network of local constraint solving cells that do not have explicit access to the
global computational goal.
Acknowledgments
This article is dedicated to Misha Mahowald. We would like to thank Eric Vittoz, Jörg Kramer, Giacomo Indiveri and Tobi Delbrück for fruitful discussions. We thank the Swiss
National Foundation for supporting this work and MOSIS for chip fabrication.
References
[1] J. Tanner and C.A. Mead. An integrated analog optical motion sensor. In S.-Y. Kung, R. Owen, and G. Nash, editors, VLSI Signal Processing, 2, page 59 ff. IEEE Press, 1986.
[2] B.K. Horn and B.G. Schunck. Determining optical flow. Artificial Intelligence, 17:185-203, 1981.
[3] A. Yuille. Energy functions for early vision and analog networks. Biological Cybernetics, 61:115-123, 1989.
[4] T. Poggio, V. Torre, and C. Koch. Computational vision and regularization theory. Nature, 317(26):314-319, September 1985.
[5] B.K. Horn. Parallel networks for machine vision. Technical Report 1071, MIT AI Lab, December 1988.
[6] J. Hutchinson, C. Koch, J. Luo, and C. Mead. Computing motion using analog and binary resistive networks. Computer, 21:52-64, March 1988.
[7] T. Poggio, W. Yang, and V. Torre. Optical flow: Computational properties and networks, biological and analog. The Computing Neuron, pages 355-370, 1989.
[8] J.G. Harris, C. Koch, E. Staats, and J. Luo. Analog hardware for detecting discontinuities in early vision. Int. Journal of Computer Vision, 4:211-223, 1990.
[9] J. Wyatt. Little-known properties of resistive grids that are useful in analog vision chip designs. In C. Koch and H. Li, editors, Vision Chips: Implementing Vision Algorithms with Analog VLSI Circuits, pages 72-89. IEEE Computer Society Press, 1995.
[10] S.-C. Liu. Silicon retina with adaptive filtering properties. In Advances in Neural Information Processing Systems 10, November 1997.
608 | 1,556 | Distributional Population Codes and
Multiple Motion Models
Richard S. Zemel
University of Arizona
Peter Dayan
Gatsby Computational Neuroscience Unit
zemel@u.arizona.edu
dayan@gatsby.ucl.ac.uk
Abstract
Most theoretical and empirical studies of population codes make
the assumption that underlying neuronal activities is a unique and
unambiguous value of an encoded quantity. However, population
activities can contain additional information about such things as
multiple values of or uncertainty about the quantity. We have previously suggested a method to recover extra information by treating the activities of the population of cells as coding for a complete distribution over the coded quantity rather than just a single
value. We now show how this approach bears on psychophysical and neurophysiological studies of population codes for motion direction in tasks involving transparent motion stimuli. We
show that, unlike standard approaches, it is able to recover multiple motions from population responses, and also that its output
is consistent with both correct and erroneous human performance
on psychophysical tasks.
A population code can be defined as a set of units whose activities collectively
encode some underlying variable (or variables). The standard view is that population codes are useful for accurately encoding the underlying variable when the
individual units are noisy. Current statistical approaches to interpreting population activity reflect this view, in that they determine the optimal single value that
explains the observed activity pattern given a particular model of the noise (and
possibly a loss function).
In our work, we have pursued an alternative hypothesis, that the population encodes additional information about the underlying variable, including multiple
values and uncertainty. The Distributional Population Coding (DPC) framework
finds the best probability distribution across values that fits the population activity
(Zemel, Dayan, & Pouget, 1998).
The DPC framework is appealing since it makes clear how extra information can
be conveyed in a population code. In this paper, we use it to address a particular body of experimental data on transparent motion perception, due to Treue and colleagues (Hol & Treue, 1997; Rauber & Treue, 1997). These transparent motion experiments provide an ideal test of the DPC framework, in that the neurophysiological data reveal how the population responds to multiple values in the stimuli, and the psychophysical data describe how these values are actually decoded, putatively from the population response. We investigate how standard methods fare on these data, and compare their performance to that of DPC.

Figure 1: Each of the four plots depicts a single MT cell response (spikes per second) to a transparent motion stimulus of a fixed directional difference (Δθ) between the two motion directions; the panels show Δθ = 30°, 60°, 90° and 120°. The x-axis gives the average direction of stimulus motion relative to the cell's preferred direction (0°). From Treue, personal communication.
1 RESPONSES TO MULTIPLE MOTIONS
Many investigators have examined neural and behavioral responses to stimuli
composed of two patterns sliding across each other. These often create the impression of two separate surfaces moving in different directions. The general neurophysiological finding is that an MT cell's response to these stimuli can be characterized as the average of its responses to the individual components (van Wezel
et al., 1996; Recanzone et al., 1997). As an example, Figure 1 shows data obtained
from single-cell recordings in MT to random dot patterns consisting of two distinct
motion directions (Treue, personal communication). Each plot is for a different relative angle (Δθ) between the two directions. A plot can equivalently be viewed as the response of a population of MT cells having different preferred directions to a single presentation of a stimulus containing two directions. If Δθ is large, the activity profile is bimodal, but as the directional difference shrinks, the profile becomes unimodal. The population response to a Δθ = 30° motion stimulus is merely a wider version of the response to a stimulus containing a single direction of motion. However, this transition from bimodal to unimodal profiles in MT does not apparently correspond to subjects' percepts; subjects can reliably perceive both motions in superimposed transparent random patterns down to an angle of 10° (Mather & Moulden, 1983). If these MT activities play a determining role in motion perception, the challenge is to understand how the visual system can extract
both motions from such unimodal (and bimodal) response profiles.

Figure 2: (A) The standard Bayesian population coding framework assumes that a single value is encoded in a set of noisy neural activities. (B) The distributional population coding framework shows how a distribution over θ can be encoded and then decoded from noisy population activities. From Zemel et al. (1998).
2 ENCODING & DECODING
Statistical population code decoding methods begin with the knowledge, collected over many experimental trials, of the tuning function f_i(θ) for each cell i, determined using simple stimuli (e.g., ones containing uni-directional motion). Figure 2A cartoons the framework used for standard decoding. Starting on the bottom left, encoding consists of taking a value θ to be coded and representing it by the noisy activities r_i of the elements of a population code. In the simulations described here, we have used a population of 200 model MT cells, with tuning functions defined by random sampling within physiologically-determined ranges for the parameters: baseline b, amplitude a and width σ. The encoding model comes from the MT data: for a single motion, ⟨r_i | θ⟩ = f_i(θ) = b_i + a_i exp[−(θ − θ_i)²/2σ_i²], while for two motions, ⟨r_i | θ₁, θ₂⟩ = ½[f_i(θ₁) + f_i(θ₂)]. The noise is taken to be independent and Poisson.
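A sketch of this encoding model (ours; the parameter ranges below are assumptions standing in for the physiologically-determined ones):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200                                      # model MT cells
theta_pref = rng.uniform(-180, 180, N)       # preferred directions (deg)
b = rng.uniform(0.5, 2.0, N)                 # baselines
a = rng.uniform(10.0, 40.0, N)               # amplitudes
sigma = rng.uniform(20.0, 40.0, N)           # tuning widths (deg)

def f(theta):
    """Tuning curves f_i(theta), with circular distance in degrees."""
    d = (theta - theta_pref + 180.0) % 360.0 - 180.0
    return b + a * np.exp(-d**2 / (2.0 * sigma**2))

def respond(thetas):
    """Poisson population response; two motions give the averaged rates."""
    mean = np.mean([f(t) for t in thetas], axis=0)
    return rng.poisson(mean)

r = respond([-60.0, 60.0])   # transparent stimulus, delta-theta = 120 deg
```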
Standard Bayesian decoding starts with the activities r = {r_i} and generates a distribution P[θ|r]. Under the model with Poisson noise,
$$P[\theta \mid \mathbf{r}] \propto P[\theta] \prod_i e^{-f_i(\theta)} f_i(\theta)^{r_i}.$$
This method thus provides a multiplicative kernel density estimate, tending to produce a sharp distribution for a single motion direction θ. A single estimate θ̂ can be extracted from P[θ|r] using a loss function.
For this method to decode successfully when there are two motions in the input
(θ₁ and θ₂), the extracted distribution must at least have two modes. Standard Bayesian decoding fails to satisfy this requirement. First, if the response profile r is unimodal (cf. the 30° plot in Figure 1), convolution with unimodal kernels {log f_i(θ)} produces a unimodal log P[θ|r], peaked about the average of the two directions. The additive kernel density estimate, an alternative distributional decoding method proposed by Anderson (1995), suffers from the same problem, and also fails to be adequately sharp for single value inputs.
Surprisingly, the standard Bayesian decoding method also fails on bimodal response profiles. If the baseline response b_i = 0, then P[θ|r] is Gaussian, with mean Σ_i r_i θ_i / Σ_i r_i and variance 1/Σ_i (r_i/σ_i²) (Snippe, 1996; Zemel et al., 1998). If b_i > 0, then, for the extracted distribution to have two modes in the appropriate positions, log[P[θ₁|r]/P[θ₂|r]] must be small. However, the variance of this quantity is Σ_i ⟨r_i⟩ (log[f_i(θ₁)/f_i(θ₂)])², which is much greater than 0 unless the tuning curves are so flat as to be able to convey only little information about the stimuli. Intuitively, the noise in the rates causes Σ_i r_i log f_i(θ) to be greater around one of the two values, and exponentiating to form P[θ|r] selects out this one value. Thus
the standard method can only extract one of the two motion components from the
population responses to transparent motion.
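A compact sketch of this failure mode (our illustration, using a homogeneous version of the encoding model above):

```python
import numpy as np

rng = np.random.default_rng(1)
theta_grid = np.arange(-180.0, 180.0)               # candidate directions
theta_pref = np.linspace(-180.0, 180.0, 200, endpoint=False)

def f(theta):
    """Homogeneous tuning curves; rows index theta, columns index cells."""
    theta = np.atleast_1d(theta)
    d = (theta[:, None] - theta_pref[None, :] + 180.0) % 360.0 - 180.0
    return 0.5 + 30.0 * np.exp(-d**2 / (2.0 * 30.0**2))

# response to a transparent stimulus: averaged rates, Poisson noise
r = rng.poisson(0.5 * (f(-60.0)[0] + f(60.0)[0]))

# standard Bayesian decoding under Poisson noise (flat prior over theta)
log_post = r @ np.log(f(theta_grid)).T - f(theta_grid).sum(axis=1)
post = np.exp(log_post - log_post.max())
post /= post.sum()
print(theta_grid[np.argmax(post)])
# The posterior collapses onto one component (near -60 or +60), never both.
```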
The distributional population coding method (Figure 2B) extends the standard encoding model to allow r to depend on a general P[θ]:
$$\langle r_i \rangle = \int_\theta P[\theta]\, f_i(\theta)\, d\theta \tag{1}$$
Bayesian decoding takes the observed activities r and produces probability distributions over probability distributions over θ, P[P(θ)|r]. For simplicity, we decode using an approximate form of maximum likelihood in distributions over θ, finding the P^r(θ) that maximizes L[P(θ)|r] ≈ Σ_i r_i log[f_i(θ) ∗ P(θ)] − α g[P(θ)], where the smoothness term g[·] acts as a regularizer.
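A minimal sketch of this approximate maximum-likelihood fit (ours; it assumes a squared-difference smoothness penalty for g and keeps P^r(θ) a proper distribution through a softmax parameterization):

```python
import numpy as np

def dpc_decode(r, F, alpha=1.0, lr=0.05, n_steps=2000):
    """Gradient ascent on L = sum_i r_i log((F P)_i) - alpha * g(P).

    F[i, k] = f_i(theta_k) * dtheta on a direction grid; g penalizes
    squared differences between neighboring grid values of P.
    """
    z = np.zeros(F.shape[1])                  # unconstrained parameters
    for _ in range(n_steps):
        P = np.exp(z - z.max()); P /= P.sum()
        rates = F @ P                         # predicted <r_i>
        grad_P = F.T @ (r / rates)            # gradient of the data term
        grad_P -= 2.0 * alpha * (2.0 * P - np.roll(P, 1) - np.roll(P, -1))
        z += lr * P * (grad_P - P @ grad_P)   # chain rule through softmax
    P = np.exp(z - z.max())
    return P / P.sum()
```

With F assembled from the tuning curves on a direction grid and r a simulated transparent-stimulus response, the fitted distribution should show two modes for large angular separations and blur into a single broad mode as the separation shrinks, in line with Figure 3.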
The distributional encoding operation in Equation 1 is quite straightforward - by
design, since this represents an assumption about what neural processing prior to
(in this case) MT performs. However, the distributional decoding operation that
we have used (Zemel et al., 1998) involves complicated and non-neural operations. The idea is to understand what information in principle may be conveyed
by a population code under this interpretation, and then to judge actual neural
operations in the light of this theoretical optimum. DPC is a statistical cousin of
so-called line-element models, which attempt to account for subjects' performance
in cases like transparency using the output of some fixed number of direction-selective mechanisms (Williams et al., 1991).
3 DECODING MULTIPLE MOTIONS
We have applied our model to simulated MT response patterns r generated via the DPC encoding model (Equation 1). For multiple motion stimuli, with P(θ) = (δ(θ − θ₁) + δ(θ − θ₂))/2, this encoding model produces the observed neurophysiological response: each unit's expected activity is the average of its responses to the component motions. For bimodal response patterns, DPC matches the generating distribution (Figure 3). For unimodal response patterns, such as those generated by double motion stimuli with Δθ = 30°, DPC also consistently recovers the generating distribution. The bimodality of the reconstructed distribution begins to break down around Δθ = 10°, which is also the point at which subjects are unable to distinguish two motions from a single broader band of motion directions (Mather & Moulden, 1983).

Figure 3: (A) On a single simulated trial, the population response forms a bimodal activity profile when Δθ = 120°. (B) The reconstructed (darker) distribution closely matches the true input distribution for this trial. (C) As Δθ → 10°, the population response is no longer bimodal, instead has a noisy unimodal profile, and (D) the reconstructed distribution no longer has two clear modes.

It has been reported (Treue, personal communication) that for angles Δθ < 10°, subjects can tell that all points are not moving in parallel, but are uncertain whether they are moving in two discrete directions or within a directional band. Our model
qualitatively captures this uncertainty, reconstructing a broad distribution with
two small peaks for directional differences between 7° and 10°.
DPC also matches psychophysical performance on metameric stimuli. Rauber and
Treue (1997) asked human subjects to report the directions in moving dot patterns
consisting of 2, 3 or 5 directions of motion. The motion directions were −40° and +40°; −50°, 0° and +50°; and −50°, −30°, 0°, +30°, and +50°, respectively, but the proportions of dots moving in each direction were adjusted so that the population responses produced by an encoding model similar to Equation 1 would all be the same. Subjects reported the same two motion directions, at −40° and +40°, to all
three types of stimuli.
DPC, like any reasonably deterministic decoding model, takes these (essentially
identical) patterns of activity and, metamerically, reports the same answer for each
case. Unlike most models, its answer (that there are two motions at roughly ±40°) matches human responses. The fact of metamerization is not due to any
kind of prior in the model as to the number of directions to be recovered. However, that the actual report in each case includes just two motions (when clearly
three or five motions would be equally consistent with the input) is a consequence
of the smoothness prior. We can go further with DPC and predict how changing the proportion of dots moving in the central of the three directions would lead to different percepts, from a single motion to two as this proportion decreases.
We can further evaluate the performance of DPC by comparing the quality of its reconstruction to that obtained by fitting the correct model of the input distribution, a mixture of delta functions. We simulated MT responses to motion stimuli composed of two evenly-weighted directions, with 100 examples for each value of Δθ in a range from 5° to 60°. We fit a mixture of two delta functions to each population response, and measured the average relative error in direction judgments based on this fitted distribution versus the two true directions, θ₁ and θ₂, on that example t:

(2)

We then applied the DPC model to the same population codes. To measure the average error, we first fit the general distribution P^r(θ) produced by DPC with a pair of equal-weighted Gaussians, and determined θ̂₁ and θ̂₂ from the appropriate mean and variance. As can be seen in Figure 4, the DPC model, which only has a general smoothness prior over the form of the input distribution, preserves the information in the observed rates nearly as well as the model with the correct prior.

Figure 4: The average relative error E in direction judgments (Equation 2) for the DPC model (top curve) and for a model with the correct prior for this particular input set.
4 CONCLUSIONS
Transparent motion provides an ideal test of distributional population coding,
since the encoding model is determined by neural activity and the decoding model
by the behavioral data. Two existing kernel density estimate models, involving additive (Anderson, 1995) and multiplicative (standard Bayesian decoding) combination, perform poorly in this paradigm. DPC, a model in which neuronal responses
and the animal's judgments are treated as being sensitive to the entire distribution of an encoded value, has been shown to be consistent with both single-cell
responses and behavioral decisions, even matching subjects' threshold behavior.
We are currently applying this same model to several other motion experiments,
including one in which subjects had to determine whether a motion stimulus consisted of a number of discrete directions or a uniform distribution (Williams et
al., 1991). We are investigating whether our model can explain the nonmonotonic
relationship between the number of directions and the judgments. We have also
applied DPC to a notorious puzzle for population coding: that single MT cells are
just as accurate as the whole monkey; one cell's output could directly support inference of the same quality as the monkey's. Our approach provides an alternative explanation for part of this apparent inefficiency to that of the noisy pooling model of Shadlen et al. (1996). Finally, experiments showing the effect of target uncertainty on population responses (Basso & Wurtz, 1998; Bastian et al., 1998) are also handled naturally by the DPC approach.
The current model is intended to describe the information available at one stage
in the processing stream. It does not address the precise mechanism of motion
encoding, i.e., how responses in MT arise. We also have not considered the neural
decoding and decision mechanisms. These could likely involve a layer of units that
reaches decisions through a pattern of feedforward and lateral connections, as in
the model proposed by Grunewald (1996) for the detection of transparent motion.
One critical issue that remains is normalization. It is not clear how to distinguish
ambiguity about a single value for the encoded variable from the existence of multiple values of that variable (as in transparency for motion). Various factors are
likely to be important, including the degree of separation of the modes and also
prior expectations about the possibility of equivalents of transparency.
Acknowledgements: This work was funded by ONR Young Investigator Award N00014-98-1-0509 to RZ, and NIMH grant 1R29MH5541-01, and grants from the Surdna Foundation and the Gatsby Charitable Foundation to PD. We thank Stefan Treue for providing us with the data plot and for informative discussions of his experiments; Alexandre Pouget and Charlie Anderson for useful discussions of distributed coding and the standard model; and Zoubin
Ghahramani and Geoff Hinton for helpful conversations about reconstruction in the log probability domain.
References
[1] Anderson, C. H. (1995). Unifying perspectives on neuronal codes and processing. In XIX International Workshop on Condensed Matter Theories. Caracas, Venezuela.
[2] Basso, M. A. & Wurtz, R. H. (1998). Modulation of neuronal activity in superior colliculus by changes in target probability. Journal of Neuroscience, 18(18), 7519-7534.
[3] Bastian, A., Riehle, A., Erlhagen, W., & Schoner, G. (1998). Prior information preshapes the population representation of movement direction in motor cortex. Neuroreport, 9(2), 315-319.
[4] Britten, K. H., Shadlen, M. N., Newsome, W. T., & Movshon, J. A. (1992). The analysis of visual motion: A comparison of neuronal and psychophysical performance. Journal of Neuroscience, 12(12), 4745-4765.
[5] Grunewald, A. (1996). A model of transparent motion and non-transparent motion aftereffects. In D. S. Touretzky, M. C. Mozer, & M. E. Hasselmo (Eds.), Advances in Neural Information Processing Systems 8 (pp. 837-843). Cambridge, MA: MIT Press.
[6] Hol, K. & Treue, S. (1997). Direction-selective responses in the superior temporal sulcus to transparent patterns moving at acute angles. Society for Neuroscience Abstracts 23 (p. 179:11).
[7] Mather, G. & Moulden, B. (1983). Thresholds for movement direction: two directions are less detectable than one. Quarterly Journal of Experimental Psychology, 35, 513-518.
[8] Rauber, H. J. & Treue, S. (1997). Recovering the directions of visual motion in transparent patterns. Society for Neuroscience Abstracts 23 (p. 179:10).
[9] Recanzone, G. H., Wurtz, R. H., & Schwarz, U. (1997). Responses of MT and MST neurons to one and two moving objects in the receptive field. Journal of Neurophysiology, 78(6), 2904-2915.
[10] Shadlen, M. N., Britten, K. H., Newsome, W. T., & Movshon, J. A. (1996). A computational analysis of the relationship between neuronal and behavioral responses to visual motion. Journal of Neuroscience, 16(4), 1486-1510.
[11] Snippe, H. P. (1996). Theoretical considerations for the analysis of population coding in motor cortex. Neural Computation, 8(3), 29-37.
[12] van Wezel, R. J., Lankheet, M. J., Verstraten, F. A., Maree, A. F., & van de Grind, W. A. (1996). Responses of complex cells in area 17 of the cat to bi-vectorial transparent motion. Vision Research, 36(18), 2805-2813.
[13] Williams, D., Tweten, S., & Sekuler, R. (1991). Using metamers to explore motion perception. Vision Research, 31(2), 275-286.
[14] Zemel, R. S., Dayan, P., & Pouget, A. (1998). Probabilistic interpretation of population codes. Neural Computation, 10, 403-430.
PART III
THEORY
609 | 1,557 | Computation of Smooth Optical Flow in a
Feedback Connected Analog Network
Alan Stocker *
Institute of Neuroinforrnatics
University and ETH Zi.irich
Winterthurerstrasse 190
8057 Zi.irich, Switzerland
Rodney Douglas
Institute of Neuroinforrnatics
University and ETH Zi.irich
Winterthurerstrasse 190
8057 Zi.irich, Switzerland
Abstract
In 1986, Tanner and Mead [1] implemented an interesting constraint satisfaction circuit for global motion sensing in a VLSI. We report here a
new and improved aVLSI implementation that provides smooth optical
flow as well as global motion in a two dimensional visual field. The computation of optical flow is an ill-posed problem, which expresses itself as
the aperture problem. However, the optical flow can be estimated by the
use of regularization methods, in which additional constraints are introduced in terms of a global energy functional that must be minimized . We
show how the algorithmic constraints of Hom and Schunck [2] on computing smooth optical flow can be mapped onto the physical constraints
of an equivalent electronic network.
1 Motivation
The perception of apparent motion is crucial for navigation. Knowledge of local motion of
the environment relative to the observer simplifies the calculation of important tasks such as
time-to-contact or focus-of-expansion. There are several methods to compute optical flow.
They have the common problem that their computational load is large. This is a severe
disadvantage for autonomous agents, whose computational power is restricted by energy,
size and weight. Here we show how the global regularization approach which is necessary
to solve for the ill-posed nature of computing optical flow, can be formulated as a local
feedback constraint, and implemented as a physical analog device that is computationally
efficient.
* correspondence to: alan@ini.phys.ethz.ch
707
Computation of Optical Flow in an Analog Network
2 Smooth Optical Flow
Horn and Schunck [2] defined optical flow in relation to the spatial and temporal changes
in image brightness. Their model assumes that the total image brightness E(x, y, t) does
not change over time;
d
dt E(x, y, t)
= O.
(I)
Expanding equation (1) according to the chain rule of differentiation leads to
0
0
F == ox E(x, y, t)u + oy E(x, y, t)v + 8t E(x, y, t) = 0,
o
(2)
where u = dx / dt and v = dy / dt represent the two components of the local optical flow
vector.
Since there is one equation for two unknowns at each spatial location, the problem is
ill-posed, and there are an infinite number of possible solutions lying on the constraint
line for every location (x, y). However, by introducing an additional constraint the problem can be regularized and a unique solution can be found.
For example, Horn and Schunck require the optical flow field to be smooth. As a measure
of smoothness they choose the squares of of the spatial derivatives of the flow vectors,
(3)
One can also view this constraint as introducing a priori knowledge: the closer two points
are in the image space the more likely they belong to the projection of the same object. Under the assumption of rigid objects undergoing translational motion, this constraint implies
that the points have the same, or at least very similar motion vectors. This assumption is
obviously not valid at boundaries of moving objects, and so this algorithm fails to detect
motion discontinuities [3].
The computation of smooth optical flow can now be formulated as the minimization problem of a global energy functional,
JJ~
dx dy
---7
min
(4)
L
with F and 8 2 as in equation (2) and (3) respectively. Thus, we exactly apply the approach
of standard regularization theory [4]:
Ax=y
x = A -Iy
II Ax -
y
II +.x II P 11= min
y: data
inverse problem, ill-posed
regularization
The regularization parameter, .x, controls the degree of smoothing of the solution and its
closeness to the data. The norm, II . II, is quadratic. A difference in our case is that A
is not constant but depends on the data. However, if we consider motion on a discrete
time-axis and look at snapshots rather than continuously changing images, A is quasistationary.1 The energy functional (4) is convex and so, a simple numerical technique
like gradient descent would be able to find the global minimum. To compute optical flow
while preserving motion discontinuities one can modify the energy functional to include
a binary line process that prevents smoothing over discontinuities [4]. However, such an
functional will not be convex. Gradient descent methods would probably fail to find the
global amongst all local minima and other methods have to be applied.
1 In the aVLSI implementation this requires a much shorter settling time constant for the network
than the brightness changes in the image.
708
3
A. Stocker and R. Doug/as
A Physical Analog Model
3.1
Continuous space
Standard regularization problems can be mapped onto electronic networks consisting of
conductances and capacitors [5]. Hutchinson et al. [6] showed how resistive networks can
be used to compute optical flow and Poggio et al. [7] introduced electronic network solutions for second-order-derivative optic flow computation. However, these proposed network architectures all require complicated and sometimes negative conductances although
Harris et al. [8] outlined a similar approach as proposed in this paper independently. Furthennore, such networks were not implemented practically, whereas our implementation
with constant nearest neighbor conductances is intuitive and straightforward.
Consider equation (4):
L = L(u, v, '\lu, '\lv, x, y).
The Lagrange function L is sufficiently regular (L E C 2 ), and thus it follows from calculus of variation that the solution of equation (4) also suffices the linear Euler-Lagrange
equations
A'\l2u - Ex (Exu
o
A'\l2v -
O.
+ Eyv + E t )
Ey(Exu + Eyv + E t )
(5)
The Euler-Lagrange equations are only necessary conditions for equation (4). The sufficient condition for solutions of equations (5) to be a weak minimum is the strong Legendrecondition, that is
and
L'ilu'ilu > 0
L'ilv'ilv > 0,
which is easily shown to be true.
3.2
Discrete Space - Mapping to Resistive Network
By using a discrete five-point approximation of the Laplacian \7 2 on a regular grid, equations (5) can be rewritten as
A(Ui+1 )'
,
+ Ui-1 )' + Ui )'+1 + Ui )-1
,
,
,
-
4Ui )') - Ex, ,(Ex l ,,Ui)'
J'
,
t,]
+ E y'
' .]
Vi)'
'
+ Et
,) =0 (6)
1,J
A(Vi+1)'
+Vi - 1)'
+Vi)'+1
+Vi)'-1
- 4Vi)'
+Ey1' ,1
,Vi)'
+Et,1,],)=0
,
,
,
,
, ) -Ey'1 , )(Ex,
' . J,Ui)'
'
'
where i and j are the indices for the sampling nodes. Consider a single node of the resistive
network shown in Figure 1:
Figure 1: Single node of a resistive network.
From Kirchhoff's law it follows that
dV,? ,
C d~') = G(Vi+1 ,j
+ Vi-I ,j + Vi,HI + Vi,j-1 - 4Vi,j) + lini.j
(7)
709
Computation of Optical Flow in an Analog Network
where Vi ,j represents the voltage and l in', i the input current. G is the conductance between
two neighboring nodes and C the node capacitance.
In steady state, equation (7) becomes
G(Vi+I ,j
+ Vi - I ,j + Vi, j+! + Vi ,j- I
- 4Vi ,j)
+ lini"
= O.
(8)
The analogy with equations (6) is obvious:
G
~
.A
lUin t??
t]
~
-Ex? . (E x t?t ) UiJ'
+Ey t,, ]
ViJ'
+Et 1 , ) ? )
'
'
lVin t", }
~
-Ey . , (Ex 1", ) UiJ,
+Ey"1 , ) Vi),+E
t I , J, )
'
'
t. )
t ,}
(9)
To create the full system we use two parallel resistive networks in which the node voltages
Ui, j and Vi,j represent the two components of the optical flow vector U and v . The input
currents lUin i,i and lVini" are computed by a negative recurrentfeedback loop modulated
by the input data, which are the spatial and temporal intensity gradients.
Notice that the input currents are proportional to the deviation of the local brightness constraint: the less the local optical flow solution fits the data the higher the current lini.j will
be to correct the solution and vice versa.
Stability and convergence of the network are guaranteed by Maxwell 's minimum power
principle [4, 9].
4 The Smooth Optical Flow Chip
4.1
Implementation
-CP\~}1J?
~tf)~
! I ~
Figure 2: A single motion cell within the three layer network. For simplicity only one
resistive network is shown.
The circuitry consists of three functional layers (Figure 2). The input layer includes an
array of adaptive photoreceptors [10] and provides the derivatives of the image brightness
to the second layer, The spatial gradients are the first-order linear approximation obtained
by subtracting the two neighboring photoreceptor outputs. The second layer computes the
input current to the third layer according to equations (9). Finally these currents are fed
into the two resistive networks that report the optical flow components.
The schematics of the core of a single motion cell are drawn in Figure 3. The photoreceptor
and the temporal differentiator are not shown as well as the other half of the circuitry that
computes the y-component of the flow vector.
A. Stocker and R. Doug/as
710
A few remarks are appropriate here: First, the two components of the optical flow vector
have to be able to take on positive and negative values with respect to some reference potential. Therefore, a symmetrical circuit scheme is applied where the positive and negative
(reference voltage) values are carried on separate signal lines. Thus, the actual value is
encoded as the difference of the two potentials.
Ex (Ex Vx + E)
t
~."
.... ....... "....... ......... :
"
"
Exl
l
_f-VViBias
! v+
I:???????? . ? . ?. ? ? ?. ? ?. ?:
temporal
differentiator
X
DiffBias
1
OpBias
Figure 3: Cell core schematics; only the circuitry related to the computation of the
x-component of the flow vector is shown.
Second, the limited linear range of the Gilbert multipliers leads to a narrow span of flow velocities that can be computed reliably. However, the tuning can be such that the operational
range is either at high or very low velocities. Newer implementations are using modified
multipliers with a larger linear range.
Third, consider a single motion cell (Figure 2). In principle, this cell would be able to satisfy the local constraint perfectly. In practice (see Figure 3), the finite output impedance of
the p-type Gilbert multiplier slightly degrades this ideal solution by imposing an effective
conductance G load . Thus, a constant voltage on the capacitor representing a non-zero motion signal requires a net output current of the mUltiplier to maintain it. This requirement
has two interesting consequences:
i) The reported optical flow is dependent on the spatial gradients (contrast). A single uncoupled cell according to Figure 2 has a steady state solution with

U(i,j) ≈ -E_t E_x(i,j) / ( G_load + E_x(i,j)^2 + E_y(i,j)^2 )

and

V(i,j) ≈ -E_t E_y(i,j) / ( G_load + E_x(i,j)^2 + E_y(i,j)^2 )
respectively. For the same object speed, the chip reports higher velocity signals for higher
spatial gradients. Preferably, G_load should be as low as possible to minimize its influence
on the solution.
ii) On the other hand, the locally ill-posed problem is now well-posed because G_load imposes a second constraint. Thus, the chip behaves sensibly in the case of low contrast
input (small gradients), reporting zero motion where otherwise unreliable high values
would occur. This is convenient because the signal-to-noise ratio at low contrast is very
poor. Furthermore, a single cell is forced to report the velocity on the constraint line with
smallest absolute value, which is normal to the spatial gradient. That means that the chip
reports normal flow when there is no neighbor connection. Since there is a trade-off between the robustness of the optical flow computation and a low conductance G_load, the
follower-connected transconductance amplifier in our implementation allows us to control
G_load above its small intrinsic value.
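A one-line numerical sketch of this steady state follows; the value chosen for G_load is an arbitrary illustration, not a chip parameter.

```python
def single_cell_flow(Ex, Ey, Et, G_load=0.01):
    # Steady state of one uncoupled motion cell: tends to zero for
    # low-contrast input, and approaches normal flow as G_load -> 0.
    denom = G_load + Ex**2 + Ey**2
    return -Et * Ex / denom, -Et * Ey / denom
```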
4.2 Results
The results reported below were obtained from a MOSIS tinychip containing a 7x7 array
of motion cells, each 325×325 λ² in size. The chip was fabricated in 1.2 µm technology at
AMI.
Figure 4: Smooth optical flow response of the chip to a left-upwards moving edge.
a: photoreceptor output, the arrow indicates the actual motion direction. b: weak coupling
(small conductance G). c: strong coupling.
Figure 5: Response of the optical flow chip to a plaid stimulus moving towards the left:
a: photoreceptor output; b shows the normal flow computation with disabled coupling
between the motion cells in the network while in c the coupling strength is at maximum.
The chip is able to compute smooth optical flow in a qualitative manner. The smoothness
can be set by adjusting the coupling conductances (Figure 4). Figure 5b presents the normal flow computation that occurs when the coupling between the motion cells is disabled.
The limited resolution of this prototype chip together with the small size of the stimulus
leads to a noisy response. However it is clear that the chip perceives the two gratings as
separate moving objects with motion normal to their edge orientation. When the network
conductance is set very high the chip performs a collective computation solving the aperture problem under the assumption of single object motion. Figure 5c shows how the chip
can compute the correct motion of a plaid pattern.
5 Conclusion
We have presented here an aVLSI implementation of a network that computes 2D smooth
optical flow. The strength of the resistive coupling can be varied continuously to obtain
different degrees of smoothing, from a purely local up to a single global motion signal. The
chip ideally computes smooth optical flow in the classical definition of Horn and Schunck.
Instead of using negative and complex conductances we implemented a network solution
where each motion cell is performing a local constraint satisfaction task in a recurrent
negative feedback loop.
It is significant that the solution of a global energy minimization task can be achieved
within a network of local constraint solving cells that do not have explicit access to the
global computational goal.
Acknowledgments
This article is dedicated to Misha Mahowald. We would like to thank Eric Vittoz, Jorg
Kramer, Giacomo Indiveri and Tobi Delbriick for fruitful discussions. We thank the Swiss
National Foundation for supporting this work and MOSIS for chip fabrication.
References
[1] J. Tanner and C.A. Mead. An integrated analog optical motion sensor. In S.-Y. Kung, R. Owen, and G. Nash, editors, VLSI Signal Processing, 2, page 59 ff. IEEE Press, 1986.
[2] B.K. Horn and B.G. Schunck. Determining optical flow. Artificial Intelligence, 17:185-203, 1981.
[3] A. Yuille. Energy functions for early vision and analog networks. Biological Cybernetics, 61:115-123, 1989.
[4] T. Poggio, V. Torre, and C. Koch. Computational vision and regularization theory. Nature, 317(26):314-319, September 1985.
[5] B.K. Horn. Parallel networks for machine vision. Technical Report 1071, MIT AI Lab, December 1988.
[6] J. Hutchinson, C. Koch, J. Luo, and C. Mead. Computing motion using analog and binary resistive networks. Computer, 21:52-64, March 1988.
[7] T. Poggio, W. Yang, and V. Torre. Optical flow: Computational properties and networks, biological and analog. The Computing Neuron, pages 355-370, 1989.
[8] J.G. Harris, C. Koch, E. Staats, and J. Luo. Analog hardware for detecting discontinuities in early vision. Int. Journal of Computer Vision, 4:211-223, 1990.
[9] J. Wyatt. Little-known properties of resistive grids that are useful in analog vision chip designs. In C. Koch and H. Li, editors, Vision Chips: Implementing Vision Algorithms with Analog VLSI Circuits, pages 72-89. IEEE Computer Society Press, 1995.
[10] S.C. Liu. Silicon retina with adaptive filtering properties. In Advances in Neural Information Processing Systems 10, November 1997.
Scheduling Straight-Line Code Using
Reinforcement Learning and Rollouts
Amy McGovern and Eliot Moss
{amy, moss}@cs.umass.edu
Department of Computer Science
University of Massachusetts, Amherst
Amherst, MA 01003
Abstract
The execution order of a block of computer instructions can make a
difference in its running time by a factor of two or more. In order to
achieve the best possible speed, compilers use heuristic schedulers appropriate to each specific architecture implementation. However, these
heuristic schedulers are time-consuming and expensive to build. In this
paper, we present results using both rollouts and reinforcement learning
to construct heuristics for scheduling basic blocks. The rollout scheduler
outperformed a commercial scheduler, and the reinforcement learning
scheduler performed almost as well as the commercial scheduler.
1 Introduction
Although high-level code is generally written as if it were going to be executed sequentially, many modern computers are pipelined and allow for the simultaneous issue of multiple instructions. In order to take advantage of this feature, a scheduler needs to reorder
the instructions in a way that preserves the semantics of the original high-level code while
executing it as quickly as possible. An efficient schedule can produce a speedup in execution of a factor of two or more. However, building a scheduler can be an arduous process.
Architects developing a new computer must manually develop a specialized instruction
scheduler each time a change is made in the proposed system. Building a scheduler automatically can save time and money. It can allow the architects to explore the design space
more thoroughly and to use more accurate metrics in evaluating designs.
Moss et al. (1997) showed that supervised learning techniques can induce excellent basic
block instruction schedulers for the Digital Alpha 21064 processor. Although all of the
supervised learning methods performed quite well, they shared several limitations. Supervised learning requires exact input/output pairs. Generating these training pairs requires
an optimal scheduler that searches every valid permutation of the instructions within a basic block and saves the optimal permutation (the schedule with the smallest running time).
However, this search was too time-consuming to perform on blocks with more than 10 instructions, because optimal instruction scheduling is NP-hard. Using a semi-supervised
method such as reinforcement learning or rollouts does not require generating training
pairs, so the method can be applied to larger basic blocks and can be trained without knowing optimal schedules.
2 Domain Overview
Moss et al. (1997) gave a full description of the domain. This study presents an overview,
necessary details, our experimental method and detailed results for both rollouts and reinforcement learning.
We focused on scheduling basic blocks of instructions on the 21064 version (DEC, 1992)
of the Digital Alpha processor (Sites, 1992). A basic block is a set of instructions with a
single entry point and a single exit point. Our schedulers could reorder instructions within
a basic block but could not rewrite, add, or remove any instructions. The goal of each
scheduler is to find a least-cost valid ordering of the instructions. The cost is defined as the
simulated execution time of the block. A valid ordering is one that preserves the semantically necessary ordering constraints of the original code. We ensure validity by creating
a dependency graph that directly represents those necessary ordering relationships. This
graph is a directed acyclic graph (DAG).
The Alpha 21064 is a dual-issue machine with two different execution pipelines. Dual
issue occurs only if a number of detailed conditions hold, e.g., the two instructions match
the two pipelines. An instruction can take anywhere from one to many tens of cycles to
execute. Researchers at Digital have a publicly available 21064 simulator that also includes
a heuristic scheduler for basic blocks. We call that scheduler DEC. The simulator gives the
running time for a given scheduled block assuming all memory references hit the cache
and all resources are available at the beginning of the block. All of our schedulers used a
greedy algorithm to schedule the instructions, i.e., they built schedules sequentially from
beginning to end with no backtracking.
In order to test each scheduling algorithm, we used the 18 SPEC95 benchmark programs.
Ten of these programs are written in FORTRAN and contain mostly floating point calculations. Eight of the programs are written in C and focus more on integer, string, and pointer
calculations. Each program was compiled using the commercial Digital compiler at the
highest level of optimization. We call the schedules output by the compiler ORIG. This
collection has 447,127 basic blocks, containing 2,205,466 instructions.
3 Rollouts
Rollouts are a form of Monte Carlo search, first introduced by Tesauro and Galperin (1996)
for use in backgammon. Bertsekas et al. (l997a,b) have explored rollouts in other domains
and proven important theoretical results. In the instruction scheduling domain, rollouts
work as follows: suppose the scheduler comes to a point where it has a partial schedule and
a set of (more than one) candidate instructions to add to the schedule. For each candidate,
the scheduler appends it to the partial schedule and then follows a fixed policy π to schedule
the remaining instructions. When the schedule is complete, the scheduler evaluates the
running time and returns. When π is stochastic, this rollout can be repeated many times for
each instruction to achieve a measure of the average expected outcome. After rolling out
each candidate, the scheduler picks the one with the best average running time.
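In outline, the rollout procedure can be sketched as follows; the `dag`, `pi` and `simulate` interfaces are hypothetical stand-ins for the paper's dependency graph, rollout policy and simulator, not its actual code.

```python
import random

def rollout_schedule(dag, pi, simulate, n_rollouts=25):
    # Greedy construction of a schedule, choosing at each step the
    # candidate with the best average rollout under the fixed policy pi.
    schedule = []
    while not dag.complete(schedule):
        candidates = dag.ready(schedule)
        if len(candidates) == 1:
            schedule.append(candidates[0])
            continue
        best, best_avg = None, float("inf")
        for cand in candidates:
            total = 0.0
            for _ in range(n_rollouts):
                trial = schedule + [cand]
                while not dag.complete(trial):
                    trial.append(pi(trial, dag.ready(trial)))
                total += simulate(trial)   # simulated running time
            if total / n_rollouts < best_avg:
                best, best_avg = cand, total / n_rollouts
        schedule.append(best)
    return schedule

# RANDOM-pi: the rollout policy that makes all choices randomly.
random_pi = lambda schedule, candidates: random.choice(candidates)
```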
Our first set of rollout experiments compared three different rollout policies π. The theory
developed by Bertsekas et al. (1997a,b) proved that if we used the DEC scheduler as π,
we would perform no worse than DEC. An architect proposing a new machine might not
have a good heuristic available to use as π, so we also considered policies more likely to be
available. The first was the random policy, RANDOM-π, which is a choice that is clearly
always available. Under this policy, the rollout makes all choices randomly. We also used
the ordering produced by the optimizing compiler ORIG, denoted ORIG-π. The last rollout
policy tested was the DEC scheduler itself, denoted DEC-π.
The scheduler performed only one rollout per candidate instruction when using ORIG-π
and DEC-π because they are deterministic. We used 25 rollouts for RANDOM-π. After
performing a number of rollouts for each candidate instruction, we chose the instruction
with the best average running time. As a baseline scheduler, we also scheduled each block
randomly. Because the running time increases quadratically with the number of rollouts,
we focused our rollout experiments on one program in the SPEC95 suite: applu.
Table 1 gives the performance of each rollout scheduler as compared to the DEC scheduler
on all 33,007 basic blocks of size 200 or less from applu. To assess the performance of each
rollout policy π, we used the ratio of the weighted execution time of the rollout scheduler
to the weighted execution time of the DEC scheduler. More concisely, the performance
measure was:
ratio = [ Σ_all blocks (rollout scheduler execution time × number of times block is executed) ] / [ Σ_all blocks (DEC scheduler execution time × number of times block is executed) ]
This means that a faster running time on the part of our scheduler would give a smaller
ratio.
Scheduler    Ratio
Random       1.3150
ORIG-π       0.9895
RANDOM-π     1.0560
DEC-π        0.9875
Table 1: Ratios of the weighted execution time of the rollout scheduler to the DEC scheduler. A ratio of less than one means that the rollouts outperformed the DEC scheduler.
All of the rollout schedulers far outperformed the random scheduler, which was 31% slower
than DEC. By only adding rollouts, RANDOM-π was able to achieve a running time only
5% slower than DEC. Only the schedulers using ORIG-π and DEC-π as a model outperformed the DEC scheduler. Using ORIG-π and DEC-π for rollouts produced a schedule
that was 1.1% faster than the DEC scheduler on average. Although this improvement may
seem small, the DEC scheduler is known to make optimal choices 99.13% of the time for
blocks of size 10 or less (Stefanovic, 1997).
Rollouts were tested only on applu rather than on the entire SPEC95 benchmark suite due
to the lengthy computation time. Rollouts are costly because performing m rollouts on n
instructions is O(n²m), whereas a greedy scheduling algorithm is O(n). Again, because of
the time required, we only performed five runs of RANDOM-π. Since DEC-π and ORIG-π
are deterministic, only one run was necessary. We also ran the random scheduler 5 times.
Each number reported above is the geometric mean of the ratios across the five runs.
Part of the motivation behind using rollouts in a scheduler is to obtain fast schedules without
spending the time to build a precise heuristic. With this in mind, we explored RANDOM-π
more closely in a follow-up experiment.
Evaluation of the number of rollouts
This experiment considered how performance varies with the number of rollouts. We tested
1, 5, 10, 25, and 50 rollouts per candidate instruction. We also varied the metric for choosing among candidates. Instead of always choosing the instruction with the best average
performance, we also experimented with selecting the instruction with the absolute best
running time among its rollouts. We hypothesized that selection of the absolute best path
might lead to better performance overall. These experiments were performed on all 33,007
basic blocks of size 200 or less from applu.
Figure 1 shows the performance of the rollout scheduler as a function of the number of
rollouts. Performance is assessed in the same way as before: ratio of weighted execution
Figure 1: Performance of rollout scheduler with the random model as a function of the
number of rollouts and the choice of evaluation function.
times. Thus, a lower number is better. Each data point represents the geometric mean over
five different runs. The difference in performance between one rollout and five rollouts
using the average choice for each rollout is 1.16 versus 1.10. However, the difference
between 25 rollouts and 50 rollouts is only 1.06 versus 1.05. This indicates the tradeoff
between schedule quality and the number of rollouts. Also, choosing the instructions with
the best rollout schedule did not yield better performance for any number of rollouts.
We hypothesize that this is due to the stochastic nature of the rollouts. Once the scheduler
chooses an instruction, it repeats the rollout process again. By choosing the instruction with
the absolute best rollout, there is no guarantee that the scheduler will find that permutation
of instructions again on the next rollout. When it chooses the instruction with the best
average rollout, the scheduler has a better chance of finding a good schedule on the next
rollout.
Although the rollout schedulers performed quite well, the extremely long scheduling time
is a major drawback. Using 25 rollouts per block took over 6 hours to schedule one program. Unless this aspect can be improved, rollouts cannot be used for all blocks in a
commercial scheduler or in evaluating more than a few proposed machine architectures.
However, because rollout scheduling performance is high, rollouts could be used to optimize the schedules on important (long running times or frequently executed) blocks within
a program.
4 Reinforcement Learning Results
4.1 Overview
Reinforcement learning (RL) is a collection of methods for discovering near-optimal solutions to stochastic sequential decision problems (Sutton & Barto, 1998). A reinforcement
learning system does not require a teacher to specify correct actions. Instead, the learning
agent tries different actions and observes their consequences to determine which actions are
best. More specifically, in the reinforcement learning framework, a learning agent interacts
with an environment over a series of discrete time steps t = 0, 1, 2, 3, .... At each time t,
the agent is in some state, denoted s_t, and chooses an action, denoted a_t, which causes the
environment to transition to state s_{t+1} and to emit a reward, denoted r_{t+1}. The next state
and reward depend only on the preceding state and action, but they may depend on it in a
stochastic fashion. The objective is to learn a (possibly stochastic) mapping from states to
actions called a policy, which maximizes the cumulative discounted reward received by the
agent. More precisely, the objective is to choose action a_t so as to maximize the expected
return, E{ Σ_{i=0}^∞ γ^i r_{t+i+1} }, where γ ∈ [0, 1) is a discount-rate parameter.
A common solution strategy is to approximate the optimal value function V*, which maps
states to the maximal expected return that can be obtained starting in each state and taking
the best action. In this paper we use temporal difference (TD) learning (Sutton, 1988). In
this method, the approximation to V* is represented by a table with an entry V(s) for every
state. After each transition from state s_t to state s_{t+1}, under an action with reward r_{t+1},
the estimated value function V(s_t) is updated by:

V(s_t) ← V(s_t) + α [ r_{t+1} + γ V(s_{t+1}) − V(s_t) ]

where α is a positive step-size parameter.
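A tabular TD(0) backup can be sketched in a few lines; the dictionary representation and the default value of zero are illustrative choices, not the paper's implementation.

```python
def td0_update(V, s_t, s_next, r_next, alpha=0.05, gamma=1.0):
    # One temporal-difference backup of the estimated value function.
    v_t = V.get(s_t, 0.0)
    v_next = V.get(s_next, 0.0)
    V[s_t] = v_t + alpha * (r_next + gamma * v_next - v_t)
```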
4.2 Experimental Results
Scheeff et al. (1997) have previously experimented with reinforcement learning in this
domain. However, the results were not as good as hoped. Finding the right reward structure
was the difficult part of using RL in this domain. Rewarding based on number of cycles
to execute the block does not work well as it punishes the learner on long blocks. To
normalize for this effect, Scheeff et al. (1997) rewarded based on the cycles per instruction
(CPI). However, learning with this reward also did not work well as some blocks have more
unavoidable idle time than others. A reward based solely on CPI does not account for this
aspect. To account for this variation across blocks, we gave the RL scheduler a final reward
of:
r = time to execute block − max( weighted critical path, (# of instructions) / 2 )
The scheduler received a reward of zero unless the schedule was complete. As the 21064
processor can only issue two instructions at a time, the number of instructions divided by 2
gives an absolute lower bound on the running time. The weighted critical path (wcp) helps
to solve the problem of the same size blocks being easier or harder to schedule than others.
When a block is harder to execute than another block of the same size, the wcp tends to
be higher, thus causing the learner to get a different reward. The wcp is correlated with
the predicted number of execution cycles for the DEC scheduler (r = 0.9) and the number
of instructions divided by 2 is also correlated (r = 0.78) with the DEC scheduler. Future
experiments will use a weighted combination of these two features to compute the reward.
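As reconstructed from the formula above, the final reward can be sketched as follows; the sign convention (smaller is better) and the exact form of the lower bound are assumptions made for illustration.

```python
def final_reward(exec_time, wcp, n_instructions):
    # Execution time relative to a lower bound: the larger of the
    # weighted critical path and #instructions / 2 (dual-issue machine).
    # The reward is zero until the schedule is complete.
    lower_bound = max(wcp, n_instructions / 2.0)
    return exec_time - lower_bound
```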
As with the supervised learning results presented in Moss et al. (1997), the RL system
learned a preferential value function between candidate instructions. That is, instead of
learning the value of instruction A or instruction B, RL learned the value of choosing
instruction A over instruction B. The state space consisted of a tuple of features from a
current partial schedule and the two candidate instructions. These features were derived
from knowledge of the DEC simulator. The features and our intuition for their importance
are summarized in Table 2.
Previous experiments (Moss et al. 1997) showed that the actual value of wcp and e did
not matter as much as their relative values. Thus, for those features we used the signum
σ(·) of the difference of their values for the two candidate instructions. Signum returns
-1, 0, or 1 depending on whether the value is less than, equal to, or greater than zero. Using
this representation, the RL state space consisted of the following tuple, given candidate
instructions x and y and partial schedule p:

state_vec(p, x, y) = ( odd(p), ic(x), ic(y), d(x), d(y), σ(wcp(x) − wcp(y)), σ(e(x) − e(y)) )
This yields 28,800 unique states. Figure 2 shows an example partial schedule, a set of
candidate instructions, and the resulting states for the RL system.
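The state construction can be sketched directly; the feature functions (odd, ic, d, wcp, e) are assumed to be supplied by the simulator model and are passed in here only to keep the sketch self-contained.

```python
def sgn(z):
    # Signum: -1, 0 or 1.
    return (z > 0) - (z < 0)

def state_vec(p, x, y, odd, ic, d, wcp, e):
    # Preference-learning state for the candidate pair (x, y)
    # given the partial schedule p.
    return (odd(p), ic(x), ic(y), d(x), d(y),
            sgn(wcp(x) - wcp(y)), sgn(e(x) - e(y)))
```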
The RL scheduler does not learn over states where there are no choices to be made. The
last choice point in a trajectory is given the final reward even if further instructions are
scheduled from that point. The values of multiple states are updated at each time step because the instruction that is chosen affects the preference function of multiple states. For
Odd Partial (odd)
  Description: Is the current number of instructions scheduled odd or even?
  Intuition for use: If TRUE, we're interested in scheduling instructions that can dual-issue with the previous instruction.

Instruction Class (ic)
  Description: The Alpha's instructions can be divided into equivalence classes with respect to timing properties.
  Intuition for use: The instructions in each class can be executed only in certain execution pipelines, etc.

Weighted Critical Path (wcp)
  Description: The height of the instruction in the DAG (the length of the longest chain of instructions dependent on this one), with edges weighted by expected latency of the result produced by the instruction.
  Intuition for use: Instructions on longer critical paths should be scheduled first, since they affect the lower bound of the schedule cost.

Actual Dual (d)
  Description: Can the instruction dual-issue with the previous scheduled instruction?
  Intuition for use: If Odd Partial is TRUE, it is important that we find an instruction, if there is one, that can issue in the same cycle with the previous scheduled instruction.

Max Delay (e)
  Description: The earliest cycle when the instruction can begin to execute, relative to the current cycle; this takes into account any wait for inputs or for functional units to become available.
  Intuition for use: We want to schedule instructions that will have their data and functional unit available earliest.
Table 2: Features for Instructions and Partial Schedule
[Figure 2 content: a partial schedule p with three candidate instructions A, B and C; the state table lists AB = state_vec(p,A,B), AC = state_vec(p,A,C), BC = state_vec(p,B,C), BA = state_vec(p,B,A), CA = state_vec(p,C,A), CB = state_vec(p,C,B).]
Figure 2: On the left is a graphical depiction of a partial schedule and three candidate
instructions. The table on the right shows how the RL system makes its states from this.
example, using the partial schedule and candidate instructions shown in Figure 2, scheduling instruction A, the RL system would backup values for AB, AC, and the opposite values
for BA and CA.
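A sketch of this symmetric backup, reusing the state_vec helper above; the update rule, step size and `features` tuple (odd, ic, d, wcp, e) are illustrative assumptions, not the paper's exact learning rule.

```python
def backup_choice(V, p, chosen, candidates, target, features, alpha=0.05):
    # Choosing `chosen` over each competing candidate updates both
    # orderings of the pair: (chosen, other) toward target, and the
    # reversed pair (other, chosen) toward -target.
    for other in candidates:
        if other is chosen:
            continue
        s = state_vec(p, chosen, other, *features)
        s_rev = state_vec(p, other, chosen, *features)
        V[s] = V.get(s, 0.0) + alpha * (target - V.get(s, 0.0))
        V[s_rev] = V.get(s_rev, 0.0) + alpha * (-target - V.get(s_rev, 0.0))
```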
Using this system, we performed leave-one-out cross validation across all blocks of the
SPEC95 benchmark suite. Blocks with more than 800 instructions were broken into blocks
of 800 or less because of memory limitations on the DEC simulator. This was true for
only two applications: applu and fpppp. The RL system was trained online for 19 of the
20 applications using α = 0.05 and an ε-greedy exploration method with ε = 0.05. This
was repeated 20 different times, holding one program from SPEC95 out of the training
each time. We then evaluated the greedy policy (ε = 0) learned by the RL system on each
program that had been held out. All ties were broken randomly. Performance was assessed
the same way as before. The results for each benchmark are shown in Table 3. Overall,
the RL scheduler performed only 2% slower than DEC. This is a geometric mean over all
applications in the suite and on all blocks. Although the RL system did not outperform the
DEC scheduler overall, it significantly outperformed DEC on the large blocks (applu-big
and fpppp-big).
5 Conclusions
The advantages of the RL scheduler are its performance on the task, its speed, and the fact
that it does not rely on any heuristics for training. Each run was much faster than with
rollouts and the performance came close to the performance of the DEC scheduler. In a
App          Ratio     App         Ratio     App         Ratio     App        Ratio
applu        1.001     applu-big   0.959     apsi        1.018     cc1        1.022
compress95   0.977     fpppp       1.055     fpppp-big   0.977     go         1.028
hydro2d      1.022     ijpeg       0.975     li          1.012     m88ksim    1.042
mgrid        1.009     perl        1.014     su2cor      1.018     swim       1.040
tomcatv      1.019     turb3d      1.218     vortex      1.032     wave5      1.032
Table 3: Performance of the greedy RL-scheduler on each application in SPEC95 over all
leave-one-out cross-validation runs as compared to DEC. Applications whose running time
was better than DEC are shown in italics.
system where multiple architectures are being tested, RL could provide a good scheduler
with minimal setup and training.
We have demonstrated two methods of instruction scheduling that do not rely on having
heuristics and that perform quite well. Future work could address tying the two methods
together while retaining the speed of the RL learner, issues of global instruction scheduling,
scheduling loops, and validating the techniques on other architectures.
Acknowledgments
We thank John Cavazos and Darko Stefanovic for setting up the simulator and for prior work in
this domain, along with Paul Utgoff, Doina Precup, Carla Brodley, and David Scheeff. We also
wish to thank Andrew Barto, Andrew Fagg, and Doina Precup for comments on earlier versions of
the paper. This work is supported in part by the National Physical Science Consortium, Lockheed
Martin, Advanced Technology Labs, and NSF grant IRI-9503687 to Roderic A. Grupen and Andrew
G. Barto. We thank various people of Digital Equipment Corporation, for the DEC scheduler and the
ATOM program instrumentation tool (Srivastava & Eustace, 1994), essential to this work. We also
thank Sun Microsystems and Hewlett-Packard for their support.
References
Bertsekas, D. P. (1997). Differential training of rollout policies. In Proc. of the 35th Allerton Conference on Communication, Control, and Computing. Allerton Park, Ill.
Bertsekas, D. P., Tsitsiklis, J. N. & Wu, C. (1997). Rollout algorithms for combinatorial optimization. Journal of Heuristics.
DEC (1992). DEC chip 21064-AA Microprocessor Hardware Reference Manual (first edition Ed.). Maynard, MA: Digital Equipment Corporation.
Moss, J. E. B., Utgoff, P. E., Cavazos, J., Precup, D., Stefanovic, D., Brodley, C. E. & Scheeff, D. T. (1997). Learning to schedule straight-line code. In Proceedings of Advances in Neural Information Processing Systems 10 (Proceedings of NIPS'97). MIT Press.
Scheeff, D., Brodley, C., Moss, E., Cavazos, J. & Stefanovic, D. (1997). Applying reinforcement learning to instruction scheduling within basic blocks. Technical report, University of Massachusetts, Amherst.
Sites, R. (1992). Alpha Architecture Reference Manual. Maynard, MA: Digital Equipment Corporation.
Srivastava, A. & Eustace, A. (1994). ATOM: A system for building customized program analysis tools. In Proc. ACM SIGPLAN '94 Conf. on Prog. Lang. Design and Impl. (pp. 196-205).
Stefanovic, D. (1997). The character of the instruction scheduling problem. University of Massachusetts, Amherst.
Sutton, R. S. (1988). Learning to predict by the method of temporal differences. Machine Learning, 3, 9-44.
Sutton, R. S. & Barto, A. G. (1998). Reinforcement Learning: An Introduction. Cambridge, MA: MIT Press.
Tesauro, G. & Galperin, G. R. (1996). On-line policy improvement using Monte-Carlo search. In Advances in Neural Information Processing: Proceedings of the Ninth Conference. MIT Press.
A Micropower CMOS Adaptive Amplitude and
Shift Invariant Vector Quantiser
Richard J. Coggins, Raymond J.W. Wang and Marwan A. Jabri
Computer Engineering Laboratory
School of Electrical and Information Engineering, J03
University of Sydney, 2006, Australia.
{richardc, jwwang, marwan} @seda1.usyd.edu.au
Abstract
In this paper we describe the architecture, implementation and experimental results for an Intracardiac Electrogram (ICEG) classification and
compression chip. The chip processes and vector-quantises 30 dimensional analogue vectors while consuming a maximum of 2.5 µW power
for a heart rate of 60 beats per minute (1 vector per second) from a 3.3 V
supply. This represents a significant advance on previous work which
achieved ultra low power supervised morphology classification since the
template matching scheme used in this chip enables unsupervised blind
classification of abnormal rhythms and the computational support for low
bit rate data compression. The adaptive template matching scheme used
is tolerant to amplitude variations, and inter- and intra-sample time shifts.
1 INTRODUCTION
Implantable cardioverter defibrillators (ICDs) are devices used to monitor the electrical
activity of the heart muscle and to apply appropriate levels of electrical stimulation if abnormal conditions are detected. Despite the considerable success of ICDs, they suffer from
a number of limitations, including an inability to detect and treat some abnormal heart
rhythms and limited data recording capabilities.
We have previously shown that micropower analogue Multi-Layer Perceptron (MLP) neural networks can be trained to separate such arrhythmia [4]. However, MLPs are best suited
to learning the boundary between classes whereas a vector quantization scheme allows a
measure of the probability density of the morphological types to be estimated.
Many analogue vector quantiser (VQ) chips have been reported in the literature. For example, a 16x256 500 kHz 50 mW 2 µm CMOS vector A/D converter [10] and a 16 x 16
300 kHz 0.7 mW 2 µm CMOS analogue VQ [1]. These correspond to an energy per match
per dimension of 24 pJ and 9 pJ respectively. The integrated circuit (IC) described in
this paper is distinguished from these approaches in that it is specifically targeted for the
low power, low bandwidth application of ICEG classification and compression. Our chip
achieves vector matching (without the winner take all function) to 7 bit 30 dimensional
vectors with three coefficient linear prediction, at an energy consumption of 15 pJ per template per dimension using a 1.2 µm CMOS process. Although this figure is greater than
that for [1] it should be noted that in [1] the mean absolute error metric is used rather than
the squared Euclidean distance and no provision is provided for linear transformation of
the incoming analogue vector.
2 ADAPTIVE DATA COMPRESSION
Recording of ICEGs in ICDs is currently very limited due to the amount of memory available and the power/area cost of implementing all but the simplest compression techniques.
Micropower template matching however, enables large amounts of the signal to be encoded
as template indices plus amplitude parameters. Effective compression of the ICEG requires
adaptation to the short term non-stationary behaviour of the ICEG [2]. In particular, short
term amplitude variations, lag variation, phase variation and ectopic beats (which originate from the ventricles of the heart and have differing morphology) reduce the achievable
compression. The impact of ectopic beats can be reduced by increasing the number of
templates. This can often be achieved without increasing the code book search complexity
by using associated timing features. The amplitude and shift variations require short term
adaptation of the template matching in order to minimise the residual error and hence raise
the compression ratio at fixed distortion.
2.1 Amplitude and Shift Invariant Matching
In order to facilitate analogue implementation, a backward prediction procedure is used
rather than the usual forward prediction [8]. This approach allows the incoming analogue
template to be manipulated in the analogue domain for amplitude and shift invariance purposes. Consider the long term backward prediction problem described by,
r_b(n) = x̄(n) − b_0 x(n + a) − b_1 [ x(n + a + 1) − x(n + a − 1) ] / 2        (1)
where r_b(n) denotes the backward residuals, x̄ is a template which is a function of previous
beats, x(a) is the sampled ICEG signal, a the time index, n is the template index and b_0 and
b_1 are the amplitude and phase coefficients respectively. b_0 scales the current beat to match
the template and hence is an amplitude term. b_1 scales the central difference of the current
beat and is a function of the amplitude and phase corrections required to minimise the
residuals. To see why this is a phase term, consider the Taylor expansion of A x(t + φ) to
the first derivative term around t,

A x(t + φ) = A x(t) + A φ x'(t)        (2)

where φ is a small phase shift of x(t) and A is the amplitude factor. When φ is due to
sampling jitter then −T/2 ≤ φ ≤ T/2, where T is the sampling period. Provided that x(t) is
sampled according to the Nyquist criterion, φ is sufficiently small for the first derivative
term to adequately account for the sampling jitter. Hence, b_1 accounts for the residual error
remaining after optimisation of the integer a. a is approximately determined by the beat
detector of the ICD which attempts to detect the fiducial point of heart beats using filters
and comparators. b_0 and b_1 can be determined by minimising the squared error between
the current signal window and the previously recorded template which in this case has a
closed form solution in terms of correlation coefficients. However, in Section 3 we present
an alternative iterative procedure suited to low-power analogue implementations.
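For illustration, the closed-form least-squares fit can be sketched as follows (a NumPy sketch of the off-line solution mentioned in the text, not the chip's iterative procedure; the window indexing assumes a >= 1 and len(x) > N + a so the window and its neighbours exist).

```python
import numpy as np

def fit_amplitude_phase(template, x, a):
    # Least-squares estimates of b0 (amplitude) and b1 (phase) in
    # equation (1): template(n) ~ b0*x(n+a) + b1*(x(n+a+1)-x(n+a-1))/2.
    N = len(template)
    n = np.arange(N)
    s = x[n + a]                               # aligned signal window
    ds = (x[n + a + 1] - x[n + a - 1]) / 2.0   # central difference
    A = np.stack([s, ds], axis=1)
    b0, b1 = np.linalg.lstsq(A, template, rcond=None)[0]
    return b0, b1
```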
3 SYSTEM ARCHITECTURE & IMPLEMENTATION
Figure 1: Left: Block diagram of the adaptive linear transform VQ chip. Middle: Floorplan
of the chip. Right: Photomicrograph of the chip.
The ICEG is first high pass filtered to remove the DC and then is bandpass filtered to
prevent aliasing and enhance the high frequency component for beat detection. (This is
the filtering approach already existing in an ICD and therefore not implemented by us).
This then feeds the discrete time analogue delay line, which is continuously sampling the
signal at 250 Hz. The analogue samples are then transformed by a two layer network. The
first layer implements the linear prediction by adjusting the amplitude b_0 and the phase
of the analogue vector. Note that the phase consists of two components, the coarse part
a corresponding to sample lags and the fine part b_1 corresponding to intra-sample lags. The
second layer calculates the distance between the linearly predicted vector and the template
w(n) to be matched. A comparator is provided so that a match to within a given threshold
may be detected.
3.1 Chip Architecture
Input to the IC is via a single analogue channel which is sampled by a bucket brigade
device of length 30. The resultant 30 dimensional analogue vector is adaptively linear
transformed to facilitate a shift and scale invariant match to a digital (7 bit per dimension)
template. The IC generates digital representations of the square of the Euclidean distance
between the transformed analogue vector and the digital template. A block diagram of the
IC appears in Figure 1. The IC has been fabricated. Performance figures in this paper are
based on measurements of the chip fabricated in a 1.2 µm CMOS MOSIS process.
The block diagram shows the input signal being sampled by the bucket brigade device
(BBD)[4]. The signal is sampled at a rate of 250 Hz. Existing circuitry in the defibrillator
detects the peak of the heart beat and hence indicates a coarse alignment (due to detection
jitter) to the template stored in the template DACs (TDACs). The BBD continues to sample
until the coarse alignment is attained at which point the IC is biased up. The BBD now
contains a segment of the ICEG corresponding to one heart beat. The digital error output is
then monitored with the linear transform blocks configured to 1:1 mappings until an error
minimum is detected indicating optimal sampling alignment. The three linear transform
coefficient DACs (CDACs) which are common to the 30 linear transform blocks may then
be adapted to further reduce the matching error. The transformation can be represented by
y(n) = a_0 x(n - 1) + a_1 x(n) + a_2 x(n + 1), where a_0 corresponds to CDAC0 etc. This
constitutes a general linear long term prediction [8]. Constraining CDAC0 and CDAC2 to
be equal magnitudes and opposite signs results in a minimisation of errors due to phase
and amplitude variation and a simpler adaptation procedure. The matching error is computed via the squarer blocks and the summing node. The matching error consists of both a
magnitude and exponent thereby increasing the dynamic range of the error representation.
The magnitude is the output of the squarer block. The exponent is determined by control
of a current reference in the squaring circuit. A reference DAC and precision current comparator provide the means of successive approximation A/D conversion of the matching
error current I_ERR. Using this scheme heart beat morphology can be classified by loading
different templates (TDAC values). A stream of beats may be compressed by identifying matches with continuously updated representations of previous beats. Close matches
are encoded by an index and an amplitude coefficient while poor matches are encoded by
quantised residuals which have been minimised by the linear prediction.
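Behaviourally, the matching operation can be sketched as below; this is a functional illustration rather than a circuit model, and np.roll wraps the window ends, which differs slightly from the chip's boundary behaviour.

```python
import numpy as np

def match_error(x, w, a0, a1, a2):
    # Linear transform y(n) = a0*x(n-1) + a1*x(n) + a2*x(n+1) followed
    # by the squared Euclidean distance to the template w.
    y = a0 * np.roll(x, 1) + a1 * x + a2 * np.roll(x, -1)
    return float(np.sum((y - w) ** 2))
```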
3.2 Adaptation and Learning
The first step in the learning process is to determine a, the coarse phase lag. This can be
achieved by shifting the delay line and evaluating the error until a minimum is reached.
Once the coarse phase lag a has been determined, the error function to be minimised to
compensate for amplitude and phase variations is given by E = Σ_{i=1}^{N} (b_0 x_i + b_1 Δx_i − w_i)²,
where the subscript i implicitly incorporates the coarse phase a. This is a quadratic in
b_0 and b_1. b_0 and b_1 can be optimised separately provided the cross terms in E are negligible.
Here the cross terms are given by Σ_{i=1}^{N} 2 b_0 b_1 x_i Δx_i = b_0 b_1 (x_{N+1} x_N − x_1 x_0). Thus, if
the end points of the N point window have approximately the same value (as is usually
the case for ICEG beats) then the cross terms in E are negligible and b_0 and b_1 can be
optimised separately.
So the only remaining issue is how to optimise a single parameter. A simple linear search
takes at most 2^b evaluations of E, where b is the number of bits. A search based on bisection
takes b + 2 evaluations. Techniques involving gradient descent and conjugate gradient
lead to more complex learning logic with minor reductions in the number of evaluations.
Therefore, bisection is the best compromise between the number of evaluations and the
complexity of the learning state machine.
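A sketch of such a bisection over DAC codes follows; the interface and code range are assumptions, and the chip's learning state machine realises the same idea in hardware.

```python
def bisect_minimise(E, lo=0, hi=127):
    # Bisection on the sign of the discrete slope of a unimodal error
    # function E over integer DAC codes; roughly log2(hi - lo) steps.
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if E(mid + 1) < E(mid):   # still descending: minimum lies right
            lo = mid + 1
        else:
            hi = mid
    return lo if E(lo) <= E(hi) else hi
```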
Once the best template match has been achieved, learning may also then be applied to
the template itself depending on the application and context. For example, in the case
of adaptive classification a weight perturbation algorithm [6] could be used to adapt the
template for morphological drift based on heart rate information. Similarly, for a data
compression application, if the template match exceeds a fidelity criterion the template
may be adapted and the template changes logged in the compression record.
3.3 Building Blocks
In order to implement the template matcher, sub-threshold analogue VLSI building blocks
were designed. All transistors in the building blocks operate in weak inversion exclusively.
We do not have the space to describe all of the building blocks, so we will focus here on
the linear transform and squarer cells.
3.3.1 Linear Transform Cell
The linear transform (LT) cell consists of three linearised differential pairs [7] with their
biases controlled by the coefficient DACs (CDACs) (see Figure 2(a)). The nature of the linearisation is controlled by the ratio of the aspect ratios of M3 to M5 and M4 to M6. Methods
for choosing this ratio are discussed in [5]. Denoting the aspect ratio of a transistor by S we
chose S_3/S_5 = S_4/S_6 = 4. This introduces some ripple in the transconductance while
increasing the asymptotic saturation voltage to 4nU_T compared to nU_T for the ordinary
differential pair. Signed coefficients are achieved by switches at the outputs of the differential pairs. The template DACs (TDACs) have differential outputs to form the difference
y(n) - w(n) where w(n) is the nth template value.
3.3.2 Squaring Cell
The squaring function must meet the following design constraints. It should have current
inputs and outputs in order to avoid linear current to voltage conversion at low currents. The
squared current must be normalised to the original linear range to avoid excessive power
consumption. The squaring function should avoid the MOS square law approach in order
to conserve space and power, and the available voltage range should be 3.3 V rail to
rail.
Figure 2: (a) Circuit diagram of one of the three linear transform linearised differential
pairs in the LT cell. (b) Circuit diagram of the squarer (SQ cell) and the summing node.
The choices available then are restricted to weak inversion circuits. The circuit (see Figure 2(b)) used relies on the translinear principle [9]. Here, loops of MOS g-s diode structures operating in weak inversion are used to form a normalised squared current which is
summed to form the final normalised output. The translinear loops are implemented with
P-type transistors in separate N-wells to avoid the body effect. Positive and negative inputs
are squared separately using the RCLK signals and then added at the output.
3.4 Circuit Performance
Table 1: Summary of electrical specifications of the chip.
Item                          Conditions                                    Value
Template dimension                                                          30
Adaptation coefficients       Excludes squarer error gain control           3
DAC precision                 Weighted lateral PNP                          7 bits
Max. error per dimension^a    CDACx = 64, DC BBD, w.r.t. TDACs              2 bits
LSB bias                                                                    2 nA
Power consumption             TDACs = CDAC1 = 64, duty cycle^b = 3.2%       2.5 µW

^a Excludes error at the first CDAC0 stage.
^b For 1 bpm, chip biased up 8/250 of the time.
We provide three measures of the performance of the chip along with a summary of its
basic electrical characteristics which is shown in Table 1. The first measure characterises
the accuracy of the template matching function relative to the available precision of the
template. This is summarised by the Maximum Error per dimension in Table 1 which was
produced by inputting a zero offset DC signal into the BBD and setting each CDAC in turn to
one half of its maximum value. The TDACs were then adjusted so as to minimise the output
of the squarer. Therefore, the resulting TDAC values indicate the accumulated effects of
transistor mismatches through each path to the squarer output. The curves generated are
averages over 80 trials to remove noise influences (whereas the classification performance
shown in Table 3 includes such influences). The curves showed that except for
the input stage corresponding to CDACO (stage 30) the accumulated mismatches influence
the two least significant bits of the TDACs. A larger error of 4 bits for the first stage
feeding CDACO was due to a design oversight of not providing a dummy capacitive load
to the input end of the BBD (stage 30 of CDACO derives its input from the input BBD cell,
which does not have the full capacitive loading of three linearised differential pairs as on
the rest of the cells).
Table 2: Relative impact on the error output of the chip for the adaptation steps of alignment, amplitude and phase correction for patient No. 2's ST rhythm. The errors are normalised to the non-aligned error. A numerical simulation is provided for comparison to the
chip performance.
Adaptation step   Chip Error   Std. Dev.   Simulation Error   Std. Dev.
No align          1.0          0.04        1.0                0.28
Align             0.31         0.07        0.41               0.35
Amplitude         0.16         0.05        0.37               0.22
Phase             0.07         0.01        0.32               0.16
The second performance measure uses real heart patients' ICEG (Sinus Tachycardia) ST
data. Table 2 shows the normalised output error of the chip averaged over 107 heart beats
while being compared to the 10th beat in the series. The normalised error was measured
from a mirrored version of the current at the output of the chip. The adaptation steps shown
in the table are as follows. "No align" implies that the error for the template match is determined only by the approximate alignment provided by a numerical simulation of the beat
detector of the ICD. "Align" corresponds to coarse alignment where the matching error is
calculated up to two samples either side to determine the best positioning of the input in
the BBD. "Amplitude" corresponds to adaptation of the amplitude coefficient by adjustment of CDAC1. "Phase" corresponds to adaptation of the difference between CDAC2 and
CDAC0. Each of the adaptations reduces the error of the match with the coarse alignment
being most significant. An idealised limited precision numerical simulation of the error
calculation is also provided in the table for comparison. It can be seen that the amplitude
and phase adaptation steps lower the relative error more for the chip than in the simulation.
This is most likely due to the adaptation on the chip also compensating for the analogue
noise and imprecision as well as the variability of the original data.
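Read as software, the alignment and amplitude steps amount to the following sketch. This is our own paraphrase under simplifying assumptions: a closed-form least-squares gain stands in for the chip's iterative CDAC1 adjustment, and the phase step (the CDAC2-CDAC0 difference) is omitted for brevity.

import numpy as np

def adapt_and_match(beat, template, max_shift=2):
    # "Align": try shifts of up to max_shift samples either side of the
    # nominal position and keep the best squared-error match.
    beat = np.asarray(beat, dtype=float)
    template = np.asarray(template, dtype=float)
    shifts = range(-max_shift, max_shift + 1)
    errs = {s: np.sum((np.roll(beat, s) - template) ** 2) for s in shifts}
    aligned = np.roll(beat, min(errs, key=errs.get))
    # "Amplitude": scale the aligned beat by one coefficient.
    gain = np.dot(aligned, template) / np.dot(aligned, aligned)
    return np.sum((gain * aligned - template) ** 2)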
The third performance measure illustrates the ability of the chip to solve a blind classification problem and is summarised in Table 3. The safe rhythm of the patient is Sinus
Tachycardia (ST). For each patient one beat is chosen at random as the template and is
loaded into the TDACs of the chip. The 20 beats subsequent to the chosen template are
then used to determine the average error between templates after adaptation. Twice this
error is then used as the classifier threshold for "safe" versus "unknown". The ST and
VT data sets for the patient are then passed through the chip and classified, giving the column "% Correct chip". For comparison, the expected best performance for the data set is
also reproduced in the table from previous work by the authors [3]. The results indicate
that a very simple blind classification algorithm, when combined with the adaptive template
matching capabilities of the chip, shows good performance for 4 out of 5 patients.

Table 3: Performance of the chip on a blind classification task for 5 patients with Ventricular Tachycardia (VT) 1:1 retrograde conduction, compared to classification bounds. (a: The R point search interval was increased to 4 for this patient.)
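A minimal sketch of this blind classification rule, reusing the hypothetical `adapt_and_match` helper above (the data layout is our own assumption):

import numpy as np

def blind_classify(template, training_beats, test_beats):
    # Threshold = twice the average adapted matching error over the 20
    # beats that follow the randomly chosen template beat.
    train_errs = [adapt_and_match(b, template) for b in training_beats]
    threshold = 2.0 * np.mean(train_errs)
    return ["safe" if adapt_and_match(b, template) <= threshold
            else "unknown" for b in test_beats]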
4 CONCLUSION
We have presented a micropower learning vector quantization system that can provide hardware support for both signal classification and compression of ICEG signals. The analogue
block can be used to implement several different classification and compression algorithms
depending on how the template matching capability is utilised. By providing significant
compression capability in an ICD, a larger database of natural onset cardiac arrhythmia
should become available, leading to improved designs of ICD based adaptive classification
and compression systems.
5 ACKNOWLEDGEMENTS
The work in this paper was funded by the Australian Research Council and Telectronics
Pacing Systems Ltd, Sydney, Australia.
References
[1] G. Cauwenberghs and V. Pedroni. A Charge-Based CMOS Parallel Analog Vector Quantiser. In NIPS, volume 7, pages 779-786. MIT Press, 1995.
[2] R.J. Coggins. Low Power Signal Compression and Classification for Implantable Defibrillators. PhD thesis, University of Sydney, Sydney, Australia, 1996.
[3] R.J. Coggins and M.A. Jabri. Classification and Compression of ICEGs using Gaussian Mixture Models. In J. Principe, L. Giles, N. Morgan, and E. Wilson, editors, Neural Networks for Signal Processing, volume 7, pages 226-235. IEEE, 1997.
[4] R.J. Coggins, M.A. Jabri, B.G. Flower, and S.J. Pickard. A Hybrid Analog and Digital VLSI Neural Network for Intracardiac Morphology Classification. IEEE Journal of Solid-State Circuits, 30(5):542-550, May 1995.
[5] M. Furth and A. Andreou. Linearised Differential Transconductors in Subthreshold CMOS. Electronics Letters, 31(7):545-547, 1995.
[6] M.A. Jabri and B.G. Flower. Weight Perturbation: An Optimal Architecture and Learning Technique for Analog VLSI Feedforward and Recurrent Multilayer Networks. IEEE Transactions on Neural Networks, 3(1):154-157, January 1992.
[7] F. Krummenacher and N. Joehl. A 4 MHz CMOS Continuous Time Filter with On-Chip Automatic Tuning. IEEE Journal of Solid-State Circuits, 23(3):750-758, June 1988.
[8] G. Nave and A. Cohen. ECG Compression Using Long Term Prediction. IEEE Trans. Biomed. Eng., 40(9):877-885, 1993.
[9] E. Seevinck. Analysis and Synthesis of Translinear Integrated Circuits. Elsevier, 1988.
[10] G.T. Tyson, S. Fallahi, and A.A. Abidi. An 8b CMOS Vector A/D Converter. In Proceedings of the International Solid State Circuits Conference, pages 38-39, 1993.
Asymmetrical and Symmetrical Networks
Zhaoping Li
Peter Dayan
Gatsby Computational Neuroscience Unit
17 Queen Square, London, England, WCIN 3AR.
zhaoping@gatsby.ucl.ac.uk
dayan@gatsby.ucl.ac.uk
Abstract
Symmetrically connected recurrent networks have recently been
used as models of a host of neural computations. However, because of the separation between excitation and inhibition, biological neural networks are asymmetrical. We study characteristic
differences between asymmetrical networks and their symmetrical counterparts, showing that they have dramatically different
dynamical behavior and also how the differences can be exploited
for computational ends. We illustrate our results in the case of a
network that is a selective amplifier.
1 Introduction
A large class of non-linear recurrent networks, including those studied by
Grossberg,9 the Hopfield net,10,11 and many more recent proposals for the
head direction system,27 orientation tuning in primary visual cortex,25,1,3,18 eye
position,20 and spatial location in the hippocampus,19 make a key simplifying
assumption that the connections between the neurons are symmetric. Analysis
is relatively straightforward in this case, since there is a Lyapunov (or energy)
function4,11 that often guarantees the convergence of the motion trajectory to an
equilibrium point. However, the assumption of symmetry is broadly false. Networks in the brain are almost never symmetrical, if for no other reason than the
separation between excitation and inhibition. In fact, the question of whether ignoring the polarity of the cells is simplification or over-simplification has yet to be
fully answered.

Networks with excitatory and inhibitory cells (EI systems, for short) have
long been studied,6 for instance from the perspective of pattern generation in
invertebrates,23 and of oscillations in the thalamus24 and the olfactory system.17,13
Further, since the discovery of 40 Hz oscillations or synchronization amongst cells
in primary visual cortex of the anesthetised cat,8,5 oscillatory models of V1 involving
separate excitatory and inhibitory cells have also been popular, mainly from the
perspective of how the oscillations can be created and sustained and how they can
be used for feature linking or binding.26,22,12 However, the scope for computing
with dynamically stable behaviors such as limit cycles is not yet clear.
In this paper, we study the computational differences between a family of EI systems and their symmetric counterparts (which we call S systems). One inspiration for this work is Li's nonlinear EI system modeling how the primary visual
cortex performs contour enhancement and pre-attentive region segmentation.14,15
Studies by Braun2 had suggested that an S system model of the cortex cannot
perform contour enhancement unless additional (and biologically questionable)
mechanisms are used. This posed a question about the true differences between
EI and S systems that we answer. We show that EI systems can take advantage of
dynamically stable modes that are not available to S systems. The computational
significance of this result is discussed and demonstrated in the context of models
of orientation selectivity. More details of this work, especially its significance for
models of the primary visual cortical system, can be found in Li & Dayan (1999).16

2 Theory and Experiment

Consider a simple, but biologically significant, EI system in which excitatory and
inhibitory cells come in pairs and there are no 'long-range' connections from the
inhibitory cells14,15 (to which the Lyapunov theory4,21 does not yet apply):
\dot{x}_i = -x_i + \sum_j J_{ij}\, g(x_j) - h(y_i) + I_i, \qquad \tau_y\, \dot{y}_i = -y_i + \sum_j W_{ij}\, g(x_j), \qquad (1)

where x_i are the principal excitatory cells, which receive external or sensory input I_i and generate the network outputs g(x_i); y_i are the inhibitory interneurons
(which are taken here as having no external input); the function g(x) = [x - T]^+ is
the threshold non-linear activation function for the excitatory cells; h(y) is the activation function for the inhibitory cells (for analytical convenience, we use the
linear form h(y) = y - T_y, although the results are similar with the non-linear
h(y) = [y - T_y]^+); \tau_y is a time constant for the inhibitory cells; and J_{ij} and W_{ij} are
the output connections of the excitatory cells. Excitatory and inhibitory cells can
also be perturbed by Gaussian noise.
In the limit that the inhibitory cells are made infinitely fast (\tau_y = 0), we have
y_i = \sum_j W_{ij}\, g(x_j), leaving the excitatory cells to interact directly with each other:

\dot{x}_i = -x_i + \sum_j J_{ij}\, g(x_j) - h\big(\sum_j W_{ij}\, g(x_j)\big) + I_i \qquad (2)
        = -x_i + \sum_j (J_{ij} - W_{ij})\, g(x_j) + I_i + \kappa_i \qquad (3)
where \kappa_i are constants. In this network, the neural connections J_{ij} - W_{ij} between
any two cells x can be either excitatory or inhibitory, as in many abstract neural
network models. When J_{ij} = J_{ji} and W_{ij} = W_{ji}, the network has symmetric
connections. This paper compares EI systems with such connections and the corresponding S systems. Since there are many ways of setting J_{ij} and W_{ij} in the EI
system whilst keeping constant J_{ij} - W_{ij}, which is the effective weight in the S
system, one may intuitively expect the EI system to have a broader computational
range.
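For illustration, equation 1 is easy to integrate directly. The following forward-Euler sketch is ours, not the authors' code; the step size and default thresholds are assumptions chosen only to make the example runnable.

import numpy as np

def simulate_ei(J, W, I, T=1.0, Ty=1.0, tau_y=1.0, steps=5000, dt=0.01):
    # Forward-Euler integration of equation 1, with g(x) = [x - T]+ and
    # the linear inhibitory activation h(y) = y - Ty used in the text.
    I = np.asarray(I, dtype=float)
    n = J.shape[0]
    x, y = np.zeros(n), np.zeros(n)
    g = lambda u: np.maximum(u - T, 0.0)
    xs = []
    for _ in range(steps):
        gx = g(x)
        dx = -x + J @ gx - (y - Ty) + I
        dy = (-y + W @ gx) / tau_y
        x, y = x + dt * dx, y + dt * dy
        xs.append(x.copy())
    return np.array(xs), y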
The response of either system to given inputs is governed by the location and linear stability of their fixed points. The S network is so defined as to have fixed
points \bar{x} (where \dot{x} = 0 in equation 3) that are the same as those (\bar{x}, \bar{y}) of the EI
network. In particular, \bar{x} depends on inputs I (the input-output sensitivity) via
\delta\bar{x} = (\mathbf{1} - JD_g + WD_g)^{-1}\, \delta I, where \mathbf{1} is the identity matrix, J and W are the
connection matrices, and D_g is a diagonal matrix with elements [D_g]_{ii} = g'(\bar{x}_i).
However, although the locations of the fixed points are the same for the EI and S
systems, the dynamical behaviors of the systems about those fixed points are quite
different, and this is what leads to their differing computational power.

To analyse the stability of the fixed points, consider for simplicity the case that \tau_y = 1 in the EI system, and that the matrices JD_g and WD_g commute, with eigenvalues
\lambda_k^J and \lambda_k^W respectively, for k = 1, ..., N, where N is the dimension of x. The local
deviations near the fixed points along each of the N modes will grow in time if the
real parts of the following values are positive:

\gamma_k^{EI} = -1 + \lambda_k^J/2 \pm \big((\lambda_k^J)^2/4 - \lambda_k^W\big)^{1/2}  for the EI system
\gamma_k^S = -1 - \lambda_k^W + \lambda_k^J  for the S system.

In the case that \lambda^J and \lambda^W are real, if the S system is unstable then the EI
system is also unstable: for if -1 + \lambda^J - \lambda^W > 0 then (\lambda^J)^2 - 4\lambda^W > (\lambda^J - 2)^2, and
so 2\gamma^{EI} = -2 + \lambda^J + ((\lambda^J)^2 - 4\lambda^W)^{1/2} > 0. However, if the EI system is oscillatory,
4\lambda^W > (\lambda^J)^2, then the S system is stable, since \gamma^S = -1 + \lambda^J - \lambda^W < -1 + \lambda^J - (\lambda^J)^2/4 =
-(1 - \lambda^J/2)^2 \le 0. Hence the EI system can be unstable and oscillatory while the S
system is stable.
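These exponents are easy to check numerically; the helper below is our own sketch of the formulas just derived, not code from the paper.

import numpy as np

def growth_exponents(lam_J, lam_W):
    # Real parts of the growth rates of one mode, for the EI system
    # (both branches) and for the S system.
    disc = np.sqrt(complex(lam_J ** 2 / 4.0 - lam_W))
    g_plus = (-1 + lam_J / 2 + disc).real
    g_minus = (-1 + lam_J / 2 - disc).real
    g_s = -1 + lam_J - lam_W
    return g_plus, g_minus, g_s

# A mode with 4*lam_W > lam_J**2 > 4: EI unstable and oscillatory, S stable.
print(growth_exponents(2.5, 2.0))    # (0.25, 0.25, -0.5)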
We are interested in the capacity of both systems to be selective amplifiers. This
means that there is a class of inputs I that should be comparatively boosted by
the system, whereas others should be comparatively suppressed. For instance, if
the cells represent the orientation of a bar at a point, then the mode containing a
unimodal, well-tuned 'bump' in orientation space should be enhanced compared
with poorly tuned inputs.25,1,18 However, if the cells represent oriented small
bars at multiple points in visual space, then isolated smooth and straight contours
should be enhanced compared with extended homogeneous textures.14,15
The quality of the systems will be judged according to how much selective amplification they can stably deliver. The critical trade-off is that the more the selected
mode is amplified, the more likely it is that, when the input is non-specific, the
system will be unstable to fluctuations in the direction of the selected mode, and
therefore will hallucinate spurious answers.
3 The Two Point System
A particularly simple case to consider has just two neurons (for the S system; two
pairs of neurons for the EI system) and weights
J = \begin{pmatrix} j_0 & j \\ j & j_0 \end{pmatrix}, \qquad W = \begin{pmatrix} w_0 & w \\ w & w_0 \end{pmatrix}
The idea is that each node coarsely models a group of neurons, and the interactions between neurons within a group (j_0 and w_0) are qualitatively different from
interactions between neurons in different groups (j and w). The form of selective amplification here is that symmetric or ambiguous inputs I^a = I(1,1) should be suppressed compared with asymmetric inputs I^b = I(1,0) (and, equivalently, I(0,1)).
In particular, given I^a, the system should not spontaneously generate a response
with x_1 significantly different from x_2. Define the fixed points to be \bar{x}_1^a = \bar{x}_2^a > T
under I^a and \bar{x}_1^b > T > \bar{x}_2^b under I^b, where T is the threshold of the excitatory
neurons. These relationships will be true across a wide range of input levels I. The
ratio

R = \frac{d\bar{x}_1^b/dI}{d\bar{x}_1^a/dI} = \frac{1 + ((w_0 + w) - (j_0 + j))}{1 + (w_0 - j_0)} = 1 + \frac{w - j}{1 + (w_0 - j_0)} \qquad (4)

of the average relative responses as the input level I changes is a measure of how
the system selectively amplifies the preferred or consistent inputs against ambiguous ones. This measure is appropriate only when the fluctuations of the system
from the fixed points \bar{x}^a and \bar{x}^b are well behaved. We will show that this requirement permits larger values of R in the EI system than in the S system, suggesting that
the EI system can be a more powerful selective amplifier.

Figure 1: Phase portraits for the S system in the two point case (panels A-D, with axes x_1 and x_2; the left pair illustrates the symmetry preserving network, the right pair the symmetry breaking network). A;B) Evolution in response to I^a \propto (1,1) and I^b \propto (1,0) for parameters for which the response to I^a is stably symmetric. C;D) Evolution in response to I^a and I^b for parameters for which the symmetric response to I^a is unstable, inducing two extra equilibrium points. The dotted lines show the thresholds T for g(x).
In the S system, the stabilities are governed by \gamma^S = -(1 + w_0 - j_0) for the single
mode of deviation x_1 - \bar{x}_1^b around fixed point b, and \gamma_\pm^S = -(1 + (w_0 \pm w) - (j_0 \pm j))
for the two modes of deviation X_\pm \equiv (x_1 - \bar{x}_1) \pm (x_2 - \bar{x}_2) around fixed point
a. Since we only consider cases where the input-output relationship d\bar{x}/dI of the
fixed points is well defined, this means \gamma^S < 0 and \gamma_+^S < 0. However, for some
interaction parameters, there are two extra (uneven) fixed points \bar{x}_1^a \neq \bar{x}_2^a for (the
even) input I^a. Dynamical systems theory dictates that these two uneven fixed points
will be stable and that they will appear when the '-' mode of the perturbation
around the even fixed point \bar{x}_1^a = \bar{x}_2^a is unstable. The system then breaks symmetry
in the inputs, i.e. the motion trajectory diverges from the (unstable) even fixed point to
one of the (stable) uneven ones. To avoid such cases, it is necessary that \gamma_-^S < 0.
Combining this condition with equation 4 and \gamma^S < 0 leads to an upper bound on
the amplification ratio: R^S < 2. Figure 1 shows phase portraits and the equilibrium
points of the S system under inputs I^a and I^b for the two different system parameter
regions.
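Equation 4 and the \gamma_-^S < 0 condition can be evaluated in a few lines; the sketch below is ours.

def amplification_ratio(j0, j, w0, w):
    # R from equation 4, plus the two S-system exponents around the
    # symmetric fixed point; gamma_minus < 0 is required to avoid
    # symmetry breaking, which forces R < 2 for the S system.
    R = 1.0 + (w - j) / (1.0 + (w0 - j0))
    gamma_plus = -(1.0 + (w0 + w) - (j0 + j))
    gamma_minus = -(1.0 + (w0 - w) - (j0 - j))
    return R, gamma_plus, gamma_minus

# For the Figure 2 parameters gamma_minus > 0: the S system breaks symmetry.
print(amplification_ratio(j0=2.1, j=0.4, w0=1.11, w=0.9))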
As we have described, the EI system has exactly the same fixed points as the S system, but they are more unstable. The stability around the symmetric fixed point
under I^a is governed by \gamma_\pm^{EI} = -1 + (j_0 \pm j)/2 \pm ((j_0 \pm j)^2/4 - (w_0 \pm w))^{1/2}, while that
of the asymmetric fixed point under I^b or I^a by \gamma^{EI} = -1 + j_0/2 \pm (j_0^2/4 - w_0)^{1/2}. Consequently, when there are three fixed points under I^a, all of them can be unstable
in the EI system, and the motion trajectory cannot converge to any of them. In this
case, when both the '+' and '-' modes around the symmetric fixed point \bar{x}_1^a = \bar{x}_2^a
are unstable, the global dynamics constrains the motion trajectory to a limit cycle
around the fixed points. If x_1(t) \approx x_2(t) on this limit cycle, then the EI system will
not break symmetry, even though the selective amplification ratio R > 2. Figure 2
demonstrates the performance of the EI system in this regime. Figures 2A;B show
various aspects of the response to input I^a, which should be comparatively suppressed. The system oscillates in such a way that x_1 and x_2 tend to be extremely
similar (including being synchronised). Figures 2C;D show the same aspects of
the response to I^b, which should be amplified. Again the network oscillates, and,
although g(x_2) is not driven completely to 0 (it peaks at 15), it is very strongly
dominated by g(x_1); further, the overall response is much stronger than in
Figures 2A;B.
Figure 2: Projections of the response of the EI system. A;B) Evolution of the response to I^a: A) x_1 vs y_1, and B) g(x_1) - g(x_2) (solid) and g(x_1) + g(x_2) (dotted) across time, showing that the x_1 = x_2 mode dominates and the growth of x_1 - x_2 is strongly suppressed. C;D) Evolution of the response to I^b; here the response of x_1 always dominates that of x_2 over the oscillations. The difference between g(x_1) + g(x_2) and g(x_1) - g(x_2) is too small to be evident on the figure. Note the difference in scales between A;B and C;D. Here j_0 = 2.1, j = 0.4, w_0 = 1.11, w = 0.9.

The pertinent difference between the EI and S systems is that while the S system
(when h(y) is linear) can only roll down the energy landscape to a stable fixed
point and break the input symmetry, the EI system can resort to global limit cycles
x_1(t) \approx x_2(t) between unstable fixed points and maintain input symmetry. This is
often (robustly over a large range of parameters) the case even when the '-' mode
is locally more unstable (at the symmetric fixed point) than the '+' mode, because
the '-' mode is much more strongly suppressed when the motion trajectory enters the
subthreshold region x_1 < T and x_2 < T. As we can see in Figures 2A;B, this acts
to suppress any overall growth in the '-' mode. Since the asymmetric fixed point
under I^b is just as unstable as that under I^a, the EI system responds to asymmetric
input I^b also by a stable limit cycle around the asymmetric fixed point.
Since the response of the system to either pattern is oscillatory, there are
various reasonable ways of evaluating the relative response ratio. Using the mean
responses of the system during a cycle to define \bar{x}, the selective amplification ratio
in Figure 2 is R^{EI} = 97, which is significantly higher than the maximum R^S = 2 available from
the S system. This is a simple existence proof of the superiority of the EI system for
amplification, albeit at the expense of oscillations. In fact, in this two point case, it
can be shown that any meaningful behavior of the S system (including symmetry
breaking) can be qualitatively replicated in the EI system, but not vice-versa.
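The measured ratio can be estimated by reusing the `simulate_ei` sketch from section 2 on the two point system with the Figure 2 parameters. This harness is our own; the thresholds and integration settings are assumptions, so expect the right order of magnitude rather than R^{EI} = 97 exactly.

import numpy as np

def mean_response(I, level):
    J = np.array([[2.1, 0.4], [0.4, 2.1]])
    W = np.array([[1.11, 0.9], [0.9, 1.11]])
    xs, _ = simulate_ei(J, W, level * np.asarray(I, dtype=float),
                        steps=20000, dt=0.005)
    return xs[len(xs) // 2:].mean(axis=0)      # discard the transient

def r_ei(level=1.0, dlevel=0.1):
    # Finite-difference estimate of (d xbar_1^b / dI) / (d xbar_1^a / dI)
    # from mean responses over the limit cycle.
    dxa = (mean_response((1, 1), level + dlevel)
           - mean_response((1, 1), level))[0]
    dxb = (mean_response((1, 0), level + dlevel)
           - mean_response((1, 0), level))[0]
    return dxb / dxa

print(r_ei())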
4 The Orientation System
Symmetric recurrent networks have recently been investigated in great depth for
representing and calculating a wide variety of quantities, including orientation
tuning. The idea behind the recurrent networks is that they should take noisy (and
perhaps weakly tuned) input and selectively amplify the component that represents an orientation \theta in the input, leaving a tuned pattern of excitation across the
population that faithfully represents the underlying input. Based on the analysis
above, we can expect that if an S network amplifies a tuned input enough, then it
will break input symmetry given an untuned input and thus hallucinate a tuned
response. However, an EI system, in the same oscillatory regime as for the two
point system, can maintain an untuned and suppressed response to untuned inputs.

We designed a particular EI system with a high selective amplification factor
for tuned inputs I(\theta). In this case, units x_i, y_i have preferred orientations \theta_i =
(i - N/2)\pi/N for i = 1, ..., N; the connection matrix J is Toeplitz with Gaussian
tuning and, for simplicity, [W]_{ij} does not depend on i, j. Figure 3B (and inset)
shows the output of two units in the network in response to a tuned input, showing the nature of the oscillations and the way that selectivity builds up over the
course of each period. Figure 3C shows the activities of all the units at three particular phases of the oscillation. Figure 3A shows how the mean activity of the most
activated unit scales with the levels of tuned and untuned input. The network
amplifies the tuned inputs dramatically more (note the logarithmic scale). The S
system breaks symmetry to the untuned input (b = 0) for these weights. If the
weights are scaled uniformly by a factor of 0.22, then the S system is appropriately
stable; however, the magnification ratio is then 4.2 rather than something greater than
1000 in the EI system.

Figure 3: The Gaussian orientation network. A) Mean response of the \theta_i = 0\deg unit in the network as a function of a (untuned) or b (tuned), with a log scale. B) Activity of the \theta_i = 0\deg (solid) and \theta_i = 30\deg (dashed) units in the network over the course of the positive part of an oscillation. Inset: activity of these units over all time. C) Activity of all the units at the three times shown as (i), (ii) and (iii) in (B): (i) (dashed) is in the rising phase of the oscillation; (ii) (solid) is at the peak; and (iii) (dotted) is during the falling phase. Here, the input is I_i = a + b e^{-\theta_i^2/2\sigma^2} with \sigma = 13\deg, and the Toeplitz weights are J_{ij} = (3 + 21 e^{-(\theta_i - \theta_j)^2/2\sigma'^2})/N, with \sigma' = 20\deg and W_{ij} = 23.5/N.
The orientation system can be understood to a large qualitative degree by looking
at its two-point cousins. Many of the essential constraints on the system are determined by the behavior of the system when the mode with x_i = x_j dominates,
in which case the complex non-linearities induced by orientation tuning or cut-off
and its equivalents are irrelevant. Let J(f) and W(f) for (angular) frequency f
be the Fourier transforms of J(i - j) \equiv [J]_{ij} and W(i - j) \equiv [W]_{ij}, and define
\lambda(f) = Re\{-1 + J(f)/2 + i(W(f) - J^2(f)/4)^{1/2}\}. Then, let f^* > 0 be the frequency
such that \lambda(f^*) \ge \lambda(f) for all f > 0. This is the non-translation-invariant mode
that is most likely to cause instabilities for translation invariant behavior. A two
point system that closely corresponds to the full system can be found by solving
the simultaneous equations:

j_0 + j = J(0) \qquad w_0 + w = W(0) \qquad j_0 - j = J(f^*) \qquad w_0 - w = W(f^*)

This design equates the x_1 = x_2 mode in the two point system with the f = 0 mode
in the orientation system and the x_1 = -x_2 mode with the f = f^* mode. For smooth
J and W, f^* is often the smallest or one of the smallest non-zero spatial frequencies. It is easy to see that the two systems are exactly equivalent in the translation
invariant mode x_i = x_j under translation invariant input I_i = I_j in both the linear and nonlinear regimes. The close correspondence between the two systems in
other dynamic regimes is supported by simulation results.16 Quantitatively, however, the amplification ratio differs between the two systems.
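The reduction can be carried out mechanically. The sketch below is ours; it builds circulant approximations to the Figure 3 weights, locates f^*, and solves the four equations for the two point parameters.

import numpy as np

def two_point_reduction(N=100):
    # Circulant approximations to the Figure 3 weights.
    d = np.arange(N)
    d = np.minimum(d, N - d)                 # circular displacement
    dtheta = d * np.pi / N                   # angular separation
    sigma = np.deg2rad(20.0)
    Jrow = (3 + 21 * np.exp(-dtheta ** 2 / (2 * sigma ** 2))) / N
    Wrow = np.full(N, 23.5 / N)
    Jf, Wf = np.fft.fft(Jrow).real, np.fft.fft(Wrow).real
    f = np.arange(1, N // 2)
    disc = np.sqrt((Wf[f] - Jf[f] ** 2 / 4).astype(complex))
    lam = (-1 + Jf[f] / 2 + 1j * disc).real  # Re of the growth rate
    fstar = f[np.argmax(lam)]
    j0, j = (Jf[0] + Jf[fstar]) / 2, (Jf[0] - Jf[fstar]) / 2
    w0, w = (Wf[0] + Wf[fstar]) / 2, (Wf[0] - Wf[fstar]) / 2
    return fstar, j0, j, w0, w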
5 Conclusions
We have studied the dynamical behavior of networks with symmetrical and asymmetrical connections and have shown that the extra degrees of dynamical freedom
of the latter can be put to good computational use, e.g. global dynamic stability via
local instability. Many applications of recurrent networks involve selective amplification - and the selective amplification factors for asymmetrical networks can
greatly exceed those of symmetrical networks. We showed this in the case of orientation selectivity. However, it was originally inspired by a similar result in contour
enhancement and texture segregation for which the activity of isolated oriented
line elements should be enhanced if they form part of a smooth contour in the
input and suppressed if they form part of an extended homogeneous texture. Further, the output should be homogeneous if the input is homogeneous (in the same
way that the orientation network should not hallucinate orientations from untuned
input). In this case, similar analysis16 shows that stable contour enhancement is
limited to just a factor of 3.0 for the S system (but not for the EI system), suggesting an explanation for the poor performance of a slew of S systems in the literature
designed for this purpose. We used a very simple system with just two pairs of
neurons to develop analytical intuitions which are powerful enough to guide our
design of the more complex systems. We expect that the details of our model, with
the exact pairing of excitatory and inhibitory cells and the threshold non-linearity,
are not crucial for the results.
Inhibition in the cortex is, of course, substantially more complicated than we have
suggested. In particular, inhibitory cells do have somewhat faster (though finite)
time constants than excitatory cells, and are also not so subject to short term plasticity effects such as spike rate adaptation. Nevertheless, oscillations of various
sorts can certainly occur, suggesting the relevance of the computational regime
that we have studied.
References
[1] Ben-Yishai, R, Bar-Or, RL & Sompolinsky, H (1995) PNAS 92:3844-3848.
[2] Braun, J, Neibur, E, Schuster, HG & Koch, C (1994) Society for Neuroscience Abstracts 20:1665.
[3] Carandini, M & Ringach, DL (1997) Vision Research 37:3061-3071.
[4] Cohen, MA & Grossberg, S (1983) IEEE Transactions on Systems, Man and Cybernetics 13:815-826.
[5] Eckhorn, R, et al (1988) Biological Cybernetics 60:121-130.
[6] Ermentrout, GB & Cowan, JD (1979) Journal of Mathematical Biology 7:265-280.
[7] Golomb, D, Wang, XJ & Rinzel, J (1996) Journal of Neurophysiology 75:750-769.
[8] Gray, CM, Konig, P, Engel, AK & Singer, W (1989) Nature 338:334-337.
[9] Grossberg, S (1988) Neural Networks 1:17-61.
[10] Hopfield, JJ (1982) PNAS 79:2554-2558.
[11] Hopfield, JJ (1984) PNAS 81:3088-3092.
[12] Konig, P, Janosch, B & Schillen, TB (1992) Neural Computation 4:666-681.
[13] Li, Z (1995) In JL van Hemmen et al, eds, Models of Neural Networks, Vol. 2. NY: Springer.
[14] Li, Z (1997) In KYM Wong, I King & DY Yeung, editors, Theoretical Aspects of Neural Computation. Hong Kong: Springer-Verlag.
[15] Li, Z (1998) Neural Computation 10:903-940.
[16] Li, Z & Dayan, P (1999) to be published in Network: Computation in Neural Systems.
[17] Li, Z & Hopfield, JJ (1989) Biological Cybernetics 61:379-392.
[18] Pouget, A, Zhang, KC, Deneve, S & Latham, PE (1998) Neural Computation 10:373-401.
[19] Samsonovich, A & McNaughton, BL (1997) Journal of Neuroscience 17:5900-5920.
[20] Seung, HS (1996) PNAS 93:13339-13344.
[21] Seung, HS, et al (1998) NIPS 10.
[22] Sompolinsky, H, Golomb, D & Kleinfeld, D (1990) PNAS 87:7200-7204.
[23] Stein, PSG, et al (1997) Neurons, Networks, and Motor Behavior. Cambridge, MA: MIT Press.
[24] Steriade, M, McCormick, DA & Sejnowski, TJ (1993) Science 262:679-685.
[25] Suarez, H, Koch, C & Douglas, R (1995) Journal of Neuroscience 15:6700-6719.
[26] von der Malsburg, C (1988) Neural Networks 1:141-148.
[27] Zhang, K (1996) Journal of Neuroscience 16:2112-2126.
COMPARING BIASES FOR MINIMAL NETWORK
CONSTRUCTION WITH BACK-PROPAGATION
Stephen José Hanson†
Bell Communications Research
Morristown, New Jersey 07960
Lorien Y. Pratt
Rutgers University
New Brunswick, New Jersey 08903
ABSTRACT
Rumelhart (1987) has proposed a method for choosing minimal or
"simple" representations during learning in Back-propagation
networks. This approach can be used to (a) dynamically select the
number of hidden units, (b) construct a representation that is
appropriate for the problem, and (c) thus improve the generalization
ability of Back-propagation networks. The method Rumelhart suggests
involves adding penalty terms to the usual error function. In this paper
we introduce Rumelhart's minimal networks idea and compare two
possible biases on the weight search space. These biases are compared
in both simple counting problems and a speech recognition problem.
In general, the constrained search does seem to minimize the number of
hidden units required, with an expected increase in local minima.
INTRODUCTION

Many supervised connectionist models use gradient descent in error to solve various
kinds of tasks (Rumelhart, Hinton & Williams, 1986). However, such gradient descent
methods tend to be "opportunistic" and can solve problems in an arbitrary way dependent
on starting point in weight space and peculiarities of the training set. For example, in
Figure 1 we show a "mesh" problem which consists of a random distribution of
exemplars from two categories. The spatial geometry of the categories imposes a meshed
or overlapping subset of the exemplars in the two dimensional feature space. As the
meshed part of the categories increases, the problem becomes more complex and must
involve the combination of more linear cuts in feature space and consequently more
nonlinear cuts for category separation. In the top left corner of Figure 1(a), we show a
mesh geometry requiring only three cuts for category separation. In the bottom center,
† Also a member of the Cognitive Science Laboratory, 221 Nassau Street, Princeton University, Princeton, New Jersey, 08542
1(b) is the projection of the three cut solution of the mesh in output space. In the top right
of this figure, 1(c) is a typical solution provided by back-propagation starting with 16
hidden units. This figure shows the two dimensional feature space in which 9 of the
line cuts are projected (the other 7 are outside the [0,1] unit plane).
Figure 1: Mesh problem (a), output space (b) and typical back-propagation solution (c).
Examining the weights in the next layer of the network indicates that in fact 7 of these 9
line segments are used in order to construct the output surface shown in Figure 1(b).
Consequently, the underlying feature relations determining the output surface and
category separation are arbitrary, more complex than necessary, and may result in
anomalous generalizations.
Rumelhart (1987) has proposed a way to increase the generalization capabilities of
learning networks which use gradient descent methods and to automatically control the
resources learning networks use, for example in terms of "hidden" units. His hypothesis
concerns the nature of the representation in the network: "... the simplest most robust
network which accounts for a data set will, on average, lead to the best generalization to
the population from which the training set has been drawn".
The basic approach involves adding penalty terms to the usual error function in order to
constrain the search and cause weights to differentially decay. This is similar to many
proposals in statistical regression where a "simplicity" measure is minimized along with
the error term, and is sometimes referred to as "biased" regression (Rawlings, 1988).
Basically, the statistical concept of biased regression derives from parameter estimation
approaches that attempt to achieve a best linear unbiased estimator ("BLUE"). By
definition an unbiased estimator is one with the lowest possible variance and
theoretically, unless there is significant collinearity¹ or nonlinearity amongst the
1. For example, Ridge regression is a special case of biased regression which attempts to make a singular
correlation matrix non-singular by adding a small arbitrary constant to the diagonal of the matrix. This
increase in the diagonal may lower the impact of the off-diagonal elements and thus reduce the effects of
collinearity.
variables, a least squares estimator (LSE) can also be shown to be a BLUE. If on the
other hand input variables are correlated or nonlinear with the output variables (as is the
case in back-propagation), then there is no guarantee that the LSE will also be unbiased.
Consequently, introducing a bias may actually reduce the variance of the estimator
below that of the theoretically unbiased estimator.
Since back-propagation is a special case of multivariate nonlinear regression methods, we
must immediately give up on achieving a BLUE. Worse yet, the input variables are also
very likely to be collinear, in that input data are typically expected to be used for feature
extraction. Consequently, the neural network framework leads naturally to the
exploration of biased regression techniques. Unfortunately, it is not obvious what sorts of
biases ought to be introduced and whether they may be problem dependent.
Furthermore, the choice of particular biases probably determines the particular
representation that is chosen and its nature in terms of size, structure and "simplicity".
This representation bias may in turn induce generalization behavior which is greater in
accuracy with larger coverage over the domain. Nonetheless, since there is no particular
motivation for minimizing a least squares estimator, it is important to begin exploring
possible biases that would lead to lower variance and more robust estimators.

In this paper we explore two general types of bias which introduce explicit constraints on
the hidden units. First we discuss the standard back-propagation method, various past
methods of biasing which have been called "weight decay", the properties of our biases,
and finally some simple benchmark tests using parity and a speech recognition task.
BACK-PROPAGATION

The Back-propagation method (Rumelhart, Hinton & Williams, 1986) is a supervised learning technique using a gradient
descent in an error variable. The error is established by comparing an output value to a
desired or expected value. These errors can be accumulated over the sample:

E = \sum_s \sum_i (y_{si} - \hat{y}_{si})^2 \qquad (1)

Assuming the output function is differentiable, a gradient of the error can be found,
and we require that the error be decreasing, with weight changes following

\Delta w_{ij} \propto -\frac{\partial E}{\partial w_{ij}} \qquad (2)

Over multiple layers we pass back a weighted sum of each derivative from units above.
WEIGHT DECAY
Past work² using biases has generally been based on ad hoc arguments that weights
should differentially decay, allowing large weights to persist and small weights to tend
towards zero sooner. Apparently, this would tend to concentrate more of the input into a
smaller number of weights. Generally, the intuitive notion is to somehow reduce the
complexity of the network as defined by the number of connections and number of
hidden units. A simple but inefficient way of doing this is to include a weight decay term
in the usual delta updating rule, causing all weights to decay on each learning step (where
w = w_{ij} throughout):

w_{n+1} = \alpha \left(-\frac{\partial E}{\partial w}\right)_n + \beta w_n \qquad (3)
Solving this difference equation shows that for \beta < 1.0 the weights decay
exponentially over steps towards zero:

w_n = \alpha \sum_{i=1}^{n} \beta^{n-i} \left(-\frac{\partial E}{\partial w}\right)_i + \beta^n w_0 \qquad (4)
This approach introduces the decay term in the derivative itself, causing error terms to
also decrease over learning steps, which may not be desirable.
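In code, the decaying update of equation 3 is a one-liner inside the training loop. This is a generic sketch of ours, with `grad_E` standing for whatever back-propagated gradient the network supplies:

def decay_step(w, grad_E, alpha=0.1, beta=0.99):
    # One update of equation 3: w_{n+1} = alpha * (-dE/dw) + beta * w_n.
    # With beta < 1 every weight decays exponentially towards zero unless
    # the error gradient keeps it inflated.
    return alpha * (-grad_E) + beta * w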
BIAS
The sort of weight decay just discussed can also be derived from general consideration of
"costs" on weights. For example it is possible to consider E with a bias term which in the
simple decay case is quadratic in the weight value (i.e. w²).
We now combine this bias with E, producing an objective function that includes both the
error term and this bias function:

O = E + B \qquad (5)

where we now want to minimize

\frac{\partial O}{\partial w_{ij}} = \frac{\partial E}{\partial w_{ij}} + \frac{\partial B}{\partial w_{ij}} \qquad (6)
In the quadratic case the updating rule becomes

w_{n+1} = \alpha \left(-\frac{\partial E}{\partial w} - 2 w_n\right) + w_n \qquad (7)

Solving this difference equation yields an updating rule of the same form as equation 4:

w_n = \alpha \sum_{i=1}^{n} (1 - 2\alpha)^{n-i} \left(-\frac{\partial E}{\partial w}\right)_i + (1 - 2\alpha)^n w_0 \qquad (8)
In this case, however, without introduction of other parameters, α is both the learning rate
and related to the decay term, and must be strictly < 1/2 for weight decay.

2. Most of the work discussed here has not been previously published but nonetheless has entered into general
use in many connectionist models, and was recently summarized on the Connectionist Bulletin Board by
John Kruschke.
Uniform weight decay has a disadvantage in that large weights are decaying at the same
rate as small weights. It is possible to design biases that influence weights only when
they are relatively small, or even in a particular range of values. For example, Rumelhart
has entertained a number of biases; one form in particular that we will also explore is
based on a rectangular hyperbolic function:

B = \frac{w^2}{1 + w^2} \qquad (9)

It is informative to examine the derivative associated with this function in order to
understand its effect on the weight updates:

-\frac{\partial B}{\partial w_{ij}} = -\frac{2w}{(1 + w^2)^2} \qquad (10)
This derivative is plotted in Figure 2 (indicated as Rumelhart) and is non-monotonic,
showing a strong differential effect on small weights (+ or -), pushing them towards zero,
while near-zero and large weight values are not significantly affected.
BIAS PER UNIT

It is possible to consider a bias on each hidden unit weight group. This has the potentially
desirable effect of isolating weight changes to hidden unit weight groups and could
effectively eliminate hidden units. Consequently, the hidden units directly
determine the bias. In order to do this, first define

w_i = \sum_j |w_{ij}| \qquad (11)

where i indexes the ith hidden unit.
Hyperbolic Bias

Now consider a function similar to Rumelhart's but this time with w_i, the ith hidden
group magnitude, as the variable:

B = \sum_i \frac{\lambda w_i}{1 + \lambda w_i} \qquad (12)

The new gradient includes the term from the bias, which is

-\frac{\partial B}{\partial w_{ij}} = -\frac{\lambda\, \mathrm{sgn}(w_{ij})}{(1 + \lambda w_i)^2} \qquad (13)
A similar kind of bias would be to consider the negative exponential:
(13)
181
182
Hanson and Pratt
(14)
This bias is similar to the hyperbolic bias tenn as above but involves the exponential
which potentially produce more unifonn and gradual rate changes towards zero,
dB
sgn(wij)
( e ~Wi) .
--=
dWij
(15)
The behavior of these two biases (hyperbolic, exponential) is shown as a function of
weight magnitude in Figure 2. Notice that the exponential bias term is more similar in
slope change to Rumelhart's (even though his is non-monotonic) than the hyperbolic as
the weight magnitude to a hidden unit increases.
Figure 2: Bias function behavior of Rumelhart's, hyperbolic and exponential biases (derivative as a function of weight value).
Obviously there are many more kinds of bias that one can consider. These two were
chosen in order to provide a systematic test of varying biases and exploring their
differential effectiveness in minimizing network complexity.
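For concreteness, the three bias derivatives (equations 10, 13 and 15) can be written as gradient terms added to the back-propagated error gradient. The sketch below is ours; it assumes a weight matrix of shape (hidden units, inputs) so that each row is one hidden unit's weight group.

import numpy as np

def rumelhart_grad(w):
    # dB/dw of B = w**2 / (1 + w**2), elementwise (equation 10).
    return 2 * w / (1 + w ** 2) ** 2

def hyperbolic_grad(w, lam=0.01):
    # Per-unit bias with w_i = sum_j |w_ij| over each row (equation 13).
    wi = np.abs(w).sum(axis=1, keepdims=True)
    return lam * np.sign(w) / (1 + lam * wi) ** 2

def exponential_grad(w, lam=0.01):
    # Per-unit negative-exponential bias (equation 15).
    wi = np.abs(w).sum(axis=1, keepdims=True)
    return lam * np.sign(w) * np.exp(-lam * wi)

# Each returns dB/dw_ij; gradient descent on O = E + B subtracts it along
# with the usual error gradient (equation 6).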
SOME COMPARISONS

Parity

These biased Back-propagation methods were applied to several counting problems and
to a speech (digit) recognition problem. In the following graphs, for example, we show
the results of 100 runs of XOR and 4-bit parity at η = .1 (learning rate) and α = .8
(moving average), starting with 10 hidden units. The parameter λ was optimized for the
bias runs.
Figure 3: Exclusive OR runs for standard, hyperbolic and exponential biasing.
Shown are runs for the standard case without biases, the hyperbolic bias and the
exponential bias. Once a solution was reached, all hidden units were tested individually
by removing each of them one at a time from the network and then testing on the training
set. Any hidden unit which was unnecessary was removed for data analysis. Only the
number of these "functional units" is reported in the histograms. Notice that the number of
hidden units decreases with bias runs. An analysis of variance (statistical test) verified
this improvement for both the hyperbolic and exponential over the standard. Also note
that the exponential is significantly better than the hyperbolic. This is also confirmed for
the 4-bit parity case as shown in Figure 4.
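The "functional units" count was obtained by a simple ablation test; a sketch of that procedure appears below (ours — `predict` is a hypothetical stand-in for a forward pass with only the listed hidden units active):

def functional_units(hidden_ids, predict, X, y):
    # Remove each hidden unit in turn; keep it only if removing it breaks
    # performance on the training set.
    kept = list(hidden_ids)
    for h in list(hidden_ids):
        trial = [u for u in kept if u != h]
        if all(predict(x, units=trial) == t for x, t in zip(X, y)):
            kept = trial        # unit h was unnecessary
    return kept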
Figure 4: Four-bit parity runs for standard, hyperbolic and exponential biasing.
Speech Recognition

Samples of 10 spoken digits (0-9) each were collected (same speaker throughout; D.J.
Burr kindly supplied the data). Samples were then preprocessed using FFTs, retaining the
first 12 cepstral coefficients. To avoid ceiling effects, only two tokens each of the 10
digits were used for training ("0", "0", "1", "1", ..., "9", "9") for each network. Eight such 2-token samples were used for replications. Another set of 50 spoken digits (5 samples of
each of the 10 digits) was collected for transfer. All runs were matched across methods
for number of learning sweeps (300), η = .05, α = .2, and λ = .01, which were optimized for
the exponential bias; a sketch of the preprocessing appears below. Shown in the table that follows is the result of the 8 replications for
the standard and the exponential bias.
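One standard recipe for such coefficients is the real cepstrum of each windowed frame. The paper states only "FFTs retaining the first 12 cepstral coefficients", so the exact windowing below is our assumption:

import numpy as np

def cepstral_features(frame, n_coeffs=12):
    # Real cepstrum of one windowed speech frame; keep the first 12
    # coefficients as the feature vector.
    spectrum = np.abs(np.fft.rfft(frame * np.hamming(len(frame))))
    cepstrum = np.fft.irfft(np.log(spectrum + 1e-12))
    return cepstrum[:n_coeffs]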
Table 1: Eight replications with transfer for standard and exponential bias (rows r1-r8 with means; columns: transfer rate and number of hidden units for the standard and the constrained networks).
In this case there appears to be both an improvement in the average number of hidden units
(functional ones) and in transfer. A typical correlation of the improved transfer and reduced
hidden unit usage for a single replication is plotted in the next graph.
Figure 5: Transfer as a function of hidden unit usage (number of hidden units) for a single replication.
We note that the introduction of biases decreases the probability of convergence relative to
the standard case (as many as 75% of the parity runs did not converge within the criterion
number of sweeps). Since the search problem is made more difficult by introducing
biases, it now becomes even more important to explore methods for improving
convergence, similar for example to simulated annealing (Kirkpatrick, Gelatt & Vecchi,
1983).
CONCLUSIONS

Minimal networks were defined and two types of bias were compared in simple
counting problems and a speech recognition problem. In the counting problems, under
biasing conditions the number of hidden units tended to decrease towards the minimum
required for the problem, although with a concomitant decrease in convergence rate. In
the speech problem, also under biasing conditions, the number of hidden units tended to
decrease as the transfer rate tended to improve.
Acknowledgements

We thank Dave Rumelhart for discussions concerning the minimal network concept, and the
Bellcore connectionist group and members of the Princeton Cognitive Science Lab for a
lively environment for the development of these ideas.
References

Kirkpatrick, S., Gelatt, C. D., & Vecchi, M. P., Optimization by simulated annealing.
Science, 220, 671-680, (1983).

Rawlings, J. O., Applied Regression Analysis. Wadsworth & Brooks/Cole, (1988).

Rumelhart, D. E., Personal Communication, Princeton, (1987).

Rumelhart, D. E., Hinton, G. E., & Williams, R., Learning Internal Representations by
error propagation. Nature, (1986).
Human Interactions
Nuria M. Oliver, Barbara Rosario and Alex Pentland
20 Ames Street, E15-384C,
Media Arts and Sciences Laboratory, MIT
Cambridge, MA 02139
{nuria, rosario, sandy}@media.mit.edu
Abstract
We describe a real-time computer vision and machine learning system for modeling and recognizing human actions and interactions.
Two different domains are explored: recognition of two-handed
motions in the martial art 'Tai Chi', and multiple-person interactions in a visual surveillance task. Our system combines top-down
with bottom-up information using a feedback loop, and is formulated with a Bayesian framework. Two different graphical models
(HMMs and Coupled HMMs) are used for modeling both individual
actions and multiple-agent interactions, and CHMMs are shown to
work more efficiently and accurately for a given amount of training. Finally, to overcome the limited amounts of training data,
we demonstrate that 'synthetic agents' (Alife-style agents) can be
used to develop flexible prior models of the person-to-person interactions.
1 INTRODUCTION
We describe a real-time computer vision and machine learning system for modeling
and recognizing human behaviors in two different scenarios: (1) complex, two-handed action recognition in the martial art of Tai Chi and (2) detection and
recognition of individual human behaviors and multiple-person interactions in a
visual surveillance task. In the latter case, the system is particularly concerned
with detecting when interactions between people occur, and classifying them.
Graphical models, such as Hidden Markov Models (HMMs) [6] and Coupled Hidden Markov Models (CHMMs) [3, 2], seem appropriate for modeling and classifying human behaviors because they offer dynamic time warping, a well-understood
training algorithm, and a clear Bayesian semantics for both individual (HMMs)
and interacting or coupled (CHMMs) generative processes. A major problem with
this data-driven statistical approach, especially when modeling rare or anomalous
behaviors, is the limited number of training examples. A major emphasis of our
work, therefore, is on efficient Bayesian integration of both prior knowledge with
evidence from data. We will show that for situations involving multiple independent (or partially independent) agents the Coupled HMM approach generates much
better results than traditional HMM methods.
In addition, we have developed a synthetic agent or Alife modeling environment for
building and training flexible a priori models of various behaviors using software
agents. Simulation with these software agents yields synthetic data that can be
used to train prior models. These prior models can then be used recursively in a
Bayesian framework to fit real behavioral data.
This synthetic agent approach is a straightforward and flexible method for developing prior models, one that does not require strong analytical assumptions to be
made about the form of the priors¹. In addition, it has allowed us to develop robust models even when there are only a few examples of some target behaviors. In
our experiments we have found that by combining such synthetic priors with limited real data we can easily achieve very high accuracies at recognition of different
human-to-human interactions.
The paper is structured as follows: section 2 presents an overview of the system,
the statistical models used for behavior modeling and recognition are described in
section 3. Section 4 contains experimental results in two different real situations.
Finally section 5 summarizes the main conclusions and our future lines of research .
2 VISUAL INPUT
We have experimented using two different types of visual input. The first is a realtime, self-calibrating 3-D stereo blob tracker (used for the Tai Chi scenario) [1], and
the second is a real-time blob-tracking system [5] (used in the visual surveillance
task). In both cases an Extended Kalman filter (EKF) tracks the blobs' location,
coarse shape, color pattern, and velocity. This information is represented as a
low-dimensional, parametric probability distribution function (PDF) composed of
a mixture of Gaussians, whose parameters (sufficient statistics and mixing weights
for each of the components) are estimated using Expectation Maximization (EM).
This visual input module detects and tracks moving objects - body parts in Tai
Chi and pedestrians in the visual surveillance task - and outputs a feature vector
describing their motion, heading, and spatial relationship to all nearby moving
objects. These output feature vectors constitute the temporally ordered stream
of data input to our stochastic state-based behavior models. Both HMMs and
CHMMs, with varying structures depending on the complexity of the behavior, are
used for classifying the observed behaviors.
Both top-down and bottom-up flows of information are continuously managed and
combined for each moving object within the scene. The Bayesian graphical models
offer a mathematical framework for combining the observations (bottom-up) with
complex behavioral priors (top-down) to provide expectations that will be fed back
to the input visual system.
3 VISUAL UNDERSTANDING VIA GRAPHICAL MODELS: HMMs and CHMMs
Statistical directed acyclic graphs (DAGs) or probabilistic inference networks (PINs
hereafter) can provide a computationally efficient solution to the problem of time
series analysis and modeling . HMMs and some of their extensions, in particular
CHMMs, can be viewed as a particular and simple case of temporal PIN or DAG.
Graphically, Markov models are often depicted 'rolled-out in time' as Probabilistic
Inference Networks, such as in figure 1. PINs present important advantages that are
relevant to our problem: they can handle incomplete data as well as uncertainty;
they are trainable and easier to avoid overfitting; they encode causality in a natural
way; there are algorithms for both doing prediction and probabilistic inference;
they offer a framework for combining prior knowledge and data; and finally they
are modular and parallelizable.
Traditional HMMs offer a probabilistic framework for modeling processes that have
structure in time. They offer clear Bayesian semantics, efficient algorithms for state
and parameter estimation, and they automatically perform dynamic time warping .
An HMM is essentially a quantization of a system's configuration space into a
small number of discrete states, together with probabilities for transitions between
1 Note that our priors have the same form as our posteriors, namely, they are graphical
models.
Figure 1: Graphical representation of a HMM and a CHMM rolled-out in time
states. A single finite discrete variable indexes the current state of the system. Any
information about the history of the process needed for future inferences must be
reflected in the current value of this state variable.
However many interesting real-life problems are composed of multiple interacting
processes, and thus merit a compositional representation of two or more variables.
This is typically the case for systems that have structure both in time and space.
With a single state variable, Markov models are ill-suited to these problems. In
order to model these interactions a more complex architecture is needed.
Extensions to the basic Markov model generally increase the memory of the system (durational modeling), providing it with compositional state in time. We are
interested in systems that have compositional state in space, e.g., more than one
simultaneous state variable. It is well known that the exact solution of extensions
of the basic HMM to 3 or more chains is intractable. In those cases approximation
techniques are needed ([7, 4, 8, 9]). However, it is also known that there exists an
exact solution for the case of 2 interacting chains, as it is our case [7, 2].
We therefore use two Coupled Hidden Markov Models (CHMMs) for modeling two
interacting processes, whether they are separate body parts or individual humans.
In this architecture state chains are coupled via matrices of conditional probabilities
modeling causal (temporal) influences between their hidden state variables. The
graphical representation of CHMMs is shown in figure 1. From the graph it can be
seen that for each chain, the state at time t depends on the state at time t - 1 in
both chains. The influence of one chain on the other is through a causal link.
In this paper we compare performance of HMMs and CHMMs for maximum a
posteriori (MAP) state estimation. We compute the most likely sequence of states S within a model given the observation sequence O = {o_1, ..., o_T}. This most likely sequence is obtained by \hat{S} = \arg\max_S P(S|O).
In the case of HMMs the posterior state sequence probability P(SIO) is given by
P(S|O) = P_{s_1}\, p_{s_1}(o_1) \prod_{t=2}^{T} p_{s_t}(o_t)\, P_{s_t|s_{t-1}}    (1)
where S = {a_1, ..., a_N} is the set of discrete states and s_t ∈ S corresponds to the state at time t. P_{i|j} ≡ P_{s_t=a_i | s_{t-1}=a_j} is the state-to-state transition probability (i.e. the probability of being in state a_i at time t given that the system was in state a_j at time t−1); in the following we will write these as P_{s_t|s_{t-1}}. P_i ≡ P_{s_1=a_i} = P_{s_1} are the prior probabilities for the initial state. Finally, p_i(o_t) ≡ p_{s_t=a_i}(o_t) = p_{s_t}(o_t) are the output probabilities for each state.²
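For concreteness, the MAP sequence under (1) can be found with the standard Viterbi recursion. The following is a minimal Python sketch (array names such as P_init, P_trans and log_obs are our own illustrative choices, not notation from the paper):

```python
import numpy as np

def viterbi(P_init, P_trans, log_obs):
    """MAP state sequence, argmax_S P(S|O), for an HMM as in equation (1).

    P_init:  (N,) prior probabilities P_i for the initial state.
    P_trans: (N, N) transitions, P_trans[j, i] = P(s_t = a_i | s_{t-1} = a_j).
    log_obs: (T, N) log output probabilities, log_obs[t, i] = log p_i(o_t).
    """
    T, N = log_obs.shape
    log_trans = np.log(P_trans)
    delta = np.log(P_init) + log_obs[0]            # log P_{s_1} p_{s_1}(o_1)
    backptr = np.zeros((T, N), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_trans        # scores[j, i]: come from j, land in i
        backptr[t] = np.argmax(scores, axis=0)
        delta = scores[backptr[t], np.arange(N)] + log_obs[t]
    path = np.empty(T, dtype=int)                  # backtrack the best path
    path[-1] = int(np.argmax(delta))
    for t in range(T - 1, 0, -1):
        path[t - 1] = backptr[t, path[t]]
    return path
```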
For CHMMs we need to introduce another set of probabilities, P_{s_t|s'_{t-1}}, which correspond to the probability of state s_t at time t in one chain given that the other chain (denoted hereafter by the superscript ') was in state s'_{t-1} at time t−1. These new probabilities express the causal influence (coupling) of one chain on the other.

²The output probability p_i(o_t) is the probability of observing o_t given state a_i at time t.
The posterior state probability for CHMMs is expressed as

P(S|O) = \frac{P_{s_1} p_{s_1}(o_1)\, P_{s'_1} p_{s'_1}(o'_1)}{P(O)} \prod_{t=2}^{T} P_{s_t|s_{t-1}}\, P_{s'_t|s'_{t-1}}\, P_{s'_t|s_{t-1}}\, P_{s_t|s'_{t-1}}\, p_{s_t}(o_t)\, p_{s'_t}(o'_t)    (2)
where s_t, s'_t and o_t, o'_t denote the states and observations for each of the Markov chains that compose the CHMM.
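Equation (2) is likewise simple to evaluate for a given pair of state sequences, up to the constant P(O). A hedged Python sketch, with our own argument names:

```python
import numpy as np

def chmm_log_score(s, sp, P, Pp, Pc, Pcp, log_obs, log_obsp, P1, P1p):
    """Unnormalised log-posterior of a state-sequence pair per equation (2).

    s, sp:         integer state sequences for the two chains, length T.
    P[j, i]:       within-chain transition P(s_t=i | s_{t-1}=j); Pp likewise.
    Pc[j, i]:      coupling P(s'_t=i | s_{t-1}=j); Pcp[j, i]: P(s_t=i | s'_{t-1}=j).
    log_obs[t,i]:  log p_i(o_t) for chain 1; log_obsp for chain 2.
    P1, P1p:       initial-state priors.  The constant log P(O) is dropped.
    """
    lp = (np.log(P1[s[0]]) + log_obs[0, s[0]]
          + np.log(P1p[sp[0]]) + log_obsp[0, sp[0]])
    for t in range(1, len(s)):
        lp += (np.log(P[s[t-1], s[t]]) + np.log(Pp[sp[t-1], sp[t]])
               + np.log(Pc[s[t-1], sp[t]]) + np.log(Pcp[sp[t-1], s[t]])
               + log_obs[t, s[t]] + log_obsp[t, sp[t]])
    return lp
```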
In [2] a deterministic approximation for maximum a posteriori (MAP) state estimation is introduced. It enables fast classification and parameter estimation via EM, and also obtains an upper bound on the cross entropy with the full (combinatoric) posterior, which can be minimized using a subspace that is linear in the number of state variables. An "N-heads" dynamic programming algorithm samples from the O(N) highest-probability paths through a compacted state trellis, with complexity O(T(CN)²) for C chains of N states apiece observing T data points. The equivalent Cartesian-product HMM would involve a combinatorial number of states, typically requiring O(TN^{2C}) computations. We are particularly interested in efficient, compact algorithms that can perform in real time.
4 EXPERIMENTAL RESULTS
Our first experiment is with a version of Tai Chi Ch'uan (a Chinese martial and
meditative art) that is practiced while sitting. Using our self-calibrating, 3-D stereo
blob tracker [1], we obtained 3D hand tracking data for three Tai Chi gestures involving two, semi-independent arm motions: the left single whip, the left cobra, and
the left brush knee. Figure 4 illustrates one of the gestures and the blob-tracking.
A detailed description of this set of Tai Chi experimental results can be found in [3]
and viewed at http://nuria.www.media.mit.edu/~nuria/chmm/taichi.html.
Figure 2: Selected frames from 'left brush knee.'
We collected 52 sequences, roughly 17 of each gesture and created a feature vector
consisting of the 3-D (x, y, z) centroid (mean position) of each of the blobs that characterize the hands. The resulting six-dimensional time series was used for training
both HMMs and CHMMs.
We used the best trained HMMs and CHMMs - using 10-fold cross-validation - to classify the full data set of 52 gestures. The Viterbi algorithm was used to find the maximum likelihood model for HMMs and CHMMs. Two-thirds of the testing data
had not been seen in training, including gestures performed at varying speeds and
from slightly different views. It can be seen from the classification accuracies, shown
in table 1, that the CHMMs outperform the HMMs. This difference is not due to
intrinsic modeling power, however; from earlier experiments we know that when a
large number of training samples is available then HMMs can reach similar accuracies. We conclude thus that for data where there are two partially-independent
processes (e.g., coordinated but not exactly linked), the CHMM method requires
much less training to achieve a high classification accuracy.
Table 1 illustrates the source of this training advantage. The numbers between
Table 1: Recognition accuracies for HMMs and CHMMs on Tai Chi gestures. The expressions in parentheses correspond to the number of parameters of the largest best-scoring model.
Recognition Results on Tai Chi Gestures
                            Accuracy
Single HMMs                 69.2%  (25+30+180)
Coupled HMMs (CHMMs)        100%   (27+18+54)
parentheses correspond to the number of degrees of freedom in the largest best-scoring model: state-to-state probabilities + output means + output covariances.
The conventional HMM has a large number of covariance parameters because it
has a 6-D output variable; whereas the CHMM architecture has two 3-D output
variables. In consequence, due to their larger dimensionality HMMs need much
more training data than equivalent CHMMs before yielding good generalization
results.
Our second experiment was with a pedestrian video surveillance task³; the goal was
first to recognize typical pedestrian behaviors in an open plaza (e.g., walk from A to
B, run from C to D), and second to recognize interactions between the pedestrians
(e.g., person X greets person Y). The task is to reliably and robustly detect and
track the pedestrians in the scene. We use in this case 2-D blob features for modeling
each pedestrian. In our system one of the main cues for clustering the pixels into
blobs is motion, because we have a static background with moving objects. To
detect these moving objects we build an eigenspace that models the background.
Depending on the dynamics of the background scene the system can adaptively
relearn the eigenbackground to compensate for changes such as big shadows.
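A minimal sketch of this eigenbackground idea follows, assuming a simple SVD-based subspace and an illustrative pixel threshold (the paper's adaptive relearning and exact thresholding are not reproduced here):

```python
import numpy as np

def learn_eigenbackground(frames, k=10):
    """Build an eigenspace background model from a stack of frames.

    frames: (M, D) array, each row a flattened grayscale frame.
    Returns the mean frame and the top-k eigen-images of the background.
    """
    mean = frames.mean(axis=0)
    U, S, Vt = np.linalg.svd(frames - mean, full_matrices=False)
    return mean, Vt[:k]

def foreground_mask(frame, mean, basis, thresh=25.0):
    # Project onto the background subspace, reconstruct, and flag the
    # pixels the background model cannot explain (i.e. moving objects).
    coeffs = basis @ (frame - mean)
    recon = mean + basis.T @ coeffs
    return np.abs(frame - recon) > thresh
```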
The trajectories of each blob are computed and saved into a dynamic track memory.
Each trajectory has an associated first-order EKF that predicts the blob's position and velocity in the next frame. As before, the appearance of each blob is modeled
by a Gaussian PDF in RGB color space, allowing us to handle occlusions.
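The sketch below shows one predict/update cycle of the linear, constant-velocity special case of such a tracker; the noise covariances are illustrative assumptions, not values from the paper:

```python
import numpy as np

dt = 1.0                                    # one frame
F = np.array([[1, 0, dt, 0],                # constant-velocity motion model,
              [0, 1, 0, dt],                # state = (x, y, vx, vy)
              [0, 0, 1,  0],
              [0, 0, 0,  1]], float)
H = np.eye(2, 4)                            # we observe the blob position only
Q = 1e-2 * np.eye(4)                        # process noise (illustrative)
R = 1e-1 * np.eye(2)                        # measurement noise (illustrative)

def kf_step(x, P, z):
    """One predict/update cycle of a linear Kalman tracker for a blob."""
    x_pred, P_pred = F @ x, F @ P @ F.T + Q
    y = z - H @ x_pred                      # innovation
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new
```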
Figure 3: Typical Image from pedestrian plaza. Background mean image, input image
with blob bounding boxes and blob segmentation image
The behaviors we examine are generated by pedestrians walking in an open outdoor environment. Our goal is to develop a generic, compositional analysis of the
observed behaviors in terms of states and transitions between states over time in
such a manner that (1) the states correspond to our common sense notions of human behaviors, and (2) they are immediately applicable to a wide range of sites
and viewing situations. Figure 3 shows a typical image for our pedestrian scenario,
the pedestrians found, and the final segmentation. Two people (each modeled as
its own generative process) may interact without wholly determining each others'
behavior. Instead, each of them has its own internal dynamics and is influenced
(either weakly or strongly) by others. The probabilities PStIS~_1 and PS;ISt_l from
equation 2 describe this kind of interactions and CHMMs are intended to model
them in as efficient a manner as is possible.
We would like to have a system that will accurately interpret behaviors and interactions within almost any pedestrian scene with at most minimal training. As we have
³Further information about this system can be found at http://www.vismod.www.media.mit.edu/~nuria/humanBehavior/humanBehavior.html
already mentioned, one critical problem is the generation of models that capture
our prior knowledge about human behavior. To achieve this goal we have developed
a modeling environment that uses synthetic agents to mimic pedestrian behavior in
a virtual environment. The agents can be assigned different behaviors and they can
interact with each other as well. Currently they can generate 5 different interacting
behaviors and various kinds of individual behaviors (with no interaction) . These
behaviors are: following, meet and walk together (inter1); approach, meet and go
on separately (inter2) or go on together (inter3) ; change direction in order to meet ,
approach , meet and continue together (inter4) or go on separately (inter5) . The parameters of this virtual environment are modeled using data drawn from a 'generic '
set of real scenes.
By training the models of the synthetic agents to have good generalization and
invariance properties, we can obtain flexible prior models for use when learning the
human behavior models from real scenes. Thus the synthetic prior models allow us
to learn robust behavior models from a small number of real behavior examples.
This capability is of special importance in a visual surveillance task , where typically
the behaviors of greatest interest are also the rarest .
To test our behavior modeling in the pedestrian scenario, we first used the detection
and tracking system previously described to obtain 2-D blob features for each person
in several hours of video. More than 20 examples of following and the two first types
of meeting behaviors were detected and processed.
CHMMs were then used for modeling three different behaviors: following, meet
and continue together, and meet and go on separately. Furthermore, an interaction
versus no interaction detection test was also performed (HMMs performed so poorly
at this task that their results are not reported). In addition to velocity, heading,
and position, the feature vectors consisted of the derivative of the relative distance
between two agents, their degree of alignment (dot product of their velocity vectors)
and the magnitude of the difference in their velocity vectors.
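A sketch of how these pairwise features might be assembled per frame (the function and its normalisation are our own illustration, not the authors' code):

```python
import numpy as np

def interaction_features(p1, v1, p2, v2, prev_dist):
    """Pairwise features used alongside each agent's velocity, heading, position.

    p1, v1, p2, v2: 2-D positions/velocities of the two agents (one frame).
    prev_dist:      their distance at the previous frame.
    """
    dist = np.linalg.norm(p1 - p2)
    d_dist = dist - prev_dist                      # derivative of relative distance
    alignment = float(np.dot(v1, v2))              # degree of alignment
    vel_diff = float(np.linalg.norm(v1 - v2))      # |difference of velocity vectors|
    return np.array([d_dist, alignment, vel_diff]), dist
```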
We tested on this video data using models trained with two types of data: (1) 'prior-only models', that is, models learned entirely from our synthetic-agents environment
and then applied directly to the real data with no additional training or tuning of
the parameters; and (2) 'Posterior models', or prior-pIus-real data behavior models
trained by starting with the prior-only model and then 'tuning' the models with data
from this specific site, using eight examples of each type of interaction. Recognition
accuracies for both these 'prior' and 'posterior' CHMMs are summarized in table
2. It is noteworthy that with only 8 training examples , the recognition accuracy
on the remaining data could be raised to 100%. This demonstrates the ability to
accomplish extremely rapid refinement of our behavior models from the initial a
priori models.
Table 2: Accuracies on real pedestrian data: (a) a priori models only; (b) posterior models (with site-specific training).

Accuracy on Real Pedestrian Data
                        No-inter   Inter1   Inter2   Inter3
(a) Prior CHMMs           90.9      93.7     100      100
(b) Posterior CHMMs       100       100      100      100
In a visual surveillance system the false alarm rate is often as important as the
classification accuracy.⁴ To analyze this aspect of our system's performance, we
calculated the system's ROC curve. For accuracies of 95% the false alarm rate was
less than 0.01.
4In an ideal automatic surveillance system, all the targeted behaviors should be detected
with a close-to-zero false alarm rate, so that we can reasonably alert a human operator to
examine them further.
5 SUMMARY, CONCLUSIONS AND FUTURE WORK
In this paper we have described a computer vision system and a mathematical
modeling framework for recognizing different human behaviors and interactions in
two different real domains: human actions in the martial art of Tai Chi and human
interactions in a visual surveillance task. Our system combines top-down with
bottom-up information in a closed feedback loop, with both components employing
a statistical Bayesian approach.
Two different state-based statistical learning architectures, namely HMMs and
CHMMs, have been proposed and compared for modeling behaviors and interactions. The superiority of the CHMM formulation has been demonstrated in terms
of both training efficiency and classification accuracy. A synthetic agent training
system has been created in order to develop flexible prior behavior models, and we
have demonstrated the ability to use these prior models to accurately classify real
behaviors with no additional training on real data. This fact is specially important,
given the limited amount of training data available.
Future directions under current investigation include: extending our agent interactions to more than two interacting processes; developing a hierarchical system where
complex behaviors are expressed in terms of simpler behaviors; automatic discovery
and modeling of new behaviors (both structure and parameters) ; automatic determination of priors, their evaluation and interpretation; developing an attentional
mechanism with a foveated camera along with a more detailed representation of the
behaviors; evaluating the adaptability of off-line learned behavior structures to different real situations; and exploring a sampling approach for recognizing behaviors
by sampling the interactions generated by our synthetic agents .
Acknowledgments
Sincere thanks to Michael Jordan, Tony Jebara and Matthew Brand for their inestimable help.
References
1. A. Azarbayejani and A. Pentland. Real-time self-calibrating stereo person-tracker using 3-D shape estimation from blob features. In Proceedings, International Conference on Pattern Recognition, Vienna, August 1996. IEEE.
2. M. Brand. Coupled hidden Markov models for modeling interacting processes. November 1996. Submitted to Neural Computation.
3. M. Brand and N. Oliver. Coupled hidden Markov models for complex action recognition. In Proceedings of IEEE CVPR97, 1996.
4. Z. Ghahramani and M. I. Jordan. Factorial hidden Markov models. In D. S. Touretzky, M. C. Mozer, and M. Hasselmo, editors, NIPS, volume 8, Cambridge, MA, 1996. MIT Press.
5. N. Oliver, B. Rosario, and A. Pentland. Statistical modeling of human behaviors. To appear in Proceedings of CVPR98, Perception of Action Workshop, 1998.
6. L. R. Rabiner. A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 77(2):257-285, 1989.
7. L. K. Saul and M. I. Jordan. Boltzmann chains and hidden Markov models. In G. Tesauro, D. S. Touretzky, and T. Leen, editors, NIPS, volume 7, Cambridge, MA, 1995. MIT Press.
8. P. Smyth, D. Heckerman, and M. Jordan. Probabilistic independence networks for hidden Markov probability models. AI memo 1565, MIT, Cambridge, MA, Feb 1996.
9. C. Williams and G. E. Hinton. Mean field networks that learn to discriminate temporally distorted strings. In Proceedings, Connectionist Models Summer School, pages 18-22, San Mateo, CA, 1990. Morgan Kaufmann.
| 1560 |@word version:1 open:2 simulation:1 rgb:1 covariance:2 recursively:1 initial:2 configuration:1 contains:1 series:2 hereafter:2 practiced:1 current:3 od:1 must:1 shape:2 enables:1 generative:2 selected:2 cue:1 une:1 detecting:1 coarse:1 ames:1 location:1 simpler:1 mathematical:2 alert:1 along:1 kov:1 combine:2 compose:1 behavioral:2 manner:2 introduce:1 inter:1 rapid:1 roughly:1 behavior:43 examine:2 nuria:4 ol:1 chi:10 detects:1 automatically:1 medium:4 eigenspace:1 kind:2 string:1 developed:2 temporal:2 exactly:1 demonstrates:1 superiority:1 appear:1 before:2 understood:1 consequence:1 meet:6 path:1 noteworthy:1 emphasis:1 mateo:1 hmms:21 limited:4 range:1 directed:1 acknowledgment:1 camera:1 testing:1 wholly:1 close:1 operator:1 influence:3 www:3 equivalent:2 deterministic:1 demonstrated:2 map:2 conventional:1 straightforward:1 graphically:1 go:4 starting:1 williams:1 cobra:1 knee:2 immediately:1 handle:2 notion:1 target:1 exact:2 programming:1 smyth:1 us:1 velocity:5 recognition:10 particularly:2 walking:1 predicts:1 bottom:4 observed:2 module:1 capture:1 highest:1 mentioned:1 mozer:1 environment:6 complexity:2 dynamic:6 trained:3 weakly:1 efficiency:1 easily:1 various:2 represented:1 train:1 fast:1 describe:3 detected:2 whose:1 modular:1 larger:1 ability:2 statistic:1 superscript:1 ip:1 final:1 blob:15 advantage:2 sequence:5 analytical:1 interaction:24 product:2 relevant:1 loop:2 combining:3 mixing:1 poorly:1 achieve:3 description:1 crossvalidation:1 p:1 extending:1 object:5 help:1 depending:2 develop:4 coupling:1 school:1 strong:1 shadow:1 direction:2 saved:1 filter:1 stochastic:1 human:19 viewing:1 virtual:2 require:1 generalization:2 investigation:1 extension:3 exploring:1 tracker:3 viterbi:1 matthew:1 major:2 sandy:1 estimation:5 applicable:1 currently:1 largest:2 hasselmo:1 mit:5 gaussian:1 ekf:2 avoid:1 surveillance:9 varying:2 encode:1 likelihood:1 centroid:1 sense:1 detect:2 posteriori:1 inference:4 typically:3 hidden:11 interested:2 semantics:2 pixel:1 classification:5 flexible:5 ill:1 denoted:1 priori:3 html:2 art:5 integration:1 spatial:1 special:1 raised:1 field:1 sampling:2 future:4 minimized:1 others:2 mimic:1 sincere:1 t_l:1 few:1 connectionist:1 composed:2 recognize:2 individual:5 intended:1 consisting:1 occlusion:1 chmm:5 detection:3 freedom:1 interest:1 evaluation:1 alignment:1 rolled:2 mixture:1 durational:1 inter2:2 yielding:1 chain:12 oliver:6 incomplete:1 walk:2 causal:3 minimal:1 handed:1 classify:2 modeling:22 combinatoric:2 earlier:1 maximization:1 rare:1 recognizing:8 characterize:1 reported:1 accomplish:1 synthetic:11 combined:1 adaptively:1 person:8 st:3 thanks:1 international:1 probabilistic:5 off:1 michael:1 together:5 continuously:1 derivative:1 style:1 summarized:1 pedestrian:15 coordinated:1 depends:1 stream:1 performed:3 view:1 closed:1 doing:1 observing:2 linked:1 inter3:2 compacted:1 analyze:1 capability:1 accuracy:12 kaufmann:1 efficiently:1 yield:1 sitting:1 ofthe:1 correspond:3 rabiner:1 bayesian:7 accurately:3 trajectory:2 history:1 submitted:1 simultaneous:1 parallelizable:1 reach:1 influenced:1 touretzky:2 p31:3 associated:1 static:1 knowledge:3 color:2 dimensionality:1 segmentation:2 adaptability:1 back:1 reflected:1 formulation:1 leen:1 box:1 strongly:1 furthermore:1 relearn:1 hand:2 aj:1 building:1 calibrating:3 requiring:1 consisted:1 managed:1 assigned:1 laboratory:1 self:3 pdf:2 demonstrate:1 motion:4 image:5 common:1 overview:1 volume:2 interpretation:1 interpret:1 cambridge:4 dag:2 ai:2 tuning:2 automatic:3 had:1 dot:1 moving:5 
feb:1 posterior:9 own:2 driven:1 barbara:1 scenario:4 tesauro:1 continue:2 life:1 meeting:1 nition:1 rosario:6 seen:3 morgan:1 additional:2 ii:3 semi:1 multiple:5 full:2 determination:1 gesture:7 offer:5 cross:1 compensate:1 a1:1 parenthesis:2 prediction:1 anomalous:1 involving:2 basic:2 vision:3 expectation:2 essentially:1 addition:3 whereas:1 priorsl:1 background:4 separately:3 source:1 ot:5 specially:1 flow:1 seem:1 jordan:4 ideal:1 concerned:1 independence:1 fit:1 architecture:4 ddo:1 whether:1 six:1 expression:1 stereo:3 speech:1 constitute:1 action:6 compositional:4 generally:1 clear:2 involve:1 detailed:2 factorial:1 amount:3 processed:1 http:2 generate:1 outperform:1 tutorial:1 estimated:1 track:4 discrete:3 write:1 express:1 drawn:1 graph:2 run:1 uncertainty:1 respond:1 distorted:1 almost:1 realtime:1 summarizes:1 entirely:1 bound:1 plaza:2 uan:1 occur:1 alex:1 scene:6 software:2 nearby:1 generates:1 aspect:1 speed:1 extremely:1 structured:1 developing:3 heckerman:1 slightly:1 em:2 computationally:1 equation:1 tai:10 previously:1 describing:1 pin:3 mechanism:1 needed:3 know:1 merit:1 fed:1 cor:1 available:2 gaussians:1 eight:1 hierarchical:1 appropriate:1 generic:2 robustly:1 top:4 clustering:1 remaining:1 include:1 tony:1 graphical:11 vienna:1 ghahramani:1 especially:1 chinese:1 build:1 warping:2 already:1 parametric:1 traditional:2 ecognition:1 subspace:1 distance:1 separate:1 link:1 pilj:1 attentional:1 street:1 hmm:7 collected:1 kalman:1 index:1 relationship:1 modeled:3 providing:1 memo:1 reliably:1 boltzmann:1 perform:2 allowing:1 upper:1 observation:3 markov:13 finite:1 november:1 pentland:6 situation:4 extended:1 rrrr:1 head:1 hinton:1 frame:2 interacting:7 august:1 jebara:1 introduced:1 namely:2 trainable:1 learned:2 hour:1 nip:2 chmms:25 pattern:2 perception:1 including:1 memory:2 video:3 power:1 critical:1 greatest:1 natural:1 arm:1 e15:1 temporally:2 martial:4 created:2 coupled:10 prior:20 understanding:1 discovery:1 determining:1 relative:1 interesting:1 generation:1 acyclic:1 versus:1 agent:16 degree:2 sufficient:1 editor:2 classifying:3 pi:2 summary:1 heading:2 allow:1 wide:1 saul:1 feedback:2 overcome:1 calculated:1 transition:3 curve:1 evaluating:1 made:1 refinement:1 san:1 employing:1 obtains:1 compact:1 overfitting:1 conclude:1 table:5 learn:2 reasonably:1 robust:2 ca:1 interact:2 complex:5 domain:2 main:2 big:1 bounding:1 alarm:3 allowed:1 body:2 causality:1 site:3 roc:1 trellis:1 position:3 outdoor:1 third:1 down:4 rk:1 specific:2 er:1 explored:1 experimented:1 evidence:1 intractable:1 exists:1 quantization:1 intrinsic:1 false:3 workshop:1 importance:1 inter5:1 magnitude:1 illustrates:2 foveated:1 cartesian:1 easier:1 suited:1 entropy:1 depicted:1 azarbayejani:1 likely:2 appearance:1 visual:12 expressed:2 ordered:1 tracking:4 partially:2 brush:2 ch:1 corresponds:1 ma:4 conditional:1 viewed:2 formulated:1 goal:3 targeted:1 change:2 typical:3 discriminate:1 invariance:1 experimental:3 brand:3 sio:4 internal:1 people:2 latter:1 tested:1 |
614 | 1,561 | Probabilistic Visualisation of
High-dimensional Binary Data
Michael E. Tipping
Microsoft Research,
St George House, 1 Guildhall Street,
Cambridge CB2 3NH, U.K.
mtipping@microsoft.com
Abstract
We present a probabilistic latent-variable framework for data visualisation, a key feature of which is its applicability to binary and
categorical data types for which few established methods exist. A
variational approximation to the likelihood is exploited to derive a
fast algorithm for determining the model parameters. Illustrations
of application to real and synthetic binary data sets are given.
1 Introduction
Visualisation is a powerful tool in the exploratory analysis of multivariate data. The
rendering of high-dimensional data in two dimensions, while generally implying loss
of information, often reveals interesting structure to the human eye. Standard
dimensionality-reduction methods from multivariate analysis, notably the principal
component projection, are often utilised for this purpose, while techniques such
as 'projection pursuit ' have been tailored specifically to this end. With the current trend for larger databases and the need for effective 'data mining' methods,
visualisation is becoming increasingly topical, and recent novel developments include nonlinear topographic methods (Lowe and Tipping 1997; Bishop, Svensen,
and Williams 1998) and hierarchical combinations of linear models (Bishop and
Tipping 1998). However, a disadvantageous aspect of many proposed techniques
is their applicability only to continuous variables; there are very few such methods
proposed specifically for the visualisation of discrete binary data types, which are
commonplace in real-world datasets.
We approach this difficulty by proposing a probabilistic framework for the visualisation of arbitrary data types, based on an underlying latent variable density model.
This leads to an algorithm which permits the visualisation of structure within data,
while also defining a generative observation probability model. A further, and
intuitively pleasing, result is that the specialisation of the model to continuous variables recovers principal component analysis. Continuous , binary and categorical
data types may thus be combined and visualised together within this framework,
but for reasons of space, we concentrate on binary types alone in this paper.
In the next section we outline the proposed latent variable approach, and in Section
3 consider the difficulties involved in estimating the parameters in this model, giving
an efficient variational scheme to this end in Section 4. In Section 5 we illustrate the
application of the model and consider the accuracy of the variational approximation.
2 Latent Variable Models for Visualisation
In an ideal visualisation model, we would wish all of the dependencies between
variables to be evident in the visualisation space, while the information that we lose
in the dimensionality-reduction process should represent "noise", independent to
each variable. This principle is captured by the following probability density model
for a dataset comprising d-dimensional observation vectors t = (t_1, t_2, ..., t_d):
p(t) = \int \left\{ \prod_{i=1}^{d} P(t_i | x, \theta) \right\} p(x)\, dx,    (1)
where x is a two-dimensional latent variable vector, the distribution of which must
be a priori specified, and θ are the model parameters. Now, for a given value of x
(or location in the visualisation space), the observations are independent under the
model. (In general, of course, the model and conditional independence assumption
will only hold approximately.) However, the unconditional observation model p(t)
does not, in general, factorise and so can still capture dependencies between the
d variables, given the constraint implied by the use of just two underlying latent
variables. So, having estimated the parameters θ, data could be visualised by 'inverting' the generative model using Bayes' rule: p(x|t) = p(t|x)p(x)/p(t). Each data point then induces a distribution in the latent space, which for the purposes of visualisation, we might summarise with the conditional mean value ⟨x|t⟩.
That this form of model can be appropriate for visualisation was demonstrated by
Bishop and Tipping (1998), who showed that if the latent variables are defined to
be independent and Gaussian, x ∼ N(0, I), and the conditional observation model is also Gaussian, t_i|x ∼ N(w_i^T x + μ_i, σ_i²), then maximum-likelihood estimation of the model parameters {w_i, μ_i, σ_i²} leads to a model where the posterior mean ⟨x|t⟩ is equivalent to a probabilistic principal component projection.
A visualisation method for binary variables now follows naturally. Retaining the
Gaussian latent distribution x ∼ N(0, I), we specify an appropriate conditional distribution for P(t_i | x, θ). Given that principal components correspond to a linear model for continuous data types, we adopt the appropriate generalised linear model
in the binary case:
P(t_i | x) = σ(λ_i)^{t_i} {1 − σ(λ_i)}^{1−t_i},    (2)

where σ(λ) = {1 + exp(−λ)}^{−1} and λ_i = w_i^T x + b_i, with parameters w_i and b_i.

3 Maximum-likelihood Parameter Estimation
The proposed model for binary data already exists in the literature under various
guises, most historically as a latent trait model (Bartholomew 1987), although it
is not utilised for data visualisation. While in the case of probabilistic principal
component analysis, ML parameter estimates can be obtained in closed form, a disadvantageous feature of the binary model is that, with P(t_i|x) defined by (2), the
integral of (1) is analytically intractable and P(t) cannot be computed directly. Fitting a latent trait model thus necessitates a numerical integration, and recent papers
have considered both Gauss-Hermite (Moustaki 1996) and Monte-Carlo sampling
approximations (Mackay 1995; Sammel, Ryan, and Legler 1997).
In this latter case, the log-likelihood for a dataset of N observation vectors
{t_1, ..., t_N} would be approximated by

\mathcal{L} \approx \sum_{n=1}^{N} \ln \left\{ \frac{1}{L} \sum_{l=1}^{L} \prod_{i=1}^{d} P(t_{in} | x_l, w_i, b_i) \right\},    (3)

where x_l, l = 1, ..., L, are samples from the two-dimensional latent distribution.
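For concreteness, (3) can be computed as follows; this is an illustrative sketch, and a practical implementation would accumulate log-probabilities instead of products to avoid underflow for large d:

```python
import numpy as np

def mc_log_likelihood(T, W, b, L=500, rng=np.random.default_rng(0)):
    """Sampling approximation (3) to the log-likelihood.

    T: (N, d) binary data matrix; W: (d, 2) weights; b: (d,) biases.
    """
    X = rng.standard_normal((L, 2))                 # x_l ~ N(0, I), l = 1..L
    probs = 1.0 / (1.0 + np.exp(-(X @ W.T + b)))    # sigma(w_i^T x_l + b_i), (L, d)
    ll = 0.0
    for t in T:
        # Bernoulli likelihood of t under each sample, averaged over samples
        p_t = np.prod(np.where(t == 1, probs, 1.0 - probs), axis=1)
        ll += np.log(p_t.mean())
    return ll
```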
To obtain parameter estimates we may utilise an expectation-maximisation (EM)
approach by noting that (3) is equivalent in form to an L-component latent class
model (Bartholomew 1987) where the component probabilities are mutually constrained from (2). Applying standard methodology leads to an E-step which requires computation of N × L posterior 'responsibilities' P(x_l|t_n), and a logistic regression M-step which is unfortunately iterative, although it can be performed
relatively efficiently by an iteratively re-weighted least-squares algorithm. Because
of these difficulties in implementation, in the next section we describe a variational
approximation to the likelihood which can be maximised more efficiently.
4 A Variational Approximation to the Likelihood
Jaakkola and Jordan (1997) introduced a variational approximation for the predictive likelihood in a Bayesian logistic regression model and also briefly considered
the "dual" problem, which is closely related to the proposed visualisation model.
In this approach, the integral in (1) is approximated by:
\tilde{P}(t) = \int \left\{ \prod_{i=1}^{d} \tilde{P}(t_i | x, \xi_i) \right\} p(x)\, dx,    (4)

where

\tilde{P}(t_i | x, \xi_i) = \sigma(\xi_i) \exp\left\{ \frac{A_i - \xi_i}{2} + \lambda(\xi_i)(A_i^2 - \xi_i^2) \right\},    (5)

with A_i = (2t_i − 1)(w_i^T x + b_i) and λ(ξ_i) = {0.5 − σ(ξ_i)}/2ξ_i. The parameters ξ_i are the 'variational' parameters, and this approximation has the property that P̃(t_i|x, ξ_i) ≤ P(t_i|x), with equality at ξ_i = A_i, and thus it follows that P̃(t) ≤ P(t).
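These properties are easy to check numerically. A small Python sketch (function names are ours) verifying that the bound touches the likelihood at ξ_i = A_i and otherwise lies below it:

```python
import numpy as np

def sigma(z):
    return 1.0 / (1.0 + np.exp(-z))

def lam(xi):
    """lambda(xi) = (0.5 - sigma(xi)) / (2 xi), as defined after equation (5)."""
    return (0.5 - sigma(xi)) / (2.0 * xi)

def bound(t_i, a, xi):
    """Variational lower bound (5) on P(t_i|x), with a = w_i^T x + b_i."""
    A = (2 * t_i - 1) * a
    return sigma(xi) * np.exp((A - xi) / 2.0 + lam(xi) * (A**2 - xi**2))

A = 1.3
assert np.isclose(bound(1, A, A), sigma(A))   # equality at xi = A_i
assert bound(1, A, 0.4) <= sigma(A)           # lower bound elsewhere
```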
Now because the exponential in (5) is quadratic in x, the integral in (4), and
also the likelihood, can be computed in closed form. This suggests an alternative algorithm for finding parameter estimates where we iteratively maximise the
variational approximation to the likelihood. Each iteration of this algorithm is guaranteed to increase a lower bound on, but will not necessarily maximise, the true
likelihood. Nevertheless, we would hope that it will be a close approximation, the accuracy of which is investigated later. At each step in the algorithm, then, we:
1. Obtain the sufficient statistics for the approximated posterior distribution of latent variables given each observation, p(x_n|t_n, ξ_n).
2. Optimise the variational parameters ξ_in in order to make the approximation P̃(t_n) as close as possible to P(t_n) for all t_n.
3. Update the model parameters w_i and b_i to increase P̃(t).
Jaakkola and Jordan (1997) give formulae for the above computations, but these
do not include provision for the 'biases' b_i, and so the necessary expressions are re-derived below. Note that although we have introduced N × d additional variational parameters, it is no longer necessary to sample from p(x) and compute
responsibilities, and no iterative logistic regression step is needed.
Computing the Latent Posterior Statistics. From Bayes' rule, the posterior
approximation p(x_n|t_n, ξ_n) is Gaussian with covariance and mean given by

C_n = \left[ I - 2 \sum_{i=1}^{d} \lambda(\xi_{in})\, w_i w_i^T \right]^{-1},    (6)

\mu_n = C_n \sum_{i=1}^{d} \left( t_{in} - \tfrac{1}{2} + 2\lambda(\xi_{in})\, b_i \right) w_i.    (7)
Optimising the Variational Parameters. Because P̃(t) ≤ P(t), the variational approximation can be optimised by maximising P̃(t_n) with respect to each ξ_in. We use the EM methodology to obtain updates

\xi_{in}^2 = w_i^T \langle x_n x_n^T \rangle w_i + 2 b_i\, w_i^T \langle x_n \rangle + b_i^2,    (8)

where the angle brackets ⟨·⟩ denote expectations with respect to p(x_n|t_n, ξ_n^{old}) and where, from (6) and (7) earlier, the necessary posterior statistics are given by:

\langle x_n \rangle = \mu_n,    (9)
\langle x_n x_n^T \rangle = C_n + \mu_n \mu_n^T.    (10)
Since (6) and (7) depend on the variational parameters, C_n and μ_n are computed, followed by the update for each ξ_in from (8). Iteration of this two-stage process is guaranteed to improve monotonically the approximation of P̃(t_n), and typically
only two iterations are necessary for convergence.
Optimising the Model Parameters. We again use EM to increase the variational likelihood approximation with respect to w_i and b_i. Defining ŵ_i = (w_i^T, b_i)^T and x̂ = (x^T, 1)^T leads to updates for both w_i and b_i given by:

\hat{w}_i = \left[ -2 \sum_{n=1}^{N} \lambda(\xi_{in}) \langle \hat{x}_n \hat{x}_n^T \rangle \right]^{-1} \sum_{n=1}^{N} \left( t_{in} - \tfrac{1}{2} \right) \langle \hat{x}_n \rangle,    (11)

where

\langle \hat{x}_n \hat{x}_n^T \rangle = \begin{pmatrix} \langle x_n x_n^T \rangle & \mu_n \\ \mu_n^T & 1 \end{pmatrix}, \qquad \langle \hat{x}_n \rangle = \begin{pmatrix} \mu_n \\ 1 \end{pmatrix}.    (12)
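A sketch of one full update cycle implementing (6)-(12) is given below; it is our illustrative reading of the updates (with a two-dimensional latent space), not the author's implementation:

```python
import numpy as np

def sigma(z):
    return 1.0 / (1.0 + np.exp(-z))

def variational_em_step(T, W, b, Xi):
    """One cycle of the variational algorithm, equations (6)-(12).

    T: (N, d) binary data; W: (d, 2) weights; b: (d,) biases;
    Xi: (N, d) positive variational parameters (updated in place).
    """
    N, d = T.shape
    lam = (0.5 - sigma(Xi)) / (2.0 * Xi)              # lambda(xi), negative-valued
    C = np.empty((N, 2, 2)); Mu = np.empty((N, 2))
    for n in range(N):
        # Posterior statistics, equations (6) and (7)
        prec = np.eye(2) - 2.0 * np.einsum('i,ij,ik->jk', lam[n], W, W)
        C[n] = np.linalg.inv(prec)
        Mu[n] = C[n] @ (W.T @ (T[n] - 0.5 + 2.0 * lam[n] * b))
        # Variational parameter update, equations (8)-(10)
        Exx = C[n] + np.outer(Mu[n], Mu[n])
        Xi[n] = np.sqrt(np.einsum('ij,jk,ik->i', W, Exx, W)
                        + 2.0 * b * (W @ Mu[n]) + b ** 2)
    # Model parameter update, equations (11) and (12)
    lam = (0.5 - sigma(Xi)) / (2.0 * Xi)
    Xh = np.hstack([Mu, np.ones((N, 1))])             # <x_hat_n>
    E_hat = np.empty((N, 3, 3))                       # <x_hat_n x_hat_n^T>
    for n in range(N):
        E_hat[n] = np.block([[C[n] + np.outer(Mu[n], Mu[n]), Mu[n][:, None]],
                             [Mu[n][None, :], np.ones((1, 1))]])
    for i in range(d):
        M = -2.0 * np.tensordot(lam[:, i], E_hat, axes=1)
        rhs = (T[:, i] - 0.5) @ Xh
        w_hat = np.linalg.solve(M, rhs)
        W[i], b[i] = w_hat[:2], w_hat[2]
    return W, b, Xi
```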
5 Visualisation
Synthetic clustered data. We firstly give an example of visualisation of
artificially-generated data to illustrate the operation and features of the method .
Binary data was synthesised by first generating three random 16-bit prototype vectors, where each bit was set with probability 0.5. Next a 600-point dataset was
generated by taking 200 examples of each prototype and inverting each bit with
probability 0.05. We generated a second dataset in the same manner, but where
the probability of bit inversion was 0.15, simulating more "noise" about each prototype. The final values of μ_n from (7) for each data point are plotted in Figure 1. In the left plot for the low-noise dataset, the three clusters are clear, as are the prototype vectors. On the right, the bit-noise is sufficiently high such that clusters now overlap to a degree and the prototypes are no longer evident. However, we can elucidate further information from the model by drawing lines representing P(t_i|x) = 0.5, i.e. w_i^T x + b_i = 0, which may be considered to be 'decision boundaries'
for each bit. These offer more convincing evidence of the presence of three clusters.
Figure 1: Visualisation of two synthetic clustered datasets. The three clusters have been
denoted by separate glyphs, the size of which reflects the number of examples whose
posterior means are located at that point in the latent space. In the right plot, lines corresponding to P(t_i|x) = 0.5 have been drawn.
Handwritten digit data. On the left of Figure 2, a visualisation is given of 1000
examples derived from 16 x 16 images of handwritten digit '2's. There is visual
evidence of the natural variability of writing styles in the plot as the posterior latent
means in Figure 2 describe an approximate 'horseshoe' structure. On the right of
the figure we examine the nature of this by plotting gray-scale images of the vectors
p(t|x_j), where x_j are four numbered samples in the visualisation space. These
images illustrate the expected value of each bit given the latent-space location and
demonstrate that the location is indeed indicative of the style of the digit, notably
the presence of a loop.
Accuracy of the variational approximation. To investigate the accuracy of the
approximation, the sampling algorithm of Section 3 for likelihood maximisation was
implemented and applied to the above two datasets. The evolution of error (negative
log-likelihood per data-point) was plotted against time for both algorithms, using
identical initialisations. The 'true' error for the variational approach was estimated
using the same 500-point Monte-Carlo sample. Typical results are shown in Figure
3, and the final running time and error (using a sensible stopping criterion) are
given for both datasets in Table 1.
For these two example datasets, the variational algorithm converges considerably
more quickly than in the sampling case, and the difference in final error is relatively
small, particularly so for the larger-dimensionality dataset. The approximation of the posterior distributions p(x_n|t_n) is the key factor in the accuracy of the algorithm. In Figure 4, contours of the posterior distribution in the latent space induced
Figure 2: Left: visualisation of 256-dimensional digit '2' data. Right: gray-scale images
of the conditional probability of each bit at the latent space locations marked.
Figure 3: Error vs. time for the synthetic data (left) and the digit '2' data (right).
by a typical data point are shown for both algorithms and datasets. This approximation is more accurate as dimensionality increases (a phenomenon observed with
other datasets too), as the true posterior becomes more Gaussian in form.
6 Conclusions
We have outlined a variational approximation for parameter estimation in a probabilistic visualisation model and although we have only considered its application to
binary variables here, the extension to mixtures of arbitrary data types is readily
implemented. For the two comparisons shown (and others not illustrated here), the
approximation appears acceptably accurate, and particularly so for data of higher
dimensionality. The algorithm is considerably faster than a sampling approach,
which would permit incorporation of multiple models in a more complex hierarchical architecture, of a sort that has been effectively implemented for visualisation of
continuous variables (Bishop and Tipping 1998).
              Synthetic-16            Digit-256
              Time      Error         Time       Error
Variational     7.8      5.14           25.6     30.23
Sampling      331.1      4.93         1204.5     30.19

Table 1: Comparison of final error and running time for the two algorithms.
Figure 4: True and approximated posteriors for a single example from the synthetic data
set (top) and the digit '2' data (bottom).
7 References
Bartholomew, D. J. (1987). Latent Variable Models and Factor Analysis. London: Charles Griffin & Co. Ltd.
Bishop, C. M., M. Svensen, and C. K. I. Williams (1998). GTM: the Generative Topographic Mapping. Neural Computation 10(1), 215-234.
Bishop, C. M. and M. E. Tipping (1998). A hierarchical latent variable model for data visualization. IEEE Transactions on Pattern Analysis and Machine Intelligence 20(3), 281-293.
Jaakkola, T. S. and M. I. Jordan (1997). Bayesian logistic regression: a variational approach. In D. Madigan and P. Smyth (Eds.), Proceedings of the 1997 Conference on Artificial Intelligence and Statistics, Ft Lauderdale, FL.
Lowe, D. and M. E. Tipping (1997). Neuroscale: Novel topographic feature extraction with radial basis function networks. In M. Mozer, M. Jordan, and T. Petsche (Eds.), Advances in Neural Information Processing Systems 9, pp. 543-549. Cambridge, Mass: MIT Press.
MacKay, D. J. C. (1995). Bayesian neural networks and density networks. Nuclear Instruments and Methods in Physics Research, Section A 354(1), 73-80.
Moustaki, I. (1996). A latent trait and a latent class model for mixed observed variables. British Journal of Mathematical and Statistical Psychology 49, 313-334.
Sammel, M. D., L. M. Ryan, and J. M. Legler (1997). Latent variable models for mixed discrete and continuous outcomes. Journal of the Royal Statistical Society, Series B 59, 667-678.
| 1561 |@word briefly:1 inversion:1 covariance:1 ld:1 reduction:2 series:1 initialisation:1 current:1 com:1 dx:1 must:1 readily:1 numerical:1 plot:3 update:4 v:1 implying:1 generative:3 alone:1 intelligence:2 indicative:1 maximised:1 location:4 firstly:1 hermite:1 mathematical:1 fitting:1 manner:1 notably:2 indeed:1 expected:1 examine:1 td:1 becomes:1 estimating:1 underlying:2 mass:1 proposing:1 finding:1 ti:2 acceptably:1 t1:1 generalised:1 maximise:2 optimised:1 becoming:1 approximately:1 might:1 suggests:1 co:1 bi:9 maximisation:2 cb2:1 digit:8 projection:3 radial:1 numbered:1 madigan:1 cannot:1 close:2 applying:1 writing:1 equivalent:2 demonstrated:1 williams:2 rule:2 mtipping:1 nuclear:1 otl:1 exploratory:1 elucidate:1 smyth:1 trend:1 approximated:4 particularly:2 located:1 database:1 observed:2 bottom:1 ft:1 capture:1 commonplace:1 visualised:2 mozer:1 depend:1 predictive:1 basis:1 necessitates:1 various:1 gtm:1 fast:1 effective:1 describe:2 monte:2 london:1 artificial:1 outcome:1 whose:1 larger:2 drawing:1 statistic:4 topographic:3 gp:1 final:4 xlt:2 moustaki:2 loop:1 convergence:1 cluster:4 generating:1 converges:1 derive:1 illustrate:3 svensen:2 ij:1 wtx:2 implemented:3 concentrate:1 closely:1 human:1 clustered:2 ryan:2 extension:1 hold:1 sufficiently:1 considered:4 guildhall:1 exp:1 mapping:1 adopt:1 purpose:2 estimation:3 xnx:1 lose:1 tool:1 weighted:1 reflects:1 hope:1 mit:1 gaussian:5 jaakkola:3 derived:2 likelihood:12 stopping:1 typically:1 visualisation:27 comprising:1 dual:1 denoted:1 priori:1 retaining:1 development:1 constrained:1 integration:1 mackay:2 having:1 extraction:1 sampling:7 optimising:2 identical:1 summarise:1 others:1 few:2 microsoft:1 pleasing:1 factorise:1 tlx:1 mining:1 investigate:1 mixture:1 bracket:1 unconditional:1 accurate:2 synthesised:1 integral:3 necessary:4 re:2 plotted:2 earlier:1 jil:1 applicability:2 too:1 dependency:2 synthetic:6 combined:1 considerably:2 st:1 density:3 probabilistic:9 physic:1 lauderdale:1 michael:1 together:1 quickly:1 again:1 tdx:1 style:2 performed:1 later:1 utilised:2 lowe:2 closed:2 responsibility:2 disadvantageous:2 bayes:2 sort:1 il:1 square:1 accuracy:5 who:1 efficiently:2 bayesian:3 handwritten:2 carlo:2 ed:2 against:1 ty:1 pp:1 involved:1 naturally:1 recovers:1 dataset:6 dimensionality:5 provision:1 appears:1 higher:1 tipping:9 methodology:2 specify:1 just:1 stage:1 nonlinear:1 logistic:4 gray:2 glyph:1 true:6 evolution:1 analytically:1 equality:1 iteratively:2 illustrated:1 iln:1 criterion:1 outline:1 evident:2 demonstrate:1 tn:2 image:4 variational:21 novel:2 fi:1 charles:1 nh:1 trait:3 cambridge:2 ai:3 outlined:1 bartholomew:3 longer:2 multivariate:2 posterior:14 recent:2 showed:1 binary:15 exploited:1 captured:1 george:1 additional:1 monotonically:1 multiple:1 faster:1 offer:1 regression:4 expectation:2 iteration:3 represent:1 tailored:1 induced:1 jordan:4 noting:1 ideal:1 presence:2 rendering:1 independence:1 xj:1 psychology:1 architecture:1 prototype:5 expression:1 ltd:1 generally:1 clear:1 induces:1 exist:1 estimated:2 per:1 discrete:2 key:2 four:1 nevertheless:1 drawn:1 neuroscale:1 angle:1 powerful:1 decision:1 griffin:1 bit:8 bound:1 fl:1 guaranteed:2 followed:1 quadratic:1 constraint:1 incorporation:1 aspect:1 relatively:2 tv:1 combination:1 increasingly:1 em:3 wi:9 intuitively:1 ln:3 mutually:1 visualization:1 needed:1 instrument:1 end:2 pursuit:1 operation:1 permit:2 hierarchical:3 appropriate:3 petsche:1 simulating:1 alternative:1 top:1 running:2 include:2 giving:1 society:1 implied:1 already:1 xllt:1 
separate:1 ause:1 street:1 sensible:1 reason:1 maximising:1 illustration:1 convincing:1 unfortunately:1 negative:1 implementation:1 observation:7 datasets:7 horseshoe:1 defining:2 variability:1 topical:1 arbitrary:2 introduced:2 inverting:2 specified:1 established:1 below:1 pattern:1 royal:1 overlap:1 difficulty:3 natural:1 representing:1 scheme:1 improve:1 historically:1 eye:1 categorical:2 literature:1 determining:1 loss:1 mixed:2 interesting:1 degree:1 sufficient:1 principle:1 plotting:1 course:1 bias:1 taking:1 boundary:1 dimension:1 xn:1 world:1 contour:1 lnl:1 transaction:1 approximate:1 ml:1 reveals:1 continuous:6 latent:25 iterative:2 table:2 nature:1 investigated:1 necessarily:1 artificially:1 complex:1 noise:4 tl:1 en:1 guise:1 wish:1 exponential:1 xl:1 house:1 ito:1 ix:1 formula:1 british:1 bishop:6 xt:1 specialisation:1 evidence:2 exists:1 intractable:1 effectively:1 lt:1 visual:1 corresponds:1 utilise:1 conditional:5 marked:1 specifically:2 typical:2 principal:5 gauss:1 la:3 latter:1 phenomenon:1 |
615 | 1,562 | Fast Neural Network Emulation of Dynamical
Systems for Computer Animation
Radek Grzeszczuk¹   Demetri Terzopoulos²   Geoffrey Hinton²
¹Intel Corporation, Microcomputer Research Lab, 2200 Mission College Blvd., Santa Clara, CA 95052, USA
²University of Toronto, Department of Computer Science, 10 King's College Road, Toronto, ON M5S 3H5, Canada
Abstract
Computer animation through the numerical simulation of physics-based
graphics models offers unsurpassed realism, but it can be computationally demanding. This paper demonstrates the possibility of replacing the
numerical simulation of nontrivial dynamic models with a dramatically
more efficient "NeuroAnimator" that exploits neural networks. NeuroAnimators are automatically trained off-line to emulate physical dynamics through the observation of physics-based models in action. Depending on the model, its neural network emulator can yield physically
realistic animation one or two orders of magnitude faster than conventional numerical simulation. We demonstrate NeuroAnimators for a variety of physics-based models.
1 Introduction
Animation based on physical principles has been an influential trend in computer graphics
for over a decade (see, e.g., [1, 2, 3]). This is not only due to the unsurpassed realism
that physics-based techniques offer. In conjunction with suitable control and constraint
mechanisms, physical models also facilitate the production of copious quantities of realistic animation in a highly automated fashion. Physics-based animation techniques are
beginning to find their way into high-end commercial systems. However, a well-known
drawback has retarded their broader penetration: compared to geometric models, physical
models typically entail formidable numerical simulation costs.
This paper proposes a new approach to creating physically realistic animation that differs
radically from the conventional approach of numerically simulating the equations of motion of physics-based models. We replace physics-based models by fast emulators which
automatically learn to produce similar motions by observing the models in action. Our
emulators have a neural network structure, hence we dub them NeuroAnimators.
Our work is inspired in part by that of Nguyen and Widrow [4]. Their "truck backer-upper"
demonstrated the neural network based approximation and control of a nonlinear kinematic
system. We introduce several generalizations that enable us to tackle a variety of complex,
fully dynamic models in the context of computer animation. Connectionist approximations
of dynamical systems have been also been applied to robot control (see, e.g., [5,6]).
2 The NeuroAnimator Approach
Our approach is motivated by the following considerations: Whether we are dealing with
rigid [2], articulated [3], or nonrigid [1] dynamic animation models, the numerical simulation of the associated equations of motion leads to the computation of a discrete-time dynamical system of the form s_{t+δt} = Φ[s_t, u_t, f_t]. These (generally nonlinear) equations express the vector s_{t+δt} of state variables of the system (values of the system's degrees of freedom and their velocities) at time t + δt in the future as a function Φ of the state vector s_t, the vector u_t of control inputs, and the vector f_t of external forces acting on the system at time t.
Physics-based animation through the numerical simulation of a dynamical system requires the evaluation of the map Φ at every timestep, which usually involves a non-trivial computation. Evaluating Φ using explicit time-integration methods incurs a computational cost of O(N) operations, where N is proportional to the dimensionality of the state space. Unfortunately, for many dynamic models of interest, explicit methods are plagued by instability, necessitating numerous tiny timesteps δt per unit simulation time. Alternatively, implicit time-integration methods usually permit larger timesteps, but they compute Φ by solving a system of N algebraic equations, generally incurring a cost of O(N³) per timestep.
Is it possible to replace the conventional numerical simulator by a significantly cheaper alternative? A crucial realization is that the substitute, or emulator, need not compute the map Φ exactly, but merely approximate it to a degree of precision that preserves the perceived faithfulness of the resulting animation to the simulated dynamics of the physical model. Neural networks offer a general mechanism for approximating complex maps in higher dimensional spaces [7].¹ Our premise is that, to a sufficient degree of accuracy and at significant computational savings, trained neural networks can approximate maps Φ not just for simple dynamical systems, but also for those associated with dynamic models that are among the most complex reported in the graphics literature to date.
The NeuroAnimator, which uses neural networks to emulate physics-based animation, learns an approximation to the dynamic model by observing instances of state transitions, as well as control inputs and/or external forces that cause these transitions. By generalizing from the sparse examples presented to it, a trained NeuroAnimator can emulate an infinite variety of continuous animations that it has never actually seen. Each emulation step costs only O(N²) operations, but it is possible to gain additional efficiency relative to a numerical simulator by training neural networks to approximate a lengthy chain of evaluations of the discrete-time dynamical system.
¹ Note that Φ is in general a high-dimensional map from R^{s+u+f} → R^s, where s, u, and f denote the dimensionalities of the state, control, and external force vectors.
Thus, the emulator network can perform "super timesteps" Δt = nδt, typically one or two orders of magnitude larger than δt for the competing implicit time-integration scheme, thereby achieving outstanding efficiency without serious loss of accuracy.
3
From Physics-Based Models to NeuroAnimators
Our task is to construct neural networks that approximate Φ in the dynamical system. We propose to employ backpropagation to train feedforward networks N_Φ, with a single layer of sigmoidal hidden units, to predict future states using super time steps Δt = nδt while containing the approximation error so as not to appreciably degrade the physical realism of the resulting animation. The basic emulation step is s_{t+Δt} = N_Φ[s_t, u_t, f_t]. The trained emulator network N_Φ takes as input the state of the model, its control inputs, and the external forces acting on it at time t, and produces as output the state of the model at time t + Δt by evaluating the network. The emulation process is a sequence of these evaluations. After each evaluation, the network control and force inputs receive new values, and the network state inputs receive the emulator outputs from the previous evaluation. Since the emulation step is large compared with the numerical simulation step, we resample the motion trajectory at the animation frame rate, computing intermediate states through linear interpolation of states obtained from the emulation.
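To make the emulation and resampling loop concrete, here is a minimal sketch of how it could be organized; the `emulator` callable, the timestep values, and the interpolation details are illustrative assumptions, not the authors' code.

```python
import numpy as np

def emulate(emulator, s0, controls, forces, super_dt, frame_dt):
    """Iterate s_{t+Dt} = N_Phi[s_t, u_t, f_t], then resample the motion
    at the animation frame rate by linear interpolation. Assumes
    frame_dt divides super_dt evenly."""
    states = [np.asarray(s0, dtype=float)]
    for u, f in zip(controls, forces):
        # Previous emulator output feeds back in as the new state input.
        states.append(emulator(states[-1], u, f))

    frames, n_sub = [], int(round(super_dt / frame_dt))
    for k in range(len(states) - 1):
        for j in range(n_sub):
            alpha = j / n_sub
            frames.append((1.0 - alpha) * states[k] + alpha * states[k + 1])
    frames.append(states[-1])
    return frames
```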
3.1
Network Input/Output Structure
Fig. 1(a) illustrates different emulator input/output structures. The emulator network has a single set of output variables specifying s_{t+Δt}. In general, for a so-called active model, which includes control inputs, under the influence of unpredictable applied forces, we employ a full network with three sets of input variables: s_t, u_t, and f_t, as shown in the figure. For passive models, the control u_t = 0 and the network simplifies to one with two sets of inputs, s_t and f_t. In the special case when the forces f_t are completely determined by the state of the system s_t, we can suppress the f_t inputs, allowing the network to learn the effects of these forces from the state transition training data, thus yielding a simpler emulator with two input sets s_t and u_t. The simplest type of emulator has only a single set of inputs, s_t. This emulator suffices to approximate passive models acted upon by deterministic external forces.
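A small sketch of how these four input structures might be assembled in practice (names and concatenation order are our own):

```python
import numpy as np

def emulator_input(s_t, u_t=None, f_t=None):
    """Assemble the network input for the four structures in Fig. 1(a):
    (s, u, f) for an active model under unpredictable forces, (s, f)
    for passive models, (s, u) when forces follow from the state, and
    (s,) alone for passive models under deterministic forces."""
    parts = [np.asarray(s_t)]
    if u_t is not None:
        parts.append(np.asarray(u_t))
    if f_t is not None:
        parts.append(np.asarray(f_t))
    return np.concatenate(parts)
```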
3.2
Input and Output Transformations
The accurate approximation of complex functional mappings using neural networks can be challenging. We have observed that a simple feedforward neural network with a single layer of sigmoid units has difficulty producing an accurate approximation to the dynamics of physical models. In practice, we often must transform the emulator to ensure a good approximation of the map Φ.
A fundamental problem is that the state variables of a dynamical system can have a large dynamic range (in principle, from −∞ to +∞). To approximate a nonlinear map Φ accurately over a large domain, we would need to use a neural network with many sigmoid units, each shifted and scaled so that their nonlinear segments cover different parts of the domain. The direct approximation of Φ is therefore impractical. A successful strategy is to train networks to emulate changes in state variables rather than their actual values, since state changes over small timesteps will have a significantly smaller dynamic range. Hence, in Fig. 1(b) (top) we restructure our simple network N_Φ as a network N′_Φ which is trained
Figure 1: (a) Different types of emulators. (b) Transforming a simple feedforward neural network N_Φ into a practical emulator network N‴_Φ that is easily trained to emulate physics-based models. The following operators perform the appropriate pre- and post-processing: T_x transforms inputs to local coordinates, T′_x normalizes inputs, T′_y unnormalizes outputs, T_y transforms outputs to global coordinates, and T_Δ converts from a state change to the next state (see text and [8] for the details).
to emulate the change in the state vector Δs_t for given state, external force, and control inputs, followed by an operator T_Δ that computes s_{t+Δt} = s_t + Δs_t to recover the next state.
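In code, the operator T_Δ is a single addition wrapped around a network that predicts the state change; a sketch, assuming a generic `net` callable:

```python
def emulate_step_delta(net, s_t, u_t, f_t):
    """One emulation step via a network that predicts the state change
    Delta s_t, followed by the operator T_Delta."""
    delta_s = net(s_t, u_t, f_t)   # trained on state differences
    return s_t + delta_s           # T_Delta: recover s_{t+Dt}
```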
We can further improve the approximation power of the emulator network by exploiting natural invariances. In particular, since the map Φ is invariant under rotation and translation, we replace N′_Φ with an operator T_x that converts the inputs from the world coordinate system to the local coordinate system of the model, a network N″_Φ that is trained to emulate state changes represented in the local coordinate system, and an operator T_y that converts the output of N″_Φ back to world coordinates (Fig. 1(b), center).
Since the values of state, force, and control variables can deviate significantly, their effect on the network outputs is uneven, causing problems when large inputs must have a small influence on outputs. To make inputs contribute more evenly to the network outputs, we normalize groups of variables so that they have zero means and unit variances. With normalization, we can furthermore expect the weights of the trained network to be of order unity, and they can be given a simple random initialization prior to training. Hence, in Fig. 1(b) (bottom) we replace N″_Φ with an operator T′_x that normalizes its inputs, a network N‴_Φ that assumes zero mean, unit variance inputs and outputs, and an operator T′_y that unnormalizes the outputs to recover their original distributions.
Although the final emulator in Fig. 1(b) is structurally more complex than the standard feedforward neural network N_Φ that it replaces, the operators denoted by T are completely determined by the state of the model and the distribution of the training data, and the emulator network N‴_Φ is much easier to train.
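The pre- and post-processing operators are fixed affine maps determined by training-set statistics; the following sketch composes them around the core network (class and attribute names are ours):

```python
import numpy as np

class NormalizedEmulator:
    """Compose T'_x (normalize inputs), the core unit-variance network,
    and T'_y (unnormalize outputs). The means and standard deviations
    are fixed statistics of the training set."""
    def __init__(self, net, x_mean, x_std, y_mean, y_std):
        self.net = net
        self.x_mean, self.x_std = x_mean, x_std
        self.y_mean, self.y_std = y_mean, y_std

    def __call__(self, x):
        x_n = (x - self.x_mean) / self.x_std     # T'_x
        y_n = self.net(x_n)                      # zero-mean, unit-variance net
        return y_n * self.y_std + self.y_mean    # T'_y
```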
3.3 Hierarchical Networks
As a universal function approximator, a neural network should in principle be able to approximate the map Φ for any dynamical system, given enough sigmoid hidden units and
training data. In practice, however, the number of hidden layer neurons needed and the
training data requirements grow quickly with the size of the network, often making the
training of large networks impractical. To overcome the "curse of dimensionality," we have
found it prudent to structure NeuroAnimators for all but the simplest physics-based models
as hierarchies of smaller networks rather than as large, monolithic networks. The strategy
behind a hierarchical representation is to group state variables according to their dependencies and approximate each tightly coupled group with a subnet that takes part of its input
from a parent network.
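One possible realization of such a hierarchy, with an assumed grouping into a parent network for the global variables and one subnet per tightly coupled group:

```python
def hierarchical_step(parent_net, subnets, global_state, group_states):
    """Advance a hierarchy of emulators: the parent network handles the
    global variables, and each subnet advances one tightly coupled
    group of state variables, taking part of its input from the
    parent's output."""
    next_global = parent_net(global_state)
    next_groups = [subnet(group, next_global)
                   for subnet, group in zip(subnets, group_states)]
    return next_global, next_groups
```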
3.4
Training NeuroAnimators
To arrive at a NeuroAnimator for a given physics-based model, we train the constituent
neural network(s) through backpropagation on training examples generated by simulating
the model. Training requires the generation and processing of many examples, hence it
is typically slow, often requiring several CPU hours. However, once a NeuroAnimator is
trained offline, it can be reused online to produce an infinite variety of fast animations.
The important point is that by generalizing from the sparse training examples, a trained
NeuroAnimator will produce an infinite variety of extended, continuous animations that it
has never "seen".
More specifically, each training example consists of an input vector x and an output vector y. In the general case, the input vector x = [s_0^T, f_0^T, u_0^T]^T comprises the state of the model, the external forces, and the control inputs at time t = 0. The output vector y = s_{Δt} is the state of the model at time t = Δt, where Δt is the duration of the super timestep. To generate each training example, we could start the numerical simulator of the physics-based model with the initial conditions s_0, f_0, and u_0, and run the dynamic simulation for n numerical time steps δt such that Δt = nδt. In principle, we could generate an arbitrarily large set of training examples {x^τ; y^τ}, τ = 1, 2, ..., by repeating this process with different initial conditions. To learn a good neural network approximation N_Φ of the map Φ, we would ideally like to sample Φ as uniformly as possible over its domain, with randomly chosen initial conditions among all valid state, external force, and control combinations. However, we can make better use of computational resources by sampling those state, force, and control inputs that typically occur as a physics-based model is used in practice.
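A sketch of this example-generation loop; `simulate` stands in for one δt step of the numerical simulator and `sample_initial_conditions` for whatever distribution of typical states, forces, and controls is chosen:

```python
import numpy as np

def generate_training_set(simulate, sample_initial_conditions,
                          n_examples, n_substeps):
    """Build pairs x = [s_0, f_0, u_0], y = s_{Dt}, where Dt = n * dt is
    obtained by running n small simulator steps from each sampled
    initial condition."""
    X, Y = [], []
    for _ in range(n_examples):
        s0, f0, u0 = sample_initial_conditions()
        s = s0
        for _ in range(n_substeps):
            s = simulate(s, u0, f0)       # one dt step of the simulator
        X.append(np.concatenate([s0, f0, u0]))
        Y.append(s)                       # state after one super timestep
    return np.array(X), np.array(Y)
```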
We employ a neural network simulator called Xerion which was developed at the University of Toronto. We begin the off-line training process by initializing the weights of N‴_Φ to random values from a uniform distribution in the range [0, 1] (due to the normalization
of inputs and outputs). Xerion automatically terminates the backpropagation learning algorithm when it can no longer reduce the network approximation error significantly. We
use the conjugate gradient method to train networks of small and moderate size. For large
networks, we use gradient descent with momentum. We divide the training examples into
mini-batches, each consisting of approximately 30 uncorrelated examples, and update the
network weights after processing each mini-batch.
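Xerion itself is not reproduced here; the following is a generic mini-batch gradient descent loop with momentum in the spirit described, with `loss_grad` as a placeholder for the network's error gradient:

```python
import numpy as np

def train(weights, X, Y, loss_grad, lr=0.01, momentum=0.9,
          batch_size=30, n_epochs=100):
    """Mini-batch gradient descent with momentum; loss_grad(w, xb, yb)
    returns the gradient of the approximation error on one mini-batch
    of roughly 30 uncorrelated examples."""
    velocity = np.zeros_like(weights)
    n = len(X)
    for _ in range(n_epochs):
        order = np.random.permutation(n)
        for start in range(0, n, batch_size):
            idx = order[start:start + batch_size]
            g = loss_grad(weights, X[idx], Y[idx])
            velocity = momentum * velocity - lr * g
            weights = weights + velocity   # update after every mini-batch
    return weights
```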
4
Results
We have successfully constructed and trained several NeuroAnimators to emulate a variety of physics-based models (Fig. 2). We used SD/FAST (a rigid body dynamics simulator marketed by Symbolic Dynamics, Inc.) to simulate the dynamics of the rigid body
Figure 2: NeuroAnimators used in our experiments. (a) Emulator of a physics-based model
of a planar multi-link pendulum suspended in gravity, subject to joint friction forces, external forces applied on the links, and controlled by independent motor torques at each of the
three joints. (b) Emulator of a physics-based model of a truck implemented as a rigid body,
subject to friction forces where the tires contact the ground, controlled by rear-wheel drive
(forward and reverse) and steerable front wheels. (c) Emulator of a physics-based model of
a lunar lander, implemented as a rigid body subject to gravitational forces and controlled by
a main rocket thruster and three independent attitude jets. (d) Emulator of a biomechanical
(mass-spring-damper) model of a dolphin capable of swimming in simulated water via the
coordinated contraction of 6 independently controlled muscle actuators which deform its
body, producing hydrodynamic propulsion forces.
and articulated models, and we employ the simulator developed in [10] to simulate the
deformable-body dynamics of the dolphin.
In our experiments we have not attempted to minimize the number of network weights required for successful training. We have also not tried to minimize the number of sigmoidal
hidden units, but rather used enough units to obtain networks that generalize well while not
overfitting the training data. We can always expect to be able to satisfy these guidelines in
view of our ability to generate sufficient training data.
An important advantage of using neural networks to emulate dynamical systems is the
speed at which they can be iterated to produce animation. Since the emulator for a dynamical system with the state vector of size N never uses more than O(N) hidden units, it can
be evaluated using only O(N2) operations. By comparison, a single simulation timestep
using an implicit time integration scheme requires O(N3) operations. Moreover, a forward
pass through the neural network is often equivalent to as many as 50 physical simulation
steps, so the efficiency is even more dramatic, yielding performance improvements up to
two orders of magnitude faster than the physical simulator. A NeuroAnimator that predicts
100 physical simulation steps offers a speedup of anywhere between 50 and 100 times
depending on the type of physical model.
5
Control Learning
An additional benefit of the NeuroAnimator is that it enables a novel, highly efficient approach to the difficult problem of controlling physics-based models to synthesize motions
that satisfy prescribed animation goals. The neural network approximation to the physical
model is differentiable; hence, it can be used to discover the causal effects that control force
inputs have on the actions of the models. Outstanding efficiency stems from exploiting the
trained NeuroAnimator to compute partial derivatives of output states with respect to control inputs. The efficient computation of the approximate gradient enables the utilization of
fast gradient-based optimization for controller synthesis.
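A sketch of gradient-based controller synthesis through the emulator. For simplicity it uses finite differences in place of backpropagation through the network, and the quadratic goal cost is an assumption:

```python
import numpy as np

def rollout_cost(emulator, s0, controls, goal):
    """Roll the emulator forward under a control sequence and score the
    final state against an animation goal."""
    s = s0
    for u in controls:
        s = emulator(s, u)
    return float(np.sum((s - goal) ** 2))

def optimize_controls(emulator, s0, controls, goal,
                      lr=0.1, n_iters=200, h=1e-4):
    """Gradient descent on the control sequence; finite differences
    stand in for backpropagating through the differentiable emulator."""
    controls = np.array(controls, dtype=float)
    for _ in range(n_iters):
        base = rollout_cost(emulator, s0, controls, goal)
        grad = np.zeros_like(controls)
        for idx in np.ndindex(controls.shape):
            controls[idx] += h
            grad[idx] = (rollout_cost(emulator, s0, controls, goal) - base) / h
            controls[idx] -= h
        controls -= lr * grad
    return controls
```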
Nguyen and Widrow's [4] "truck backer-upper" demonstrated the neural network based
approximation and control of a nonlinear kinematic system. Our technique offers a new
controller synthesis algorithm that works well in dynamic environments with changing
control objectives. See [8, 9] for the details.
6
Conclusion
We have introduced an efficient alternative to the conventional approach of producing physically realistic animation through numerical simulation. Our approach involves the learning
of neural network emulators of physics-based models by observing the dynamic state transitions produced by such models in action. The emulators approximate physical dynamics
with dramatic efficiency, yet without serious loss of apparent fidelity. Our performance
benchmarks indicate that the neural network emulators can yield physically realistic animation one or two orders of magnitude faster than conventional numerical simulation of the
associated physics-based models. Our new control learning algorithm, which exploits fast
emulation and the differentiability of the network approximation, is orders of magnitude
faster than competing controller synthesis algorithms for computer animation.
Acknowledgements
We thank Zoubin Ghahramani for valuable discussions leading to the idea of the rotation and translation invariant emulator, which was crucial to the success of this work. We are indebted to Steve Hunt,
John Funge, Alexander Reshetov, Sonja Jeter and Mike Gendimenico at Intel, and Mike Revow, Drew
van Camp and Michiel van de Panne at the University of Toronto for their assistance.
References
[1] D. Terzopoulos, J. Platt, A. Barr, K. Fleischer. Elastically deformable models. In M.C. Stone,
ed., Computer Graphics (SIGGRAPH '87 Proceedings), 21, 205-214, July 1987.
[2] J.K. Hahn. Realistic animation of rigid bodies. In J. Dill, ed., Computer Graphics (SIGGRAPH
'88 Proceedings), 22, 299-308, August 1988.
[3] J.K. Hodgins, W.L. Wooten, D.C. Brogan, J.F. O'Brien. Animating human athletics. In R. Cook,
ed., Proc. of ACM SIGGRAPH 95 Conf., 71-78, August 1995.
[4] D. Nguyen, B. Widrow. The truck backer-upper: An example of self-learning in neural networks. In Proc. Inter. Joint Conf. Neural Networks, 357-363. IEEE Press, 1989.
[5] M. I. Jordan. Supervised learning and systems with excess degrees of freedom. Technical
Report 88-27, Univ. of Massachusetts, Comp. & Info. Sci., Amherst, MA, 1988.
[6] K. S. Narendra, K. Parthasarathy. Gradient methods for the optimization of dynamical systems
containing neural networks. IEEE Trans. on Neural Networks, 2(2):252-262, 1991.
[7] G. Cybenko. Approximation by superpositions of a sigmoidal function. Mathematics of Control,
Signals, and Systems, 2(4):303-314, 1989.
[8] R. Grzeszczuk. NeuroAnimator: Fast Neural Network Emulation and Control of Physics-Based
Models. PhD thesis, Dept. of Comp. Sci., Univ. of Toronto, May 1998.
[9] R. Grzeszczuk, D. Terzopoulos, G. Hinton. NeuroAnimator: Fast neural network emulation
and control of physics-based models. In M. Cohen, ed., Proc. of ACM SIGGRAPH 98 Conf.,
9-20, July 1998.
[10] X. Tu, D. Terzopoulos. Artificial fishes: Physics, locomotion, perception, behavior. In A. Glassner, ed., Proc. of ACM SIGGRAPH 94 Conf., 43-50, July 1994.
616 | 1,563 | Shrinking the Tube:
A New Support Vector Regression Algorithm
Bernhard Schölkopf†,*, Peter Bartlett*, Alex Smola†,*, Robert Williamson*
† GMD FIRST, Rudower Chaussee 5, 12489 Berlin, Germany
* FEIT/RSISE, Australian National University, Canberra 0200, Australia
bs, smola@first.gmd.de, Peter.Bartlett, Bob.Williamson@anu.edu.au
Abstract
A new algorithm for Support Vector regression is described. For a priori chosen ν, it automatically adjusts a flexible tube of minimal radius to the data such that at most a fraction ν of the data points lie outside. Moreover, it is shown how to use parametric tube shapes with non-constant radius. The algorithm is analysed theoretically and experimentally.
1 INTRODUCTION
Support Vector (SV) machines comprise a new class of learning algorithms, motivated by
results of statistical learning theory (Vapnik, 1995). Originally developed for pattern recognition, they represent the decision boundary in terms of a typically small subset (Schölkopf et al., 1995) of all training examples, called the Support Vectors. In order for this property to carry over to the case of SV Regression, Vapnik devised the so-called ε-insensitive loss function |y − f(x)|_ε = max{0, |y − f(x)| − ε}, which does not penalize errors below some ε > 0, chosen a priori. His algorithm, which we will henceforth call ε-SVR, seeks to estimate functions

   f(x) = (w · x) + b,   w, x ∈ R^N, b ∈ R,                    (1)

based on data

   (x_1, y_1), ..., (x_ℓ, y_ℓ) ∈ R^N × R,                      (2)
by minimizing the regularized risk functional

   ||w||²/2 + C · R_emp^ε,                                     (3)

where C is a constant determining the trade-off between minimizing training errors and minimizing the model complexity term ||w||², and R_emp^ε := (1/ℓ) Σ_{i=1}^ℓ |y_i − f(x_i)|_ε.
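For concreteness, the ε-insensitive loss and the empirical risk term can be written in a few lines (a sketch; names are ours):

```python
import numpy as np

def eps_insensitive(y, f_x, eps):
    """|y - f(x)|_eps = max(0, |y - f(x)| - eps): residuals inside the
    eps-tube are not penalized at all."""
    return np.maximum(0.0, np.abs(y - f_x) - eps)

def empirical_risk(y, f_x, eps):
    """R_emp^eps: average eps-insensitive loss over the training sample."""
    return np.mean(eps_insensitive(y, f_x, eps))
```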
The parameter ε can be useful if the desired accuracy of the approximation can be specified
beforehand. In some cases, however, we just want the estimate to be as accurate as possible,
without having to commit ourselves to a certain level of accuracy.
We present a modification of the ε-SVR algorithm which automatically minimizes ε, thus adjusting the accuracy level to the data at hand.
2
ν-SV REGRESSION AND ε-SV REGRESSION
To estimate functions (1) from empirical data (2) we proceed as follows (Schölkopf et al., 1998a). At each point x_i, we allow an error of ε. Everything above ε is captured in slack variables ξ_i^(*) ((*) being a shorthand implying both the variables with and without asterisks), which are penalized in the objective function via a regularization constant C, chosen a priori (Vapnik, 1995). The tube size ε is traded off against model complexity and slack variables via a constant ν > 0:

   minimize   τ(w, ξ^(*), ε) = ||w||²/2 + C · (νε + (1/ℓ) Σ_{i=1}^ℓ (ξ_i + ξ_i*))     (4)

subject to
   ((w · x_i) + b) − y_i ≤ ε + ξ_i                                                    (5)
   y_i − ((w · x_i) + b) ≤ ε + ξ_i*                                                   (6)
   ξ_i^(*) ≥ 0,   ε ≥ 0.                                                              (7)
Here and below, it is understood that i = 1, ..., ℓ, and that boldface Greek letters denote ℓ-dimensional vectors of the corresponding variables. Introducing a Lagrangian with multipliers α_i^(*), η_i^(*), β ≥ 0, we obtain the Wolfe dual problem. Moreover, as Boser et al. (1992), we substitute a kernel k for the dot product, corresponding to a dot product in some feature space related to input space via a nonlinear map Φ,

   k(x, y) = (Φ(x) · Φ(y)).                                        (8)

This leads to the ν-SVR Optimization Problem: for ν ≥ 0, C > 0,

   maximize   W(α^(*)) = Σ_{i=1}^ℓ (α_i* − α_i) y_i − (1/2) Σ_{i,j=1}^ℓ (α_i* − α_i)(α_j* − α_j) k(x_i, x_j)   (9)

subject to
   Σ_{i=1}^ℓ (α_i − α_i*) = 0                                      (10)
   α_i^(*) ∈ [0, C/ℓ]                                              (11)
   Σ_{i=1}^ℓ (α_i + α_i*) ≤ C · ν.                                 (12)

The regression estimate can be shown to take the form

   f(x) = Σ_{i=1}^ℓ (α_i* − α_i) k(x_i, x) + b,                    (13)
where b (and ε) can be computed by taking into account that (5) and (6) (substitution of Σ_j (α_j* − α_j) k(x_j, x) for (w · x) is understood) become equalities with ξ_i^(*) = 0 for points with 0 < α_i^(*) < C/ℓ, respectively, due to the Karush-Kuhn-Tucker conditions (cf. Vapnik, 1995). The latter moreover imply that in the kernel expansion (13), only those α_i^(*) will be nonzero that correspond to a constraint (5)/(6) which is precisely met. The respective patterns x_i are referred to as Support Vectors.
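The dual problem (9)-(12) and the recovery of b and ε from on-edge Support Vectors can be sketched with a general-purpose optimizer; this is a didactic stand-in for a dedicated QP code such as LOQO, and it assumes that at least one point sits exactly on each edge of the tube:

```python
import numpy as np
from scipy.optimize import minimize

def rbf(X1, X2, gamma=1.0):
    """Gaussian RBF kernel matrix between rows of X1 and X2."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nu_svr_fit(X, y, nu=0.2, C=100.0, gamma=1.0):
    """Solve the nu-SVR dual (9) subject to (10)-(12); X is (l, d)."""
    l = len(y)
    K = rbf(X, X, gamma)

    def neg_W(z):                      # z = [alpha, alpha*]
        beta = z[l:] - z[:l]           # alpha* - alpha
        return -(beta @ y) + 0.5 * beta @ K @ beta

    cons = [
        {"type": "eq",   "fun": lambda z: np.sum(z[:l] - z[l:])},   # (10)
        {"type": "ineq", "fun": lambda z: C * nu - np.sum(z)},      # (12)
    ]
    bounds = [(0.0, C / l)] * (2 * l)                               # (11)
    z0 = np.full(2 * l, C * nu / (4 * l))      # feasible starting point
    res = minimize(neg_W, z0, method="SLSQP", bounds=bounds,
                   constraints=cons)
    a, a_star = res.x[:l], res.x[l:]
    beta = a_star - a

    # KKT: 0 < alpha_i < C/l  =>  f(x_i) - y_i = eps (upper edge);
    #      0 < alpha_i* < C/l =>  y_i - f(x_i) = eps (lower edge).
    f0 = K @ beta                      # f(x_i) without the offset b
    tol = 1e-6 * C / l
    i = np.where((a > tol) & (a < C / l - tol))[0][0]
    j = np.where((a_star > tol) & (a_star < C / l - tol))[0][0]
    eps = 0.5 * ((f0[i] - y[i]) + (y[j] - f0[j]))
    b = 0.5 * ((y[i] - f0[i]) + (y[j] - f0[j]))
    return beta, b, eps

def nu_svr_predict(X_train, beta, b, X_new, gamma=1.0):
    """Kernel expansion (13): f(x) = sum_i beta_i k(x_i, x) + b."""
    return rbf(X_new, X_train, gamma) @ beta + b
```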
Before we give theoretical results explaining the significance of the parameter ν, the following observation concerning ε is helpful. If ν > 1, then ε = 0, since it does not pay to increase ε (cf. (4)). If ν ≤ 1, it can still happen that ε = 0, e.g. if the data are noise-free and can perfectly be interpolated with a low capacity model. The case ε = 0, however, is not what we are interested in; it corresponds to plain L1 loss regression. Below, we will use the term errors to refer to training points lying outside of the tube, and the term fraction of errors/SVs to denote the relative numbers of errors/SVs, i.e. divided by ℓ.

Proposition 1 Assume ε > 0. The following statements hold:

(i) ν is an upper bound on the fraction of errors.
(ii) ν is a lower bound on the fraction of SVs.
(iii) Suppose the data (2) were generated iid from a distribution P(x, y) = P(x)P(y|x) with P(y|x) continuous. With probability 1, asymptotically, ν equals
both the fraction of SVs and the fraction of errors.
The first two statements of this proposition can be proven from the structure of the dual optimization problem, with (12) playing a crucial role. Presently, we instead give a graphical
proof based on the primal problem (Fig. 1).
To understand the third statement, note that all errors are also SVs, but there can be SVs
which are not errors: namely, if they lie exactly at the edge of the tube. Asymptotically,
however, these SVs form a negligible fraction of the whole SV set, and the set of errors and
the one of SVs essentially coincide. This is due to the fact that for a class of functions with well-behaved capacity (such as SV regression functions), and for a distribution satisfying the above continuity condition, the number of points that the tube edges f ± ε can pass through cannot asymptotically increase linearly with the sample size. Interestingly, the proof (Schölkopf et al., 1998a) uses a uniform convergence argument similar in spirit to
those used in statistical learning theory.
Due to this proposition, 0 ≤ ν ≤ 1 can be used to control the number of errors (note that for ν ≥ 1, (11) implies (12), since α_i · α_i* = 0 for all i (Vapnik, 1995)). Moreover, since the constraint (10) implies that (12) is equivalent to Σ_i α_i^(*) ≤ Cν/2, we conclude that Proposition 1 actually holds for the upper and the lower edge of the tube separately, with ν/2 each. As an aside, note that by the same argument, the numbers of SVs at the two edges of the standard ε-SVR tube asymptotically agree.
Moreover, note that this bears on the robustness of ν-SVR. At first glance, SVR seems all but robust: using the ε-insensitive loss function, only the patterns outside of the ε-tube contribute to the empirical risk term, whereas the patterns closest to the estimated regression have zero loss. This, however, does not mean that it is only the outliers that determine the regression. In fact, the contrary is the case: one can show that local movements of target values y_i of points x_i outside the tube do not influence the regression (Schölkopf et al., 1998c). Hence, ν-SVR is a generalization of an estimator for the mean of a random variable which throws away the largest and smallest examples (a fraction of at most ν/2 of either category), and estimates the mean by taking the average of the two extremal ones of the remaining examples. This is close in spirit to robust estimators like the trimmed mean.
Let us briefly discuss how the new algorithm relates to ε-SVR (Vapnik, 1995). By rewriting (3) as a constrained optimization problem, and deriving a dual much like we did for ν-SVR,
Figure 1: Graphical depiction of the ν-trick. Imagine increasing ε, starting from 0. The first term in νε + (1/ℓ) Σ_{i=1}^ℓ (ξ_i + ξ_i*) (cf. (4)) will increase proportionally to ν, while the second term will decrease proportionally to the fraction of points outside of the tube. Hence, ε will grow as long as the latter fraction is larger than ν. At the optimum, it therefore must be ≤ ν (Proposition 1, (i)). Next, imagine decreasing ε, starting from some large value. Again, the change in the first term is proportional to ν, but this time, the change in the second term is proportional to the fraction of SVs (even points on the edge of the tube will contribute). Hence, ε will shrink as long as the fraction of SVs is smaller than ν, eventually leading to Proposition 1, (ii).
one arrives at the following quadratic program: maximize

   W(α, α*) = −ε Σ_{i=1}^ℓ (α_i* + α_i) + Σ_{i=1}^ℓ (α_i* − α_i) y_i − (1/2) Σ_{i,j=1}^ℓ (α_i* − α_i)(α_j* − α_j) k(x_i, x_j)   (14)

subject to (10) and (11). Compared to (9), we have an additional term −ε Σ_{i=1}^ℓ (α_i* + α_i), which makes it plausible that the constraint (12) is not needed.

In the following sense, ν-SVR includes ε-SVR. Note that in the general case, using kernels, w is a vector in feature space.

Proposition 2 If ν-SVR leads to the solution ε̄, w̄, b̄, then ε-SVR with ε set a priori to ε̄, and the same value of C, has the solution w̄, b̄.

Proof If we minimize (4), then fix ε and minimize only over the remaining variables, the solution does not change. □
3
PARAMETRIC INSENSITIVITY MODELS
We generalized ε-SVR by considering the tube as not given but instead estimated it as a model parameter. What we have so far retained is the assumption that the ε-insensitive zone has a tube (or slab) shape. We now go one step further and use parametric models of arbitrary shape. Let {ζ_q^(*)} (here and below, q = 1, ..., p is understood) be a set of 2p positive functions on R^N. Consider the following quadratic program: for given ν_1^(*), ..., ν_p^(*) ≥ 0, minimize

   τ(w, ξ^(*), ε^(*)) = ||w||²/2 + C · ( Σ_{q=1}^p (ν_q ε_q + ν_q* ε_q*) + (1/ℓ) Σ_{i=1}^ℓ (ξ_i + ξ_i*) )     (15)

subject to
   ((w · x_i) + b) − y_i ≤ Σ_q ε_q ζ_q(x_i) + ξ_i        (16)
   y_i − ((w · x_i) + b) ≤ Σ_q ε_q* ζ_q*(x_i) + ξ_i*     (17)
   ξ_i^(*) ≥ 0,   ε_q^(*) ≥ 0.                           (18)
A calculation analogous to Sec. 2 shows that the Wolfe dual consists of maximizing (9) subject to (10), (11), and, instead of (12), the modified constraints Σ_{i=1}^ℓ α_i^(*) ζ_q^(*)(x_i) ≤ C · ν_q^(*). In the experiments in Sec. 4, we use a simplified version of this optimization problem, where we drop the term ν_q* ε_q* from the objective function (15), and use ε_q and ζ_q in (17). By this, we render the problem symmetric with respect to the two edges of the tube. In addition, we use p = 1. This leads to the same Wolfe dual, except for the last constraint, which becomes (cf. (12))

   Σ_{i=1}^ℓ (α_i* + α_i) ζ(x_i) ≤ C · ν.                (19)

The advantage of this setting is that since the same ν is used for both sides of the tube, the computation of ε, b is straightforward: for instance, by solving a linear system, using two conditions as those described following (13). Otherwise, general statements are harder to make: the linear system can have a zero determinant, depending on whether the functions ζ_q^(*), evaluated on the x_i with 0 < α_i^(*) < C/ℓ, are linearly dependent. The latter occurs, for instance, if we use constant functions ζ^(*) ≡ 1. In this case, it is pointless to use two different values ν, ν*; for, the constraint (10) then implies that both sums Σ_{i=1}^ℓ α_i^(*) will be bounded by C · min{ν, ν*}. We conclude this section by giving, without proof, a generalization of Proposition 1, (iii), to the optimization problem with constraint (19):
Proposition 3 Assume ε > 0. Suppose the data (2) were generated iid from a distribution P(x, y) = P(x)P(y|x) with P(y|x) continuous. With probability 1, asymptotically, the fractions of SVs and errors equal ν · (∫ ζ(x) dP̃(x))^{−1}, where P̃ is the asymptotic distribution of SVs over x.
4 EXPERIMENTS AND DISCUSSION
In the experiments, we used the optimizer LOQO (http://www.princeton.edu/~rvdb/). This has the serendipitous advantage that the primal variables b and ε can be recovered as the dual variables of the Wolfe dual (9) (i.e. the double dual variables) fed into the optimizer. In Fig. 2, the task was to estimate a regression of a noisy sinc function, given ℓ examples (x_i, y_i), with x_i drawn uniformly from [−3, 3], and y_i = sin(πx_i)/(πx_i) + υ_i, with υ_i drawn from a Gaussian with zero mean and variance σ². We used the default parameters ℓ = 50, C = 100, σ = 0.2, and the RBF kernel k(x, x′) = exp(−|x − x′|²).
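The toy setup is easy to reproduce with off-the-shelf software; scikit-learn's NuSVR implements a ν-SVR of this kind, although its C convention differs from the one used here, so the following sketch is indicative rather than an exact replication:

```python
import numpy as np
from sklearn.svm import NuSVR

rng = np.random.RandomState(0)
l, sigma = 50, 0.2
x = rng.uniform(-3.0, 3.0, l)
y = np.sinc(x) + sigma * rng.randn(l)      # np.sinc(x) = sin(pi x)/(pi x)

# gamma=1.0 matches k(x, x') = exp(-|x - x'|^2); the value of C is only
# indicative because libsvm's C convention differs from this paper's.
model = NuSVR(nu=0.2, C=100.0, kernel="rbf", gamma=1.0)
model.fit(x.reshape(-1, 1), y)
print("fraction of SVs:", len(model.support_) / l)   # >= nu by Prop. 1 (ii)
```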
Figure 3 gives an illustration of how one can make use of parametric insensitivity models as proposed in Sec. 3. Using the proper model, the estimate gets much better. In the parametric case, we used ν = 0.1 and ζ(x) = sin²((2π/3)x), which, due to ∫ ζ(x) dP(x) = 1/2, corresponds to our standard choice ν = 0.2 in ν-SVR (cf. Proposition 3). The experimental findings are consistent with the asymptotics predicted theoretically even if we assume a uniform distribution of SVs: for ℓ = 200, we got 0.24 and 0.19 for the fraction of SVs and errors, respectively.
This method allows the incorporation of prior knowledge into the loss function. Although
this approach at first glance seems fundamentally different from incorporating prior knowledge directly into the kernel (Schölkopf et al., 1998b), from the point of view of statistical
Figure 2: Left: ν-SV regression with ν = 0.2 (top) and ν = 0.8 (bottom). The larger ν allows more points to lie outside the tube (see Sec. 2). The algorithm automatically adjusts ε to 0.22 (top) and 0.04 (bottom). Shown are the sinc function (dotted), the regression f, and the tube f ± ε. Middle: ν-SV regression on data with noise σ = 0 (top) and σ = 1 (bottom). In both cases, ν = 0.2. The tube width automatically adjusts to the noise (top: ε = 0, bottom: ε = 1.19). Right: ε-SV regression (Vapnik, 1995) on data with noise σ = 0 (top) and σ = 1 (bottom). In both cases, ε = 0.2. This choice, which has to be specified a priori, is ideal for neither case: in the top figure, the regression estimate is biased; in the bottom figure, ε does not match the external noise (cf. Smola et al., 1998).
Figure 3: Toy example, using prior knowledge about an x-dependence of the noise. Additive noise (σ = 1) was multiplied by sin²((2π/3)x). Left: the same function was used as ζ as a parametric insensitivity tube (Sec. 3). Right: ν-SVR with standard tube.
Table 1: Results for the Boston housing benchmark; top: ν-SVR, bottom: ε-SVR. MSE: mean squared errors, STD: standard deviations thereof (100 trials), Errors: fraction of training points outside the tube, SVs: fraction of training points which are SVs.

ν             0.1   0.2   0.3   0.4   0.5   0.6   0.7   0.8   0.9   1.0
automatic ε   2.6   1.7   1.2   0.8   0.6   0.3   0.0   0.0   0.0   0.0
MSE           9.4   8.7   9.3   9.5  10.0  10.6  11.3  11.3  11.3  11.3
STD           6.4   6.8   7.6   7.9   8.4   9.0   9.6   9.5   9.5   9.5
Errors        0.0   0.1   0.2   0.2   0.3   0.4   0.5   0.5   0.5   0.5
SVs           0.3   0.4   0.6   0.7   0.8   0.9   1.0   1.0   1.0   1.0

ε                0     1     2     3     4     5     6     7     8     9    10
MSE           11.3   9.5   8.8   9.7  11.2  13.1  15.6  18.2  22.1  27.0  34.3
STD            9.5   7.7   6.8   6.2   6.3   6.0   6.1   6.2   6.6   7.3   8.4
Errors         0.5   0.2   0.1   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0
SVs            1.0   0.6   0.4   0.3   0.2   0.1   0.1   0.1   0.1   0.1   0.1
learning theory the two approaches are closely related: in both cases, the structure of the loss-function-induced class of functions (which is the object of interest for generalization error bounds) is customized; in the first case, by changing the loss function, in the second case, by changing the class of functions that the estimate is taken from.

Empirical studies using ε-SVR have reported excellent performance on the widely used Boston housing regression benchmark set (Stitson et al., 1999). Due to Proposition 2, the only difference between ν-SVR and standard ε-SVR lies in the fact that different parameters, ε vs. ν, have to be specified a priori. Consequently, we are in this experiment only interested in these parameters and simply adjusted C and the width 2σ² in k(x, y) = exp(−||x − y||²/(2σ²)) as Schölkopf et al. (1997): we used 2σ² = 0.3 · N, where N = 13 is the input dimensionality, and C/ℓ = 10 · 50 (i.e. the original value of 10 was corrected since in the present case, the maximal y-value is 50). We performed 100 runs, where each time the overall set of 506 examples was randomly split into a training set of ℓ = 481 examples and a test set of 25 examples. Table 1 shows that in a wide range of ν (note that only 0 ≤ ν ≤ 1 makes sense), we obtained performances which are close to the best performances that can be achieved by selecting ε a priori by looking at the test set. Finally, note that although we did not use validation techniques to select the optimal values for C and 2σ², we obtained performance which is state of the art (Stitson et al. (1999) report an MSE of 7.6 for ε-SVR using ANOVA kernels, and 11.7 for Bagging trees). Table 1 moreover shows that ν can be used to control the fraction of SVs/errors.
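A sketch of this evaluation protocol under the stated kernel width; the data loading and the value passed as C are assumptions (the libsvm-style C in NuSVR is scaled differently from the C above):

```python
import numpy as np
from sklearn.svm import NuSVR

def housing_protocol(X, y, nu, n_trials=100, seed=0):
    """Random 481/25 splits of the 506 examples, averaged over trials.
    2 sigma^2 = 0.3 N translates to gamma = 1 / (0.3 N) for the RBF
    kernel; the value of C is only a placeholder."""
    rng = np.random.RandomState(seed)
    N = X.shape[1]
    mses = []
    for _ in range(n_trials):
        perm = rng.permutation(len(y))
        tr, te = perm[:481], perm[481:]
        model = NuSVR(nu=nu, C=500.0, gamma=1.0 / (0.3 * N))
        model.fit(X[tr], y[tr])
        mses.append(np.mean((model.predict(X[te]) - y[te]) ** 2))
    return np.mean(mses), np.std(mses)
```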
Discussion. The theoretical and experimental analyses suggest that ν provides a way to control an upper bound on the number of training errors which is tighter than the one used in the soft margin hyperplane (Vapnik, 1995). In many cases, this makes it a parameter which is more convenient than the one in ε-SVR. Asymptotically, it directly controls the
number of Support Vectors, and the latter can be used to give a leave-one-out generalization bound (Vapnik, 1995). In addition, ν characterizes the compression ratio: it suffices to train the algorithm only on the SVs, leading to the same solution (Schölkopf et al., 1995). In ε-SVR, the tube width ε must be specified a priori; in ν-SVR, which generalizes the idea of the trimmed mean, it is computed automatically. Desirable properties of ε-SVR, including the formulation as a definite quadratic program, and the sparse SV representation of the solution, are retained. We are optimistic that in many applications, ν-SVR will be more robust than ε-SVR. Among these should be the reduced set algorithm of Osuna and Girosi (1999), which approximates the SV pattern recognition decision surface by ε-SVR. Here, ν should give a direct handle on the desired speed-up.

One of the immediate questions that a ν-approach to SV regression raises is whether a similar algorithm is possible for the case of pattern recognition. This question has recently been answered in the affirmative (Schölkopf et al., 1998c). Since the pattern recognition algorithm (Vapnik, 1995) does not use ε, the only parameter that we can dispose of by using ν is the regularization constant C. This leads to a dual optimization problem with a homogeneous quadratic form, and ν lower bounding the sum of the Lagrange multipliers. Whether we could have abolished C in the regression case, too, is an open problem.
Acknowledgement
This work was supported by the ARC and the DFG (# Ja 379171).
References
B. E. Boser, I. M. Guyon, and V. N. Vapnik. A training algorithm for optimal margin classifiers. In D. Haussler, editor, Proceedings of the 5th Annual ACM Workshop on Computational Learning Theory, pages 144-152, Pittsburgh, PA, 1992. ACM Press.
E. Osuna and F. Girosi. Reducing run-time complexity in support vector machines. In
B. Schölkopf, C. Burges, and A. Smola, editors, Advances in Kernel Methods - Support
Vector Learning, pages 271 - 283. MIT Press, Cambridge, MA, 1999.
B. Schölkopf, C. Burges, and V. Vapnik. Extracting support data for a given task. In U. M. Fayyad and R. Uthurusamy, editors, Proceedings, First International Conference on Knowledge Discovery & Data Mining. AAAI Press, Menlo Park, CA, 1995.
B. Schölkopf, P. Bartlett, A. Smola, and R. Williamson. Support vector regression with automatic accuracy control. In L. Niklasson, M. Boden, and T. Ziemke, editors, Proceedings of the 8th International Conference on Artificial Neural Networks, Perspectives in Neural Computing, pages 111-116, Berlin, 1998a. Springer Verlag.
B. Schölkopf, P. Simard, A. Smola, and V. Vapnik. Prior knowledge in support vector
kernels. In M. Jordan, M. Kearns, and S. Solla, editors, Advances in Neural Information
Processing Systems 10, pages 640 - 646, Cambridge, MA, 1998b. MIT Press.
B. Schölkopf, A. Smola, R. Williamson, and P. Bartlett. New support vector algorithms. 1998c. NeuroCOLT2-TR 1998-031; cf. http://www.neurocolt.com
B. Schölkopf, K. Sung, C. Burges, F. Girosi, P. Niyogi, T. Poggio, and V. Vapnik. Comparing support vector machines with Gaussian kernels to radial basis function classifiers. IEEE Trans. Signal Processing, 45:2758-2765, 1997.
A. Smola, N. Murata, B. Schölkopf, and K.-R. Müller. Asymptotically optimal choice of ε-loss for support vector machines. In L. Niklasson, M. Boden, and T. Ziemke, editors,
Proceedings of the 8th International Conference on Artificial Neural Networks, Perspectives in Neural Computing, pages 105 - 110, Berlin, 1998. Springer Verlag.
M. Stitson, A. Gammerman, V. Vapnik, V. Vovk, C. Watkins, and J. Weston. Support
vector regression with ANOVA decomposition kernels. In B. Schölkopf, C. Burges,
and A. Smola, editors, Advances in Kernel Methods - Support Vector Learning, pages
285 - 291. MIT Press, Cambridge, MA, 1999.
V. Vapnik. The Nature of Statistical Learning Theory. Springer Verlag, New York, 1995.
617 | 1,564 |
Inference in Multilayer Networks via
Large Deviation Bounds
Michael Kearns and Lawrence Saul
AT&T Labs - Research
Shannon Laboratory
180 Park Avenue A-235
Florham Park, NJ 07932
{mkearns,lsaul}@research.att.com
Abstract
We study probabilistic inference in large, layered Bayesian networks represented as directed acyclic graphs. We show that the
intractability of exact inference in such networks does not preclude
their effective use. We give algorithms for approximate probabilistic inference that exploit averaging phenomena occurring at nodes
with large numbers of parents. We show that these algorithms
compute rigorous lower and upper bounds on marginal probabilities of interest, prove that these bounds become exact in the limit
of large networks, and provide rates of convergence.
1
Introduction
The promise of neural computation lies in exploiting the information processing
abilities of simple computing elements organized into large networks. Arguably one
of the most important types of information processing is the capacity for probabilistic reasoning.
The properties of undirected probabilistic models represented as symmetric networks
have been studied extensively using methods from statistical mechanics (Hertz et
al, 1991). Detailed analyses of these models are possible by exploiting averaging phenomena that occur in the thermodynamic limit of large networks.
In this paper, we analyze the limit of large, multilayer networks for probabilistic
models represented as directed acyclic graphs. These models are known as Bayesian
networks (Pearl, 1988; Neal, 1992), and they have different probabilistic semantics
than symmetric neural networks (such as Hopfield models or Boltzmann machines).
We show that the intractability of exact inference in multilayer Bayesian networks
does not preclude their effective use. Our work builds on earlier studies of variational methods (Jordan et al, 1997). We give algorithms for approximate probabilistic inference that exploit averaging phenomena occurring at nodes with N ≫ 1 parents. We show that these algorithms compute rigorous lower and upper bounds on marginal probabilities of interest, prove that these bounds become exact in the limit N → ∞, and provide rates of convergence.
2
Definitions and Preliminaries
A Bayesian network is a directed graphical probabilistic model, in which the nodes
represent random variables, and the links represent causal dependencies. The joint
distribution of this model is obtained by composing the local conditional probability
distributions (or tables), Pr[child|parents], specified at each node in the network.
For networks of binary random variables, so-called transfer functions provide a
convenient way to parameterize conditional probability tables (CPTs). A transfer
function is a mapping f : [−∞, ∞] → [0, 1] that is everywhere differentiable and satisfies f′(x) ≥ 0 for all x (thus, f is nondecreasing). If f′(x) ≤ a for all x, we say that f has slope a. Common examples of transfer functions of bounded slope include the sigmoid f(x) = 1/(1 + e^{−x}), the cumulative Gaussian f(x) = ∫_{−∞}^x dt e^{−t²}/√π, and the noisy-OR f(x) = 1 − e^{−x}. Because the value of a transfer function f
is bounded between 0 and 1, it can be interpreted as the conditional probability
that a binary random variable takes on a particular value. One use of transfer
functions is to endow multilayer networks of soft-thresholding computing elements
with probabilistic semantics. This motivates the following definition:
Definition 1 For a transfer function f, a layered probabilistic f-network has:

• Nodes representing binary variables {X_i^ℓ}, ℓ = 1, ..., L and i = 1, ..., N. Thus, L is the number of layers, and each layer contains N nodes.

• For every pair of nodes X_j^{ℓ−1} and X_i^ℓ in adjacent layers, a real-valued weight θ_{ij}^{ℓ−1} from X_j^{ℓ−1} to X_i^ℓ.

• For every node X_i^1 in the first layer, a bias p_i.

We will sometimes refer to nodes in layer 1 as inputs, and to nodes in layer L as outputs. A layered probabilistic f-network defines a joint probability distribution over all of the variables {X_i^ℓ} as follows: each input node X_i^1 is independently set to 1 with probability p_i, and to 0 with probability 1 − p_i. Inductively, given binary values X_j^{ℓ−1} = x_j^{ℓ−1} ∈ {0, 1} for all of the nodes in layer ℓ − 1, the node X_i^ℓ is set to 1 with probability f(Σ_{j=1}^N θ_{ij}^{ℓ−1} x_j^{ℓ−1}).
Among other uses, multilayer networks of this form have been studied as hierarchical generative models of sensory data (Hinton et al, 1995). In such applications, the fundamental computational problem (known as inference) is that of estimating the marginal probability of evidence at some number of output nodes, say the first K ≤ N. (The computation of conditional probabilities, such as diagnostic queries, can be reduced to marginals via Bayes rule.) More precisely, one wishes to estimate Pr[X_1^L = x_1, ..., X_K^L = x_K] (where x_i ∈ {0, 1}), a quantity whose exact computation involves an exponential sum over all the possible settings of the uninstantiated nodes in layers 1 through L − 1, and is known to be computationally intractable (Cooper, 1990).
3 Large Deviation and Union Bounds
One of our main weapons will be the theory of large deviations. As a first illustration
of this theory, consider the input nodes {X_j^1} (which are independently set to 0 or 1
according to their biases p_j) and the weighted sum Σ_{j=1}^N θ_{ij} X_j^1 that feeds into the
ith node X_i^2 in the second layer. A typical large deviation bound (Kearns & Saul,
1997) states that for all ε > 0, Pr[|Σ_{j=1}^N θ_{ij}(X_j^1 − p_j)| > ε] ≤ 2e^{−2ε²/(NΘ²)}, where
Θ is the largest weight in the network. If we make the scaling assumption that
each weight θ_{ij} is bounded by τ/N for some constant τ (thus, Θ ≤ τ/N), then we
see that the probability of large (order 1) deviations of this weighted sum from its
mean decays exponentially with N. (Our methods can also provide results under
the weaker assumption that all weights are bounded by O(N^{−α}) for α > 1/2.)
How can we apply this observation to the problem of inference? Suppose we are
interested in the marginal probability Pr[X_i^2 = 1]. Then the large deviation bound
tells us that with probability at least 1 − δ (where we define δ = 2e^{−2Nε²/τ²}), the
weighted sum at node X_i^2 will be within ε of its mean value μ_i = Σ_{j=1}^N θ_{ij} p_j. Thus,
with probability at least 1 − δ, we are assured that Pr[X_i^2 = 1] is at least f(μ_i − ε)
and at most f(μ_i + ε). Of course, the flip side of the large deviation bound is that
with probability at most δ, the weighted sum may fall more than ε away from μ_i.
In this case we can make no guarantees on Pr[X_i^2 = 1] aside from the trivial lower
and upper bounds of 0 and 1. Combining both eventualities, however, we obtain
the overall bounds:

(1 − δ) f(μ_i − ε) ≤ Pr[X_i^2 = 1] ≤ (1 − δ) f(μ_i + ε) + δ.   (1)
Equation (1) is based on a simple two-point approximation to the distribution over
the weighted sum of inputs, Σ_{j=1}^N θ_{ij} X_j^1. This approximation places one point,
with weight 1 − δ, at either ε above or below the mean μ_i (depending on whether
we are deriving the upper or lower bound); and the other point, with weight δ, at
either −∞ or +∞. The value of δ depends on the choice of ε: in particular, as ε
becomes smaller, we give more weight to the ±∞ point, with the trade-off governed
by the large deviation bound. We regard the weight given to the ±∞ point as a
throw-away probability, since with this weight we resort to the trivial bounds of 0
or 1 on the marginal probability Pr[X_i^2 = 1].
Note that the very simple bounds in Equation (1) already exhibit an interesting
trade-off, governed by the choice of the parameter ε: namely, as ε becomes smaller,
the throw-away probability δ becomes larger, while the terms f(μ_i ± ε) converge to
the same value. Since the overall bounds involve products of f(μ_i ± ε) and 1 − δ,
the optimal value of ε is the one that balances this competition between probable
explanations of the evidence and improbable deviations from the mean. This trade-off is reminiscent of that encountered between energy and entropy in mean-field
approximations for symmetric networks (Hertz et al., 1991).
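The following sketch (ours; the paper gives no code) evaluates the Equation (1) bounds for a single second-layer node and resolves the trade-off by a simple grid search over candidate ε values; `f` is any transfer function and `tau` the scaling constant τ:

    import numpy as np

    def single_node_bounds(theta_row, p, tau, f, eps_grid):
        """Best lower/upper bounds of Equation (1) over a grid of epsilon values."""
        N = len(p)
        mu = theta_row @ p                 # mean of the incoming weighted sum
        best_low, best_up = 0.0, 1.0       # trivial bounds
        for eps in eps_grid:
            delta = min(1.0, 2.0 * np.exp(-2.0 * N * eps**2 / tau**2))
            best_low = max(best_low, (1.0 - delta) * f(mu - eps))
            best_up = min(best_up, (1.0 - delta) * f(mu + eps) + delta)
        return best_low, best_up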
So far we have considered the marginal probability involving a single node in the
second layer. We can also compute bounds on the marginal probabilities involving
K > 1 nodes in this layer (which without loss of generality we take to be the nodes
X_1^2 through X_K^2). This is done by considering the probability that one or more
of the weighted sums entering these K nodes in the second layer deviate by more
than ε from their means. We can upper bound this probability by Kδ by appealing
to the so-called union bound, which simply states that the probability of a union of
events is bounded by the sum of their individual probabilities. The union bound
allows us to bound marginal probabilities involving multiple variables. For example,
consider the marginal probability Pr[X_1^2 = 1, ..., X_K^2 = 1]. Combining the large
deviation and union bounds, we find:

(1 − Kδ) Π_{i=1}^K f(μ_i − ε) ≤ Pr[X_1^2 = 1, ..., X_K^2 = 1] ≤ (1 − Kδ) Π_{i=1}^K f(μ_i + ε) + Kδ.   (2)
A number of observations are in order here. First, Equation (2) directly leads to
efficient algorithms for computing the upper and lower bounds. Second, although
for simplicity we have considered ε-deviations of the same size at each node in the
second layer, the same methods apply to different choices of ε_i (and therefore δ_i)
at each node. Indeed, variations in ε_i can lead to significantly tighter bounds, and
thus we exploit the freedom to choose different ε_i in the rest of the paper. This
results, for example, in bounds of the form:

(1 − Σ_{i=1}^K δ_i) Π_{i=1}^K f(μ_i − ε_i) ≤ Pr[X_1^2 = 1, ..., X_K^2 = 1],  where δ_i = 2e^{−2Nε_i²/τ²}.   (3)
The reader is invited to study the small but important differences between this
lower bound and the one in Equation (2). Third, the arguments leading to bounds
on the marginal probability Pr[X_1^2 = 1, ..., X_K^2 = 1] generalize in a straightforward manner to other patterns of evidence besides all 1's. For instance, again just
considering the lower bound, we have:

(1 − Σ_{i=1}^K δ_i) Π_{i: x_i=1} f(μ_i − ε_i) Π_{i: x_i=0} [1 − f(μ_i + ε_i)] ≤ Pr[X_1^2 = x_1, ..., X_K^2 = x_K]   (4)
where x_i ∈ {0, 1} are arbitrary binary values. Thus together the large deviation
and union bounds provide the means to compute upper and lower bounds on the
marginal probabilities over nodes in the second layer. Further details and consequences of these bounds for the special case of two-layer networks are given in a
companion paper (Kearns & Saul, 1997); our interest here, however, is in the more
challenging generalization to multilayer networks.
4 Multilayer Networks: Inference via Induction
In extending the ideas of the previous section to multilayer networks, we face the
problem that the nodes in the second layer, unlike those in the first, are not independent. But we can still adopt an inductive strategy to derive bounds on marginal
probabilities. The crucial observation is that conditioned on the values of the incoming weighted sums at the nodes in the second layer, the variables {X_i^2} do become
independent. More generally, conditioned on these weighted sums all falling "near"
their means - an event whose probability we quantified in the last section - the
nodes {X_i^2} become "almost" independent. It is exactly this near-independence
that we now formalize and exploit inductively to compute bounds for multilayer
networks. The first tool we require is an appropriate generalization of the large
deviation bound, which does not rely on precise knowledge of the means of the
random variables being summed.
Theorem 1 For all 1 ≤ j ≤ N, let X_j ∈ {0, 1} denote independent binary random
variables, and let |τ_j| ≤ τ. Suppose that the means are bounded by |E[X_j] − p_j| ≤ Δ_j,
where 0 ≤ Δ_j ≤ p_j ≤ 1 − Δ_j. Then for all ε > (1/N) Σ_{j=1}^N |τ_j| Δ_j:

Pr[ |(1/N) Σ_{j=1}^N τ_j (X_j − p_j)| > ε ] ≤ 2e^{−(2N/τ²)(ε − (1/N) Σ_{j=1}^N |τ_j| Δ_j)²}   (5)
The proof of this result is omitted due to space considerations. Now for induction,
consider the nodes in the ℓth layer of the network. Suppose we are told that for
every i, the weighted sum Σ_{j=1}^N θ_{ij}^{ℓ−1} X_j^{ℓ−1} entering into the node X_i^ℓ lies in the
interval [μ_i^ℓ − ε_i^ℓ, μ_i^ℓ + ε_i^ℓ], for some choice of the μ_i^ℓ and the ε_i^ℓ. Then the mean of
node X_i^ℓ is constrained to lie in the interval [p̄_i^ℓ − Δ_i^ℓ, p̄_i^ℓ + Δ_i^ℓ], where

p̄_i^ℓ = ½ [f(μ_i^ℓ − ε_i^ℓ) + f(μ_i^ℓ + ε_i^ℓ)]   (6)
Δ_i^ℓ = ½ [f(μ_i^ℓ + ε_i^ℓ) − f(μ_i^ℓ − ε_i^ℓ)].   (7)
Here we have simply run the leftmost and rightmost allowed values for the incoming
weighted sums through the transfer function, and defined the interval around the
mean of unit X_i^ℓ to be centered around p̄_i^ℓ. Thus we have translated uncertainties
on the incoming weighted sums to layer ℓ into conditional uncertainties on the
means of the nodes X_i^ℓ in layer ℓ. To complete the cycle, we now translate these
into conditional uncertainties on the incoming weighted sums to layer ℓ + 1. In
particular, conditioned on the original intervals [μ_i^ℓ − ε_i^ℓ, μ_i^ℓ + ε_i^ℓ], what is the probability
that for each i, Σ_{j=1}^N θ_{ij}^ℓ X_j^ℓ lies inside some new interval [μ_i^{ℓ+1} − ε_i^{ℓ+1}, μ_i^{ℓ+1} + ε_i^{ℓ+1}]?
In order to make some guarantee on this probability, we set μ_i^{ℓ+1} = Σ_{j=1}^N θ_{ij}^ℓ p̄_j^ℓ
and assume that ε_i^{ℓ+1} > Σ_{j=1}^N |θ_{ij}^ℓ| Δ_j^ℓ. These conditions suffice to ensure that
the new intervals contain the (conditional) expected values of the weighted sums
Σ_{j=1}^N θ_{ij}^ℓ X_j^ℓ, and that the new intervals are large enough to encompass the incoming
uncertainties. Because these conditions are a minimal requirement for establishing
any probabilistic guarantees, we shall say that the [μ_i^ℓ − ε_i^ℓ, μ_i^ℓ + ε_i^ℓ] define a valid
set of ε-intervals if they meet these conditions for all 1 ≤ i ≤ N.
Given a valid set of ε-intervals at the (ℓ + 1)st layer, it follows from Theorem 1 and the union bound
that the weighted sums entering nodes in layer ℓ + 1 obey

Pr[ |Σ_{j=1}^N θ_{ij}^ℓ X_j^ℓ − μ_i^{ℓ+1}| > ε_i^{ℓ+1} for some 1 ≤ i ≤ N ] ≤ Σ_{i=1}^N δ_i^{ℓ+1}   (8)

where

δ_i^{ℓ+1} = 2e^{−(2N/τ²)(ε_i^{ℓ+1} − Σ_{j=1}^N |θ_{ij}^ℓ| Δ_j^ℓ)²}.   (9)
In what follows, we shall frequently make use of the fact that the weighted sums
Σ_{j=1}^N θ_{ij}^ℓ X_j^ℓ are bounded by intervals [μ_i^{ℓ+1} − ε_i^{ℓ+1}, μ_i^{ℓ+1} + ε_i^{ℓ+1}]. This motivates the
following definitions.
Definition 2 Given a valid set of ε-intervals and binary values {X_i^ℓ = x_i^ℓ} for the
nodes in the ℓth layer, we say that the (ℓ + 1)st layer of the network satisfies its
ε-intervals if |Σ_{j=1}^N θ_{ij}^ℓ x_j^ℓ − μ_i^{ℓ+1}| < ε_i^{ℓ+1} for all 1 ≤ i ≤ N. Otherwise, we say that
the (ℓ + 1)st layer violates its ε-intervals.
Suppose that we are given a valid set of ε-intervals and that we sample from the joint
distribution defined by the probabilistic f-network. The right hand side of Equation
(8) provides an upper bound on the conditional probability that the (ℓ + 1)st layer
violates its ε-intervals, given that the ℓth layer did not. This upper bound may be
vacuous (that is, larger than 1), so let us denote by δ^{ℓ+1} whichever is smaller: the
right hand side of Equation (8), or 1; in other words, δ^{ℓ+1} = min{Σ_{i=1}^N δ_i^{ℓ+1}, 1}.
Since at the ℓth layer, the probability of violating the ε-intervals is at most δ^ℓ, we
are guaranteed that with probability at least Π_{ℓ>1} [1 − δ^ℓ], all the layers satisfy
their ε-intervals. Conversely, we are guaranteed that the probability that any layer
violates its ε-intervals is at most 1 − Π_{ℓ>1} [1 − δ^ℓ]. Treating this as a throw-away
probability, we can now compute upper and lower bounds on marginal probabilities
involving nodes at the Lth layer exactly as in the case of nodes at the second layer.
This yields the following theorem.
Theorem 2 For any subset {X_1^L, ..., X_K^L} of the outputs of a probabilistic f-network, for any setting x_1, ..., x_K, and for any valid set of ε-intervals, the marginal
probability of partial evidence in the output layer obeys:

Π_{ℓ>1} [1 − δ^ℓ] Π_{i: x_i=1} f(μ_i^L − ε_i^L) Π_{i: x_i=0} [1 − f(μ_i^L + ε_i^L)]   (10)
  ≤ Pr[X_1^L = x_1, ..., X_K^L = x_K]
  ≤ Π_{ℓ>1} [1 − δ^ℓ] Π_{i: x_i=1} f(μ_i^L + ε_i^L) Π_{i: x_i=0} [1 − f(μ_i^L − ε_i^L)] + (1 − Π_{ℓ>1} [1 − δ^ℓ]).   (11)

Theorem 2 generalizes our earlier results for marginal probabilities over nodes in the
second layer; for example, compare Equation (10) to Equation (4). Again, the upper
and lower bounds can be efficiently computed for all common transfer functions.
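To make the induction concrete, here is a sketch (our illustration under stated assumptions, not the authors' implementation) that propagates valid ε-intervals through the layers via Equations (6)-(9) and then evaluates the Theorem 2 bounds; the per-layer slack added on top of the minimal valid interval width is left as a free parameter, anticipating Section 5:

    import numpy as np

    def propagate_intervals(weights, biases, f, slacks, tau):
        """Propagate valid eps-intervals layer by layer (Eqs. 6-9).
        slacks[l] is a scalar added beyond the floor sum_j |theta_ij| Delta_j."""
        N = len(biases)
        pbar, Delta = biases.copy(), np.zeros(N)   # layer-1 means are exact
        deltas = []                                # throw-away prob per layer
        for theta, slack in zip(weights, slacks):
            mu = theta @ pbar
            floor = np.abs(theta) @ Delta          # minimal valid width
            eps = floor + slack
            d_i = 2.0 * np.exp(-(2.0 * N / tau**2) * (eps - floor) ** 2)  # Eq. (9)
            deltas.append(min(1.0, d_i.sum()))     # union bound, Eq. (8)
            lo, hi = f(mu - eps), f(mu + eps)      # interval ends through f
            pbar, Delta = 0.5 * (hi + lo), 0.5 * (hi - lo)   # Eqs. (6)-(7)
        return pbar, Delta, deltas

    def theorem2_bounds(pbar, Delta, deltas, x_evidence):
        """Bounds on Pr[X_1^L = x_1, ..., X_K^L = x_K] from Theorem 2."""
        lo_f, hi_f = pbar - Delta, pbar + Delta    # f(mu -/+ eps) at layer L
        keep = np.prod([1.0 - d for d in deltas])
        low = keep * np.prod([lo_f[i] if x else 1.0 - hi_f[i]
                              for i, x in enumerate(x_evidence)])
        up = keep * np.prod([hi_f[i] if x else 1.0 - lo_f[i]
                             for i, x in enumerate(x_evidence)]) + (1.0 - keep)
        return low, up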
5 Rates of Convergence
To demonstrate the power of Theorem 2, we consider how the gap (or additive
difference) between these upper and lower bounds on Pr[X_1^L = x_1, ..., X_K^L = x_K]
behaves for some crude (but informed) choices of the {ε_i^ℓ}. Our goal is to derive
the rate at which these upper and lower bounds converge to the same value as we
examine larger and larger networks. Suppose we choose the ε-intervals inductively
by defining Δ_j^1 = 0 and setting

ε_i^{ℓ+1} = Σ_{j=1}^N |θ_{ij}^ℓ| Δ_j^ℓ + τ √(γ ln N / N)   (12)

for some γ > 1. From Equations (8) and (9), this choice gives δ^{ℓ+1} ≤ 2N^{1−2γ} as
an upper bound on the probability that the (ℓ + 1)th layer violates its ε-intervals.
Moreover, denoting the gap between the upper and lower bounds in Theorem 2 by
G, it can be shown that:

G ≤ C ατ √(γ ln N / N) Σ_{i=1}^K Π_{j≠i} max{ f(μ_j^L + ε_j^L), 1 − f(μ_j^L − ε_j^L) } + 2L/N^{2γ−1},   (13)

where C is a constant depending only on ατ and L.
Let us briefly recall the definitions of the parameters on the right hand side of this
equation: α is the maximal slope of the transfer function f, N is the number of
nodes in each layer, K is the number of nodes with evidence, τ = NΘ is N times the
largest weight in the network, L is the number of layers, and γ > 1 is a parameter
at our disposal. The first term of this bound essentially has a 1/√N dependence on
N, but is multiplied by a damping factor that we might typically expect to decay
exponentially with the number K of outputs examined. To see this, simply notice
that each of the factors f(μ_j + ε_j) and [1 − f(μ_j − ε_j)] is bounded by 1; furthermore,
since all the means μ_j are bounded, if N is large then the ε_j are small, and each of these factors is in fact bounded by some value β < 1. Thus
the first term in Equation (13) is bounded by a constant times β^{K−1} K √(ln(N)/N).
Since it is natural to expect the marginal probability of interest itself to decrease
exponentially with K, this is desirable and natural behavior.
Of course, in the case of large K, the behavior of the resulting overall bound can
be dominated by the second term 2L/N^{2γ−1} of Equation (13). In such situations,
however, we can consider larger values of γ, possibly even of order K; indeed, for
sufficiently large γ, the first term (which scales like √γ) must necessarily overtake
the second one. Thus there is a clear trade-off between the two terms, as well as an
optimal value of γ that sets them to be (roughly) the same magnitude. Generally
speaking, for fixed K and large N, we observe that the difference between our upper
and lower bounds on Pr[X_1^L = x_1, ..., X_K^L = x_K] vanishes as O(√(ln(N)/N)).

6 An Algorithm for Fixed Multilayer Networks
We conclude by noting that the specific choices made for the parameters ε_i^ℓ in
Section 5 to derive rates of convergence may be far from the optimal choices for a
fixed network of interest. However, Theorem 2 directly suggests a natural algorithm
for approximate probabilistic inference. In particular, regarding the upper and lower
bounds on Pr[X_1^L = x_1, ..., X_K^L = x_K] as functions of {ε_i^ℓ}, we can optimize these
bounds by standard numerical methods. For the upper bound, we may perform
gradient descent in the {ε_i^ℓ} to find a local minimum, while for the lower bound, we
may perform gradient ascent to find a local maximum. The components of these
gradients in both cases are easily computable for all the commonly studied transfer
functions. Moreover, the constraint of maintaining valid ε-intervals can be enforced
by maintaining a floor on the ε-intervals in one layer in terms of those at the previous
one. The practical application of this algorithm to interesting Bayesian networks
will be studied in future work.
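As one way to realize this, the sketch below (our hedged rendering; the paper prescribes gradient methods but gives no code) tightens the upper bound numerically over per-layer slacks, reusing the propagate_intervals and theorem2_bounds helpers sketched after Theorem 2; scipy's derivative-free Nelder-Mead stands in for the gradient optimization the authors describe:

    import numpy as np
    from scipy.optimize import minimize

    def optimize_upper_bound(weights, biases, f, tau, x_evidence, slack0):
        """Numerically tighten the Theorem 2 upper bound over per-layer slacks.
        Optimizing in log space keeps the slacks positive, so the eps-intervals
        stay valid (above their floor) by construction."""
        def objective(log_slacks):
            pbar, Delta, deltas = propagate_intervals(
                weights, biases, f, np.exp(log_slacks), tau)
            return theorem2_bounds(pbar, Delta, deltas, x_evidence)[1]
        res = minimize(objective, np.log(slack0), method="Nelder-Mead")
        return np.exp(res.x), res.fun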
References

Cooper, G. (1990). Computational complexity of probabilistic inference using
Bayesian belief networks. Artificial Intelligence 42:393-405.

Hertz, J., Krogh, A., & Palmer, R. (1991). Introduction to the theory of neural
computation. Addison-Wesley, Redwood City, CA.

Hinton, G., Dayan, P., Frey, B., & Neal, R. (1995). The wake-sleep algorithm for
unsupervised neural networks. Science 268:1158-1161.

Jordan, M., Ghahramani, Z., Jaakkola, T., & Saul, L. (1997). An introduction to
variational methods for graphical models. In M. Jordan, ed., Learning in Graphical
Models. Kluwer Academic.

Kearns, M., & Saul, L. (1998). Large deviation methods for approximate probabilistic inference. In Proceedings of the 14th Annual Conference on Uncertainty in
Artificial Intelligence.

Neal, R. (1992). Connectionist learning of belief networks. Artificial Intelligence
56:71-113.

Pearl, J. (1988). Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann, San Mateo, CA.
618 | 1,565 | Barycentric Interpolators for Continuous
Space & Time Reinforcement Learning
Remi Munos & Andrew Moore
Robotics Institute, Carnegie Mellon University
Pittsburgh, PA 15213, USA.
E-mail: {munos, awm }@cs.cmu.edu
Abstract
In order to find the optimal control of continuous state-space and
time reinforcement learning (RL) problems, we approximate the
value function (VF) with a particular class of functions called the
barycentric interpolators. We establish sufficient conditions under
which a RL algorithm converges to the optimal VF, even when we
use approximate models of the state dynamics and the reinforcement functions .
1 INTRODUCTION
In order to approximate the value function (VF) of a continuous state-space and
time reinforcement learning (RL) problem, we define a particular class of functions
called the barycentric interpolator, that use some interpolation process based on
finite sets of points. This class of functions, including continuous or discontinuous
piecewise linear and multi-linear functions, provides us with a general method for
designing RL algorithms that converge to the optimal value function. Indeed these
functions permit us to discretize the HJB equation of the continuous control problem
by a consistent (and thus convergent) approximation scheme, which is solved by
using some model of the state dynamics and the reinforcement functions.
Section 2 defines the barycentric interpolators. Section 3 describes the optimal control problem in the deterministic continuous case. Section 4 states the convergence
result for RL algorithms by giving sufficient conditions on the applied model. Section 5 gives some computational issues for this method, and Section 6 describes the
approximation scheme used here and proves the convergence result.
2 DEFINITION OF BARYCENTRIC INTERPOLATORS
Let Σ^δ = {ξ_i} be a set of points distributed at some resolution δ (see (4) below)
on the state space of dimension d.

For any state x inside some simplex (ξ_1, ..., ξ_n), we say that x is the barycenter of
the {ξ_i}_{i=1..n} inside this simplex with positive coefficients p(x|ξ_i) of sum 1, called
the barycentric coordinates, if x = Σ_{i=1..n} p(x|ξ_i)·ξ_i.

Let V^δ(ξ_i) be the value of the function at the points ξ_i. V^δ is a barycentric
interpolator if for any state x which is the barycenter of the points {ξ_i}_{i=1..n} for
some simplex (ξ_1, ..., ξ_n), with the barycentric coordinates p(x|ξ_i), we have:

V^δ(x) = Σ_{i=1..n} p(x|ξ_i)·V^δ(ξ_i)   (1)

Moreover we assume that the simplex (ξ_1, ..., ξ_n) is of diameter O(δ). Let us describe
some simple barycentric interpolators:
some simple barycentric interpolators:
? Piecewise linear functions defined by some triangulation on the state
space (thus defining continuous functions), see figure La, or defined at any
x by a linear combination of (d + 1) values at any points (6, ... , ~d+ d 3 x
(such functions may be discontinuous at some boundaries), see figure Lb .
? Piecewise multi-linear functions defined by a multi-linear combination
of the 2d values at the vertices of d-dimensional rectangles, see figure 1.c.
In this case as well, we can build continuous interpolations or allow discontinuities at the boundaries of the rectangles.
An important point is that the convergence result stated in Section 4 does not
require the continuity of the function. This permits us to build variable resolution
triangulations (see figure 1.b) or grid (figure 1.c) easily.
Figure 1: Some examples of barycentric approximators. These are piecewise continuous (a) or discontinuous (b) linear or multi-linear (c) interpolators.
Remark 1 In the general case, for a given x, the choice of a simplex (ξ_1, ..., ξ_n) ∋ x
is not unique (see the two sets of grey and black points in figure 1.b and 1.c), and
once the simplex (ξ_1, ..., ξ_n) ∋ x is defined, if n > d + 1 (for example in figure 1.c),
then the choice of the barycentric coordinates p(x|ξ_i) is also not unique.

Remark 2 Depending on the interpolation method we use, the time needed for computing the values will vary. Following [Dav96], the continuous multi-linear interpolation must process 2^d values, whereas the linear continuous interpolation inside a
simplex processes (d + 1) values in O(d log d) time.
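For concreteness, a minimal Python sketch (ours, with names of our choosing) of exact barycentric coordinates and of the interpolation rule (1) inside one simplex:

    import numpy as np

    def barycentric_coords(x, vertices):
        """Coordinates p(x|xi_i) of x in the simplex spanned by the rows of
        `vertices` ((d+1, d) array): solves sum_i p_i xi_i = x, sum_i p_i = 1."""
        A = np.vstack([vertices.T, np.ones(len(vertices))])   # (d+1, d+1)
        b = np.append(x, 1.0)
        p = np.linalg.solve(A, b)
        assert np.all(p >= -1e-9), "x is outside this simplex"
        return p

    def interpolate(x, vertices, values):
        """Barycentric interpolator of Equation (1)."""
        return barycentric_coords(x, vertices) @ values

    # Example: a triangle in the plane; prints 0.75.
    tri = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
    print(interpolate(np.array([0.25, 0.25]), tri, np.array([0.0, 1.0, 2.0])))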
In comparison to [Gor95], the functions used here are averagers that satisfy the
barycentric interpolation property (1). This additional geometric constraint permits
us to prove the consistency (see (15) below) of the approximation scheme and thus
the convergence to the optimal value in the continuous time case.
3 THE OPTIMAL CONTROL PROBLEM
Let us describe the optimal control problem in the deterministic and discounted case
for continuous state-space and time variables and define the value function that we
intend to approximate. We consider a dynamical system whose state dynamics
depends on the current state x(t) ∈ O (the state space, with O an open subset of
ℝ^d) and control u(t) ∈ U (compact subset) by a differential equation:

dx/dt = f(x(t), u(t))   (2)

From equation (2), the choice of an initial state x and a control function u(·) leads
to a unique trajectory x(t) (see figure 2). Let τ be the exit time from O (with
the convention that if x(t) always stays in O, then τ = ∞). Then, we define the
functional J as the discounted cumulative reinforcement:

J(x; u(·)) = ∫_0^τ γ^t r(x(t), u(t)) dt + γ^τ R(x(τ))

where r(x, u) is the running reinforcement and R(x) the boundary reinforcement.
γ is the discount factor (0 ≤ γ < 1). We assume that f, r and R are bounded and
Lipschitzian, and that the boundary ∂O is C².
RL uses the method of Dynamic Programming (DP) that introduces the value
function (VF): the maximal value of J as a function of initial state x:

V(x) = sup_{u(·)} J(x; u(·)).
From the DP principle, we deduce that V satisfies a first-order differential equation,
called the Hamilton-Jacobi-Bellman (HJB) equation (see [FS93] for a survey):

Theorem 1 If V is differentiable at x ∈ O, let DV(x) be the gradient of V at x,
then the following HJB equation holds at x:

H(V, DV, x) ≝ V(x) ln γ + sup_{u∈U} [DV(x)·f(x, u) + r(x, u)] = 0   (3)

The challenge of RL is to get a good approximation of the VF, because from V
we can deduce the optimal control: for state x, the control u*(x) that realizes the
supremum in the HJB equation provides an optimal (feed-back) control law.
The following hypothesis is a sufficient condition for V to be continuous within O
(see [Bar94]) and is required for proving the convergence result of the next section.

Hyp 1: For x ∈ ∂O, let n(x) be the outward normal of O at x; we assume that:
- If ∃u ∈ U s.t. f(x, u)·n(x) ≤ 0, then ∃v ∈ U s.t. f(x, v)·n(x) < 0.
- If ∃u ∈ U s.t. f(x, u)·n(x) ≥ 0, then ∃v ∈ U s.t. f(x, v)·n(x) > 0.

which means that at the states (if there exist any) where some trajectory is tangent
to the boundary, there exists, for some control, a trajectory strictly coming inside
and one strictly leaving the state space.
Figure 2: The state space and the set of points Σ^δ (the black dots belong to the
interior and the white ones to the boundary). The value at some point ξ is updated,
at step n, by the discounted value at point η_n ∈ (ξ_1, ξ_2, ξ_3). The main requirement
for convergence is that the points η_n approximate η in the sense:
p(η_n|ξ_i) = p(η|ξ_i) + O(δ) (i.e. the η_n belong to the grey area).
4 THE CONVERGENCE RESULT
Let us introduce the set of points Σ^δ = {ξ_i}, composed of the interior (Σ^δ ∩ O)
and the boundary (∂Σ^δ = Σ^δ \ O), such that its convex hull covers the state space
O, and performing a discretization at some resolution δ:

∀x ∈ O, inf_{ξ_i ∈ Σ^δ∩O} ‖x − ξ_i‖ ≤ δ  and  ∀x ∈ ∂O, inf_{ξ_j ∈ ∂Σ^δ} ‖x − ξ_j‖ ≤ δ   (4)

Moreover, we approximate the control space U by some finite control spaces U^δ ⊂ U
such that for δ ≤ δ′, U^{δ′} ⊂ U^δ and lim_{δ→0} U^δ = U.
We would like to update the value of any:
- interior point ξ ∈ Σ^δ ∩ O with the discounted values at state η_n(ξ, u) (figure 2):

V^δ_{n+1}(ξ) ← sup_{u∈U^δ} [γ^{τ_n(ξ,u)} V^δ_n(η_n(ξ, u)) + τ_n(ξ, u)·r_n(ξ, u)]   (5)

for some state η_n(ξ, u), some time delay τ_n(ξ, u) and some reinforcement r_n(ξ, u).
- boundary point ξ ∈ ∂Σ^δ with some terminal reinforcement R_n(ξ):

V^δ_{n+1}(ξ) ← R_n(ξ)   (6)
The following theorem states that the values V^δ_n computed by a RL algorithm using
the model (because of some a priori partial uncertainty of the state dynamics and
the reinforcement functions) η_n(ξ, u), τ_n(ξ, u), r_n(ξ, u) and R_n(ξ) converge to the
optimal value function as the number of iterations n → ∞ and the resolution δ → 0.

Let us define the state η(ξ, u) (see figure 2):

η(ξ, u) = ξ + τ(ξ, u)·f(ξ, u)   (7)

for some time delay τ(ξ, u) (with k_1 δ ≤ τ(ξ, u) ≤ k_2 δ for some constants k_1 > 0 and
k_2 > 0), and let p(η|ξ_i) (resp. p(η_n|ξ_i)) be the barycentric coordinates of η inside a
simplex containing it (resp. η_n inside the same simplex). We will write η, η_n, τ, r,
..., instead of η(ξ, u), η_n(ξ, u), τ(ξ, u), r(ξ, u), ... when no confusion is possible.
Theorem 2 Assume that the hypotheses of the previous sections hold, and that for
any resolution δ, we use barycentric interpolators V^δ defined on state spaces Σ^δ
(satisfying (4)) such that all points of Σ^δ ∩ O are regularly updated with rule (5)
and all points of ∂Σ^δ are updated with rule (6) at least once. Suppose that η_n, τ_n,
r_n and R_n approximate η, τ, r and R in the sense:

∀ξ_i, p(η_n|ξ_i) = p(η|ξ_i) + O(δ)   (8)
τ_n = τ + O(δ²)   (9)
r_n = r + O(δ)   (10)
R_n = R + O(δ)   (11)

then we have lim_{n→∞, δ→0} V^δ_n = V uniformly on any compact Ω ⊂ O (i.e. ∀ε > 0,
∀Ω compact ⊂ O, ∃Δ, ∃N, such that ∀δ ≤ Δ, ∀n ≥ N, sup_{Σ^δ∩Ω} |V^δ_n − V| ≤ ε).
Remark 3 For a given value of δ, the rule (5) is not a DP updating rule for some
Markov Decision Problem (MDP) since the values η_n, τ_n, r_n depend on n. This
point is important in the RL framework since this allows on-line improvement of
the model of the state dynamics and the reinforcement functions.

Remark 4 This result extends the previous results of convergence obtained by
Finite-Element or Finite-Difference methods (see [Mun97]).

This theoretical result can be applied by starting from a rough Σ^δ (high δ) and by
combining to the iteration process (n → ∞) some learning process of the model
(η_n → η) and an increasing process of the number of points (δ → 0).
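The update rule (5) translates into a simple asynchronous sweep. The sketch below is our reading of the algorithm, under stated assumptions: model(xi, u) returns the current estimates (η_n, τ_n, r_n), e.g. from a learned local model of f and r, and interp(V, eta) is any barycentric interpolator over the point set, as in Section 2:

    import numpy as np

    def rl_sweep(V, interior_points, controls, model, interp, gamma):
        """One asynchronous sweep of update rule (5) over the interior points.
        V maps point index -> current value; boundary points are handled
        separately by rule (6)."""
        for idx, xi in enumerate(interior_points):
            best = -np.inf
            for u in controls:
                eta_n, tau_n, r_n = model(xi, u)      # approximate dynamics
                best = max(best,
                           gamma ** tau_n * interp(V, eta_n) + tau_n * r_n)
            V[idx] = best
        return V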
5 COMPUTATIONAL ISSUES
From (8) we deduce that the method will also converge if we use an approximate
barycentric interpolator, defined at any state x ∈ (ξ_1, ..., ξ_n) by the value of the
barycentric interpolator at some state x′ ∈ (ξ_1, ..., ξ_n) such that p(x′|ξ_i) = p(x|ξ_i) +
O(δ) (see figure 3).
Figure 3: The linear function and the approximation error around it (the grey
area). The value of the approximate linear function plotted here at some state x is
equal to the value of the linear one at x′. Any such approximate barycentric
interpolator can be used in (5).
The fact that we need not be completely accurate can be used to our advantage. First, the computation of barycentric coordinates can use
very fast approximate matrix methods. Second, the model we use to integrate the
dynamics need not be perfect. We can make an O(δ²) error, which is useful if we
are learning a model from data: we need simply arrange to not gather more data
than is necessary for the current δ. For example, if we use nearest neighbor for
our dynamics learning, we need to ensure enough data so that every observation
is O(δ²) from its nearest neighbor. If we use local regression, then a mere O(δ)
density is all that is required [Omo87, AMS97].
6 PROOF OF THE CONVERGENCE RESULT

6.1 Description of the approximation scheme
We use a convergent scheme derived from Kushner (see [Kus90]) in order to approximate the continuous control problem by a finite MDP. The HJB equation is
discretized, at some resolution δ, into the following DP equation: for ξ ∈ Σ^δ ∩ O,

V^δ(ξ) = F^δ[V^δ(·)](ξ) ≝ sup_{u∈U^δ} { γ^τ Σ_{ξ_i} p(η|ξ_i)·V^δ(ξ_i) + τ·r }   (12)

and for ξ ∈ ∂Σ^δ, V^δ(ξ) = R(ξ). This is a fixed-point equation and we can prove that,
thanks to the discount factor γ, it satisfies the "strong" contraction property:

sup_{Σ^δ} |V^δ_{n+1} − V^δ| ≤ λ · sup_{Σ^δ} |V^δ_n − V^δ| for some λ < 1   (13)
from which we deduce that there exists exactly one solution V^δ to the DP equation,
which can be computed by some value iteration process: for any initial V^δ_0, we
iterate V^δ_{n+1} ← F^δ[V^δ_n]. Thus for any resolution δ, the values V^δ_n → V^δ as n → ∞.
Moreover, as V^δ is a barycentric interpolator and from the definition (7) of η,

F^δ[V^δ(·)](ξ) = sup_{u∈U^δ} { γ^τ V^δ(ξ + τ·f(ξ, u)) + τ·r }   (14)

from which we deduce that the scheme F^δ is consistent: in a formal sense,

lim sup_{δ→0} (1/τ)|F^δ[W](x) − W(x)| ∼ H(W, DW, x)   (15)

and obtain, from the general convergence theorem of [BS91] (and a result of strong
unicity obtained from Hyp. 1), the convergence of the scheme: V^δ → V as δ → 0.
6.2 Use of the "weak contraction" result of convergence

Since in the RL approach used here, we only have an approximation η_n, τ_n, ... of
the true values η, τ, ..., the strong contraction property (13) does not hold any
more. However, in previous work ([Mun98]), we have proven the convergence for
some weakened conditions, recalled here:

If the values V^δ_n updated by some algorithm satisfy the "weak" contraction property with respect to a solution V^δ of a convergent approximation scheme (such as
the previous one (12)):

sup_{Σ^δ∩O} |V^δ_{n+1} − V^δ| ≤ (1 − k·δ) · sup_{Σ^δ} |V^δ_n − V^δ| + o(δ)   (16)
sup_{∂Σ^δ} |V^δ_{n+1} − V^δ| = O(δ)   (17)

for some positive constant k (with the notation f(δ) ≤ o(δ) iff ∃g(δ) = o(δ) with
f(δ) ≤ g(δ)), then we have lim_{n→∞, δ→0} V^δ_n = V uniformly on any compact Ω ⊂ O
(i.e. ∀ε > 0, ∀Ω compact ⊂ O, ∃Δ and N such that ∀δ ≤ Δ, ∀n ≥ N,
sup_{Σ^δ∩Ω} |V^δ_n − V| ≤ ε).

6.3 Proof of theorem 2
We are going to use the approximations (8), (9), (10) and (11) to deduce that the
weak contraction property holds, and then use the result of the previous section to
prove theorem 2.

The proof of (17) is immediate since, from (6) and (11) we have: ∀ξ ∈ ∂Σ^δ,

|V^δ_{n+1}(ξ) − V^δ(ξ)| = |R_n(ξ) − R(ξ)| = O(δ)

Now we need to prove (16). Let us estimate the error E_n(ξ) = V^δ(ξ) − V^δ_n(ξ)
between the value V^δ of the DP equation (12) and the values V^δ_n computed by rule
(5) after one iteration:

E_{n+1}(ξ) = sup_{u∈U^δ} { Σ_{ξ_i} [γ^τ p(η|ξ_i)·V^δ(ξ_i) − γ^{τ_n} p(η_n|ξ_i)·V^δ_n(ξ_i)] + τ·r − τ_n·r_n }
E_{n+1}(ξ) = sup_{u∈U^δ} { γ^τ Σ_{ξ_i} [p(η|ξ_i) − p(η_n|ξ_i)]·V^δ(ξ_i) + [γ^τ − γ^{τ_n}] Σ_{ξ_i} p(η_n|ξ_i)·V^δ(ξ_i)
  + γ^{τ_n} Σ_{ξ_i} p(η_n|ξ_i)·[V^δ(ξ_i) − V^δ_n(ξ_i)] + τ_n·[r − r_n] + [τ − τ_n]·r }

By using (9) (from which we deduce: γ^τ = γ^{τ_n} + O(δ²)) and (10), we deduce:

|E_{n+1}(ξ)| ≤ sup_{u∈U^δ} { γ^τ·|Σ_{ξ_i} [p(η|ξ_i) − p(η_n|ξ_i)]·V^δ(ξ_i)|
  + γ^{τ_n} Σ_{ξ_i} p(η_n|ξ_i)·|V^δ(ξ_i) − V^δ_n(ξ_i)| } + O(δ²).   (18)

From the basic properties of the coefficients p(η|ξ_i) and p(η_n|ξ_i) we have:

Σ_{ξ_i} [p(η|ξ_i) − p(η_n|ξ_i)]·V^δ(ξ_i) = Σ_{ξ_i} [p(η|ξ_i) − p(η_n|ξ_i)]·[V^δ(ξ_i) − V^δ(ξ)]   (19)

Moreover, |V^δ(ξ_i) − V^δ(ξ)| ≤ |V^δ(ξ_i) − V(ξ_i)| + |V(ξ_i) − V(ξ)| + |V(ξ) − V^δ(ξ)|.
From the convergence of the scheme V^δ, we have sup_{Σ^δ∩Ω} |V^δ − V| →_{δ→0} 0 for any
compact Ω ⊂ O, and from the continuity of V and the fact that the support of the
simplex {ξ_i} ∋ η is O(δ), we have sup_{Σ^δ∩Ω} |V(ξ_i) − V(ξ)| →_{δ→0} 0 and deduce that
sup_{Σ^δ∩Ω} |V^δ(ξ_i) − V^δ(ξ)| →_{δ→0} 0. Thus, from (19) and (8), we obtain:

|Σ_{ξ_i} [p(η|ξ_i) − p(η_n|ξ_i)]·V^δ(ξ_i)| = o(δ)   (20)

The "weak" contraction property (16) holds: from the property of the exponential
function, γ^{τ_n} ≤ 1 − (τ_n/2)·ln(1/γ) for small values of τ_n; from (9) and the fact
that τ ≥ k_1 δ, we deduce that γ^{τ_n} ≤ 1 − (k_1 δ/2)·ln(1/γ) + O(δ²), and from (18)
and (20) we deduce that:

|V^δ_{n+1}(ξ) − V^δ(ξ)| ≤ (1 − k·δ)·sup_{Σ^δ} |V^δ_n(ξ) − V^δ(ξ)| + o(δ)

with k = (k_1/2)·ln(1/γ), and the property (16) holds. Thus the "weak contraction" result
of convergence (described in section 6.2) applies and convergence occurs.
FUTURE WORK

This work proves the convergence to the optimal value as the resolution tends to
zero, but does not provide us with the rate of convergence. Our future work will
focus on defining upper bounds of the approximation error, especially for variable
resolution discretizations, and we will also consider the stochastic case.

ACKNOWLEDGMENTS

This research was sponsored by DASSAULT-AVIATION and CMU.
References

[AMS97] C. G. Atkeson, A. W. Moore, and S. A. Schaal. Locally Weighted Learning. AI
Review, 11:11-73, April 1997.

[Bar94] Guy Barles. Solutions de viscosité des équations de Hamilton-Jacobi, volume 17
of Mathématiques et Applications. Springer-Verlag, 1994.

[BS91] Guy Barles and P. E. Souganidis. Convergence of approximation schemes for
fully nonlinear second order equations. Asymptotic Analysis, 4:271-283, 1991.

[Dav96] Scott Davies. Multidimensional triangulation and interpolation for reinforcement
learning. Advances in Neural Information Processing Systems, 8, 1996.

[FS93] Wendell H. Fleming and H. Mete Soner. Controlled Markov Processes and Viscosity Solutions. Applications of Mathematics. Springer-Verlag, 1993.

[Gor95] G. Gordon. Stable function approximation in dynamic programming. International Conference on Machine Learning, 1995.

[Kus90] Harold J. Kushner. Numerical methods for stochastic control problems in continuous time. SIAM J. Control and Optimization, 28:999-1048, 1990.

[Mun97] Rémi Munos. A convergent reinforcement learning algorithm in the continuous
case based on a finite difference method. International Joint Conference on
Artificial Intelligence, 1997.

[Mun98] Rémi Munos. A general convergence theorem for reinforcement learning in the
continuous case. European Conference on Machine Learning, 1998.

[Omo87] S. M. Omohundro. Efficient Algorithms with Neural Network Behaviour. Journal
of Complex Systems, 1(2):273-347, 1987.
satisfies:2 supueu:2 uniformly:2 aviation:1 called:4 la:1 e6:1 support:1 |
619 | 1,567 | Probabilistic Modeling for Face Orientation
Discrimination:
Learning from Labeled and Unlabeled Data
Shumeet Baluja
baluja@cs.cmu.edu
Justsystem Pittsburgh Research Center &
School of Computer Science, Carnegie Mellon University
Abstract
This paper presents probabilistic modeling methods to solve the problem of discriminating between five facial orientations with very little labeled data. Three
models are explored. The first model maintains no inter-pixel dependencies, the
second model is capable of modeling a set of arbitrary pair-wise dependencies,
and the last model allows dependencies only between neighboring pixels. We
show that for all three of these models, the accuracy of the learned models can
be greatly improved by augmenting a small number of labeled training images
with a large set of unlabeled images using Expectation-Maximization. This is
important because it is often difficult to obtain image labels, while many unlabeled images are readily available. Through a large set of empirical tests, we
examine the benefits of unlabeled data for each of the models. By using only
two randomly selected labeled examples per class, we can discriminate between
the five facial orientations with an accuracy of 94%; with six labeled examples,
we achieve an accuracy of 98%.
1 Introduction
This paper examines probabilistic modeling techniques for discriminating between five
face orientations: left profile, left semi-profile, frontal, right semi-profile, and right profile.
Three models are explored: the first model represents no inter-pixel dependencies, the second model is c,~pable of modeling a set of arbitrary pair-wise dependencies, and the last
model allows'~~~ndencies only between neighboring pixels.
Models which capture inter-pixel dependencies can provide better classification performance than those that do not capture dependencies. The difficulty in using the more complex models, however, is that as more dependencies are modeled, more parameters must be
estimated - which requires more training data. We show that by using Expectation-Maximization, the accuracy of what is learned can be greatly improved by augmenting a small
number of labeled training images with unlabeled images, which are much easier to obtain.
The remainder of this section describes the problem of face orientation discrimination in
detail. Section 2 provides a brief description ofthe probabilistic models explored. Section 3
presents results with these models with varying amounts of training data. Also shown is
how Expectation-Maximization can be used to augment the limited labeled training data
with unlabeled training data. Section 4 briefly discusses related work. Finally, Section 5
closes the paper with conclusions and suggestions for future work.
1.1 Detailed Problem Description
The interest in face orientation discrimination arises from two areas. First, the rapid
increase in the availability of inexpensive cameras makes it practical to create systems
which automatically monitor a person while using a computer. By using motion, color,
and size cues, it is possible to quickly find and segment a person's face when he/she is sitting in front of a computer monitor. By determining whether the person is looking directly
at the computer, or is staring away from the computer, we can provide feedback to any
user interface that could benefit from knowing whether a user is paying attention or is distracted (such as computer-based tutoring systems for children, computer games, or even
car-mounted cameras that monitor drivers).
Second, to perform accurate face detection for use in video-indexing or content-based
image retrieval systems, one approach is to design detectors specific to each face orientation, such as [Rowley et al., 1998, Sung 1996]. Rather than applying all detectors to every
location, a face-orientation system can be applied to each candidate face location to
"route" the candidate to the appropriate detector, thereby reducing the potential for falsepositives, and also reducing the computational cost of applying each detector. This
approach was taken in [Rowley et at., 1998].
For the experiments in this paper, each image to be classified is 20x20 pixels. The face is
centered in the image, and comprises most of the image. Sample faces are shown in
Figure 1. Empirically, our experiments show that accurate pose discrimination is possible
from binary versions of the images. First, the images were histogram-equalized to values
between 0 and 255. This is a standard non-linear transformation that maps an approximately equal number of pixels to each value within the 0-255 range. It is used to improve
the contrast in images. Second, to "binarize" the images, pixels with intensity above 128
were mapped to a value of 255; otherwise the pixels were mapped to a value of 0.
Figure 1: 4 images of each of the 5 classes to be discriminated (rows: right profile,
right half profile, frontal, left half profile, left profile). Note the variability in the
images. Left: Original Images. Right: Images after histogram equalization and
binary quantization.
2 Methods Explored
This section provides a description of the probabilistic models explored: Naive-Bayes,
Dependency Trees (as proposed by [Chow and Liu, 1968]), and a dependence network
which models dependencies only between neighboring pixels. For more details on using
Bayesian "multinets" (independent networks trained to model each class) for classification in a manner very similar to that used in this paper, see [Friedman, et at., 1997].
2.1 The Naive-Bayes Model
The first, and simplest, model assumes that each pixel is independent of every other pixel.
Although this assumption is clearly violated in real images, the model often yields good
results with limited training data since it requires the estimation of the fewest parameters.
Assuming that each image belongs exclusively to one of the five face classes to be dis-
criminated, the probability of the image belonging to a particular class is given as follows:

P(Class_c | Image) = P(Image | Class_c) × P(Class_c) / P(Image)

P(Image | Class_c) = Π_{i=1}^{400} P(Pixel_i | Class_c)

P(Pixel_i | Class_c) is estimated directly from the training data by:

P(Pixel_i | Class_c) = (k + Σ_{TrainingImages} Pixel_i × P(Class_c | Image)) / (2k + Σ_{TrainingImages} P(Class_c | Image))

Since we are only counting examples from the training images, P(Class_c | Image) is
known. The notation P(Class_c | Image) is used to represent image labels because it is convenient for describing the counting process with both labeled and unlabeled data (this will
be described in detail in Section 3). With the labeled data, P(Class_c | Image) ∈ {0, 1}. Later,
P(Class_c | Image) may not be binary; instead, the probability mass may be divided between
classes. Pixel_i ∈ {0, 1} since the images are binary. k is a smoothing constant, set to 0.001.
When used for classification, we compute the posterior probabilities and take the maximum, c_predicted, where: c_predicted = argmax_c P(Class_c | Image) ∝ P(Image | Class_c). For simplicity, P(Class_c) is assumed equal for all c; P(Image) is a normalization constant which
can be ignored since we are only interested in finding the maximum posterior probability.
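A minimal sketch of this estimator and decision rule for the hard-label case (our code, written for clarity rather than speed); `images` is an (M, 400) binary array:

    import numpy as np

    def train_naive_bayes(images, labels, n_classes, k=0.001):
        """Per-class Bernoulli pixel probabilities with the paper's smoothing k."""
        theta = np.empty((n_classes, images.shape[1]))
        for c in range(n_classes):
            members = images[labels == c]
            theta[c] = (k + members.sum(axis=0)) / (2 * k + len(members))
        return theta

    def classify(image, theta):
        """argmax_c P(Image | Class_c) with equal priors, computed in log space."""
        log_lik = (image * np.log(theta)
                   + (1 - image) * np.log(1 - theta)).sum(axis=1)
        return int(np.argmax(log_lik))

With soft labels (Section 3.2), the member counts simply become sums weighted by P(Class_c | Image).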
2.2 Optimal Pair-Wise Dependency Trees
We wish to model a probability distribution P(X_1, ..., X_400 | Class_c), where each X corresponds to a pixel in the image. Instead of assuming pixel independence, we restrict our
model to the following form:

P(X_1, ..., X_n | Class_c) = Π_{i=1}^n P(X_i | Π_{X_i}, Class_c)

where Π_{X_i} is X_i's single "parent" variable. We require that there be no cycles in these
"parent-of" relationships: formally, there must exist some permutation m = (m_1, ..., m_n) of
(1, ..., n) such that (Π_{X_i} = X_j) ⇒ m(i) < m(j) for all i. In other words, we restrict P′ to
factorizations representable by Bayesian networks in which each node (except the root)
has one parent, i.e., tree-shaped graphs.
A method for finding the optimal model within these restrictions is presented in [Chow
and Liu, 1968]. A complete weighted graph G is created in which each variable X_i is represented by a corresponding vertex V_i, and in which the weight W_ij for the edge between
vertices V_i and V_j is set to the mutual information I(X_i, X_j) between X_i and X_j. The edges
in the maximum spanning tree of G determine an optimal set of (n − 1) conditional probabilities with which to construct a tree-based model of the original probability distribution.
We calculate the probabilities P(X_i) and P(X_i, X_j) directly from the dataset. From these,
we calculate the mutual information, I(X_i, X_j), between all pairs of variables X_i and X_j:

I(X_i, X_j) = Σ_{a,b} P(X_i = a, X_j = b) · log [ P(X_i = a, X_j = b) / (P(X_i = a) · P(X_j = b)) ]

The maximum spanning tree minimizes the Kullback-Leibler divergence D(P ‖ P′) between
the true and estimated distributions:

D(P ‖ P′) = Σ_X P(X) · log [ P(X) / P′(X) ]

as shown in [Chow & Liu, 1968]. Among all distributions of the same form, this distribution maximizes the likelihood of the data when the data is a set of empirical observations
drawn from any unknown distribution.
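The construction reduces to two steps: a pairwise mutual-information matrix and a maximum spanning tree. The sketch below (our illustration; Prim's algorithm is one of several valid choices) operates on the binary images of a single class:

    import numpy as np

    def mutual_information_matrix(images, eps=1e-9):
        """Pairwise I(X_i, X_j) for an (M, n) binary array of one class."""
        M, n = images.shape
        p1 = images.mean(axis=0)               # P(X_i = 1)
        p11 = (images.T @ images) / M          # P(X_i = 1, X_j = 1)
        p10 = p1[:, None] - p11                # P(X_i = 1, X_j = 0)
        p01 = p1[None, :] - p11                # P(X_i = 0, X_j = 1)
        p00 = 1.0 - p1[:, None] - p1[None, :] + p11
        mi = np.zeros((n, n))
        for pab, pa, pb in [(p11, p1[:, None], p1[None, :]),
                            (p10, p1[:, None], 1 - p1[None, :]),
                            (p01, 1 - p1[:, None], p1[None, :]),
                            (p00, 1 - p1[:, None], 1 - p1[None, :])]:
            mi += pab * np.log((pab + eps) / (pa * pb + eps))
        return mi

    def maximum_spanning_tree(mi):
        """Prim's algorithm; returns (parent, child) edges of the optimal tree."""
        n = len(mi)
        in_tree = {0}
        edges = []
        while len(in_tree) < n:
            best = max(((i, j, mi[i, j]) for i in in_tree
                        for j in range(n) if j not in in_tree),
                       key=lambda t: t[2])
            edges.append((best[0], best[1]))
            in_tree.add(best[1])
        return edges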
2.3 Local Dependency Models

Unlike the Dependency Trees presented in the previous section, the local dependency networks only model dependencies between adjacent pixels. The most obvious dependencies
to model are each pixel's eight neighbors. The dependencies are shown graphically in Figure 2 (left). The difficulty with the above representation is that two pixels may be dependent upon each other (if this model was represented as a Bayesian network, it would
contain cycles). Therefore, to avoid problems with circular dependencies, we use the following model instead. Each pixel is still connected to each of its eight neighbors; however, the arcs are directed such that the dependencies are acyclic. In this local dependence
network, each pixel is only dependent on four of its neighbors: the three neighbors to the
right and the one immediately below. The dependencies which are modeled are shown
graphically in Figure 2 (right). The dependencies are:

P(Image | Class_c) = Π_{i=1}^{400} P(Pixel_i | Π_{Pixel_i}, Class_c)
Figure 2: Diagram of the dependencies maintained. Each square represents a pixel in the image.
Dependencies are shown only for two pixels. (Left) Model with 8 dependencies - note that because this model
has circular dependencies, we do not use it. Instead, we use the model shown on the Right. (Right) Model used
has 4 dependencies per pixel. By imposing an ordering on the pixels, circular dependencies are avoided.
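The following sketch (ours) shows how a conditional probability table indexed by the 2^4 = 16 parent configurations can be used to score an image under this model; treating out-of-image parents as 0 is one plausible border convention that the paper does not spell out:

    import numpy as np

    PARENT_OFFSETS = [(-1, 1), (0, 1), (1, 1), (1, 0)]  # up-right, right, down-right, below

    def parent_config(img, r, c, size=20):
        """Index in [0, 16) encoding the four parent pixels; parents that fall
        outside the image are treated as 0 (an assumption of this sketch)."""
        bits = 0
        for k, (dr, dc) in enumerate(PARENT_OFFSETS):
            rr, cc = r + dr, c + dc
            if 0 <= rr < size and 0 <= cc < size and img[rr, cc]:
                bits |= 1 << k
        return bits

    def log_likelihood(img, cpt):
        """log P(Image | Class) under the local dependency network; cpt has
        shape (20, 20, 16) giving P(pixel = 1 | parent configuration)."""
        ll = 0.0
        for r in range(20):
            for c in range(20):
                p = cpt[r, c, parent_config(img, r, c)]
                ll += np.log(p) if img[r, c] else np.log(1.0 - p)
        return ll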
3 Performance with Labeled and Unlabeled Data
In this section, we compare the results of the three probabilistic models with varying
amounts of labeled training data. The training set consists of between 1 and 500 labeled
training examples, and the testing set contains 5500 examples. Each experiment is
repeated at least 20 times with random train/test splits of the data.
3.1 Using only Labeled Data
In this section, experiments are conducted with only labeled data. Figure 3(left) shows
each model's accuracy in classifying the images in the test set into the five classes. As
858
S. Baluja
expected, as more training data is used, the performance improves for all models.
Note that the model with no-dependencies performs the best when there is little data.
However, as the amount of data increases, the relative performance of this model, compared to the other models which account for dependencies, decreases. It is interesting to
note that when there is little data, the Dependency Trees perform poorly. Since these trees
can select dependencies between any two pixels, they are the most susceptible to finding
spurious dependencies. However, as the amount of data increases, the performance of this
model rapidly improves. By using all of the labeled data (500 examples total), the Dependency Tree and the Local-Dependence network perform approximately the same, achieving a correct classification rate of approximately 99%.
Figure 3: Performance of the three models. X Axis: Amount of labeled training data used. Y Axis: Percent
correct on an independent test set. In the left graph, only labeled data was used. In the right graph, unlabeled and
labeled data was used (the total number of examples was 500, with varying amounts of labeled data).
3.2 Augmenting the Models with Unlabeled Data
We can augment what is learned from only using the labeled examples by incorporating
unlabeled examples through the use of the Expectation-Maximization (EM) algorithm.
Although the details of EM are beyond the scope of this paper, the resulting algorithm is
easily described (for a description of EM and applications to filling in missing values, see
[Dempster et al., 1977] and [Ghahramani & Jordan, 1994]):
1. Build the models using only the labeled data (as in Section 2).
2. Use the models to probabilistically label the unlabeled images.
3. Using the images with the probabilistically assigned labels, and the images with the given labels, recalculate the models' parameters. As mentioned in Section 2, for the images labeled by this process, P(Class_c | Image) is not restricted to {0, 1}; the probability mass for an image may be spread to multiple classes.
4. If a pre-specified termination condition is not met, go to step 2.
This process is used for each classifier. The termination condition was five iterations; after
five iterations, there was little change in the models' parameters.
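For the naive-Bayes model, steps 1-4 reduce to a few lines; the sketch below (our rendering of the procedure, analogous to the Section 2.1 sketch) keeps the given labels fixed and probabilistically relabels only the unlabeled images each iteration:

    import numpy as np

    def em_naive_bayes(labeled, labels, unlabeled, n_classes, iters=5, k=0.001):
        """EM with labeled and unlabeled binary images for the naive-Bayes
        model; the other two models plug into the same loop."""
        X = np.vstack([labeled, unlabeled])
        n_l = len(labeled)
        resp = np.zeros((len(X), n_classes))
        resp[np.arange(n_l), labels] = 1.0   # fixed labels; unlabeled rows start
        for _ in range(iters):               # at zero, so the first M-step is step 1
            # M-step: smoothed, responsibility-weighted pixel counts (step 3)
            theta = (k + resp.T @ X) / (2 * k + resp.sum(axis=0)[:, None])
            # E-step: probabilistically relabel the unlabeled images (step 2)
            log_lik = (X[n_l:] @ np.log(theta.T)
                       + (1 - X[n_l:]) @ np.log(1 - theta.T))
            post = np.exp(log_lik - log_lik.max(axis=1, keepdims=True))
            resp[n_l:] = post / post.sum(axis=1, keepdims=True)
        return theta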
The performance of the three classifiers with unlabeled data is shown in Figure 3(right).
Note that with small amounts of data, the performance of all of the classifiers improved
dramatically when the unlabeled data is used. Figure 4 shows the percent improvement by
using the unlabeled data to augment the labeled data. Note that the error is reduced by
almost 90% with the use of unlabeled data (see the case with Dependency Trees with only
4 labeled examples, in which the accuracy rates increase from 44% to 92.5%). With only
50 labeled examples, a classification accuracy of 99% was obtained. This accuracy was
obtained with almost an order of magnitude fewer labeled examples than required with
classifiers which used only labeled examples.
In almost every case examined, the addition of unlabeled data helped performance. However, unlabeled data actually hurt the no-dependency model when a large amount of
labeled data already existed. With large amounts of labeled data, the parameters of the
model were estimated well. Incorporating unlabeled data may have hurt performance
because the underlying generative process modeled did not match the real generative process. Therefore, the additional data provided may not have been labeled with the accuracy
required to improve the model's classification performance. It is interesting to note that
with the more complex models, such as the dependency trees or local dependence networks, even with the same amount of labeled data, unlabeled data improved performance.
[Nigam et al., 1998] have reported similar performance degradation when using a large
number of labeled examples and EM with a naive-Bayesian model to classify text documents. They describe two methods for overcoming this problem. First, they adjust the relative weight of the labeled and unlabeled data in the M-step by using cross-validation.
Second, they provide multiple centroids per class, which improves the data/model fit.
Although not presented here due to space limitations, the first method was attempted - it
improved the performance on the face orientation discrimination task.
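As a sketch of the first remedy, the soft counts contributed by unlabeled examples can be down-weighted by a factor lam in [0, 1] (tuned by cross-validation) before parameters are re-estimated. The count-based prior update below is an illustrative assumption, not the exact procedure of [Nigam et al., 1998].

```python
import numpy as np

def weighted_class_priors(soft_lab, soft_unlab, lam=0.1):
    """Re-estimate class priors with unlabeled responsibilities scaled by lam.

    soft_lab, soft_unlab: (n, n_classes) soft-label matrices; lam would be
    chosen by cross-validation.
    """
    counts = soft_lab.sum(axis=0) + lam * soft_unlab.sum(axis=0)
    return counts / counts.sum()
```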
[Figure 4 bar charts: relative improvement from adding unlabeled data, for 1, 4, and 50 labeled examples per model. See caption below.]
Figure 4: Improvement for each model by using unlabeled data to augment the labeled data. Left: with
only 1 labeled example, Middle: 4 labeled, Right: 50 labeled. The bars in light gray represent the
performance with only labeled data; the dark bars indicate the performance with the unlabeled data. The
number in parentheses indicates the absolute (in contrast to relative) percentage change in classification
performance with the use of unlabeled data.
4 Related Work
There is a large amount of work which attempts to discover attributes of faces, including
(but not limited to) face detection, face expression discrimination, face recognition, and
face orientation discrimination (for example [Rowley et al., 1998][Sung, 1996][Bartlett &
Sejnowski, 1997][Cottrell & Metcalfe, 1991][Turk & Pentland, 1991]). The work presented in this paper demonstrates the effective incorporation of unlabeled data into image
classification procedures; it should be possible to use unlabeled data in any of these tasks.
The closest related work is presented in [Nigam et al., 1998]. They used naive-Bayes
methods to classify text documents into a pre-specified number of groups. By using unlabeled data, they achieve significant classification performance improvement over using
labeled documents alone. Other work which has employed EM for learning from labeled
and unlabeled data include [Miller and Uyar, 1997] who used a mixture of experts classifier, and [Shahshahani & Landgrebe, 1994] who used a mixture of Gaussians. However,
the dimensionality of their input was at least an order of magnitude smaller than that used here.
There is a wealth of other related work, such as [Ghahramani & Jordan, 1994] who have
used EM to fill in missing values in the training examples. In their work, class labels can
be regarded as another feature value to fill-in.
Other approaches to reducing the need for large amounts of labeled data take the form of active learning, in which the learner can ask for the labels of particular examples. [Cohn et al., 1996] and [McCallum & Nigam, 1998] provide good overviews of active learning.
5 Conclusions & Future Work
This paper has made two contributions. The first contribution is to solve the problem of
discriminating between five face orientations with very little data. With only two labeled
example images per class, we were able to obtain classification accuracies of 94% on separate test sets (with the local dependence networks with 4 parents). With only a few more
examples, this was increased to greater than 98% accuracy. This task has a range of applications in the design of user-interfaces and user monitoring.
We also explored the use of multiple probabilistic models with unlabeled data. The models varied in their complexity, ranging from modeling no dependencies between pixels to modeling four dependencies per pixel. While the no-dependency model performs well
with very little labeled data, when given a large amount of labeled data, it is unable to
match the performance of the other models presented. The Dependency-Tree models perform the worst when given small amounts of data because they are most susceptible to finding spurious dependencies in the data. The local dependency models performed the
best overall, both by working well with little data, and by being able to exploit more data,
whether labeled or unlabeled. By using EM to incorporate unlabeled data into the training
of the classifiers, we improved the performance of the classifiers by up to approximately
90% when little labeled data was available.
The use of unlabeled data is vital in this domain. It is time-consuming to hand label many
images, but many unlabeled images are often readily available. Because many similar
tasks, such as face recognition and facial expression discrimination, suffer from the same
problem of limited labeled data, we hope to apply the methods described in this paper to
these applications. Preliminary results on related recognition tasks have been promising.
Acknowledgments
Scott Davies helped tremendously with discussions about modeling dependencies. I would also like to acknowledge the help of Andrew McCallum for discussions of EM, unlabeled data and the related work. Many thanks are
given to Henry Rowley who graciously provided the data set. Finally, thanks are given to Kaari Flagstad for
comments on drafts of this paper.
References
Bartlett, M. & Sejnowski, T. (1997) "Viewpoint Invariant Face Recognition using ICA and Attractor Networks",
in Adv. in Neural Information Processing Systems (NIPS) 9.
Chow, C. & Liu, C. (1968) "Approximating Discrete Probability Distributions with Dependence Trees". IEEE Transactions on Information Theory, 14:462-467.
Cohn, D.A., Ghahramani, Z. & Jordan, M. (1996) "Active Learning with Statistical Models", Journal of Artificial Intelligence Research 4: 129-145.
Cottrell, G. & Metcalfe (1991) "Face, Gender and Emotion Recognition using Holons", NIPS 3.
Dempster, A.P., Laird, N.M., Rubin, D.B. (1977) "Maximum Likelihood from Incomplete Data via the EM Algorithm", J. Royal Statistical Society, Series B, 39:1-38.
Friedman, N., Geiger, D. & Goldszmidt, M. (1997) "Bayesian Network Classifiers", Machine Learning 29.
Ghahramani & Jordan (1994) "Supervised Learning from Incomplete Data Via an EM Approach" NIPS 6.
McCallum, A. & Nigam, K. (1998) "Employing EM in Pool-Based Active Learning", in ICML98.
Miller, D. & Uyar, H. (1997) "A Mixture of Experts Classifier with Learning based on both Labeled and Unlabeled data", in Adv. in Neural Information Processing Systems 9.
Nigam, K. McCallum, A., Thrun, S., Mitchell, T. (1998), "Learning to Classify Text from Labeled and Unlabeled Examples", to appear in AAAI-98.
Rowley, H., Baluja, S. & Kanade, T. (1998) "Neural Network-Based Face Detection", IEEE-Transactions on
Pattern Analysis and Machine Intelligence (PAMI). Vol. 20, No. 1, January, 1998.
Shahshahani, B. & Landgrebe, D. (1994) "The Effect of Unlabeled Samples in Reducing the Small Sample Size Problem and Mitigating the Hughes Phenomenon", IEEE Trans. on Geosc. and Remote Sensing 32.
Sung, K.K. (1996), Learning and Example Selection for Object and Pattern Detection. Ph.D. Thesis, MIT AI
Lab - AI Memo 1572.
Turk, M. & Pentland, A. (1991) "Eigenfaces for Recognition". J. Cognitive Neuroscience 3(1).
| 1567 |@word version:1 briefly:1 middle:1 termination:2 fonn:1 thereby:1 tr:1 liu:4 contains:1 exclusively:1 series:1 document:3 mages:1 must:2 readily:2 cottrell:2 discrimination:11 alone:1 cue:1 selected:1 half:2 fewer:1 generative:2 intelligence:2 mccallum:4 provides:2 draft:1 node:1 location:2 ofo:1 five:8 driver:1 consists:1 manner:1 inter:3 ica:1 expected:1 rapid:1 examine:1 automatically:1 little:8 provided:2 discover:1 notation:1 underlying:1 maximizes:1 mass:2 what:2 minimizes:1 finding:2 transformation:1 sung:3 perfonn:1 every:3 ti:1 holons:1 classifier:9 demonstrates:1 appear:1 shumeet:1 local:7 approximately:4 pami:1 ibi:1 au:1 examined:1 limited:4 factorization:1 range:2 directed:1 practical:1 camera:2 acknowledgment:1 testing:1 hughes:1 dodd:1 procedure:1 area:1 empirical:2 convenient:1 davy:1 word:1 pre:2 unlabeled:37 close:1 selection:1 applying:2 equalization:1 restriction:1 map:1 center:1 missing:2 graphically:2 attention:1 go:1 simplicity:1 immediately:1 examines:1 regarded:1 fill:2 hurt:2 user:4 recognition:6 labeled:48 capture:2 worst:1 calculate:2 recalculate:1 cycle:2 connected:1 i1x:1 adv:2 ordering:1 decrease:1 remote:1 mentioned:1 dempster:2 mu:1 complexity:1 rowley:5 trained:1 segment:1 upon:1 learner:1 easily:1 represented:2 fewest:1 train:1 describe:1 effective:1 sejnowski:2 artificial:1 equalized:1 solve:2 otherwise:1 laird:1 remainder:1 neighboring:3 rapidly:1 poorly:1 achieve:2 description:4 parent:4 object:1 help:1 andrew:1 augmenting:3 pose:1 school:1 paying:1 c:1 indicate:1 met:1 correct:2 attribute:1 centered:1 require:1 perfonns:1 preliminary:1 liz:1 scope:1 estimation:1 label:8 create:1 weighted:1 hope:1 mit:1 clearly:1 i3:1 rather:1 avoid:1 poi:1 varying:3 probabilistically:2 she:1 improvement:3 likelihood:2 indicates:1 greatly:2 contrast:2 tremendously:1 centroid:1 graciously:1 kaari:1 dependent:2 chow:4 spurious:2 i1:1 interested:1 mitigating:1 pixel:29 overall:1 classification:10 orientation:14 among:1 augment:4 smoothing:1 mutual:2 equal:2 construct:1 emotion:1 shaped:1 ng:1 represents:2 filling:1 future:2 few:1 randomly:1 divergence:1 attractor:1 friedman:2 attempt:1 detection:4 interest:1 circular:3 adjust:1 mixture:3 light:1 xb:1 accurate:2 edge:2 capable:1 facial:3 perfonnance:3 tree:14 incomplete:2 increased:1 classify:3 modeling:8 maximization:4 cost:1 vertex:2 conducted:1 front:1 reported:1 dependency:43 person:3 thanks:2 discriminating:3 probabilistic:10 pool:1 quickly:1 thesis:1 aaai:1 expert:2 account:1 potential:1 availability:1 vi:1 later:1 root:1 helped:2 lab:1 bayes:3 maintains:1 staring:1 contribution:2 il:1 square:1 accuracy:11 who:4 miller:2 sitting:1 ofthe:1 yield:1 bayesian:5 monitoring:1 classified:1 detector:4 inexpensive:1 turk:2 obvious:1 dataset:1 ask:1 mitchell:1 color:1 car:1 improves:3 dimensionality:1 actually:1 supervised:1 improved:6 working:1 hand:1 cohn:2 gray:1 effect:1 contain:1 true:1 assigned:1 leibler:1 shahshahani:2 adjacent:1 game:1 maintained:1 complete:1 performs:1 motion:1 interface:2 l1:1 percent:2 image:36 wise:3 ranging:1 empirically:1 discriminated:1 overview:1 he:1 mellon:1 tda:1 significant:1 imposing:1 ai:4 henry:1 fii:1 posterior:2 closest:1 belongs:1 route:1 binary:4 additional:1 greater:1 employed:1 determine:1 semi:2 ii:1 multiple:3 match:2 cross:1 retrieval:1 divided:1 parenthesis:1 cmu:1 expectation:4 histogram:2 represent:2 normalization:1 iteration:2 addition:1 wealth:1 unlike:1 comment:1 jordan:4 counting:2 split:1 vital:1 independence:1 xj:8 fit:1 restrict:2 knowing:1 whether:3 six:1 
expression:2 bartlett:2 suffer:1 jj:1 ignored:1 dramatically:1 detailed:1 amount:14 dark:1 ph:1 simplest:1 reduced:1 exist:1 percentage:1 estimated:4 per:5 carnegie:1 discrete:1 vol:1 group:1 four:2 monitor:3 drawn:1 achieving:1 graph:4 almost:3 geiger:1 existed:1 ilk:1 incorporation:1 multinets:1 representable:1 belonging:1 describes:1 smaller:1 em:11 restricted:1 indexing:1 invariant:1 taken:1 ln:1 discus:1 describing:1 ge:1 available:3 gaussians:1 eight:2 apply:1 away:1 appropriate:1 falsepositives:1 original:3 assumes:1 include:1 exploit:1 ghahramani:4 build:1 approximating:1 society:1 already:1 dependence:6 separate:1 mapped:2 unable:1 oa:1 thrun:1 tutoring:1 binarize:1 spanning:2 assuming:2 modeled:3 fmding:2 relationship:1 providing:1 difficult:1 x20:1 susceptible:2 memo:1 design:2 unknown:1 perform:3 observation:1 arc:1 acknowledge:1 pentland:2 january:1 looking:1 variability:1 distracted:1 varied:1 arbitrary:2 intensity:1 overcoming:1 piip:1 pair:4 required:2 specified:2 learned:3 nip:3 trans:1 beyond:1 bar:2 able:2 below:1 pattern:2 scott:1 including:1 classj:3 video:1 royal:1 perfonned:1 difficulty:2 improve:2 brief:1 axis:2 created:1 naive:4 text:3 determining:1 relative:3 a11996:1 permutation:1 suggestion:1 interesting:2 mounted:1 limitation:1 acyclic:1 validation:1 rubin:1 viewpoint:1 classifying:1 last:2 dis:1 neighbor:4 eigenfaces:1 face:26 absolute:1 benefit:2 feedback:1 landgrebe:2 made:1 avoided:1 employing:1 transaction:1 kullback:1 active:4 pittsburgh:1 assumed:1 consuming:1 xi:3 classc:1 promising:1 kanade:1 nigam:5 complex:2 domain:1 vj:1 did:1 spread:1 neurosci:1 justsystem:1 profile:8 child:1 repeated:1 fmd:1 x1:1 comprises:1 wish:1 candidate:2 cog:1 specific:1 sensing:1 explored:6 incorporating:2 quantization:1 ci:1 magnitude:2 easier:1 gender:1 corresponds:1 ma:1 conditional:1 content:1 change:2 baluja:6 except:1 reducing:4 uyar:2 degradation:1 total:2 discriminate:1 e:1 attempted:1 formally:1 select:1 metcalfe:2 goldszmidt:1 arises:1 frontal:2 violated:1 incorporate:1 phenomenon:1 |
620 | 1,568 | Blind Separation of Filtered Sources
Using State-Space Approach
Liqing Zhang* and Andrzej Cichocki†
Laboratory for Open Information Systems,
Brain Science Institute, RIKEN
Saitama 351-0198, Wako shi, JAPAN
Email: {zha,cia}@open.brain.riken.go.jp
Abstract
In this paper we present a novel approach to multichannel blind
separation/generalized deconvolution, assuming that both mixing
and demixing models are described by stable linear state-space systems. We decompose the blind separation problem into two processes: separation and state estimation. Based on the minimization
of Kullback-Leibler Divergence, we develop a novel learning algorithm to train the matrices in the output equation. To estimate the
state of the demixing model, we introduce a new concept, called
hidden innovation, to numerically implement the Kalman filter.
Computer simulations are given to show the validity and high effectiveness of the state-space approach.
1 Introduction
The field of blind separation and deconvolution has grown dramatically during recent years due to its similarity to the separation feature in the human brain, as well as its
rapidly growing applications in various fields, such as telecommunication systems,
image enhancement and biomedical signal processing. The blind source separation
problem is to recover independent sources from sensor outputs without assuming
any a priori knowledge of the original signals besides certain statistical features. Refer to review papers [1] and [5] for the current state of theory and methods in the field.
Although there exist a number of models and methods, such as the infomax, natural gradient approach and equivariant adaptive algorithms, for separating blindly
independent sources, there still are several challenges in generalizing mixtures to dynamic and nonlinear systems, as well as in developing more rigorous and effective algorithms with general convergence [1-9], [11-13].
* On leave from South China University of Technology, China.
† On leave from Warsaw University of Technology, Poland.
The state-space description of systems is a new model for blind separation and
deconvolution[9,12]. There are several reasons why we use linear state-space systems
as blind deconvolution models. Although transfer function models are equivalent
to the state-space ones, it is difficult to exploit any common features that may
be present in the real dynamic systems. The main advantage of the state space
description for blind deconvolution is that it not only gives the internal description
of a system, but there are various equivalent types of state-space realizations for a
system, such as balanced realization and observable canonical forms. In particular
it is known how to parameterize some specific classes of models which are of interest
in applications. Also, it is much easier to tackle the stability problem of state-space
systems using the Kalman Filter. Moreover, the state-space model enables much
more general description than standard finite impulse response (FIR) convolutive
filtering. All known filtering (dynamic) models, like AR, MA, ARMA, ARMAX and
Gamma filterings, could also be considered as special cases of flexible state-space
models.
2 Formulation of Problem
Assume that the source signals are stationary zero-mean i.i.d. processes and mutually statistically independent. Let s(t) = (s_1(t), ..., s_n(t)) be an unknown vector of independent i.i.d. sources. Suppose that the mixing model is described by a stable linear state discrete-time system
x(k+1) = A x(k) + B s(k) + L e_P(k),   (1)
u(k) = C x(k) + D s(k) + δ(k),   (2)
where x ∈ R^r is the state vector of the system, s(k) ∈ R^n is the vector of source signals and u(k) ∈ R^m is the vector of sensor signals. A, B, C and D are the mixing matrices of the state-space model with consistent dimensions. e_P(k) is the process noise and δ(k) is the sensor noise of the mixing system. If we ignore the noise terms in the mixing model, its transfer function matrix is described by an m × n matrix of the form
H(z) = C(zI - A)^{-1} B + D,   (3)
where z^{-1} is the delay operator.
We formulate the blind separation problem as a task to recover original signals
from observations u(t) without prior knowledge on the source signals and the state
space matrices [A, B, C, D] besides certain statistical features of the source signals. We
propose that the demixing model here is another linear state-space system, which
is described as follows, (see Fig. 1)
x(k+1) = A x(k) + B u(k) + L e_R(k),   (4)
y(k) = C x(k) + D u(k),   (5)
where the input u(k) of the demixing model is just the output (sensor signals)
of the mixing model and e_R(k) is the reference model noise. A, B, C and D are the demixing matrices of consistent dimensions. In general, the matrices W = [A, B, C, D, L] are parameters to be determined in the learning process.
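For concreteness, a minimal sketch of running the demixing model (4)-(5) forward over a batch of sensor signals is given below (noise term omitted); the zero initial state and the matrix shapes are assumptions for illustration.

```python
import numpy as np

def demix(u, A, B, C, D):
    """Forward pass of the demixer: y(k) = C x(k) + D u(k),
    x(k+1) = A x(k) + B u(k).  u has shape (T, m); returns (T, n)."""
    x = np.zeros(A.shape[0])
    y = np.empty((u.shape[0], D.shape[0]))
    for k, uk in enumerate(u):
        y[k] = C @ x + D @ uk
        x = A @ x + B @ uk        # state recursion, reference noise omitted
    return y
```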
For simplicity, we do not consider, at this moment, the noise terms both in
the mixing and demixing models. The transfer function of the demixing model
is W(z) = C(zI - A)^{-1} B + D. The output y(k) is designed to recover the source
signals in the following sense
y(k) = W(z)H(z)s(k) = PΛ(z)s(k),   (6)
Figure 1: General state-space model for blind deconvolution
where P is any permutation matrix and Λ(z) is a diagonal matrix with λ_i z^{-τ_i} in diagonal entry (i,i); here λ_i is a nonzero constant and τ_i is any nonnegative
integer. It is easy to see that the linear state space model mixture is an extension
of instantaneous mixture. When both the matrices A, B, C in the mixing model
and A, B, C in the demixing model are null matrices, the problem is simplified to
the standard ICA problem [1-8].
The question here is whether there exist matrices [A, B, C, D] in the demixing model (4)
and (5), such that its transfer function W(z) satisfies (6). It is proven [12] that if
the matrix D in the mixing model is of full rank, rank(D) = n, then there exist
matrices [A, B, C, D], such that the output signal y of state-space system (4) and
(5) recovers the independent source signal s in the sense of (6).
3 Learning Algorithm
Assume that p(y, W) and p_i(y_i, W) are the joint probability density function of y and the marginal pdf of y_i (i = 1, ..., n), respectively. We employ the mutual information of the output signals, which measures the mutual independence of the output signals y_i(k), as a risk function [1,2]

l(W) = -H(y, W) + Σ_{i=1}^{n} H(y_i, W),   (7)

where

H(y, W) = -∫ p(y, W) log p(y, W) dy,   H(y_i, W) = -∫ p_i(y_i, W) log p_i(y_i, W) dy_i.
In this paper we do not directly develop learning algorithms to update all parameters W = [A, B, C, D] in the demixing model. We separate the blind deconvolution
problem into two procedures: separation and state-estimation. In the separation
procedure we develop a novel learning algorithm, using a new search direction, to
update the matrices C and D in output equation (5). Then we define a hidden
innovation of the output and use the Kalman filter to estimate the state vector x(k).
For simplicity we suppose that the matrix D in the demixing model (5) is a nonsingular n × n matrix. From the risk function (7), we can obtain a cost function for on-line learning

l(y, W) = -(1/2) log det(DᵀD) - Σ_{i=1}^{n} log p_i(y_i, W),   (8)
where det(DᵀD) is the determinant of the symmetric positive definite matrix DᵀD. For the gradient of l with respect to W, we calculate the total differential dl of l(y, W) when we take a differential dW on W:

dl(y, W) = l(y, W + dW) - l(y, W).   (9)
Following Amari's derivation for natural gradient methods [1-3], we have
dl(y, W) = -tr(dD D^{-1}) + φᵀ(y) dy,   (10)

where tr is the trace of a matrix and φ(y) is a vector of nonlinear activation functions

φ_i(y_i) = -d log p_i(y_i)/dy_i = -p_i′(y_i)/p_i(y_i).   (11)
Taking the derivative of equation (5), we have the following approximation:

dy = dC x(k) + dD u(k).   (12)

On the other hand, from (5), we have

u(k) = D^{-1}(y(k) - C x(k)).   (13)

Substituting (13) into (12), we obtain

dy = (dC - dD D^{-1}C)x + dD D^{-1}y.   (14)
In order to improve the computing efficiency of learning algorithms, we introduce a
new search direction
dX_1 = dC - dD D^{-1}C,   (15)
dX_2 = dD D^{-1}.   (16)
Then the total differential dl can be expressed by
dl = -tr(dX_2) + φᵀ(y)(dX_1 x + dX_2 y).   (17)
It is easy to obtain the derivatives of the cost function l with respect to the matrices X_1 and X_2 as

∂l/∂X_1 = φ(y(k)) xᵀ(k),   (18)
∂l/∂X_2 = φ(y(k)) yᵀ(k) - I.   (19)
From (15) and (16), we derive a novel learning algorithm to update the matrices C and D:

ΔC(k) = η(-φ(y(k)) xᵀ(k) + (I - φ(y(k)) yᵀ(k)) C(k)),   (20)
ΔD(k) = η(I - φ(y(k)) yᵀ(k)) D(k).   (21)
The equilibrium points of the learning algorithm satisfy the following equations
E[φ(y(k)) xᵀ(k)] = 0,   (22)
E[I - φ(y(k)) yᵀ(k)] = 0.   (23)
This means that the separated signals y can become as mutually independent as possible if the nonlinear activation functions φ(y) are suitably chosen and the state vector x(k) is well estimated. From (20) and (21), we see that the natural gradient learning algorithm [2] is recovered as a special case of the learning algorithm when the mixture is simplified to the instantaneous case.
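A minimal sketch of one on-line step of (20)-(21) is shown below, using φ(y) = y³ (the activation used in the simulations of Section 5); the learning rate value is an arbitrary illustrative choice.

```python
import numpy as np

def phi(y):
    return y ** 3                       # cubic activation, as in Section 5

def update_output_matrices(C, D, x, u, eta=1e-3):
    """One step of the learning rules (20)-(21) for C and D."""
    y = C @ x + D @ u                            # current output, eq. (5)
    G = np.eye(len(y)) - np.outer(phi(y), y)     # I - phi(y) y^T
    C_new = C + eta * (-np.outer(phi(y), x) + G @ C)
    D_new = D + eta * (G @ D)
    return C_new, D_new, y
```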
The learning algorithm derived above enables us to solve the blind separation problem under the assumption that the state matrices A and B are known or designed appropriately. In the next section, instead of adjusting the state matrices A and B directly, we propose new approaches to estimating the state vector x.
4 State Estimator
From output equation (5), it is observed that if we can accurately estimate the state
vector x(k) of the system, then we can separate mixed signals using the learning
algorithm (20) and (21).
4.1 Kalman Filter
The Kalman filter is a useful technique to estimate the state vector in state-space
models. The function of the Kalman filter is to generate on line the state estimate of the state x(k). The Kalman filter dynamics are given as follows:

x(k+1) = A x(k) + B u(k) + K r(k) + e_R(k),   (24)
where K is the Kalman filter gain matrix, and r(k) is the innovation or residual
vector which measures the error between the measured (or expected) output y(k)
and the predicted output Cx(k) + Du(k). There are varieties of algorithms to
update the Kalman filter gain matrix K as well as the state x(k), refer to [10] for
more details.
However in the blind deconvolution problem there exists no explicit residual r(k)
to estimate the state vector x(k) because the expected output y(t) means the
unavailable source signals. In order to solve this problem, we present a new concept
called hidden innovation, to implement the Kalman filter in the blind deconvolution case. Since updating the matrices C and D produces an innovation in each learning step,
we introduce a hidden innovation as follows
r(k) = Δy(k) = ΔC x(k) + ΔD u(k),   (25)

where ΔC = C(k+1) - C(k) and ΔD = D(k+1) - D(k). The hidden innovation presents the adjusting direction of the output of the demixing system and is used
to generate an a posteriori state estimate. Once we define the hidden innovation,
we can employ the commonly used Kalman filter to estimate the state vector x(k),
as well as to update the Kalman gain matrix K. The updating rule in this paper is
described as follows:
(1) Compute the Kalman gain matrix: K(k) = P(k)Cᵀ(k)(C(k)P(k)Cᵀ(k) + R(k))^{-1}
(2) Update the state vector with the hidden innovation: x(k) = x(k) + K(k)r(k)
(3) Update the error covariance matrix: P(k) = (I - K(k)C(k))P(k)
(4) Evaluate the state vector ahead: x(k+1) = A(k)x(k) + B(k)u(k)
(5) Evaluate the error covariance matrix ahead: P(k+1) = A(k)P(k)Aᵀ(k) + Q(k)
with the initial condition P(0) = I, where Q(k) and R(k) are the covariance matrices of the noise vector e_R and the output measurement noise n_k.
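The five steps translate directly into code. In the sketch below the hidden innovation r(k) is the one defined in (25), and Q and R are treated as fixed covariance matrices supplied by the caller; both points are assumptions of this illustration.

```python
import numpy as np

def kalman_step(x, P, r, A, B, C, u, Q, R):
    """One pass of steps (1)-(5), driven by the hidden innovation r."""
    K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)   # (1) Kalman gain
    x = x + K @ r                                  # (2) state update
    P = (np.eye(len(x)) - K @ C) @ P               # (3) covariance update
    x = A @ x + B @ u                              # (4) state ahead
    P = A @ P @ A.T + Q                            # (5) covariance ahead
    return x, P

# r would be computed from the learning step via (25):
# r = (C_new - C_old) @ x + (D_new - D_old) @ u
```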
The theoretical problems, such as convergence and stability, remain to be elaborated.
Simulation experiments show that the proposed algorithm, based on the Kalman
filter, can separate the convolved signals well.
4.2 Information Back-propagation
Another solution to estimating the state of a system is to propagate backward the mutual information. If we consider the cost function to be also a function of the vector x, then we have the partial derivative of l(y, W) with respect to x:

∂l(y, W)/∂x = Cᵀ φ(y).   (26)

Then we adjust the state vector x(k) according to the following rule:

x(k) = x(k) - η Cᵀ(k) φ(y(k)).   (27)
Then the estimated state vector is used as a new state of the system.
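Equivalently, as a two-line sketch (the step size eta is an arbitrary illustrative value, and the cubic activation matches the choice used in Section 5):

```python
import numpy as np

def backprop_state(x, C, y, eta=1e-2):
    """State correction (27): x <- x - eta * C^T phi(y), with phi(y) = y^3."""
    return x - eta * C.T @ (y ** 3)
```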
5 Numerical Implementation
Several numerical simulations have been done to demonstrate the validity and effectiveness of the proposed algorithm. Here we give a typical example.

Example 1. Consider the following MIMO mixing model:

u(k) + Σ_{i=1}^{10} A_i u(k-i) = s(k) + Σ_{i=1}^{10} B_i s(k-i) + v(k),
where u, s, v ∈ R³, and

A2 = [ -0.48 -0.16 -0.64 ; -0.16 -0.48 -0.24 ; -0.16 -0.16 -0.08 ],
A8 = [ -0.50 -0.10 -0.40 ; -0.10 -0.50 -0.20 ; -0.10 -0.10 -0.10 ],
A10 = [ 0.32 0.19 0.38 ; 0.16 0.29 0.20 ; 0.08 0.08 0.10 ],
B2 = [ 0.42 0.21 0.,4 ; 0.10 0.56 0.14 ; 0.21 0.21 0.35 ],
B8 = [ -0.40 -0.08 -0.08 ; -0.08 -0.40 -0.16 ; -0.08 -0.08 -0.56 ],
B10 = [ -0.19 -0.15 -0.,0 ; -0.11 -0.27 -0.12 ; -0.16 -0.18 -0.22 ],
and the other matrices are set to the null matrix. The sources s are chosen to be i.i.d. signals uniformly distributed in the range (-1, 1), and v is Gaussian noise with zero mean and covariance matrix 0.1 I. We employ the state-space approach to separate the mixed signals. The nonlinear activation function is chosen as φ(y) = y³. The initial values for the matrices A and B in the state equation are chosen as in the canonical controller form. The initial values for the matrix C are set to the null matrix or given randomly in the range (-1, 1), and D = I_3. A large number of simulations show that the state-space method can easily recover the source signals in the sense of W(z)H(z) = PΛ. Figure 2 illustrates the coefficients of the global transfer function G(z) = W(z)H(z) after 3000 iterations, where the (i,j)-th sub-figure plots the coefficients of the transfer function G_ij(z) = Σ_{k=0}^{∞} g_{ijk} z^{-k} up to order 50.
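The coefficients g_ijk plotted in Figure 2 can be read off by driving the cascade of mixing and demixing systems with unit impulses on each input channel; the sketch below assumes both systems are packaged as state-space tuples (A, B, C, D), an illustrative convention rather than the paper's own code.

```python
import numpy as np

def impulse_coeffs(mix, demix, n_in=3, n_lags=51):
    """g[k, :, j]: lag-k response of the cascade demix(mix(.)) to a unit
    impulse on input channel j; mix and demix are (A, B, C, D) tuples."""
    def run(system, u):
        A, B, C, D = system
        x = np.zeros(A.shape[0])
        y = []
        for uk in u:
            y.append(C @ x + D @ uk)
            x = A @ x + B @ uk
        return np.array(y)

    g = []
    for j in range(n_in):
        u = np.zeros((n_lags, n_in))
        u[0, j] = 1.0                     # unit impulse on channel j
        g.append(run(demix, run(mix, u)))
    return np.stack(g, axis=-1)           # shape (n_lags, n_out, n_in)
```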
References
[1] S. Amari and A. Cichocki, "Adaptive blind signal processing- neural network
approaches", Proceedings of the IEEE, 86(10):2026-2048, 1998.
[2] S. Amari, A. Cichocki, and H.H. Yang, "A new learning algorithm for blind
signal separation", Advances in Neural Information Processing Systems 1995
(Boston, MA: MIT Press, 1996), pp. 752- 763.
[Figure 2: a 3 × 3 array of sub-plots showing the coefficients of G_ij(z) for i, j = 1, 2, 3.]
Figure 2: The coefficients of the global transfer function after 3000 iterations.
[3] S. Amari, "Natural gradient works efficiently in learning", Neural Computation, Vol. 10, pp. 251-276, 1998.
[4] A. J. Bell and T. J. Sejnowski, "An information-maximization approach to
blind separation and blind deconvolution", Neural Computation, Vol. 7, pp
1129-1159, 1995.
[5] J.-F. Cardoso, "Blind signal separation: statistical principles", Proceedings
of the IEEE, 86(10):2009-2025, 1998.
[6] J.-F. Cardoso and B. Laheld, "Equivariant adaptive source separation," IEEE
Trans . Signal Processing, vol. SP-43, pp. 3017-3029, Dec. 1996.
[7] A. Cichocki and R. Unbehauen, "Robust neural networks with on-line learning for blind identification and blind separation of sources", IEEE Trans. Circuits and Systems I: Fundamental Theory and Applications, Vol. 43, No. 11, pp. 894-906, Nov. 1996.
[8] P. Comon, "Independent component analysis: a new concept?", Signal Processing, Vol. 36, pp. 287-314, 1994.
[9] A. Gharbi and F. Salam, "Algorithm for blind signal separation and recovery in static and dynamic environments", IEEE Symposium on Circuits and Systems, Hong Kong, June 1997, pp. 713-716.
[10] O. L. R. Jacobs, "Introduction to Control Theory", Second Edition, Oxford
University Press, 1993.
[11] T.-W. Lee, A.J. Bell, and R. Lambert, "Blind separation of delayed and convolved sources", NIPS 9, 1997, MIT Press, Cambridge, MA, pp. 758-764.
[12] L.-Q. Zhang and A. Cichocki, "Blind deconvolution/equalization using state-space models", Proc. '98 IEEE Signal Processing Society Workshop on NNSP, pp. 123-131, Cambridge, 1998.
[13] S. Choi, A. Cichocki and S. Amari, "Blind equalization of SIMO channels via spatio-temporal anti-Hebbian learning rule", Proc. '98 IEEE Signal Processing Society Workshop on NNSP, pp. 93-102, Cambridge, 1998.
| 1568 |@word kong:1 determinant:1 suitably:1 open:2 simulation:4 propagate:1 covariance:4 jacob:1 tr:3 moment:1 initial:3 wako:1 current:1 si:1 activation:3 dx:2 numerical:2 enables:1 designed:2 plot:1 update:7 stationary:1 xk:1 nnsp:2 filtered:3 zhang:5 differential:3 symposium:1 introduce:3 expected:2 equivariant:2 growing:1 brain:3 estimating:1 moreover:1 circuit:2 null:3 namic:1 temporal:1 y3:1 ti:2 tackle:1 control:1 ly:1 positive:1 oxford:1 china:2 bi:1 statistically:1 range:2 dcx:1 implement:2 definite:1 procedure:2 laheld:1 bell:2 operator:1 risk:2 equalization:2 equivalent:2 logpi:2 shi:1 yt:4 go:1 formulate:1 filterings:1 simplicity:2 recovery:1 estimator:1 rule:3 dw:2 stability:2 tan:1 suppose:2 pa:2 updating:2 ep:1 observed:1 parameterize:1 calculate:1 balanced:1 environment:1 dynamic:4 efficiency:1 easily:1 joint:1 various:2 grown:1 riken:2 train:1 derivation:1 separated:1 effective:1 sejnowski:1 solve:2 amari:5 statistic:2 advantage:1 propose:2 realization:2 rapidly:1 mixing:11 achieve:1 description:4 convergence:2 enhancement:1 produce:1 leave:2 derive:1 develop:3 aiu:1 measured:1 ij:1 predicted:1 direction:3 filter:12 human:1 enable:1 decompose:1 extension:1 considered:1 ic:2 warsaw:1 equilibrium:1 substituting:1 a2:1 estimation:2 proc:2 minimization:1 mit:2 sensor:4 gaussian:1 ax:3 derived:1 june:1 rank:2 rigorous:1 sense:3 posteriori:1 lj:1 hidden:7 flexible:1 priori:1 special:2 mutual:3 marginal:1 field:3 once:1 employ:3 randomly:1 gamma:1 divergence:1 delayed:1 interest:1 adjust:1 lep:1 mixture:4 dyi:2 partial:1 simo:1 arma:1 ar:1 logp:1 maximization:1 cost:3 entry:1 saitama:1 delay:1 mimo:1 density:1 fundamental:1 bu:2 lee:1 infomax:1 b8:1 fir:1 alo:1 derivative:3 li:1 japan:1 b2:1 coefficient:3 satisfy:1 blind:28 zha:1 recover:4 elaborated:1 il:1 efficiently:1 nonsingular:1 identification:1 lambert:1 accurately:1 email:1 pp:5 recovers:1 static:1 gain:4 adjusting:2 knowledge:2 cj:14 back:1 dt:2 response:1 formulation:1 done:1 just:1 biomedical:1 d:1 hand:1 nonlinear:4 propagation:1 impulse:1 validity:2 concept:3 gharbi:1 symmetric:1 laboratory:1 leibler:1 nonzero:1 during:1 hong:1 generalized:1 pdf:1 theoretic:1 demonstrate:1 cp:12 image:1 instantaneous:2 novel:4 common:1 jp:1 numerically:1 refer:2 measurement:1 cambridge:3 ai:1 stable:2 similarity:1 recent:1 certain:2 yi:12 signal:29 full:1 hebbian:1 liqing:1 ddd:5 controller:1 blindly:1 iteration:2 dec:1 lea:1 source:20 appropriately:1 south:1 effectiveness:2 integer:1 yang:1 easy:3 variety:1 independence:1 zi:2 t1c:1 det:1 whether:1 introd:1 tlc:1 armax:1 logdet:1 cpt:2 dramatically:1 useful:1 covered:1 cardoso:2 multichannel:1 generate:2 exist:3 canonical:2 estimated:2 discrete:1 vol:4 backward:1 year:1 telecommunication:1 separation:22 dy:5 ct:1 nonnegative:1 ahead:2 developing:1 according:1 remain:1 lld:1 b:1 comon:1 equation:6 mutually:2 cia:1 convolved:2 original:2 andrzej:1 exploit:1 society:2 question:1 rt:1 diagonal:2 gradient:5 separate:4 separating:1 evaluate:2 reason:1 assuming:2 kalman:14 besides:2 ler:1 innovation:9 difficult:1 trace:1 implementation:2 unknown:1 observation:1 finite:1 unbehauen:1 anti:1 dc:2 salam:1 nip:1 trans:2 convolutive:1 challenge:1 natural:4 residual:2 improve:1 technology:2 cichocki:9 sn:1 poland:1 review:1 prior:1 kf:3 permutation:1 mixed:1 filtering:2 proven:1 consistent:2 principle:1 pi:3 institute:1 taking:1 distributed:1 dimension:2 commonly:1 adaptive:3 simplified:2 nov:1 observable:1 ignore:1 kullback:1 global:2 spatio:1 search:2 why:1 channel:1 transfer:7 robust:1 
unavailable:1 du:2 cl:1 sp:1 main:1 noise:8 edition:1 cpi:1 fig:1 sub:1 explicit:1 xl:1 choi:1 specific:1 xt:3 er:2 deconvolution:11 demixing:12 dl:5 uction:1 exists:1 workshop:2 kr:1 illustrates:1 nk:1 boston:1 generalizing:1 cx:4 expressed:1 a8:1 satisfies:1 ma:3 determined:1 typical:1 uniformly:1 called:2 total:2 internal:1 statespace:1 |
621 | 1,569 | Discontinuous Recall Transitions Induced By
Competition Between Short- and Long-Range
Interactions in Recurrent Networks
N.S. Skantzos, C.F. Beckmann and A.C.C. Coolen
Dept of Mathematics, King's College London, Strand, London WC2R 2LS, UK
E-mail: skantzos@mth.kcl.ac.uk, tcoolen@mth.kcl.ac.uk
Abstract
We present exact analytical equilibrium solutions for a class of recurrent neural network models, with both sequential and parallel neuronal
dynamics, in which there is a tunable competition between nearest-neighbour and long-range synaptic interactions. This competition is
found to induce novel coexistence phenomena as well as discontinuous
transitions between pattern recall states, 2-cycles and non-recall states.
1 INTRODUCTION
Analytically solvable models of large recurrent neural networks are bound to be simplified
representations of biological reality. In early analytical studies such as [1,2] neurons were,
for instance, only allowed to interact with a strength which was independent of their spatial
distance (these are the so-called mean field models). At present both the statics of infinitely
large mean-field models of recurrent networks, as well as their dynamics away from saturation are well understood, and have obtained the status of textbook or review paper material
[3,4]. The focus in theoretical research of recurrent networks has consequently turned to
new areas such as solving the dynamics of large networks close to saturation [5], the analysis of finite size phenomenology [6], solving biologically more realistic (e.g. spike-based)
models [7] or analysing systems with spatial structure. In this paper we analyse models of recurrent networks with spatial structure, in which there are two types of synapses:
long-range ones (operating between any pair of neurons), and short-range ones (operating
between nearest neighbours only). In contrast to early papers on spatially structured networks [8], one here finds that, due to the nearest neighbour interactions, exact solutions
based on simple mean-field approaches are ruled out. Instead, the present models can be
solved exactly by a combination of mean-field techniques and the so-called transfer matrix method (see [9]). In parameter regimes where the two synapse types compete (where
one has long-range excitation with short-range inhibition, or long-range Hebbian synapses
with short-range anti-Hebbian synapses) we find interesting and potentially useful novel
phenomena, such as coexistence of states and discontinuous transitions between them.
2 MODEL DEFINITIONS
We study models with N binary neuron variables σ_i = ±1, which evolve in time stochastically on the basis of post-synaptic potentials h_i(σ), following

Prob[σ_i(t+1) = ±1] = (1/2)[1 ± tanh[β h_i(σ(t))]],   h_i(σ) = Σ_{j≠i} J_ij σ_j + θ_i.   (1)
The variables J_ij and θ_i represent synaptic interactions and firing thresholds, respectively. The (non-negative) parameter β controls the amount of noise, with β = 0 and β = ∞ corresponding to purely random and purely deterministic response, respectively. If the synaptic
matrix is symmetric. both a random sequential execution and a fully parallel execution of
the stochastic dynamics (1) will evolve to a unique equilibrium state. The corresponding
microscopic state probabilities can then both formally be written in the Boltzmann form
p_∞(σ) ~ exp[-βH(σ)], with [10]

H_seq(σ) = - Σ_{i<j} σ_i J_ij σ_j - Σ_i θ_i σ_i,   H_par(σ) = -(1/β) Σ_i log cosh[β h_i(σ)] - Σ_i θ_i σ_i.   (2)
For large systems the macroscopic observables of interest can be obtained by differentiation
of the free energy per neuron f = -lim_{N→∞} (βN)^{-1} log Σ_σ exp[-βH(σ)], which acts as a generating function. For the synaptic interactions J_ij and the thresholds θ_i we now
make the following choice:
model I:   J_ij = (J_l/N) ξ_i ξ_j + J_s (δ_{i,j+1} + δ_{i,j-1}) ξ_i ξ_j,   θ_i = θ ξ_i   (3)
(which corresponds to the result of having stored a binary pattern ξ ∈ {-1, 1}^N through Hebbian-type learning), with J_l, J_s, θ ∈ R and i+N ≡ i. The neurons can be thought of as being arranged on a periodic one-dimensional array, with uniform interactions of strength J_l ξ_i ξ_j / N, in combination with nearest-neighbour interactions of strength J_s ξ_i ξ_j. Note that
model I behaves in exactly the same way as the following
model II:   J_ij = J_l/N + J_s (δ_{i,j+1} + δ_{i,j-1}),   θ_i = θ   (4)
since a simple transformation σ_i → σ_i ξ_i maps one model into the other. Taking derivatives of f with respect to the parameters θ and J_s for model II produces our order parameters,
expressed as equilibrium expectation values. For sequential dynamics we have
m = -∂f/∂θ = lim_{N→∞} (1/N) Σ_i ⟨σ_i⟩,   a = -∂f/∂J_s = lim_{N→∞} (1/N) Σ_i ⟨σ_{i+1} σ_i⟩   (5)
For parallel dynamics the corresponding expressions turn out to be
m = -(1/2) ∂f/∂θ = lim_{N→∞} (1/N) Σ_i ⟨σ_i⟩,   a = -(1/2) ∂f/∂J_s = lim_{N→∞} (1/N) Σ_i ⟨σ_{i+1} tanh[β h_i(σ)]⟩   (6)
We have simplified (6) with the identities ⟨σ_{i+1} tanh[β h_i(σ)]⟩ = ⟨σ_{i-1} tanh[β h_i(σ)]⟩ and ⟨tanh[β h_i(σ)]⟩ = ⟨σ_i⟩, which follow from (1) and from invariance under the transformation i → N+1-i (for all i). For sequential dynamics a describes the average equilibrium state covariances of neighbouring neurons. For parallel dynamics a gives the average equilibrium state covariances of neurons at a given time t and their neighbours at time t+1 (the difference between the two meanings of a will be important in the presence of 2-cycles).
In model II m is the average activity in equilibrium, whereas for model I one finds

m = lim_{N→∞} (1/N) Σ_i ξ_i ⟨σ_i⟩.
This is the familiar overlap order parameter of associative memory models [1,2], which
measures the quality of pattern recall in equilibrium. The observable a transforms similarly.
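These expectation values can also be estimated directly by simulating the sequential Glauber dynamics (1) for model II; the sketch below is a minimal illustration, with system size, equilibration time and measurement window chosen arbitrarily.

```python
import numpy as np

def simulate_model_II(N=500, beta=2.0, Jl=1.0, Js=-0.5, theta=0.0,
                      n_sweeps=2000, n_burn=500, seed=0):
    """Sequential Glauber dynamics for J_ij = Jl/N + Js (nearest neighbours);
    returns Monte Carlo estimates of m and a."""
    rng = np.random.default_rng(seed)
    s = rng.choice([-1, 1], size=N)
    m_est, a_est, n_meas = 0.0, 0.0, n_sweeps - n_burn
    for sweep in range(n_sweeps):
        for _ in range(N):
            i = rng.integers(N)
            # local field; the O(1/N) self-term in s.mean() is neglected
            h = Jl * s.mean() + Js * (s[(i + 1) % N] + s[(i - 1) % N]) + theta
            s[i] = 1 if rng.random() < 0.5 * (1 + np.tanh(beta * h)) else -1
        if sweep >= n_burn:
            m_est += s.mean() / n_meas
            a_est += np.mean(s * np.roll(s, -1)) / n_meas
    return m_est, a_est
```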
3 SOLUTION VIA TRANSFER MATRICES
From this stage onwards our analysis will refer to model II, i.e. eqn (4); the results can immediately be translated into the language of model I (3) via the transformation σ_i → σ_i ξ_i. In calculating f it is advantageous to separate the terms induced by the long-range synapses from those induced by the short-range ones, via insertion of 1 = ∫ dm δ[m - (1/N) Σ_i σ_i]. Upon using the integral representation of the δ-function, we then arrive at
f = - lim_{N→∞} (1/(βN)) log ∫ dm dm̂ e^{-βN φ(m,m̂)}   (7)

with

φ_seq(m,m̂) = -i m̂ m - mθ - (1/2) J_l m² - (1/(βN)) log R_seq(m̂),
φ_par(m,m̂) = -i m̂ m - mθ - (1/(βN)) log R_par(m,m̂).   (8)

The quantities R contain all complexities due to the short-range interactions in the model (they describe a periodic one-dimensional system with neighbour interactions only):

R_seq(m̂) = Σ_{σ∈{-1,1}^N} exp[ βJ_s Σ_i σ_i σ_{i+1} - iβm̂ Σ_i σ_i ],
R_par(m,m̂) = Σ_{σ∈{-1,1}^N} exp[ Σ_i ( log cosh[β(J_l m + θ + J_s(σ_{i-1}+σ_{i+1}))] - iβm̂ σ_i ) ].

They can be calculated using the transfer-matrix technique [9], which exploits an interpretation of the summations over the N neuron states σ_i as matrix multiplications, giving

R_seq(m̂) = Tr[T_seq^N],   R_par(m,m̂) = Tr[T_par^N],

T_seq = ( e^{βJ_s-iβm̂}   e^{-βJ_s} ; e^{-βJ_s}   e^{βJ_s+iβm̂} ),
T_par = ( cosh[βw_+] e^{-iβm̂}   cosh[βw_0] ; cosh[βw_0]   cosh[βw_-] e^{iβm̂} ),

where w_0 = J_l m + θ and w_± = w_0 ± 2J_s. The identity Tr[T^N] = λ_+^N + λ_-^N, in which λ_± are the eigenvalues of the 2×2 matrix T, enables us to take the limit N→∞ in our equations. The integral over (m,m̂) is for N→∞ evaluated by gradient descent, and is dominated by the saddle points of the exponent φ. We thus arrive at the transparent result

f = extr φ(m,m̂),   φ_seq(m,m̂) = -i m̂ m - mθ - (1/2) J_l m² - β^{-1} log λ_+^seq,
                    φ_par(m,m̂) = -i m̂ m - mθ - β^{-1} log λ_+^par,   (9)

where λ_+^seq and λ_+^par are the largest eigenvalues of T_seq and T_par. For simplicity, we will restrict ourselves to the case where θ = 0; generalisation of what follows to the case of arbitrary θ, by using the full form of (9), is not significantly more difficult. The expressions defining the value(s) of the order parameter m can now be obtained from the saddle-point equations ∂_m φ(m,m̂) = ∂_m̂ φ(m,m̂) = 0. Straightforward differentiation shows that

sequential:   m̂ = iJ_l m,   m = G(m; J_l, J_s)
parallel:     m̂ = iJ_l m,   m = G(m; J_l, J_s)    for J_l ≥ 0
              m̂ = -iJ_l m,  m = G(m; -J_l, -J_s)   for J_l < 0   (10)

with

G(m; J_l, J_s) = sinh[βJ_l m] / √( sinh²[βJ_l m] + e^{-4βJ_s} ).   (11)
Note that equations (10, 11) allow us to derive the physical properties of the parallel dynamics model from those of the sequential dynamics model via simple transformations.
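Numerically, the physical solutions of (10, 11) follow from iterating the map m → G(m); a sketch for the sequential case with θ = 0 is given below. The positive initial condition selects the m > 0 branch where it exists, which is how the coexistence region manifests itself (a small initial m relaxes to 0 instead); the tolerance and iteration budget are arbitrary illustrative choices.

```python
import numpy as np

def G(m, beta, Jl, Js):
    sh = np.sinh(beta * Jl * m)
    return sh / np.sqrt(sh ** 2 + np.exp(-4 * beta * Js))

def solve_m(beta, Jl, Js, m0=0.9, tol=1e-12, max_iter=100000):
    """Fixed-point iteration for m = G(m; Jl, Js), sequential dynamics."""
    m = m0
    for _ in range(max_iter):
        m_new = G(m, beta, Jl, Js)
        if abs(m_new - m) < tol:
            return m_new
        m = m_new
    return m
```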
4 PHASE TRANSITIONS
Our main order parameter m is to be determined by solving an equation of the form m = G(m), in which G(m) = G(m; J_l, J_s) for both sequential and parallel dynamics with J_l ≥ 0, whereas G(m) = G(m; -J_l, -J_s) for parallel dynamics with J_l < 0. Note that, due to G(0; J_l, J_s) = 0, the trivial solution m = 0 always exists. In order to obtain a phase
diagram we have to perform a bifurcation analysis of the equations (10,11), and determine
the combinations of parameter values for which specific non-zero solutions are created or
annihilated (the transition lines). Bifurcations of non-zero solutions occur when
m = G(m)   and   1 = G′(m).   (12)
The first equation in (12) states that m must be a solution of the saddle-point problem, the
second one states that this solution is in the process of being created/annihilated. Nonzero
solutions of m = G (m) can come into existence in two qualitatively different ways: as continuous bifurcations away from the trivial solution m = 0, and as discontinuous bifurcations
away from the trivial solution. These two types will have to be treated differently.
4.1 Continuous Transitions
An analytical expression for the lines in the (βJ_s, βJ_l) plane where continuous transitions
occur between recall states (where m =f. 0) and non-recall states (where m = 0) is obtained
by solving the coupled equations (12) for m = O. This gives:
cont. trans.:   sequential:  βJ_l = e^{-2βJ_s}
                parallel:   βJ_l = e^{-2βJ_s}  and  βJ_l = -e^{2βJ_s}   (13)
If along the transition lines (13) we inspect the behaviour of G(m) close to m = 0 we can anticipate the possible existence of discontinuous ones, using the properties of G(m) for m → ±∞, in combination with G(-m) = -G(m). Precisely at the lines (13) we have G(m) = m + (1/6)G′′′(0)m³ + O(m⁵). Since lim_{m→∞} G(m) = 1 one knows that for G′′′(0) > 0 the function G(m) will have to cross the diagonal G(m) = m again at some value m > 0 in order to reach the limit G(∞) = 1. This implies, in combination with G(-m) = -G(m), that a discontinuous transition must have already taken place earlier, and that away from the lines (13) there will consequently be regions where one finds five solutions of m = G(m) (two positive ones, two negative ones). Along the lines (13) the condition G′′′(0) > 0, pointing at discontinuous transitions elsewhere, translates into
sequential:   βJ_l > √3  and  βJ_s < -(1/4) log 3
parallel:    |βJ_l| > √3  and  |βJ_s| < -(1/4) log 3   (14)

4.2 Discontinuous Transitions
In the present models it turns out that one can also find an analytical expression for the discontinuous transition lines in the (βJ_s, βJ_l) plane, in the form of a parametrisation. For sequential dynamics one finds a single line, parametrised by x = βJ_l m ∈ [0, ∞):
discont. trans.:   βJ_l(x) = √( x³ / (x - tanh x) ),   βJ_s(x) = -(1/4) log[ tanh(x) sinh²(x) / (x - tanh x) ]   (15)
Since this parametrisation (15) obeys βJ_s(0) = -(1/4) log 3 and βJ_l(0) = √3, the discontinuous transition indeed starts precisely at the point predicted by the convexity of G(m) at m = 0, see (14). For sequential dynamics the line (15) gives all non-zero solutions of the coupled equations (12). For parallel dynamics one finds, in addition to (15), a second 'mirror image' transition line, generated by the transformation {βJ_l, βJ_s} ↔ {-βJ_l, -βJ_s}.
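The discontinuous line can be traced directly from the parametrisation (15), for instance:

```python
import numpy as np

def discontinuous_line(x):
    """(beta*Jl, beta*Js) along the discontinuous transition, x = beta*Jl*m."""
    x = np.asarray(x, dtype=float)
    bJl = np.sqrt(x ** 3 / (x - np.tanh(x)))
    bJs = -0.25 * np.log(np.tanh(x) * np.sinh(x) ** 2 / (x - np.tanh(x)))
    return bJl, bJs

# As x -> 0 this approaches (sqrt(3), -log(3)/4), the end point where the
# discontinuous line meets the continuous one, cf. (14).
bJl, bJs = discontinuous_line(np.linspace(0.05, 4.0, 200))
```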
341
Competition between Short- and Long-Range Interactions
5 PHASE DIAGRAMS
[Figure 1 phase diagrams in the (βJ_s, βJ_l) plane, for sequential (left) and parallel (right) dynamics; see caption below.]
Figure 1: Left: phase diagram for sequential dynamics, involving three regions: (i) a region with m = 0 only (here a = tanh[βJ_s]), (ii) a region with two m ≠ 0 fixed-point states (with opposite sign, and with identical a > 0), and (iii) a region where the m = 0 state and the two m ≠ 0 states coexist. The (i) → (ii) and (ii) → (iii) transitions are continuous (solid lines), whereas the (i) → (iii) transition is discontinuous (dashed line). Right: phase diagram for parallel dynamics, involving the above regions and transitions, as well as a second set of transition lines (in the region J_l < 0) which are exact reflections in the origin of the first set. Here, however, the m = 0 region has a = tanh[2βJ_s], the two m ≠ 0 physical solutions describe 2-cycles rather than fixed points, and the J_l < 0 coexistence region describes the coexistence of an m = 0 fixed point and 2-cycles.
Having determined the transition lines in parameter space, we can turn to the phase diagrams. A detailed exposé of the various procedures followed to determine the nature of the various phases, which are also dependent on the type of dynamics used, goes beyond the scope of this presentation; here we can only present the resulting picture.¹ Figure 1 shows the phase diagram for the two types of dynamics, in the (βJ_s, βJ_l) plane (note: of the three parameters {β, J_s, J_l} one is redundant). In contrast to models with nearest-neighbour interactions only (J_l = 0, where no pattern recall will ever occur), and to models with mean-field interactions only (J_s = 0, where pattern recall can occur), the combination of the two interaction types leads to qualitatively new modes of operation. This is especially so in the competition region, where J_l > 0 and J_s < 0 (Hebbian long-range synapses, combined with anti-Hebbian short-range ones). The novel features of the diagram can play a useful role: phase coexistence ensures that only sufficiently strong recall cues will evoke pattern recognition; the discontinuity of the transition subsequently ensures that in the latter case the recall will be of a substantial quality. In the case of parallel dynamics, similar statements can be made in the opposite region of synaptic competition, but now involving 2-cycles. Since figure 1 cannot show the zero-noise region (β = T⁻¹ = ∞), we have also drawn the interesting competition region of the sequential dynamics phase diagram in the (J_l, T) plane, for J_s = -1 (see figure 2, left picture). At T = 0 one finds coexistence of recall states (m ≠ 0) and non-recall states (m = 0) for any J_l > 0, as soon as J_s < 0. In the same figure (right picture) we show the magnitude of the discontinuity in the order parameter m at the discontinuous transition, as a function of βJ_l.
¹ Due to the occurrence of imaginary saddle-points in (10) and our strategy to eliminate the variable m̂ by using the equation ∂_m̂ φ(m,m̂) = 0, it need not be true that the saddle-point with the lowest value of φ(m,m̂) is the minimum of φ (complex conjugation can induce curvature sign changes, and in addition the minimum could occur at boundaries or as special limits). Inspection of the status of saddle-points and identification of the physical ones in those cases where there are multiple solutions is thus a somewhat technical issue, details of which will be published elsewhere [11].
[Figure 2 plots: left, the competition region in the (J_l, T) plane for J_s = -1; right, the jump in m along the discontinuous transition versus βJ_l. See caption below.]
Figure 2: Left picture: alternative presentation of the competition region of the sequential dynamics phase diagram in figure 1. Here the system states and transitions are drawn in the (J_l, T) plane (T = β⁻¹), for J_s = -1. Right picture: the magnitude of the 'jump' of the overlap m along the discontinuous transition line, as a function of βJ_l.
The fact that for parallel dynamics one finds 2-cycles in the lower left corner of the phase diagram (figure 1) can be inferred from the exact dynamical solution available along the line J_s = 0 (see e.g. [4]), provided by the deterministic map m(t+1) = tanh[βJ_l m(t)]. Finally we show, by way of further illustration of the coexistence mechanism, the value of the reduced exponent φ_seq(m) given in (9), evaluated upon elimination of the auxiliary order parameter m̂: φ(m) ≡ φ_seq(m, iJ_l m). The result, for the parameter choice (β, J_l) = (2, 3) and for three different short-range coupling strengths (corresponding to the three phase regimes: non-zero recall, coexistence and zero recall) is given in figure 3. In the same figure we also give the sequential dynamics bifurcation diagram displaying the value(s) of the overlap m as a function of βJ_l and for βJ_s = -0.6 (a line crossing all three phase regimes in figure (1)).
[Figure 3 plots: left, φ(m) versus m for three values of J_s; right, the bifurcation diagram of m versus βJ_l at βJ_s = -0.6. See caption below.]
Figure 3: Left: graph of the reduced exponent φ(m) = φ_seq(m, iJ_l m) for the parameter choice (β, J_l) = (2, 3). The three lines (from upper to lower: J_s = -1.2, -0.8, -0.2) correspond to regimes where (i) m ≠ 0 only, (ii) coexistence of trivial and non-trivial recall states occurs, and (iii) m = 0 only. Right: sequential dynamics bifurcation diagram displaying, for βJ_s = -0.6, the possible recall solutions. For a critical βJ_l given by (15), m jumps discontinuously to non-zero values. For increasing values of βJ_l the unstable m ≠ 0 solutions converge towards the trivial one until βJ_l = e^{1.2}, where a continuous phase transition takes place and m = 0 becomes unstable.
6 DISCUSSION
In this paper we have presented exact analytical equilibrium solutions, for sequential and
parallel neuronal dynamics, for a class of recurrent neural network models which allow for
a tunable competition between short-range synapses (operating between nearest neighbours
only) and long-range ones (operating between any pair of neurons). The present models
have been solved exactly by a combination of mean-field techniques and transfer matrix
techniques. We found that there exist regions in parameter space where discontinuous
transitions take place between states without pattern recall and either states of partial/full
pattern recall or 2-cycles. These regions correspond to the ranges of the network parameters
where the competition is most evident, for instance, where one has strongly excitatory long-range interactions and strongly inhibitory short-range ones. In addition this competition is
found to generate a coexistence of pattern recall states or 2-cycles with the non-recall state,
which (in turn) induces a dependence on initial conditions of whether or not recall will at
all take place.
This study is, however, only a first step. In a similar fashion one can now study more complicated systems, where (in addition to the long-range synapses) the short-range synapses
reach beyond nearest neighbours, or where the system is effectively on a two-dimensional
(rather than one-dimensional) array. Such models can still be solved using the techniques
employed here. A different type of generalisation would be to allow for a competition
between synapses which would not all be of a Hebbian form, e.g. by having long-range
Hebbian synapses (modeling processing via pyramidal neurons) in combination with shortrange inhibitory synapses without any effect of learning (modeling processing via simple
inhibitory inter-neurons). In addition, one could increase the complexity of the model by
storing more than just a single pattern. In the latter types of models the various pattern
components can no longer be transformed away, and one has to turn to the methods of
random field Ising models (see e.g. [12]).
References
[1] D.J. Amit, H. Gutfreund and H. Sompolinsky (1985), Phys. Rev. A 32, 1007-1018
[2] D.J. Amit, H. Gutfreund and H. Sompolinsky (1985), Phys. Rev. Lett. 55, 1530-1533
[3] A.C.C. Coolen and D. Sherrington (1993), in J.G. Taylor (editor) Mathematical Approaches to Neural Networks, Elsevier Science Publishers, 293-306
[4] A.C.C. Coolen (1997), Statistical Mechanics of Neural Networks, King's College
London Lecture Notes
[5] A.C.C. Coolen, S.N. Laughton and D. Sherrington (1996), in D.S. Touretzky, M.C.
Mozer and M.E. Hasselmo (eds) Advances in Neural Information Processing Systems
8, MIT Press
[6] A. Castellanos, A.C.C. Coolen and L. Viana (1998), J. Phys. A 31, 6615-6634
[7] E. Domany, J.L. van Hemmen and K. Schulten (eds) (1994), Models of Neural Networks II, Springer
[8] A.C.C. Coolen and L.G.Y.M. Lenders (1992), J. Phys. A 25, 2593-2606
[9] J.M. Yeomans (1992), Statistical Mechanics of Phase Transitions, Oxford U.P.
[10] P. Peretto (1984), Biol. Cybern. 50, 51-62
[11] N.S. Skantzos and A.C.C. Coolen (1998), in preparation
[12] V. Brandt and W. Gross (1978), Z. Physik B 31, 237-245
| 1569 |@word advantageous:1 physik:1 bn:1 covariance:2 tr:2 solid:1 initial:1 imaginary:1 written:1 must:2 realistic:1 enables:1 cue:1 plane:5 inspection:1 short:14 brandt:1 five:1 mathematical:1 along:4 inter:1 indeed:1 mechanic:2 eil:1 increasing:1 becomes:1 provided:1 lowest:1 what:1 textbook:1 gutfreund:2 lone:1 transformation:5 differentiation:2 jlm:1 act:1 exactly:3 uk:2 control:1 positive:1 understood:1 limit:3 oxford:1 firing:1 range:24 obeys:1 unique:1 procedure:1 area:1 thought:1 significantly:1 induce:2 extr:1 cannot:1 close:2 coexist:1 bh:2 deterministic:2 map:2 poo:1 straightforward:1 go:1 l:1 simplicity:1 immediately:1 array:2 exact:5 neighbouring:1 origin:1 crossing:1 recognition:1 ising:1 role:1 solved:3 region:16 ensures:2 cycle:7 sompolinsky:2 substantial:1 a32:1 mozer:1 convexity:1 ui:13 insertion:1 complexity:2 gross:1 dynamic:26 solving:4 purely:2 upon:2 basis:1 observables:1 translated:1 differently:1 various:3 kcl:2 describe:2 london:3 analyse:1 associative:1 eigenvalue:2 analytical:5 interaction:16 jij:4 turned:1 detennined:1 competition:14 produce:1 generating:1 oo:9 recurrent:7 ac:2 derive:1 coupling:1 ij:1 nearest:6 strong:1 auxiliary:1 predicted:1 cool:1 come:1 implies:1 discontinuous:13 stochastic:1 subsequently:1 material:1 elimination:1 ja:2 wc2r:1 behaviour:1 transparent:1 biological:1 anticipate:1 summation:1 mm:2 sufficiently:1 ic:1 exp:3 equilibrium:8 scope:1 mo:2 pointing:1 early:2 coolen:7 tanh:11 expose:1 largest:1 hasselmo:1 mit:1 always:1 rather:2 og:1 focus:1 contrast:2 am:1 elsevier:1 dependent:1 eliminate:1 mth:2 transformed:1 issue:1 exponent:3 spatial:3 special:1 bifurcation:6 field:7 having:3 identical:1 lit:1 neighbour:8 familiar:1 phase:16 ourselves:1 n1:1 onwards:1 interest:1 parametrised:1 integral:2 iv:1 taylor:1 ruled:1 theoretical:1 instance:2 earlier:1 modeling:2 castellanos:1 ar:2 uniform:1 imi:1 stored:1 ojs:2 periodic:2 combined:1 parametrisation:2 again:1 stochastically:1 corner:1 derivative:1 potential:1 w_:1 skantzos:6 start:1 parallel:15 complicated:1 om:3 correspond:2 identification:1 published:1 synapsis:11 reach:2 phys:4 touretzky:1 synaptic:5 ed:2 definition:1 energy:1 dm:1 di:4 static:1 coexistence:10 tunable:2 recall:21 lim:2 follow:1 response:1 synapse:1 rand:1 arranged:1 evaluated:2 strongly:2 just:1 stage:1 until:1 eqn:1 ei:1 annihilated:2 jll:1 mode:1 quality:2 effect:1 contain:1 true:1 analytically:1 spatially:1 symmetric:1 nonzero:1 excitation:1 evident:1 sherrington:2 tn:1 reflection:1 meaning:1 image:1 novel:3 behaves:1 physical:3 ji:2 jl:27 interpretation:1 m1:2 refer:1 mathematics:1 similarly:1 language:1 dj:1 longer:1 operating:4 inhibition:1 j:13 playa:1 curvature:1 ntp:1 binary:2 minimum:2 somewhat:1 employed:1 determine:1 converge:1 redundant:1 dashed:1 ii:9 full:1 multiple:1 hebbian:7 technical:1 cross:1 long:13 post:1 iog:2 j3:4 involving:3 expectation:1 represent:1 whereas:3 addition:5 diagram:11 pyramidal:1 limn:1 macroscopic:1 publisher:1 induced:3 presence:1 iii:4 restrict:1 opposite:2 tm:1 domany:1 translates:1 whether:1 expression:4 wo:4 useful:2 detailed:1 amount:1 transforms:1 cosh:4 induces:1 reduced:2 generate:1 exist:1 inhibitory:3 sign:2 per:1 threshold:2 drawn:2 graph:1 compete:1 prob:1 i5:1 viana:1 arrive:2 place:4 seq:5 bound:1 hi:1 sinh:2 followed:1 conjugation:1 activity:1 strength:3 occur:5 precisely:2 dominated:1 structured:1 combination:8 describes:2 wi:1 rev:2 biologically:1 coo:2 taken:1 equation:9 turn:5 mechanism:1 know:1 dia:1 available:1 detennine:1 operation:1 phenomenology:1 
away:5 occurrence:1 alternative:1 existence:2 jd:2 calculating:1 exploit:1 giving:1 especially:1 amit:2 already:1 quantity:1 spike:1 occurs:1 strategy:1 dependence:1 diagonal:1 microscopic:1 gradient:1 distance:1 separate:1 mail:1 unstable:2 trivial:6 cont:1 beckmann:4 illustration:1 bhi:6 difficult:1 potentially:1 statement:1 negative:2 boltzmann:1 perform:1 upper:1 inspect:1 neuron:11 peretto:1 finite:1 descent:1 anti:2 defining:1 ever:1 iui:1 arbitrary:1 inferred:1 pair:2 discontinuity:2 trans:2 beyond:2 dynamical:1 pattern:11 regime:4 saturation:2 memory:1 overlap:3 critical:1 treated:1 shortrange:1 solvable:1 picture:5 created:2 hm:2 coupled:2 review:1 evolve:2 multiplication:1 laughton:1 fully:1 par:4 lecture:1 interesting:2 displaying:2 editor:1 storing:1 elsewhere:2 excitatory:1 free:1 soon:1 allow:3 taking:1 van:1 boundary:1 calculated:1 lett:1 transition:26 qualitatively:2 made:1 jump:2 simplified:2 log3:1 observable:1 longrange:1 status:2 evoke:1 imm:2 jtm:2 cybem:1 continuous:5 reality:1 nature:1 transfer:4 interact:1 complex:1 main:1 noise:2 allowed:1 neuronal:2 en:3 hemmen:1 fashion:1 schulten:1 discont:1 specific:1 jt:6 exists:1 sequential:17 effectively:1 mirror:1 magnitude:2 execution:2 saddle:6 infinitely:1 expressed:1 strand:1 springer:1 corresponds:1 cti:4 bioi:1 identity:1 presentation:2 king:2 consequently:2 njl:1 towards:1 analysing:1 change:1 generalisation:2 determined:1 discontinuously:1 llog:1 called:2 invariance:1 m3:1 formally:1 college:2 latter:2 preparation:1 dept:1 phenomenon:2 |
622 | 157 | ON THE K-WINNERS-TAKE-ALL NETWORK
E. Majani
Jet Propulsion Laboratory
California Institute of Technology
R. Erlanson, Y. Abu-Mostafa
Department of Electrical Engineering
California Institute of Technology
ABSTRACT
We present and rigorously analyze a generalization of the Winner-Take-All Network: the K-Winners-Take-All Network. This network identifies the K largest of a set of N real numbers. The
network model used is the continuous Hopfield model.
I - INTRODUCTION
The Winner-Take-All Network is a network which identifies the largest of N real
numbers. Winner-Take-All Networks have been developed using various neural
network models (Grossberg-73, Lippman-87, Feldman-82, Lazzaro-89). We present
here a generalization of the Winner-Take-All Network: the K-Winners-Take-All
(KWTA) Network. The KWTA Network identifies the K largest of N real numbers.
The neural network model we use throughout the paper is the continuous Hopfield
network model (Hopfield-84). If the states of the N nodes are initialized to the N
real numbers, then, if the gain of the sigmoid is large enough, the network converges
to the state with K positive real numbers in the positions of the nodes with the K
largest initial states, and N - K negative real numbers everywhere else.
Consider the following example: N = 4, K = 2. There are 6 = (4 choose 2) stable states: (+ + - -)^T, (+ - + -)^T, (+ - - +)^T, (- - + +)^T, (- + - +)^T, and (- + + -)^T. If the initial state of the network is (0.3, -0.4, 0.7, 0.1)^T, then the network will converge to (v_1, v_2, v_3, v_4)^T where v_1 > 0, v_2 < 0, v_3 > 0, v_4 < 0, i.e., (+ - + -)^T.
In Section II, we define the KWTA Network (connection weights, external inputs).
In Section III, we analyze the equilibrium states and in Section IV, we identify all
the stable equilibrium states of the KWTA Network. In Section V, we describe the
dynamics of the KWTA Network. In Section VI, we give two important examples
of the KWTA Network and comment on an alternate implementation of the KWTA
Network.
II - THE K-WINNERS-TAKE-ALL NETWORK
The continuous Hopfield network model (Hopfield-84) (also known as the Grossberg additive model (Grossberg-88)) is characterized by a system of first-order differential equations which governs the evolution of the state of the network (i = 1, ..., N):

C du_i/dt = -λ u_i + Σ_{j=1}^N T_ij g(u_j) + t_i .

The sigmoid function g(u) is defined by g(u) = f(G u), where G > 0 is the gain of the sigmoid, and f(u) satisfies: 1. for all u, 0 < f'(u) ≤ f'(0) = 1; 2. lim_{u → +∞} f(u) = 1; 3. lim_{u → -∞} f(u) = -1.
The KWTA Network is characterized by mutually inhibitory interconnections T_ij = -1 for i ≠ j, a self-connection T_ii = a (|a| < 1), and an external input (identical for every node) which depends on the number K of winners desired and the size of the network N: t_i = 2K - N.
The differential equations for the KWTA Network are therefore: for all i,

C du_i/dt = -λ u_i + (a+1) g(u_i) - ( Σ_{j=1}^N g(u_j) - t ),   (1)

where λ = N - 1 + |a|, -1 < a < +1, and t = 2K - N. Let us now study the equilibrium states of the dynamical system defined in (1). We already know from previous work (Hopfield-84) that the network is guaranteed to converge to a stable equilibrium state if the connection matrix T is symmetric (and it is here).
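As a concrete illustration, the following sketch integrates Eq. (1) with a forward-Euler scheme. This is a minimal sketch: the tanh transfer function, gain, step size and iteration count are assumptions chosen for the demonstration, not values prescribed by the analysis.

import numpy as np

def kwta(u0, K, a=0.5, G=20.0, C=1.0, dt=0.01, steps=5000):
    """Forward-Euler integration of the KWTA dynamics of Eq. (1)."""
    u = np.array(u0, dtype=float)
    N = u.size
    lam = N - 1 + abs(a)           # lambda = N - 1 + |a|
    t = 2 * K - N                  # common external input
    for _ in range(steps):
        y = np.tanh(G * u)         # g(u) = f(G u); tanh is an assumed choice of f
        u += (dt / C) * (-lam * u + (a + 1) * y - (y.sum() - t))
    return u

u = kwta([0.3, -0.4, 0.7, 0.1], K=2)
print(np.sign(u))                  # -> [ 1. -1.  1. -1.], the two largest inputs win

Running the example reproduces the (+ - + -)^T outcome of the worked example above.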
III - EQUILIBRIUM STATES OF THE NETWORK
The equilibrium states u* of the KWTA network are defined by du_i/dt = 0 for all i, i.e., for all i,

g(u_i*) = [λ/(a+1)] u_i* + [1/(a+1)] ( Σ_j g(u_j*) - (2K - N) ).   (2)
Let us now develop necessary conditions for a state u* to be an equilibrium state of the network.

Theorem 1: For a given equilibrium state u*, every component u_i* of u* can be one of at most three distinct values.
Proof of Theorem 1.
If we look at equation (2), we see that the last term of the right-hand-side expression is independent of i; let us denote this term by H(u*). Therefore, the components u_i* of the equilibrium state u* must be solutions of the equation:

g(u_i*) = [λ/(a+1)] u_i* + H(u*).

Since the sigmoid function g(u) is monotone increasing and λ/(a+1) > 0, the sigmoid and the line [λ/(a+1)] u_i* + H(u*) intersect in at least one point and at most three (see Figure 1). Note that the constant H(u*) can be different for different equilibrium states u*.
equilibrium states u*.
The following theorem shows that the sum of the node outputs is constrained to
being close to 2K - N, as desired.
Theorem 2: If u* is an equilibrium state of (1), then we have:
N
(a+ 1)maxg(ui) < '" g(uJ~) -2K +N < (a+ 1) min g(ui).
u~<o
L..J
u'!'>o
?
j=l
(3)
?
Proof of Theorem 2.
Let us rewrite equation (2) in the following way:

λ u_i* = (a+1) g(u_i*) - ( Σ_j g(u_j*) - (2K - N) ).

Since u_i* and g(u_i*) are of the same sign, the term Σ_j g(u_j*) - (2K - N) can neither be too large (if u_i* > 0) nor too low (if u_i* < 0). Therefore, we must have

Σ_j g(u_j*) - (2K - N) < (a+1) g(u_i*) for all u_i* > 0,
Σ_j g(u_j*) - (2K - N) > (a+1) g(u_i*) for all u_i* < 0,

which yields (3).
Theorem 1 states that the components of an equilibrium state can only be one of at most three distinct values. We will distinguish between two types of equilibrium states, for the purposes of our analysis: those which have one or more components u_i* such that g'(u_i*) ≥ λ/(a+1), which we categorize as type I, and those which do not (type II). We will show in the next section that for a gain G large enough, no equilibrium state of type I is stable.
IV - ASYMPTOTIC STABILITY OF EQUILIBRIUM STATES
We will first derive a necessary condition for an equilibrium state of (1) to be asymptotically stable. Then we will find the stable equilibrium states of the KWTA Network.

IV-1. A NECESSARY CONDITION FOR ASYMPTOTIC STABILITY
An important necessary condition for asymptotic stability is given in the following theorem.

Theorem 3: Given any asymptotically stable equilibrium state u*, at most one of the components u_i* of u* may satisfy:

g'(u_i*) ≥ λ/(a+1).
Proof of Theorem 3.
Theorem 3 is obtained by proving the following three lemmas.

Lemma 1: Given any asymptotically stable equilibrium state u*, we always have for all i and j such that i ≠ j:

λ > a (g'(u_i*) + g'(u_j*))/2 + √( a^2 (g'(u_i*) - g'(u_j*))^2 + 4 g'(u_i*) g'(u_j*) ) / 2.   (4)
Proof of Lemma 1.
System (1) can be linearized around any equilibrium state u*:

d(u - u*)/dt ≈ L(u*)(u - u*), where L(u*) = T · diag(g'(u_1*), ..., g'(u_N*)) - λI.

A necessary and sufficient condition for the asymptotic stability of u* is for L(u*) to be negative definite. A necessary condition for L(u*) to be negative definite is for all 2 x 2 matrices L_ij(u*) of the type

L_ij(u*) = [ a g'(u_i*) - λ     -g'(u_j*)        ]
           [ -g'(u_i*)          a g'(u_j*) - λ  ],   (i ≠ j)

to be negative definite. This results from an infinitesimal perturbation of components i and j only. Any matrix L_ij(u*) has two real eigenvalues. Since the largest one has to be negative, we obtain:

(1/2) ( a g'(u_i*) - λ + a g'(u_j*) - λ + √( a^2 (g'(u_i*) - g'(u_j*))^2 + 4 g'(u_i*) g'(u_j*) ) ) < 0.
Lemma 2: Equation (4) implies:

min( g'(u_i*), g'(u_j*) ) < λ/(a+1).   (5)

Proof of Lemma 2.
Consider the function h of three variables:

h(a, g'(u_i*), g'(u_j*)) = a (g'(u_i*) + g'(u_j*))/2 + √( a^2 (g'(u_i*) - g'(u_j*))^2 + 4 g'(u_i*) g'(u_j*) ) / 2.

If we differentiate h with respect to its third variable g'(u_j*), we obtain:

∂h/∂g'(u_j*) (a, g'(u_i*), g'(u_j*)) = a/2 + [ a^2 g'(u_j*) + (2 - a^2) g'(u_i*) ] / [ 2 √( a^2 (g'(u_i*) - g'(u_j*))^2 + 4 g'(u_i*) g'(u_j*) ) ],

which can be shown to be positive if and only if a > -1. But since |a| < 1, then if g'(u_i*) ≤ g'(u_j*) (without loss of generality), we have:

h(a, g'(u_i*), g'(u_j*)) > h(a, g'(u_i*), g'(u_i*)) = (a+1) g'(u_i*),

which, with (4), yields:

g'(u_i*) < λ/(a+1),

which yields Lemma 2.
Lemma 3: If for all i ≠ j,

min( g'(u_i*), g'(u_j*) ) < λ/(a+1),

then there can be at most one u_i* such that:

g'(u_i*) ≥ λ/(a+1).

Proof of Lemma 3.
Let us assume there exists a pair (u_i*, u_j*) with i ≠ j such that g'(u_i*) ≥ λ/(a+1) and g'(u_j*) ≥ λ/(a+1); then (5) would be violated.
IV-2. STABLE EQUILIBRIUM STATES
From Theorem 3, all stable equilibrium states of type I have exactly one component γ (at least one and at most one) such that g'(γ) ≥ λ/(a+1). Let N_+ be the number of components α with g'(α) < λ/(a+1) and α > 0, and let N_- be the number of components β with g'(β) < λ/(a+1) and β < 0 (note that N_+ + N_- + 1 = N). For a large enough gain G, g(α) and g(β) can be made arbitrarily close to +1 and -1 respectively. Using Theorem 2, and assuming a large enough gain, we obtain: -1 < N_+ - K < 0. N_+ and K being integers, there is therefore no stable equilibrium state of type I.

For the equilibrium states of type II, we have for all i, u_i* = α (> 0) or β (< 0), where g'(α) < λ/(a+1) and g'(β) < λ/(a+1). For a large enough gain, g(α) and g(β) can be made arbitrarily close to +1 and -1 respectively. Using Theorem 2 and assuming a large enough gain, we obtain: -(a+1) < 2(N_+ - K) < (a+1), which yields N_+ = K.
Let us now summarize our results in the following theorem:

Theorem 4: For a large enough gain, the only possible asymptotically stable equilibrium states u* of (1) must have K components equal to α > 0 and N - K components equal to β < 0, with

g(α) = [λ/(a+1)] α + [ K( g(α) - g(β) - 2 ) + N( 1 + g(β) ) ] / (a+1),
g(β) = [λ/(a+1)] β + [ K( g(α) - g(β) - 2 ) + N( 1 + g(β) ) ] / (a+1).   (7)
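The two output levels can be obtained numerically from the fixed-point equations (7). A minimal sketch follows; N, K, a and the gain are chosen arbitrarily for illustration, and tanh is again an assumed sigmoid:

import numpy as np
from scipy.optimize import fsolve

def output_levels(K=2, N=4, a=0.5, G=20.0):
    """Solve Eq. (7) for the stable output levels (alpha > 0, beta < 0)."""
    lam = N - 1 + abs(a)
    g = lambda u: np.tanh(G * u)
    def eqs(v):
        al, be = v
        shared = (K * (g(al) - g(be) - 2) + N * (1 + g(be))) / (a + 1)
        return [g(al) - lam / (a + 1) * al - shared,
                g(be) - lam / (a + 1) * be - shared]
    return fsolve(eqs, [0.5, -0.5])

print(output_levels())   # approximately ( 0.43, -0.43) for these parameters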
Since we are guaranteed to have at least one stable equilibrium state (Hopfield-84), and since any state whose components are a permutation of the components of a stable equilibrium state is clearly a stable equilibrium state, we have:

Theorem 5: There exist at least ( N choose K ) stable equilibrium states as defined in Theorem 4. They correspond to the ( N choose K ) different states obtained by the N! permutations of one stable state with K positive components and N - K negative components.
v - THE DYNAMICS OF THE KWTA NETWORK
Now that we know the characteristics of the stable equilibrium states of the KWTA
Network, we need to show that the KWTA Network will converge to the stable state
which has a > 0 in the positions of the K largest initial components. This can be
seen clearly by observing that for all i ;/; j :
C
d(u' - u?)
'dt J =.>t(ui- uj)+(a+1)(g(Ui)-g(Uj?.
If at some time T, ui(T) = uj(T), then one can show that Vt, Ui(t) = Uj(t).
Therefore, for all i ;/; j, Ui(t) - Uj(t) always keeps the same sign. This leads to the
following theorem.
Theorem 6 (Preservation of order): For all nodes i ≠ j, u_i(0) > u_j(0) implies u_i(t) > u_j(t) for all t > 0.

We shall now summarize the results of the last two sections.
Theorem 7: Given an initial state u(0) and a gain G large enough, the KWTA Network will converge to a stable equilibrium state with K components equal to a positive real number (α > 0) in the positions of the K largest initial components, and N - K components equal to a negative real number (β < 0) in all other N - K positions.

This can be derived directly from Theorems 4, 5 and 6: we know the form of all stable equilibrium states, the order of the initial node states is preserved through time, and there is guaranteed convergence to an equilibrium state.
VI - DISCUSSION
The well-known Winner-Take-All Network is obtained by setting K to 1.

The N/2-Winners-Take-All Network, given a set of N real numbers, identifies which numbers are above or below the median. This task is slightly more complex computationally (≈ O(N log N)) than that of the Winner-Take-All (≈ O(N)). The number of stable states is much larger,

( N choose N/2 ) ≈ 2^N / √(πN/2),

i.e., asymptotically exponential in the size of the network.

Although the number of connection weights is N^2, there exists an alternate implementation of the KWTA Network which has O(N) connections (see Figure 2). The sum of the outputs of all nodes and the external input is computed, then negated and fed back to all the nodes. In addition, a positive self-connection (a + 1) is needed at every node.
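A minimal sketch of one update step under this O(N) wiring, with the same assumed numerical constants as in the earlier simulation sketch:

import numpy as np

def kwta_step_shared(u, K, a=0.5, G=20.0, C=1.0, dt=0.01):
    """One Euler step of Eq. (1) using the O(N) shared-feedback implementation."""
    N = u.size
    lam = N - 1 + abs(a)
    y = np.tanh(G * u)                     # node outputs g(u_i)
    feedback = -(y.sum() - (2 * K - N))    # sum of outputs and input, negated, fed back
    # every node receives the same global feedback plus its self-connection (a + 1)
    return u + (dt / C) * (-lam * u + (a + 1) * y + feedback)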
The analysis was done for a "large enough" gain G. In practice, the critical value of G is λ/(a+1) for the N/2-Winners-Take-All Network, and slightly higher for K ≠ N/2. Also, the analysis was done for an arbitrary value of the self-connection weight a (|a| < 1). In general, if a is close to +1, this will lead to faster convergence and a smaller value of the critical gain than if a is close to -1.
VII - CONCLUSION
The KWTA Network lets all nodes compete until the desired number of winners (K) is obtained. The competition is obtained by using mutual inhibition between all nodes, while the number of winners K is selected by setting all external inputs to 2K - N. This paper illustrates the capability of the continuous Hopfield Network to solve exactly an interesting decision problem, i.e., identifying the K largest of N real numbers.
Acknowledgments
The authors would like to thank John Hopfield and Stephen DeWeerth from the
California Institute of Technology and Marvin Perlman from the Jet Propulsion
Laboratory for insightful discussions about material presented in this paper. Part of
the research described in this paper was performed at the Jet Propulsion Laboratory
under contract with NASA.
References
J.A. Feldman, D.H. Ballard, "Connectionist Models and their properties," Cognitive Science, Vol. 6, pp. 205-254, 1982
S. Grossberg, "Contour Enhancement, Short Term Memory, and Constancies in Reverberating Neural Networks," Studies in Applied Mathematics, Vol. LII (52), No. 3, pp. 213-257, September 1973
S. Grossberg, "Non-Linear Neural Networks: Principles, Mechanisms, and Architectures," Neural Networks, Vol. 1, pp. 17-61, 1988
J.J. Hopfield, "Neurons with graded response have collective computational properties like those of two-state neurons," Proc. Natl. Acad. Sci. USA, Vol. 81, pp. 3088-3092, May 1984
J. Lazzaro, S. Ryckebusch, M.A. Mahowald, C.A. Mead, "Winner-Take-All Networks of O(N) Complexity," in this volume, 1989
R.P. Lippman, B. Gold, M.L. Malpass, "A Comparison of Hamming and Hopfield Neural Nets for Pattern Classification," MIT Lincoln Lab. Tech. Rep. TR-769, 21 May 1987
Figure 1: Intersection of sigmoid and line.
Figure 2: An implementation of the KWTA Network.
623 | 1,570 | The Belief in TAP
Yoshiyuki Kabashima
Dept. of Computational Intelligence & Systems Science
Tokyo Institute of Technology
Yokohama 226, Japan
David Saad
Neural Computing Research Group
Aston University
Birmingham B4 7ET, UK
Abstract
We show the similarity between belief propagation and TAP, for decoding corrupted messages encoded by Sourlas's method. The latter is a special case of the Gallager error-correcting code, where the code word comprises products of K bits selected randomly from the original message. We examine the efficacy of solutions obtained by the two methods for various values of K and show that solutions for K ≥ 3 may be sensitive to the choice of initial conditions in the case of unbiased patterns. Good approximations are obtained generally for K = 2 and for biased patterns in the case of K ≥ 3, especially when Nishimori's temperature is being used.
1 Introduction
Belief networks [1] are diagrammatic representations of joint probability distributions over a set of variables. This set is usually represented by the vertices of
a graph, while arcs between vertices represent probabilistic dependencies between
variables. Belief propagation provides a convenient mathematical tool for iteratively calculating joint probability distributions between variables and has been used
in a variety of cases, most recently in the field of error correcting codes, for decoding
corrupted messages [2] (for a review of graphical models and their use in the context
of error-correcting codes see [3]).
Error-correcting codes provide a mechanism for retrieving the original message after
corruption due to noise during transmission. Of particular interest to the current
paper is an error-correcting code presented by Sourlas [4] which is a special case of
the Gallager codes [5]. The latter have been recently re-discovered by MacKay and
Neal [2] and seem to have a significant practical potential.
In this paper we will examine the similarities between the belief propagation (BP) and TAP approaches, used to decode corrupted messages encoded by Sourlas's method, and compare the solutions obtained by both approaches to the exact results obtained using the replica method [8]. The statistical mechanics approach will then
allow us to draw some conclusions on the efficacy of the TAP/BP approach in the context of error-correcting codes.
The paper is arranged in the following manner: In section 2 we will introduce the
encoding method and describe the decoding task. The Belief Propagation approach
to the decoding process will be introduced in section 3 and will be compared to the
TAP approach for diluted spin systems in section 4. Numerical solutions for various
cases will be presented in section 5 and we will summarize our results and discuss
their implications in section 6.
2 The decoding problem
In a general scenario, a message represented by an N-dimensional binary vector ξ is encoded by a vector J^0 which is then transmitted through a noisy channel with some flipping probability p per bit. The received message J is then decoded to retrieve the original message. Sourlas's code [4] is based on encoded message bits of the form J^0_{i_1, i_2, ..., i_K} = ξ_{i_1} ξ_{i_2} ··· ξ_{i_K}, taking the product of K different message sites for each code word bit.

In the statistical mechanics approach we will attempt to retrieve the original message by exploring the ground state of the following Hamiltonian, which corresponds to the preferred state of the system in terms of 'energy':

H = - Σ_{(i_1,...,i_K)} A_{(i_1,...,i_K)} J_{(i_1,...,i_K)} S_{i_1} ··· S_{i_K} - (F/β) Σ_k S_k,   (1)
where S is an N-dimensional binary vector of dynamical variables and A is a sparse tensor with C unit elements per index (other elements are zero), which determines the components of J^0. The last term on the right is required in the case of sparse (biased) messages and will require assigning a certain value to the additive field F/β, related to the prior belief in the Bayesian framework.
The statistical mechanical analysis can be easily linked to the Bayesian framework [4], in which one focuses on the posterior probability using Bayes theorem, P(S|J) ∝ Π_μ P(J_μ|S) P_0(S), where μ runs over the message components and P_0(S) represents the prior. Knowing the posterior one can calculate the typical retrieved message elements and their alignment, which correspond to the Bayes-optimal decoding. The logarithms of the likelihood and prior terms are directly related to the first and second components of the Hamiltonian (Eq. 1).

One should also note that A_{(i_1,...,i_K)} J_{(i_1,...,i_K)} represents a similar encoding scheme to that of Ref. [2], where a sparse matrix with K non-zero elements per row multiplies the original message and the resulting vector, modulo 2, is transmitted.
Sourlas analyzed this code in the cases of K = 2 and K → ∞, where the ratio C/K → ∞, by mapping them onto the SK [9] and Random Energy [10] models respectively. However, the ratio R = K/C constitutes the code rate, and the scenarios examined by Sourlas therefore correspond to the limited case of a vanishing code rate. The case of finite code rate, which we will consider here, has only recently been analyzed [8].
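To make the encoding concrete, here is a minimal sketch: it draws random K-tuples of message sites (so each site takes part in C couplings only on average, a simplification of the fixed tensor A), forms the code-word bits as K-fold products, and corrupts them with flip probability p. All numerical values are assumptions for illustration.

import numpy as np

rng = np.random.default_rng(0)

def sourlas_encode(xi, K, C):
    """Code word J0 of M = N*C/K bits, each a product of K message bits."""
    N = xi.size
    M = N * C // K                   # code rate R = K / C
    idx = np.array([rng.choice(N, size=K, replace=False) for _ in range(M)])
    return xi[idx].prod(axis=1), idx

def channel(J0, p):
    """Binary symmetric channel: flip each transmitted bit with probability p."""
    return np.where(rng.random(J0.size) < p, -J0, J0)

xi = rng.choice([-1, 1], size=1000)  # unbiased source message
J0, idx = sourlas_encode(xi, K=2, C=4)
J = channel(J0, p=0.1)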
3 Decoding by belief propagation
As our goal of calculating the posterior of the system P(S|J) is rather difficult, we resort to the methods of BP, focusing on the calculation of conditional probabilities when some elements of the system are set to specific values or removed.
The approach adopted in this case, which is quite similar to the practical approach employed in the case of Gallager codes [2], assumes a two-layer system corresponding to the elements of the corrupted message J and the dynamical variables S respectively, defining conditional probabilities which relate elements in the two layers:

q^x_{μl} = P(S_l = x | {J_{ν≠μ}}),
r^x_{μl} = P(J_μ | S_l = x, {J_{ν≠μ}}) = Σ_{{S_{k≠l}}} P(J_μ | S_l = x, {S_{k≠l}}) P({S_{k≠l}} | {J_{ν≠μ}}),   (2)

where the index μ represents an element of the received vector message J, constituted by a particular choice of indices i_1, ..., i_K, which is connected to the corresponding index of S (l in the first equation), i.e., for which the corresponding element A_{(i_1,...,i_K)} is non-zero; the notation {S_{k≠l}} refers to all elements of S, excluding the l-th element, which are connected to the corresponding index of J (μ in this case for the second equation); the index x can take values of ±1. The conditional probabilities q^x_{μl} and r^x_{μl} will enable us, through recursive calculations, to obtain an approximated expression for the posterior.
Employing Bayes rule and the assumption that the dependency of S_l on an element J_ν is factorizable and vice versa, P(S_{l_1}, S_{l_2}, ..., S_{l_K} | {J_{ν≠μ}}) = Π_{k=1}^K P(S_{l_k} | {J_{ν≠μ}}) and P({J_{ν≠μ}} | S_l = x) = Π_{ν≠μ} P(J_ν | S_l = x, {J_{σ≠ν}}), one can rewrite a set of coupled equations for q^+_{μl}, q^-_{μl}, r^+_{μl} and r^-_{μl} of the form

q^x_{μl} = a_{μl} p^x_l Π_{ν≠μ} r^x_{νl}   and   r^x_{μl} = Σ_{{S_{k≠l}}} P(J_μ | S_l = x, {S_{k≠l}}) Π_{k≠l} q^{S_k}_{μk},   (3)

where a_{μl} is a normalizing factor such that q^+_{μl} + q^-_{μl} = 1 and p^x_l = P(S_l = x) are our prior beliefs in the value of the source bits S_l.
This set of equations can be solved iteratively [2] by updating a coupled set of difference equations for δq_{μl} = q^+_{μl} - q^-_{μl} and δr_{μl} = r^+_{μl} - r^-_{μl}, derived for this specific model, making use of the fact that the variables r^x_{μl}, and subsequently the variables q^x_{μl}, can be calculated by exploiting the relation r^±_{μl} = (1 ± δr_{μl})/2 and Eq. (3). At each iteration we can also calculate the pseudo-posterior probabilities

q^x_l = a_l p^x_l Π_μ r^x_{μl},

where a_l are normalizing factors, to determine the current estimated value of S_l.
Two points are worth noting. Firstly, the iterative solution makes use of the normalization r^+_{μl} + r^-_{μl} = 1, which is not derived from the basic probability rules and makes implicit assumptions about the probabilities of obtaining S_l = ±1 for all elements l. Secondly, the iterative solution would have provided the true posterior probabilities q^x_l if the graph connecting the message J and the encoded bits S were free of cycles, i.e., if the graph were a tree with no recurrent dependencies among the variables. The fact that the framework provides adequate practical solutions has only recently been explained [13].
4 Decoding by TAP
We will now show that for this particular problem it is possible to obtain a similar set of equations from the corresponding statistical mechanics framework based on the Bethe approximation [11] or the TAP (Thouless-Anderson-Palmer) approach [12] to diluted systems(1). In the statistical mechanics approach we assign a Boltzmann weight to each set comprising an encoded message bit J_μ and a dynamical vector S,

w_B(J_μ|S) = e^{-β g(J_μ|S)},   (4)

such that the first term of the system's Hamiltonian (Eq. 1) can be rewritten as Σ_μ g(J_μ|S), where the index μ runs over all non-zero sites in the multidimensional tensor A. We will now employ two straightforward assumptions to write a set of coupled equations for the mean field q^{S_l}_{μl} ≡ P(S_l | {J_{ν≠μ}}), which may be identified as the same variable as in the belief network framework (Eq. 2), and the effective Boltzmann weight w_eff(J_μ|S_l, {J_{ν≠μ}}):

1) we assume a mean field behavior for the dependence of the dynamical variables S on a certain realization of the message sites J, i.e., the dependence is factorizable and may be replaced by a product of mean fields;

2) Boltzmann weights (effective) for site S_l are factorizable with respect to J_μ.
The resulting set of equations is of the form

w_eff(J_μ|S_l, {J_{ν≠μ}}) = Tr_{{S_{k≠l}}} w_B(J_μ|S) Π_{k≠l} q^{S_k}_{μk},
q^{S_l}_{μl} = a_{μl} p^{S_l}_l Π_{ν≠μ} w_eff(J_ν|S_l, {J_{σ≠ν}}),   (5)

where a_{μl} is a normalization factor and p^{S_l}_l is our prior knowledge of the source's bias. Replacing the effective Boltzmann weight by a normalized field, which may be identified as the variable r^x_{μl} of Eq. (2), we obtain

r^{S_l}_{μl} = P(S_l | J_μ, {J_{ν≠μ}}) = â_{μl} w_eff(J_μ|S_l, {J_{ν≠μ}}),   (6)

i.e., a set of equations equivalent to Eq. (3). The explicit expressions for the normalization coefficients â_{μl} and a_{μl} are

â_{μl}^{-1} = Tr_{{S}} w_B(J_μ|S) Π_{k≠l} q^{S_k}_{μk}   and   a_{μl}^{-1} = Tr_{S_l} p^{S_l}_l Π_{ν≠μ} w_eff(J_ν|S_l, {J_{σ≠ν}}).   (7)
The somewhat arbitrary use of the differences δq_{μl} = ⟨S_l⟩_q and δr_{μl} = ⟨S_l⟩_r in the BP approach becomes clear from the statistical mechanics description, where they represent the expectation values of the dynamical variables with respect to the fields. The statistical mechanics formulation also provides a partial answer to the successful use of the BP methods on loopy systems, as we consider a finite number of steps on an infinite lattice [14]. However, it does not provide an explanation in the case of small systems, which should be examined using other methods.
The formulation so far has been general; however, in the case of Sourlas's code we can make use of the explicit expression for g to derive the relation between q^{S_l}_{μl}, r^{S_l}_{μl}, δq_{μl} and δr_{μl}, as well as an explicit expression for w_B(J_μ|S, β):

q^{S_l}_{μl} = (1/2)(1 + δq_{μl} S_l),   r^{S_l}_{μl} = (1/2)(1 + δr_{μl} S_l),   (8)

w_B(J_μ|S, β) = (1/2) cosh(βJ_μ) ( 1 + tanh(βJ_μ) Π_{l∈L(μ)} S_l ),   (9)
where L(μ) is the set of all sites of S connected to J_μ, i.e., for which the corresponding element of the tensor A is non-zero. The explicit form of the equations for δq_{μl} and δr_{μl} becomes

δr_{μl} = tanh(βJ_μ) Π_{k∈L(μ)\l} δq_{μk}   and   δq_{μl} = tanh( Σ_{ν∈M(l)\μ} tanh^{-1} δr_{νl} + F ),   (10)

where M(l)\μ is the set of all indices of the tensor J, excluding μ, which are connected to the vector site l; the external field F, which previously appeared in the last term of Eq. (1), is directly related to our prior belief of the message bias:

p^{S_l}_l = (1/2)(1 + tanh(F) S_l).   (11)

We have therefore shown that there is a direct relation between the equations derived from the BP approach and from TAP in this particular case. One should note that the TAP approach allows for the use of finite inverse temperatures β, which is not naturally included in the BP approach.

(1) The terminology in the case of diluted systems is slightly vague. Unlike in the case of fully connected systems, self-consistent equations of diluted systems cannot be derived by the perturbation expansion of the mean field equations with respect to Onsager reaction fields, since these fields are too large in diluted systems. Consequently, the resulting equations are different from those obtained for fully connected systems [12]. We termed our approach TAP, following the convention for the Bethe approximation when applied to disordered systems subject to mean-field-type random interactions.
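The update rules (10) translate directly into code. The sketch below iterates them synchronously on the index list produced by the encoding sketch of Section 2; the clipping of the arctanh arguments, the fixed iteration count and the synchronous schedule are implementation assumptions, and the leave-one-out product assumes no δq is exactly zero. Note that for unbiased messages (F = 0) the initial condition δq = tanh F = 0 is itself a fixed point, so in practice a small random perturbation is needed to leave it.

import numpy as np

def tap_decode(J, idx, N, beta, F=0.0, iters=50):
    """Iterate Eq. (10); dq and dr live on the edges (mu, l) of the bipartite graph."""
    M, K = idx.shape
    dq = np.full((M, K), np.tanh(F))            # initial condition of Section 5
    tJ = np.tanh(beta * J)[:, None]
    h = np.zeros(N)
    for _ in range(iters):
        # dr_{mu l} = tanh(beta J_mu) prod_{k in L(mu)\l} dq_{mu k}
        prod = dq.prod(axis=1, keepdims=True)
        dr = tJ * prod / np.where(dq == 0.0, 1.0, dq)   # leave-one-out product
        atanh_dr = np.arctanh(np.clip(dr, -0.999999, 0.999999))
        h = np.zeros(N)
        np.add.at(h, idx, atanh_dr)             # full field on each message site
        dq = np.tanh(h[idx] - atanh_dr + F)     # cavity field, Eq. (10)
    return np.sign(np.tanh(h + F))              # pseudo-posterior bit estimates

# Nishimori's temperature corresponds to beta = 0.5 * np.log((1 - p) / p) for flip rate p.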
5 Numerical solutions
To examine the efficacy of TAP/BP decoding we used the method for decoding corrupted messages encoded by the Sourlas scheme [4], for which we have previously obtained analytical solutions using the replica method [8]. We solved Eq. (10) iteratively for specific cases, making use of the differences δq_{μl} and δr_{μl} to obtain the values of q^x_l and r^x_{μl} and of the magnetization M.
Numerical solutions of 10 individual runs for each value of the flip rate p, starting from different initial conditions, obtained for the case K = 2 and C = 4, different biases (f = p_ξ = 0.1, 0.5, the probability of a +1 bit in the original message ξ) and temperatures (T = 0.26, T_n), are shown in Fig. 1a. For each run, 20000 code word bits J^0 were generated from a 10000 bit message ξ using a fixed random sparse tensor A. The noise-corrupted code word J was decoded to retrieve the original message ξ.
Initial conditions are set to δr_{μl} = 0 and δq_{μl} = tanh F, reflecting the prior belief; whenever the TAP/BP approach was successful in predicting the theoretical values we observed convergence in most runs corresponding to the ferromagnetic phase, while almost all runs at low temperatures did not converge to a stable solution above the critical flip rate (although the magnetization M did converge, as one may expect). We obtain good agreement between the TAP/BP solutions and the theoretical values calculated using the methods of [8] (diamond symbols and dashed line respectively). The results for biased patterns at T = 0.26, presented in the form of mean values and standard deviations, show a sub-optimal improvement in performance as expected. Obtaining solutions under similar conditions but at Nishimori's temperature, 1/T_n = (1/2) ln[(1 - p)/p] [7], we see that pattern sparsity is exploited optimally, resulting in a magnetization M ≈ 0.8 for high corruption rates, as T_n simulates accurately the loss of information due to channel noise [6, 7]; results for unbiased patterns (not shown) are not affected significantly by the use of Nishimori's temperature.
The replica-based theoretical solutions [8] indicate a profoundly different behaviour for K = 2 in comparison to other K values. We therefore obtained solutions for K = 5 under similar conditions (which are representative of results obtained in other cases of K ≠ 2). The results presented in Fig. 1b, in terms of means and standard deviations of 10 individual runs per flip rate value p, are less encouraging, as the iterative solutions are sensitive to the choice of initial conditions and tend to converge to sub-optimal values unless high sparsity and the appropriate choice of temperature (T_n) force them to the correct values, showing then good agreement with the theoretical results (solid line, see inset). This phenomenon is indicative of the fact that the ground state of the non-biased system is macroscopically degenerate, with multiple equally good ground states.
Figure 1: Numerical solutions for M and different flip rate p. (a) For K = 2, different biases (f = p_ξ = 0.1, 0.5) and temperatures (T = 0.26, T_n). Results for the unbiased patterns are shown as raw data (10 runs per flip rate value p, diamonds), while the theoretical solution is marked by the dashed line. Results for biased patterns are presented by their mean and standard deviation, showing a suboptimal performance as expected for T = 0.26 and an optimal one at Nishimori's temperature T_n. The standard deviation is significantly smaller than the symbol size. (b) Results for the case K = 5 and T = T_n under conditions similar to (a). Also here iterative solutions may generally drift away from the theoretical values when temperatures other than T_n are employed (not shown); using Nishimori's temperature alleviates the problem only in the case of biased messages, and the results are then in close agreement with the theoretical solutions (inset: focus on low p values).
We conclude that the TAP/BP approach may be highly useful in the case of biased patterns but may lead to errors for unbiased patterns and K ≥ 3, and that the use of the appropriate temperature, i.e., Nishimori's temperature, enables one to obtain improved results, in agreement with results presented elsewhere [4, 6, 7].
6 Summary and discussion
We compared the use of BP to that of TAP for decoding corrupted messages encoded by Sourlas's method, to discover that in this particular case the two methods provide a similar set of equations. We then solved the equations iteratively for specific cases and compared the results to those obtained by the replica method. The solutions indicate that the method is particularly useful in the case of biased messages and that using Nishimori's temperature is highly beneficial; solutions obtained using other temperature values may be sub-optimal. For non-sparse messages and K ≥ 3 we may obtain erroneous solutions using these methods.

It would be desirable to explore whether the similarity in the equations derived using TAP and BP is restricted to this particular case or whether there is a more general link between the two methods. Another important question that remains open is the generality of our conclusions on the efficacy of these methods for decoding corrupted messages, as they are currently being applied in a variety of state-of-the-art coding schemes (e.g., [2, 3]). Understanding the limitations of these methods, and the proper way to use them in general, especially in the context of error-correcting codes, may be highly beneficial to practitioners.
Acknowledgment This work was partially supported by the RFTF program of the JSPS
(YK) and by EPSRC grant GR/L19232 (DS).
References
[1] J. Pearl, Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference (Morgan Kaufmann) 1988.
[2] D.J.C. MacKay and R.M. Neal, Elect. Lett., 33, 457 and preprint (1997).
[3] B.J. Frey, Graphical Models for Machine Learning and Digital Communication (MIT Press), 1998.
[4] N. Sourlas, Nature, 339, 693 (1989) and Europhys. Lett., 25, 159 (1994).
[5] R.G. Gallager, IRE Trans. Info. Theory, IT-8, 21 (1962).
[6] P. Rujan, Phys. Rev. Lett., 70, 2968 (1993).
[7] H. Nishimori, J. Phys. C, 13, 4071 (1980) and J. Phys. Soc. of Japan, 62, 1169 (1993).
[8] Y. Kabashima and D. Saad, Europhys. Lett., 45, in press (1999).
[9] D. Sherrington and S. Kirkpatrick, Phys. Rev. Lett., 35, 1792 (1975).
[10] B. Derrida, Phys. Rev. B, 24, 2613 (1981).
[11] H. Bethe, Proc. R. Soc. A, 151, 540 (1935).
[12] D. Thouless, P.W. Anderson and R.G. Palmer, Phil. Mag., 35, 593 (1977).
[13] Y. Weiss, MIT preprint CBCL155 (1997).
[14] D. Sherrington and K.Y.M. Wong, J. Phys. A, 20, L785 (1987).
624 | 1,571 | Classification on Pairwise Proximity Data
Thore Graepel‡, Ralf Herbrich†,
Peter Bollmann-Sdorra†, Klaus Obermayer‡
Technical University of Berlin,
† Statistics Research Group, Sekr. FR 6-9,
‡ Neural Information Processing Group, Sekr. FR 2-1,
Franklinstr. 28/29, 10587 Berlin, Germany
Abstract
We investigate the problem of learning a classification task on data
represented in terms of their pairwise proximities. This representation does not refer to an explicit feature representation of the data
items and is thus more general than the standard approach of using Euclidean feature vectors, from which pairwise proximities can
always be calculated. Our first approach is based on a combined
linear embedding and classification procedure resulting in an extension of the Optimal Hyperplane algorithm to pseudo-Euclidean
data. As an alternative we present another approach based on a
linear threshold model in the proximity values themselves, which is
optimized using Structural Risk Minimization. We show that prior
knowledge about the problem can be incorporated by the choice of
distance measures and examine different metrics w.r.t. their generalization. Finally, the algorithms are successfully applied to protein
structure data and to data from the cat's cerebral cortex. They
show better performance than K-nearest-neighbor classification.
1 Introduction
In most areas of pattern recognition, machine learning, and neural computation it
has become common practice to represent data as feature vectors in a Euclidean
vector space. This kind of representation is very convenient because the Euclidean
vector space offers powerful analytical tools for data analysis not available in other
representations. However, such a representation incorporates assumptions about
the data that may not hold and of which the practitioner may not even be aware.
And - an even more severe restriction - no domain-independent procedures for the construction of features are known [3].
A more general approach to the characterization of a set of data items is to define a proximity or distance measure between data items - not necessarily given as feature vectors - and to provide a learning algorithm with a proximity matrix of a set of training data. Since pairwise proximity measures can be defined on structured objects like graphs, this procedure provides a bridge between the classical and the "structural" approaches to pattern recognition [3]. Additionally, pairwise data occur frequently in empirical sciences like psychology, psychophysics, economics, biochemistry etc., and most of the algorithms developed for this kind of data - predominantly clustering [5, 4] and multidimensional scaling [8, 6] - fall into the realm of unsupervised learning.
In contrast to nearest-neighbor classification schemes [10] we suggest algorithms
which operate on the given proximity data via linear models. After a brief discussion of different kinds of proximity data in terms of possible embeddings, we suggest
how the Optimal Hyperplane (OHC) algorithm for classification [2, 9] can be applied
to distance data from both Euclidean and pseudo-Euclidean spaces. Subsequently,
a more general model is introduced which is formulated as a linear threshold model
on the proximities, and is optimized using the principle of Structural Risk Minimization [9]. We demonstrate how the choice of proximity measure influences the
generalization behavior of the algorithm and apply both algorithms to real-world
data from biochemistry and neuroanatomy.
2 The Nature of Proximity Data
When faced with proximity data in the form of a matrix P = {p_ij} of pairwise proximity values between data items, one idea is to embed the data in a suitable space for visualization and analysis. This is referred to as multidimensional scaling, and Torgerson [8] suggested a procedure for the linear embedding of proximity data. Interpreting the proximities as Euclidean distances in some unknown Euclidean space, one can calculate an inner product matrix H = X^T X w.r.t. the center of mass of the data from the proximities according to [8]

(H)_ij = -(1/2) ( |p_ij|^2 - (1/ℓ) Σ_{m=1}^ℓ |p_mj|^2 - (1/ℓ) Σ_{n=1}^ℓ |p_in|^2 + (1/ℓ^2) Σ_{m,n=1}^ℓ |p_mn|^2 ).   (1)
Let us perform a spectral decomposition H = U D U^T = X^T X and choose D and U such that their columns are sorted in decreasing order of magnitude of the eigenvalues λ_i of H. The embedding in an n-dimensional space is achieved by calculating the first n rows of X = D^{1/2} U^T. In order to embed a new data item characterized by a vector p consisting of its pairwise proximities p_i w.r.t. the previously known data items, one calculates the corresponding inner product vector h using (1) with (H)_ij, p_ij, and p_mj replaced by h_i, p_i, and p_m respectively, and then obtains the embedding x = D^{-1/2} U^T h.
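A minimal sketch of this embedding procedure; sorting the directions by absolute eigenvalue is an implementation choice that anticipates the pseudo-Euclidean case discussed next:

import numpy as np

def inner_products(P):
    """Eq. (1): inner-product matrix H from the distance matrix P (double centering)."""
    P2 = P ** 2
    return -0.5 * (P2
                   - P2.mean(axis=0, keepdims=True)   # (1/l) sum_m p_mj^2
                   - P2.mean(axis=1, keepdims=True)   # (1/l) sum_n p_in^2
                   + P2.mean())                       # (1/l^2) sum_mn p_mn^2

def embed(P, n):
    """First n rows of X = D^(1/2) U^T, directions sorted by |eigenvalue|."""
    lam, U = np.linalg.eigh(inner_products(P))
    order = np.argsort(-np.abs(lam))[:n]
    return np.sqrt(np.abs(lam[order]))[:, None] * U[:, order].T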
The matrix H has negative eigenvalues if the distance data P were not Euclidean. Then the data can be isometrically embedded only in a pseudo-Euclidean or Minkowski space R^{(n+, n-)}, equipped with a bilinear form Φ which is not positive definite. In this case the distance measure takes the form p(x_i, x_j) = √(Φ(x_i - x_j)) = √((x_i - x_j)^T M (x_i - x_j)), where M is any n x n symmetric matrix assumed to have full rank, but not necessarily positive definite. However, we can always find a basis such that the matrix M assumes the form M = diag(I_{n+}, -I_{n-}) with n = n+ + n-, where the pair (n+, n-) is called the signature of the pseudo-Euclidean space [3]. Also in this case (1) serves to reconstruct the symmetric bilinear form, and the embedding proceeds as above with D replaced by D̄, whose diagonal contains the moduli of the eigenvalues of H.
From the eigenvalue spectrum of H the effective dimensionality of the proximity-preserving embedding can be obtained. (i) If there is only a small number of large positive eigenvalues, the data items can be reasonably embedded in a Euclidean space. (ii) If there is a small number of positive and negative eigenvalues of large absolute value, then an embedding in a pseudo-Euclidean space is possible. (iii) If the spectrum is continuous and relatively flat, then no linear embedding is possible in less than ℓ - 1 dimensions.
3 Classification in Euclidean and Pseudo-Euclidean Space
Let the training set S be given by an ℓ x ℓ matrix P of pairwise distances of unknown data vectors x in a Euclidean space, and a target class y_i ∈ {-1, +1} for each data item. Assuming that the data are linearly separable, we follow the OHC algorithm [2] and set up a linear model for the classification in data space,

y(x) = sign(x^T w + b).   (2)

Then we can always find a weight vector w and threshold b such that

y_i (x_i^T w + b) ≥ 1,   i = 1, ..., ℓ.   (3)

Now the optimal hyperplane with maximal margin is found by minimizing ||w||^2 under the constraints (3). This is equivalent to maximizing the Wolfe dual W(α) w.r.t. α,

W(α) = α^T 1 - (1/2) α^T Y X^T X Y α,   (4)

with Y = diag(y) and the ℓ-vector 1. The constraints are α_i ≥ 0, for all i, and 1^T Y α* = 0. Since the optimal weight vector w* can be expressed as a linear combination of training examples,

w* = X Y α*,   (5)

and the optimal threshold b* is obtained by evaluating b* = y_i - x_i^T w* for any training example x_i with α_i* ≠ 0, the decision function (2) can be fully evaluated using inner products between data vectors only. This formulation allows us to learn on the distance data directly.
In the Euclidean case we can apply (1) to the distance matrix P of the training data, obtain the inner product matrix H = X^T X, and introduce it directly - without explicit embedding of the data - into the Wolfe dual (4). The same is true for the test phase, where only the inner products of the test vector with the training examples are needed.
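In practice the dual (4) can be handed to any solver that accepts a precomputed inner-product matrix. The sketch below uses scikit-learn's SVC with kernel="precomputed" and a large soft-margin constant as a stand-in for the hard-margin optimal hyperplane; this is an approximation of, not identical to, the algorithm of [2], and it requires H to be positive semi-definite, which holds in the Euclidean case.

from sklearn.svm import SVC

def ohc_fit(H, y):
    """Train on the inner-product matrix H of Eq. (1); y has labels in {-1, +1}."""
    clf = SVC(kernel="precomputed", C=1e6)   # large C approximates the hard margin
    clf.fit(H, y)
    return clf

# A test item is classified from its vector h of inner products with the training
# items (Eq. (1) applied to its proximity vector): clf.predict(h.reshape(1, -1)).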
In the case of pseudo-Euclidean distance data the inner product matrix H obtained from the distance matrix P via (1) has negative eigenvalues. This means that the corresponding data vectors can only be embedded in a pseudo-Euclidean space R^{(n+, n-)} as explained in the previous section. Also, H cannot serve as the Hessian in the quadratic programming (QP) problem (4). It turns out, however, that the indefiniteness of the bilinear form in pseudo-Euclidean spaces does not forestall linear classification [3]. A decision plane is characterized by the equation x^T M w = 0, as illustrated in Fig. 1. However, Fig. 1 also shows that the same plane can just as well be described by x^T w̄ = 0 - as if the space were Euclidean - where w̄ = M w is simply the mirror image of w w.r.t. the axes of negative signature. For the OHC algorithm this means that if we can reconstruct the Euclidean inner product matrix X^T X from the distance data, we can proceed with the OHC algorithm as usual. H̄ = X^T X is calculated by "flipping" the axes of negative signature, i.e., with D̄ = diag(|λ_1|, ..., |λ_ℓ|), we can calculate H̄ according to

H̄ = U D̄ U^T,   (6)
Figure 1: Plot of a decision line (thick) in a 2D pseudo-Euclidean space with signature (1,1), i.e., M = diag(1, -1). The decision line is described by x^T M w = 0. When interpreted as Euclidean it is at right angles with w̄, which is the mirror image of w w.r.t. the axis x^- of negative signature. In physics this plot is referred to as a Minkowski space-time diagram, where x^+ corresponds to the space axis and x^- to the time axis. The dashed diagonal lines indicate the points x^T M x = 0 of zero length, the light cone.
which now serves as the Hessian matrix for normal OHC classification. Note that H̄ is positive semi-definite, which ensures a unique solution for the QP problem (4).
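The two spectral corrections compared later in Table 1 differ only in their treatment of the negative eigenvalues of H; a minimal sketch:

import numpy as np

def corrected_hessian(H, mode="flip"):
    """'flip' mirrors Eq. (6); 'cut' discards the directions of negative signature."""
    lam, U = np.linalg.eigh(H)
    lam = np.abs(lam) if mode == "flip" else np.maximum(lam, 0.0)
    return (U * lam) @ U.T        # U diag(lam) U^T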
4 Learning a Linear Decision Function in Proximity Space
In order to cope with general proximity data (case (iii) of Section 2), let the training set S be given by an ℓ x ℓ proximity matrix P whose elements p_ij = p(x_i, x_j) are the pairwise proximity values between data items x_i, i = 1, ..., ℓ, and a target class y_i ∈ {-1, +1} for each data item. Let us assume that the proximity values satisfy reflexivity, p_ii = 0 for all i, and symmetry, p_ij = p_ji for all i, j. We can make a linear model for the classification of a new data item x represented by a vector of proximities p = (p_1, ..., p_ℓ)^T, where p_i = p(x, x_i) are the proximities of x w.r.t. the items x_i in the training set,

y(x) = sign(p^T w + b).   (7)

Comparing (7) to (2) we note that this is equivalent to using the vector of proximities p as the feature vector x characterizing data item x. Consequently, the OHC algorithm from the previous section can be used to learn a proximity model when x is replaced by p in (2), X^T X is replaced by P^2 in the Wolfe dual (4), and the columns p_l of P serve as the training data.

Note that the formal correspondence does not imply that the columns of the proximity matrix are Euclidean feature vectors as used in the SV setting. We merely consider a linear threshold model on the proximities of a data item to all the training data items. Since the Hessian of the QP problem (4) is the square of the proximity matrix, it is always at least positive semi-definite, which guarantees a unique solution of the QP problem. Once the optimal coefficients α_i* have been found, a test data item can be classified by determining its proximities p_i from the elements x_i of the training set and by using conditions (2) together with (5) for its classification.
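Again with a large-C soft-margin solver standing in as an approximation of the hard-margin program, the proximity model amounts to a few lines; here P is the symmetric ℓ x ℓ training proximity matrix:

from sklearn.svm import SVC

def proximity_fit(P, y):
    """Linear threshold model on proximities: the QP Hessian is P @ P = P^2."""
    clf = SVC(kernel="precomputed", C=1e6)
    clf.fit(P @ P, y)             # Gram matrix of the proximity vectors p_l
    return clf

# A test item with proximity vector p is scored via its inner products with the
# training columns, p @ P (P symmetric): clf.predict((p @ P).reshape(1, -1)).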
5 Metric Proximities
Let us consider two examples in order to see what learning on pairwise metric data amounts to. The first example is the minimalistic 0-1 metric, which for two objects x_i and x_j is defined as follows:

p_0(x_i, x_j) = 0 if x_i = x_j, and 1 otherwise.   (8)
Figure 2: Decision functions in a simple two-class classification problem for different Minkowski metrics. The algorithm described in Sect. 4 was applied with (a) the city-block metric (r = 1), (b) the Euclidean metric (r = 2), and (c) the maximum metric (r → ∞). The three metrics result in considerably different generalization behavior, and use different Support Vectors (circled).
The corresponding ℓ x ℓ proximity matrix P_0 has full rank, as can be seen from its non-vanishing determinant det(P_0) = (-1)^{ℓ-1} (ℓ - 1). From the definition of the 0-1 metric it is clear that every data item x not contained in the training set is represented by the same proximity vector p = 1, and will be assigned to the same class. For the 0-1 metric the QP problem (4) can be solved analytically by matrix inversion, and using P_0^{-1} = (ℓ - 1)^{-1} 1 1^T - I we obtain for the classification the majority rule

y(x) = sign( Σ_{i=1}^ℓ y_i ).   (9)

This result means that each new data item is assigned to the majority class of the training sample, which is - given the available information - the Bayes optimal decision. This example demonstrates how the prior information - in the case of the 0-1 metric the minimal information of identity - is encoded in the chosen distance measure.
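Both the inverse formula and the resulting majority rule are easy to check numerically; a minimal sketch with an arbitrary small ℓ and toy labels:

import numpy as np

l = 7
P0 = 1.0 - np.eye(l)                          # 0-1 metric proximity matrix
P0_inv = np.ones((l, l)) / (l - 1) - np.eye(l)
assert np.allclose(P0 @ P0_inv, np.eye(l))    # P0^{-1} = (l-1)^{-1} 1 1^T - I

y = np.array([1, 1, 1, -1, -1, 1, -1])        # toy labels
print(np.sign(y.sum()))                       # majority class, as in Eq. (9)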
As an easy-to-visualize example of metric distance measures on vectors x E
us consider the Minkowski r-metrics defined for r 2: 1 as
~n
let
(10)
For r = 2 the Minkowski metric is equivalent to the Euclidean distance. The case r = 1 corresponds to the so-called city-block metric, in which the distance is given by the sum of absolute differences for each feature. On the other extreme, the maximum norm, r → ∞, takes only the largest absolute difference in feature values as the distance between objects. Note that with increasing r more weight is given to the larger differences in feature values, and that in the literature on multidimensional scaling [1] Minkowski metrics have been used to examine the dominance of features in human perception. Using the Minkowski metrics for classification in a toy example, we observed that different values of r lead to very different generalization behavior on the same set of data points, as can be seen in Fig. 2. Since there is no a priori reason to prefer one metric over the other, using a particular metric is equivalent to incorporating prior knowledge into the solution of the problem.
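A small Python sketch of eq. (10) makes the role of r concrete (the sample points are arbitrary and serve only as an illustration):

import numpy as np

def minkowski(x, y, r):
    # Minkowski r-metric: r=1 city-block, r=2 Euclidean, r=inf maximum metric
    d = np.abs(np.asarray(x, float) - np.asarray(y, float))
    return d.max() if np.isinf(r) else (d ** r).sum() ** (1.0 / r)

x, y = [0.0, 3.0], [4.0, 0.0]
print(minkowski(x, y, 1))       # 7.0: sum of absolute differences
print(minkowski(x, y, 2))       # 5.0: Euclidean distance
print(minkowski(x, y, np.inf))  # 4.0: largest absolute difference only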
Class   ORC-cut-off  ORC-flip-axis  ORC-proximity  1-NN  2-NN  3-NN  4-NN  5-NN
Cat Cortex:
A       3.08         3.08           3.08           5.82  6.09  5.29  6.45  5.55
V       4.62         1.54           4.62           6.00  4.46  2.29  5.14  2.75
SS      6.15         4.62           3.08           6.09  7.91  4.18  3.68  2.72
FL      3.08         3.08           1.54           6.74  5.09  4.71  5.17  5.29
Protein:
H-α     0.91         0.91           0.45           1.65  2.01  2.14  2.46  1.65
H-β     4.01         4.01           3.60           3.66  5.27  6.34  5.13  5.09
M       0.45         0.45           0.45           0.00  0.00  0.00  0.00  0.00
GR      0.00         0.00           0.00           2.01  3.44  2.68  4.87  4.11
Table 1: Classification results for Cat Cortex and Protein data. Bold numbers
indicate best results.
6 Real-World Proximity Data
In the numerical experiments we focused on two real-world data sets, which are both
given in terms of a proximity matrix P and class labels y for each data item. The
data set called "cat cortex" consists of a matrix of connection strengths between
65 cortical areas of the cat. The data was collected by Scannell [7] from text
and figures of the available anatomical literature, and the connections are assigned proximity values p as follows: self-connection (p = 0), strong and dense connection (p = 1), intermediate connection (p = 2), weak connection (p = 3), and absent or unreported connection (p = 4). From functional considerations the areas can be
assigned to four different regions: auditory (A), visual (V), somatosensory (SS),
and frontolimbic (FL). The classification task is to discriminate between these four
regions, each time one against the three others.
The second data set consists of a proximity matrix from the structural comparison of 224 protein sequences based upon the concept of evolutionary distance. The majority of these proteins can be assigned to one of four classes of globins: hemoglobin-α (H-α), hemoglobin-β (H-β), myoglobin (M), and heterogeneous globins (GR). The classification task is to assign proteins to one of these classes, one against the rest.
We compared three different procedures for the described two-class classification problems, performing leave-one-out cross-validation for the "cat cortex" dataset and 10-fold cross-validation for the "protein" data set to estimate the generalization error. Table 1 shows the results. ORC-cut-off refers to the simple method of making the inner product matrix H positive semi-definite by neglecting projections to those eigenvectors with negative eigenvalues. ORC-flip-axis flips the axes of negative signature as described in (6) and thus preserves the information contained in those directions for classification. ORC-proximity, finally, refers to the model linear in the proximities as introduced in Section 4. It can be seen that ORC-proximity shows a better generalization than ORC-flip-axis, which in turn performs slightly better than ORC-cut-off. This is especially the case on the cat cortex data set, whose inner product matrix H has negative eigenvalues. For comparison, the lower part of Table 1 shows the corresponding cross-validation results for K-nearest-neighbor, which is a natural choice to use, because it only needs the pairwise proximities to determine the training data to participate in the voting. The presented algorithms ORC-flip-axis and ORC-proximity perform consistently better than K-nearest-neighbor, even when the value of K is optimally chosen.
7 Conclusion and Future Work
In this contribution we investigated the nature of proximity data and suggested ways for performing classification on them. Due to the generality of the proximity approach we expect that many other problems can be fruitfully cast into this framework. Although we focused on classification problems, regression can be considered on proximity data in an analogous way. Noting that Support Vector kernels and covariance functions for Gaussian processes are similarity measures for vector spaces, we see that this approach has recently gained a lot of popularity. However, one problem with pairwise proximities is that their number scales quadratically with the number of objects under consideration. Hence, for large-scale practical applications the problems of missing data and active data selection for proximity data will be of increasing importance.
Acknowledgments
We thank Prof. U. Kockelkorn for fruitful discussions. We also thank S. Gunn for providing his Support Vector implementation. Finally, we are indebted to M. Vingron and T. Hofmann for providing the protein data set. This project was funded by the Technical University of Berlin via the Forschungsinitiativprojekt FIP 13/41.
References
[1] I. Borg and J. Lingoes. Multidimensional Similarity Structure Analysis, volume 13 of Springer Series in Statistics. Springer-Verlag, Berlin, Heidelberg, 1987.
[2] B. Boser, I. Guyon, and V. N. Vapnik. A training algorithm for optimal margin classifiers. In Proceedings of the Fifth Annual Workshop on Computational Learning Theory, pages 144-152, 1992.
[3] L. Goldfarb. Progress in Pattern Recognition, volume 2, chapter 9: A New Approach To Pattern Recognition, pages 241-402. Elsevier Science Publishers, 1985.
[4] T. Graepel and K. Obermayer. A stochastic self-organizing map for proximity data. Neural Computation (accepted for publication), 1998.
[5] T. Hofmann and J. Buhmann. Pairwise data clustering by deterministic annealing. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(1):1-14, 1997.
[6] H. Klock and J. M. Buhmann. Multidimensional scaling by deterministic annealing. In M. Pelillo and E. R. Hancock, editors, Energy Minimization Methods in Computer Vision and Pattern Recognition, volume 1223, pages 246-260, Berlin, Heidelberg, 1997. Springer-Verlag.
[7] J. W. Scannell, C. Blakemore, and M. P. Young. Analysis of connectivity in the cat cerebral cortex. The Journal of Neuroscience, 15(2):1463-1483, 1995.
[8] W. S. Torgerson. Theory and Methods of Scaling. Wiley, New York, 1958.
[9] V. Vapnik. The Nature of Statistical Learning. Springer-Verlag, Berlin, Heidelberg, Germany, 1995.
[10] D. Weinshall, D. W. Jacobs, and Y. Gdalyahu. Classification in non-metric space. In Advances in Neural Information Processing Systems, volume 11, 1999. In press.
625 | 1,572 | Information Maximization in Single Neurons
Martin Stemmler and Christof Koch
Computation and Neural Systems Program
Caltech 139-74
Pasadena, CA 91125
Email: stemmler@klab.caltech.edu, koch@klab.caltech.edu
Abstract
Information from the senses must be compressed into the limited range
of firing rates generated by spiking nerve cells. Optimal compression
uses all firing rates equally often, implying that the nerve cell's response
matches the statistics of naturally occurring stimuli. Since changing
the voltage-dependent ionic conductances in the cell membrane alters
the flow of information, an unsupervised, non-Hebbian, developmental
learning rule is derived to adapt the conductances in Hodgkin-Huxley
model neurons. By maximizing the rate of information transmission,
each firing rate within the model neuron's limited dynamic range is used
equally often .
An efficient neuronal representation of incoming sensory information should take advantage of the regularity and scale invariance of stimulus features in the natural world. In
the case of vision, this regularity is reflected in the typical probabilities of encountering
particular visual contrasts, spatial orientations, or colors [1]. Given these probabilities, an
optimized neural code would eliminate any redundancy, while devoting increased representation to commonly encountered features.
At the level of a single spiking neuron, information about a potentially large range of stimuli
is compressed into a finite range of firing rates, since the maximum firing rate of a neuron is
limited. Optimizing the information transmission through a single neuron in the presence
of uniform, additive noise has an intuitive interpretation: the most efficient representation
of the input uses every firing rate with equal probability. An analogous principle for nonspiking neurons has been tested experimentally by Laughlin [2], who matched the statistics
Figure 1: The model neuron contains two compartments to represent the cell's soma and
dendrites. To maximize the information transfer, the parameters for six calcium and six
potassium voltage-dependent conductances in the dendritic compartment are iteratively adjusted, while the somatic conductances responsible for the cell's spiking behavior are held
fixed.
of naturally occurring visual contrasts to the response amplitudes of the blowfly's large monopolar cell.
From a theoretical perspective, the central question is whether a neuron can "learn" the
best representation for natural stimuli through experience. During neuronal development,
the nature and frequency of incoming stimuli are known to change both the anatomical
structure of neurons and the distribution of ionic conductances throughout the cell [3]. We
seek a guiding principle that governs the developmental timecourse of the Na+, Ca2+ and
K+ conductances in the somatic and dendritic membrane by asking how a neuron would
set its conductances to transmit as much information as possible. Spiking neurons must
associate a range of different inputs to a set of distinct responses-a more difficult task than
keeping the firing rate or excitatory postsynaptic potential (EPSP) amplitude constant under
changing conditions, two tasks for which learning rules that change the voltage-dependent
conductances have recently been proposed [4, 5]. Learning the proper representation of
stimulus information goes beyond simply correlating input and output; an alternative to the
classic postulate of Hebb [6], in which synaptic learning in networks is a consequence of
correlated activity between pre- and postsynaptic neurons, is required for such learning in
a single neuron.
To explore the feasibility of learning rules for information maximization, a simplified
model of a neuron consisting of two electrotonic compartments, illustrated in fig. 1, was
constructed. The soma (or cell body) contains the classic Hodgkin-Huxley sodium and
delayed rectifier potassium conductances, with the addition of a transient potassium "A" current and an effective calcium-dependent potassium current. The soma is coupled
through an effective conductance G to the dendritic compartment, which contains the
synaptic input conductance and three adjustable calcium and three adjustable potassium
conductances.
The dynamics of this model are given by Hodgkin-Huxley-like equations that govern the membrane potential and a set of activation and inactivation variables, m_i and h_i, respectively. In each compartment of the neuron, the voltage V evolves as
C dV/dt = Σ_i g_i m_i^{p_i} h_i^{q_i} (E_i − V),    (1)
where C is the membrane capacitance, g_i is the (peak) value of the i-th conductance, p_i and q_i are integers, and E_i are the ion-specific reversal potentials. The variables h_i and m_i obey first-order kinetics of the type dm/dt = (m_∞(V) − m)/τ(V), where m_∞(V) denotes the steady-state activation when the voltage is clamped to V and τ(V) is a voltage-dependent time constant.
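As a sketch of these first-order kinetics, the following Python fragment integrates a single activation variable under voltage clamp. The Boltzmann midpoint and slope here are placeholder values; only the 5 msec time constant matches the dendritic conductances of the model described below.

import numpy as np

def integrate_gate(V_trace, m_inf, tau, m0=0.0, dt=0.01):
    # forward-Euler integration of dm/dt = (m_inf(V) - m) / tau(V)
    m, out = m0, []
    for V in V_trace:
        m += dt * (m_inf(V) - m) / tau(V)
        out.append(m)
    return np.array(out)

m_inf = lambda V: 1.0 / (1.0 + np.exp(-(V + 40.0) / 5.0))  # assumed Boltzmann
tau = lambda V: 5.0                                         # 5 msec, constant
V_clamp = np.full(2000, -20.0)                              # 20 msec voltage step
print(integrate_gate(V_clamp, m_inf, tau)[-1])              # approaches m_inf(-20)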
All parameters for the somatic compartment, with the exception of the adaptation conductance, are given by the standard model of Connor et al. (1977) [7]. This choice of somatic spiking conductances allows spiking to occur at arbitrarily low firing rates. Adaptation is modeled by a calcium-dependent potassium conductance that scales with the firing rate, such that the conductance has a mean value of 34 mS/cm^2 Hz. The calcium and potassium conductances in the dendritic compartment have simple activation and inactivation functions described by distinct Boltzmann functions. Together with the peak conductance values, the midpoint voltages V_{1/2} and slopes s of these Boltzmann functions adapt to the statistics of stimuli. For simplicity, all time constants for the dendritic conductances are set to a constant 5 msec. For additional details and parameter values, see http://www.klab.caltech.edu/infomax.
Hodgkin-Huxley models can exhibit complex behaviors on several timescales, such as firing patterns consisting of "bursts"-sequences of multiple spikes interspersed with periods
of silence. We will, however, focus on models of regularly spiking cells that adapt to
a sustained stimulus by spiking periodically. To quantify how much information about a
continuous stimulus variable x the time-averaged firing rate f of a regularly spiking neuron
carries, we use a lower bound [8] on the mutual information I(f; x) between the stimulus x and the firing rate f:
I_LB(f; x) = −∫ ln( p(f) σ_f(x) ) p(x) dx − ln √(2πe),    (2)
where p(f) is the probability, given the set of all stimuli, of a firing rate f, and σ_f(x) is the variance of the firing rate in response to a given stimulus x.
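For intuition, the bound in eq. (2) can be estimated numerically from samples. The sketch below is our own construction, with a histogram standing in for p(f); the illustrative numbers merely echo the simulations described later (inputs of mean 141 nS, rates spanning roughly 22-59 Hz).

import numpy as np

def info_lower_bound(stimuli, rate_fn, sigma_fn, bins=30):
    # stimulus average of -ln(p(f) * sigma_f(x)) minus ln(sqrt(2*pi*e))
    f = rate_fn(stimuli)
    hist, edges = np.histogram(f, bins=bins, density=True)
    idx = np.clip(np.digitize(f, edges) - 1, 0, bins - 1)
    p_f = np.maximum(hist[idx], 1e-12)
    return np.mean(-np.log(p_f * sigma_fn(stimuli))) - 0.5 * np.log(2 * np.pi * np.e)

x = np.random.default_rng(1).normal(141.0, 25.0, size=5000)    # conductances, nS
rate = lambda g: 22.0 + 37.0 / (1.0 + np.exp(-(g - 141.0) / 15.0))
sigma = lambda g: np.full_like(g, 2.0)                         # rate spread, Hz
print(info_lower_bound(x, rate, sigma), "nats")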
To maximize the information transfer, does a neuron need to "know" the arrival rates of
photons impinging on the retina or the frequencies of sound waves hitting the ear's tympanic membrane? Since the ion channels in the dendrites only sense a voltage and not the
stimulus directly, the answer to this question, fortunately, is no: maximizing the information between the firing rate f and the dendritic voltage Vdend(t) is equivalent to maximizing
the information about the stimuli, as long as we can guarantee that the transformation from
stimuli to firing rates is always one-to-one.
Since a neuron must be able to adapt to a changing environment and shifting intra- and extracellular conditions [4], learning and relearning of the proper conductance parameters, such as the channel densities, should occur on a continual basis. An alphabet zoo of different calcium (Ca^2+) conductances in neurons of the central nervous system, denoted 'L', 'N', 'P', 'R', and 'T' conductances, reflects a wealth of different voltage and pharmacological properties [9], matching an equal diversity of potassium (K+) channels. No fewer than ten different genes code for various Ca^2+ subunits, allowing for a combinatorial number of functionally different channels [10]. A self-regulating neuron should be able to express different ionic channels and insert them into the membrane. In information maximization, the parameters for each of the conductances, such as the number of channels, are continually modified in the direction that most increases the mutual information I[f; V_dend(t)] each time a stimulus occurs.
The standard approach to such a problem is known as stochastic approximation of the mutual information, which was recently applied to feedforward neural networks for blind source sound separation by Bell and Sejnowski [11]. We define a "free energy" F = E(f) − β^{−1} I_LB(f; x), where E(f) incorporates constraints on the peak or mean firing rate f, and β is a Lagrangean parameter that balances the mutual information and constraint satisfaction. Stochastic approximation then consists of adjusting the parameter r of a voltage-dependent conductance by
Δr = −η ∂F/∂r    (3)
whenever a stimulus x is presented; this will, by definition, occur with probability p(x).
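A schematic of this stochastic approximation loop is sketched below in Python. The gradient routine is left abstract (in the model it reduces to the local, voltage-dependent terms of eq. 4); only the decaying learning-rate schedule, taken from the caption of Fig. 2 below, and the 200 msec stimulus duration come from the text.

import numpy as np

def stochastic_approximation(theta, draw_stimulus, dF_dtheta,
                             eta0=4.3e-3, tau_learn=4.4, steps=3000):
    # after each stimulus, step the parameter down the free-energy gradient
    t = 0.0
    for _ in range(steps):
        x = draw_stimulus()
        eta = eta0 * np.exp(-t / tau_learn)   # decaying learning rate
        theta -= eta * dF_dtheta(theta, x)    # eq. (3)
        t += 0.2                              # 200 msec per stimulus
    return theta

# toy check: with F(theta, x) = (theta - x)^2 / 2, the update nudges theta
# toward the stimulus mean before the decaying rate freezes learning
rng = np.random.default_rng(1)
print(stochastic_approximation(0.0, lambda: rng.normal(1.0, 0.1),
                               lambda th, x: th - x))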
In the model, the stimuli are taken to be maintained synaptic input conductances g_syn lasting 200 msec and drawn randomly from a fixed, continuous probability distribution. After an initial transient, we assume that the voltage waveform V_dend(t) settles into a simple periodic limit cycle as dictated by the somatic spiking conductances. We thus posit the existence of the invertible composition of maps, such that the input conductance g_syn maps onto a periodic voltage waveform V_dend(t) of period T, from thence onto an averaged current ⟨I⟩ = (1/T) ∫_0^T I(t) dt to the soma, and then finally onto an output firing rate f. The last element in this chain of transformations, the steady-state current-discharge
M. Stemmler and C. Koch
164
.f'
r.I'J
c:
: input probability - -----.
: optimal firing rate - - - - - - .
10 Llearnedfiringrate--__ _
?
I
~
~
,
~
I
I
~ 0.6 ~
...'
0.4 ,~
Q..
s. : / _..5 0.2 ,
:r:
"
,
"
"
50 :::;'
, ,
, "
,
.g:
N
, ..
I
~'
';, 0.8
60
" " ",
,,
, ''"
~ .~ w
s..,.", 116
m
n5
"
,,
~
~/ -
,
>'
?50
o
Ql
U
\\
100
200
Time ImW'C1
40 '0
~
~
30
g;c
?c
ti:
20
0.0 ,' ---'-----'-----'--- --'-----'----'
100
120
140
160
180
Synaptic Input Conductance [nSl
Figure 2: The inputs to the model are synaptic conductances, drawn randomly from a Gaussian distribution of mean 141 nS and standard deviation of 25 nS, with the restriction that the conductance be non-negative (dot-dashed line). The learning rule in eq. 4 (maximizing the information in the cell's firing rate) was used to adjust the peak conductances, midpoint voltages, and slopes of the "dendritic" Ca^2+ and K+ conductances over the course of 10.9 (simulated) minutes. The learning rate decayed with time: η(t) = η_0 exp(−t/τ_learning), with η_0 = 4.3 × 10^{−3} and τ_learning = 4.4 sec. The optimal firing rate response curve (dotted line) is asymptotically proportional to the cumulative probability distribution of inputs. The inset illustrates the typical timecourse of the dendritic voltage in the trained model.
relationship at the soma, can be predicted from the theory of dynamical systems (see http://www.klab.caltech.edu/~stemmler for details).
The voltage and the conductances are nonlinearly coupled: the conductances affect the voltage, which, in turn, sets the conductances. Since the mutual information is a global property of the stimulus set, the learning rule for any one conductance would depend on the values of all other conductances, were it not for the nonlinear feedback loop between voltages and conductances. This nonlinear coupling must satisfy the strict physical constraint of charge conservation: when the neuron is firing periodically, the average current injected by the synaptic and voltage-dependent conductances must equal the average current discharged by the neuron. Remarkably, charge conservation results in a learning mechanism that is strictly local, so that the mechanism for changing one conductance does not depend on the values of any other conductances.
For instance, information maximization predicts that the peak calcium or potassium conductance g_i changes by the amount given in eq. 4 each time a stimulus is presented. Here η(t) is a time-dependent learning rate, the angular brackets indicate an average over the stimulus duration, and c(⟨V_dend⟩) is a simple function that is zero for most commonly encountered voltages, equal to a positive constant below some minimum, and equal to a negative constant above some maximum voltage. This
[Figure 3 plot: probability of each firing rate for the original, optimal, and learned distributions, versus firing rate of cell (Hz).]
Figure 3: The probability distribution of firing rates before and after adaptation of voltage-dependent conductances. Learning shifts the distribution from a peaked distribution to a much flatter one, so that the neuron uses each firing rate within the range [22, 59] Hz equally often in response to randomly selected synaptic inputs.
function represents the constraint on the maximum and minimum firing rate, which sets the limit on the neuron's dynamic range. A constraint on the mean firing rate implies that c(⟨V_dend⟩) is simply a negative constant for all suprathreshold voltages. Under this constraint, the optimal distribution of firing rates becomes exponential (not shown). This latter case corresponds to transmitting as much information as possible in the rate while firing as little as possible.
Given a stimulus x, the dominant term ∂/∂V(t) ⟨m_i h_i (E_i − V)⟩ of eq. 4 changes those conductances that increase the slope of the firing rate response to x. A higher slope means that more of the neuron's limited range of firing rates is devoted to representing the stimulus x and its immediate neighborhood. Since the learning rule is democratic yet competitive, only the most frequent inputs "win" and thereby gain the largest representation in the output firing rate.
In Fig. 2, the learning rule of eq. 4, generalized to also change the midpoint voltage and steepness of the activation and inactivation functions, has been used to train the model neuron as it responds to random, 200 msec long amplitude modulations of a synaptic input conductance to the dendritic compartment. The cell "learns" the statistical structure of the input, matching its adapted firing rate to the cumulative distribution function of the conductance inputs. The distribution of firing rates shifts from a peaked distribution to a much flatter one, so that all firing rates are used nearly equally often (Fig. 3). The information in the firing rate increases by a factor of three to 10.7 bits/sec, as estimated by adding a 5 msec, Gaussian-distributed noise jitter to the spike times.
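The matched response of Fig. 2 is, in effect, histogram equalization. A short sketch (our own; the input statistics and the [22, 59] Hz range are taken from the figures) shows that mapping inputs through their cumulative distribution uses every firing rate about equally often:

import numpy as np

rng = np.random.default_rng(2)
inputs = np.sort(rng.normal(141.0, 25.0, 20000))   # synaptic conductances, nS
f_min, f_max = 22.0, 59.0                          # dynamic range, Hz

def optimal_rate(g):
    # rate proportional to the cumulative input distribution (equalization)
    cdf = np.searchsorted(inputs, g) / inputs.size
    return f_min + (f_max - f_min) * cdf

test = rng.normal(141.0, 25.0, 5000)
hist, _ = np.histogram(optimal_rate(test), bins=10, range=(f_min, f_max))
print(hist)   # roughly flat: each firing rate is used about equally often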
Changing how tightly the stimulus amplitudes are clustered around the mean will increase
or decrease the slope of the firing rate response to input, without necessarily changing
the average firing rate. Neuronal systems are known to adapt not only to the mean of
the stimulus intensity, but also to the variance of the stimulus [12]. We predict that such
adaptation to stimulus variance will occur not just at the level of networks of neurons, but
also at the single cell level.
While the detailed substrate for maximizing the information at both the single cell and network level awaits experimental elucidation, the terms in the learning rule of eq. 4 have simple biophysical correlates: the derivative term, for instance, is reflected in the stochastic flicker of ion channels switching between open and closed states. The transitions between simple open and closed states will occur at a rate proportional to (∂/∂V m(V))^γ in equilibrium, where the exponent γ is 1/2 or 1, depending on the kinetic model. To change the information transfer properties of the cell, a neuron could use state-dependent phosphorylation of ion channels or gene expression of particular ion channel subunits, possibly mediated by G-protein initiated second messenger cascades, to modify the properties of voltage-dependent conductances. The tools required to adaptively compress information from the senses are thus available at the subcellular level.
References
[1] D. L. Ruderman, Network 5(4), 517 (1995); R. J. Baddeley and P. J. B. Hancock, Proc. Roy. Soc. B 246, 219 (1991); J. J. Atick, Network 3, 213 (1992).
[2] S. Laughlin, Z. Naturforsch. 36c, 910 (1981).
[3] Purves, D. Neural Activity and the Growth of the Brain (Cambridge University Press, NY, 1994); X. Gu and N. C. Spitzer, Nature 375, 784 (1995).
[4] G. LeMasson, E. Marder, and L. F. Abbott, Science 259, 1915 (1993).
[5] A. J. Bell, Neural Information Processing Systems 4, 59 (1992).
[6] D. O. Hebb, The Organization of Behavior (Wiley, New York, 1949).
[7] J. A. Connor, D. Walter, R. McKown, Biophys. J. 18, 81 (1977).
[8] R. B. Stein, Biophys. J. 7, 797 (1967).
[9] R. B. Avery and D. Johnston, J. Neurosci. 16, 5567 (1996); F. Helmchen, K. Imoto, and B. Sakmann, Biophys. J. 70, 1069 (1996).
[10] F. Hofmann, M. Biel, and V. Flockerzi, Ann. Rev. Neurosci. 17, 399 (1994).
[11] Y. Z. Tsypkin, Adaptation and Learning in Automatic Systems (Academic Press, NY, 1971); R. Linsker, Neural Comp. 4, 691 (1992); A. J. Bell and T. J. Sejnowski, Neural Comp. 7, 1129 (1995).
[12] S. M. Smirnakis et al., Nature 386, 69 (1997).
626 | 1,573 | Learning Instance-Independent Value Functions
to Enhance Local Search
Robert Moll Andrew G. Barto Theodore J. Perkins
Department of Computer Science
University of Massachusetts, Amherst, MA 01003
Richard S. Sutton
AT&T Shannon Laboratory, 180 Park Avenue, Florham Park, NJ 07932
Abstract
Reinforcement learning methods can be used to improve the performance
of local search algorithms for combinatorial optimization by learning
an evaluation function that predicts the outcome of search. The evaluation function is therefore able to guide search to low-cost solutions
better than can the original cost function. We describe a reinforcement
learning method for enhancing local search that combines aspects of previous work by Zhang and Dietterich (1995) and Boyan and Moore (1997,
Boyan 1998). In an off-line learning phase, a value function is learned
that is useful for guiding search for multiple problem sizes and instances.
We illustrate our technique by developing several such functions for the
Dial-A-Ride Problem. Our learning-enhanced local search algorithm exhibits an improvement of more then 30% over a standard local search
algorithm.
1 INTRODUCTION
Combinatorial optimization is of great importance in computer science, engineering, and operations research. We investigated the use of reinforcement learning (RL) to enhance traditional local search optimization (hillclimbing). Since local search is a sequential decision process, RL can be used to improve search performance by learning an evaluation function that predicts the outcome of search and is therefore able to guide search to low-cost solutions better than can the original cost function.
Three approaches to using RL to improve combinatorial optimization have been described
in the literature. One is to learn a value function over multiple search trajectories of a single
problem instance. As the value function improves in its predictive accuracy, its guidance
enhances additional search trajectories on the same instance. Boyan and Moore's STAGE
algorithm (Boyan and Moore 1997, Boyan 1998) falls into this category, showing excellent
performance on a range of optimization problems. Another approach is to learn a value
function off-line and then use it over multiple new instances of the same problem. Zhang
and Dietterich's (1995) application of RL to a NASA space shuttle mission scheduling
problem takes this approach (although it does not strictly involve local search as we define
it below). A key issue here is the need to normalize state representations and rewards so
that trajectories from instances of different sizes and difficulties yield consistent training
data. In each of the above approaches, a state of the RL problem is an entire solution (e.g.,
a complete tour in a Traveling Salesman Problem (TSP)) and the actions select next solutions from the current solutions' neighborhoods. A third approach, described by Bertsekas
and Tsitsiklis (1996), uses a learned value function for guiding the direct construction of
solutions rather than for moving between them.
We focused on combining aspects of the first two of these approaches with the goal of carefully examining how well the TD(λ) algorithm can learn an instance-independent value function
for a given problem to produce an enhanced local search algorithm applicable to all instances of that problem. Our approach combines an off-line learning phase with STAGE's
alternation between using the learned value function and the original cost function to guide
search. We present an extended case study of this algorithm's application to a somewhat
complicated variant of TSP known as the Dial-A-Ride Problem, which exhibits some of
the non-uniform structure present in real-world transportation and logistics problems.
2 ENHANCING LOCAL SEARCH
The components of local search for combinatorial optimization are 1) a finite set of feasible solutions, S; 2) an objective, or cost, function, c : S → R; and 3) a neighborhood function, A : S → P(S) (the power set of S). Local search starts with an initial feasible solution, s_0, of a problem instance and then at each step k = 1, 2, ..., it selects a solution s_k ∈ A(s_{k−1}) such that c(s_k) < c(s_{k−1}). This process continues until further local improvement is impossible, and the current local optimum is returned. If the algorithm always moves to the first less expensive neighboring solution encountered in an enumeration of a neighborhood, it is called first-improvement local search.
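In code, first-improvement local search is just the following loop (a generic Python sketch of the definition above; the toy cost and neighborhood are ours):

def first_improvement_local_search(s0, cost, neighbors):
    # repeatedly move to the first strictly cheaper neighbor in a fixed
    # enumeration; stop at a local optimum
    s = s0
    improved = True
    while improved:
        improved = False
        for s_next in neighbors(s):
            if cost(s_next) < cost(s):
                s = s_next
                improved = True
                break
    return s

# toy usage on integers with neighbors {n-1, n+1} and cost |n - 7|
print(first_improvement_local_search(0, lambda n: abs(n - 7),
                                     lambda n: (n - 1, n + 1)))  # -> 7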
Following Zhang and Dietterich (1995) and Boyan and Moore (1997), we note that local search can be viewed as a policy of a Markov decision process (MDP) with state set S and action sets A(s), s ∈ S, where an action is identified with the neighboring solution selected. Local search selects actions which decrease the value of c, eventually absorbing at a state with a locally minimum cost. But c is not the optimal value function for the local search problem, whose objective is to reach the lowest-cost absorbing state (possibly including some tradeoff involving the number of search steps required to do so). RL used with a function approximator can learn an approximate optimal value function, V, thereby producing an enhanced search algorithm that is locally guided by V instead of by c. One way to do this is to give a small penalty, ε, for each transition and a terminal reward upon absorption that is inversely related to the cost of the terminal state. Maximizing the expected undiscounted return accomplishes the desired tradeoff (determined by the value of ε) between quality of final solution and search time (cf. Zhang and Dietterich, 1995).
Since each instance of an optimization problem corresponds to a different MDP, a value
function V learned in this way is instance-specific. Whereas Boyan's STAGE algorithm in
effect uses such a V to enhance additional searches that start from different states of the
same instance, we are interested in learning a V off-line, and then using it for arbitrary instances of the given problem. In this case, the relevant sequential decision problem is more
complicated than a single-instance MDP since it is a summary of aspects of all problem
instances. It would be extremely difficult to make the structure of this process explicit, but
fortunately RL requires only the generation of sample trajectories, which is relatively easy
in this case.
In addition to their cost, secondary characteristics of feasible solutions can provide valuable
information for search algorithms. By adjusting the parameters of a function approximation
system whose inputs are feature vectors describing feasible solutions, an RL algorithm can
produce a compact representation of V. Our approach operates in two distinct phases. In
the learning phase, it learns a value function by applying the TD(λ) algorithm to a number
of randomly chosen instances of the problem. In the performance phase, it uses the resulting value function, now held fixed, to guide local search for additional problem instances.
This approach is in principle applicable to any combinatorial optimization problem, but we
describe its details in the context of the Dial-A-Ride problem.
3 THE DIAL-A-RIDE PROBLEM
The Dial-a-Ride Problem (DARP) has the following formulation. A van is parked at a
terminal. The driver receives calls from N customers who need rides. Each call identifies
the location of a customer, as well as that customer's destination. After the calls have been
received, the van must be routed so that it starts from the terminal, visits each pick-up
and drop-off site in some order, and then returns to the terminal. The tour must pick up
a passenger before eventually dropping that passenger off. The tour should be of minimal
length. Failing this goal-and DARP is NP-complete, so it is unlikely that optimal DARP
tours will be found easily-at least a good quality tour should be constructed. We assume
that the van has unlimited capacity and that the distances between pick-up and drop-off
locations are represented by a symmetric Euclidean distance matrix.
We use the notation
0 1 2 −1 3 −3 −2
to denote the following tour: "start at the terminal (0), then pick up 1, then 2, then drop off 1 (thus: −1), pick up 3, drop off 3, drop off 2, and then return to the terminal (site 0)." Given a tour s, the 2-opt neighborhood of s, A_2(s), is the set of legal tours obtainable from s by subsequence reversal. For example, for the tour above, the new tour created by the following subsequence reversal
0 1 / 2 −1 3 / −3 −2 → 0 1 3 −1 2 −3 −2
is an element of A_2(s). However, this reversal
0 1 2 / −1 3 −3 / −2 → 0 1 2 −3 3 −1 −2
leads to an infeasible tour, since it asserts that passenger 3 is dropped off first, then picked up. The neighborhood structure of DARP is highly non-uniform, varying between A_2 neighborhood sizes of O(N) and O(N^2).
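The two reversals above can be checked mechanically. In the sketch below (our own encoding: positive integers for pick-ups, negative for drop-offs, the terminal 0 implicit at both ends), a tour is feasible exactly when every pick-up precedes its drop-off:

def reverse_segment(tour, i, j):
    # a 2-opt move: reverse the slice tour[i:j]
    return tour[:i] + tour[i:j][::-1] + tour[j:]

def feasible(tour):
    # feasible iff each pick-up k appears before its drop-off -k
    seen = set()
    for site in tour:
        if site < 0 and -site not in seen:
            return False
        seen.add(site)
    return True

t = [1, 2, -1, 3, -3, -2]                  # the tour 0 1 2 -1 3 -3 -2 0
print(feasible(reverse_segment(t, 1, 4)))  # 0 1 3 -1 2 -3 -2 0 -> True
print(feasible(reverse_segment(t, 2, 5)))  # 0 1 2 -3 3 -1 -2 0 -> False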
Let s be a feasible DARP tour. By 2-opt(s) we mean the tour obtained by first-improvement
local search using the A2 neighborhood structure (presented in a fixed, standard enumeration), with tour length as the cost function. As with TSP, there is a 3-opt algorithm for
DARP, where a 3-opt neighborhood A_3(s) is defined and searched in a fixed, systematic way, again in first-improvement style. This neighborhood is created by inserting three rather than two "breaks" in a tour. 3-opt is much slower than 2-opt (more than 100 times as slow for N = 50), but it is much more effective, even when 2-opt is given equal time to generate multiple random starting tours and then complete its improvement scheme.
Psaraftis (1983) was the first to study 2-opt and 3-opt algorithms for DARP. He studied tours up to size N = 30, reporting that at that size, 3-opt tours are about 30% shorter on average than 2-opt tours. In theoretical studies of DARP, Stein (1978) showed that for sites placed in the unit square, the globally optimal tour for problem size N has a length that asymptotically approaches 1.02√(2N) with probability 1 as N increases. This bound applies to our study (although we multiply position coordinates by 100 and then truncate to get integer distance matrices), and thus, for example, a value of 1020 gives us a baseline estimate of the globally optimal tour cost for N = 50. Healy and Moll (1995) considered using a secondary cost function to extend local search on DARP. In addition to primary cost (tour length) they considered as a secondary cost the ratio of tour cost to neighborhood size, which they called cost-hood. Their algorithm employed a STAGE-like alternation between these two cost functions: starting from a random tour s, it first found 2-opt(s); then it performed a limited local search using the cost-hood function, which had the effect of driving the search to a new tour with a decent cost and a large neighborhood. These alternating processes were repeated until a time bound was exhausted, at which point the least-cost tour seen so far was reported as the result of the search. This technique worked well, with effectiveness falling midway between that of 2-opt and 3-opt.
4 ENHANCED 2-OPT FOR DARP
We restrict our description to a learning method for enhancing 2-opt for DARP, but the same method can be used for other problems. In the learning phase, after initializing the function approximator, we conduct a number of training episodes until we are satisfied that the weights have stabilized. For each episode k, we select a problem size N at random (from a predetermined range) and generate a random DARP instance of that size, i.e., we generate a symmetric Euclidean distance matrix by generating random points in the plane inside the square bounded by the points (0,0), (0,100), (100,100) and (100,0). We set the "terminal site" to point (50,50) and the initial tour to a randomly generated feasible tour. We then conduct a modified first-improvement 2-opt local search using the negated current value function, −V_k, as the cost function. The modification is that termination is controlled by a parameter ε > 0 as follows: the search terminates at a tour s if there is no s' ∈ A(s) such that V_k(s') > V_k(s) + ε. In other words, a step is taken only if it produces an improvement of at least ε according to the current value function. The episode returns a final tour s_f. We run one unmodified 2-opt local search, this time using the DARP cost function c (tour length), from s_f to compute 2-opt(s_f). We then apply a batch version of undiscounted TD(λ) to the saved search trajectory using the following immediate rewards: −ε for each transition, and −c(2-opt(s_f))/Stein_N as a terminal reward, where Stein_N is the Stein estimate for instance size N. Normalization by Stein_N helps make the terminal reward consistent across instance sizes. At the end of this learning phase, we have a final value function, V. V is used in the performance phase, which consists of applying the modified first-improvement 2-opt local search with cost function −V on new instances, followed by a 2-opt application to the resulting tour.
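A compressed sketch of the value-function update on one saved trajectory is given below in Python. It uses an online eligibility-trace form of undiscounted TD(λ) as a stand-in for the batch version described above, and the feature vectors and numbers are placeholders.

import numpy as np

def td_lambda_episode(features, rewards, w, lam=0.8, alpha=0.01):
    # features[t]: feature vector of the t-th tour on the trajectory;
    # rewards[t]: reward on leaving it (-epsilon per step; the last entry is
    # the terminal reward -c(2-opt(s_f)) / Stein_N)
    z = np.zeros_like(w)
    for t in range(len(features)):
        v = features[t] @ w
        v_next = features[t + 1] @ w if t + 1 < len(features) else 0.0
        delta = rewards[t] + v_next - v        # undiscounted TD error
        z = lam * z + features[t]              # accumulating trace
        w = w + alpha * delta * z
    return w

feats = [np.array([1.0, c, 0.2]) for c in (1.5, 1.4, 1.3)]  # 2 search steps
epsilon, terminal = 0.0, -1.3
print(td_lambda_episode(feats, [-epsilon, -epsilon, terminal], np.zeros(3)))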
?
The results described here were obtained using a simple linear approximator with a bias
Learning Instance-Independent Value Functions to Enhance Local Search
1021
Table 1: Weight Vectors for Learned Value Functions.
Value
Function
v
V20
V30
V40
Vso
V60
Weight Vector
< .951, .033, .0153 >
< .981, .019, .00017 >
< .984, .014, .0006 >
< .977, .022, .0009 >
< .980 , .019 , .0015 >
< .971 , .022 , .0069 >
weight and features developed from the following base features: 1) normcost_N(s) = c(s)/Stein_N; 2) normhood_N(s) = |A(s)|/a_N, where a_N is a normalization coefficient defined below; and 3) normprox_N, which considers a list of the N/4 least expensive edges of the distance matrix, as follows. Let e be one of the edges, with endpoints u and v. The normprox_N feature examines the current tour, and counts the number of sites on the tour that appear between u and v. normprox_N is the sum of these counts over the edges on the proximity list, divided by a normalizing coefficient b_N described below. Our function approximator is then given by w_0 + normcost_N/(normhood_N)^2 w_1 + normprox_N/(normhood_N)^2 w_2. The coefficients a_N and b_N are the result of running linear regression on randomly sampled instances of random sizes to determine coefficients that will yield the closest fit to a constant target value for normalized neighborhood size and proximity. The results were a_N = .383N^2 + 28.5N − 244.5 and b_N = .43N^2 + .736N − 68.9√N + 181.75. The motivation for the quotient features comes from Healy and Moll (1995), who found that using a similar term improved 2-opt on DARP by allowing it to sacrifice cost improvements to gain large neighborhoods.
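Pulling these definitions together, the value function has the following shape (a Python sketch; the example inputs are ours, the weights are the learned V of Table 1, and a_N uses the regression coefficients quoted above):

import numpy as np

def stein(N):
    return 1.02 * np.sqrt(2 * N) * 100      # Stein estimate, coordinates x100

def a_coef(N):
    return 0.383 * N**2 + 28.5 * N - 244.5  # normalizer for |A(s)|

def value(cost, hood_size, normprox, N, w):
    # w0 + w1 * normcost/normhood^2 + w2 * normprox/normhood^2;
    # normprox is assumed already divided by b_N
    normcost = cost / stein(N)
    normhood = hood_size / a_coef(N)
    return w[0] + (w[1] * normcost + w[2] * normprox) / normhood**2

w = np.array([0.951, 0.033, 0.0153])        # the learned V from Table 1
print(value(cost=1400.0, hood_size=900.0, normprox=0.8, N=50, w=w))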
5 EXPERIMENTAL RESULTS
Comparisons among algorithms were done at five representative sizes N = 20, 30, 40, 50,
and 60. For the learning phase, we conducted approximately 3,000 learning episodes, each
one using a randomly generated instance of size selected randomly between 20 and 60
inclusive. The result of the learning phase was a value function V . To assess the influence
of this multi-instance learning, we also repeated the above learning phase 5 times, except
that in each we held the instance size fixed to a different one of the 5 representative sizes,
yielding in each case a distinct value function V_N, where N is the training instance size. Table 1 shows the resulting weight vector <bias weight, costhood_N weight, proximity_N weight>. With the exception of the proximity_N weight, these are quite consistent across
training instance size. We do not yet understand why training on multiple-sized instances
led to this pattern of variation.
Table 2 compares the tour quality found by six different local search algorithms. For the
algorithms using learned value functions, the results are for the performance phase after
learning using the algorithm listed. Table entries are the percent by which tour length
exceeded SteinN for instance size N averaged over 100 instances of each representative
size. Thus, 2-opt exceeded Stein20 = 645 on the 100 instance sample set by an average of
42%. The last row in the table gives the results of using the five different value functions
VN , for the corresponding N . Results for TDC .8) are shown because they were better than
R. Moll, A. G. Barto, T J. Perkins and R. S. Sutton
1022
Table 2: Comparison of Six Algorithms at Sizes N = 20, 30, 40, 50, 60. Entries are
percentage above SteinN averaged over 100 random instances of size N.
Algorithm
2-opt
3-opt
TD(I)
TD(.8) E = 0
TD(.8) E = .Ol/N
TD(.8) E = 0, VN
N=20
42
8
28
27
29
29
N=30
47
8
31
30
35
30
N=40
53
11
34
35
37
32
N=50
56
10
39
37
41
36
N=60
60
10
40
39
44
40
Table 3: Average Relative Running Times. Times for 2-opt are in seconds; other entries
give time divided by 2-opt time.
Algorithm
2-opt
3-opt
TD( .8) E = 0
TD(.8) E = .01/ N
N=20
.237
32
3.2
2.2
N=30
.770
45
3.4
1.8
N=40
1.09
100
6.3
2.6
N=50
1.95
162
6.9
2.9
N=60
3.55
238
7.1
3.0
those for other values of .A. The learning-enhanced algorithms do well against 2-opt when
running time is ignored, and indeed TD(.8), E = 0, is about 35% percent better (according
to this measure) by size 60. Note that 3-opt clearly produces the best tours, and a non-zero
E for TD(.8) decreases tour quality, as expected since it causes shorter search trajectories.
Table 3 gives the relative running times of the various algorithms. The raw running times
for 2-opt are given in seconds (Common Lisp on 266 Mhz Mac G-3) at each of five sizes in
the first row. Subsequent rows give approximate running times divided by the corresponding 2-opt running time. Times are averages over 30 instances. The algorithms using learned
value functions are slower mainly due to the necessity to evaluate the features. Note that
TD(.8) becomes significantly faster with E non-zero.
Finally. Table 4 gives the relative performance of seven algorithms. normalized for time,
including the STAGE algorithm using linear regression with our features. We generated
20 random instances at each of the representative sizes, and we allowed each algorithm
to run for the indicated amount of time on each instance. If time remained when a local
optimum was reached, we restarted the algorithm at that point, except in the case of 2-opt,
where we selected a new random starting tour. The restarting regime for the learningenhanced algorithms is the regime employed by STAGE. Each algorithm reports the best
result found in the allotted time, and the chart reports the averages of these values across the
20 instances. Notice that the algorithms that take advantage of extensive off-line learning
significantly outperform the other algorithms, including STAGE, which relies on singleinstance learning.
6 DISCUSSION
We have presented an extension to local search that uses RL to enhance the local search
cost function for a particular optimization problem. Our method combines aspects of work
Table 4: Performance Comparisons, Equalized for Running Time.

                      Size and Running Time
Algorithm             N=20    N=30    N=40    N=50     N=60
                      10 sec  20 sec  40 sec  100 sec  150 sec
2-opt                 16      29      28      30       38
STAGE                 18      20      32      24       27
TD(.8) ε = 0          12      16      13      22       20
TD(.8) ε = .01/N      13      11      14      24       28
by Zhang and Dietterich (1995) and Boyan and Moore (1997; Boyan 1998). We have
applied our method to a relatively pure optimization problem, DARP, which possesses a relatively consistent structure across problem instances. This has allowed the method to
learn a value function that can be applied across all problem instances at all sizes. Our
method yields significant improvement over a traditional local search approach to DARP
on the basis of a very simple linear approximator, built using a relatively impoverished set
of features. It also improves upon Boyan and Moore's (1997) STAGE algorithm in our
example problem, benefiting from extensive off-line learning whose cost was not included
in our assessment. We think this is appropriate for some types of problems; since it is a
one-time learning cost, it can be amortized over many future problem instances of practical
importance.
Acknowledgement
We thank Justin Boyan for very helpful discussions of this subject. This research was supported by a grant from the Air Force Office of Scientific Research, Bolling AFB (AFOSR F49620-96-1-0254).
References
Boyan, J. A. (1998). Learning Evaluation Functions for Global Optimization. Ph.D. Thesis, Carnegie Mellon University.
Boyan, J. A., and Moore, A. W. (1997). Using Prediction to Improve Combinatorial Optimization Search. Proceedings of AI-STATS-97.
Bertsekas, D. P., and Tsitsiklis, J. N. (1996). Neuro-Dynamic Programming. Athena Scientific, Belmont, MA.
Healy, P., and Moll, R. (1995). A New Extension to Local Search Applied to the Dial-A-Ride Problem. European Journal of Operations Research, 8: 83-104.
Psaraftis, H. N. (1983). k-interchange Procedures for Local Search in a Precedence-Constrained Routing Problem. European Journal of Operations Research, 13: 391-402.
Zhang, W., and Dietterich, T. G. (1995). A Reinforcement Learning Approach to Job-Shop Scheduling. In Proceedings of the Fourteenth International Joint Conference on Artificial Intelligence, pp. 1114-1120. Morgan Kaufmann, San Francisco.
Stein, D. M. (1978). An Asymptotic Probabilistic Analysis of a Routing Problem. Math. Operations Res. J., 3: 89-101.
627 | 1,574 | Analyzing and Visualizing Single-Trial Event-Related Potentials
Tzyy-Ping Jung 1,2, Scott Makeig 2,3, Marissa Westerfield 2
Jeanne Townsend 2, Eric Courchesne 2, Terrence J. Sejnowski 1,2
1 Howard Hughes Medical Institute and Computational Neurobiology Laboratory
The Salk Institute, P.O. Box 85800, San Diego, CA 92186-5800
{jung,scott,terry}@salk.edu
2University of California, San Diego, La Jolla, CA 92093
3Naval Health Research Center, P.O. Box 85122, San Diego, CA 92186-5122
Abstract
Event-related potentials (ERPs) are portions of electroencephalographic (EEG) recordings that are both time- and phase-locked
to experimental events. ERPs are usually averaged to increase
their signal/noise ratio relative to non-phase locked EEG activity, regardless of the fact that response activity in single epochs
may vary widely in time course and scalp distribution. This study
applies a linear decomposition tool, Independent Component Analysis (ICA) [1], to multichannel single-trial EEG records to derive
spatial filters that decompose single-trial EEG epochs into a sum
of temporally independent and spatially fixed components arising
from distinct or overlapping brain or extra-brain networks. Our
results on normal and autistic subjects show that ICA can separate artifactual, stimulus-locked, response-locked, and non-event-related background EEG activities into separate components, allowing (1) removal of pervasive artifacts of all types from single-trial EEG records, and (2) identification of both stimulus- and response-locked EEG components. Second, this study proposes a new visualization tool, the 'ERP image', for investigating variability in latencies and amplitudes of event-evoked responses in spontaneous EEG
or MEG records. We show that sorting single-trial ERP epochs in
order of reaction time and plotting the potentials in 2-D clearly
reveals underlying patterns of response variability linked to performance. These analysis and visualization tools appear broadly
applicable to electrophysiological research on both normal and clinical populations.
1 Introduction
Scalp-recorded event-related potentials (ERPs) are voltage changes in the ongoing
electroencephalogram (EEG) that are both time- and phase-locked to some experimental events. These field potentials are usually averaged to increase their signal/noise ratio relative to artifacts and other non-phase locked EEG activity. The
averaging method disregards the fact that in single epochs response activity may
vary widely in both time course and scalp distribution. These differences are in
part attributed to different strategies employed by subjects for processing different
stimuli, to differences in expectation, attention, and arousal occurring in different trials, and/or to variations in alertness and fatigue [2, 3]. Single-trial analysis,
on the other hand, can avoid problems due to time and/or phase shifts and can
potentially reveal much richer information about event-related brain dynamics in
endogenous ERPs, but suffers from pervasive artifacts associated with blinks, eye-movements, and muscle noise, and poor signal-to-noise ratio arising from the fact
that non-phase locked background EEG activities often are larger than phase-locked
response components.
We present here new methods for analyzing and visualizing multichannel unaveraged single-trial ERP records that alleviate these problems. First, multi-channel
EEG epochs were analyzed using Independent Component Analysis (ICA), a signal
processing technique that can decompose multichannel complex data into spatially
fixed and temporally independent components. Next, a new visualization tool, the
' ERP image', is introduced for visualizing relations between single-trial ERP records
and their contributions to the ERP average. To form an ERP image, the recorded
potentials at one channel are plotted as parallel lines and single-trial ERP epochs
are sorted in order of reaction time. ICA, applied to the single-trial EEG records
from normal and autistic subjects in a visual selective attention experiment , derived
components whose dynamics were affected by stimulus presentations and/or subject
responses in distinct ways. We demonstrate, through analysis of two sample data
sets, the power of the proposed analysis and visualization tools for increasing the
amount and quality of information about event-related brain dynamics that can be
derived from single-trial EEG data.
2 Independent Component Analysis of EEG data
Bell and Sejnowski [5] have proposed a simple neural network algorithm that blindly
separates mixtures, x, of independent sources, s, using infomax. They showed that maximizing the joint entropy, H(y), of the output of a neural processor minimizes the mutual information among the output components, y_i = g(u_i), where g(u_i) is an invertible bounded nonlinearity and u = Wx, a version of the original sources, s, identical save for scaling and permutation. Lee et al. [1] generalized the infomax algorithm to perform blind source separation on linear mixtures of sources with either sub- or super-Gaussian distributions. Please see [5, 1] for details regarding the algorithms.
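A minimal sketch of one natural-gradient step of this extended infomax rule, assuming zero-mean, pre-whitened input and a small fixed learning rate (both illustrative simplifications, not the settings used in this study):

```python
import numpy as np

def extended_infomax_step(W, x, eta=1e-3):
    """One natural-gradient step of the extended infomax rule of Lee et al. [1].

    W : (n, n) current unmixing matrix
    x : (n, T) batch of mixed signals (assumed zero-mean and pre-whitened)."""
    n, T = x.shape
    u = W @ x                                   # estimated source activations
    # Switching signs: k_i = +1 for super-Gaussian, -1 for sub-Gaussian sources,
    # estimated from the moment criterion of the extended infomax algorithm.
    k = np.sign(np.mean(1.0 / np.cosh(u) ** 2, axis=1) * np.mean(u ** 2, axis=1)
                - np.mean(np.tanh(u) * u, axis=1))
    # Natural-gradient update: dW = (I - diag(k) tanh(u) u^T / T - u u^T / T) W
    dW = (np.eye(n) - (k[:, None] * np.tanh(u)) @ u.T / T - (u @ u.T) / T) @ W
    return W + eta * dW
```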
ICA is suitable for performing blind source separation on EEG data because: (1)
it is plausible that EEG data recorded at multiple scalp sensors are linear sums of
temporally independent components arising from spatially fixed, distinct or overlapping brain or extra-brain networks, and, (2) spatial smearing of EEG data by
volume conduction does not involve significant time delays.¹ In single-trial EEG
analysis, the rows of the input matrix x are the EEG signals recorded at different
electrodes, while the columns are measurements recorded at different time points.
¹See [4] for details regarding ICA assumptions underlying EEG analysis.
[Figure 1 panels, left to right: 'Single-trial ERPs at Cz', 'Ordered by RT', and the moving-average smoothed version; color scale in µV; x-axis: time (msec), −100 to 900.]
Figure 1: ERP images. (left panel) Single-trial ERPs recorded at a central electrode
(Cz) and time-locked to onsets of visual target stimuli (vertical left line), plotted with
subject reaction times (thick black line). (middle panel) The 390 single trials were then
sorted (bottom to top) in order of increasing reaction time. (right panel) To increase
signal-to-noise ratio and minimize EEG signals not both time- and phase-locked to the
experimental events, the trials were averaged vertically using a 30-trial moving window
advanced in one-trial increments.
The rows of the independent output data matrix u = Wx are time courses of activation of the ICA components, and the columns of the inverse matrix, W⁻¹, give the projection strengths of the respective components onto the scalp sensors. The scalp topographies of the components provide evidence as to their physiological origin (e.g., eye activity should project mainly to frontal sites). EEG signals of interest (e.g., event-related brain signals) can then be obtained by projecting selected ICA components back onto the scalp as x' = W⁻¹u', where u' is the matrix of activation waveforms, u, with rows representing activations of "irrelevant" sources set to zero.
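A minimal sketch of this back-projection step, using scikit-learn's FastICA as a stand-in for the extended-infomax algorithm actually used here (channel counts and component indices are placeholders):

```python
import numpy as np
from sklearn.decomposition import FastICA

def remove_components(eeg, bad_components):
    """eeg: (n_channels, n_times) array; bad_components: indices to zero out."""
    ica = FastICA(n_components=eeg.shape[0], whiten="unit-variance")
    # scikit-learn expects samples in rows, so transpose: rows become time points
    activations = ica.fit_transform(eeg.T)        # (n_times, n_components)
    activations[:, bad_components] = 0.0          # zero "irrelevant" sources (u')
    cleaned = ica.inverse_transform(activations)  # x' = W^-1 u' (plus the mean)
    return cleaned.T

# Usage: drop components 0 and 3 (e.g., a blink and an eye-movement component)
# cleaned_eeg = remove_components(eeg, [0, 3])
```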
3 Methods and Materials
EEG data were recorded at 29 scalp electrodes and 2 EOG placements from 2 normal
and 1 autistic subjects who participated in a 2-hr visual selective attention task in
which they were instructed to attend to circles flashed in random order at one of
five locations laterally arrayed 0.8 cm above a central fixation point. Locations were
outlined by five evenly spaced 1.6-cm blue squares displayed on a black background
at visual angles of ±2.7 deg and ±5.5 deg from fixation. Attended locations were
highlighted through entire 90-sec experimental blocks. Subjects were instructed to
maintain fixation on the central cross and press a button each time they saw a circle
in the attended location (see [6] for details).
4 Results
The ICA algorithm was applied separately to concatenated 31-channel single-trial
EEG records from two normal and one autistic subjects. The derived independent
components had a variety of distinct relations to task events. Some were clearly
time-locked to stimulus presentations, while others were time-locked to subject responses. Still others captured spontaneous EEG activity together with blinks, eye-movements, and muscle artifacts, while others accounted for oscillatory and other
background EEG phenomena.
4.1 ERP image
To investigate variability in the latencies and amplitudes of event-evoked responses
in spontaneous EEG, we here introduce a new visualization tool, the ERP image. An
example shown in Figure 1 (left panel) plots 390 single-trial ERP epochs time-locked to onsets of target stimuli (vertical left line) and recorded at a central electrode (Cz)
from a normal subject. Each horizontal trace represents a 1-sec single-trial ERP
record whose potential variations are plotted in different colors. The thick line
plots the subject reaction times (RT) in successive trials. Note the trial-to-trial
fluctuations in ERP latency and reaction time. The ERP average of these trials
is plotted in the bottom of the panel. Next, the single trials were sorted in order
of increasing reaction time (Fig. 1, middle panel), and were then smoothed with a
30-trial moving average (right panel). Note that, in all but the longest-RT trials,
the early positive feature (P2) is time-locked to stimulus onset (i.e. is stimulus-locked), and that the P3 feature follows RT in nearly all trials (i.e. is response-locked). ERP-image plots allow visualization of relations between event-related EEG trials and single-trial contributions to their ERP averages. They disclose a
tight link between the amplitudes and latencies of individual event-related responses
and subject behavior.
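The ERP-image construction itself reduces to an RT sort followed by a vertical moving average; a minimal sketch, with array shapes and the window length as illustrative assumptions:

```python
import numpy as np

def erp_image(epochs, rts, window=30):
    """epochs: (n_trials, n_times) single-trial potentials at one channel;
    rts: (n_trials,) reaction times. Returns the RT-sorted, smoothed image."""
    order = np.argsort(rts)                  # sort trials by reaction time
    sorted_epochs = epochs[order]
    # vertical moving average over `window` trials, advanced one trial at a time
    kernel = np.ones(window) / window
    smoothed = np.apply_along_axis(
        lambda col: np.convolve(col, kernel, mode="valid"), 0, sorted_epochs)
    return smoothed, rts[order]
```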
4.2 Removing blink and eye-movement artifacts from EEG records
Autistic subjects tend to blink more frequently than normal subjects [8]. ICA,
applied to this data set in which about 50% of the trials were contaminated by
blinks, successfully isolated blink artifacts to a single component (Fig. 2A, left)
whose contributions could be removed from the EEG records by subtracting out
the component projection [7]. Though the subjects were instructed to fixate during each 90-sec block, it has been suspected, though poorly documented, that
their eyes tended to drift towards target stimuli presented at peripheral locations.
Here, a second ICA component accounted for these small horizontal eye-movements
(Fig. 2B, right). Fig. 2B (5 traces) also shows separate ERP averages (at periocular
site EOG2) of responses to targets presented at the five different attended locations.
The size of the prominent eye movement-related component is proportional to the
angle between the stimulus location and the fixation point. Figure 2C shows the
averaged ERPs at the same site in response to stimuli presented at the five different
attended locations, before (faint traces) and after (solid traces) artifact removal. After artifact correction, the averaged ERPs to stimuli presented at the five different
locations were independent of stimulus location.
4.3 Extracting event-related brain activity from EEG Records
In these data, ICA also separated stimulus-locked, response-locked, and non-phase
locked background EEG activities into different independent components. Numbers
of components in each class varied across subjects. Figure 3A shows the projections
of the subgroups of ICA components accounting primarily for (left) stimulus-locked,
(middle) response-locked, and (right) remaining non-phase locked background EEG
activity at site P03. Notice that, (1) both the response latencies and active durations of the early stimulus-locked P1 and N1 components were very stable in nearly
all trials, (2) the peak of the later P3 component covaried with reaction time, and
(3) the projections of ICA components accounting for non-phase locked background
EEG activity contributed very little to the averaged ERP (right panel, bottom trace).
[Figure 2 panels (A)-(C): scalp maps and activation traces of components 1 and 2, with averaged responses to targets from the leftmost to the rightmost attended location around the fixation point; x-axes: time (msec), 0 to 900.]
Figure 2: (A) (left) Scalp topography and 5 consecutive 1-sec epochs of the activation time course of an ICA component accounting for blink artifacts in 641 single trials recorded from
an adult autistic subject. (B) The scalp topography of a second eye-movement component
and its averaged activation time courses in response to target stimuli presented at the five
different attended locations. (C) Averaged ERPs at site EOG2 to targets presented at
each of five attended locations, before (faint traces) and after (solid traces) artifact removal.
These results indicate that ICA makes possible the extraction and separation of event-related brain phenomena of all types from single-trial EEG records.
4.4 Re-aligning single-trial event-related potentials
Figure 3B (left panel) shows the raw artifact-corrected single-trial ERP epochs (the
sum of the data in Fig. 3A). Response latency fluctuations resulted in temporal
smearing of the P3 feature in the averaged ERP (bottom left). Realigning the
single-trial ERP epochs to the median reaction time sharpened the averaged P3
(center panel, P3'), but unfortunately made the early stimulus-locked activity out of
phase and the early averaged ERP thus absent in the first 200 msec. Because ICA
separated stimulus-locked and response-locked activity into different independent
components, we could realign the time courses of the response-locked P3' component
to the median reaction time and project the adjusted data, along with the unaligned
time courses of stimulus-locked components (PI/NI), back onto the scalp sensors
(right pane~. This realignment preserved the early stimulus-locked PI/NI while
sharpening the response-locked P3. The method minimized temporal smearing in
the averaged ERP arising from performance fluctuations (left and right panels).
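A sketch of this realignment step, assuming the ICA activations have already been split into stimulus-locked and response-locked subsets (variable names and the integer-sample shift are illustrative):

```python
import numpy as np

def realign(stim_act, resp_act, mixing, rts, srate, median_rt):
    """stim_act, resp_act: (n_trials, n_comp, n_times) activations of the
    stimulus-locked and response-locked component subsets;
    mixing: (n_channels, n_comp) columns of W^-1 for all components."""
    aligned = resp_act.copy()
    for i, rt in enumerate(rts):
        shift = int(round((median_rt - rt) * srate))   # move P3' to median RT
        aligned[i] = np.roll(aligned[i], shift, axis=-1)  # wrap-around ignored
    combined = stim_act + aligned          # leave the early P1/N1 untouched
    # project back to the scalp, x' = W^-1 u', trial by trial
    return np.einsum("ck,nkt->nct", mixing, combined)
```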
4.5 Event-related oscillatory EEG activity
ICA, applied to multichannel single-trial EEG records, can also separate multiple
oscillatory components even within a single frequency band. For example, Figure 3C
plots scalp topographies and ERP images of activations of two ICA components accounting for alpha activity in target-response epochs from a normal subject. Note
that the activity of the first component (left panel) was augmented following stimulation, while the activity of the second component (middle panel) was blocked by
the subject response. When the same spatial filter was applied to EEG records from
another session in which the subject was instructed to attend to but not to respond
[Figure 3 panels: (A) 'Stimulus-locked Activity at P03', 'Response-locked Activity at P03', 'Background Activity at P03'; (B) 'Single-trial ERPs at P03', 'Re-aligned ERPs'; (C) 'Alpha Component 1', 'Alpha Component 2 (motor-response session)', 'Alpha Component 2 (no-response session)'; x-axes: time (msec), −100 to 900.]
Figure 3: (A) Projections of ICA components at site P03 accounting, respectively, for
stimulus-locked (left), response-locked (middle), and non-phase locked background EEG activity (right) at one posterior site, P03. (B) (left) Artifact-corrected single-trial ERP records time-locked to stimulus onsets (left), and subject responses (center). Note that the early ERP features (P1, N1) are not in phase in the response-locked trials, and do
not appear in the response-locked average (center bottom). (right) Projections of the
response-locked components were aligned to median reaction time (355 ms) and summed
with stimulus-aligned component projections, forming an enhanced stimulus-aligned ERP
(right bottom). (C) ERP-image plots of activations of ICA components accounting for
alpha activity in EEG recorded from a normal subject. The alpha activity extracted by
these components were either augmented (left) or blocked (middle) by subject responses.
When the spatial filter for the second alpha component (middle) was applied to EEG
records from another session in which the subject was asked only to 'mentally note' the
occurrence of target stimuli, blocking was replaced by continued phase-locking.
T-P lung et al.
124
to target stimuli, this alpha activity was not blocked (right panel). ICA identifies
spatially-overlapping patterns of coherent activity over the entire scalp rather than
focusing on single scalp channels or channel pairs.
5 Conclusions
We have developed analytic and visualization tools for analysis of multichannel
single-trial EEG records. Single-trial ERP analysis based on Independent Component Analysis allows blind separation of multichannel complex EEG data into a
sum of temporally independent and spatially fixed components. ICA can effectively
remove eye and muscle artifacts without altering the underlying brain activity in
the EEG records. ICA can also be used to extract event-related brain phenomena
of all types from EEG records. It can identify spatially-overlapping patterns of
coherent activity over the entire scalp , and can be used to realign the time courses
of response-locked components to prevent temporal smearing in the average arising
from performance fluctuations. ERP images make visible systematic relations between single-trial EEG or MEG records and experimental events, and their relations
to averaged ERPs. ERP images can also be used to display relationships between
phase, amplitude and timing of event-related EEG components time-locked to either stimuli or subject responses. The analysis and visualization tools proposed in
this study dramatically increase the amount and quality of information on event- or response-related brain signals that can be extracted from ERP data. Both tools
appear applicable to electrophysiological research on normal and clinical populations.
References
[1] T.W. Lee, M. Girolami and T.J. Sejnowski (1999) Independent Component Analysis
using an Extended Infomax Algorithm for Mixed Sub-Gaussian and Super-Gaussian
Sources, Neural Computation, 11(2): 606-33.
[2] H. Yabe, F. Satio & Y. Fukushima (1993) Median Method for Detecting Endogenous
Event-related Brain Potentials, Electroencephalogr. Clin. Neurophysiol. 87(6): 403-7.
[3] H. Yabe, F. Satio & Y. Fukushima (1995) Classification of Single-trial ERP Sub-types:
Application of Globally Optimal Vector Quantization Using Simulated Annealing,
Electroencephalogr. Clin. Neurophysiol. 94(4): 288-97.
[4] S. Makeig, T-P Jung, A.J. Bell, D. Ghahremani, and T.J. Sejnowski (1997) Blind
Separation of Event-related Brain Responses into Independent Components, Proc.
Natl. Acad. Sci. USA, 94:10979-84.
[5] A.J. Bell & T.J. Sejnowski (1995). An information-maximization approach to blind
separation and blind deconvolution, Neural Computation 7:1129-1159.
[6] S. Makeig, M. Westerfield, J. Covington, T-P Jung, J. Townsend, T.J. Sejnowski, and
E. Courchesne (in press) Functionally independent components of the late positive
event-related potential in a visual spatial attention paradigm, J. Neuroscience.
[7] Jung T-P, Humphries C, Lee TW, Makeig S, McKeown MJ, Iragui V, Sejnowski
TJ (1998) Extended ICA Removes Artifacts from Electroencephalographic Data, In:
Advances in Neural Information Processing Systems 10, 894-900.
[8] J.G. Small (1971) Sensory Evoked Responses of Autistic Children, In: Infantile
Autism, 224-39.
628 | 1,575 | Semiparametric Support Vector and Linear Programming Machines
Alex J. Smola, Thilo T. Frieß, and Bernhard Schölkopf
GMD FIRST, Rudower Chaussee 5, 12489 Berlin
{smola, friess, bs}@first.gmd.de
Abstract
Semiparametric models are useful tools in the case where domain
knowledge exists about the function to be estimated or emphasis is
put onto understandability of the model. We extend two learning
algorithms - Support Vector machines and Linear Programming
machines to this case and give experimental results for SV machines.
1 Introduction
One of the strengths of Support Vector (SV) machines is that they are nonparametric techniques, where one does not have to e.g. specify the number of basis functions
beforehand. In fact, for many of the kernels used (not the polynomial kernels), like Gaussian rbf-kernels, it can be shown [6] that SV machines are universal approximators.
While this is advantageous in general, parametric models are useful techniques in
their own right. Especially if one happens to have additional knowledge about the
problem, it would be unwise not to take advantage of it. For instance it might be
the case that the major properties of the data are described by a combination of a
small set of linearly independent basis functions {φ₁(·), ..., φₙ(·)}. Or one may want
to correct the data for some (e.g. linear) trends. Secondly it also may be the case
that the user wants to have an understandable model, without sacrificing accuracy.
For instance many people in life sciences tend to have a preference for linear models.
This may be some motivation to construct semiparametric models, which are both
easy to understand (for the parametric part) and perform well (often due to the
nonparametric term). For more advocacy on semiparametric models see [1].
A common approach is to fit the data with the parametric model and train the nonparametric add-on on the errors of the parametric part, i.e. fit the nonparametric
part to the errors. We show in Sec. 4 that this is useful only in a very restricted
situation. In general it is impossible to find the best model amongst a given class for different cost functions by doing so. The better way is to solve a convex optimization problem like in standard SV machines, however with a different set of admissible functions

    f(x) = ⟨w, ψ(x)⟩ + Σ_{i=1}^n β_i φ_i(x).    (1)

Note that this is not so much different from the classical SV [10] setting where one uses functions of the type

    f(x) = ⟨w, ψ(x)⟩ + b.    (2)
2 Semiparametric Support Vector Machines
Let us now treat this setting more formally. For the sake of simplicity in the exposition we will restrict ourselves to the case of SV regression and only deal with the ε-insensitive loss function |ξ|_ε = max{0, |ξ| − ε}. Extensions of this setting are straightforward and follow the lines of [7].
Given a training set of size ℓ, X := {(x₁, y₁), ..., (x_ℓ, y_ℓ)}, one tries to find a function f that minimizes the functional of the expected risk¹

    R[f] = ∫ c(f(x) − y) p(x, y) dx dy.    (3)

Here c(ξ) denotes a cost function, i.e. how much deviations between prediction and actual training data should be penalized. Unless stated otherwise we will use c(ξ) = |ξ|_ε.

As we do not know p(x, y) we can only compute the empirical risk R_emp[f] (i.e. the training error). Yet, minimizing the latter is not a good idea if the model class is sufficiently rich and will lead to overfitting. Hence one adds a regularization term T[f] and minimizes the regularized risk functional

    R_reg[f] = Σ_{i=1}^ℓ c(f(x_i) − y_i) + λ T[f]    with λ > 0.    (4)

The standard choice in SV regression is to set T[f] = ½‖w‖².
This is the point of departure from the standard SV approach. While in the latter
f is described by (2), we will expand f in terms of (1). Effectively this means that
there exist functions φ₁(·), ..., φₙ(·) whose contribution is not regularized at all. If n is sufficiently smaller than ℓ this need not be a major concern, as the VC-dimension of this additional class of linear models is n, hence the overall capacity
control will still work, provided the nonparametric part is restricted sufficiently.
Figure 1 explains the effect of choosing a different structure in detail.
Solving the optimization equations for this particular choice of a regularization
term, with expansion (1), the ε-insensitive loss function and introducing kernels
¹ More general definitions, mainly in terms of the cost function, do exist but for the sake of clarity in the exposition we ignore these cases. See [10] or [7] for further details
on alternative definitions of risk functionals .
[Figure 1: schematic of two nested structures of hypothesis subsets (solid and dotted) with the optimal model marked '+'.]
Figure 1: Two different nested subsets (solid and dotted lines) of hypotheses and the
optimal model (+) in the realizable case. Observe that the optimal model is already
contained in a much smaller (in this diagram size corresponds to the capacity of
a subset) subset of the structure with solid lines than in the structure denoted by
the dotted lines. Hence prior knowledge in choosing the structure can have a large
effect on generalization bounds and performance.
following [2] we arrive at the following primal optimization problem:

    minimize    (λ/2)‖w‖² + Σ_{i=1}^ℓ (ξ_i + ξ_i*)

    subject to  ⟨w, ψ(x_i)⟩ + Σ_{j=1}^n β_j φ_j(x_i) − y_i ≤ ε + ξ_i
                y_i − ⟨w, ψ(x_i)⟩ − Σ_{j=1}^n β_j φ_j(x_i) ≤ ε + ξ_i*
                ξ_i, ξ_i* ≥ 0.    (5)

Here k(x, x') has been written as ⟨ψ(x), ψ(x')⟩. Solving (5) for its Wolfe dual yields
    maximize    −½ Σ_{i,j=1}^ℓ (α_i − α_i*)(α_j − α_j*) k(x_i, x_j)
                − ε Σ_{i=1}^ℓ (α_i + α_i*) + Σ_{i=1}^ℓ y_i (α_i − α_i*)

    subject to  Σ_{i=1}^ℓ (α_i − α_i*) φ_j(x_i) = 0    for all 1 ≤ j ≤ n
                α_i, α_i* ∈ [0, 1/λ].    (6)
Note the similarity to the standard SV regression model. The objective function
and the box constraints on the Lagrange multipliers α_i, α_i* remain unchanged. The only modification comes from the additional unregularized basis functions. Whereas in the standard SV case we only had a single (constant) function b · 1 we now have an expansion in the basis β_i φ_i(·). This gives rise to n constraints instead of one.
Finally f can be found as

    f(x) = Σ_{i=1}^ℓ (α_i − α_i*) k(x_i, x) + Σ_{i=1}^n β_i φ_i(x)    since    w = Σ_{i=1}^ℓ (α_i − α_i*) ψ(x_i).    (7)
The only difficulty remaining is how to determine β_i. This can be done by exploiting the Karush-Kuhn-Tucker optimality conditions, or much more easily, by using an interior point optimization code [9]. In the latter case the variables β_i can be obtained as the dual variables of the dual (dual dual = primal) optimization problem (6) as a by-product of the optimization process. This is also how these variables have been obtained in the experiments in the current paper.
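To make this concrete, the dual (6) can be handed to a generic QP solver; the sketch below uses cvxopt as one possible stand-in for the interior point code of [9], and the β_j then come out as the solver's dual variables for the equality constraints, as described above (up to the solver's sign convention). Variable names and the solver choice are illustrative:

```python
import numpy as np
from cvxopt import matrix, solvers

def semiparam_svr_dual(K, Phi, y, eps, lam):
    """Solve dual (6). K: (l, l) kernel matrix; Phi: (l, n) with
    Phi[i, j] = phi_j(x_i). Returns (alpha - alpha*, beta)."""
    l = len(y)
    D = np.block([[K, -K], [-K, K]])             # quadratic term over z = (a, a*)
    P = matrix(D + 1e-10 * np.eye(2 * l))        # tiny ridge for numerical safety
    q = matrix(np.hstack([eps - y, eps + y]))    # minimize 0.5 z'Pz + q'z
    G = matrix(np.vstack([-np.eye(2 * l), np.eye(2 * l)]))
    h = matrix(np.hstack([np.zeros(2 * l), np.full(2 * l, 1.0 / lam)]))
    A = matrix(np.hstack([Phi.T, -Phi.T]))       # sum_i (a_i - a_i*) phi_j(x_i) = 0
    b = matrix(np.zeros(Phi.shape[1]))
    sol = solvers.qp(P, q, G, h, A, b)
    z = np.ravel(sol["x"])
    beta = np.ravel(sol["y"])                    # equality duals (up to sign)
    return z[:l] - z[l:], beta
```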
3 Semiparametric Linear Programming Machines
Equation (4) gives rise to the question whether completely different choices of regularization functionals would not also lead to good algorithms. Again we will allow
functions as described in (7). Possible choices are
    T[f] = ½‖w‖² + Σ_{i=1}^n |β_i|    (8)

or

    T[f] = Σ_{i=1}^ℓ |α_i − α_i*|    (9)

or

    T[f] = Σ_{i=1}^ℓ |α_i − α_i*| + ½ Σ_{i,j=1}^n β_i β_j M_ij    (10)
for some positive semidefinite matrix M. This is a simple extension of existing
methods like Basis Pursuit [3] or Linear Programming Machines for classification
(see e.g. [4]). The basic idea in all these approaches is to have two different sets
of basis functions that are regularized differently, or where a subset may not be
regularized at all. This is an efficient way of encoding prior knowledge or the
preference of the user as the emphasis obviously will be put mainly on the functions
with little or no regularization at all. Eq. (8) is essentially the SV estimation model
where an additional linear regularization term has been added for the parametric
part. In this case the constraints of the optimization problem (6) change into
    −1 ≤ Σ_{i=1}^ℓ (α_i − α_i*) φ_j(x_i) ≤ 1    for all 1 ≤ j ≤ n
    α_i, α_i* ∈ [0, 1/λ].    (11)
It makes little sense (from a technical viewpoint) to compute Wolfe's dual objective
function in (10) as the problem does not get significantly easier by doing so. The
best approach is to solve the corresponding optimization problem directly by some
linear or quadratic programming code, e.g. [9]. Finally (10) can be reduced to the
case of (8) by renaming variables accordingly and a proper choice of M.
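For instance, with the regularizer (9) the whole semiparametric problem is a single linear program; a minimal sketch using scipy's linprog (the variable layout and solver choice are illustrative, not those of [9]):

```python
import numpy as np
from scipy.optimize import linprog

def lp_machine(K, Phi, y, eps, lam):
    """Regularizer (9) as an LP: min sum(xi + xi*) + lam * sum(a + a*)
    subject to the eps-tube constraints. K: (l, l), Phi: (l, n)."""
    l, n = len(y), Phi.shape[1]
    # variables z = [a (l), a* (l), beta (n), xi (l), xi* (l)], a, a*, xi, xi* >= 0
    c = np.hstack([lam * np.ones(2 * l), np.zeros(n), np.ones(2 * l)])
    I, Z = np.eye(l), np.zeros((l, l))
    A_ub = np.vstack([
        np.hstack([ K, -K,  Phi, -I,  Z]),   # f(x_i) - y_i <= eps + xi_i
        np.hstack([-K,  K, -Phi,  Z, -I]),   # y_i - f(x_i) <= eps + xi*_i
    ])
    b_ub = np.hstack([eps + y, eps - y])
    bounds = [(0, None)] * (2 * l) + [(None, None)] * n + [(0, None)] * (2 * l)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    z = res.x
    return z[:l] - z[l:2 * l], z[2 * l:2 * l + n]   # (a - a*), beta
```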
4 Why Backfitting is not sufficient
One might think that the approach presented above is quite unnecessary and overly complicated for semiparametric modelling. In fact, one could try to fit the data to the parametric model first, and then fit the nonparametric part to the residuals. In most cases, however, this does not lead to finding the minimum of (4). We will show this with a simple example.
Take an SV machine with linear kernel (i.e. k(x, x') = ⟨x, x'⟩) in one dimension and a constant term as parametric part (i.e. f(x) = wx + β). This is one of the simplest semiparametric SV machines possible. Now suppose the data was generated by

    y_i = x_i  where  x_i ≥ 1    (12)

without noise. Clearly then also y_i ≥ 1 for all i. By construction the best overall fit of the pair (β, w) will be arbitrarily close to (0, 1) if the regularization parameter λ is chosen sufficiently small.

For backfitting one first carries out the parametric fit to find a constant β minimizing the term Σ_{i=1}^ℓ c(y_i − β). Depending on the chosen cost function c(·), β will be the mean (L2 error), the median (L1 error), etc., of the set {y₁, ..., y_ℓ}. As all y_i ≥ 1
Figure 2: Left: Basis functions used in the toy example. Note the different length
scales of sin x and sinc 2πx. For convenience the functions were shifted by an offset of 2 and 4 respectively. Right: Training data denoted by '+', nonparametric (dash-dotted line), semiparametric (solid line), and parametric regression (dots). The regularization constant was set to λ = 2. Observe that the semiparametric model
picks up the characteristic wiggles of the original function.
also β ≥ 1, which is surely not the optimal solution of the overall problem, as there β would be close to 0 as seen above. Hence not even in the simplest of all settings
backfitting minimizes the regularized risk functional, thus one cannot expect the
latter to happen in the more complex case either. There exists only one case in
which backfitting would suffice, namely if the function spaces spanned by the kernel
expansion {k(Xi")} and {4>i(')} were orthogonal. Consequently in general one has
to jointly solve for both the parametric and the semiparametric part.
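A least-squares analogue of this effect (ε = 0, squared loss, ridge penalty on w only) already exhibits the gap; the tiny demo below is purely illustrative and not the ε-insensitive SV setting of the text:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(1, 3, size=50)        # x_i >= 1, hence y_i >= 1
y = x.copy()                          # noise-free data: y_i = x_i
lam = 1e-3                            # weak regularization of w only

# Backfitting: parametric part first (squared loss -> the mean), then w.
beta_bf = y.mean()                    # >= 1 by construction
w_bf = (x @ (y - beta_bf)) / (x @ x + lam)

# Joint fit: minimize sum (y - w x - beta)^2 + lam w^2 over (w, beta).
A = np.array([[x @ x + lam, x.sum()],
              [x.sum(),     len(x)]])
w_joint, beta_joint = np.linalg.solve(A, np.array([x @ y, y.sum()]))

print(beta_bf, beta_joint)            # ~2.0 vs ~0.0: backfitting misses the optimum
```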
5 Experiments
The main goal of the experiments shown is a proof of concept and to display the
properties of the new algorithm. We study a modification of the Mexican hat
function, namely
    f(x) = sin x + sinc(2π(x − 5)).    (13)

Data is generated by an additive noise process, i.e. y_i = f(x_i) + ξ_i, where ξ_i is additive noise. For the experiments we choose Gaussian rbf-kernels with width σ = 1/4, normalized to maximum output 1. The noise is uniform with 0.2 standard deviation, the ε-insensitive cost function |·|_ε with ε = 0.05. Unless stated otherwise averaging is done over 100 datasets with 50 samples each. The x_i are drawn uniformly from the interval [0, 10]. L1 and L2 errors are computed on the interval
[0, 10] with uniform measure. Figure 2 shows the function and typical predictions in
the nonparametric, semiparametric, and parametric setting. One can observe that
the semiparametric model including sin x, cos x and the constant function as basis
functions generalizes better than the standard SV machine. Fig. 3 shows that the
generalization performance is better in the semiparametric case. The length of the
weight vector of the kernel expansion ‖w‖ is displayed in Fig. 4. It is smaller in the
semiparametric case for practical values of the regularization strength. To make a
more realistic comparison, model selection (how to determine 1/λ) was carried out by 10-fold cross-validation for both algorithms independently for all 100 datasets.
Table 1 shows generalization performance for both a nonparametric model, a correctly chosen and an incorrectly chosen semiparametric model. The experiments
indicate that cases in which prior knowledge exists on the type of functions to be
used will benefit from semiparametric modelling. Future experiments will show how
much can be gained in real world examples.
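The toy data can be regenerated along the following lines (a sketch of the setup, not the authors' code; the uniform-noise range is chosen so that its standard deviation is 0.2, matching the text):

```python
import numpy as np

rng = np.random.default_rng(0)
l = 50
x = rng.uniform(0, 10, size=l)
# np.sinc(t) = sin(pi t)/(pi t), so np.sinc(2*(x-5)) = sin(2*pi*(x-5))/(2*pi*(x-5))
f = np.sin(x) + np.sinc(2 * (x - 5))
a = 0.2 * np.sqrt(3.0)                # uniform(-a, a) has standard deviation 0.2
y = f + rng.uniform(-a, a, size=l)
```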
[Figure 3 plots: L1 error (left) and L2 error (right) vs. 1/λ; legend entries 'Semiparametric Model' and 'Nonparametric Model'.]
Figure 3: L1 error (left) and L2 error (right) of the nonparametric / semiparametric
regression computed on the interval [0, 10] vs. the regularization strength 1/λ. The
dotted lines (although hardly visible) denote the variance of the estimate. Note
that in both error measures the semiparametric model consistently outperforms the
nonparametric one.
Figure 4: Length of the weight vector w in feature space, (Σ_{i,j} (α_i − α_i*)(α_j − α_j*) k(x_i, x_j))^{1/2}, vs. regularization strength. Note that ‖w‖, controlling the capacity of that part of the function belonging to the kernel expansion, is smaller (for practical choices of the regularization term) in the semiparametric than in the nonparametric model. If this difference is sufficiently large the overall capacity of the resulting model is smaller in the semiparametric approach. As before the dotted lines indicate the variance.
Figure 5: Estimates of the parameters for sin x (top picture) and cos x (bottom picture) in the semiparametric model vs. regularization strength 1/λ. The dotted lines above and below show the variation of the estimate given by its variance. Training set size was ℓ = 50. Note the small variation of the estimate. Also note that even in the parametric case 1/λ → 0 neither the coefficient for sin x converges to 1, nor does the corresponding term for cos x converge to 0. This is due to the additional frequency contributions of sinc 2πx.
             Nonparam.              Semiparam. (sin x, cos x, 1)   Semiparam. (sin 2x, cos 2x, 1)
L1 error     0.1263 ± 0.0064 (12)   0.0887 ± 0.0018 (82)           0.1267 ± 0.0064 (6)
L2 error     0.1760 ± 0.0097 (12)   0.1197 ± 0.0046 (82)           0.1864 ± 0.0124 (6)
Table 1: L1 and L2 error for model selection by 10-fold cross-validation. The correct
semiparametric model (sin x, cos x, 1) outperforms the nonparametric model by at
least 30%, and has significantly smaller variance. The wrongly chosen semiparametric model (sin 2x, cos 2x, 1), on the other hand, gives performance comparable to the nonparametric one; in fact, no significant performance degradation was noticeable.
The number in parentheses denotes the number of trials in which the corresponding
model was the best among the three models.
6 Discussion and Outlook
Similar models have been proposed and explored in the context of smoothing splines.
In fact, expansion (7) is a direct result of the representer theorem, however only in
the case of regularization in feature space (aka Reproducing Kernel Hilbert Space,
RKHS). One can show [5] that the expansion (7) is optimal in the space spanned
by the RKHS and the additional set of basis functions.
Moreover the semi parametric setting arises naturally in the context of conditionally
positive definite kernels of order m (see [8]). There, in order to use a set of kernels
which do not satisfy Mercer's condition, one has to exclude polynomials up to order
m − 1. Hence, to cope with that, one has to add polynomials back in 'manually', and our
approach presents a way of doing that.
Another application of semiparametric models besides the conventional approach
of treating the nonparametric part as nuisance parameters [1] is the domain of
hypothesis testing, e.g. to test whether a parametric model fits the data sufficiently
well. This can be achieved in the framework of structural risk minimization [10]: given the different models (nonparametric vs. semiparametric vs. parametric) one
can evaluate the bounds on the expected risk and then choose the model with the
lowest error bound. Future work will tackle the problem of computing good error
bounds of compound hypothesis classes. Moreover it should be easily possible to
apply the methods proposed in this paper to Gaussian processes.
Acknowledgements This work was supported in part by grants of the DFG Ja
379/51 and ESPRIT Project Nr. 25387- STORM. The authors thank Peter Bartlett,
Klaus- Robert Muller, Noboru Murata, Takashi Onoda, and Bob Williamson for
helpful discussions and comments.
References
[1] P.J. Bickel, C.A.J. Klaassen, Y. Ritov, and J.A. Wellner. Efficient and adaptive
estimation for semiparametric models. J. Hopkins Press, Baltimore, ML, 1994.
[2] B. E. Boser, I. M. Guyon, and V. N. Vapnik. A training algorithm for optimal
margin classifiers. In COLT'92, pages 144-152, Pittsburgh, PA, 1992.
[3] S. Chen, D. Donoho, and M. Saunders. Atomic decomposition by basis pursuit.
Technical Report 479, Department of Statistics, Stanford University, 1995.
[4] T.T. Frieß and R.F. Harrison. Perceptrons in kernel feature spaces. TR RR720, University of Sheffield, Sheffield, UK, 1998.
[5] G.S. Kimeldorf and G. Wahba. A correspondence between Bayesian estimation on stochastic processes and smoothing by splines. Ann. Math. Statist., 2:495-502, 1971.
[6] C.A. Micchelli. Interpolation of scattered data: distance matrices and conditionally positive definite functions. Constructive Approximation, 2:11-22, 1986.
[7] A.J. Smola and B. Schölkopf. On a kernel-based method for pattern recognition, regression, approximation and operator inversion. Algorithmica, 22:211-231, 1998.
[8] A.J. Smola, B. Schölkopf, and K.-R. Müller. The connection between regularization operators and support vector kernels. Neural Netw., 11:637-649, 1998.
[9] R.J. Vanderbei. LOQO: An interior point code for quadratic programming. TR
SOR-94-15, Statistics and Operations Research, Princeton Univ., NJ, 1994.
[10] V. Vapnik. The Nature of Statistical Learning Theory. Springer, N.Y., 1995.
629 | 1,576 | Gradient Descent for General Reinforcement Learning
Leemon Baird
leemon@cs.cmu.edu
www.cs.cmu.edu/~leemon
Computer Science Department
5000 Forbes Avenue
Carnegie Mellon University
Pittsburgh, PA 15213-3891
Andrew Moore
awm@cs.cmu.edu
www.cs.cmu.edu/~awm
Computer Science Department
5000 Forbes Avenue
Carnegie Mellon University
Pittsburgh, PA 15213-3891
Abstract
A simple learning rule is derived, the VAPS algorithm, which can
be instantiated to generate a wide range of new reinforcement-learning algorithms. These algorithms solve a number of open
problems, define several new approaches to reinforcement learning,
and unify different approaches to reinforcement learning under a
single theory. These algorithms all have guaranteed convergence,
and include modifications of several existing algorithms that were
known to fail to converge on simple MDPs. These include Q-learning, SARSA, and advantage learning. In addition to these
value-based algorithms it also generates pure policy-search
reinforcement-learning algorithms, which learn optimal policies
without learning a value function. In addition, it allows policy-search and value-based algorithms to be combined, thus unifying
two very different approaches to reinforcement learning into a
single Value and Policy Search (VAPS) algorithm. And these
algorithms converge for POMDPs without requiring a proper belief
state. Simulation results are given, and several areas for future
research are discussed.
1 CONVERGENCE OF GREEDY EXPLORATION
Many reinforcement-learning algorithms are known that use a parameterized
function approximator to represent a value function, and adjust the weights
incrementally during learning.
Examples include Q-learning, SARSA, and
advantage learning. There are simple MDPs where the original form of these
algorithms fails to converge, as summarized in Table 1. For the cases with ✓, the
algorithms are guaranteed to converge under reasonable assumptions such as
Table 1. Current convergence results for incremental, value-based RL algorithms. Residual algorithms changed every X in the first two columns to ✓. The new algorithms in this paper change every X to a ✓.

[Table rows: Markov chain, MDP, POMDP; columns for the training distributions, with the usually-greedy distribution rightmost.]

✓ = convergence guaranteed. X = a counterexample is known that either diverges or oscillates between the best and worst possible policies.
decaying learning rates. For the cases with X, there are known counterexamples
where it will either diverge or oscillate between the best and worst possible policies,
which have very different values. This can happen even with infinite training time
and slowly-decreasing learning rates (Baird, 95, Gordon, 96). Each X in the first
two columns can be changed to a ✓ and made to converge by using a modified form
of the algorithm, the residual form (Baird 95). But this is only possible when
learning with a fixed training distribution, and that is rarely practical. For most
large problems, it is useful to explore with a policy that is usually-greedy with
respect to the current value function, and that changes as the value function changes.
In that case (the rightmost column of the chart), the current convergence guarantees
are not very good. One way to guarantee convergence in all three columns is to
modify the algorithm so that it is performing stochastic gradient descent on some
average error function, where the average is weighted by state-visitation frequencies
for the current usually-greedy policy. Then the weighting changes as the policy
changes. It might appear that this gradient is difficult to compute. Consider Q-learning exploring with a Boltzmann distribution that is usually greedy with respect
to the learned Q function. It seems difficult to calculate gradients, since changing a
single weight will change many Q values, changing a single Q value will change
many action-choice probabilities in that state, and changing a single action-choice
probability may affect the frequency with which every state in the MDP is visited.
Although this might seem difficult, it is not. Surprisingly, unbiased estimates of the
gradients of visitation distributions with respect to the weights can be calculated
quickly, and the resulting algorithms can put a ✓ in every case in Table 1.
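For a Boltzmann policy over a learned Q function, the gradient of the log action probability needed below has a simple closed form; a minimal sketch for a linear Q(x, u) = w·φ(x, u), where the feature map phi is an illustrative assumption:

```python
import numpy as np

def grad_log_policy(w, phi, x, u, actions, tau=1.0):
    """d/dw ln P(u|x) for a Boltzmann policy over Q(x, a) = w . phi(x, a).

    phi(x, a) -> feature vector; actions: iterable of all actions."""
    q = np.array([w @ phi(x, a) for a in actions]) / tau
    p = np.exp(q - q.max())
    p /= p.sum()                                  # softmax action probabilities
    avg_feat = sum(p_a * phi(x, a) for p_a, a in zip(p, actions))
    return (phi(x, u) - avg_feat) / tau
```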
2 DERIVATION OF THE VAPS EQUATION
Consider a sequence of transitions observed while following a particular stochastic
policy on an MDP. Let s_t = {x₀, u₀, R₀, x₁, u₁, R₁, ..., x_{t−1}, u_{t−1}, R_{t−1}, x_t, u_t, R_t} be the sequence of states, actions, and reinforcements up to time t, where performing action u_t in state x_t yields reinforcement R_t and a transition to state x_{t+1}. The
stochastic policy may be a function of a vector of weights w. Assume the MDP has a single start state named x₀. If the MDP has terminal states, and x_t is a terminal state, then x_{t+1} = x₀. Let S_t be the set of all possible sequences from time 0 to t. Let e(s_t) be a given error function that calculates an error on each time step, such as the squared Bellman residual at time t, or some other error occurring at time t. If e is a function of the weights, then it must be a smooth function of the weights. Consider a period of time starting at time 0 and ending with probability P(end | s_t) after the sequence s_t occurs. The probabilities must be such that the expected squared period
length is finite. Let B be the expected total error during that period, where the
expectation is weighted according to the state-visitation frequencies generated by
the given policy:
    B = Σ_{T=0}^∞ Σ_{s_T ∈ S_T} P(period ends at time T after trajectory s_T) Σ_{t=0}^T e(s_t)    (1)

      = Σ_{t=0}^∞ Σ_{s_t ∈ S_t} e(s_t) P(s_t)    (2)

where:

    P(s_t) = P(u_t | s_t) P(R_t | s_t) Π_{i=0}^{t−1} [ P(u_i | s_i) P(R_i | s_i) P(x_{i+1} | s_i) (1 − P(end | s_i)) ]    (3)
Note that on the first line, for a particular s_t, the error e(s_t) will be added in to B once for every sequence that starts with s_t. Each of these terms will be weighted by the probability of a complete trajectory that starts with s_t. The sum of the probabilities of all trajectories that start with s_t is simply the probability of s_t being observed, since the period is assumed to end eventually with probability one. So the second line equals the first. The third line is the probability of the sequence, of which only the P(u_t | x_t) factor might be a function of w. If so, this probability must
be a smooth function of the weights and nonzero everywhere. The partial derivative
of B with respect to w, a particular element of the weight vector w, is:
    ∂B/∂w = Σ_{t=0}^∞ Σ_{s_t ∈ S_t} [ (∂/∂w) e(s_t) · P(s_t) + e(s_t) (∂/∂w) P(s_t) ]    (4)

          = Σ_{t=0}^∞ Σ_{s_t ∈ S_t} P(s_t) [ (∂/∂w) e(s_t) + e(s_t) Σ_{i=1}^t (∂/∂w) ln P(u_{i−1} | s_{i−1}) ]    (5)
Space here is limited, and it may not be clear from the short sketch of this
derivation, but summing (5) over an entire period does give an unbiased estimate of
the gradient of B, the expected total error during a period. An incremental algorithm to perform
stochastic gradient descent on B is the weight update given on the left side of Table
2, where the summation over previous time steps is replaced with a trace T_t for each
weight. This algorithm is more general than previously-published algorithms of this
form, in that e can be a function of all previous states, actions, and reinforcements,
rather than just the current reinforcement. This is what allows VAPS to do both
value and policy search.
Every algorithm proposed in this paper is a special case of the VAPS equation on
the left side of Table 2. Note that no model is needed for this algorithm. The only
probability needed in the algorithm is the policy, not the transition probability from
the MDP. This is stochastic gradient descent on B, and the update rule is only
correct if the observed transitions are sampled from trajectories found by following
Table 2. The general VAPS algorithm (left), and several instantiations of it (right).
This single algorithm includes both value-based and policy-search approaches and
their combination, and gives guaranteed convergence in every case.

    Δw_t = -α [ ∂e(s_t)/∂w + e(s_t) T_t ]
    ΔT_t = ∂/∂w ln P(u_{t-1} | s_{t-1})

    e_SARSA(s_t) = (1/2) E²[ R_{t-1} + γ Q(x_t, u_t) − Q(x_{t-1}, u_{t-1}) ]
    e_Q-learning(s_t) = (1/2) E²[ R_{t-1} + γ max_u Q(x_t, u) − Q(x_{t-1}, u_{t-1}) ]
    e_advantage(s_t): the analogous squared residual of the advantage-learning
        Bellman equation, with the advantage function A(x, u) in place of Q(x, u)
    e_value-iteration(s_t) = (1/2) [ max_u E[ R_{t-1} + γ V(x_t) ] − V(x_{t-1}) ]²
    e_SARSA-policy(s_t) = (1 − β) e_SARSA(s_t) + β (b − γ^t R_t)
the current, stochastic policy. Both e and P should be smooth functions of w, and
for any given w vector, e should be bounded. The algorithm is simple, but actually
generates a large class of different algorithms depending on the choice of e and
when the trace is reset to zero. For a single sequence, sampled by following the
current policy, the sum of Δw along the sequence will give an unbiased estimate of
the true gradient, with finite variance. Therefore, during learning, if weight updates
are made at the end of each trial, and if the weights stay within a bounded region,
and the learning rate approaches zero, then B will converge with probability one.
Adding a weight-decay term (a constant times the 2-norm of the weight vector) onto
B will prevent weight divergence for small initial learning rates. There is no
guarantee that a global minimum will be found when using general function
approximators, but at least it will converge. This is true for backprop as well.
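As a concrete illustration, a minimal sketch of the incremental update on the left side of
Table 2 is given below; the environment and policy interfaces (env.reset, env.step,
policy.sample) and the helper names grad_e and grad_log_policy are assumptions made
for the sketch, not part of the algorithm's specification.

    import numpy as np

    def vaps_period(w, env, policy, grad_e, grad_log_policy, alpha=0.01):
        # One period of the general VAPS update:
        #   Delta w_t = -alpha * (d e(s_t)/dw + e(s_t) * T_t)
        #   Delta T_t = d/dw ln P(u_{t-1} | s_{t-1})
        T = np.zeros_like(w)                 # one trace per weight
        x = env.reset()                      # single start state x_0
        u = policy.sample(w, x)
        done = False
        while not done:
            x_next, R, done = env.step(u)
            u_next = policy.sample(w, x_next)
            # e(s_t) may depend on the whole history; here it is fed only the
            # most recent transition, as in the SARSA instantiation
            e, de_dw = grad_e(w, x, u, R, x_next, u_next)
            T += grad_log_policy(w, x, u)    # accumulate the trace
            w = w - alpha * (de_dw + e * T)  # the VAPS weight update
            x, u = x_next, u_next
        return w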
3 INSTANTIATING THE VAPS ALGORITHM
Many reinforcement-learning algorithms are value-based; they try to learn a value
function that satisfies the Bellman equation. Examples are Q-learning, which learns
a value function, actor-critic algorithms, which learn a value function and the policy
which is greedy with respect to it, and TD(1), which learns a value function based
on future rewards. Other algorithms are pure policy-search algorithms; they
directly learn a policy that returns high rewards. These include REINFORCE
(Williams, 1988), backprop through time, learning automata, and genetic
algorithms. The algorithms proposed here combine the two approaches: they
perform Value And Policy Search (VAPS). The general VAPS equation is
instantiated by choosing an expression for e. This can be a Bellman residual
(yielding value-based), the reinforcement (yielding policy-search), or a linear
combination of the two (yielding Value And Policy Search). The single VAPS
update rule on the left side of Table 2 generates a variety of different types of
algorithms, some of which are described in the following sections.
3.1 REDUCING MEAN SQUARED RESIDUAL PER TRIAL
If the MDP has terminal states, and a trial is the time from the start until a terminal
state is reached, then it is possible to minimize the expected total error per trial by
resetting the trace to zero at the start of each trial. Then, a convergent form of
SARSA, Q-learning, incremental value iteration, or advantage learning can be
generated by choosing e to be the squared Bellman residual, as shown on the right
side of Table 2. In each case, the expected value is taken over all possible (x_t, u_t, R_t)
triplets, given s_{t-1}. The policy must be a smooth, nonzero function of the weights.
So it could not be an ε-greedy policy that chooses the greedy action with probability
(1 − ε) and chooses uniformly otherwise. That would cause a discontinuity in the
gradient when two Q values in a state were equal. But the policy could be
something that approaches ε-greedy as a positive temperature c approaches zero:
P(u | x) = ε/n + (1 − ε) e^{Q(x,u)/c} / Σ_{u'} e^{Q(x,u')/c}    (6)

where n is the number of possible actions in each state.
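In code, (6) is a one-line smooth stand-in for ε-greedy; the sketch below assumes a
tabular Q function, with q_values holding Q(x, ·) for the current state.

    import numpy as np

    def smoothed_greedy_policy(q_values, eps=0.1, c=0.1):
        # Equation (6): approaches eps-greedy as the temperature c -> 0+,
        # but remains a smooth, nonzero function of the Q values.
        z = np.exp((q_values - np.max(q_values)) / c)   # stabilized Boltzmann
        n = len(q_values)
        return eps / n + (1.0 - eps) * z / z.sum()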
For each instance in Table 2 other than value iteration, the gradient of e can be
estimated using two independent, unbiased estimates of the expected value. For example:
∂/∂w e_SARSA(s_t) = e_SARSA(s_t) ( γφ ∂/∂w Q(x'_t, u'_t) − ∂/∂w Q(x_{t-1}, u_{t-1}) )    (7)
When φ = 1, this is an estimate of the true gradient. When φ < 1, this is a residual
algorithm, as described in (Baird, 95), and it retains guaranteed convergence, but
may learn more quickly than pure gradient descent for some values of φ. Note that
the gradient of Q(x, u) at time t uses primed variables. That means a new state and
action at time t were generated independently from the state and action at time t−1.
Of course, if the MDP is deterministic, then the primed variables are the same as the
unprimed. If the MDP is nondeterministic but the model is known, then the model
must be evaluated one additional time to get the other state. If the model is not
known, then there are three choices. First, a model could be learned from past data,
and then evaluated to give this independent sample. Second, the issue could be
ignored, simply reusing the unprimed variables in place of the primed variables.
This may affect the quality of the learned function (depending on how random the
MDP is), but it does not stop convergence, and may be an acceptable approximation in
practice. Third, all past transitions could be recorded, and the primed variables
could be found by searching for all the times (x_{t-1}, u_{t-1}) has been seen before, and
randomly choosing one of those transitions and using its successor state and action
as the primed variables. This is equivalent to learning the certainty-equivalence
model and sampling from it, and so is a special case of the first choice. For
extremely large state-action spaces with many starting states, this is likely to give
the same result in practice as simply reusing the unprimed variables as the primed
variables. Note that when the weights do not affect the policy at all, these algorithms
reduce to standard residual algorithms (Baird, 95).
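A sketch of the gradient (7) for a differentiable Q function follows; the transition tuple
layout and the convention that the caller supplies the independently sampled successor
(xp2, up2) are assumptions of this sketch (for a deterministic MDP, pass the same
successor twice).

    def residual_sarsa_grad(Q, grad_Q, w, trans, gamma=0.9, phi=1.0):
        # Gradient of e_SARSA = (1/2)(R + gamma Q(x',u') - Q(x,u))^2, eq. (7).
        # phi blends pure gradient descent (phi=1) with residual algorithms.
        x, u, R, xp, up, xp2, up2 = trans
        e = R + gamma * Q(w, xp, up) - Q(w, x, u)        # Bellman residual
        # gamma*phi weighs the gradient through the independent successor
        return e * (gamma * phi * grad_Q(w, xp2, up2) - grad_Q(w, x, u))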
It is also possible to reduce the mean squared residual per step, rather than per trial.
This is done by making period lengths independent of the policy, so minimizing
error per period will also minimize the error per step. For example, a period might
be defined to be the first 100 steps, after which the traces are reset, and the state is
returned to the start state. Note that if every state-action pair has a positive chance
of being seen in the first 100 steps, then this will not just be solving a finite-horizon
problem. It will actually be solving the discounted, infinite-horizon problem, by
reducing the Bellman residual in every state. But the weighting of the residuals will
be determined only by what happens during the first 100 steps. Many different
problems can be solved by the VAPS algorithm by instantiating the definition of
"period" in different ways.
3.2 POLICY-SEARCH AND VALUE-BASED LEARNING
It is also possible to add a term that tries to maximize reinforcement directly. For
example, e could be defined to be e_SARSA-policy rather than e_SARSA from Table 2, and
Figure 1. A POMDP and the number of trials needed to learn it vs. β.
A combination of policy-search and value-based RL outperforms either alone.
the trace reset to zero after each terminal state is reached. The constant b does not
affect the expected gradient, but does affect the noise distribution, as discussed in
(Williams, 88). When β = 0, the algorithm will try to learn a Q function that satisfies
the Bellman equation, just as before. When β = 1, it directly learns a policy that will
minimize the expected total discounted reinforcement. The resulting "Q function"
may not even be close to containing true Q values or to satisfying the Bellman
equation; it will just give a good policy. When β is in between, this algorithm tries
to both satisfy the Bellman equation and give good greedy policies. A similar
modification can be made to any of the algorithms in Table 2. In the special case
where β = 1, this algorithm reduces to the REINFORCE algorithm (Williams, 1988).
REINFORCE has been rederived for the special case of Gaussian action distributions
(Tresp & Hofman, 1995), and extensions of it appear in (Marbach, 1998). This case
of pure policy search is particularly interesting, because for β = 1, there is no need
for any kind of model or of generating two independent successors. Other
algorithms have been proposed for finding policies directly, such as those given in
(Gullapalli, 92) and the various algorithms from learning automata theory
summarized in (Narendra & Thathachar, 89). The VAPS algorithms proposed here
appear to be the first to unify these two approaches to reinforcement learning,
finding a value function that both approximates a Bellman-equation solution and
directly optimizes the greedy policy.
Figure 1 shows simulation results for the combined algorithm. A run is said to have
learned when the greedy policy is optimal for 1000 consecutive trials. The graph
shows the average over 100 runs, with different initial random weights between
±10^{-6}. The learning rate was optimized separately for each β value. R = 1 when
leaving state A, R = 2 when leaving state B or entering end, and R = 0 otherwise; γ = 0.9.
The algorithm used was the modified Q-learning from Table 2, with exploration as
in equation (6), and φ = 1, b = 0, c = 0.1. States A and B share the same parameters,
so ordinary SARSA or greedy Q-learning could never converge, as shown in
(Gordon, 96). When β = 0 (pure value-based), the new algorithm converges, but of
course it cannot learn the optimal policy in the start state, since those two Q values
learn to be equal. When β = 1 (pure policy-search), learning converges to optimality,
but slowly, since there is no value function caching the results in the long sequence
of states near the end. By combining the two approaches, the new algorithm learns
much more quickly than either alone.
It is interesting that the VAPS algorithms described in the last three sections can be
applied directly to a Partially Observable Markov Decision Process (POMDP),
where the true state is hidden, and all that is available on each time step is an
ambiguous "observation", which is a function of the true state. Normally, an
algorithm such as SARSA only has guaranteed convergence when applied to an
MDP. The VAPS algorithms will converge in such cases.
4 CONCLUSION
A new algorithm has been presented. Special cases of it give new algorithms
similar to Q-learning, SARSA, and advantage learning, but with guaranteed
convergence for a wider range of problems than was previously possible, including
POMDPs. For the first time, these can be guaranteed to converge, even when the
exploration policy changes during learning. Other special cases allow new
approaches to reinforcement learning, where there is a tradeoff between satisfying
the Bellman equation and improving the greedy policy. For one MDP, simulation
showed that this combined algorithm learned more quickly than either approach
alone. This unified theory, combining for the first time both value-based and policy-search reinforcement learning, is of theoretical interest, and also was of practical
value for the simulations performed. Future research with this unified framework
may be able to empirically or analytically address the old question of when it is
better to learn value functions and when it is better to learn the policy directly. It
may also shed light on the new question of when it is best to do both at once.
Acknowledgments
This research was sponsored in part by the U.S. Air Force.
References

Baird, L. C. (1995). Residual algorithms: Reinforcement learning with function
approximation. In A. Prieditis & S. Russell (eds.), Machine Learning: Proceedings
of the Twelfth International Conference, 9-12 July. Morgan Kaufmann, San
Francisco, CA.

Gordon, G. (1996). Stable fitted reinforcement learning. In G. Tesauro, M. Mozer, and M.
Hasselmo (eds.), Advances in Neural Information Processing Systems 8, pp. 1052-1058.
MIT Press, Cambridge, MA.

Gullapalli, V. (1992). Reinforcement Learning and Its Application to Control. Dissertation
and COINS Technical Report 92-10, University of Massachusetts, Amherst, MA.

Kaelbling, L. P., Littman, M. L. & Cassandra, A. Planning and acting in partially
observable stochastic domains. Artificial Intelligence, to appear. Available now at
http://www.cs.brown.edu/people/lpk.

Marbach, P. (1998). Simulation-Based Optimization of Markov Decision Processes. Thesis
LIDS-TH-2429, Massachusetts Institute of Technology.

McCallum, A. (1995). Reinforcement learning with selective perception and hidden state.
Dissertation, Department of Computer Science, University of Rochester, Rochester, NY.

Narendra, K. & Thathachar, M. A. L. (1989). Learning Automata: An Introduction. Prentice
Hall, Englewood Cliffs, NJ.

Tresp, V. & Hofman, R. (1995). Missing and noisy data in nonlinear time-series
prediction. In Proceedings of Neural Networks for Signal Processing 5, F. Girosi, J.
Makhoul, E. Manolakos and E. Wilson (eds.), IEEE Signal Processing Society, New York,
NY, pp. 1-10.

Williams, R. J. (1988). Toward a theory of reinforcement-learning connectionist systems.
Technical Report NU-CCS-88-3, Northeastern University, Boston, MA.
Using Analytic QP and Sparseness to Speed
Training of Support Vector Machines
John C. Platt
Microsoft Research
1 Microsoft Way
Redmond, WA 98052
jplatt@microsoft.com
Abstract
Training a Support Vector Machine (SVM) requires the solution of a very
large quadratic programming (QP) problem. This paper proposes an algorithm for training SVMs: Sequential Minimal Optimization, or SMO.
SMO breaks the large QP problem into a series of smallest possible QP
problems which are analytically solvable. Thus, SMO does not require
a numerical QP library. SMO's computation time is dominated by evaluation of the kernel, hence kernel optimizations substantially quicken
SMO. For the MNIST database, SMO is 1.7 times as fast as PCG chunking; while for the UCI Adult database and linear SVMs, SMO can be
1500 times faster than the PCG chunking algorithm.
1 INTRODUCTION
In the last few years, there has been a surge of interest in Support Vector Machines
(SVMs) [1]. SVMs have empirically been shown to give good generalization performance
on a wide variety of problems. However, the use of SVMs is still limited to a small group of
researchers . One possible reason is that training algorithms for SVMs are slow, especially
for large problems. Another explanation is that SVM training algorithms are complex,
subtle, and sometimes difficult to implement. This paper describes a new SVM learning
algorithm that is easy to implement, often faster, and has better scaling properties than the
standard SVM training algorithm. The new SVM learning algorithm is called Sequential
Minimal Optimization (or SMO).
1.1 OVERVIEW OF SUPPORT VECTOR MACHINES
A general non-linear SVM can be expressed as

u = Σ_i α_i y_i K(x_i, x) − b    (1)
where u is the output of the SVM, K is a kernel function which measures the similarity
of a stored training example x_i to the input x, y_i ∈ {−1, +1} is the desired output of the
classifier, b is a threshold, and α_i are weights which blend the different kernels [1]. For
linear SVMs, the kernel function K is linear, hence equation (1) can be expressed as
u = w · x − b    (2)

where w = Σ_i α_i y_i x_i.
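For concreteness, (1) and (2) in code; this is a minimal sketch, and the kernel is passed in
as a function rather than fixed by the paper.

    import numpy as np

    def svm_output(x, support_x, alpha, y, b, kernel):
        # Equation (1): u = sum_i alpha_i y_i K(x_i, x) - b
        return sum(a * yi * kernel(xi, x)
                   for a, yi, xi in zip(alpha, y, support_x)) - b

    def linear_svm_output(x, w, b):
        # Equation (2): the kernel sum folds into w = sum_i alpha_i y_i x_i
        return np.dot(w, x) - b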
Training of an SVM consists of finding the α_i. The training is expressed as the minimization
of a dual quadratic form:

min_α Ψ(α) = min_α (1/2) Σ_i Σ_j y_i y_j K(x_i, x_j) α_i α_j − Σ_i α_i,    (3)

subject to box constraints,

0 ≤ α_i ≤ C, for all i,    (4)

and one linear equality constraint

Σ_{i=1}^{N} y_i α_i = 0.    (5)
The α_i are Lagrange multipliers of a primal quadratic programming (QP) problem: there
is a one-to-one correspondence between each α_i and each training example x_i.

Equations (3-5) form a QP problem that the SMO algorithm will solve. The SMO algorithm will terminate when all of the Karush-Kuhn-Tucker (KKT) optimality conditions of
the QP problem are fulfilled. These KKT conditions are particularly simple:

α_i = 0 ⇒ y_i u_i ≥ 1,    0 < α_i < C ⇒ y_i u_i = 1,    α_i = C ⇒ y_i u_i ≤ 1,    (6)

where u_i is the output of the SVM for the ith training example.
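A small sketch of the KKT test (6), with a tolerance argument playing the role of the ε
used in Section 2.2; the function name and signature are illustrative.

    def violates_kkt(alpha_i, y_i, u_i, C, tol=1e-3):
        # Check conditions (6) for one example, within tolerance tol.
        r = y_i * u_i - 1.0
        if alpha_i < tol:            # alpha_i == 0  requires  y_i u_i >= 1
            return r < -tol
        if alpha_i > C - tol:        # alpha_i == C  requires  y_i u_i <= 1
            return r > tol
        return abs(r) > tol          # 0 < alpha_i < C  requires  y_i u_i == 1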
1.2 PREVIOUS METHODS FOR TRAINING SUPPORT VECTOR MACHINES
Due to its immense size, the QP problem that arises from SVMs cannot be easily solved via
standard QP techniques. The quadratic form in (3) involves a Hessian matrix of dimension
equal to the number of training examples. This matrix cannot be fit into 128 Megabytes if
there are more than 4000 training examples.
Vapnik [9] describes a method to solve the SVM QP, which has since been known as
"chunking." Chunking relies on the fact that removing training examples with α_i = 0
does not change the solution. Chunking thus breaks down the large QP problem into a
series of smaller QP sub-problems, whose object is to identify the training examples with
non-zero α_i. Every QP sub-problem updates the subset of the α_i that are associated with
the sub-problem, while leaving the rest of the α_i unchanged. The QP sub-problem consists
of every non-zero α_i from the previous sub-problem combined with the M worst examples
that violate the KKT conditions (6), for some M [1]. At the last step, the entire set of
non-zero α_i has been identified, hence the last step solves the entire QP problem.

Chunking reduces the dimension of the matrix from the number of training examples to
approximately the number of non-zero α_i. If standard QP techniques are used, chunking
cannot handle large-scale training problems, because even this reduced matrix cannot fit
into memory. Kaufman [3] has described a QP algorithm that does not require the storage
of the entire Hessian.
The decomposition technique [6] is similar to chunking: decomposition breaks the large
QP problem into smaller QP sub-problems. However, Osuna et al. [6] suggest keeping a
Figure 1: The Lagrange multipliers α_1 and α_2 must fulfill all of the constraints of the full
problem. The inequality constraints cause the Lagrange multipliers to lie in the box
[0, C] × [0, C]. The linear equality constraint causes them to lie on a diagonal line:
α_1 − α_2 = k when y_1 ≠ y_2, and α_1 + α_2 = k when y_1 = y_2.
fixed size matrix for every sub-problem, deleting some examples and adding others which
violate the KKT conditions. Using a fixed-size matrix allows SVMs to be trained on very
large training sets. Joachims [2] suggests adding and subtracting examples according to
heuristics for rapid convergence. However, until SMO, decomposition required the use of
a numerical QP library, which can be costly or slow.
2 SEQUENTIAL MINIMAL OPTIMIZATION
Sequential Minimal Optimization quickly solves the SVM QP problem without using numerical QP optimization steps at all. SMO decomposes the overall QP problem into fixed-size QP sub-problems, similar to the decomposition method [7].

Unlike previous methods, however, SMO chooses to solve the smallest possible optimization problem at each step. For the standard SVM, the smallest possible optimization problem involves two elements of α, because the α_i must obey one linear equality constraint. At
each step, SMO chooses two α_i to jointly optimize, finds the optimal values for these α_i,
and updates the SVM to reflect these new values.
The advantage of SMO lies in the fact that solving for two α_i can be done analytically.
Thus, numerical QP optimization is avoided entirely. The inner loop of the algorithm can
be expressed in a short amount of C code, rather than invoking an entire QP library routine.
By avoiding numerical QP, the computation time is shifted from QP to kernel evaluation.
Kernel evaluation time can be dramatically reduced in certain common situations, e.g.,
when a linear SVM is used, or when the input data is sparse (mostly zero). The result of
kernel evaluations can also be cached in memory [1].
There are two components to SMO: an analytic method for solving for the two α_i, and
a heuristic for choosing which multipliers to optimize. Pseudo-code for the SMO algorithm can be found in [8, 7], along with the relationship to other optimization and machine
learning algorithms.
2.1 SOLVING FOR TWO LAGRANGE MULTIPLIERS
To solve for the two Lagrange multipliers α_1 and α_2, SMO first computes the constraints on
these multipliers and then solves for the constrained minimum. For convenience, all quantities that refer to the first multiplier will have a subscript 1, while all quantities that refer
to the second multiplier will have a subscript 2. Because there are only two multipliers,
1. C. Platt
560
the constraints can easily be displayed in two dimensions (see figure 1). The constrained
minimum of the objective function must lie on a diagonal line segment.
The ends of the diagonal line segment can be expressed quite simply in terms of α_2. Let
s = y_1 y_2. The following bounds apply to α_2:

L = max(0, α_2 + sα_1 − (1/2)(s + 1)C),    H = min(C, α_2 + sα_1 − (1/2)(s − 1)C).    (7)
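Equation (7) in code; the unified expression below is equivalent to the usual case split on
y_1 = y_2.

    def bounds(alpha1, alpha2, y1, y2, C):
        # Equation (7): feasible interval [L, H] for alpha2.
        s = y1 * y2
        L = max(0.0, alpha2 + s * alpha1 - 0.5 * (s + 1) * C)
        H = min(C,   alpha2 + s * alpha1 - 0.5 * (s - 1) * C)
        return L, H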
Under normal circumstances, the objective function is positive definite, and there is a minimum along the direction of the linear equality constraint. In this case, SMO computes the
minimum along the direction of the linear equality constraint:
α_2^{new} = α_2 + y_2(E_1 − E_2) / ( K(x_1, x_1) + K(x_2, x_2) − 2K(x_1, x_2) ),    (8)
where E_i = u_i − y_i is the error on the ith training example. As a next step, the constrained
minimum is found by clipping α_2^{new} into the interval [L, H]. The value of α_1 is then
computed from the new, clipped, α_2:

α_1^{new} = α_1 + s(α_2 − α_2^{new,clipped}).    (9)
For both linear and non-linear SVMs, the threshold b is re-computed after each step, so that
the KKT conditions are fulfilled for both optimized examples.
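Putting (7)-(9) together gives the analytic inner step; the sketch below simplifies the
degenerate-curvature handling and threshold bookkeeping relative to the full pseudo-code
in [7, 8].

    def smo_step(alpha1, alpha2, y1, y2, E1, E2, K11, K22, K12, C):
        # Analytic joint optimization of two multipliers: eqs. (7)-(9).
        s = y1 * y2
        L = max(0.0, alpha2 + s * alpha1 - 0.5 * (s + 1) * C)   # eq. (7)
        H = min(C,   alpha2 + s * alpha1 - 0.5 * (s - 1) * C)
        eta = K11 + K22 - 2.0 * K12     # curvature along the constraint line
        if L >= H or eta <= 0.0:
            return alpha1, alpha2       # no progress possible on this pair
        a2 = alpha2 + y2 * (E1 - E2) / eta   # unconstrained minimum, eq. (8)
        a2 = min(max(a2, L), H)              # clip into [L, H]
        a1 = alpha1 + s * (alpha2 - a2)      # eq. (9)
        return a1, a2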
2.2 HEURISTICS FOR CHOOSING WHICH MULTIPLIERS TO OPTIMIZE
In order to speed convergence, SMO uses heuristics to choose which two Lagrange multipliers to jointly optimize.
There are two separate choice heuristics: one for α_1 and one for α_2. The choice of α_1
provides the outer loop of the SMO algorithm. If an example is found to violate the KKT
conditions by the outer loop, it is eligible for optimization. The outer loop alternates single
passes through the entire training set with multiple passes through the non-bound α_i (α_i ∉
{0, C}). The multiple passes terminate when all of the non-bound examples obey the KKT
conditions within ε. The entire SMO algorithm terminates when the entire training set
obeys the KKT conditions within ε. Typically, ε = 10^{-3}.

The first choice heuristic concentrates the CPU time on the examples that are most likely to
violate the KKT conditions, i.e., the non-bound subset. As the SMO algorithm progresses,
α_i that are at the bounds are likely to stay at the bounds, while α_i that are not at the bounds
will move as other examples are optimized.
As a further optimization, SMO uses the shrinking heuristic proposed in [2]. After the pass
through the entire training set, shrinking finds examples which fulfill the KKT conditions
more than the worst example failed the KKT conditions. Further passes through the training
set ignore these fulfilled conditions until a final pass at the end of training, which ensures
that every example fulfills its KKT condition.
Once an α_1 is chosen, SMO chooses an α_2 to maximize the size of the step taken during
joint optimization. SMO approximates the step size by the absolute value of the numerator
in equation (8): |E_1 − E_2|. SMO keeps a cached error value E for every non-bound example
in the training set and then chooses an error to approximately maximize the step size. If
E_1 is positive, SMO chooses an example with minimum error E_2. If E_1 is negative, SMO
chooses an example with maximum error E_2.
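As a sketch, the second-choice heuristic reduces to one comparison over the cached errors;
here errors is assumed to be a dict mapping non-bound example indices to their cached E
values.

    def choose_second(E1, errors):
        # Approximately maximize |E1 - E2| using the error cache: take the
        # smallest cached E2 when E1 > 0, the largest when E1 <= 0.
        if E1 > 0:
            return min(errors, key=errors.get)
        return max(errors, key=errors.get)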
2.3 KERNEL OPTIMIZATIONS
Because the computation time for SMO is dominated by kernel evaluations, SMO can be
accelerated by optimizing these kernel evaluations. Utilizing sparse inputs is a generally
Analytic QP and Sparseness to Speed Training ofSupport Vector Machines
Experiment     Kernel    Sparse  Kernel   Training  Number of  C     % Sparse
                         Inputs  Caching  Set       Support          Inputs
                         Used    Used     Size      Vectors
AdultLin       Linear    Y       mix      11221     4158       0.05  89
AdultLinD      Linear    N       mix      11221     4158       0.05   0
WebLin         Linear    Y       mix      49749     1723       1     96
WebLinD        Linear    N       mix      49749     1723       1      0
AdultGaussK    Gaussian  Y       Y        11221     4206       1     89
AdultGauss     Gaussian  Y       N        11221     4206       1     89
AdultGaussKD   Gaussian  N       Y        11221     4206       1      0
AdultGaussD    Gaussian  N       N        11221     4206       1      0
WebGaussK      Gaussian  Y       Y        49749     4484       5     96
WebGauss       Gaussian  Y       N        49749     4484       5     96
WebGaussKD     Gaussian  N       Y        49749     4484       5      0
WebGaussD      Gaussian  N       N        49749     4484       5      0
MNIST          Polynom.  Y       N        60000     3450       100   81

Table 1: Parameters for various experiments
applicable kernel optimization. For commonly-used kernels, equations (1) and (2) can be
dramatically sped up by exploiting the sparseness of the input. For example, a Gaussian
kernel can be expressed as an exponential of a linear combination of sparse dot products.
Sparsely storing the training set also achieves substantial reduction in memory consumption.
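A sketch of both sparse-input tricks, with inputs stored as dicts of nonzero entries (an
illustrative representation): dot products touch only the nonzeros, and the Gaussian kernel
is assembled from three such dot products via ||a − b||² = a·a + b·b − 2a·b.

    import math

    def sparse_dot(a, b):
        # a, b: dicts mapping feature index -> value (nonzeros only)
        if len(a) > len(b):
            a, b = b, a
        return sum(v * b[i] for i, v in a.items() if i in b)

    def gaussian_kernel(a, b, sigma2=10.0):
        # K(a, b) = exp(-||a - b||^2 / (2 sigma^2)) from sparse dot products
        sq = sparse_dot(a, a) + sparse_dot(b, b) - 2.0 * sparse_dot(a, b)
        return math.exp(-sq / (2.0 * sigma2))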
To compute a linear SVM, only a single weight vector needs to be stored, rather than all of
the training examples that correspond to non-zero α_i. If the QP sub-problem succeeds, the
stored weight vector is updated to reflect the new α_i values.
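A sketch of the linear folding: after a successful joint step, only the two deltas touch the
stored weight vector (kept sparse here as a dict, which is an implementation choice, not
prescribed by the paper).

    def fold_linear_update(w, x1, x2, y1, y2, d_alpha1, d_alpha2):
        # Update w = sum_i alpha_i y_i x_i in place after a joint step.
        for i, v in x1.items():
            w[i] = w.get(i, 0.0) + y1 * d_alpha1 * v
        for i, v in x2.items():
            w[i] = w.get(i, 0.0) + y2 * d_alpha2 * v
        return w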
3 BENCHMARKING SMO
The SMO algorithm is tested against the standard chunking algorithm and against the decomposition method on a series of benchmarks. Both SMO and chunking are written in
C++, using Microsoft's Visual C++ 6.0 compiler. Joachims' package SVMlight (version
2.01) with a default working set size of 10 is used to test the decomposition method. The
CPU times of all algorithms are measured on an unloaded 266 MHz Pentium II processor
running Windows NT 4.
The chunking algorithm uses the projected conjugate gradient algorithm as its QP solver,
as suggested by Burges [1]. All algorithms use sparse dot product code and kernel caching,
as appropriate [1, 2]. Both SMO and chunking share folded linear SVM code.
The SMO algorithm is tested on three real-world data sets. The results of the experiments
are shown in Tables 1 and 2. Further tests on artificial data sets can be found in [8, 7].
The first test set is the UCI Adult data set [5]. The SVM is given 14 attributes of a census
form of a household and asked to predict whether that household has an income greater
than $50,000. Out of the 14 attributes, eight are categorical and six are continuous. The six
continuous attributes are discretized into quintiles, yielding a total of 123 binary attributes.
The second test set is text categorization: classifying whether a web page belongs to a
category or not. Each web page is represented as 300 sparse binary keywords attributes.
The third test set is the MNIST database of handwritten digits, from AT&T Research
Labs [4]. One classifier of MNIST, class 8, is trained. The inputs are 784-dimensional
Experiment     SMO       SVMlight  Chunking  SMO       SVMlight  Chunking
               Time      Time      Time      Scaling   Scaling   Scaling
               (sec)     (sec)     (sec)     Exponent  Exponent  Exponent
AdultLin       13.7      217.9     20711.3   1.8       2.1       3.1
AdultLinD      21.9      n/a       21141.1   1.0       n/a       3.0
WebLin         339.9     3980.8    17164.7   1.6       2.2       2.5
WebLinD        4589.1    n/a       17332.8   1.5       n/a       2.5
AdultGaussK    442.4     284.7     11910.6   2.0       2.0       2.9
AdultGauss     523.3     737.5     n/a       2.0       2.0       n/a
AdultGaussKD   1433.0    n/a       14740.4   2.5       n/a       2.8
AdultGaussD    1810.2    n/a       n/a       2.0       n/a       n/a
WebGaussK      2477.9    2949.5    23877.6   1.6       2.0       2.0
WebGauss       2538.0    6923.5    n/a       1.6       1.8       n/a
WebGaussKD     23365.3   n/a       50371.9   2.6       n/a       2.0
WebGaussD      24758.0   n/a       n/a       1.6       n/a       n/a
MNIST          19387.9   38452.3   33109.0   n/a       n/a       n/a

Table 2: Timings of algorithms on various data sets.
non-binary vectors and are stored as sparse vectors. A fifth-order polynomial kernel is
used to match the AT&T accuracy results.
The Adult set and the Web set are trained both with linear SVMs and Gaussian SVMs with
variance of 10. For the Adult and Web data sets, the C parameter is chosen to optimize
accuracy on a validation set. Experiments on the Adult and Web sets are performed with
and without sparse inputs and with and without kernel caching, in order to determine the
effect these kernel optimizations have on computation time. When a kernel cache is used,
the cache size for SMO and SVMlight is 40 megabytes. The chunking algorithm always
uses kernel caching: matrix values from the previous QP step are re-used. For the linear
experiments, SMO does not use kernel caching, while SVMlight does.
In Table 2, the scaling of each algorithm is measured as a function of the training set size,
which is varied by taking random nested subsets of the full training set. A line is fitted
to the log of the training time versus the log of the set size. The slope of the line is an
empirical scaling exponent.
4 CONCLUSIONS
As can be seen in Table 2, standard PCG chunking is slower than SMO for the data sets
shown, even for dense inputs. Decomposition and SMO have the advantage, over standard
PCG chunking, of ignoring the examples whose Lagrange multipliers are at C. This advantage is reflected in the scaling exponents for PCG chunking versus SMO and SVMlight.
PCG chunking can be altered to have a similar property [3]. Notice that PCG chunking uses
the same sparse dot product code and linear SVM folding code as SMO. However, these
optimizations do not speed up PCG chunking due to the overhead of numerically solving
large QP sub-problems.

SMO and SVMlight are similar: they decompose the large QP problem into very small QP
sub-problems. SMO decomposes into even smaller sub-problems: it uses analytical solutions of two-dimensional sub-problems, while SVMlight uses numerical QP to solve 10-dimensional sub-problems. The difference in timings between the two methods is partly
due to the numerical QP overhead, but mostly due to the difference in heuristics and kernel
optimizations. For example, SMO is faster than SVMlight by an order of magnitude on
linear problems, due to linear SVM folding. However, SVMlight can also potentially use
linear SVM folding. In these experiments, SMO uses a very simple least-recently-used kernel cache of Hessian rows, while SVMlight uses a more complex kernel cache and modifies
its heuristics to utilize the kernel effectively [2]. Therefore, SMO does not benefit from the
kernel cache at the largest problem sizes, while SVMlight speeds up by a factor of 2.5.

Utilizing sparseness to compute kernels yields a large advantage for SMO due to the lack
of heavy numerical QP overhead. For the sparse data sets shown, SMO can speed up by
a factor of between 3 and 13, while PCG chunking only obtained a maximum speedup of
2.1 times.

The MNIST experiments were performed without a kernel cache, because the MNIST data
set takes up most of the memory of the benchmark machine. Due to sparse inputs, SMO is
a factor of 1.7 faster than PCG chunking, even though none of the Lagrange multipliers are
at C. On a machine with more memory, SVMlight would be as fast or faster than SMO for
MNIST, due to kernel caching.

In summary, SMO is a simple method for training support vector machines which does not
require a numerical QP library. Because its CPU time is dominated by kernel evaluation,
SMO can be dramatically quickened by the use of kernel optimizations, such as linear SVM
folding and sparse dot products. SMO can be anywhere from 1.7 to 1500 times faster than
the standard PCG chunking algorithm, depending on the data set.
Acknowledgements
Thanks to Chris Burges for running data sets through his projected conjugate gradient code
and for various helpful suggestions.
References
[1] C. J. C. Burges. A tutorial on support vector machines for pattern recognition. Data
Mining and Knowledge Discovery, 2(2), 1998.

[2] T. Joachims. Making large-scale SVM learning practical. In B. Scholkopf, C. J. C.
Burges, and A. J. Smola, editors, Advances in Kernel Methods - Support Vector
Learning, pages 169-184. MIT Press, 1998.

[3] L. Kaufman. Solving the quadratic programming problem arising in support vector
classification. In B. Scholkopf, C. J. C. Burges, and A. J. Smola, editors, Advances in
Kernel Methods - Support Vector Learning, pages 147-168. MIT Press, 1998.

[4] Y. LeCun. MNIST handwritten digit database. Available on the web at
http://www.research.att.com/~yann/ocr/mnist/.

[5] C. J. Merz and P. M. Murphy. UCI repository of machine learning databases, 1998.
[http://www.ics.uci.edu/~mlearn/MLRepository.html]. Irvine, CA: University of California, Department of Information and Computer Science.

[6] E. Osuna, R. Freund, and F. Girosi. Improved training algorithm for support vector
machines. In Proc. IEEE Neural Networks in Signal Processing '97, 1997.

[7] J. C. Platt. Fast training of SVMs using sequential minimal optimization. In
B. Scholkopf, C. J. C. Burges, and A. J. Smola, editors, Advances in Kernel Methods - Support Vector Learning, pages 185-208. MIT Press, 1998.

[8] J. C. Platt. Sequential minimal optimization: A fast algorithm for training support vector machines. Technical Report MSR-TR-98-14, Microsoft Research, 1998. Available
at http://www.research.microsoft.com/~jplatt/smo.html.

[9] V. Vapnik. Estimation of Dependences Based on Empirical Data. Springer-Verlag,
1982.
Dynamics of Supervised Learning with
Restricted Training Sets
A.C.C. Coolen
Dept of Mathematics
King's College London
Strand, London WC2R 2LS, UK
tcoolen@mth.kcl.ac.uk
D. Saad
Neural Computing Research Group
Aston University
Birmingham B4 7ET, UK
saadd@aston.ac.uk
Abstract
We study the dynamics of supervised learning in layered neural networks, in the regime where the size p of the training set is proportional
to the number N of inputs. Here the local fields are no longer described
by Gaussian distributions. We use dynamical replica theory to predict
the evolution of macroscopic observables, including the relevant error
measures, incorporating the old formalism in the limit p/N → ∞.
1 INTRODUCTION
neural networks, using the strategy of statistical mechanics: by deriving closed laws for the
evolution of suitably chosen macroscopic observables (order parameters) in the limit of an
infinite system size [1, 2, 3, 4]. For a recent review and guide to references see e.g. [5].
The main successful procedure developed so far is built on the following cornerstones:
• The task to be learned is defined by a 'teacher', which is itself a neural network. This induces a natural set of order parameters (mutual weight vector overlaps between the teacher
and the trained, 'student', network).
• The number of network inputs is infinitely large. This ensures that fluctuations in the
order parameters will vanish, and enables usage of the central limit theorem.
• The number of 'hidden' neurons is finite, in both teacher and student, ensuring a finite
number of order parameters and an insignificant cumulative impact of the fluctuations.
• The size of the training set is much larger than the number of updates. Each example
presented is now different from the previous ones, so that the local fields will have Gaussian
distributions, leading to closure of the dynamic equations.
In this paper we study the dynamics of learning in layered networks with restricted training
sets, where the number p of examples scales linearly with the number N of inputs. Individual examples will now re-appear during the learning process as soon as the number of
weight updates made is of the order of p . Correlations will develop between the weights
Figure 1: Student and teacher fields (x, y) (see text) observed during numerical simulations
of on-line learning (learning rate η = 1) in a perceptron of size N = 10,000 at t = 50, using
examples from a training set of size p = N/2. Left: Hebbian learning. Right: AdaTron
learning [5]. Both distributions are clearly non-Gaussian.
and the training set examples and the student's local fields (activations) will be described by
non-Gaussian distributions (see e.g. Figure 1). This leads to a breakdown of the standard
formalism: the field distributions are no longer characterized by a few moments, and the
macroscopic laws must now be averaged over realizations of the training set. The first rigorous study of the dynamics of learning with restricted training sets in non-linear networks,
via generating functionals [6], was carried out for networks with binary weights. Here we
use dynamical replica theory (see e.g. [7]) to predict the evolution of macroscopic observables for finite α, incorporating the old formalism as a special case (α = p/N → ∞). For
simplicity we restrict ourselves to single-layer systems and noise-free teachers.
2 FROM MICROSCOPIC TO MACROSCOPIC LAWS
s: {-I,I}N
-t {-I,l}
S(e)
=
sgn [J . e] == sgn [x]
(I)
It tries to emulate a teacher perceptron which operates a similar rule, characterized by a
(fixed) weight vector B ∈ ℝ^N. The student modifies its weight vector J iteratively, using
examples of input vectors ξ which are drawn at random from a fixed (randomly composed)
training set D̃ = {ξ^1, ..., ξ^p} ⊂ D = {−1,1}^N, of size p = αN with α > 0, and the
corresponding values of the teacher outputs T(ξ) = sgn[B · ξ] ≡ sgn[y]. Averages
over the training set D̃ and over the full set D will be denoted as ⟨φ(ξ)⟩_D̃ and ⟨φ(ξ)⟩_D,
respectively. We will analyze the following two classes of learning rules:
on-line:   J(m+1) = J(m) + (η/N) ξ(m) G[J(m)·ξ(m), B·ξ(m)]
batch:     J(m+1) = J(m) + (η/N) ⟨ξ G[J(m)·ξ, B·ξ]⟩_D̃    (2)
In on-line learning one draws at each step m a question e(m) at random from the training
set, the dynamics is a stochastic process; in batch learning one iterates a deterministic map.
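A minimal simulation of the two rules in (2) for a fixed training set of size p = αN, using
the Hebbian choice G[x, y] = sgn(y) studied later; all parameter values are illustrative.

    import numpy as np

    def simulate(N=1000, alpha=0.5, eta=1.0, steps=20000, online=True, seed=0):
        # On-line versus batch learning from a restricted training set, eq. (2).
        rng = np.random.default_rng(seed)
        p = int(alpha * N)
        B = rng.standard_normal(N); B /= np.linalg.norm(B)   # teacher weights
        xi = rng.choice([-1.0, 1.0], size=(p, N))            # fixed training set
        ysgn = np.sign(xi @ B)                # sgn(y): the teacher labels
        J = rng.standard_normal(N) / np.sqrt(N)              # student weights
        for _ in range(steps):
            if online:
                m = rng.integers(p)           # draw one stored example
                J += (eta / N) * xi[m] * ysgn[m]          # G[x, y] = sgn(y)
            else:
                J += (eta / N) * (xi * ysgn[:, None]).mean(axis=0)
        return J, B, xi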
Our key dynamical observables are the training- and generalization errors, defined as
E_t(J) = ⟨θ[−(J·ξ)(B·ξ)]⟩_D̃,    E_g(J) = ⟨θ[−(J·ξ)(B·ξ)]⟩_D.    (3)
Only if the training set D̃ is sufficiently large, and if there are no correlations between J and
the training set examples, will these two errors be identical. We now turn to macroscopic
observables Ω[J] = (Ω_1[J], ..., Ω_k[J]). For N → ∞ (with finite times t = m/N
and with finite k), and if our observables are of a so-called mean-field type, their associated
macroscopic distribution P_t(Ω) is found to obey a Fokker-Planck type equation, with flow
and diffusion terms that depend on whether on-line or batch learning is used. We now
choose a specific set of observables Ω[J], tailored to the present problem:
Q[J] = J²,    R[J] = J·B,    P[x,y; J] = ⟨δ[x − J·ξ] δ[y − B·ξ]⟩_D̃.    (4)
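In a simulation the observables (4) are measured directly; a sketch with a two-dimensional
histogram standing in for P[x, y] (bin counts and field ranges are arbitrary choices).

    import numpy as np

    def order_parameters(J, B, xi, bins=60, rng=((-4, 4), (-4, 4))):
        # Measure Q, R and a histogram estimate of P[x, y] on the training set.
        Q = float(J @ J)
        R = float(J @ B)
        x = xi @ J                    # student fields on the training examples
        y = xi @ B                    # teacher fields on the training examples
        P, x_edges, y_edges = np.histogram2d(x, y, bins=bins,
                                             range=list(rng), density=True)
        return Q, R, P, x_edges, y_edges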
This choice is motivated as follows: (i) in order to incorporate the old formalism we need
Q[J] and R[J], (ii) the training error involves field statistics calculated over the training set,
as given by P[x,y; J], and (iii) for α < ∞ one cannot expect closed equations for a finite
number of order parameters; the present choice effectively represents an infinite number.
We will assume the number of arguments (x, y) for which P[x,y; J] is evaluated to go to
infinity after the limit N → ∞ has been taken. This eliminates technical subtleties and
allows us to show that in the Fokker-Planck equation all diffusion terms vanish as N → ∞.
The latter thereby reduces to a Liouville equation, describing deterministic evolution of our
macroscopic observables. For on-line learning one arrives at
d/dt Q = 2η ∫dx dy P[x,y] x G[x;y] + η² ∫dx dy P[x,y] G²[x;y]    (5)

d/dt R = η ∫dx dy P[x,y] y G[x;y]    (6)

d/dt P[x,y] = (1/α) [ ∫dx' P[x',y] δ[x − x' − ηG[x',y]] − P[x,y] ]
             − η ∂/∂x ∫dx' dy' G[x',y'] A[x,y; x',y']
             + (1/2) η² { ∫dx' dy' P[x',y'] G²[x',y'] } ∂²/∂x² P[x,y]    (7)
Expansion of these equations in powers of η, retaining only the terms linear in η, gives
the corresponding equations describing batch learning. The complexity of the problem is
fully concentrated in a Green's function A[x,y; x',y'], which is defined as

A[x,y; x',y'] = lim_{N→∞} ⟨⟨⟨ [1 − δ_{ξξ'}] δ[x − J·ξ] δ[y − B·ξ] (ξ·ξ') δ[x' − J·ξ'] δ[y' − B·ξ'] ⟩_D̃ ⟩_D̃ ⟩_{Ω;t}.
It involves a sub-shell average, in which p_t(J) is the weight probability density at time t:
⟨K[J]⟩_{Ω;t} = ∫dJ K[J] p_t(J) δ[Q − Q[J]] δ[R − R[J]] ∏_{xy} δ[P[x,y] − P[x,y; J]]
              / ∫dJ p_t(J) δ[Q − Q[J]] δ[R − R[J]] ∏_{xy} δ[P[x,y] − P[x,y; J]],
where the sub-shells are defined with respect to the order parameters. The solution of
(5,6,7) can be used to generate the errors of (3):
1
(8)
Eg = - arccos[R/ JQ]
E t = / dxdy P[x,y]O[-xy]
7r
3
CLOSURE VIA DYNAMICAL REPLICA THEORY
So far our analysis is still exact. We now close the macroscopic laws (5,6,7) by making, for
N ~ 00, the two key assumptions underlying dynamical replica theory [7]:
(i) Our macroscopic observables {Q, R, P} obey closed dynamic equations.
(ii) These equations are self-averaging with respect to the realisation of jj.
(i) implies that probability variations within the {Q, R, P} subshells are either absent or
irrelevant to the evolution of {Q, R, P} . We may thus make the simplest choice for Pt (J):
Pt(J) ~ p(J) '" 8[Q- Q[J]] 8[R-R[J))
IT 8[P[x, y] -P[x, y; J))
xy
(9)
p(J) depends on time implicitly, via the order parameters {Q, R, P}. The procedure (9)
leads to exact laws if our observables {Q, R, P} indeed obey closed equations for N → ∞.
It gives an approximation if they don't. (ii) allows us to average the macroscopic laws over
all training sets; it is observed in numerical simulations, and can probably be proven using
the formalism of [6]. Our assumptions result in the closure of (5,6,7), since now A[...] is
expressed fully in terms of {Q, R, P}. The final ingredient of dynamical replica theory is
the realization that averaging fractions is simplified with the replica identity [8]:
⟨ ∫dJ W[J, D̃] G[J, D̃] / ∫dJ W[J, D̃] ⟩_D̃ = lim_{n→0} ∫dJ¹ ... dJⁿ ⟨ G[J¹, D̃] ∏_{α=1}^{n} W[J^α, D̃] ⟩_D̃.
What remains is to perform integrations. One finds that P[x,y] = P[x|y] P[y] with P[y] =
(2π)^{-1/2} e^{-y²/2}. Upon introducing the short-hands Dy = (2π)^{-1/2} e^{-y²/2} dy and ⟨f(x,y)⟩ =
∫Dy dx P[x|y] f(x,y), we can write the resulting macroscopic laws as follows:
d/dt Q = 2ηV + η²Z,    d/dt R = ηW    (10)

∂/∂t P[x|y] = (1/α) ∫dx' P[x'|y] { δ[x − x' − ηG[x',y]] − δ[x − x'] } + (1/2) η² Z ∂²/∂x² P[x|y]
             − η ∂/∂x { P[x|y] [ U(x − Ry) + Wy + [V − RW − (Q − R²)U] Φ[x,y] ] }    (11)

with

U = ⟨Φ[x,y] G[x,y]⟩,    V = ⟨x G[x,y]⟩,    W = ⟨y G[x,y]⟩,    Z = ⟨G²[x,y]⟩.
As before, the batch equations follow upon expanding in η and retaining only the linear
terms. Finding the function Φ[x,y] (in replica symmetric ansatz) requires solving a saddle-point problem for a scalar observable q and a function M[x|y]. Upon introducing

B = √(qQ − R²) / [Q(1 − q)],    ⟨f[x,y,z]⟩_* = ∫dx M[x|y] e^{Bxz} f[x,y,z] / ∫dx M[x|y] e^{Bxz}

(with ∫dx M[x|y] = 1 for all y), the saddle-point equations acquire the form
for all x, y:    P[x|y] = ∫Dz ⟨δ[x − x̄]⟩_*,

⟨(x − Ry)²⟩ + (qQ − R²)[1 − 2/α] = [Q(1 + q) − 2R²] ⟨x Φ[x,y]⟩.
The solution M[x|y] of the functional saddle-point equation, given a value for q in the
physical range q ∈ [R²/Q, 1], is unique [9]. The function Φ[x,y] is then given by

Φ[x,y] = { √(qQ − R²) P[x|y] }^{-1} ∫Dz z ⟨δ[x − x̄]⟩_*    (12)

4 THE LIMIT α → ∞
For consistency we show that our theory reduces to the simple (Q, R) formalism of infinite
training sets in the limit α → ∞. Upon making the ansatz

P[x|y] = [2π(Q − R²)]^{-1/2} e^{-(1/2)(x−Ry)²/(Q−R²)},

one finds that the saddle-point equations are simultaneously and uniquely solved by

M[x|y] = P[x|y],    q = R²/Q,

and Φ[x,y] reduces to

Φ[x,y] = (x − Ry)/(Q − R²).
Insertion of our ansatz into equation (11), followed by rearranging of terms and usage of the
above expression for Φ[x,y], shows that this equation is satisfied. Thus from our general
theory we indeed recover for α → ∞ the standard theory for infinite training sets.
Figure 2: Simulation results for on-line Hebbian learning (system size N = 10,000) versus an approximate solution of the equations generated by dynamical replica theory (see
main text), for α ∈ {0.25, 0.5, 1.0, 2.0, 4.0}. Upper five curves: E_g as functions of time.
Lower five curves: E_t as functions of time. Circles: simulation results for E_g; diamonds:
simulation results for E_t. Solid lines: the corresponding theoretical predictions.
5 BENCHMARK TESTS: HEBBIAN LEARNING
Batch Hebbian Learning
For the Hebbian rule, where G[x,y] = sgn(y), one can calculate our order parameters
exactly at any time, even for α < ∞ [10], which provides an excellent benchmark for
general theories such as ours. For batch execution all integrations in our present theory can
be done and all equations solved explicitly, and our theory is found to predict the following:
R = R₀ + ηt√(2/π),    Q = Q₀ + 2ηtR₀√(2/π) + η²t² [2/π + 1/α]    (13)

P[x|y] = exp( −(1/2)[x − Ry − (ηt/α) sgn(y)]² / (Q − R²) ) / √(2π(Q − R²))    (14)

E_g = (1/π) arccos[R/√Q],    E_t = 1/2 − (1/2) ∫Dy erf[ (|y|R + ηt/α) / √(2(Q − R²)) ]    (15)
Comparison with the exact solution, calculated along the lines of [10] (where this was done
for on-line Hebbian learning) shows that the above expressions are all rigorously exact.
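The closed-form batch curves are straightforward to evaluate; in the sketch below the Dy
integral in (15) is done on a Gaussian grid, which is a numerical convenience rather than
part of the theory.

    import numpy as np
    from scipy.special import erf

    def batch_hebbian_errors(t, alpha, eta=1.0, Q0=1.0, R0=0.0):
        # Evaluate eqs. (13)-(15) for batch Hebbian learning.
        c = np.sqrt(2.0 / np.pi)
        R = R0 + eta * t * c
        Q = Q0 + 2 * eta * t * R0 * c + (eta * t) ** 2 * (2 / np.pi + 1 / alpha)
        E_g = np.arccos(R / np.sqrt(Q)) / np.pi
        y = np.linspace(-8.0, 8.0, 4001)
        Dy = np.exp(-0.5 * y ** 2) / np.sqrt(2 * np.pi)     # Gaussian measure
        arg = (np.abs(y) * R + eta * t / alpha) / np.sqrt(2 * (Q - R ** 2))
        E_t = 0.5 - 0.5 * np.trapz(Dy * erf(arg), y)
        return E_g, E_t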
On-Line Hebbian Learning
For on-line execution we cannot (yet) solve the functional saddle-point equation analytically. However, some explicit analytical predictions can still be extracted [9]:

R = R₀ + ηt√(2/π),    Q = Q₀ + 2ηtR₀√(2/π) + η²t + η²t² [2/π + 1/α]    (16)

∫dx x P[x|y] = Ry + (ηt/α) sgn(y)    (17)

P[x|y] ≈ [α/(2πη²t²)]^{1/2} exp[ −α(x − Ry − (ηt/α) sgn(y))² / (2η²t²) ]    (t → ∞)    (18)
A. C. C. Coolen and D. Saad
202
11- 2.0
,-50
... 1.0
.. 50
lO r
10 ,
? ?~;??:~::
I
i
,, ~
10
V
>0-
,v
t
.c. Jj
.'
?
':;,
~.
..';
,
00
f
_10
l
_I 0
-1 0 to
-10
..... 0 1 _ _ ~-'--~_-'- __ .. _ .... _ "---o.--'---_~
~o
-100.0 -JOOoG _100 0
00
I0Il0
laII O *e
?10.0
.... O~~.....1~~~....0 -_0
-1000
0.0
1000
lUIO
...
.000
00
..
f
',1?
,
, '-
,, ~ ?
-J 0
, ',
':c.:
~
; ,.
-20010
~
~
I
?
-_.0
I
"'O ~""""'--.l.....-.....--~ ? &-
~o
?
. JGO.lt
-DO
-1t00
00
?
100.0
200.0
......
)010
..
dO
Figure 3: Simulation results for on-line Hebbian learning (N = 10,000) versus dynamical
replica theory, for a E {2.0, 1.0, 0.5}. Dots: local fields (x, y) = (J?e, B ?e) (calculated for
examples in the training set), at time t = 50. Dashed lines: conditional average of student
field x as a function ofy, as predicted by the theory, x(y) = Ry + (."t/a) sgn(y).
0 01
.?.? - -
-
OOlS . .. .
,
?
-
-
'.
?
001'
- - ..
.""
Figure 4: Simulations of Hebbian on-line learning with N = 10,000. Histograms: student
field distributions measured at t 10 and t 20. Lines: theoretical predictions for student
field distributions (using the approximate solution of the diffusion equation, see main text),
for a=4 (left), a= 1 (middle), a=0.25 (right).
=
=
Comparison with the exact result of [ 10] shows that the above expressions (16,17,18), and
therefore also that of Eg at any time, are all rigorously exact.
At intermediate times it turns out that a good approximation ofthe solution of our dynamic
equations for on-line Hebbian learning (exact for t ? a and for t -+ 00) is given by
e- Hz:-RY-('1t / a ) sgn(y))2/(Q-R2+'1 2t / a )
P[xly] =
Eg
= ~" arccos [ V~
. ~]
J27r(Q - R2 + .,,2t/a)
Et
= ~2 - ~2 !DY erf [ J2(Q-R2_.,,2t/a)
lyIR+."t/a
1
(19)
(20)
In Figure 2 we compare the approximate predictions (20) with the results obtained from
numerical simulations (N = 10,000, Qo = 1, Ro = 0, ." = 1). All curves show excellent
agreement between theory and experiment. We also compare the theoretical predictions for
the distribution P[xly] with the results of numerical simulations. This is done in Figure 3
where we show the fields as observed at t
50 in simulations (same parameters as in
Figure 2) of on-line Hebbian learning, for three different values of a. In the same figure
we draw (dashed lines) the theoretical prediction for the y-dependent average (17) of the
conditional x-distribution P[xly]. Finally we compare the student field distribution P[x] =
=
Dynamics of Supervised Learning with Restricted Training Sets
203
J
Dy P[xly] according to (19) with that observed in numerical simulations, see Figure 4.
The agreement is again excellent (note: here the learning process has almost equilibrated).
6
DISCUSSION
In this paper we have shown how the formalism of dynamical replica theory [7] can be used
successfully to build a general theory with which to predict the evolution of the relevant
macroscopic performance measures, including the training- and generalisation errors, for
supervised (on-line and batch) learning in layered neural networks with randomly composed but restricted training sets (i.e. for finite a = piN). Here the student fields are
no longer described by Gaussian distributions, and the more familiar statistical mechanical
formalism breaks down. For simplicity and transparency we have restricted ourselves to
single-layer systems and realizable tasks. In our approach the joint distribution P[x, y] for
student and teacher fields is itself taken to be a dynamical order parameter, in addition to
the conventional observables Q and R. From the order parameter set {Q, R, P}, in turn,
we derive both the generalization error Eg and the training error E t . Following the prescriptions of dynamical replica theory one finds a diffusion equation for P[x, y], which we
have evaluated by making the replica-symmetric ansatz in the saddle-point equations. This
equation has Gaussian solutions only for a -+ 00; in the latter case we indeed recover
correctly from our theory the more familiar formalism of infinite training sets, with closed
equations for Q and R only. For finite a our theory is by construction exact if for N -+ 00
the dynamical order parameters {Q, R, P} obey closed, deterministic equations, which are
self-averaging (i.e. independent of the microscopic realization of the training set). If this is
not the case, our theory is an approximation.
We have worked out our general equations explicitly for the special case of Hebbian learning, where the existence of an exact solution [10], derived from the microscopic equations
(for finite a), allows us to perform a critical test of our theory. Our theory is found to be
fully exact for batch Hebbian learning. For on-line Hebbian learning full exactness is difficult to determine, but exactness can be establised at least for (i) t -+ 00, (ii) the predictions
for Q, R, Eg and x(y) = Jdx xP[xly] at any time. A simple approximate solution of our
equations already shows excellent agreement between theory and experiment. The present
study clearly represents only a first step, and many extensions, applications and generalizations are currently under way. More specifically, we study alternative learning rules as well
as the extension of this work to the case of noisy data and of soft committee machines.
References
[I] Kinzel W. and Rujan P. (1990), Europhys. Lett. 13,473
[2] Kinouchi o. and Caticha N. (1992).1. Phys. A: Math. Gen. 25,6243
[3] Biehl M. and Schwarze H. (1992), Europhys. Lett. 20,733
Biehl M. and Schwarze H. (1995),1. Phys. A: Math. Gen. 28,643
[4] Saad D. and Solla S. (1995), Phys. Rev. Lett. 74,4337
[5] Mace C.W.H. and Coolen AC.C (1998), Statistics and Computing 8,55
[6] Horner H. (1992a), Z. Phys. B 86, 291
Horner H. (1992b), Z. Phys. B 87,371
[7] Coolen AC.C., Laughton S.N. and Sherrington D. (1996), Phys. Rev. B 53, 8184
[8] Mezard M., Parisi G. and Virasoro M.A (1987), Spin-Glass Theory and Beyond (Singapore: World Scientific)
[9] Coolen AC.C. and Saad D. (1998), in preparation.
[10] Rae H.C., Sollich P. and Cool en A.C.C. (1998), these proceedings
| 1578 |@word middle:1 suitably:1 closure:3 simulation:11 x2p:1 thereby:1 solid:1 moment:1 xiy:2 ours:1 activation:1 yet:1 dx:7 must:1 laii:1 numerical:5 dydx:1 enables:1 update:2 tjw:1 short:1 iterates:1 provides:1 math:2 five:2 along:1 lyir:2 indeed:3 mechanic:1 ry:8 ol:1 underlying:1 what:1 q2:2 developed:1 finding:1 adatron:1 exactly:1 ro:5 uk:4 ly:1 appear:1 planck:2 before:1 local:4 limit:6 fluctuation:2 range:1 averaged:1 unique:1 yj:2 procedure:2 cannot:2 close:1 layered:4 conventional:1 deterministic:3 map:1 dz:2 modifies:1 go:1 l:1 simplicity:2 rule:5 deriving:1 jgo:1 j27r:2 variation:1 qq:1 pt:6 construction:1 exact:10 agreement:3 breakdown:1 observed:4 solved:2 calculate:1 ensures:1 solla:1 subshells:1 complexity:1 insertion:1 rigorously:2 dynamic:12 trained:1 depend:1 solving:2 upon:4 f2:1 observables:10 joint:1 emulate:1 kcl:1 london:2 europhys:2 larger:1 taylored:1 solve:1 biehl:2 statistic:2 erf:2 t00:1 itself:2 noisy:1 final:1 parisi:1 analytical:1 j2:3 relevant:2 realization:3 gen:2 generating:1 oo:1 develop:1 ac:5 derive:1 measured:1 progress:1 equilibrated:1 predicted:1 involves:2 implies:1 xly:21 cool:1 stochastic:1 sgn:10 ja:1 wc2r:1 generalization:3 extension:2 pl:1 sufficiently:1 viq:1 exp:1 predict:4 birmingham:1 coolen:7 currently:1 successfully:1 exactness:2 clearly:2 gaussian:6 tcoolen:1 derived:1 ebx:1 rigorous:1 realizable:1 glass:1 dependent:1 mth:1 hidden:1 jq:1 denoted:1 retaining:2 arccos:3 special:2 integration:2 mutual:1 edb:1 field:15 identical:1 represents:2 r7:1 t2:1 loon:1 realisation:1 few:1 randomly:2 composed:2 simultaneously:1 individual:1 familiar:2 ourselves:2 rae:1 arrives:1 parametrised:1 tj:3 xy:2 old:3 re:1 circle:1 theoretical:4 virasoro:1 formalism:9 soft:1 tp:1 introducing:2 successful:1 jdy:1 teacher:8 density:1 yl:2 ansatz:4 again:1 central:1 x9:1 satisfied:1 choose:1 leading:1 student:12 explicitly:2 depends:1 try:1 break:1 closed:6 analyze:1 recover:2 spin:1 ofthe:1 cc:1 itxy:2 phys:6 associated:1 lim:2 ok:1 dt:2 supervised:7 follow:1 evaluated:2 done:3 correlation:2 hand:1 qo:3 y9:1 schwarze:2 scientific:1 usage:2 evolution:6 analytically:1 symmetric:2 iteratively:1 eg:9 kinouchi:1 during:2 self:2 uniquely:1 sherrington:1 tn:1 functional:2 kinzel:1 physical:1 b4:1 consistency:1 mathematics:1 dj:2 dot:1 longer:3 recent:1 irrelevant:1 tjg:1 binary:1 dxdy:4 determine:1 dashed:2 ii:5 full:2 ofy:1 rj:1 reduces:3 transparency:1 hebbian:14 technical:1 characterized:2 prescription:1 ensuring:1 impact:1 prediction:7 histogram:1 addition:1 macroscopic:13 saad:6 eliminates:1 probably:1 hz:1 intermediate:1 iii:1 restrict:1 absent:1 whether:1 motivated:1 expression:3 jj:2 cornerstone:1 induces:1 concentrated:1 simplest:1 rw:1 generate:1 singapore:1 correctly:1 write:1 group:1 key:2 drawn:1 diffusion:4 replica:12 fraction:1 saadd:1 almost:1 jdx:2 draw:2 dy:6 jdj:4 layer:2 followed:1 infinity:1 worked:1 argument:1 o_:1 according:1 sollich:1 saddlepoint:1 rev:2 making:3 restricted:8 taken:2 equation:29 remains:1 pin:2 turn:3 describing:2 committee:1 obey:4 batch:9 alternative:1 existence:1 build:1 question:1 already:1 strategy:1 microscopic:3 acquire:1 difficult:1 tj2:1 perform:2 diamond:1 zf:1 upper:1 neuron:1 benchmark:2 finite:9 rn:1 jqq:2 mechanical:1 yll:1 learned:1 horner:2 beyond:1 dynamical:12 wy:1 regime:1 built:1 including:2 green:1 power:1 overlap:1 critical:1 natural:1 aston:2 ne:1 carried:1 text:3 review:1 laughton:1 law:7 fully:3 expect:1 proportional:1 proven:1 versus:2 ingredient:1 xp:2 lo:2 soon:1 free:1 guide:1 perceptron:2 
curve:3 calculated:3 lett:3 world:1 cumulative:1 made:2 simplified:1 caticha:1 far:2 functionals:1 approximate:4 observable:1 implicitly:1 don:1 expanding:1 rearranging:1 mace:1 expansion:1 excellent:4 erceptron:1 rujan:1 main:3 linearly:1 noise:1 en:1 sub:2 mezard:1 explicit:1 xl:1 vanish:2 ply:1 theorem:1 down:1 specific:1 r2:13 insignificant:1 incorporating:2 effectively:1 ools:1 execution:2 observabies:1 lt:1 saddle:5 infinitely:1 expressed:1 strand:1 scalar:1 subtlety:1 ch:2 fokker:2 extracted:1 shell:2 conditional:2 identity:1 king:1 infinite:5 generalisation:1 operates:2 specifically:1 averaging:3 called:1 college:1 latter:2 preparation:1 incorporate:1 dept:1 |
632 | 1,579 | Dynamics of Supervised Learning with
Restricted Training Sets
A.C.C. Coolen
Dept of Mathematics
King's College London
Strand, London WC2R 2LS, UK
tcoolen@mth.kcl.ac.uk
D. Saad
Neural Computing Research Group
Aston University
Birmingham B4 7ET, UK
saadd@aston.ac.uk
Abstract
We study the dynamics of supervised learning in layered neural networks, in the regime where the size p of the training set is proportional
to the number N of inputs. Here the local fields are no longer described
by Gaussian distributions. We use dynamical replica theory to predict
the evolution of macroscopic observables, including the relevant error
measures, incorporating the old formalism in the limit p/N → ∞.
1 INTRODUCTION
Much progress has been made in solving the dynamics of supervised learning in layered
neural networks, using the strategy of statistical mechanics: by deriving closed laws for the
evolution of suitably chosen macroscopic observables (order parameters) in the limit of an
infinite system size [1, 2, 3, 4]. For a recent review and guide to references see e.g. [5].
The main successful procedure developed so far is built on the following cornerstones:
• The task to be learned is defined by a 'teacher', which is itself a neural network. This induces a natural set of order parameters (mutual weight vector overlaps between the teacher and the trained, 'student', network).
• The number of network inputs is infinitely large. This ensures that fluctuations in the order parameters will vanish, and enables usage of the central limit theorem.
• The number of 'hidden' neurons is finite, in both teacher and student, ensuring a finite number of order parameters and an insignificant cumulative impact of the fluctuations.
• The size of the training set is much larger than the number of updates. Each example presented is now different from the previous ones, so that the local fields will have Gaussian distributions, leading to closure of the dynamic equations.
In this paper we study the dynamics of learning in layered networks with restricted training
sets, where the number p of examples scales linearly with the number N of inputs. Individual examples will now re-appear during the learning process as soon as the number of
weight updates made is of the order of p . Correlations will develop between the weights
Figure 1: Student and teacher fields (x, y) (see text) observed during numerical simulations
of on-line learning (learning rate η = 1) in a perceptron of size N = 10,000 at t = 50, using examples from a training set of size p = N/2. Left: Hebbian learning. Right: AdaTron
learning [5]. Both distributions are clearly non-Gaussian.
and the training set examples and the student's local fields (activations) will be described by
non-Gaussian distributions (see e.g. Figure 1). This leads to a breakdown of the standard
formalism: the field distributions are no longer characterized by a few moments, and the
macroscopic laws must now be averaged over realizations of the training set. The first rigorous study of the dynamics of learning with restricted training sets in non-linear networks,
via generating functionals [6], was carried out for networks with binary weights. Here we
use dynamical replica theory (see e.g. [7]) to predict the evolution of macroscopic observables for finite α, incorporating the old formalism as a special case (α = p/N → ∞). For
simplicity we restrict ourselves to single-layer systems and noise-free teachers.
2 FROM MICROSCOPIC TO MACROSCOPIC LAWS
A 'student' perceptron operates a rule which is parametrised by the weight vector J ∈ ℝ^N:

S: {−1,1}^N → {−1,1},    S(ξ) = sgn[J·ξ] ≡ sgn[x]    (1)
It tries to emulate a teacher perceptron which operates a similar rule, characterized by a (fixed) weight vector B ∈ ℝ^N. The student modifies its weight vector J iteratively, using examples of input vectors ξ which are drawn at random from a fixed (randomly composed) training set D̃ = {ξ¹, ..., ξᵖ} ⊂ D = {−1,1}^N, of size p = αN with α > 0, and the corresponding values of the teacher outputs T(ξ) = sgn[B·ξ] ≡ sgn[y]. Averages over the training set D̃ and over the full set D will be denoted as ⟨φ(ξ)⟩_D̃ and ⟨φ(ξ)⟩_D, respectively. We will analyze the following two classes of learning rules:
on-line:   J(m+1) = J(m) + (η/N) ξ(m) g[J(m)·ξ(m), B·ξ(m)]
batch:     J(m+1) = J(m) + (η/N) ⟨ξ g[J(m)·ξ, B·ξ]⟩_D̃    (2)
In on-line learning one draws at each step m a question ξ(m) at random from the training set; the dynamics is a stochastic process. In batch learning one iterates a deterministic map.
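
As a concrete illustration (a sketch, not the authors' code), the on-line rule can be simulated directly; the η/N step scaling of equation (2) and the Hebbian choice g[x,y] = sgn(y) used later as a benchmark are assumed here, and all sizes and rates are arbitrary:

```python
import numpy as np

# Illustrative sketch: on-line learning from a restricted training set
# (p = alpha*N examples reused throughout training). Assumes the eta/N
# step scaling of Eq. (2) and the Hebbian rule g[x,y] = sgn(y).
rng = np.random.default_rng(0)
N, alpha, eta, T = 1000, 0.5, 1.0, 20.0
p = int(alpha * N)

B = rng.standard_normal(N); B /= np.linalg.norm(B)   # teacher weights
J = np.zeros(N)                                      # student weights
xi = rng.choice([-1.0, 1.0], size=(p, N))            # fixed training set
y = np.sign(xi @ B)                                  # teacher outputs

for m in range(int(T * N)):                          # t = m/N
    k = rng.integers(p)                              # draw one stored example
    J += (eta / N) * xi[k] * y[k]                    # g[x,y] = sgn(y)

E_t = np.mean(np.sign(xi @ J) != y)                  # training error, Eq. (3)
xi_new = rng.choice([-1.0, 1.0], size=(2000, N))     # fresh examples from D
E_g = np.mean(np.sign(xi_new @ J) != np.sign(xi_new @ B))
print(f"E_t = {E_t:.3f}   E_g = {E_g:.3f}")
```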
Our key dynamical observables are the training- and generalization errors, defined as

E_g(J) = ⟨θ[−(J·ξ)(B·ξ)]⟩_D,    E_t(J) = ⟨θ[−(J·ξ)(B·ξ)]⟩_D̃    (3)
Only if the training set D̃ is sufficiently large, and if there are no correlations between J and the training set examples, will these two errors be identical. We now turn to macroscopic observables Ω[J] = (Ω₁[J], ..., Ω_k[J]). For N → ∞ (with finite times t = m/N and with finite k), and if our observables are of a so-called mean-field type, their associated macroscopic distribution P_t(Ω) is found to obey a Fokker-Planck type equation, with flow and diffusion terms that depend on whether on-line or batch learning is used. We now choose a specific set of observables Ω[J], tailored to the present problem:
Q[J] = J²,    R[J] = J·B,    P[x,y;J] = ⟨δ[x−J·ξ] δ[y−B·ξ]⟩_D̃    (4)
This choice is motivated as follows: (i) in order to incorporate the old formalism we need
Q[J] and R[J], (ii) the training error involves field statistics calculated over the training set, as given by P[x,y;J], and (iii) for α < ∞ one cannot expect closed equations for a finite number of order parameters; the present choice effectively represents an infinite number. We will assume the number of arguments (x,y) for which P[x,y;J] is evaluated to go to infinity after the limit N → ∞ has been taken. This eliminates technical subtleties and allows us to show that in the Fokker-Planck equation all diffusion terms vanish as N → ∞. The latter thereby reduces to a Liouville equation, describing deterministic evolution of our
macroscopic observables. For on-line learning one arrives at
dQ/dt = 2η ∫dx dy P[x,y] x g[x,y] + η² ∫dx dy P[x,y] g²[x,y]    (5)

dR/dt = η ∫dx dy P[x,y] y g[x,y]    (6)

dP[x,y]/dt = (1/α) [ ∫dx' P[x',y] δ[x−x'−η g[x',y]] − P[x,y] ]
    − η (∂/∂x) ∫dx'dy' g[x',y'] A[x,y;x',y']
    + ½ η² ∫dx'dy' P[x',y'] g²[x',y'] ∂²P[x,y]/∂x²    (7)
Expansion of these equations in powers of η, and retaining only the terms linear in η, gives the corresponding equations describing batch learning. The complexity of the problem is fully concentrated in a Green's function A[x,y;x',y'], which is defined as

A[x,y;x',y'] = lim_{N→∞} ⟨⟨⟨ (1−δ_{ξξ'}) δ[x−J·ξ] δ[y−B·ξ] (ξ·ξ'/N) δ[x'−J·ξ'] δ[y'−B·ξ'] ⟩_D̃ ⟩_D̃ ⟩_{Ω;t}
It involves a sub-shell average, in which Pt (J) is the weight probability density at time t:
⟨K[J]⟩_{Ω;t} = ∫dJ K[J] p_t(J) δ[Q−Q[J]] δ[R−R[J]] ∏_{xy} δ[P[x,y]−P[x,y;J]] / ∫dJ p_t(J) δ[Q−Q[J]] δ[R−R[J]] ∏_{xy} δ[P[x,y]−P[x,y;J]]
where the sub-shells are defined with respect to the order parameters. The solution of
(5,6,7) can be used to generate the errors of (3):
E_g = (1/π) arccos[R/√Q],    E_t = ∫dx dy P[x,y] θ[−xy]    (8)
3  CLOSURE VIA DYNAMICAL REPLICA THEORY
So far our analysis is still exact. We now close the macroscopic laws (5,6,7) by making, for
N → ∞, the two key assumptions underlying dynamical replica theory [7]:
(i) Our macroscopic observables {Q, R, P} obey closed dynamic equations.
(ii) These equations are self-averaging with respect to the realisation of D̃.
(i) implies that probability variations within the {Q, R, P} subshells are either absent or
irrelevant to the evolution of {Q, R, P} . We may thus make the simplest choice for Pt (J):
P_t(J) ≈ p(J) ∝ δ[Q−Q[J]] δ[R−R[J]] ∏_{xy} δ[P[x,y]−P[x,y;J]]    (9)
p(J) depends on time implicitly, via the order parameters {Q, R, P}. The procedure (9) leads to exact laws if our observables {Q, R, P} indeed obey closed equations for N → ∞.
It gives an approximation if they don't. (ii) allows us to average the macroscopic laws over
all training sets; it is observed in numerical simulations, and can probably be proven using
the formalism of [6]. Our assumptions result in the closure of (5,6,7), since now A[ . .. ] is
expressed fully in terms of {Q, R, P} . The final ingredient of dynamical replica theory is
the realization that averaging fractions is simplified with the replica identity [8]
⟨ ∫dJ W[J,z] G[J,z] / ∫dJ W[J,z] ⟩_z = lim_{n→0} ∫dJ¹ ··· dJⁿ ⟨ G[J¹,z] ∏_{a=1}^{n} W[J^a,z] ⟩_z
What remains is to perform integrations. One finds that P[x,y] = P[x|y] P[y] with P[y] = (2π)^{−1/2} e^{−½y²}. Upon introducing the short-hands Dy = (2π)^{−1/2} e^{−½y²} dy and ⟨f(x,y)⟩ = ∫Dy dx P[x|y] f(x,y) we can write the resulting macroscopic laws as follows:
dQ/dt = 2ηV + η²Z,    dR/dt = ηW    (10)

∂P[x|y]/∂t = (1/α) ∫dx' P[x'|y] { δ[x−x'−η g[x',y]] − δ[x−x'] } + ½ η² Z ∂²P[x|y]/∂x²
    − η (∂/∂x) { P[x|y] [ U(x−Ry) + Wy + [V − RW − (Q−R²)U] Φ[x,y] ] }    (11)

with

U = ⟨Φ[x,y] g[x,y]⟩,    V = ⟨x g[x,y]⟩,    W = ⟨y g[x,y]⟩,    Z = ⟨g²[x,y]⟩
As before the batch equations follow upon expanding in η and retaining only the linear terms. Finding the function Φ[x,y] (in replica symmetric ansatz) requires solving a saddle-point problem for a scalar observable q and a function M[x|y]. Upon introducing

B = √(qQ−R²) / [Q(1−q)],    ⟨f[x,y,z]⟩_* = ∫dx M[x|y] e^{Bxz} f[x,y,z] / ∫dx M[x|y] e^{Bxz}

(with ∫dx M[x|y] = 1 for all y) the saddle-point equations acquire the form
for all X, y:

P[X|y] = ∫Dz ⟨δ[X−x]⟩_*,    ⟨(x−Ry)²⟩ + (qQ−R²)[1 − 2/α] = [Q(1+q) − 2R²] ⟨x Φ[x,y]⟩
The solution M[x|y] of the functional saddle-point equation, given a value for q in the physical range q ∈ [R²/Q, 1], is unique [9]. The function Φ[x,y] is then given by

Φ[X,y] = { √(qQ−R²) P[X|y] }⁻¹ ∫Dz z ⟨δ[X−x]⟩_*    (12)

4  THE LIMIT α → ∞
For consistency we show that our theory reduces to the simple (Q,R) formalism of infinite training sets in the limit α → ∞. Upon making the ansatz

P[x|y] = [2π(Q−R²)]^{−1/2} e^{−½[x−Ry]²/(Q−R²)}

one finds that the saddle-point equations are simultaneously and uniquely solved by

M[x|y] = P[x|y],    q = R²/Q

and Φ[x,y] reduces to

Φ[x,y] = (x−Ry)/(Q−R²)

Insertion of our ansatz into equation (11), followed by rearranging of terms and usage of the above expression for Φ[x,y], shows that this equation is satisfied. Thus from our general theory we indeed recover for α → ∞ the standard theory for infinite training sets.
Figure 2: Simulation results for on-line Hebbian learning (system size N = 10,000) versus an approximate solution of the equations generated by dynamical replica theory (see main text), for α ∈ {0.25, 0.5, 1.0, 2.0, 4.0}. Upper five curves: E_g as functions of time. Lower five curves: E_t as functions of time. Circles: simulation results for E_g; diamonds: simulation results for E_t. Solid lines: the corresponding theoretical predictions.
5  BENCHMARK TESTS: HEBBIAN LEARNING

Batch Hebbian Learning

For the Hebbian rule, where g[x,y] = sgn(y), one can calculate our order parameters exactly at any time, even for α < ∞ [10], which provides an excellent benchmark for general theories such as ours. For batch execution all integrations in our present theory can be done and all equations solved explicitly, and our theory is found to predict the following:
R = R₀ + ηt√(2/π),    Q = Q₀ + 2ηtR₀√(2/π) + η²t² [2/π + 1/α]

P[x|y] = exp(−½[x − Ry − (ηt/α) sgn(y)]² / (Q−R²)) / √(2π(Q−R²))

E_g = (1/π) arccos[R/√Q]    (14)

E_t = ½ − ½ ∫Dy erf[ (|y|R + ηt/α) / √(2(Q−R²)) ]    (15)
Comparison with the exact solution, calculated along the lines of [10] (where this was done
for on-line Hebbian learning) shows that the above expressions are all rigorously exact.
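
These closed-form batch predictions are easy to evaluate numerically; the following sketch (not from the paper) performs the Gaussian measure ∫Dy by simple quadrature, with arbitrary parameter choices:

```python
import numpy as np
from scipy.special import erf

# Sketch: evaluate the batch Hebbian predictions for E_g and E_t above.
def batch_errors(t, eta=1.0, alpha=0.5, Q0=1.0, R0=0.0):
    R = R0 + eta * t * np.sqrt(2 / np.pi)
    Q = Q0 + 2 * eta * t * R0 * np.sqrt(2 / np.pi) \
           + eta**2 * t**2 * (2 / np.pi + 1 / alpha)
    Eg = np.arccos(R / np.sqrt(Q)) / np.pi
    y = np.linspace(-8.0, 8.0, 4001)                 # Gaussian measure Dy
    Dy = np.exp(-y**2 / 2) / np.sqrt(2 * np.pi)
    g = erf((np.abs(y) * R + eta * t / alpha) / np.sqrt(2 * (Q - R**2)))
    Et = 0.5 - 0.5 * np.sum(Dy * g) * (y[1] - y[0])  # quadrature for Int Dy
    return Eg, Et

for t in (0.0, 1.0, 5.0, 20.0):
    print(t, batch_errors(t))
```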
On-Line Hebbian Learning
For on-line execution we cannot (yet) solve the functional saddle-point equation analytically. However, some explicit analytical predictions can still be extracted [9] :
R = R₀ + ηt√(2/π),    Q = Q₀ + 2ηtR₀√(2/π) + η²t + η²t² [2/π + 1/α]    (16)

∫dx x P[x|y] = Ry + (ηt/α) sgn(y)    (17)

P[x|y] ≈ [α/(2πη²t²)]^{1/2} exp[ −α(x − Ry − (ηt/α) sgn(y))² / (2η²t²) ]    (t → ∞)    (18)
Figure 3: Simulation results for on-line Hebbian learning (N = 10,000) versus dynamical replica theory, for α ∈ {2.0, 1.0, 0.5}. Dots: local fields (x,y) = (J·ξ, B·ξ) (calculated for examples in the training set), at time t = 50. Dashed lines: conditional average of student field x as a function of y, as predicted by the theory, x̄(y) = Ry + (ηt/α) sgn(y).
Figure 4: Simulations of Hebbian on-line learning with N = 10,000. Histograms: student field distributions measured at t = 10 and t = 20. Lines: theoretical predictions for student field distributions (using the approximate solution of the diffusion equation, see main text), for α = 4 (left), α = 1 (middle), α = 0.25 (right).
Comparison with the exact result of [10] shows that the above expressions (16,17,18), and
therefore also that of Eg at any time, are all rigorously exact.
At intermediate times it turns out that a good approximation of the solution of our dynamic equations for on-line Hebbian learning (exact for t ≪ α and for t → ∞) is given by

P[x|y] = exp(−½[x − Ry − (ηt/α) sgn(y)]² / (Q − R² + η²t/α)) / √(2π(Q − R² + η²t/α))    (19)

E_g = (1/π) arccos[R/√Q],    E_t = ½ − ½ ∫Dy erf[ (|y|R + ηt/α) / √(2(Q − R² − η²t/α)) ]    (20)
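
The approximate on-line predictions (19,20) can be evaluated in the same way; a sketch (not from the paper), with R and Q taken from (16) and parameters otherwise arbitrary:

```python
import numpy as np
from scipy.special import erf

# Sketch: evaluate the approximate on-line predictions (19,20).
def online_errors(t, eta=1.0, alpha=1.0, Q0=1.0, R0=0.0):
    R = R0 + eta * t * np.sqrt(2 / np.pi)
    Q = Q0 + 2 * eta * t * R0 * np.sqrt(2 / np.pi) + eta**2 * t \
           + eta**2 * t**2 * (2 / np.pi + 1 / alpha)
    Eg = np.arccos(R / np.sqrt(Q)) / np.pi
    y = np.linspace(-8.0, 8.0, 4001)
    Dy = np.exp(-y**2 / 2) / np.sqrt(2 * np.pi)
    g = erf((np.abs(y) * R + eta * t / alpha)
            / np.sqrt(2 * (Q - R**2 - eta**2 * t / alpha)))
    Et = 0.5 - 0.5 * np.sum(Dy * g) * (y[1] - y[0])
    return Eg, Et
```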
In Figure 2 we compare the approximate predictions (20) with the results obtained from
numerical simulations (N = 10,000, Q₀ = 1, R₀ = 0, η = 1). All curves show excellent agreement between theory and experiment. We also compare the theoretical predictions for the distribution P[x|y] with the results of numerical simulations. This is done in Figure 3 where we show the fields as observed at t = 50 in simulations (same parameters as in Figure 2) of on-line Hebbian learning, for three different values of α. In the same figure we draw (dashed lines) the theoretical prediction for the y-dependent average (17) of the conditional x-distribution P[x|y]. Finally we compare the student field distribution P[x] = ∫Dy P[x|y] according to (19) with that observed in numerical simulations, see Figure 4.
The agreement is again excellent (note: here the learning process has almost equilibrated).
6  DISCUSSION
In this paper we have shown how the formalism of dynamical replica theory [7] can be used
successfully to build a general theory with which to predict the evolution of the relevant
macroscopic performance measures, including the training- and generalisation errors, for
supervised (on-line and batch) learning in layered neural networks with randomly composed but restricted training sets (i.e. for finite α = p/N). Here the student fields are
no longer described by Gaussian distributions, and the more familiar statistical mechanical
formalism breaks down. For simplicity and transparency we have restricted ourselves to
single-layer systems and realizable tasks. In our approach the joint distribution P[x, y] for
student and teacher fields is itself taken to be a dynamical order parameter, in addition to
the conventional observables Q and R. From the order parameter set {Q, R, P}, in turn,
we derive both the generalization error E_g and the training error E_t. Following the prescriptions of dynamical replica theory one finds a diffusion equation for P[x,y], which we
have evaluated by making the replica-symmetric ansatz in the saddle-point equations. This
equation has Gaussian solutions only for α → ∞; in the latter case we indeed recover
correctly from our theory the more familiar formalism of infinite training sets, with closed
equations for Q and R only. For finite α our theory is by construction exact if for N → ∞
the dynamical order parameters {Q, R, P} obey closed, deterministic equations, which are
self-averaging (i.e. independent of the microscopic realization of the training set). If this is
not the case, our theory is an approximation.
We have worked out our general equations explicitly for the special case of Hebbian learning, where the existence of an exact solution [10], derived from the microscopic equations
(for finite α), allows us to perform a critical test of our theory. Our theory is found to be fully exact for batch Hebbian learning. For on-line Hebbian learning full exactness is difficult to determine, but exactness can be established at least for (i) t → ∞, (ii) the predictions for Q, R, E_g and x̄(y) = ∫dx x P[x|y] at any time. A simple approximate solution of our
equations already shows excellent agreement between theory and experiment. The present
study clearly represents only a first step, and many extensions, applications and generalizations are currently under way. More specifically, we study alternative learning rules as well
as the extension of this work to the case of noisy data and of soft committee machines.
References
[1] Kinzel W. and Rujan P. (1990), Europhys. Lett. 13, 473
[2] Kinouchi O. and Caticha N. (1992), J. Phys. A: Math. Gen. 25, 6243
[3] Biehl M. and Schwarze H. (1992), Europhys. Lett. 20, 733
    Biehl M. and Schwarze H. (1995), J. Phys. A: Math. Gen. 28, 643
[4] Saad D. and Solla S. (1995), Phys. Rev. Lett. 74, 4337
[5] Mace C.W.H. and Coolen A.C.C. (1998), Statistics and Computing 8, 55
[6] Horner H. (1992a), Z. Phys. B 86, 291
    Horner H. (1992b), Z. Phys. B 87, 371
[7] Coolen A.C.C., Laughton S.N. and Sherrington D. (1996), Phys. Rev. B 53, 8184
[8] Mezard M., Parisi G. and Virasoro M.A. (1987), Spin-Glass Theory and Beyond (Singapore: World Scientific)
[9] Coolen A.C.C. and Saad D. (1998), in preparation.
[10] Rae H.C., Sollich P. and Coolen A.C.C. (1998), these proceedings
633 | 158 | NEURAL NETWORKS THAT LEARN TO
DISCRIMINATE SIMILAR KANJI CHARACTERS
Yoshihiro Mori
Kazuhiko Yokosawa
ATR Auditory and Visual Perception Research Laboratories
2-1-61 Shiromi Higashiku Osaka 540 Japan
ABSTRACT
A neural network is applied to the problem of
recognizing Kanji characters. Using a backpropagation network learning algorithm, a three-layered, feed-forward
network is trained to
recognize similar handwritten Kanji characters. In
addition, two new methods are utilized to make
training effective. The recognition accuracy was
higher than that of conventional methods. An
analysis of connection weights showed that trained
networks can discern the hierarchical structure of
Kanji characters. This strategy of trained networks
makes high recognition accuracy possible. Our
results suggest that neural networks are very
effective for Kanji character recognition.
1 INTRODUCTION
Neural networks are applied to recognition tasks in many fields, with good results. In the field of letter recognition, networks have been made which recognize hand-written digits [Burr 1986] and complex printed Chinese characters [Ho 1988]. The performance of these networks has been better than that of conventional methods. However, these results are still rudimentary when we consider not only the large number of Kanji characters, but the distortion involved in hand-written characters. We are aiming to make a large-scale network that recognizes the 3000 Kanji characters commonly used in Japan. Since it is difficult for a single network to discriminate 3000 characters, our plan is to create a large-scale network by
assembling many smaller ones that would each be responsible for
recognizing only a small number of characters.
There are two issues concerning implementation of such a large-scale network: the ability of individual networks, and organizing the networks. As a first step, the ability of a small network to discriminate similar Kanji characters was investigated. We found that the learning speed and performance of networks are highly influenced by the environment (for instance, the order, number, and repetition of training samples). New methods of teaching the
environment are utilized to make learning effective.
2 NEW TYPES OF TEACHERS
2.1 PROBLEMS OF BACKPROPAGATION
The Backpropagation (BP) learning algorithm only teaches correct answers [Rumelhart 1986]. BP does not care about the recognition rate of each category. If we use ordinary BP in a situation of limited resources, and if there are both easy and difficult categories to learn in the training set, what happens is that the easier category uses up most of the resources in the early stages of training (Figure 1). Yet, for efficiency, the difficult category to learn should get more resources. This weakness of BP makes the learning time longer.

Two new methods are used to avoid this problem. In the real world, learning procedures (human) do not exist in isolation. There is also a learning environment. It is therefore natural, and even necessary, to devise teaching methods that incorporate environmental factors.
Figure 1: An easily learned category takes more resources (separation by BP versus the ideal separation).

Figure 2: Two New Methods (the learning procedure and backpropagation within the environment, i.e. the feature space of samples and its categories).
2.2 FIRST METHOD (REVIEW METHOD)
This method tracks the performance for each category. At first,
training is focused on categories that are not being recognized
well. After this, on a more fine-grained level, the error for each
sample is first checked, and the greater this error, the more often
that sample is presented (Figure 2). This leads to a more balanced
recognition over the categories.
2.3 SECOND METHOD (PREPARATION METHOD)
The second method, designed to prevent over-training, is to
increase the number of training samples when the network's
total error rate is observed to fall below a certain value (Figure
2).
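
A compact sketch of how both methods could sit in a single training loop is given below. This is illustrative only, not the authors' code: `train_step` and `sample_error` stand in for the backpropagation update and a per-sample error measure, and the threshold and growth schedule are invented for the example.

```python
import numpy as np

# Sketch: "review" (error-weighted presentation) plus "preparation"
# (growing training set) around an ordinary BP update.
def train(net, samples, targets, train_step, sample_error,
          n_start=10, grow_by=10, err_threshold=0.05, epochs=100, seed=0):
    rng = np.random.default_rng(seed)
    n_active = n_start                         # preparation: start small
    for _ in range(epochs):
        err = np.array([sample_error(net, samples[i], targets[i])
                        for i in range(n_active)])
        total = err.sum()
        # Review method: the greater a sample's error, the more often
        # it is presented.
        p = err / total if total > 0 else np.full(n_active, 1.0 / n_active)
        for i in rng.choice(n_active, size=n_active, p=p):
            train_step(net, samples[i], targets[i])
        # Preparation method: add samples once the total error rate
        # falls below the threshold.
        if err.mean() < err_threshold and n_active < len(samples):
            n_active = min(n_active + grow_by, len(samples))
    return net
```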
3 RECOGNITION EXPERIMENT
3.1 INPUT PATTERN AND NETWORK STRUCTURE
Kanji characters are composed of sub-characters called radicals
(Figure 3). The four Kanji characters used in our experiment are
shown in Figure 4. These characters are all combinations of two
kinds of left radicals and two kinds of right radicals. Visually,
these characters are similar and hence difficult to discriminate.
The training samples for this network were chosen from a
database of about 3000 Kanji characters [Saito 1985]. For each
character, there are 200 handwritten samples from different
writers. 100 are used as training samples, and the remaining 100
are used to test recognition accuracy of the trained network. All
samples in the database consist of 64 by 63 dots. If we were to use
this pattern as the input to our neural net, the number of units
required in the input layer would be too large for the
computational abilities of current computers. Therefore, two
kinds of feature vectors extracted from handwritten patterns are
used as the input. In one of the feature vectors, the "MESH
feature ", there are 64 dimensions computing the density of the 8
by 8 small squares into which handwritten samples are divided.
In the other, the "LDCD feature" [Hagita 1983], there are 256
dimensions computing a line segment length along four
directions - horizontal, vertical, and two diagonals - in the same small squares. In this experiment, we use a feed-forward neural network with three layers - an input layer, a hidden layer and an output layer. Each unit of the input layer is connected to all
units of the hidden layer, and each unit of the hidden layer is
connected to all units of the output layer.
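
As an illustration of the MESH feature, here is a minimal sketch (not the authors' code; padding the 64-by-63 images to 64-by-64 is our choice so the grid divides evenly):

```python
import numpy as np

# Sketch: the 64-dimensional MESH feature, i.e. ink density over an
# 8x8 grid of cells covering the character image.
def mesh_feature(image):
    img = np.zeros((64, 64))
    a = np.asarray(image, dtype=float)
    img[:min(64, a.shape[0]), :min(64, a.shape[1])] = a[:64, :64]
    cells = img.reshape(8, 8, 8, 8)          # (row block, row, col block, col)
    return cells.mean(axis=(1, 3)).ravel()   # 64 density values

sample = (np.random.default_rng(0).random((64, 63)) > 0.8).astype(int)
print(mesh_feature(sample).shape)            # (64,)
```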
Kanji for "Theory"
'-"
--o fIB,: I,,'
:fIB-:
.. - - - _..
,- - - - -I
F
I
, ........ ,
,
'
.....
: 0:
I
........
'
,,
,
,,
,
,
I
I
,
.. _ _ _ _ 4
'- - ... --'
Left radical Right radical
Fig. 3 Concept of Radical
Fig. 4
Example of Kanji Characters
...
5
~
0
/' t
"
:61
Figure 5.
1
11
0 >8
horizontal
component
LOCO Feature
3.2 RECOGNITION RESULTS (MESH VS. LDCD)
Average recognition rates when the MESH feature was used were
98.5% for training samples and 82.5% for testing samples.
Average recognition rates when the LDCD feature was used were 99.5% for training samples and 92.0% for testing samples. These recognition rates for neural networks were higher than for
conventional methods we used.
3.3 Recognition Rate & the Number of Samples
We gradually increased the number of training samples to
investigate the influence of this number on the recognition rate
of testing samples. Figure 6 shows the recognition rate of testing
samples for ten different amounts of training samples. When the
number of training samples is 2 or 3, the recognition rates are
lower than for 1 training sample. This result is probably due to
the fact that the second samples in each set are not well-written.
This result means that an average pattern should be used in the
early training period.
Figure 6: Recognition Rate and the Number of Training Samples (percent correct on testing samples versus the number of training samples per Kanji category, from 1 to 100 on a logarithmic scale).
3.4 ANALYSIS OF INNER REPRESENTATION
3.4.1 Weights vs. Difference Between Averaged Samples
To investigate how this neural network learns to solve the given
task, the weights vector from the input layer to each hidden unit
is compared to the difference between averaged samples with a
common radical. Since the four Kanji characters in this task are
all combinations of two kinds of left radicals and two kinds of
right radicals, two hidden units which take charge of left and
right radicals, respectively, are enough to accomplish
recognition. At first, 200 samples with the same left radical were
averaged. Since there are just two left radicals in the four Kanji
characters, this produced two averaged patterns. These two
patterns were then subtracted, yielding a pattern that
corresponds to the difference between the two left radicals. The
same method was used to obtain a pattern that corresponds to the
difference between the two right radicals. Then, for each of
these patterns, the correlation coefficient with the weights from
the input-layer to each hidden unit is calculated. The pattern for
left radicals was very highly correlated with hidden unit 1
(R=0.71,p<0.01), and not correlated with hidden unit 2. On the
other hand, the pattern for right radicals was very highly
correlated with hidden unit 2 (R=0.79,p<0.01), and not correlated
with hidden unit 1. In other words, each hidden unit is
discriminating among radicals of one particular side of the Kanji
characters.
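
The analysis just described can be expressed compactly; the following sketch is illustrative only (the array names and shapes are assumptions, not the authors' code):

```python
import numpy as np

# Sketch: correlate each hidden unit's input weights with the difference
# of radical-averaged patterns. `X` is (n_samples, d) feature vectors,
# `left` and `right` are 0/1 radical labels per sample, and `W` is the
# (d, n_hidden) input-to-hidden weight matrix.
def radical_correlations(X, left, right, W):
    d_left = X[left == 0].mean(0) - X[left == 1].mean(0)
    d_right = X[right == 0].mean(0) - X[right == 1].mean(0)
    return {side: [np.corrcoef(diff, W[:, h])[0, 1] for h in range(W.shape[1])]
            for side, diff in (("left", d_left), ("right", d_right))}
```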
3.4.2 Weights vs. Bayes Discrimination

The Bayes method is used as a discrimination function when the distribution of the categories is known. Supposing that the distribution of categories in this task is the normal distribution and the covariance matrix of each category is equal, the discrimination function becomes first order as given below:

f(X) = (μ_l − μ_r)ᵗ Σ⁻¹ X + c    (1)

Σ : covariance matrix with the same radical
μ_l : average vector with the same left radical
μ_r : average vector with the same right radical
X : input feature vector
c : constant
The input vector to the input layer is translated to a hidden unit
as follows.
y = W X + a    (2)

y : input sum
X : input feature vector
W : weights matrix from input layer to a hidden unit
a : threshold
Equation (2) is similar to equation (1). If the network uses a strategy similar to Bayes discrimination, there should be some correlation between the Bayes weights (μ_l − μ_r)ᵗ Σ⁻¹ in equation (1) and W in equation (2). When the correlation coefficient between the Bayes weights and the weights from the input layer to each hidden unit was calculated, there was no significant correlation between them (R=0.02, p>0.05). In other words, the network does not use a strategy like Bayes discrimination.
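
For comparison, a corresponding sketch of the Bayes-weight computation of equation (1) (again illustrative, not the authors' code; the ridge term is our addition to keep the covariance invertible):

```python
import numpy as np

# Sketch: Bayes weights for the left-radical split, correlated with each
# hidden unit's weights. `X`, `left`, and `W` are as in the previous sketch.
def bayes_correlations(X, left, W, ridge=1e-6):
    mu_l, mu_r = X[left == 0].mean(0), X[left == 1].mean(0)
    centered = X - np.where(left[:, None] == 0, mu_l, mu_r)
    Sigma = centered.T @ centered / len(X)          # pooled covariance
    w = np.linalg.solve(Sigma + ridge * np.eye(X.shape[1]), mu_l - mu_r)
    return [np.corrcoef(w, W[:, h])[0, 1] for h in range(W.shape[1])]
```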
4 CONCLUSION
For this experiment, we observed that the learning procedure is
influenced by the surrounding environment. With this fact in
mind, new methods were proposed to make training within a
learning process more effective. These methods lead to balanced
recognition rates over categories. The most important result from
this experiment is that a network trained with BP can perceive
that Kanji characters are composed of radicals. Based on this
ability, it is possible to estimate the number of units required for
the hidden-layer of a network. Such a network could then form
the building block of a large-scale network capable of
recognizing as many as the 3000 Kanji characters commonly used
in Japan.
Acknowledgments
We are grateful to Dr. Michio Umeda for his support and
encouragement. Special thanks to Kazuki Joe for the ideas he
provided in our many discussions and for his help in developing
simulation programs.
References
[Burr 1986] D.J. Burr, "A Neural Network Digit Recognizer", IEEE-SMC, 1621-1625, 1986.
[Ho 1988] A. Ho and W. Furmanski, "Pattern Recognition by Neural Network Model on Hypercubes", HCCA3-528.
[Rumelhart 1986] D.E. Rumelhart et al., "Parallel Distributed Processing", vol. 1, The MIT Press, 1986.
[Saito 1985] T. Saito, H. Yamada, K. Yamamoto, "On the Data Base ETL9 of Handprinted Characters in JIS Chinese Characters and Its Analysis", J68-D, 4, 757-764, 1985.
[Hagita 1983] N. Hagita, S. Naito, I. Masuda, "Recognition of Handprinted Chinese Characters by Global and Local Direction Contributivity Density-Feature", J66-D, 6, 722-729, 1983.
634 | 1,580 | Learning a Continuous Hidden Variable
Model for Binary Data
Daniel D. Lee
Bell Laboratories
Lucent Technologies
Murray Hill, NJ 07974
ddlee@bell-labs.com
Haim Sompolinsky
Racah Institute of Physics and
Center for Neural Computation
Hebrew University
Jerusalem, 91904, Israel
haim@fiz.huji.ac.il
Abstract
A directed generative model for binary data using a small number
of hidden continuous units is investigated. A clipping nonlinearity distinguishes the model from conventional principal components
analysis. The relationships between the correlations of the underlying continuous Gaussian variables and the binary output variables
are utilized to learn the appropriate weights of the network. The
advantages of this approach are illustrated on a translationally invariant binary distribution and on handwritten digit images.
Introduction
Principal Components Analysis (PCA) is a widely used statistical technique for representing data with a large number of variables [1]. It is based upon the assumption
that although the data is embedded in a high dimensional vector space, most of
the variability in the data is captured by a much lower dimensional manifold. In
particular for PCA, this manifold is described by a linear hyperplane whose characteristic directions are given by the eigenvectors of the correlation matrix with
the largest eigenvalues. The success of PCA and closely related techniques such as
Factor Analysis (FA) and PCA mixtures clearly indicate that much real world data
exhibit the low dimensional manifold structure assumed by these models [2, 3].
However, the linear manifold structure of PCA is not appropriate for data with
binary valued variables. Binary values commonly occur in data such as computer
bit streams, black-and-white images, on-off outputs of feature detectors, and electrophysiological spike train data [4]. The Boltzmann machine is a neural network
model that incorporates hidden binary spin variables, and in principle, it should be
able to model binary data with arbitrary spin correlations [5]. Unfortunately, the
Figure 1: Generative model for N-dimensional binary data using a small number
p of continuous hidden variables.
computational time needed for training a Boltzmann machine renders it impractical
for most applications.
In these proceedings, we present a model that uses a small number of continuous
hidden variables rather than hidden binary variables to capture the variability of
binary valued visible data. The generative model differs from conventional PCA
because it incorporates a clipping nonlinearity. The resulting spin configurations
have an entropy related to the number of hidden variables used, and the resulting
states are connected by small numbers of spin flips. The learning algorithm is particularly simple, and is related to PCA by a scalar transformation of the correlation
matrix.
Generative Model
Figure 1 shows a schematic diagram of the generative process. As in PCA, the
model assumes that the data is generated by a small number P of continuous hidden
variables y_i. Each of the hidden variables is assumed to be drawn independently
from a normal distribution with unit variance:
P(y_i) = exp(−y_i²/2)/√(2π)    (1)
The continuous hidden variables are combined using the feedforward weights W ij ,
and the N binary output units are then calculated using the sign of the feedforward
activations:

x_i = Σ_{j=1}^{P} W_ij y_j    (2)

s_i = sgn(x_i)    (3)
Since binary data is commonly obtained by thresholding, it seems reasonable that
a proper generative model should incorporate such a clipping nonlinearity. The
generative process is similar to that of a sigmoidal belief network with continuous
hidden units at zero temperature. The nonlinearity will alter the relationship between the correlations of the binary variables and the weight matrix W as described
below.
The real-valued Gaussian variables x_i are exactly analogous to the visible variables of conventional PCA. They lie on a linear hyperplane determined by the span of the matrix W, and their correlation matrix is given by:

C^{xx} = ⟨x xᵀ⟩ = W Wᵀ    (4)
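
A minimal sketch of this generative process (not from the paper; all sizes are arbitrary):

```python
import numpy as np

# Sketch: sample binary patterns from the clipped Gaussian model,
# equations (1)-(3), and form the correlation matrices.
rng = np.random.default_rng(0)
N, P, n = 16, 2, 10000
W = rng.standard_normal((N, P))           # feedforward weights

Y = rng.standard_normal((n, P))           # hidden variables, Eq. (1)
X = Y @ W.T                               # Gaussian activations, Eq. (2)
S = np.sign(X)                            # binary outputs, Eq. (3)

C_ss = S.T @ S / n                        # empirical binary correlations
C_xx = W @ W.T                            # Eq. (4): rank P << N
print(np.linalg.matrix_rank(C_xx),        # -> 2
      np.linalg.matrix_rank(C_ss))        # typically full rank N
```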
Figure 2: Binary spin configurations s_i in the vector space of continuous hidden variables y_j with P = 2 and N = 3.
By construction, the correlation matrix CXX has rank P which is much smaller
than the number of components N. Now consider the binary output variables
s_i = sgn(x_i). Their correlations can be calculated from the probability distribution of the Gaussian variables x_i:

(C^{ss})_{ij} = ⟨s_i s_j⟩ = ∫ ∏_k dx_k P(x) sgn(x_i) sgn(x_j)    (5)
where

P(x) = exp(−½ xᵀ (C^{xx})⁻¹ x) / √((2π)^N det C^{xx})    (6)
The integrals in Equation 5 can be done analytically, and yield the surprisingly
simple result:
(C^{ss})_{ij} = (2/π) sin⁻¹[ C^{xx}_{ij} / √(C^{xx}_{ii} C^{xx}_{jj}) ]    (7)
Thus, the correlations of the clipped binary variables CSS are related to the correlations of the corresponding Gaussian variables CXX through the nonlinear arcsine
function. The normalization in the denominator of the arcsine argument reflects the
fact that the sign function is unchanged by a scale change in the Gaussian variables.
Although the correlation matrix C^{ss} and the generating correlation matrix C^{xx} are
easily related through Equation 7, they have qualitatively very different properties.
In general, the correlation matrix CSS will no longer have the low rank structure of
CXX. As illustrated by the translationally invariant example in the next section, the
spectrum of CSS may contain a whole continuum of eigenvalues even though cxx
has only a few nonzero eigenvalues.
PCA is typically used for dimensionality reduction of real variables; can this model
be used for compressing the binary outputs Si? Although the output correlations
C^{ss} no longer display the low rank structure of the generating C^{xx}, a more appropriate measure of data compression is the entropy of the binary output states. Consider
how many of the 2^N possible binary states will be generated by the clipping process. The equation x_i = Σ_j W_ij y_j = 0 defines a P−1 dimensional hyperplane in the
P-dimensional state space of hidden variables Yj, which are shown as dashed lines
in Figure 2. These hyperplanes partition the half-space where Si = +1 from the
Figure 3: Translationally invariant binary spin distribution with N = 256 units.
Representative samples from the distribution are illustrated on the left, while the
eigenvalue spectrum of CSS and CXX are plotted on the right.
region where Si = -1. Each of the N spin variables will have such a dividing hyperplane in this P-dimensional state space, and all of these hyperplanes will generically
be unique. Thus, the total number of spin configurations s_i is determined by the number of cells bounded by N dividing hyperplanes in P dimensions. The number of such cells is approximately N^P for N ≫ P, a well-known result from perceptrons
[6]. To leading order for large N, the entropy of the binary states generated by this
process is then given by S = P log N. Thus, the entropy of the spin configurations
generated by this model is directly proportional to the number of hidden variables
P.
How is the topology of the binary spin configurations Si related to the PCA manifold structure of the continuous variables Xi? Each of the generated spin states is
represented by a polytope cell in the P dimensional vector space of hidden variables.
Each polytope has at least P + 1 neighboring polytopes which are related to it by a
single or small number of spin flips. Therefore, although the state space of binary
spin configurations is discrete, the continuous manifold structure of the underlying
Gaussian variables in this model is manifested as binary output configurations with
low entropy that are connected with small Hamming distances.
Translationally Invariant Example
In principle, the weights W could be learned by applying maximum likelihood to
this generative model; however, the resulting learning algorithm involves analytically intractable multi-dimensional integrals. Alternatively, approximations based
upon mean field theory or importance sampling could be used to learn the appropriate parameters [7]. However, Equation 7 suggests a simple learning rule that is also
approximate, but is much more computationally efficient [8]. First, the binary correlation matrix CSS is computed from the data. Then the empirical CSS is mapped
into the appropriate Gaussian correlation matrix using the nonlinear transformation: C^{xx} = sin(πC^{ss}/2). This results in a Gaussian correlation matrix where the
variances of the individual x_i are fixed at unity. The weights W are then calculated using the conventional PCA algorithm. The correlation matrix C^{xx} is diagonalized,
and the eigenvectors with the largest eigenvalues are used to form the columns of
W to yield the best low rank approximation C^{xx} ≈ W Wᵀ. Scaling the variables x_i
will result in a correlation matrix CXX with slightly different eigenvalues but with
the same rank.
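As a concrete illustration, the following NumPy sketch implements this rule end to end; the function name and the eigenvalue-based column scaling are our own choices, not taken from the text.

import numpy as np

def fit_clipped_gaussian(S, P):
    """Learn feedforward weights W for the clipped-Gaussian model.

    S : (T, N) array of observed binary samples with entries +/-1.
    P : number of hidden variables to retain.
    """
    # Empirical binary correlation matrix C^ss.
    Css = (S.T @ S) / S.shape[0]
    # Nonlinear map to the Gaussian correlation matrix: C^xx = sin(pi C^ss / 2).
    Cxx = np.sin(np.pi * Css / 2.0)
    # Conventional PCA step: diagonalize and keep the top-P eigenvectors.
    evals, evecs = np.linalg.eigh(Cxx)
    top = np.argsort(evals)[::-1][:P]
    # Scale columns so that W @ W.T is a best low-rank fit to C^xx.
    return evecs[:, top] * np.sqrt(np.maximum(evals[top], 0.0))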
The utility of this transformation is illustrated by the following simple example. Consider the distribution of N = 256 binary spins shown in Figure 3. Half of the spins are chosen to be positive, and the location of the positive bump is arbitrary under the periodic boundary conditions. Since the distribution is translationally invariant, the correlations C_ij depend only on the relative distance between spins |i − j|. The eigenvectors are the Fourier modes, and their eigenvalues correspond to their overlap with a triangle wave. The eigenvalue spectrum of C^ss is plotted in Figure 3, sorted by rank. In this particular case, the correlation matrix C^ss has N/2 positive eigenvalues with a corresponding range of values.
Now consider the matrix C^xx = sin(πC^ss/2). The eigenvalues of C^xx are also shown in Figure 3. In contrast to the many distinct eigenvalues of C^ss, the spectrum of the Gaussian correlation matrix C^xx has only two positive eigenvalues, with all the rest exactly equal to zero. The corresponding eigenvectors are a cosine and a sine function. The generative process can thus be understood as a linear combination of the two eigenmodes, yielding a sine function with arbitrary phase. This function is then clipped to yield the positive bump seen in the original binary distribution. In comparison with the eigenvalues of C^ss, the eigenvalue spectrum of C^xx makes the low-rank structure of the generative process obvious. In this case, the original binary distribution can be constructed using only P = 2 hidden variables, whereas it is not clear from the eigenvalues of C^ss what the appropriate number of modes is. This illustrates the utility of determining the principal components from the calculated Gaussian correlation matrix C^xx rather than working directly with the observable binary correlation matrix C^ss.
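The rank structure described here can be checked numerically. In the sketch below (our own construction, following the description above), the mapped correlations work out to C^xx_ij = cos(2π(i − j)/N), which has exactly two positive eigenvalues:

import numpy as np

N = 256
bump = np.concatenate([np.ones(N // 2), -np.ones(N // 2)])
# All N translations of the positive bump, with periodic boundary conditions.
states = np.array([np.roll(bump, k) for k in range(N)])

Css = (states.T @ states) / N
Cxx = np.sin(np.pi * Css / 2.0)

print(np.sum(np.linalg.eigvalsh(Css) > 1e-8))  # about N/2 positive eigenvalues
print(np.sum(np.linalg.eigvalsh(Cxx) > 1e-8))  # exactly 2 positive eigenvalues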
Handwritten Digits Example
This model was also applied to a more complex data set. A large set of 16 × 16 black-and-white images of handwritten twos was taken from the US Post Office digit database [9]. The pixel means and pixel correlations were directly computed from the images. The generative model needs to be slightly modified to account for the non-zero means in the binary outputs. This is accomplished by adding fixed biases ξ_i to the Gaussian variables x_i before clipping:

s_i = sgn(ξ_i + x_i).   (8)

The biases ξ_i can be related to the means of the binary outputs through the expression:

ξ_i = √(2 C^xx_ii) erf⁻¹(⟨s_i⟩).   (9)
This allows the biases to be computed directly from the observed means of the binary variables. Unfortunately, with non-zero biases, the relationship between the Gaussian correlations C^xx and the binary correlations C^ss is no longer the simple expression found in Equation 7. Instead, the correlations are related by an integral equation (Equation 10).
Given the empirical pixel correlations C^ss for the handwritten digits, the integral in Equation 10 is numerically solved for each pair of indices to yield the appropriate
[Figure 4 plot area: eigenvalue spectra of C^ss and C^xx versus eigenvalue rank on logarithmic axes, with an inset of eigenvectors and a morph sequence of handwritten twos.]
Figure 4: Eigenvalue spectra of C^ss and C^xx for handwritten images of twos. The inset shows the P = 16 most significant eigenvectors for C^xx arranged by rows. The right side of the figure shows a nonlinear morph between two different instances of a handwritten two using these eigenvectors.
Gaussian correlation matrix C^xx. The correlation matrices are diagonalized, and the resulting eigenvalue spectra are shown in Figure 4. The eigenvalues of C^xx again exhibit a characteristic drop that is steeper than the falloff in the spectrum of the binary correlations C^ss. The eigenvectors of C^xx with the 16 largest positive eigenvalues are depicted in the inset of Figure 4. These eigenmodes represent common image distortions such as rotations and stretching, and they appear qualitatively similar to those found by the standard PCA algorithm.
A generative model with weights W corresponding to the P = 16 eigenvectors shown in Figure 4 was used to fit the handwritten twos, and the utility of this nonlinear generative model is illustrated in the right side of Figure 4. The top and bottom images in the figure are two different examples of a handwritten two from the data set, and the generative model is used to morph between them. The hidden values y_i for the original images are first determined for the two examples, and the intermediate images in the morph are constructed by linearly interpolating in the vector space of the hidden units. Because of the clipping nonlinearity, this induces a nonlinear mapping in the outputs, with binary units being flipped in a particular order determined by the generative model. In contrast, morphing using conventional PCA would result in a simple linear interpolation between the two images, and the intermediate images would not look anything like the original binary distribution [10].
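A sketch of such a morph is given below. The text does not specify how the hidden values are inferred for a given image, so this sketch assumes a least-squares projection onto the columns of W; that inference step, and the helper name, are our own.

import numpy as np

def morph(x_a, x_b, W, n_steps=8):
    """Nonlinear morph between two binary images x_a, x_b (entries +/-1)."""
    # Assumed inference step: least-squares hidden coordinates for each image.
    y_a = np.linalg.lstsq(W, x_a, rcond=None)[0]
    y_b = np.linalg.lstsq(W, x_b, rcond=None)[0]
    frames = []
    for t in np.linspace(0.0, 1.0, n_steps):
        y = (1.0 - t) * y_a + t * y_b               # linear path in hidden space
        frames.append(np.where(W @ y >= 0, 1, -1))  # clipping nonlinearity
    return frames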
The correlation matrix C^xx also happens to contain some small negative eigenvalues. Even though the binary correlation matrix C^ss is positive definite, the transformation in Equation 10 does not guarantee that the resulting matrix C^xx will also be positive definite. The presence of these negative eigenvalues indicates a shortcoming of the generative process for modelling this data. In particular, the clipped Gaussian model is unable to capture correlations induced by global constraints in the data. As a simple illustration of this shortcoming in the generative model, consider the binary distribution defined by the probability density P({s}) ∝ lim_{β→∞} exp(−β Σ_ij s_i s_j). The states in this distribution are defined by the constraint that the sum of the binary variables is exactly zero: Σ_i s_i = 0. Now, for N ≥ 4, it can be shown that it is impossible to find a Gaussian distribution whose visible binary variables match the negative correlations induced by this sum constraint.
These examples illustrate the value of using the clipped generative model to learn the correlation matrix of the underlying Gaussian variables rather than using the correlations of the outputs directly. The clipping nonlinearity is convenient because the relationship between the hidden variables and the output variables is particularly easy to understand. The learning algorithm differs from other nonlinear PCA models and autoencoders because the inverse mapping function need not be explicitly learned [11, 12]. Instead, the correlation matrix is directly transformed from the observable variables to the underlying Gaussian variables, and is then diagonalized to determine the appropriate feedforward weights. This results in an extremely efficient training procedure that is directly analogous to PCA for continuous variables.
We acknowledge the support of Bell Laboratories, Lucent Technologies, and the
US-Israel Binational Science Foundation. We also thank H. S. Seung for helpful
discussions.
References
[1] Jolliffe, IT (1986). Principal Component Analysis. New York: Springer-Verlag.
[2] Bartholomew, DJ (1987). Latent variable models and factor analysis. London: Charles Griffin & Co. Ltd.
[3] Hinton, GE, Dayan, P & Revow, M (1996). Modeling the manifolds of images of handwritten digits. IEEE Transactions on Neural Networks 8, 65-74.
[4] Van Vreeswijk, C, Sompolinsky, H, & Abeles, M (1999). Nonlinear statistics of spike trains. In preparation.
[5] Ackley, DH, Hinton, GE, & Sejnowski, TJ (1985). A learning algorithm for Boltzmann machines. Cognitive Science 9, 147-169.
[6] Cover, TM (1965). Geometrical and statistical properties of systems of linear inequalities with applications in pattern recognition. IEEE Trans. Electronic Comput. 14, 326-334.
[7] Tipping, ME (1999). Probabilistic visualisation of high-dimensional binary data. Advances in Neural Information Processing Systems 11.
[8] Christoffersson, A (1975). Factor analysis of dichotomized variables. Psychometrika 40, 5-32.
[9] LeCun, Y et al. (1989). Backpropagation applied to handwritten zip code recognition. Neural Computation 1, 541-551.
[10] Bregler, C, & Omohundro, SM (1995). Nonlinear image interpolation using manifold learning. Advances in Neural Information Processing Systems 7, 973-980.
[11] Hastie, T and Stuetzle, W (1989). Principal curves. Journal of the American Statistical Association 84, 502-516.
[12] Demers, D, & Cottrell, G (1993). Nonlinear dimensionality reduction. Advances in Neural Information Processing Systems 5, 580-587.
635 | 1,581 | Classification in Non-Metric Spaces
Daphna Weinshall¹,²  David W. Jacobs¹  Yoram Gdalyahu²
¹NEC Research Institute, 4 Independence Way, Princeton, NJ 08540, USA
²Inst. of Computer Science, Hebrew University of Jerusalem, Jerusalem 91904, Israel
Abstract
A key question in vision is how to represent our knowledge of previously
encountered objects to classify new ones. The answer depends on how we
determine the similarity of two objects. Similarity tells us how relevant
each previously seen object is in determining the category to which a new
object belongs. Here a dichotomy emerges. Complex notions of similarity appear necessary for cognitive models and applications, while simple
notions of similarity form a tractable basis for current computational approaches to classification. We explore the nature of this dichotomy and
why it calls for new approaches to well-studied problems in learning.
We begin this process by demonstrating new computational methods
for supervised learning that can handle complex notions of similarity.
(1) We discuss how to implement parametric met.hods that represent a
class by its mean when using non-metric similarity functions; and (2)
We review non-parametric methods that we have developed using nearest neighbor classification in non-metric spaces. Point (2) , and some of
the background of our work have been described in more detail in [8].
1 Supervised Learning and Non-Metric Distances
How can one represent one's knowledge of previously encountered objects in order to classify new objects? We study this question within the framework of supervised learning: it is assumed that one is given a number of training objects, each labeled as belonging to a category; one wishes to use this experience to label new test instances of objects. This problem emerges both in the modeling of cognitive processes and in many practical applications. For example, one might want to identify risky applicants for credit based on past experience with clients who have proven to be good or bad credit risks. Our work is motivated by computer vision applications.

Most current computational approaches to supervised learning suppose that objects can be thought of as vectors of numbers, or equivalently as points lying in an n-dimensional space. They further suppose that the similarity between objects can be determined from the Euclidean distance between these vectors, or from some other simple metric. This classic notion of similarity as Euclidean or metric distance leads to considerable mathematical and computational simplification.
However, work in cognitive psychology has challenged such simple notions of similarity as models of human judgment, while applications frequently employ non-Euclidean distances to measure object similarity. We consider the need for similarity measures that are not only non-Euclidean, but non-metric. We focus on proposed similarities that violate one requirement of a metric distance, the triangle inequality. This states that if we denote the distance between objects A and B by d(A, B), then: ∀A, B, C: d(A, B) + d(B, C) ≥ d(A, C). Distances violating the triangle inequality must also be non-Euclidean.

Data from cognitive psychology has demonstrated that similarity judgments may not be well modeled by Euclidean distances. Tversky [12] has demonstrated instances in which similarity judgments may violate the triangle inequality. For example, close similarity between Jamaica and Cuba and between Cuba and Russia does not imply close similarity between Jamaica and Russia (see also [10]). Non-metric similarity measures are frequently employed for practical reasons, too (cf. [5]). In part, work in robust statistics [7] has shown that methods that are to survive the presence of outliers, which are extraneous pieces of information or information containing extreme errors, must employ non-Euclidean distances that in fact violate the triangle inequality; related insights have spurred the widespread use of robust methods in computer vision (reviewed in [5] and [9]).
We are interested in handling a wide range of non-metric distance functions, including those that are so complex that they must be treated as a black box. However, to be concrete, we will focus here on two simple examples of such distances:

median distance: This distance assumes that objects are representable as a set of features whose individual differences can be measured, so that the difference between two objects is representable as a vector: d = (d_1, d_2, ..., d_n). The median distance between the two objects is just the median value in this vector. Similarly, one can define a k-median distance by choosing the k-th lowest element in this list. k-median distances are often used in applications (cf. [9]) because they are unaffected by the exact values of the most extreme differences between the objects; only the most similar features determine the value. The k-median distance can violate the triangle inequality to an arbitrary degree (i.e., there are no constraints on the pairwise distances between three points).
robust non-metric L^p distances: Given a difference vector d, an L^p distance has the form:

d_p = (Σ_{i=1}^n |d_i|^p)^{1/p}   (1)

and is non-metric for p < 1.
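Both distances are simple to compute; a minimal sketch (helper names are ours):

import numpy as np

def k_median_distance(a, b, k):
    """k-th smallest absolute coordinate difference; k = 1 gives the min."""
    diffs = np.sort(np.abs(np.asarray(a) - np.asarray(b)))
    return diffs[k - 1]

def lp_distance(a, b, p):
    """The L^p distance of equation (1); violates the triangle inequality for p < 1."""
    diffs = np.abs(np.asarray(a) - np.asarray(b))
    return float(np.sum(diffs ** p) ** (1.0 / p))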
Figure 1 illustrates why these distances present significant new challenges in supervised learning. Suppose that, given some datapoints (two in Fig. 1), we wish to classify each new point as coming from the same category as its nearest neighbor. Then we need to determine the Voronoi diagram generated by our data: a division of the plane into regions in which the points all have the same nearest neighbor. Fig. 1 shows how the Voronoi diagram changes with the function used to compute the distance between datapoints; the non-metric diagrams (rightmost three pictures in Fig. 1) are more complex and more likely to make non-intuitive predictions. In fact, very little is known about the computation of non-metric Voronoi diagrams. We now describe new parametric methods for supervised learning with non-metric
Figure 1: The Voronoi diagram for two points using, from left to right, p-distances with p = 2 (Euclidean), p = 1 (Manhattan, which is still metric), the non-metric distances arising from p = 0.5 and p = 0.2, and the min (1-median) distance. The min distance in 2-D illustrates the behavior of the other median distances in higher dimensions. The region of the plane closer to one point is shown in black, and closer to the other in white.
distances, and review non-parametric methods that we described in [8].
2 Parametric methods: what should replace the mean
Parametric methods typically represent objects as vectors in a high-dimensional space, and represent classes and the boundaries between them in this space using geometric constructions or probability distributions with a limited number of parameters. One can attempt to extend these techniques to specific non-metric distances, such as the median distance or non-metric L^p distances. We discuss the example of the mean of a class below. One can also redefine geometric objects such as linear separators for specific non-metric distances. However, existing algorithms for finding such objects in Euclidean spaces will no longer be directly suitable, nor will theoretical results about such representations hold. Many problems therefore remain open in determining how best to apply parametric supervised learning techniques to specific non-metric distances.

We analyze k-means clustering, where each class is represented by its average member; new elements are then classified according to which of these prototypical examples is nearest. In Euclidean space, the mean is the point q̄ whose sum of squared distances to all the class members {q_i}_{i=1}^n, (Σ_{i=1}^n d(q̄, q_i)²)^{1/2}, is minimized.
Suppose now that our data come from a vector space where the correct distance is the L^p distance from (1). Using the natural extension of the above definition, we should represent each class by the point q̄ whose sum of distances to all the class members, (Σ_{i=1}^n d(q̄, q_i)^p)^{1/p}, is minimal. It is now possible to show (proof is omitted) that for p < 1 (the non-metric cases), the exact value of every feature of the representative point q̄ must have already appeared in at least one element of the class. Moreover, the value of these features can be determined separately, with complexity O(n²) per feature and total complexity O(dn²) given d features. q̄ is therefore determined by a mixture of up to d exemplars, where d is the dimension of the vector space. Thus there are efficient algorithms for finding the "mean" element of a class, even using certain non-metric distances.
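A direct implementation of this result is sketched below. Since d(q̄, q_i)^p decomposes into a sum over features, each coordinate of the prototype can be searched independently over the values observed in the class (the function name is ours):

import numpy as np

def lp_prototype(Q, p):
    """Class prototype under the non-metric L^p distance, 0 < p < 1.

    Q : (n, d) array of class members. Each prototype coordinate must equal
    one of the observed values in that coordinate, so an O(n^2) search per
    feature suffices, O(d n^2) overall.
    """
    n, d = Q.shape
    proto = np.empty(d)
    for j in range(d):
        col = Q[:, j]
        costs = [np.sum(np.abs(col - v) ** p) for v in col]
        proto[j] = col[int(np.argmin(costs))]
    return proto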
We will illustrate these results with a concrete example using the Corel database, a commercial database of images pre-labeled by categories (such as "lions"), where non-metric distance functions have proven effective in determining the similarity of images [1]. The Corel database is very large, making the use of prototypes desirable. We represent each image using a vector of 11 numbers describing general image properties, such as color histograms, as described in [1]. We consider the Euclidean
and L^0.5 distances, and their corresponding prototypes: the mean and the L^0.5 prototype computed according to the result above. Given the first 45 classes, each containing 100 images, we found their corresponding prototypes; we then computed the percentage of images in each class that are closest to their own prototype, using either the Euclidean or the L^0.5 distance and one of the two prototypes. The results are the following:
[Table: rows give the distance used (Euclidean or L^0.5); columns give the prototype (Euclidean mean or L^0.5 prototype). Only a few entries, including 25%, survive extraction.]
In the first column, the prototype is computed using the Euclidean mean. In the second column, the prototype is computed using the L^0.5 distance. In each row, a different function is used to compute the distance from each item to the cluster prototype. The best results are obtained with the non-metric L^0.5 distance and the correct prototype for this particular distance. While performance in absolute terms depends on how well this data clusters using distances derived from a simple feature vector, the relative performance of the different methods reveals the advantage of using a prototype computed with a non-metric distance.
Another important distance function is the generalized Hamming distance: given two vectors of features, their distance is the number of features that differ between the two vectors. This distance was assumed in psychophysical experiments which used artificial objects (Fribbles) to investigate human categorization and object recognition [13]. In agreement with experimental results, the prototype q̄ for this distance, computed according to the definition above, is the vector of "modal" features: the most common feature value computed independently at each feature.
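A one-screen sketch of the modal prototype (assuming discrete feature values; the name is ours):

import numpy as np

def modal_prototype(Q):
    """Generalized-Hamming prototype: the most common value of each feature,
    computed independently per column."""
    proto = []
    for col in Q.T:
        values, counts = np.unique(col, return_counts=True)
        proto.append(values[np.argmax(counts)])
    return np.array(proto)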
3 Non-Parametric Methods: Nearest Neighbors
Non-parametric classification methods typically represent a class directly by its exemplars. Specifically, nearest-neighbor techniques classify new objects using only their distance to labeled exemplars. Such methods can be applied using any non-metric distance function, treating the function as a black box. However, nearest-neighbor techniques must also be modified to apply well to non-metric distances. The insights we gain below from doing this can form the basis of more efficient and effective computer algorithms, and of cognitive models for which examples of a class are worth remembering. This section summarizes work described in [8].

Current efficient algorithms for finding the nearest neighbor of a class work only for metric distances [3]. The alternative of a brute-force approach, in which a new object is explicitly compared to every previously seen object, is desirable neither computationally nor as a cognitive model. A natural approach to handling this problem is to represent each class by a subset of its labeled examples. Such methods are called condensing algorithms. Below we develop condensing methods for selecting a subset of the training set which minimizes errors in the classification of new datapoints, taking into account the non-metric nature of the distance.

In designing a condensing method, one needs to answer the question: when is one object a good substitute for another? Earlier methods (e.g., [6, 2]) make use of the fact that the triangle inequality guarantees that when two points are similar to each other, their patterns of similarity to other points are not very different. Thus, in a metric space, there is no reason to store two similar datapoints; one can easily substitute for the other. Things are different in non-metric spaces.
a
Figure 2: a) Two clusters of labeled points (left) and their Voronoi diagram (right) computed using the I-median (min) distance. Cluster P consists of four points (black squares)
all close together both according to the median distance and the Euclidean distance. Cluster Q consists of five points (black crosses) all having the same x coordinate, and so all
are separated by zero distance using the median (but not Euclidean) distance. We wish to
select a subset of points to represent each class, while changing this Voronoi diagram as
little as possible. b) All points in class Q have zero distance to each other, using the min
distance. So distance provides no clue as to which are interchangeable. However, the top
points (ql, q2) have distances to the points in class P that are highly correlated with each
other, and poorly correlated with the bottom points (q3, q4, qs). Without using correlation
as a clue, we might represent Q with two points from the bottom (which are nearer the
boundary with P, a factor preferred in existing approaches). This changes the Voronoi
diagram drastically, as shown on the left. Using correlation as a clue, we select points from
the top and bottom, changing the Voronoi diagram much less, as shown on the right.
Specifically, what we really need to know is when two objects will have similar distances to other objects, yet unseen. We estimate this quantity using the correlation between two vectors: the vector of distances from one datapoint to all the other training data, and the vector of distances from the second datapoint to all the remaining training data¹. It can be shown (proof is omitted) that in a Euclidean space the similarity between two points is the best measure of how well one can substitute for the other, whereas in a non-metric space the aforementioned vector correlation is a substantially better measure. Fig. 2 illustrates this result.
We now draw on these insights to produce concrete methods for representing classes in non-metric spaces for nearest-neighbor classification. We compare three algorithms. The first two, random selection (cf. [6]) and boundary detection (e.g., [11]), represent old condensing ideas: in the first we pick a random selection of class representatives, in the second we use points close to class boundaries as representatives. The third algorithm uses new ideas: correlation selection includes in the representative set points which are least correlated with the other class members and representatives. To be fair in our comparison, all algorithms were constrained to select the same number of representative points for each class. During the simulation, each of 1000 test datapoints was classified based on (1) all the data and (2) the representatives computed by each of the three algorithms. For each algorithm, the test is successful if the two methods (classification based on all the data and based on the chosen representatives) give the same results. Fig. 3a-c summarizes representative results of our simulations. See [8] for details.
¹Given two datapoints X, Y and x, y ∈ R^n, where x is the vector of distances from X to all the other training points and y is the corresponding vector for Y, we measure the correlation between the datapoints using the statistical correlation coefficient between x and y: corr(X, Y) = corr(x, y) = ⟨(x − μ_x)(y − μ_y)⟩ / (σ_x σ_y), where μ_x, μ_y denote the means of x, y respectively, and σ_x, σ_y denote the standard deviations of x, y respectively.
[Figure 3 plot area: four panels (a-d) of percent-correct bars with error bars for the three algorithms (random, boundary, correlation); panels a-c compare the median, L^0.2, L^0.5, and Euclidean distances, and panel d compares 5 versus 7 representatives.]
Figure 3: Results: values of percent-correct scores, with error bars giving the standard deviation calculated over 20 repetitions of each test block where appropriate. Each graph contains three plots, giving the percent-correct score for each of the three algorithms described above: random (selection), boundary (detection), and (selection based on) correlation. (a-c) Simulation results: data is chosen from R^25. 30 clusters were randomly chosen, each with 30 datapoints. The distribution of points in each class was: (a) normal; (b) normal, where in half the datapoints one random coordinate was modified (thus the points cluster around a prototype, but many class members vary widely in one random dimension); (c) a union of two concentric normal distributions, one spherical and one elongated elliptical (thus the points cluster around a prototype, but may vary significantly in a few non-defining dimensions). Each plot gives four values, one for each of the distance functions used here: median, L^0.2, L^0.5, and L^2. (d) Real data: the number of representatives chosen by the algorithm was limited to 5 (first column) and 7 (second column).
To test our method with real images, we used the local curve matching algorithm described in [4]. This non-metric curve matching algorithm was specifically designed to compare curves which may be quite different, and it returns the distance between them. The training and test data are shown in Fig. 4. Results are given in Fig. 3d. The simulations and the real data demonstrate a significant advantage for our new method. Almost as important, in metric spaces (fourth column in Fig. 3a-c) or when the classes lack any "interesting" structure (Fig. 3a), our method is no worse than existing methods. Thus it should be used to guarantee good performance when the nature of the data and the distance function is not known a priori.
References
[1] Cox, I., Miller, M., Omohundro, S., and Yianilos, P., 1996, "PicHunter: Bayesian Relevance Feedback for Image Retrieval," Proc. of ICPR, C:361-369.
[Figure 4 image area: contour drawings of cows and cars, panels a-d.]
Figure 4: Real data used to test the three algorithms, including 2 classes with 30 images each: a) 12 examples from the first class of 30 cow contours, obtained from different viewpoints of the same cow. b) 12 examples from the second class of 30 car contours, obtained from different viewpoints of 2 similar cars. c) 12 examples from the set of 30 test cow contours, obtained from different viewpoints of the same cow with possible additional occlusion. d) 2 examples of the real images from which the contours in a) are obtained.
[2] Dasarathy, B., 1994, "Minimal Consistent Set (MCS) Identification for Optimal Nearest Neighbor Decision Systems Design," IEEE Trans. on Systems, Man and Cybernetics, 24(3):511-517.
[3] Friedman, J., Bentley, J., Finkel, R., 1977, "An Algorithm for Finding Best Matches in Logarithmic Expected Time," ACM Trans. on Math. Software, 3(3):209-226.
[4] Gdalyahu, Y. and D. Weinshall, 1997, "Local Curve Matching for Object Recognition without Prior Knowledge," Proc.: DARPA Image Understanding Workshop, 1997.
[5] Haralick, R. and L. Shapiro, 1993, Computer and Robot Vision, Vol. 2, Addison-Wesley Publishing.
[6] Hart, P., 1968, "The Condensed Nearest Neighbor Rule," IEEE Trans. on Information Theory, 14(3):515-516.
[7] Huber, P., 1981, Robust Statistics, John Wiley and Sons.
[8] Jacobs, D., Weinshall, D., and Gdalyahu, Y., 1998, "Condensing Image Databases when Retrieval is based on Non-Metric Distances," Int. Conf. on Computer Vision: 596-601.
[9] Meer, P., D. Mintz, D. Kim and A. Rosenfeld, 1991, "Robust Regression Methods for Computer Vision: A Review," Int. J. of Comp. Vis. 6(1):59-70.
[10] Rosch, E., 1975, "Cognitive Reference Points," Cognitive Psychology, 7:532-547.
[11] Tomek, I., 1976, "Two modifications of CNN," IEEE Trans. Syst., Man, Cyber., SMC-6(11):769-772.
[12] Tversky, A., 1977, "Features of Similarity," Psychological Review, 84(4):327-352.
[13] Williams, P., "Prototypes, Exemplars, and Object Recognition," submitted.
PART VIII
APPLICATIONS
encountered:2 constraint:1 vised:1 software:1 min:4 ern:1 according:4 representable:2 belonging:1 son:1 lp:4 making:1 condensing:5 outlier:1 handling:2 computationally:1 previously:4 discus:2 describing:1 know:1 tractable:1 apply:2 appropriate:1 fry:1 alternative:1 yl1:1 substitute:3 assumes:1 spurred:1 cf:3 clustering:1 top:2 remaining:1 publishing:1 daphna:1 yoram:1 giving:2 psychophysical:1 question:3 already:1 quantity:1 rosch:1 parametric:9 distance:76 reason:2 viii:1 modeled:1 hebrew:1 equivalently:1 ql:1 design:1 defining:1 arbitrary:1 concentric:1 david:1 nearer:1 trans:4 bar:1 below:3 lion:1 pattern:1 ctx:1 appeared:1 challenge:1 including:1 oj:4 suitable:1 treated:1 client:1 natural:2 force:1 ndimensional:1 representing:1 imply:1 risky:1 picture:1 review:4 geometric:2 l2:1 prior:1 understanding:1 determining:3 relative:1 manhattan:1 prototypical:1 interesting:1 proven:2 degree:1 consistent:1 viewpoint:3 lo:3 row:1 last:1 drastically:1 institute:1 neighbor:9 wide:1 taking:1 absolute:1 boundary:9 dimension:4 calculated:1 curve:4 feedback:1 contour:4 clue:3 preferred:1 reveals:1 q4:1 assumed:2 why:2 reviewed:1 nature:3 robust:5 complex:4 separator:1 yianilos:1 fair:1 fig:9 representative:10 wiley:1 wish:3 bad:1 specific:3 list:1 r2:1 workshop:1 corr:2 nec:1 illustrates:3 hod:1 logarithmic:1 explore:1 likely:1 acm:1 replace:1 man:2 considerable:1 change:2 determined:3 specifically:3 total:1 called:1 experimental:1 select:3 relevance:1 princeton:1 correlated:3 |
636 | 1,582 | Semi-Supervised Support Vector
Machines
Kristin P. Bennett
Department of Mathematical Sciences
Rensselaer Polytechnic Institute
Troy, NY 12180 bennek@rpi.edu
Ayhan Demiriz
Department of Decision Sciences and Engineering Systems
Rensselaer Polytechnic Institute
Troy, NY 12180 demira@rpi.edu
Abstract
We introduce a semi-supervised support vector machine (S3VM) method. Given a training set of labeled data and a working set of unlabeled data, S3VM constructs a support vector machine using both the training and working sets. We use S3VM to solve the transduction problem using overall risk minimization (ORM) posed by Vapnik. The transduction problem is to estimate the value of a classification function at the given points in the working set. This contrasts with the standard inductive learning problem of estimating the classification function at all possible values and then using the fixed function to deduce the classes of the working set data. We propose a general S3VM model that minimizes both the misclassification error and the function capacity based on all the available data. We show how the S3VM model for 1-norm linear support vector machines can be converted to a mixed-integer program and then solved exactly using integer programming. Results of S3VM and the standard 1-norm support vector machine approach are compared on ten data sets. Our computational results support the statistical learning theory results showing that incorporating working data improves generalization when insufficient training information is available. In every case, S3VM either improved or showed no significant difference in generalization compared to the traditional approach.
1 INTRODUCTION
In this work we propose a method for semi-supervised support vector machines (S3VM). S3VMs are constructed using a mixture of labeled data (the training set) and unlabeled data (the working set). The objective is to assign class labels to the working set such that the "best" support vector machine (SVM) is constructed. If the working set is empty, the method becomes the standard SVM approach to classification [20, 9, 8]. If the training set is empty, then the method becomes a form of unsupervised learning. Semi-supervised learning occurs when both training and working sets are nonempty. Semi-supervised learning for problems with small training sets and large working sets is a form of semi-supervised clustering. There are successful semi-supervised algorithms for k-means and fuzzy c-means clustering [4, 18]. Clustering is a potential application for S3VM as well. When the training set is large relative to the working set, S3VM can be viewed as a method for solving the transduction problem according to the principle of overall risk minimization (ORM) posed by Vapnik at the NIPS 1998 SVM Workshop and in [19, Chapter 10]. S3VM for ORM is the focus of this paper.
In classification, the transduction problem is to estimate the class of each given point in the unlabeled working set. The usual support vector machine (SVM) approach estimates the entire classification function using the principle of structural risk minimization (SRM). In transduction, one estimates the classification function at points within the working set using information from both the training and working set data. Theoretically, if there is adequate training data to estimate the function satisfactorily, then SRM will be sufficient, and we would expect transduction to yield no significant improvement over SRM alone. If, however, there is inadequate training data, then ORM may improve generalization on the working set. Intuitively, we would expect ORM to yield improvements when the training sets are small or when there is a significant deviation between the training and working set subsamples of the total population. Indeed, the theoretical results in [19] support these hypotheses.

In Section 2, we briefly review the standard SVM model for structural risk minimization. According to the principles of structural risk minimization, SVMs minimize both the empirical misclassification rate and the capacity of the classification function [19, 20] using the training data. The capacity of the function is determined by the margin of separation between the two classes based on the training set. ORM also minimizes both the empirical misclassification rate and the function capacity, but the capacity of the function is determined using both the training and working sets. In Section 3, we show how SVM can be extended to the semi-supervised case and how mixed integer programming can be used practically to solve the resulting problem. We compare support vector machines constructed by structural risk minimization and overall risk minimization computationally on ten problems in Section 4. Our computational results support past theoretical results that improved generalization can be obtained by incorporating working set information during training when there is a deviation between the working set and training set sample distributions. In three of ten real-world problems the semi-supervised approach, S3VM, achieved a significant increase in generalization. In no case did S3VM ever suffer a significant decrease in generalization. We conclude with a discussion of more general S3VM algorithms.
[Figure 1: two classes (Class 1 above, Class -1 below) separated by the plane w·x = b, with supporting planes w·x = b + 1 and w·x = b − 1.]
Figure 1: Optimal plane maximizes margin.
2 SVM using Structural Risk Minimization
The basic SRM task is to estimate a classification function f : R^N → {±1} using input-output training data from two classes:

(x_1, y_1), ..., (x_ℓ, y_ℓ) ∈ R^N × {±1}.   (1)

The function f should correctly classify unseen examples (x, y), i.e. f(x) = y if (x, y) is generated from the same underlying probability distribution as the training data.
In this work we limit discussion to linear classification functions; we will discuss extensions to the nonlinear case in Section 5. If the points are linearly separable, then there exist an n-vector w and a scalar b such that

w·x_i − b ≥ 1 if y_i = 1, and
w·x_i − b ≤ −1 if y_i = −1, i = 1, ..., ℓ   (2)

or equivalently

y_i[w·x_i − b] ≥ 1, i = 1, ..., ℓ.   (3)

The "optimal" separating plane, w·x = b, is the one which is furthest from the closest points in the two classes. Geometrically this is equivalent to maximizing the separation margin or distance between the two parallel planes w·x = b + 1 and w·x = b − 1 (see Figure 1).
The "margin of separation" in Euclidean distance is 2/llw112 where IIw I1 2 =
:L~=l
is the 2-norm. To maximize the margin, we minimize IIw1l2/2 subject
to the constraints (3). According to structural risk minimization, for a fixed empirical misclassification rate, larger margins should lead to better generalization
and prevent overfitting in high-dimensional attribute spaces. The classifier is called
a support vector machine because the solution depends only on the points (called
support vectors) located on the two supporting planes w? x = b - 1 and W ? x = b + 1.
wt
In general the classes will not be separable, so the generalized optimal plane (GOP) problem (4) [9, 20] is used. A slack term η_i is added for each point, such that if the point is misclassified, η_i ≥ 1. The final GOP formulation is:

min_{w,b,η}  C Σ_{i=1}^ℓ η_i + (1/2)‖w‖₂²
s.t.  y_i[w·x_i − b] + η_i ≥ 1,  η_i ≥ 0,  i = 1, ..., ℓ   (4)
where C > 0 is a fixed penalty parameter. The capacity control provided by the
margin maximization is imperative to achieve good generalization [21, 19].
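Problem (4) is the standard soft-margin SVM, so off-the-shelf solvers apply. A hedged sketch using scikit-learn's LinearSVC, which minimizes the same objective up to its threshold convention (it fits w·x + b and also mildly regularizes the intercept), on synthetic data of our own:

import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(2.0, 1.0, (50, 2)), rng.normal(-2.0, 1.0, (50, 2))])
y = np.array([1] * 50 + [-1] * 50)

# Minimizes (1/2)||w||_2^2 + C * sum of hinge slacks, as in problem (4).
clf = LinearSVC(C=1.0, loss="hinge").fit(X, y)
w, b = clf.coef_.ravel(), clf.intercept_[0]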
The Robust Linear Programming (RLP) approach to SVM is identical to GOP except that the margin term is changed from the 2-norm ‖w‖₂ to the 1-norm, ‖w‖₁ = Σ_{j=1}^n |w_j|. The problem becomes the following robust linear program (RLP) [2, 7, 1]:

min_{w,b,s,η}  C Σ_{i=1}^ℓ η_i + Σ_{j=1}^n s_j
s.t.  y_i[w·x_i − b] + η_i ≥ 1,  η_i ≥ 0,  i = 1, ..., ℓ
      −s_j ≤ w_j ≤ s_j,  j = 1, ..., n.   (5)
The RLP formulation is a useful variation of SVM with some nice characteristics. The 1-norm weight reduction still provides capacity control. The results in [13] can be used to show that minimizing ‖w‖₁ corresponds to maximizing the separation margin using the infinity norm. Statistical learning theory could potentially be extended to incorporate alternative norms. One major benefit of RLP over GOP is dimensionality reduction: both RLP and GOP minimize the magnitude of the weights w, but RLP forces more of the weights to 0 due to the properties of the 1-norm. Another benefit of RLP over GOP is that it can be solved using linear programming instead of quadratic programming. Both approaches can be extended to handle nonlinear discrimination using kernel functions [8, 12]. Empirical comparisons of the approaches have not found any significant difference in generalization between the formulations [5, 7, 3, 12].
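Because (5) is a linear program, any LP code can solve it. A minimal sketch using scipy.optimize.linprog, with the variable vector packed as [w, b, s, η] (the packing and function name are ours):

import numpy as np
from scipy.optimize import linprog

def rlp_svm(X, y, C):
    """Solve the RLP problem (5); X is (l, n), y has entries +/-1."""
    l, n = X.shape
    # Objective: C * sum(eta) + sum(s); w and b are free variables.
    c = np.concatenate([np.zeros(n + 1), np.ones(n), C * np.ones(l)])
    # y_i (w.x_i - b) + eta_i >= 1  <=>  -y_i x_i.w + y_i b - eta_i <= -1
    A1 = np.hstack([-y[:, None] * X, y[:, None], np.zeros((l, n)), -np.eye(l)])
    #  w_j - s_j <= 0  and  -w_j - s_j <= 0, i.e. s_j >= |w_j|
    A2 = np.hstack([np.eye(n), np.zeros((n, 1)), -np.eye(n), np.zeros((n, l))])
    A3 = np.hstack([-np.eye(n), np.zeros((n, 1)), -np.eye(n), np.zeros((n, l))])
    bounds = [(None, None)] * (n + 1) + [(0, None)] * (n + l)
    res = linprog(c, A_ub=np.vstack([A1, A2, A3]),
                  b_ub=np.concatenate([-np.ones(l), np.zeros(2 * n)]),
                  bounds=bounds)
    return res.x[:n], res.x[n]  # w, b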
3 Semi-supervised support vector machines
To formulate the S3VM, we start with either SVM formulation, (4) or (5), and then add two constraints for each point in the working set. One constraint calculates the misclassification error as if the point were in class 1, and the other calculates the misclassification error as if the point were in class −1. The objective function takes the minimum of the two possible misclassification errors. The final class of each point corresponds to the one that results in the smallest error. Specifically, we define the semi-supervised support vector machine problem (S3VM) as:

min_{w,b,η,ξ,z}  C [Σ_{i=1}^ℓ η_i + Σ_{j=ℓ+1}^{ℓ+k} min(ξ_j, z_j)] + ‖w‖
s.t.  y_i(w·x_i − b) + η_i ≥ 1,  η_i ≥ 0,  i = 1, ..., ℓ
      w·x_j − b + ξ_j ≥ 1,  ξ_j ≥ 0,  j = ℓ+1, ..., ℓ+k
      −(w·x_j − b) + z_j ≥ 1,  z_j ≥ 0,  j = ℓ+1, ..., ℓ+k   (6)
where C > 0 is a fixed misclassification penalty.
Integer programming can be used to solve this problem. The basic idea is to add a 0-1 decision variable, d_j, for each point x_j in the working set. This variable indicates the class of the point: if d_j = 1 then the point is in class 1, and if d_j = 0 then the point is in class −1. This results in the following mixed integer program:
min_{w,b,η,ξ,z,d}  C [Σ_{i=1}^ℓ η_i + Σ_{j=ℓ+1}^{ℓ+k} (ξ_j + z_j)] + ‖w‖
s.t.  y_i(w·x_i − b) + η_i ≥ 1,  η_i ≥ 0,  i = 1, ..., ℓ
      w·x_j − b + ξ_j + M(1 − d_j) ≥ 1,  ξ_j ≥ 0
      −(w·x_j − b) + z_j + M d_j ≥ 1,  z_j ≥ 0,  d_j ∈ {0, 1},  j = ℓ+1, ..., ℓ+k   (7)
The constant M > 0 is chosen sufficiently large such that if d_j = 0 then ξ_j = 0 is feasible for any optimal w and b. Likewise, if d_j = 1 then z_j = 0. A globally optimal
[Figure 2: two side-by-side scatter plots of training points (triangles and hexagons) and unlabeled working-set points (filled circles), each with a separating plane.]
Figure 2: Left = solution found by RLP; Right = solution found by S3VM.
solution to this problem can be found using CPLEX or other commercial mixed integer programming codes [10], provided computer resources are sufficient for the problem size. Using the mathematical programming modeling language AMPL [11], we were able to express the problem in thirty lines of code plus a data file and solve it using CPLEX.
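The same thirty-line flavor carries over to open-source tools. Below is our reconstruction of problem (7) for the 1-norm case using the PuLP modeler and its default solver; it is a sketch, not the authors' AMPL model.

import pulp

def s3vm_milp(X, y, X_work, C, M=100.0):
    """Mixed-integer S3VM (7) with the 1-norm margin term ||w||_1."""
    l, n, k = len(X), len(X[0]), len(X_work)
    prob = pulp.LpProblem("S3VM", pulp.LpMinimize)
    w = [pulp.LpVariable(f"w{j}") for j in range(n)]
    s = [pulp.LpVariable(f"s{j}", lowBound=0) for j in range(n)]  # s_j >= |w_j|
    b = pulp.LpVariable("b")
    eta = [pulp.LpVariable(f"eta{i}", lowBound=0) for i in range(l)]
    xi = [pulp.LpVariable(f"xi{j}", lowBound=0) for j in range(k)]
    z = [pulp.LpVariable(f"z{j}", lowBound=0) for j in range(k)]
    d = [pulp.LpVariable(f"d{j}", cat=pulp.LpBinary) for j in range(k)]

    prob += C * (pulp.lpSum(eta) + pulp.lpSum(xi) + pulp.lpSum(z)) + pulp.lpSum(s)
    for j in range(n):                       # linearize the 1-norm of w
        prob += w[j] <= s[j]
        prob += -w[j] <= s[j]
    for i in range(l):                       # labeled (training) points
        dot = pulp.lpSum(w[j] * X[i][j] for j in range(n))
        prob += y[i] * (dot - b) + eta[i] >= 1
    for j in range(k):                       # unlabeled (working) points
        dot = pulp.lpSum(w[t] * X_work[j][t] for t in range(n))
        prob += dot - b + xi[j] + M * (1 - d[j]) >= 1
        prob += -(dot - b) + z[j] + M * d[j] >= 1
    prob.solve()
    labels = [1 if pulp.value(v) > 0.5 else -1 for v in d]
    return [pulp.value(v) for v in w], pulp.value(b), labels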
4 S3VM and Overall Risk Minimization
An integer S3VM can be used to solve the Overall Risk Minimization problem.
Consider the simple problem given in Figure 20 of [19]. Using RLP alone on the training data results in the separation shown in Figure 1. Figure 2 illustrates what happens when working set data is added. The training set points are shown as transparent triangles and hexagons. The working set points are shown as filled circles. The left picture in Figure 2 shows the solution found by RLP. Note that when the working set points are added, the resulting separation has a very small margin. The right picture shows the S3VM solution constructed using the unlabeled working set. Note that a much larger and clearer separation margin is found. These computational solutions are identical to those presented in [19].
We also tested S3VM on ten real-world data sets (eight from [14] and the bright and dim galaxy sets from [15]). Many algorithms have been applied successfully to these problems without incorporating working set information, so it was not clear a priori that S3VM would improve generalization on these data sets. For the data sets where no improvement is possible, we would like transduction using ORM not to degrade the performance of the induction-via-SRM approach. For each data set, we performed 10-fold cross-validation. For the three starred data sets, our integer programming solver failed due to excessive branching within the CPLEX algorithm. On those data sets we randomly extracted 50-point working sets for each trial. The same C parameter was used for each data set in both the RLP and S3VM problems¹. In all ten problems, S3VM never performed significantly worse than RLP. In three of the problems, S3VM performed significantly better. So ORM did not hurt generalization, and in some cases it helped significantly. We would expect this based on ORM theory: the generalization bounds for ORM depend on the difference between the training and working sets. If there is little difference, we would not expect any improvement using ORM.

¹The penalty C was set by a fixed formula in λ = .001, the training set size ℓ, and the working set size k; this formula was chosen because it worked well empirically for both methods.
Data Set             Dim   Points   CV-size   RLP     S3VM    p-value
Bright                14     2462     50*     0.02    0.018   0.343
Cancer                 9      699     70      0.036   0.034   0.591
Cancer(Prognostic)    30      569     57      0.035   0.033   0.678
Dim                   14     4192     50*     0.064   0.054   0.096
Heart                 13      297     30      0.173   0.160   0.104
Housing               13      506     51      0.155   0.151   0.590
Ionosphere            34      351     35      0.109   0.106   0.59
Musk                 166      476     48      0.173   0.173   0.999
Pima                   8      769     50*     0.220   0.222   0.678
Sonar                 60      208     21      0.281   0.219   0.045
5 Conclusion
We introduced a semi-supervised SVM model. S3VM constructs a support vector machine using all the available data from both the training and working sets. We showed how the S3VM model for 1-norm linear support vector machines can be converted to a mixed-integer program. One great advantage of solving S3VM using integer programming is that the globally optimal solution can be found using packages such as CPLEX. Using the integer S3VM we performed an empirical investigation of transduction using overall risk minimization, a problem posed by Vapnik. Our results support the statistical learning theory results that incorporating working data improves generalization when insufficient training information is available. In every case, S3VM either improved or showed no significant difference in generalization compared to the usual structural risk minimization approach. Our empirical results, combined with the theoretical results in [19], indicate that transduction via ORM constitutes a very promising research direction.

Many research questions remain. Since transduction via overall risk minimization will not always be better than basic induction via structural risk minimization, can we identify a priori the problems likely to benefit from transduction? The best methods of constructing S3VM for the 2-norm case and for nonlinear functions are still open questions. Kernel-based methods can be incorporated into S3VM. The practical scalability of the approach needs to be explored. We were able to solve moderately-sized problems with on the order of 50 working set points using a general-purpose integer programming code. The recent success of special-purpose algorithms for support vector machines [16, 17, 6] indicates that such approaches may produce improvements for S3VM as well.
References
[1] K. P. Bennett and E . J. Bredensteiner. Geometry in learning. In C . Gorini,
E. Hart, W. Meyer, and T. Phillips, editors, Geometry at Work, Washington,
D.C., 1997. Mathematical Association of America. To appear.
[2] K. P. Bennett and O. 1. Mangasarian. Robust linear programming discrimination of two linearly inseparable sets. Optimization Methods and Software,
1:23- 34, 1992.
[3] K. P. Bennett, D. H. Wu, and L. Auslender. On support vector decision trees for
database marketing. R.P.I. Math Report No. 98-100, Rensselaer Polytechnic
K. Bennett and A. Demiriz
374
Institute, Troy, NY, 1998.
[4J A.M. Bensaid, L.O. Hall, J.e. Bezdek, and L.P. Clarke. Partially supervised
clustering for image segmentation. Pattern Recognition, 29(5):859- 871, 199.
[5J P. S. Bradley and O. L. Mangasarian. Feature selection via concave minimization and support vector machines. Technical Report Mathematical Programming Technical Report 98-03, University of Wisconsin-Madison, 1998. To
appear in ICML-98.
[6J P. S. Bradley and O. L. Mangasarian. Massive data discrimination via linear support vector machines. Technical Report Mathematical Programming
Technical Report 98-05, University of Wisconsin-Madison, 1998. Submitted
for publication.
[7J E. J. Bredensteiner and K. P. Bennett. Feature minimization within decision
trees. Computational Optimization and Applications, 10:110-126, 1997.
[8J C. J. C Burges. A tutorial on support vector machines for pattern recognition.
Data Mining and Knowledge Discovery, 1998. to appear.
[9] C. Cortes and V. N. Vapnik. Support vector networks. Machine Learning,
20:273-297, 1995.
[10] CPLEX Optimization Incorporated, Incline Village, Nevada. Using the CPLEX
Callable Library, 1994.
[11] R. Fourer, D. Gay, and B. Kernighan. AMPL: A Modeling Language for Mathematical Programming. Boyd and Frazer, Danvers, Massachusetts, 1993.
[12] T. T. Fries and R. Harrison. Linear programming support vector machines
for pattern classification and regression estimation and the SR algorithm. Research Report 706, University of Sheffield, 1998.
[13] O. L. Mangasarian. Parsimonious least norm approximation. Mathematical Programming Technical Report 97-03, University of Wisconsin-Madison, 1997. To appear in Computational Optimization and Applications.
[14] P.M. Murphy and D.W. Aha. UCI repository of machine learning databases.
Department of Information and Computer Science, University of California,
Irvine, California, 1992.
[15] S. Odewahn, E. Stockwell, R. Pennington, R. Humphreys, and W. Zumach.
Automated star/galaxy discrimination with neural networks. Astronomical
Journal, 103(1):318-331, 1992.
[16] E. Osuna, R. Freund, and F. Girosi. Support vector machines: Training and
applications. AI Memo 1602, Massachusetts Institute of Technology, 1997.
[17] J. Platt. Sequential minimal optimization: A fast algorithm for training
support vector machines. Technical Report 98-14, Microsoft
Research, 1998.
[18] M. Vaidyanathan, R. P. Velthuizen, P. Venugopal, L. P. Clarke, and L. O. Hall.
Tumor volume measurements using supervised and semi-supervised MRI segmentation. In Artificial Neural Networks in Engineering Conference (ANNIE), 1994.
[19] V. N. Vapnik. Estimation of Dependences Based on Empirical Data. Springer,
New York, 1982. English translation, Russian version 1979.
[20] V. N. Vapnik. The Nature of Statistical Learning Theory. Springer Verlag,
New York, 1995.
[21] V. N. Vapnik and A. Ja. Chervonenkis. Theory of Pattern Recognition. Nauka,
Moscow, 1974. In Russian.
Learning a Continuous Hidden Variable
Model for Binary Data
Daniel D. Lee
Bell Laboratories
Lucent Technologies
Murray Hill, NJ 07974
ddlee@bell-labs.com
Haim Sompolinsky
Racah Institute of Physics and
Center for Neural Computation
Hebrew University
Jerusalem, 91904, Israel
haim@fiz.huji.ac.il
Abstract
A directed generative model for binary data using a small number
of hidden continuous units is investigated. A clipping nonlinearity distinguishes the model from conventional principal components
analysis. The relationships between the correlations of the underlying continuous Gaussian variables and the binary output variables
are utilized to learn the appropriate weights of the network. The
advantages of this approach are illustrated on a translationally invariant binary distribution and on handwritten digit images.
Introduction
Principal Components Analysis (PCA) is a widely used statistical technique for representing data with a large number of variables [1]. It is based upon the assumption
that although the data is embedded in a high dimensional vector space, most of
the variability in the data is captured by a much lower climensional manifold. In
particular for PCA, this manifold is described by a linear hyperplane whose characteristic directions are given by the eigenvectors of the correlation matrix with
the largest eigenvalues. The success of PCA and closely related techniques such as
Factor Analysis (FA) and PCA mixtures clearly indicate that much real world data
exhibit the low dimensional manifold structure assumed by these models [2, 3].
However, the linear manifold structure of PCA is not appropriate for data with
binary valued variables. Binary values commonly occur in data such as computer
bit streams, black-and-white images, on-off outputs of feature detectors, and electrophysiological spike train data [4]. The Boltzmann machine is a neural network
model that incorporates hidden binary spin variables, and in principle, it should be
able to model binary data with arbitrary spin correlations [5]. Unfortunately, the
Figure 1: Generative model for N-dimensional binary data using a small number
p of continuous hidden variables.
computational time needed for training a Boltzmann machine renders it impractical
for most applications.
In these proceedings, we present a model that uses a small number of continuous
hidden variables rather than hidden binary variables to capture the variability of
binary valued visible data. The generative model differs from conventional PCA
because it incorporates a clipping nonlinearity. The resulting spin configurations
have an entropy related to the number of hidden variables used, and the resulting
states are connected by small numbers of spin flips. The learning algorithm is particularly simple, and is related to peA by a scalar transformation of the correlation
matrix.
Generative Model
Figure 1 shows a schematic diagram of the generative process. As in PCA, the
model assumes that the data is generated by a small number P of continuous hidden
variables Yi . Each of the hidden variables are assumed to be drawn independently
from a normal distribution with unit variance:
P(y_i) = exp(−y_i²/2) / √(2π).   (1)
The continuous hidden variables are combined using the feedforward weights W ij ,
and the N binary output units are then calculated using the sign of the feedforward
activations:

x_i = Σ_{j=1}^{P} W_ij y_j,   (2)

s_i = sgn(x_i).   (3)
Since binary data is commonly obtained by thresholding, it seems reasonable that
a proper generative model should incorporate such a clipping nonlinearity. The
generative process is similar to that of a sigmoidal belief network with continuous
hidden units at zero temperature. The nonlinearity will alter the relationship between the correlations of the binary variables and the weight matrix W as described
below.
The real-valued Gaussian variables Xi are exactly analogous to the visible variables
of conventional PCA. They lie on a linear hyperplane determined by the span of
the matrix W, and their correlation matrix is given by:
CXX = ⟨x x^T⟩ = W W^T.   (4)
Figure 2: Binary spin configurations s_i in the vector space of continuous hidden variables y_j with P = 2 and N = 3.
By construction, the correlation matrix CXX has rank P which is much smaller
than the number of components N. Now consider the binary output variables
s_i = sgn(x_i). Their correlations can be calculated from the probability distribution
of the Gaussian variables Xi:
(CSS)_ij = ⟨s_i s_j⟩ = ∫ ∏_k dx_k P(x) sgn(x_i) sgn(x_j)   (5)

where P(x) is the zero-mean Gaussian density with covariance CXX.   (6)
The integrals in Equation 5 can be done analytically, and yield the surprisingly
simple result:
(CSS)_ij = (2/π) sin⁻¹[ CXX_ij / √(CXX_ii CXX_jj) ].   (7)
Thus, the correlations of the clipped binary variables CSS are related to the correlations of the corresponding Gaussian variables CXX through the nonlinear arcsine
function. The normalization in the denominator of the arcsine argument reflects the
fact that the sign function is unchanged by a scale change in the Gaussian variables.
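The arcsine relation can be checked numerically; the following Monte Carlo sketch treats a single pair of unit-variance Gaussians (the correlation value 0.6 and the sample size are illustrative):

import numpy as np

# Monte Carlo check of eq. (7) for one pair of unit-variance Gaussians.
rng = np.random.default_rng(0)
rho = 0.6
x = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=200_000)
s = np.where(x >= 0.0, 1.0, -1.0)
print(np.mean(s[:, 0] * s[:, 1]),       # empirical <s_i s_j>
      2.0 / np.pi * np.arcsin(rho))     # (2/pi) sin^-1(rho), eq. (7)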
Although the correlation matrix CSS and the generating correlation matrix CXX are
easily related through Equation 7, they have qualitatively very different properties.
In general, the correlation matrix CSS will no longer have the low rank structure of
CXX. As illustrated by the translationally invariant example in the next section, the
spectrum of CSS may contain a whole continuum of eigenvalues even though CXX
has only a few nonzero eigenvalues.
PCA is typically used for dimensionality reduction of real variables; can this model
be used for compressing the binary outputs Si? Although the output correlations
C SS no longer display the low rank structure of the generating C XX , a more appropriate measure of data compression is the entropy of the binary output states. Consider
how many of the 2^N possible binary states will be generated by the clipping process.
The equation x_i = Σ_j W_ij y_j = 0 defines a (P − 1)-dimensional hyperplane in the
P-dimensional state space of hidden variables Yj, which are shown as dashed lines
in Figure 2. These hyperplanes partition the half-space where Si = +1 from the
Figure 3: Translationally invariant binary spin distribution with N = 256 units.
Representative samples from the distribution are illustrated on the left, while the
eigenvalue spectrum of CSS and CXX are plotted on the right.
region where Si = -1. Each of the N spin variables will have such a dividing hyperplane in this P-dimensional state space, and all of these hyperplanes will generically
be unique. Thus , the total number of spin configurations Si is determined by the
number of cells bounded by N dividing hyperplanes in P dimensions. The number
of such cells is approximately N^P for N ≫ P, a well-known result from perceptrons
[6]. To leading order for large N, the entropy of the binary states generated by this
process is then given by S = P log N. Thus, the entropy of the spin configurations
generated by this model is directly proportional to the number of hidden variables
P.
How is the topology of the binary spin configurations Si related to the PCA manifold structure of the continuous variables Xi? Each of the generated spin states is
represented by a polytope cell in the P dimensional vector space of hidden variables.
Each polytope has at least P + 1 neighboring polytopes which are related to it by a
single or small number of spin flips. Therefore, although the state space of binary
spin configurations is discrete, the continuous manifold structure of the underlying
Gaussian variables in this model is manifested as binary output configurations with
low entropy that are connected with small Hamming distances .
Translationally Invariant Example
In principle, the weights W could be learned by applying maximum likelihood to
this generative model; however, the resulting learning algorithm involves analytically intractable multi-dimensional integrals. Alternatively, approximations based
upon mean field theory or importance sampling could be used to learn the appropriate parameters [7]. However, Equation 7 suggests a simple learning rule that is also
approximate, but is much more computationally efficient [8]. First, the binary correlation matrix CSS is computed from the data. Then the empirical CSS is mapped
into the appropriate Gaussian correlation matrix using the nonlinear transformation CXX = sin(π CSS / 2). This results in a Gaussian correlation matrix where the
variances of the individual x_i are fixed at unity. The weights W are then calculated
using the conventional PCA algorithm. The correlation matrix CXX is diagonalized,
and the eigenvectors with the largest eigenvalues are used to form the columns of
W to yield the best low-rank approximation CXX ≈ W W^T. Scaling the variables x_i
will result in a correlation matrix CXX with slightly different eigenvalues but with
the same rank.
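A minimal sketch of this transform-then-PCA rule, assuming zero-mean binary data and NumPy conventions (the function name is ours):

import numpy as np

def learn_weights(S, P):
    # S: (n_samples, N) array of +/-1 patterns; returns an (N, P) matrix W
    # such that W W^T approximates CXX.
    Css = S.T @ S / S.shape[0]            # empirical binary correlations CSS
    Cxx = np.sin(0.5 * np.pi * Css)       # CXX = sin(pi * CSS / 2)
    evals, evecs = np.linalg.eigh(Cxx)    # eigenvalues in ascending order
    top = np.argsort(evals)[::-1][:P]     # keep the P largest eigenvalues
    return evecs[:, top] * np.sqrt(np.clip(evals[top], 0.0, None))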
The utility of this transformation is illustrated by the following simple example.
Consider the distribution of N = 256 binary spins shown in Figure 3. Half of the
spins are chosen to be positive, and the location of the positive bump is arbitrary
under the periodic boundary conditions. Since the distribution is translationally
invariant, the correlations C_ij depend only on the relative distance between spins
|i − j|. The eigenvectors are the Fourier modes, and their eigenvalues correspond
to their overlap with a triangle wave. The eigenvalue spectrum of CSS is plotted in
Figure 3 as sorted by rank. In this particular case, the correlation matrix CSS
has N/2 positive eigenvalues with a corresponding range of values.

Now consider the matrix CXX = sin(π CSS / 2). The eigenvalues of CXX are also
shown in Figure 3. In contrast to the many different eigenvalues CSS, the spectrum
of the Gaussian correlation matrix CXX has only two positive eigenvalues, with all
the rest exactly equal to zero. The corresponding eigenvectors are a cosine and sine
function. The generative process can thus be understood as a linear combination
of the two eigenmodes to yield a sine function with arbitary phase. This function
is then clipped to yield the positive bump seen in the original binary distribution.
In comparison with the eigenvalues of CS S, the eigenvalue spectrum of CXX makes
obvious the low rank structure of the generative process. In this case, the original
binary distribution can be constructed using only P = 2 hidden variables, whereas
it is not clear from the eigenvalues of CSS what the appropriate number of modes
is. This illustrates the utility of determining the principal components from the
calculated Gaussian correlation matrix CXX rather than working directly with the
observable binary correlation matrix CSS.
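The rank-2 structure of this example can be reproduced in a few lines (a sketch; the numerical tolerance is an assumption):

import numpy as np

# All N translations of the half-up/half-down bump, and the effective
# rank of the transformed correlation matrix.
N = 256
bump = np.where(np.arange(N) < N // 2, 1.0, -1.0)
S = np.array([np.roll(bump, k) for k in range(N)])
Cxx = np.sin(0.5 * np.pi * (S.T @ S / N))
evals = np.linalg.eigvalsh(Cxx)
print(np.sum(evals > 1e-8 * evals.max()))   # prints 2: the cosine/sine pair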
Handwritten Digits Example
This model was also applied to a more complex data set. A large set of 16 x 16
black and white images of handwritten twos were taken from the US Post Office
digit database [9]. The pixel means and pixel correlations were directly computed
from the images. The generative model needs to be slightly modified to account for
the non-zero means in the binary outputs. This is accomplished by adding fixed
biases ~i to the Gaussian variables Xi before clipping:
The biases
pression:
~i
Si = sgn(~i + Xi).
(8)
can be related to the means of the binary outputs through the ex~i = J2CtX erf- 1 (Si).
(9)
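Writing the bias symbol as b_i as above, eq. (9) can be evaluated directly with SciPy (a sketch; the clipping guard is our addition):

import numpy as np
from scipy.special import erfinv

def biases_from_means(s_mean, cxx_diag=1.0):
    # Eq. (9): b_i = sqrt(2 CXX_ii) erfinv(<s_i>); the clip keeps erfinv
    # finite for empirical means of exactly +/-1.
    s_mean = np.clip(np.asarray(s_mean), -1.0 + 1e-9, 1.0 - 1e-9)
    return np.sqrt(2.0 * cxx_diag) * erfinv(s_mean)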
This allows the biases to be directly computed from the observed means of the
binary variables. Unfortunately, with non-zero biases, the relationship between
the Gaussian correlations CXX and binary correlations CSS is no longer the simple
expression found in Equation 7. Instead, the correlations are related by an integral
equation (Equation 10) that generalizes Equation 7 to the case of non-zero biases.
Given the empirical pixel correlations CSS for the handwritten digits, the integral
in Equation 10 is numerically solved for each pair of indices to yield the appropriate
Gaussian correlation matrix CXX.
Figure 4: Eigenvalue spectrum of CSS and CXX for handwritten images of twos. The
inset shows the P = 16 most significant eigenvectors of CXX arranged by rows. The
right side of the figure shows a nonlinear morph between two different instances of
a handwritten two using these eigenvectors.
The correlation matrices are diagonalized and
the resulting eigenvalue spectra are shown in Figure 4. The eigenvalues for CXX
again exhibit a characteristic drop that is steeper than the falloff in the spectrum
of the binary correlations CSS. The corresponding eigenvectors of CXX with the 16
largest positive eigenvalues are depicted in the inset of Figure 4. These eigenmodes
represent common image distortions such as rotations and stretching and appear
qualitatively similar to those found by the standard PCA algorithm.
A generative model with weights W corresponding to the P = 16 eigenvectors
shown in Figure 4 is used to fit the handwritten twos, and the utility of this nonlinear generative model is illustrated in the right side of Figure 4. The top and bottom
images in the figure are two different examples of a handwritten two from the data
set, and the generative model is used to morph between the two examples. The hidden values Yi for the original images are first determined for the different examples,
and the intermediate images in the morph are constructed by linearly interpolating in the vector space of the hidden units. Because of the clipping nonlinearity,
this induces a nonlinear mapping in the outputs with binary units being flipped in
a particular order as determined by the generative model. In contrast, morphing
using conventional PCA would result in a simple linear interpolation between the
two images, and the intermediate images would not look anything like the original
binary distribution [10].
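A rough sketch of such a morph follows; the paper does not specify how the hidden values are determined, so the pseudo-inverse estimate below is a crude stand-in for that inference step:

import numpy as np

def morph(s_a, s_b, W, b, n_steps=8):
    # Estimate hidden coordinates with a pseudo-inverse of W (our crude
    # approximation), interpolate linearly, and clip via eq. (8).
    pinv = np.linalg.pinv(W)
    y_a, y_b = pinv @ (s_a - b), pinv @ (s_b - b)
    return [np.where(b + W @ ((1 - t) * y_a + t * y_b) >= 0.0, 1.0, -1.0)
            for t in np.linspace(0.0, 1.0, n_steps)]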
The correlation matrix CXX also happens to contain some small negative eigenvalues. Even though the binary correlation matrix CSS is positive definite, the
transformation in Equation 10 does not guarantee that the resulting matrix CXx
will also be positive definite. The presence of these negative eigenvalues indicates
a shortcoming of the generative processs for modelling this data. In particular,
the clipped Gaussian model is unable to capture correlations induced by global
Learning a Continuous Hidden Variable Model for Binary Data
521
constraints in the data. As a simple illustration of this shortcoming in the generative model, consider the binary distribution defined by the probability density:
P({s}) ∝ lim_{β→∞} exp(−β Σ_{ij} s_i s_j). The states in this distribution are defined by
the constraint that the sum of the binary variables is exactly zero: Σ_i s_i = 0. Now,
for N ≥ 4, it can be shown that it is impossible to find a Gaussian distribution
whose visible binary variables match the negative correlations induced by this sum
constraint.
These examples illustrate the value of using the clipped generative model to learn
the correlation matrix of the underlying Gaussian variables rather than using the
correlations of the outputs directly. The clipping nonlinearity is convenient because
the relationship between the hidden variables and the output variables is particularly easy to understand. The learning algorithm differs from other nonlinear PCA
models and autoencoders because the inverse mapping function need not be explicitly learned [11, 12]. Instead, the correlation matrix is directly transformed from the
observable variables to the underlying Gaussian variables. The correlation matrix
is then diagonalized to determine the appropriate feedforward weights. This results
in a extremely efficient training procedure that is directly analogous to PCA for
continuous variables.
We acknowledge the support of Bell Laboratories, Lucent Technologies, and the
US-Israel Binational Science Foundation. We also thank H. S. Seung for helpful
discussions.
References
[1] Jolliffe, IT (1986). Principal Component Analysis. New York: Springer-Verlag.
[2] Bartholomew, DJ (1987) . Latent variable models and factor analysis. London:
Charles Griffin & Co. Ltd.
[3] Hinton, GE, Dayan, P & Revow, M (1996). Modeling the manifolds of images
of handwritten digits. IEEE Transactions on Neural networks 8,65- 74.
[4] Van Vreeswijk, C, Sompolinsky, H, & Abeles, M. (1999). Nonlinear statistics
of spike trains . In preparation.
[5] Ackley, DH, Hinton, GE, & Sejnowski, TJ (1985). A learning algorithm for
Boltzmann machines. Cognitive Science 9, 147-169.
[6] Cover, TM (1965). Geometrical and statistical properties of systems of linear
inequalities with applications in pattern recognition. IEEE Trans. Electronic
Comput. 14, 326-334.
[7] Tipping, ME (1999). Probabilistic visualisation of high-dimensional binary
data. Advances in Neural Information Processing Systems 11.
[8] Christoffersson, A (1975). Factor analysis of dichotomized variables. Psychometrika 40, 5-32.
[9] LeCun, Y et al. (1989). Backpropagation applied to handwritten zip code recognition. Neural Computation 1, 541-551.
[10] Bregler, C, & Omohundro, SM (1995). Nonlinear image interpolation using
manifold learning. Advances in Neural Information Processing Systems 7, 973-980.
[11] Hastie, T and Stuetzle, W (1989). Principal curves. Journal of the American
Statistical Association 84, 502-516.
[12] Demers, D, & Cottrell, G (1993) . Nonlinear dimensionality reduction. Advances
in Neural Information Processing Systems 5, 580-587.
Risk Sensitive Reinforcement Learning
Ralph Neuneier
Siemens AG, Corporate Technology
D-81730 Mtinchen, Germany
Oliver Mihatsch
Siemens AG, Corporate Technology
D-81730 Mtinchen, Germany
Ralph.Neuneier@mchp.siemens.de
Oliver.Mihatsch@mchp.siemens.de
Abstract
As already known, the expected return of a policy in Markov Decision Problems is not always the most suitable optimality criterion. For
many applications control strategies have to meet various constraints like
avoiding very bad states (risk-avoiding) or generating high profit within
a short time (risk-seeking) although this might probably cause significant
costs. We propose a modified Q-learning algorithm which uses a single
continuous parameter κ ∈ [−1, 1] to determine in which sense the resulting policy is optimal. For κ = 0, the policy is optimal with respect
to the usual expected return criterion, while κ → 1 generates a solution
which is optimal in the worst case. Analogously, the closer κ is to −1 the more
risk seeking the policy becomes. In contrast to other related approaches
in the field of MDPs we do not have to transform the cost model or to
increase the state space in order to take risk into account. Our new approach is evaluated by computing optimal investment strategies for an
artificial stock market.
1 WHY IT SOMETIMES PAYS TO ACT CAUTIOUSLY
Reinforcement learning (RL) deals with the computation of favorable control policies in
sequential decision tasks. Its theoretical framework of Markov Decision Problems (MDPs)
evaluates and compares policies by their expected (sometimes discounted or averaged) sum
of the immediate returns or costs per time step (Bertsekas & Tsitsiklis, 1996). But there are
numerous applications which require a more sophisticated control scheme: e. g. a policy
should take into account that bad outcomes or states may be possible even if they are very
rare because they are so disastrous, that they should be certainly avoided.
An obvious example is the field of finance where the main question is how to invest resources among various opportunities (e.g. assets like stocks, bonds, etc.) to achieve remarkable returns while simultaneously controlling the risk exposure of the investments
due to changing markets or economic conditions. Many traders try to achieve this by a
Markowitz-like portfolio management which distributes capital according to return and risk
estimates of the assets. A new approach using reinforcement learning techniques which
additionally integrates trading costs and other market imperfections has been proposed in
Neuneier, 1998. Here, these algorithms are naturally extended such that an explicit risk
control is now possible. The investor can decide how much risk shelhe is willing to accept
and then compute an optimal risk-averse investment strategy. Similar trade-off scenarios
can be formulated in robotics, traffic control and further application areas.
The fact that the popular expected value criterion is not always suitable has been already
known in the field of AI (Koenig & Simmons, 1994), control theory and reinforcement
learning (Heger, 1994 and Szepesvari, 1997). Several techniques have been proposed to
handle this problem. The most obvious way is to transform the sum of returns Σ_t r_t using
an appropriate utility function U which reflects the desired properties of the solution. Unfortunately, interesting nonlinear utility functions incorporating the variance of the return,
such as U(Σ_t r_t) = Σ_t r_t − λ(Σ_t r_t − E(Σ_t r_t))², lead to non-Markovian decision
problems. The popular class of exponential utility functions U(Σ_t r_t) = exp(λ Σ_t r_t)
preserves the Markov property but requires time dependent policies even for discounted
infinite horizon MDPs. Furthermore, it is not possible to formulate a corresponding model-free learning algorithm. A further alternative changes the state space model by including
past returns as an additional state element at the cost of a higher dimensionality of the
MDP. Furthermore, it is not always clear in which way the states should be augmented.
One may also transform the cost model, i. e. by punishing large losses stronger than minor costs. While requiring a significant amount of prior knowledge, this also increases the
complexity of the MDP.
In contrast to these approaches we modify the popular Q-learning algorithm by introducing
a control parameter which determines in which sense the resulting policy is optimal. Intuitively and loosely speaking, our algorithm simulates the learning behavior of an optimistic
(pessimistic) person by overweighting (underweighting) experiences which are more positive (negative) than expected. This main idea will be made more precise in section 2 and
mathematically thoroughly analyzed in section 3. Using artificial data, we demonstrate
some properties of the new algorithm by constructing an optimal risk-avoiding investment
strategy (section 4).
2 RISK SENSITIVE Q-LEARNING
For brevity we restrict ourselves to the subclass of infinite horizon discounted Markov decision problems (MDP). Furthermore, we assume the immediate rewards being deterministic
functions of the current state and control action. Let S = {1, ..., n} be the finite state
space and U be the finite action space. Transition probabilities and immediate rewards are
denoted by p_ij(u) and g_i(u), respectively. γ denotes the discount factor. Let Π be the set
of all deterministic policies mapping states to control actions.
A commonly used objective is to learn a policy π that maximizes

Q^π(i, u) := g_i(u) + E{ Σ_{t=1}^∞ γ^t g_{i_t}(π(i_t)) },   (1)
quantifying the expected reward if one executes control action u in state i and follows
the policy π thereafter. It is a well-known result that the optimal Q-values Q*(i, u) :=
max_{π∈Π} Q^π(i, u) satisfy the following optimality equation
Q*(i, u) = g_i(u) + γ Σ_{j∈S} p_ij(u) max_{u'∈U} Q*(j, u')   ∀ i ∈ S, u ∈ U.   (2)
Any policy π with π(i) = argmax_{u∈U} Q*(i, u) is optimal with respect to the expected
reward criterion.
The Q-function Q^π averages over the outcome of all possible trajectories (series of states)
of the Markov process generated by following the policy π. However, the outcome of a
specific realization of the Markov process may deviate significantly from this mean value.
The expected reward criterion does not consider any risk, although the cases where the
discounted reward falls considerably below the mean value is of a living interest for many
applications. Therefore, depending on the application at hand the expected reward approach is not always appropriate. Alternatively, Heger (1994) and Littman & Szepesvari
(1996) present a performance criterion that exclusively focuses on risk avoiding policies:
maximize

Q̲^π(i, u) := g_i(u) + inf_{(i_1, i_2, ...): p(i_1, i_2, ...) > 0} Σ_{t=1}^∞ γ^t g_{i_t}(π(i_t)).   (3)
The Q-function Q̲^π(i, u) denotes the worst possible outcome if one executes control action
u in state i and follows the policy π thereafter. The corresponding optimality equation for
Q̲*(i, u) := max_{π∈Π} Q̲^π(i, u) is given by
Q̲*(i, u) = g_i(u) + γ min_{j∈S: p_ij(u)>0} max_{u'∈U} Q̲*(j, u').   (4)
Any policy π̲ satisfying π̲(i) = argmax_{u∈U} Q̲*(i, u) is optimal with respect to this minimal reward criterion. In most real world applications this approach is too restrictive because
it takes very rare events (that in practice never happen) fully into account. This usually leads
to policies with a lower average performance than the application requires. An investment
manager, for instance, who acts with respect to this very pessimistic objective function
will not invest at all.
To handle the trade-off between a sufficient average performance and a risk avoiding (risk
seeking) behavior, we propose a family of new optimality equations parameterized by a
meta-parameter κ (−1 < κ < 1):

0 = Σ_{j∈S} p_ij(u) X^κ( g_i(u) + γ max_{u'∈U} Q_κ(j, u') − Q_κ(i, u) )   ∀ i ∈ S, u ∈ U   (5)

where X^κ(x) := (1 − κ sign(x)) x. (In the next section we will show that a unique solution
Q_κ of the above equation (5) exists.) Obviously, for κ = 0 we recover equation (2),
the optimality equation for the expected reward criterion. If we choose κ to be positive
(0 < κ < 1) then we overweight negative temporal differences

g_i(u) + γ max_{u'∈U} Q_κ(j, u') − Q_κ(i, u) < 0   (6)
with respect to positive ones. Loosely speaking, we overweight transitions to states where
the future return is lower than the average one. On the other hand, we underweight transitions to states that promise a higher return than in the average. Thus, an agent that behaves
according to the policy π_κ(i) := argmax_{u∈U} Q_κ(i, u) is risk avoiding if κ > 0. In the
limit κ → 1 the policy π_κ approaches the optimal worst-case policy π̲, as we will show
in the following section. (To get an intuition about this, the reader may easily check that
the optimal worst-case Q-value Q̲* fulfills the modified optimality equation (5) for κ = 1.)
Similarly, the policy π_κ becomes risk seeking if we choose κ to be negative.
It is straightforward to formulate a risk sensitive Q-learning algorithm based on the
modified optimality equation (5). Let Q_κ(i, u; w) be a parametric approximation of the
Q-function Q_κ(i, u). The states and actions encountered at time step k during simulation
are denoted by i_k and u_k respectively. At each time step apply the following update rule:
d^(k) = g_{i_k}(u_k) + γ max_{u'∈U} Q_κ(i_{k+1}, u'; w^(k)) − Q_κ(i_k, u_k; w^(k)),
w^(k+1) = w^(k) + α^(k) X^κ(d^(k)) ∇_w Q_κ(i_k, u_k; w^(k)),   (7)
where α^(k) denotes a stepsize sequence. The following section analyzes the properties of
the new optimality equations and the corresponding Q-learning algorithm.
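For the tabular case, where ∇_w Q_κ reduces to an indicator, the update (7) can be sketched as follows (array layout and names are our assumptions):

import numpy as np

def chi(kappa, x):
    # X^kappa(x) = (1 - kappa * sign(x)) * x from eq. (5).
    return (1.0 - kappa * np.sign(x)) * x

def q_update(Q, i, u, g, i_next, kappa, gamma, alpha):
    # One tabular step of the risk sensitive update (7); Q is |S| x |U|
    # and g is the immediate reward g_i(u).
    d = g + gamma * Q[i_next].max() - Q[i, u]   # temporal difference d^(k)
    Q[i, u] += alpha * chi(kappa, d)            # overweight bad news if kappa > 0
    return Q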
3 PROPERTIES OF THE RISK SENSITIVE Q-FUNCTION
Due to space limitations we are not able to give detailed proofs of our results. Instead, we
focus on interpreting their practical consequences. The proofs will be published elsewhere.
Before formulating the mathematical results, we introduce some notation to make the exposition more concise. Using an arbitrary stepsize 0 < α < 1, we define the value iteration
operator corresponding to our modified optimality equation (5) as

T_{α,κ}[Q](i, u) := Q(i, u) + α Σ_{j∈S} p_ij(u) X^κ( g_i(u) + γ max_{u'∈U} Q(j, u') − Q(i, u) ).   (8)
The operator T_{α,κ} acts on the space of Q-functions. For every Q-function Q and every
state-action pair (i, u) we define N^κ[Q](i, u) to be the set of all successor states j for
which max_{u'∈U} Q(j, u') attains its minimum:

N^κ[Q](i, u) := { j ∈ S | p_ij(u) > 0 and max_{u'∈U} Q(j, u') = min_{j'∈S: p_ij'(u)>0} max_{u'∈U} Q(j', u') }.   (9)

Let p^κ[Q](i, u) := Σ_{j∈N^κ[Q](i,u)} p_ij(u) be the probability of transitions to such successor
states.
We have the following lemma ensuring the contraction property of T_{α,κ}.

Lemma 1 (Contraction Property) Let |Q| := max_{i∈S, u∈U} |Q(i, u)|, 0 < α < 1 and 0 ≤ γ < 1. Then

|T_{α,κ}[Q_1] − T_{α,κ}[Q_2]| ≤ (1 − α(1 − |κ|)(1 − γ)) |Q_1 − Q_2|   ∀ Q_1, Q_2.   (10)

The operator T_{α,κ} is contracting, because 0 < 1 − α(1 − |κ|)(1 − γ) < 1.
The lemma has several important consequences.
1. The risk sensitive optimality equation (5), i.e. T_{α,κ}[Q] = Q, has a unique solution Q_κ
for all −1 < κ < 1.
2. The value iteration procedure Q^new := T_{α,κ}[Q] converges towards Q_κ (a sketch of this iteration follows the list).
3. The existing convergence results for traditional Q-learning (Bertsekas & Tsitsiklis
1996, Tsitsiklis & Van Roy 1997) remain also valid in the risk sensitive case κ ≠ 0.
Particularly, risk sensitive Q-learning (7) converges with probability one in the case
of lookup table representations as well as in the case of optimal stopping problems
combined with linear representations.
4. The speed of convergence for both risk sensitive value iteration and Q-learning becomes worse if |κ| → 1. We can remedy this to some extent if we increase the stepsize
α appropriately.
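When the model is known, the operator of eq. (8) and the value iteration of consequence 2 can be sketched directly (the tensor layout and default parameters below are our assumptions):

import numpy as np

def value_iteration(g, p, kappa, gamma=0.95, alpha=0.5, n_iter=500):
    # Iterate T_{alpha,kappa} of eq. (8) to its fixed point Q_kappa.
    # g: (S, U) immediate rewards; p: (S, U, S) transition probabilities.
    chi = lambda x: (1.0 - kappa * np.sign(x)) * x
    Q = np.zeros_like(g, dtype=float)
    for _ in range(n_iter):
        # d[i, u, j] = g_i(u) + gamma * max_u' Q(j, u') - Q(i, u)
        d = g[:, :, None] + gamma * Q.max(axis=1)[None, None, :] - Q[:, :, None]
        Q = Q + alpha * np.einsum('iuj,iuj->iu', p, chi(d))
    return Q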
Let π_κ be a greedy policy with respect to the unique solution Q_κ of our modified optimality
equation; that is π_κ(i) = argmax_{u∈U} Q_κ(i, u). The following theorem examines the
performance of π_κ for the risk avoiding case κ ≥ 0. It gives us a feeling about the expected
outcome Q^{π_κ} and the worst possible outcome Q̲^{π_κ} of policy π_κ for different values of κ.
The theorem clarifies the limiting behavior of π_κ if κ → 1.
Theorem 2 Let 0 ≤ κ < 1. The following inequalities hold componentwise, i.e. for each
pair (i, u) ∈ S × U:

0 ≤ Q* − Q^{π_κ} ≤ (2κγ / (1 − γ)) (Q* − Q̲*)   (11)

0 ≤ p^κ[Q_κ] (Q̲* − Q̲^{π_κ}) ≤ ((1 − κ) γ / (2κ (1 − γ))) (Q* − Q̲*)   (12)

Moreover, lim_{κ→0} Q^{π_κ} = Q* and lim_{κ→1⁻} Q̲^{π_κ} = Q̲*.
The difference Q* − Q̲* between the optimal expected reward and the optimal worst case
reward is crucial in the above inequalities. It measures the amount of risk being inherent in
our MDP at hand. Besides the value of κ, this quantity essentially influences the difference
between the performance of the policy π_κ and the optimal performance with respect to
both, the expected reward and the worst case criterion. The second inequality (12) states
that the performance of policy π_κ in the worst case sense tends to the optimal worst case
performance if κ → 1. The "speed of convergence" is influenced by the quantity p^κ[Q_κ],
i.e. the probability that a worst case transition really occurs. (Note that p^κ[Q_κ] is bounded
from below.) A higher probability p^κ[Q_κ] of worst case transitions implies a stronger risk
avoiding attitude of the policy π_κ.
4 EXPERIMENTS: RISK-AVERSE INVESTMENT DECISIONS
Our algorithm is now tested on the task of constructing an optimal investment policy for an
artificial stock price analogous to the empirical analysis in Neuneier, 1998. The task, illustrated as an MDP in fig. 1, is to decide at each time step (e.g. each day or after each major
event on the market) whether to buy the stock, thereby speculating on increasing stock
prices or to keep the capital in cash which avoids potential losses due to decreasing stock
prices.
Figure 1. The Markov Decision Problem: the state x_t = ($_t, K_t)' comprises the market $_t and the portfolio K_t; the investor applies actions a_t = μ(x_t) according to policy μ; the market evolves with transition probabilities p(x_{t+1}|x_t) and yields the return r(x_t, a_t, $_{t+1}).
Figure 2. A realization of the artificial stock price for 300 time
steps. It is obvious that the
price follows an increasing trend
but with higher values a sudden drop to low values becomes
more and more probable.
It is assumed that the investor is not able to influence the market by the investment decisions. This leads to an MDP with some of the state elements being uncontrollable and
results in two computationally important implications: first, one can simulate the investments
on historical data without investing (and potentially losing) real money. Second, one can
formulate very efficient (memory saving) and more robust Q-learning algorithms. Due to
space restriction we skip a detailed description of these algorithms and refer the interested
reader to Neuneier, 1998.
The artificial stock price is in the range of [1, 2]. The transition probabilities are chosen
such that the stock market simulates a situation where the price follows an increasing trend
but with higher values a drop to very low values becomes more and more probable (fig. 2).
The state vector consists of the current stock price and the current investment, i. e. the
amount of money invested in stocks or cash. Changing the investment from cash to stocks
results in some transaction costs consisting of variable and fixed terms. These costs are
essential to define the investment problem as an MDP because they couple the actions made
at different time steps. Otherwise we could solve the problem by a pure prediction of the
next stock price. The function which quantifies the immediate return for each time step is
defined as follows: if the capital is invested in cash, then there is nothing to earn even if
the stock price increases; if the investor has bought stocks, the return equals the relative
change of the stock price weighted by the invested amount of capital minus the transaction
costs which apply if one changed from cash to stocks.
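A sketch of this return function; the transaction cost constants c_fix and c_rel below are illustrative assumptions, not values from the paper:

def immediate_return(prev_pos, pos, price, price_next,
                     capital=1.0, c_fix=0.005, c_rel=0.001):
    # pos, prev_pos: 1 = capital in stocks, 0 = capital in cash.
    gain = pos * capital * (price_next - price) / price    # 0 while in cash
    cost = (c_fix + c_rel * capital) if (pos == 1 and prev_pos == 0) else 0.0
    return gain - cost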
Figure 3. Left: Risk neutral policy, κ = 0. Right: A small bias of κ = 0.3 against risk changes the policy if one is not invested (transaction costs apply in this case).
Figure 4. Left: κ = 0.5 yields a stronger risk averse attitude. Right: With κ = 0.8 the policy also becomes more cautious if already invested in stocks.
Figure 5. Left: κ = 0.9 leads to a policy which invests in stocks in only 5 cases. Right: The worst case solution never invests because there is always a positive probability of decreasing stock prices.
As a reinforcement learning method, Q-learning has to interact with the environment (here
the stock market) to learn optimal investment behavior. Thus, a training set of 2000 data
points is generated. The training phase is divided into epochs which consist of as many
trials as there are data points in the training set. At every trial the algorithm randomly selects a stock
price from the data set, chooses a random investment state and updates the tabulated Q-values according to the procedure given in Neuneier, 1998. The only difference of our new
risk averse Q-learning is that negative experiences, i.e. returns smaller than the mean,
are overweighted in comparison to positive experiences using the κ-factor of eq. (7). Using
different κ values from 0 (recovering the original Q-learning procedure) to 1 (leading to
worst case Q-learning) we plot the resulting policies as mappings from the state space to
control actions in figures 3 to 5. Obviously, with increasing κ the investor acts more and
more cautiously because there are fewer states associated with an investment decision for
stocks. In the extreme case of κ = 1, there is no stock investment at all in order to avoid
any loss. The policy is not useful in practice. This supports our introductory comments that
worst case Q-learning is not appropriate in many tasks.
Figure 6. The quantiles of the distributions of the discounted sum of returns for κ = 0.2 (o) and κ = 0.4 (+) are plotted against the quantiles for the classical risk neutral approach κ = 0. The distributions only differ significantly for negative accumulated returns (left tail of the distributions).
For further analysis, we specify a risky start state i_0 for which a sudden drop of the stock
price in the near future is very probable. Starting at i_0 we compute the cumulated discounted rewards of 10000 different trajectories following the policies π_0, π_0.2 and π_0.4
which have been generated using κ = 0 (risk neutral), κ = 0.2 and κ = 0.4. The resulting
three data sets are compared using a quantile-quantile plot whose purpose is to determine
whether the samples come from the same distribution type. If they do, the plot will be
linear. Fig. 6 clearly shows that for higher κ-values the left tail of the distribution (negative returns) bends up, indicating fewer losses. On the other hand there is
no significant difference for positive quantiles. In contrast to naive utility functions which
penalize high variance in general, our risk sensitive Q-learning asymmetrically reduces
the probability of losses, which may be more suitable for many applications.
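Given two samples of simulated return sums, the quantile comparison itself is straightforward to reproduce (a sketch; the function name and quantile grid are ours):

import numpy as np

def qq_points(returns_a, returns_b, n_q=99):
    # Empirical quantile pairs for a QQ-plot of two samples of
    # discounted return sums.
    q = np.linspace(0.01, 0.99, n_q)
    return np.quantile(returns_a, q), np.quantile(returns_b, q)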
5 CONCLUSION
We have formulated a new Q-learning algorithm which can be continuously tuned towards
risk seeking or risk avoiding policies. Thus, it is possible to construct control strategies
which are more suitable for the problem at hand by only small modifications of the Q-learning
algorithm. The advantage of our approach in comparison to already known solutions is that
we need to change neither the cost nor the state model. We can prove that our algorithm
converges under the usual assumptions. Future work will focus on the connections between
our approach and the utility theoretic point of view.
References
D. P. Bertsekas, J. N. Tsitsiklis (1996) Neuro-Dynamic Programming. Athena Scientific.
M. Heger (1994) Consideration of Risk and Reinforcement Learning, in Machine Learning, proceedings of the 11 th International Conference, Morgan Kaufmann Publishers.
S. Koenig, R. G. Simmons (1994) Risk-Sensitive Planning with Probabilistic Decision Graphs.
Proc. of the Fourth Int. Conf. on Principles of Knowledge Representation and Reasoning (KR).
M. L. Littman, Cs. Szepesvari (1996), A generalized reinforcement-learning model: Convergence
and applications. In International Conference on Machine Learning '96, Bari.
R. Neuneier (1998) Enhancing Q-learning for Optimal Asset Allocation, in Advances in Neural Information Processing Systems 10, Cambridge, MA: MIT Press.
M. L. Puterman (1994), Markov Decision Processes, John Wiley & Sons.
Cs. Szepesvari (1997) Non-Markovian Policies in Sequential Decision Problems, Acta Cybernetica.
J. N. Tsitsiklis, B. Van Roy (1997) Approximate Solutions to Optimal Stopping Problems, in Advances in Neural Information Processing Systems 9, Cambridge, MA: MIT Press.
Learning Lie Groups for Invariant Visual Perception*
Rajesh P. N. Rao and Daniel L. Ruderman
Sloan Center for Theoretical Neurobiology
The Salk Institute
La Jolla, CA 92037
{rao,ruderman}@salk.edu
Abstract
One of the most important problems in visual perception is that of visual invariance: how are objects perceived to be the same despite undergoing transformations such as translations, rotations or scaling? In this paper, we describe a
Bayesian method for learning invariances based on Lie group theory. We show
that previous approaches based on first-order Taylor series expansions of inputs
can be regarded as special cases of the Lie group approach, the latter being capable of handling in principle arbitrarily large transformations. Using a matrix-exponential based generative model of images, we derive an unsupervised algorithm for learning Lie group operators from input data containing infinitesimal transformations. The on-line unsupervised learning algorithm maximizes
the posterior probability of generating the training data. We provide experimental results suggesting that the proposed method can learn Lie group operators for
handling reasonably large 1-D translations and 2-D rotations.
1 INTRODUCTION
A fundamental problem faced by both biological and machine vision systems is the recognition
of familiar objects and patterns in the presence of transformations such as translations, rotations
and scaling. The importance of this problem was recognized early by visual scientists such as J. J.
Gibson who hypothesized that "constant perception depends on the ability of the individual to detect the invariants" [6]. Among computational neuroscientists, Pitts and McCulloch were perhaps
the first to propose a method for perceptual invariance ("knowing universals") [12]. A number of
other approaches have since been proposed [5, 7, 10], some relying on temporal sequences of input
patterns undergoing transformations (e.g. [4]) and others relying on modifications to the distance
metric for comparing input images to stored templates (e.g. [15]).
In this paper, we describe a Bayesian method for learning invariances based on the notion of continuous transformations and Lie group theory. We show that previous approaches based on first-order
Taylor series expansions of images [1, 14] can be regarded as special cases of the Lie group approach. Approaches based on first-order models can account only for small transformations due
to their assumption of a linear generative model for the transformed images. The Lie approach on
the other hand utilizes a matrix-exponential based generative model which can in principle handle
arbitrarily large transformations once the correct transformation operators have been learned. Using Bayesian principles, we derive an on-line unsupervised algorithm for learning Lie group operators from input data containing infinitesimal transformations. Although Lie groups have previously
"This research was supported by the Alfred P. Sloan Foundation.
been used in visual perception [2], computer vision [16] and image processing [9], the question of
whether it is possible to learn these groups directly from input data has remained open. Our preliminary experimental results suggest that in the two examined cases of 1-D translations and 2-D
rotations, the proposed method can learn the corresponding Lie group operators with a reasonably
high degree of accuracy, allowing the use of these learned operators in transformation-invariant
vision.
2 CONTINUOUS TRANSFORMATIONS AND LIE GROUPS
Suppose we have a point (in general, a vector) I0 which is an element in a space F. Let TI0 denote a
transformation of the point I0 to another point, say I1. The transformation operator T is completely
specified by its actions on all points in the space F. Suppose T belongs to a family of operators
𝒯. We will be interested in the cases where 𝒯 is a group, i.e. there exists a mapping f : 𝒯 × 𝒯 →
𝒯 from pairs of transformations to another transformation such that (a) f is associative, (b) there
exists a unique identity transformation, and (c) for every T ∈ 𝒯, there exists a unique inverse
transformation of T. These properties seem reasonable to expect in general for transformations on
images.
Continuous transformations are those which can be made infinitesimally small. Due to their favorable properties as described below, we will be especially concerned with continuous transformation groups or Lie groups. Continuity is associated with both the transformation operators T and
the group 𝒯. Each T ∈ 𝒯 is assumed to implement a continuous mapping from F to F. To be
concrete, suppose T is parameterized by a single real number x. Then, the group 𝒯 is continuous if the function T(x) : ℝ → 𝒯 is continuous, i.e. any T ∈ 𝒯 is the image of some x ∈ ℝ
and any continuous variation of x results in a continuous variation of T. Let T(0) be equivalent
to the identity transformation. Then, as x → 0, the transformation T(x) gets arbitrarily close to
identity. Its effect on I0 can be written as (to first order in x): T(x)I0 ≈ (1 + xG)I0 for some
matrix G which is known as the generator of the transformation group. A macroscopic transformation I(x) = T(x)I0 can be produced by chaining together a number of these infinitesimal
transformations. For example, by dividing the parameter x into N equal parts and performing each
transformation in turn, we obtain:

I(x) = (1 + (x/N)G)^N I0    (1)

In the limit N → ∞, this expression reduces to the matrix exponential equation:

I(x) = e^{xG} I0    (2)

where I0 is the initial or "reference" input. Thus, each of the elements of our one-parameter Lie
group can be written as: T(x) = e^{xG}. The generator G of the Lie group is related to the derivative
of T(x) with respect to x: dT/dx = GT. This suggests an alternate way of deriving Equation 2.
Consider the Taylor series expansion of a transformed input I(x) in terms of a previous input I(0):

I(x) = I(0) + (dI(0)/dx) x + (d²I(0)/dx²) x²/2 + ...    (3)

where x denotes the relative transformation between I(x) and I(0). Defining dI/dx = GI for some
operator matrix G, we can rewrite Equation 3 as: I(x) = e^{xG} I(0), which is the same as Equation 2
with I0 = I(0). Thus, some previous approaches based on first-order Taylor series expansions
[1, 14] can be viewed as special cases of the Lie group model.
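As an illustration of Equations 1 and 2, the following sketch (in Python with NumPy/SciPy, not part of the original paper) builds a simple finite-difference stand-in for the translation generator and checks that chaining many infinitesimal steps approaches the matrix exponential; the image size, shift amount and generator discretization are all illustrative assumptions.

import numpy as np
from scipy.linalg import expm

N = 20                          # number of pixels (illustrative)

# Finite-difference stand-in for the translation generator G (dI/dx = G I):
# each row applies a central-difference derivative with periodic wrap-around.
G = np.zeros((N, N))
for j in range(N):
    G[j, (j + 1) % N] = 0.5
    G[j, (j - 1) % N] = -0.5

I0 = np.random.rand(N)          # reference image I(0)
x = 3.7                         # macroscopic translation parameter

Ix = expm(x * G) @ I0           # Equation 2: I(x) = e^{xG} I(0)

# Equation 1: chaining many infinitesimal steps approaches the exponential.
n = 1000
Ix_chain = np.linalg.matrix_power(np.eye(N) + (x / n) * G, n) @ I0
assert np.allclose(Ix, Ix_chain, atol=1e-2)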
3 LEARNING LIE TRANSFORMATION GROUPS
Our goal is to learn the generators G of particular Lie transformation groups directly from input data
containing examples of infinitesimal transformations. Note that learning the generator of a transformation effectively allows us to remain invariant to that transformation (see below). We assume
that during natural temporal sequences of images containing transformations, there are "small" image changes corresponding to deterministic sets of pixel changes that are independent of what the
R. P. N. Rao and D. L. Ruderman
812
[Figure 1 graphic: (a) Network 1 estimates object identity and Network 2 estimates the transformation; (b) a locally recurrent network accumulating the terms (x^k G^k / k!) I(0); (c) the interpolation function Q.]
Figure 1: Network Architecture and Interpolation Function. (a) An implementation of the proposed approach to invariant vision involving two cooperating recurrent networks, one estimating transformations and
the other estimating object features. The latter supplies the reference image I(0) to the transformation network. (b) A locally recurrent elaboration of the transformation network for implementing Equation 9. The
network computes e^{xG} I(0) = I(0) + Σ_k (x^k G^k / k!) I(0). (c) The interpolation function Q used to generate
training data (assuming periodic, band-limited signals).
actual pixels are. The rearrangements themselves are universal as in for example image translations. The question we address is: can we learn the Lie group operator G given simply a series of
"before" and "after" images?
Let the n × 1 vector I(0) be the "before" image and I(x) the "after" image containing the infinitesimal transformation. Then, using results from the previous section, we can write the following
stochastic generative model for images:

I(x) = e^{xG} I(0) + n    (4)

where n is assumed to be a zero-mean Gaussian white noise process with variance σ². Since learning using this full exponential generative model is difficult due to multiple local minima, we restrict
ourselves to transformations that are infinitesimal. The higher order terms then become negligible
and we can rewrite the above equation in a more tractable form:

ΔI = xG I(0) + n    (5)

where ΔI = I(x) − I(0) is the difference image. Note that although this model is linear, the generator G learned using infinitesimal transformations is the same matrix that is used in the exponential
model. Thus, once learned, this matrix can be used to handle larger transformations as well (see
experimental results).
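A minimal sketch of sampling training pairs from the infinitesimal generative model in Equation 5; the noise level, shift size and helper name are assumptions for illustration, not values from the paper.

import numpy as np

def make_training_pair(G, n_pixels=20, shift=0.5, sigma=0.01, rng=None):
    """Sample (I(0), dI, x) from the infinitesimal model dI = x G I(0) + n."""
    rng = rng or np.random.default_rng()
    I0 = rng.random(n_pixels)                 # random reference image
    x = shift * rng.choice([-1.0, 1.0])       # small left/right shift
    noise = sigma * rng.standard_normal(n_pixels)
    return I0, x * (G @ I0) + noise, x        # Equation 5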
Suppose we are given M image pairs as data. We wish to find the n x n matrix G and the transformations x which generated the data set. To do so, we take a Bayesian maximum a posteriori
approach using Gaussian priors on x and G. The negative log of the posterior probability of generating the data is given by:
E = −log P[G, x | I(x), I(0)] = (1/2σ²)(ΔI − xG I(0))ᵀ(ΔI − xG I(0)) + (1/2σₓ²) x² + (1/2) gᵀ C⁻¹ g    (6)

where σₓ² is the variance of the zero-mean Gaussian prior on x, g is the n² × 1 vector form of G,
and C is the covariance matrix associated with the Gaussian prior on G. Extending this equation
to multiple image data is accomplished straightforwardly by summing the data-driven term over
the image pairs (we assume G is fixed for all images although the transformation x may vary). For
the experiments, σ, σₓ and C were chosen to be fixed scalar values but it may be possible to speed
up learning and improve accuracy by choosing C based on some knowledge of what we expect for
infinitesimal image transformations (for example, we may define each entry in C to be a function
only of the distance between pixels associated with the entry and exploit the fact that C needs to
be symmetric; the efficacy of this choice is currently under investigation).
The n × n generator matrix G can be learned in an unsupervised manner by performing gradient
descent on E, thereby maximizing the posterior probability of generating the data:

Ġ = −α ∂E/∂G = α (ΔI − xG I(0)) (x I(0))ᵀ − α c(G)    (7)

where α is a positive constant that governs the learning rate and c(G) is the n × n matrix form of
the n² × 1 vector C⁻¹ g. The learning rule for G above requires the value of x for the current image
pair to be known. We can estimate x by performing gradient descent on E with respect to x (using
a fixed previously learned value for G):
ẋ = −β ∂E/∂x = β [(G I(0))ᵀ (ΔI − xG I(0)) − x/σₓ²]    (8)
The learning process thus involves alternating between the fast estimation of x for the given image
pair and the slower adaptation of the generator matrix G using this x. Figure 1 (a) depicts a possible network implementation of the proposed approach to invariant vision. The implementation,
which is reminiscent of the division of labor between the dorsal and ventral streams in primate visual cortex [3], uses two parallel but cooperating networks, one estimating object identity and the
other estimating object transformations. The object network is based on a standard linear generative model of the form: I(0) = Ur + n where U is a matrix of learned object "features" and
r is the feature vector for the object in I(0) (see, for example, [11, 13]). Perceptual constancy is
achieved due to the fact that the estimate of object identity remains stable in the first network as the
second network attempts to account for any transformations being induced in the image, appropriately conveying the type of transformation being induced in its estimate for x (see [14] for more
details).
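The alternating scheme can be sketched as follows, assuming an isotropic prior C = (1/c_inv) I so that c(G) reduces to c_inv·G; the learning rates and loop counts are illustrative assumptions.

import numpy as np

def learn_generator(pairs, n, alpha=0.4, beta=0.1, c_inv=1e-4,
                    sigma_x2=1.0, n_x_steps=20):
    """Alternate fast estimation of x (Eq. 8) with slow updates of G (Eq. 7).

    pairs: iterable of (I0, dI) arrays, each of length n.
    """
    G = np.zeros((n, n))
    for I0, dI in pairs:
        # Fast inner loop: estimate x for this pair with G held fixed (Eq. 8).
        x = 0.0
        for _ in range(n_x_steps):
            GI0 = G @ I0
            x += beta * (GI0 @ (dI - x * GI0) - x / sigma_x2)
        # Slow outer step: gradient update of G (Eq. 7),
        # with c(G) = C^{-1} g reshaped to n x n (here C = (1/c_inv) I).
        residual = dI - x * (G @ I0)
        G += alpha * (np.outer(residual, x * I0) - c_inv * G)
    return G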
The estimation rule for x given above is based on a first-order model (Equation 5) and is therefore
useful only for estimating small (infinitesimal) transformations. A more general rule for estimating
larger transformations is obtained by performing gradient descent on the optimization function
given by the matrix-exponential generative model (Equation 4):

ẋ = γ (e^{xG} G I(0))ᵀ (I(x) − e^{xG} I(0)) − x/σₓ²    (9)
Figure 1 (b) shows a locally recurrent network implementation of the matrix exponential computation required by the above equation.
4 EXPERIMENTAL RESULTS
Training Data and Interpolation Function. For the purpose of evaluating the algorithm, we generated synthetic training data by subjecting a randomly generated image (containing uniformly random pixel intensities) to a known transformation. Consider a given 1-D image I(0) with image
pixels given by I(j), j = 1, ..., N. To be able to continuously transform I(0) sampled at discrete
pixel locations by infinitesimal (sub-pixel) amounts, we need to employ an interpolation function.
We make use of the Shannon-Whittaker theorem [8] stating that any band-limited signal I(j), with
j being any real number, is uniquely specified by its sufficiently close equally spaced discrete samples. Assuming that our signal is periodic, i.e. I(j + N) = I(j) for all j, the Shannon-Whittaker
theorem in one dimension can be written as: I(j) = Σ_{m=0}^{N−1} I(m) Σ_{r=−∞}^{∞} sinc[π(j − m − Nr)]
where sinc[x] = sin(x)/x. After some algebraic manipulation and simplification, this can be
reduced to: I(j) = Σ_{m=0}^{N−1} I(m) Q(j − m) where the interpolation function Q is given by:
[Figure 2 graphic: (a) the analytically derived operator matrix, its operator for pixel 10, and the real and imaginary parts of its eigenvalues; (b) the same plots for the learned matrix.]
Figure 2: Learned Lie Operators for 1-D Translations. (a) Analytically-derived 20 × 20 Lie operator
matrix G, operator for the 10th pixel (10th row of G), and plot of real and imaginary parts of the eigenvalues
of G. (b) Learned G matrix, 10th operator, and plot of eigenvalues of the learned matrix.
Q(x) = (1/N)[1 + 2 Σ_{p=1}^{N/2−1} cos(2πpx/N)]. Figure 1 (c) shows this interpolation function. To
translate I(0) by an infinitesimal amount x ∈ ℝ, we use: I(j + x) = Σ_{m=0}^{N−1} I(m) Q(j + x − m).
Similarly, to rotate or translate 2-D images, we use the 2-D analog of the above. In addition to
being able to generate images with known transformations, the interpolation function also allows
one to derive an analytical expression for the Lie operator matrix directly from the derivative of
Q. This allows us to evaluate the results of learning. Figure 2 (a) shows the analytically-derived G
matrix for 1-D infinitesimal translations of 20-pixel images (bright pixels = positive values, dark
= negative). Also shown alongside is one of the rows of G (row 10) representing the Lie operator
centered on pixel 10.
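In code, the interpolation function and sub-pixel translation might look like the sketch below (assuming N even and a periodic signal, as in the text); the function names are illustrative.

import numpy as np

def Q(x, N):
    """Shannon-Whittaker kernel for a periodic signal:
    Q(x) = (1/N) * [1 + 2 * sum_{p=1}^{N/2-1} cos(2*pi*p*x/N)]."""
    p = np.arange(1, N // 2)
    return (1.0 / N) * (1.0 + 2.0 * np.sum(np.cos(2 * np.pi * p * x / N)))

def translate(I, x):
    """Translate a periodic 1-D image by a real-valued amount x:
    I(j + x) = sum_m I(m) Q(j + x - m)."""
    N = len(I)
    return np.array([sum(I[m] * Q(j + x - m, N) for m in range(N))
                     for j in range(N)])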
Learning 1-D Translations. Figure 2 (b) shows the results of using Equation 7 and 50,000 training
image pairs for learning the generator matrix for 1-D translations in 20-pixel images. The randomly
generated first image of a training pair was translated left or right by 0.5 pixels (C⁻¹ = 0.0001 and
the learning rate α = 0.4 was decreased by a factor of 1.0001 after each training pair). Note that as expected for
translations, the rows of the learned G matrix are identical except for a shift: the same differential
operator (shown in Figure 2 (b)) is applied at each image location. A comparison of the eigenvalues of the learned matrix with those of the analytical matrix (Figure 2) suggests that the learning
algorithm was able to learn a reasonably good approximation of the true generator matrix (to within
an arbitrary multiplicative scaling factor). To further evaluate the learned matrix G, we ascertained
whether G could be used to generate arbitrary translations of a given reference image using Equation 2. The results are encouraging as shown in Figure 3 (a), although we have noticed a tendency
for the appearance of some artifacts in translated images if there is significant high-frequency content in the reference image.
Estimating Large Transformations. The learned generator matrix can be used to estimate large
translations in images using Equation 9. Unfortunately, the optimization function can contain local
minima (Figure 3 (b)). The local minima however tend to be shallow and of approximately the same
value, with a unique well-defined global minimum. We therefore searched for the global minimum
by performing gradient descent with several equally spaced starting values and picked the minimum
of the estimated values after convergence. Figure 3 (c) shows results of this estimation process.
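A sketch of the multi-start search implied by Equation 9; the step size, iteration count, and the weak prior on x (a large σₓ²) are assumptions.

import numpy as np
from scipy.linalg import expm

def estimate_translation(G, I0, Ix, starts, gamma=0.01,
                         sigma_x2=1e6, n_steps=200):
    """Gradient ascent on the log posterior (Eq. 9) from several equally
    spaced starting values, keeping the best local optimum."""
    best_x, best_err = None, np.inf
    for x in starts:
        for _ in range(n_steps):
            E_x = expm(x * G)
            pred = E_x @ I0
            x += gamma * ((E_x @ G @ I0) @ (Ix - pred) - x / sigma_x2)
        err = np.sum((Ix - expm(x * G) @ I0) ** 2)
        if err < best_err:
            best_x, best_err = x, err
    return best_x

# e.g. starts = np.linspace(-10, 10, 9) for a 20-pixel image (assumed range)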
Learning 2-D Rotations. We have also tested the learning algorithm in 2-D images using image
plane rotations. Training image pairs were generated by infinitesimally rotating images with random pixel intensities 0.2 radians clockwise or counterclockwise. The learned operator matrix (for
three different spatial scales) is shown in Figure 4 (a). The accuracy of these matrices was tested
[Figure 3 graphic: (a) a reference image I(0) and its translations generated with x = ±1.5, ±4.5, ±7.5, ±10.5 and ±13.5; (b) the optimization function over x for a reference/translated image pair; (c) estimated translations (actual values in parentheses): 8.9787 (9), 19.9780 (20), −7.9805 (−8), 2.9805 (3), 15.9775 (16), 26.9776 (27), −1.9780 (−2), −18.9805 (−19), 4.9774 (5).]
Figure 3: Generating and Estimating Large Transformations. (a) An original reference image 1(0) was
translated to varying degrees by using the learned generator matrix G and varying x in Equation 2. (b) The
negative log likelihood optimization function for the matrix-exponential generative model (Equation 4) which
was used for estimating large translations. The globally minimum value for x was found by using gradient
descent with multiple starting points. (c) Comparison of estimated translation values with actual values (in
parenthesis) for different pairs of reference (I(0)) and translated (I(x)) images, shown in the form of a table.
by using them in Equation 2 for various rotations x. As shown in Figure 4 (b) for the 5 x 5 case,
the learned matrix appears to be able to rotate a given reference image between −180° and +180°
about an initial position (for the larger rotations, some minor artifacts appear near the edges).
5 CONCLUSIONS
Our results suggest that it is possible for an unsupervised network to learn visual invariances by
learning operators (or generators) for the corresponding Lie transformation groups. An important
issue is how local minima can be avoided during the estimation of large transformations. Apart
from performing multiple searches, one possibility is to use coarse-to-fine techniques, where transformation estimates obtained at a coarse scale are used as starting points for estimating transformations at finer scales (see, for example, [1]). A second possibility is to use stochastic techniques that
exploit the specialized structure of the optimization function (Figure 1 (c)). Besides these directions of research, we are also investigating the use of structured priors on the generator matrix G to
improve learning accuracy and speed. A concurrent effort involves testing the approach on more
realistic natural image sequences containing a richer variety of transformations.¹
References
[1] M. J. Black and A. D. Jepson. Eigentracking: Robust matching and tracking of articulated
objects using a view-based representation. In Proc. of the Fourth European Conference on
Computer Vision (ECCV), pages 329-342, 1996.
[2] P. C. Dodwell. The Lie transformation group model of visual perception. Perception and
Psychophysics, 34(1):1-16,1983.
[3] D. J. Felleman and D. C. Van Essen. Distributed hierarchical processing in the primate cerebral cortex. Cerebral Cortex, 1:1-47,1991.
¹ The generative model in the case of multiple transformations is given by: I(x) = e^{Σᵢ xᵢGᵢ} I(0) + n
where Gᵢ is the generator for the ith type of transformation and xᵢ is the value of that transformation in the
input image.
[Figure 4 graphic: (a) initial and final (converged) operator matrices at three scales; (b) rotated versions of a reference image.]
Figure 4: Learned Lie Operators for 2-D Rotations. (a) The initial and converged values of the Lie operator matrix for 2D rotations at three different scales (3 x 3, 5 x 5 and 9 x 9). (b) Examples of arbitrary
rotations of a 5 × 5 reference image I(0) generated by using the learned Lie operator matrix (although only
results for integer-valued x between -4 and 4 are shown, rotations can be generated for any real-valued x).
[4] P. Foldiak. Learning invariance from transformation sequences. Neural Computation, 3(2):194-200, 1991.
[5] K. Fukushima. Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biological Cybernetics, 36:193-202, 1980.
[6] J.J. Gibson. The Senses Considered as Perceptual Systems. Houghton-Mifflin, Boston, 1966.
[7] Y. LeCun, B. Boser, J. S. Denker, B. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. Backpropagation applied to handwritten zip code recognition. Neural Computation, 1(4):541-551, 1989.
[8] R. J. Marks II. Introduction to Shannon Sampling and Interpolation Theory. New York: Springer-Verlag, 1991.
[9] K. Nordberg. Signal representation and processing using operator groups. Technical Report, Linkoping Studies in Science and Technology, Dissertations No. 366, Department of Electrical Engineering, Linkoping University, 1994.
[10] B. A. Olshausen, C. H. Anderson, and D. C. Van Essen. A multiscale dynamic routing circuit for forming size- and position-invariant object representations. Journal of Computational Neuroscience, 2:45-62, 1995.
[11] B. A. Olshausen and D. J. Field. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381:607-609, 1996.
[12] W. Pitts and W.S. McCulloch. How we know universals: the perception of auditory and visual forms. Bulletin of Mathematical Biophysics, 9:127-147, 1947.
[13] R. P. N. Rao and D. H. Ballard. Dynamic model of visual recognition predicts neural response properties in the visual cortex. Neural Computation, 9(4):721-763, 1997.
[14] R. P. N. Rao and D. H. Ballard. Development of localized oriented receptive fields by learning a translation-invariant code for natural images. Network: Computation in Neural Systems, 9(2):219-234, 1998.
[15] P. Simard, Y. LeCun, and J. Denker. Efficient pattern recognition using a new transformation distance. In Advances in Neural Information Processing Systems V, pages 50-58, San Mateo, CA, 1993. Morgan Kaufmann Publishers.
[16] L. Van Gool, T. Moons, E. Pauwels, and A. Oosterlinck. Vision and Lie's approach to invariance. Image and Vision Computing, 13(4):259-277, 1995.
639 | 1,585 | Multiple Paired Forward-Inverse Models
for Human Motor Learning and Control
Masahiko Haruno*
mharuno@hip.atr.co.jp
Daniel M. Wolpert†
wolpert@hera.ucl.ac.uk
Mitsuo Kawato*◊
kawato@hip.atr.co.jp
* ATR Human Information Processing Research Laboratories
2-2 Hikaridai, Seika-cho, Soraku-gun, Kyoto 619-02, Japan.
†Sobell Department of Neurophysiology, Institute of Neurology,
Queen Square, London WC1N 3BG, United Kingdom.
◊Dynamic Brain Project, ERATO, JST, Kyoto, Japan.
Abstract
Humans demonstrate a remarkable ability to generate accurate and
appropriate motor behavior under many different and often uncertain
environmental conditions. This paper describes a new modular approach to human motor learning and control, based on multiple pairs of
inverse (controller) and forward (predictor) models. This architecture
simultaneously learns the multiple inverse models necessary for control
as well as how to select the inverse models appropriate for a given environment. Simulations of object manipulation demonstrate the ability
to learn mUltiple objects, appropriate generalization to novel objects
and the inappropriate activation of motor programs based on visual
cues, followed by on-line correction, seen in the "size-weight illusion".
1 Introduction
Given the multitude of contexts within which we must act, there are two qualitatively
distinct strategies to motor control and learning. The first is to use a single controller
which would need to be highly complex to allow for all possible scenarios. If this
controller were unable to encapsulate all the contexts it would need to adapt every
time the context of the movement changed before it could produce appropriate motor
commands; this would produce transient and possibly large performance errors. Alternatively, a modular approach can be used in which multiple controllers co-exist, with
each controller suitable for one or a small set of contexts. Such a modular strategy has
been introduced in the "mixture of experts" architecture for supervised learning [6].
This architecture comprises a set of expert networks and a gating network which performs classification by combining each expert's output. These networks are trained
simultaneously so that the gating network splits the input space into regions in which
particular experts can specialize.
To apply such a modular strategy to motor control two problems must be solved. First
how is the set of inverse models (controllers) learned to cover the contexts which
might be experienced (the module learning problem). Second, given a set of inverse
modules (controllers), how is the correct subset selected for the current context (the module selection problem). From human psychophysical data we know that such
a selection process must be driven by two distinct processes: feedforward switching
based on sensory signals such as the perceived size of an object, and switching based
on feedback of the outcome of a movement. For example, on picking up an object
which appears heavy, feedforward switching may activate controllers responsible for
generating a large motor impulse. However, feedback processes, based on contact with
the object, can indicate that it is in fact light thereby switching control to inverse
models appropriate for a light object.
In the context of motor control and learning, Gomi and Kawato [4] combined the
feedback-error-learning [7] approach and the mixture of experts architecture to learn
multiple inverse models for different manipulated objects. They used both the visual
shapes of the manipulated objects and intrinsic signals, such as somatosensory feedback
and efference copy of the motor command, as the inputs to the gating network. Using
this architecture it was quite difficult to acquire multiple inverse models. This difficulty
arose because a single gating network needed to divide up , based solely on control error,
the large input space into complex regions. Furthermore, Gomi and Kawato's model
could not demonstrate feedforward controller selection prior to movement execution.
Here we describe a model of human motor control which addresses these problems and
can solve the module learning and selection problems in a computationally coherent
manner. The basic idea of the model is that the brain contains multiple pairs (modules) of forward (predictor) and inverse (controller) models (MPFIM) [10]. Within each
module, the forward and inverse models are tightly coupled both during their acquisition and use, in which the forward models determine the contribution (responsibility)
of each inverse model's output to the final motor command. This architecture can
simultaneously learn the multiple inverse models necessary for control as well as how
to select the inverse models appropriate for a given environment in both a feedforward
and a feedback manner.
2 Multiple paired forward-inverse models
[Figure 1 graphic: block diagram with signals labeled "actual arm trajectory", "contextual signal", "efference copy of motor command", "desired arm trajectory", and a feedback controller.]
Figure 1: A schematic diagram showing how MPFIM architecture is used to control
arm movement while manipulating different objects. Parenthesized numbers in the
figure relate to the equations in the text.
2.1 Motor learning and feedback selection
Figure 1 illustrates how the MPFIM architecture can be used to learn and control
arm movements when the hand manipulates different objects. Central to the multiple
paired forward-inverse model is the notion of dividing up experience using predictive
forward models. We consider n undifferentiated forward models which each receive the
current state, xₜ, and motor command, uₜ, as input. The output of the ith forward
model is x̂ᵢₜ₊₁, the prediction of the next state at time t + 1:

x̂ᵢₜ₊₁ = φ(wᵢ, xₜ, uₜ)    (1)

where wᵢ are the parameters of a function approximator φ (e.g. neural network weights)
used to model the forward dynamics . These predicted next states are compared to the
actual next state to provide the responsibility signal which represents the extent to
which each forward model presently accounts for the behavior of the system. Based on
the prediction errors of the forward models, the responsibility signal λᵢₜ for the i-th
forward-inverse model pair (module) is calculated by the soft-max function

λᵢₜ = exp(−|xₜ − x̂ᵢₜ|²/σ²) / Σⱼ₌₁ⁿ exp(−|xₜ − x̂ⱼₜ|²/σ²)    (2)

where xₜ is the true state of the system and σ is a scaling constant. The soft-max
transforms the errors using the exponential function and then normalizes these values
across the modules, so that the responsibilities lie between 0 and 1 and sum to 1 over
the modules. Those forward models which capture the current behavior, and therefore
produce small prediction errors, will have high responsibilities.¹ The responsibilities
are then used to control the learning of the forward models in a competitive manner,
with those models with high responsibilities receiving proportionally more of their error
signal than modules with low responsibility. The competitive learning among forward
models is similar in spirit to "annealed competition of experts" architecture [9].
ẇᵢ = ε λᵢₜ (dφ/dwᵢ) (xₜ − x̂ᵢₜ)    (3)
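As a concrete (hypothetical) reading of Equations 2 and 3, the sketch below uses linear forward models x̂ᵢ = Wᵢ[xₜ; uₜ], as in the simulations of Section 3; the learning rate and the exact input encoding are assumptions.

import numpy as np

def responsibilities(x_true, x_preds, sigma=1.0):
    """Soft-max of prediction errors (Equation 2)."""
    errs = np.array([np.sum((x_true - xp) ** 2) for xp in x_preds])
    w = np.exp(-errs / sigma ** 2)
    return w / np.sum(w)

def update_forward_models(Ws, x, u, x_next, eps=0.1, sigma=1.0):
    """Responsibility-gated gradient step for each linear forward model
    x_hat_i = W_i @ [x; u]  (Equation 3)."""
    z = np.concatenate([x, u])
    preds = [W @ z for W in Ws]
    lam = responsibilities(x_next, preds, sigma)
    for i, W in enumerate(Ws):
        W += eps * lam[i] * np.outer(x_next - preds[i], z)
    return lam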
For each forward model there is a paired inverse model whose inputs are the desired
next state x*ₜ₊₁ and the current state xₜ. The ith inverse model produces a motor
command uᵢₜ as output

uᵢₜ = ψ(αᵢ, x*ₜ₊₁, xₜ)    (4)

where αᵢ are the parameters of some function approximator ψ.
The total motor command is the summation of the outputs from these inverse models,
using the responsibilities λᵢₜ to weight the contributions:

uₜ = Σᵢ₌₁ⁿ λᵢₜ uᵢₜ = Σᵢ₌₁ⁿ λᵢₜ ψ(αᵢ, x*ₜ₊₁, xₜ)    (5)
Once again, the responsibilities are used to weight the learning of each inverse model.
This ensures that inverse models learn only when their paired forward models make
accurate predictions. Although for supervised learning the desired control command
u*ₜ is needed (but is generally not available), we can approximate (u*ₜ − uₜ) with the
feedback motor command signal u_fb [7]:

α̇ᵢ = ε λᵢₜ (dψ/dαᵢ) (u*ₜ − uₜ) ≈ ε λᵢₜ (dψ/dαᵢ) u_fb    (6)

¹ Because selecting modules can be regarded as a hidden state estimation problem, an
alternative way to determine appropriate forward models is to use the EM algorithm [3].
In summary, the responsibility signals are used in three ways: first to gate the learning
of the forward models (Equation 3), second to gate the learning of the inverse models
(Equation 6), and third to gate the contribution of the inverse models to the final motor
command (Equation 5).
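Putting Equations 4-6 together, one control-and-learning step might look like the following sketch; the linear inverse models uᵢ = Aᵢ[x*; x] and the learning rate are illustrative assumptions rather than the paper's implementation.

import numpy as np

def mpfim_step(As, lam, x, x_desired, u_feedback, eps=0.1):
    """One MPFIM control step with linear inverse models u_i = A_i @ [x*; x].

    lam: responsibilities from the forward models (Equation 2).
    Returns the blended motor command (Equation 5) after a
    feedback-error-learning update of each inverse model (Equation 6).
    """
    z = np.concatenate([x_desired, x])
    us = [A @ z for A in As]
    # Equation 6: responsibility-gated feedback-error learning,
    # approximating (u* - u) by the feedback command u_feedback.
    for i, A in enumerate(As):
        A += eps * lam[i] * np.outer(u_feedback, z)
    # Equation 5: responsibility-weighted sum of inverse-model outputs.
    return sum(l * u for l, u in zip(lam, us))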
2.2 Multiple responsibility predictors: Feedforward selection
While the system described so far can learn mUltiple controllers and switch between
them based on prediction errors, it cannot provide switching before a motor command
has been generated and the consequences of this action evaluated. To allow the system
to switch controllers based on contextual information, we introduce a new component,
the responsibility predictor (RP). The input to this module, yₜ, contains contextual
sensory information (Figure 1) and each RP produces a prediction of its own module's
responsibility
(7)
These estimated responsibilities can then be compared to the actual responsibilities λᵢₜ
generated from the responsibility estimator. These error signals are used to update the
weights of the RP by supervised learning.
Finally a mechanism is required to combine the responsibility estimates derived from
the feedforward RP and from the forward models' prediction errors derived from
feedback. We determine the final value of responsibility by using Bayes rule: multiplying the transformed feedback errors exp(−|xₜ − x̂ᵢₜ|²/2σ²) by the feedforward responsibility λ̂ᵢₜ and then normalizing across the modules within the responsibility estimator:

λᵢₜ = λ̂ᵢₜ exp(−|xₜ − x̂ᵢₜ|²/2σ²) / Σⱼ₌₁ⁿ λ̂ⱼₜ exp(−|xₜ − x̂ⱼₜ|²/2σ²)
The estimates of the responsibilities produced by the RP can be considered as prior
probabilities because they are computed before the movement execution based only on
extrinsic signals and do not rely on knowing the consequences of the action. Once an
action takes place, the forward models' errors can be calculated and this can be thought
of as the likelihood after the movement execution based on knowledge of the result of
the movement. The final responsibility which is the product of the prior and likelihood,
normalized across the modules, represents the posterior probability. Adaptation of the
RP ensures that the prior probability becomes closer to the posterior probability.
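The Bayes-rule combination can be written compactly; the variance and the function name below are assumptions.

import numpy as np

def posterior_responsibilities(lam_prior, x_true, x_preds, sigma=1.0):
    """Combine RP priors with forward-model likelihoods via Bayes rule:
    lam_i is proportional to lam_hat_i * exp(-|x - x_hat_i|^2 / (2 sigma^2))."""
    errs = np.array([np.sum((x_true - xp) ** 2) for xp in x_preds])
    like = np.exp(-errs / (2.0 * sigma ** 2))
    post = np.asarray(lam_prior) * like
    return post / np.sum(post)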
3 Simulation of arm tracking while manipulating objects

3.1 Learning and control of different objects
[Figure 2 graphic: an arm tracking a trajectory while grasping a mass-spring-damper object, with the table of object parameters:

        M     B     K
  α    5.0   8.0   2.0
  β    7.0   3.0  10.0
  γ    4.0   1.0   1.0 ]

Figure 2: Schematic illustration of the simulation experiment in which the arm makes
reaching movements while grasping different objects with mass M, damping B and
spring K. The object properties are shown in the Table.
To examine motor learning and control we simulated a task in which the hand had
to track a given trajectory (30 s shown in Fig. 3 (b)), while holding different objects
(Figure 2). The manipulated object was periodically switched every 5 s between three
different objects α, β and γ in this order. The physical characteristics of these objects are shown in Figure 2. The task was exactly the same as that of Gomi and
Kawato [4], and simulates recent grip force-load force coupling experiments by Flanagan and Wing [2].
In the first simulation, three forward-inverse model pairs (modules) were used: the same
number of modules as the number of objects. We assumed the existence of a perfect
inverse dynamic model of the arm for the control of reaching movements. In each
module, both forward (φ in (1)) and inverse (ψ in (4)) models were implemented as a
linear neural network.² The use of linear networks allowed M, B and K to be estimated
from the forward and inverse model weights. Let M̂ⱼf, B̂ⱼf, K̂ⱼf be the estimates from
the jth forward model and M̂ⱼi, B̂ⱼi, K̂ⱼi be the estimates from the jth inverse model.
Figure 3(a) shows the evolution of the forward model estimates M̂ⱼf, B̂ⱼf, K̂ⱼf for
the three modules during learning. During learning the desired trajectory (Fig. 3(b))
was repeated 200 times. The three modules started from randomly selected initial
conditions (open arrows) and converged to very good approximations of the three
objects (filled arrows) as shown in Table 1. Each of the three modules converged to
α, β and γ objects, respectively. It is interesting to note that all the estimates of the
forward models are superior to those of inverse models. This is because the inverse
model learning depends on how modules are switched by the forward models.
Figure 3: (a) Learning acquisition of three pairs of forward and inverse models corresponding to three objects. (b) Responsibility signals from the three modules (top 3)
and tracking performance (bottom) at the beginning (left) and at the end (right) of
learning.
          forward model         inverse model
           M̂        B̂           M̂        B̂
  α     5.0071   8.0029       5.0102   7.8675
  β     7.0040   3.0010       6.9554   3.0467
  γ     4.0000   0.9999       4.0089   0.9527

Table 1: Learned object characteristics
Figure 3(b) shows the performance of the model at the beginning (left) and end (right)
of learning. The top 3 panels show the responsibility signals of Ct, {3 and 'Y modules in
² Any kind of architecture can be adopted instead of linear networks.
this order, and the bottom panel shows the hand's actual and desired trajectories. At
the start of learning, the three modules were equally poor and thus generated almost
equal responsibilities (1/3) and were involved in control almost equally. As a result,
the overall control performance was poor with large trajectory errors. However, at the
end of learning, the three modules switched almost perfectly (only three noisy spikes
were observed in the top 3 panels on the right), and no trajectory error was visible
at this resolution in the bottom panel. If we compare these results with Figure 7 of
Gomi and Kawato [4] for the same task, the superiority of the MPFIM compared to
the gating-expert architecture is apparent. Note that the number of free parameters
(synaptic weights) is smaller in the current architecture than the other. The difference
in performance comes from two features of the basic architecture. First, in the gating
architecture a single gating network tries to divide the space while many forward models
splits the space in MPFIM. Second, in the gating architecture only a single control error
is used to divide the space, but mUltiple prediction errors are simultaneously utilized
in MPFIM.
3.2 Generalization to a novel object
A natural question regarding MPFIM architecture is how many modules need to be
used. In other words, what happens if the number of objects exceeds the number of
modules or an already trained MPFIM is presented with an unfamiliar object. To
examine this, the MPFIM trained from 4 objects α, β, γ and δ was presented with a
novel object η (its (M, B, K) is (2.02, 3.23, 4.47)). Because the object dynamics can be
represented in a 3-dimensional parameter space and the 4 modules already acquired
define 4 vertices of a tetrahedron within the 3-D space, arbitrary object dynamics
contained within the tetrahedron can be decomposed into a weighted average of the
existing 4 forward modules (internal division point of the 4 vertices). The theoretically calculated weights of η were (0.15, 0.20, 0.35, 0.30). Interestingly, each module's
responsibility signal averaged over the trajectory was (0.14, 0.24, 0.37, 0.26). Although the
responsibility was computed in the space of acceleration predictions by the soft-max and
had no direct relation to the space of (M, B, K), the two vectors had very similar values. This demonstrates the flexibility of the MPFIM architecture which originates from its
probabilistic soft-switching mechanism. This is in sharp contrast to the hard switching
of Narendra [8] for which only one controller can be selected at a time.
3.3 Feedforward selection and the size-weight illusion
Figure 4: Responsibility predictions based on contextual information of 2-D object
shapes (top 3 traces) and corresponding acceleration error of control induced by the
illusion (bottom trace).
In this section, we simulated prior selection of inverse models by responsibility predictors based on contextual information, and reproduce the size-weight illusion. Each
object was associated with a 2-D shape represented as a 3x3 binary matrix, which was
randomly placed at one of four possible locations on a 4x4 retinal matrix (see Gomi
Multiple Paired Forward-Inverse Models for Human Motor Learning and Control
37
and Kawato for more details). The retinal matrix was used as the contextual input
to the RP (3-layer sigmoidal feedforward network). During the course of learning, the
combination of manipulated objects and visual cues was fixed as A-α, B-β and C-γ. After 200 iterations of the trajectory, the combination A-γ was presented for the
first time. Figure 4 plots the responsibility signals of the three modules (top 3 traces) and
corresponding acceleration error of the control induced by the illusion (bottom trace).
The result replicates the size-weight illusion [1, 5] seen in the erroneous responsibility
prediction of the α responsibility predictor based on the contextual signal A and its
correction by the responsibility signal calculated by the forward models. Until the
onset of movement (time 0), A was always associated with light α, and C was always
associated with heavy γ. Prior to movement when A was associated with γ, the α module was switched on by the visual contextual information, but soon after the movement
was initiated, the responsibility signal from the forward model's prediction dominated,
and the γ module was properly selected. Furthermore, after a while, the responsibility
predictors of the modules were re-learned to capture this new association between the
object's visual shape and its dynamics.
In conclusion, the MPFIM model of human motor learning and control, like the human
motor system, can learn multiple tasks, shows generalization to new tasks and an ability
to switch between tasks appropriately.
Acknowledgments
We thank Zoubin Ghahramani for helpful discussions on the Bayesian formulation of
this model. Partially supported by Special Coordination Funds for promoting Science
and Technology at the Science and Technology Agency of the Japanese government, and
by HFSP grant.
References
[1] E. Brenner and J. B. J. Smeets. Size illusion influences how we lift but not
how we grasp an object. Exp Brain Res, 111:473- 476, 1996.
[2] J.R. Flanagan and A. Wing. The role of internal models in motion planning and
control: Evidence from grip force adjustments during movements of hand-held
loads. J Neurosci, 17(4):1519- 1528, 1997.
[3] A.M. Fraser and A. Dimitriadis. Forecasting probability densities by using hidden
Markov models with mixed states. In A.S. Weigend and N.A. Gershenfeld, editors,
Time series prediction: Forecasting the future and understanding the past, pages
265 -282. Addison-Wesley, 1993.
[4] H. Gomi and M. Kawato. Recognition of manipulated objects by motor learning
with modular architecture networks. Neural Networks, 6:485- 497, 1993.
[5] A. Gordon, H. Forssberg, R. Johansson, and G. Westling. Visual size cues in
the programming of manipulative forces during precision grip. Exp Brain Res,
83:477-482, 1991.
[6] R. Jacobs, M. Jordan, S. Nowlan, and G. Hinton. Adaptive mixtures of local
experts. Neural Computation, 3:79-87, 1991.
[7] M. Kawato. Feedback-error-learning neural network for supervised learning. In
R. Eckmiller, editor, Advanced neural computers, pages 365-372. North-Holland,
1990.
[8] K. Narendra and J. Balakrishnan. Adaptive control using multiple models. IEEE
Transaction on Automatic Control, 42(2):171 -187, 1997.
[9] K. Pawelzik, J. Kohlmorgen, and K. Muller. Annealed competition of experts
for a segmentation and classification of switching dynamics. Neural Computation,
8:340-356, 1996.
[10] D.M. Wolpert and M. Kawato. Multiple paired forward and inverse models for
motor control. Neural Networks, 11:1317- 1329, 1998.
generally:1 proportionally:1 grip:3 transforms:1 band:2 generate:1 exist:1 estimated:2 extrinsic:1 track:1 eckmiller:1 four:1 gershenfeld:1 sum:1 inverse:37 place:1 almost:3 scaling:1 layer:1 ct:4 followed:1 bp:1 dominated:1 spring:1 department:1 combination:2 poor:2 describes:1 across:3 em:1 smaller:1 wi:3 happens:1 usp:1 presently:1 computationally:1 equation:4 mechanism:2 needed:2 know:1 addison:1 end:3 adopted:1 available:1 apply:1 promoting:1 appropriate:7 alternative:1 gate:3 hat:1 rp:7 existence:1 top:5 ghahramani:1 hikaridai:1 contact:1 psychophysical:1 question:1 already:2 spike:1 strategy:3 unable:1 thank:1 atr:3 simulated:2 gun:1 extent:1 illustration:1 dimitriadis:1 acquire:1 kingdom:1 difficult:1 relate:1 holding:1 trace:4 markov:1 hinton:1 lb:1 arbitrary:1 sharp:1 introduced:1 pair:5 required:1 coherent:1 learned:2 address:1 program:1 max:3 suitable:1 difficulty:1 rely:1 force:4 natural:1 wiegand:1 advanced:1 arm:7 technology:2 started:1 coupled:1 kj:1 text:1 prior:6 understanding:1 kf:2 mixed:1 interesting:1 approximator:2 remarkable:1 switched:4 dd:1 editor:2 heavy:2 normalizes:1 course:1 changed:1 summary:1 placed:1 supported:1 copy:2 free:1 jth:2 soon:1 allow:2 institute:1 feedback:11 calculated:4 uit:1 fb:1 sensory:2 forward:41 qualitatively:1 adaptive:2 far:1 transaction:1 approximate:1 assumed:1 xi:1 neurology:1 alternatively:1 table:3 learn:6 mj:3 parenthesized:1 complex:2 japanese:1 neurosci:1 arrow:2 allowed:1 repeated:1 fig:2 precision:1 experienced:1 comprises:1 exponential:1 xl:2 lie:1 third:1 learns:2 erroneous:1 load:2 xt:3 gating:8 showing:1 multitude:1 normalizing:1 evidence:1 intrinsic:1 execution:3 illustrates:1 wolpert:5 ijj:1 visual:6 contained:1 adjustment:1 tracking:2 partially:1 holland:1 environmental:1 acceleration:3 brenner:1 hard:1 total:1 hfsp:1 la:2 select:2 internal:2 |
640 | 1,586 | Learning Macro-Actions in Reinforcement
Learning
Jette Randløv
Niels Bohr Inst., Blegdamsvej 17,
University of Copenhagen,
DK-2100 Copenhagen Ø, Denmark
randlov@nbi.dk
Abstract
We present a method for automatically constructing macro-actions from
scratch from primitive actions during the reinforcement learning process.
The overall idea is to reinforce the tendency to perform action b after
action a if such a pattern of actions has been rewarded. We test the
method on a bicycle task, the car-on-the-hill task, the race-track task and
some grid-world tasks. For the bicycle and race-track tasks the use of
macro-actions approximately halves the learning time, while for one of
the grid-world tasks the learning time is reduced by a factor of 5. The
method did not work for the car-on-the-hill task for reasons we discuss
in the conclusion.
1 INTRODUCTION
A macro-action is a sequence of actions chosen from the primitive actions of the problem.¹ Lumping actions together as macros can be of great help for solving large problems (Korf, 1985a,b; Gullapalli, 1992) and can sometimes greatly speed up learning (Iba,
1989; McGovern, Sutton & Fagg, 1997; McGovern & Sutton, 1998; Sutton, Precup &
Singh, 1998; Sutton, Singh, Precup & Ravindran, 1999). Macro-actions might be essential for scaling up reinforcement learning to very large problems. Construction of macro-actions by hand requires insight into the problem at hand. It would be more elegant and
useful if the agent itself could decide what actions to lump together (Iba, 1989; McGovern
& Sutton, 1998; Sutton, Precup & Singh, 1998; Hauskrecht et al., 1998).
¹This is a special case of definitions of macro-actions seen elsewhere. Some researchers take
macro-actions to consist of a policy, terminal conditions and an input set (Precup & Sutton, 1998;
Sutton, Precup & Singh, 1998; Sutton, Singh, Precup & Ravindran, 1999) while others define it as a
local policy (Hauskrecht et al., 1998).
2 ACTION-TO-ACTION MAPPING
In reinforcement learning we want to learn a mapping from states to actions, s → a, that
maximizes the total expected reward (Sutton & Barto, 1998). Sometimes it might be of use
to learn a mapping from actions to actions as well. We believe that acting according to an
action-to-action mapping can be useful for three reasons:
1. During the early stages of learning the agent will enter areas of the state space it has
never visited before. If the agent acts according to an action-to-action mapping it might
be guided through such areas where there is yet no clear choice of action otherwise. In
other words it is much more likely that an action-to-action mapping could guide the agent
to perform almost optimally in states never visited than a random policy.
2. In some situations, for instance in an emergency, it can be useful to perform a certain
open-loop sequence of actions, without being guided by state information. Consider for instance an agent learning to balance on a bicycle (Randløv & Alstrøm, 1998). If the bicycle
is in an unbalanced state, the agent must forget about the position of the bicycle and carry
out a sequence of actions to balance the bicycle again. Some of the state information-the
position of the bicycle relative to some goal-does not matter, and might actually distract
the agent, while the history of the most recent actions might contain just the needed information to pick the next action.
3. An action-to-action mapping might lead the agent to explore the relevant areas of the
state space in an efficient way instead of just hitting them by chance.
We therefore expect that learning an action-to-action mapping in addition to a state-action
mapping can lead to faster overall learning. Even though the system has the Markov property, it may be useful to remember a bit of the action history. We want the agent to perform
a sequence of actions while being aware of the development of the states, but not only being
controlled by the states.
Many people have tried to deal with imperfect state information by adding memory of
previous states and actions to the information the agent receives (Andreae & Cashin, 1969;
McCallum, 1995; Hansen, Barto & Zilberstein, 1997; Burgard et aI., 1998). In this work
we are not specially concerned with non-Markov problems. However the results in this
paper suggest that some methods for partially observable MDP could be applied to MDPs
and result in faster learning.
The difficult part is how to combine the suggestion made by the action-to-action mapping
with the conventional state-to-action mapping. Obviously we do not want to learn the
mapping (st, at−1) → at in tabular form, since that would destroy the possibility of using
the generalisation of the action-to-action mapping over the state space.
In our approach we decided to learn two value mappings. The mapping Q8 is the conventional Q-value normally used for state-to-action mapping, while the mapping Qa represents
the value belonging to the action-to-action mapping. When making a choice, we add the
Q-values of the suggestions made by the two mappings, normalize and use the new values
to pick an action in the usual way:

Q(st, at) ∝ Qs(st, at) + β Qa(at−1, at),

normalized over the actions available in st. Here Q is the Q-value that we actually use to pick the next action. The parameter β determines the influence of the action-to-action mapping. For β = 0 we are back with the usual
Q-values. The idea is to reinforce the tendency to perform action b after action a if such a
pattern of actions is rewarded. In this way the agent forms habits or macro-actions and it
will sometimes act according to them.
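As a concrete illustration, here is a minimal Python sketch of this combination rule for tabular values. The function name, the division by 1 + β as the normalization, and the ε-greedy selection are assumptions made for illustration; the text only specifies that the two Q-values are added, normalized, and used for the usual action selection.

```python
import numpy as np

def pick_action(Q_s, Q_a, state, prev_action, beta, rng, epsilon=0.1):
    """Combine state-to-action and action-to-action values, then act.

    Q_s: (n_states, n_actions) conventional Q-values.
    Q_a: (n_actions, n_actions) action-to-action values.
    The division by (1 + beta) and the epsilon-greedy rule are
    illustrative assumptions about "normalize" and "the usual way".
    """
    q = (Q_s[state] + beta * Q_a[prev_action]) / (1.0 + beta)
    if rng.random() < epsilon:              # exploratory action
        return int(rng.integers(len(q)))
    return int(np.argmax(q))                # greedy on the combined values
```

With β = 0 the sketch reduces to ordinary Q-value action selection, matching the text.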
3 RESULTS
How do we implement an action-to-action mapping and the Q-values? Many algorithms have been developed to find near-optimal state-to-action mappings on a trial-and-error basis. An example of such an algorithm is Sarsa(λ), developed by Rummery and Niranjan (Rummery & Niranjan, 1994; Rummery, 1995). We use Sarsa(λ) with replacing eligibility traces (Singh & Sutton, 1996) and table look-up. Eligibility traces are attached to the Qa-values, one for each action-action pair.² During learning the Qs and Qa-values are both adjusted according to the overall TD error δt = rt+1 + γQt(st+1, at+1) − Qt(st, at). The update for the Qa-values has the form ΔQa(at−1, at) = β δt e(at−1, at). For a description of the Sarsa(λ)-algorithm see Rummery (1995) or Sutton & Barto (1998). Figure 1 shows the idea in terms of a neural network with no hidden layers. The new Qa-values correspond to weights from output neurons to output neurons.

Figure 1: One can think of the action-to-action mapping in terms of weights between output neurons in a network calculating the Q-value.
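The following sketch shows one way the Qa update could look in code, assuming tabular values. The helper name and the exact trace bookkeeping are illustrative, but the update ΔQa(at−1, at) = β δt e(at−1, at) and the continued decay of unvisited traces (footnote 2) follow the text.

```python
import numpy as np

def qa_update(Q_a, e_a, prev_action, action, delta, beta, gamma, lam):
    """One TD update of the action-to-action values.

    e_a holds one replacing eligibility trace per action-action pair.
    Traces of pairs not visited keep decaying rather than being cut to 0
    (footnote 2).  The update applied to every pair is
    Delta Qa(a_{t-1}, a_t) = beta * delta_t * e(a_{t-1}, a_t).
    """
    e_a *= gamma * lam                  # decay all traces
    e_a[prev_action, action] = 1.0      # replacing trace for the visited pair
    Q_a += beta * delta * e_a           # TD error scaled into every pair
    return Q_a, e_a
```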
3.1 THE BICYCLE
We first tested the new Q-values on a bicycle system. To solve this problem the agent has
to learn to balance a bicycle for 1000 seconds and thereby ride 2.8 km. At each time step
the agent receives information about the state of the bicycle: the angle and angular velocity
of the handlebars, the angle, angular velocity and angular acceleration of the angle of the
bicycle from vertical.
The agent chooses two basic actions: the torque that should be applied to the handlebars, and how much the centre of mass should be displaced from the bicycle's plane, a total of 9 possible actions (Randløv & Alstrøm, 1998). The reward at each time step is 0 unless the bicycle has fallen, in which case it is −1. The agent uses α = 0.5, γ = 0.99 and λ = 0.95. For further description and the equations for the system we refer the reader to the original paper. Figure 2 shows how the learning time varies with the value of β. The error bars show the standard error in all graphs. For small values of β (≲ 0.03) the agent learns the task faster than with usual Sarsa(λ) (β = 0). As expected, large values of β slow down learning.

Figure 2: Learning time as a function of the parameter β for the bicycle experiment. Each point is an average of 200 runs.
3.2 THE CAR ON THE HILL
The second example is Boyan and Moore's mountain-car task (Boyan & Moore, 1995;
Singh & Sutton, 1996; Sutton, 1996). Consider driving an under-powered car up a steep
mountain road. The problem is that gravity is stronger than the car's engine, and the car
cannot accelerate up the slope. The agent must first move the car away from the goal and
² If one action is taken in a state, we allow the traces for the other actions to continue decaying
instead of cutting them to 0, contrary to Singh and Sutton (Singh & Sutton, 1996).
up the opposite slope, and then apply full throttle and build up enough momentum to reach
the goal. The reward at each time step is -1 until the agent reaches the goal, where it
receives reward O. The agent must choose one of three possible actions at each time step:
full thrust forward, no thrust, or full thrust backwards. Refer to Singh & Sutton (1996) for
the equations of the task.
We used one of the Sarsa-agents with five 9 × 9 CMAC tilings that have been thoroughly examined by Singh & Sutton (1996). The agent's parameters are λ = 0.9, α = 0.7, γ = 1, and a greedy selection of actions. (These are the best values found by Singh and Sutton.) As in Singh and Sutton's treatment of the problem, all agents were tried for 20 trials, where a trial is one run from a randomly selected starting state to the goal. All the agents used the same set of starting states. The performance measure is the average trial time over the first 20 trials. Figure 3 shows results for two of our simulations. Obviously the action-to-action weights are of no use to the agent, since the lowest point is at β = 0.
Figure 3: Average trial time of the 20 trials as a function of the parameter β for the car on the hill. Each point is an average of 200 runs.
3.3 THE RACE TRACK PROBLEM
In the race track problem, which originally was presented by Barto, Bradtke & Singh (1995), the agent controls a car in a race track. The agent must guide the car from the start line to the finish line in the least number of steps possible. The exact position on the start line is randomly selected. The state is given by the position and velocity (px, py, vx, vy) (all integer values). The total number of reachable states is 9115 for the track shown in Fig. 4. At each step, the car can accelerate with a ∈ {−1, 0, +1} in both dimensions. Thus, the agent has 9 possible combinations of actions to choose from. Figure 4 shows positions on a near-optimal path. The agent receives a reward of −1 for each step it makes without reaching the goal, and −2 for hitting the boundary of the track. Besides the punishment for hitting the boundary of the track, and the fact that the agent's choice of action is always carried out, the problem is as stated in Barto, Bradtke & Singh (1995) and Rummery (1995). The agent's parameters are α = 0.5, λ = 0.8 and γ = 0.98.

Figure 4: An example of a near-optimal path for the race-track problem. Starting line to the left and finish line at the upper right.
The learning process is divided into epochs
consisting of 10 trials each. We consider the
task learned if the agent has navigated the car
from start to goal in an average of less than 20
time steps for one full epoch. The learning time
is defined as the number of the first epoch for
which the criterion is met. This learning criterion emphasizes stable learning-the agent
needs to be able to solve the problem several
times in a row.
Figure 5: Learning time as a function of the parameter β for the race track. Each point is an average of 200 runs.
Figure 6: Learning time as a function of the parameter β for grid-world tasks: a 3-dimensional grid-world with 216 states (left) and a 4-dimensional grid-world with 256 states (right). All points are averages of 50 runs.
Figure 5 shows how the learning time varies with the value of β. For a large range of small values of β we see a considerable reduction in learning time, from 11.5 epochs to 4.2 epochs. As before, large values of β slow down learning.
3.4 GRID-WORLD TASKS
We tried the new method on a set of grid-world problems in 3, 4 and 5 dimensions. In all the problems the starting point is located at (1,1,...). For 3 dimensions the goal is located at (4,6,4), in 4 dimensions at (2,4,2,4) and in 5 dimensions at (2,4,2,4,2).

For a d-dimensional problem, the agent has 2d actions to choose from. Action 2i−1 is to move by −1 in the ith dimension, and action 2i is to move by +1 in the ith dimension. The agent receives a reward of −0.1 for each step it makes without reaching the goal, and +1 for reaching the goal. If the agent tries to step outside the boundary of the world it maintains its position. The 3-dimensional problem takes place in a 6 × 6 × 6 grid-world, while the 4- and 5-dimensional worlds have each dimension of size 4. Again, the learning process is divided into epochs consisting of 10 trials each. The task is considered learned if the agent has navigated from start to goal in an average of less than some fixed number of steps (15 for 3 dimensions, 19 for 4 and 50 for 5 dimensions) for one full epoch. The agent uses α = 0.5, λ = 0.9 and γ = 0.98.

Figure 7: Learning time as a function of the parameter β for a 5-dimensional grid-world with 1024 states. All points are averages of 50 runs.
Figures 6 and 7 show our results for the grid-world tasks. The learning time is reduced considerably. The usefulness of our new method seems to improve with the number of actions: the more actions the better it works.
Figure 8 shows one of the more clear (but not untypical) sets of values for the action-to-action weights for the 3-dimensional problem. Recommended actions are marked with a white 'X'. The agent has learned two macro-actions. If the agent has performed action number 4 it will continue to perform action 4, all other things being equal. The other macro-action consists of cycling between action 2 and 6. This is a reasonable choice, as one route to the goal consists of performing the actions (44444) and then (262626).

Figure 8: The values of the action-to-action weights; the darker the square the stronger the relationship.
3.5 A TASK WITH MANY ACTIONS
Finally we tried a problem with a large number of actions. The world is a 10 times 10 meter square. Instead of picking a dimension to advance in, the agent chooses a direction. The angular space consists of 36 parts of 10°. The exact position of the agent is discretized in boxes of 0.1 times 0.1 meter. The goal is a square centered at (9.5, 7.5) with sides measuring 0.4 m. The agent moves 0.3 m per time step, and receives a reward of +1 for reaching the goal and −0.1 otherwise. The task is considered learned if the agent has navigated from start to goal in an average of less than 200 time steps for one full epoch (10 trials).
Figure 9: Learning time as a function of the parameter β. All points are averages of 50 runs. Note the logarithmic scale.
Figure 9 shows the learning curve. The learning time is reduced by a factor of 147, from 397 (±7) to 2.7 (±0.2) epochs. The only real difference compared to the grid-world problems is the number of actions. The results therefore indicate that the larger the number of actions the better the method works.
4 CONCLUSION AND DISCUSSION
We presented a new method for calculating Q-values that mix the conventional Q-values
for the state-to-action mapping with Q-values for an action-to-action mapping. We tested
the method on a number of problems and found that for all problems except one, the method
reduces the total learning time. Furthermore, the agent found macros and learned them. A
value function based on values from both state-action and action-action pairs is not guaranteed to converge. Indeed for large values of β the method seems unstable, with large
variances in the learning time. A good strategy could be to start with a high initial β
and gradually decrease the value. The empirical results indicate that the usefulness of the
method depends on the number of actions: the more actions the better it works. This is also
intuitively reasonable, as the information content of the knowledge that a particular action
was performed is higher if the agent has more actions to choose from.
Acknowledgment
The author wishes to thank Andrew G. Barto, Preben Alstrøm, Doina Precup and Amy
McGovern for useful comments and suggestions on earlier drafts of this paper and Richard
Sutton and Matthew Schlesinger for helpful discussion. Also a lot of thanks to David Cohen
for his patience with later than last-minute corrections.
References
Andreae, J. H. & Cashin, P. M. (1969). A learning machine with monologue. International Journal
of Man-Machine Studies, I, 1-20.
Barto, A. G., Bradtke, S. J. & Singh, S. (1995). Learning to act using real-time dynamic programming. Artificial Intelligence, 72, 81-138.
Boyan, J. A. & Moore, A. W. (1995). Generalization in reinforcement learning: Safely approximating
the value function. In NIPS 7. (pp. 369-376). The MIT Press.
Burgard, W., Cremers, A. B., Fox, D., Haehnel, D., Lakemeyer, G., Schulz, D., Steiner, W. & Thrun,
S. (1998). The interactive museum tour-guide robot. In Fifteenth National Conference on
Artificial Intelligence.
Gullapalli, V. (1992). Reinforcement Learning and Its Application to Control. PhD thesis, University
of Massachusetts. COINS Technical Report 92-10.
Hansen, E., Barto, A. & Zilberstein, S. (1997). Reinforcement learning for mixed open-loop and
closed-loop control. In NIPS 9. The MIT Press.
Hauskrecht, M., Meuleau, N., Boutilier, C., Kaelbling, L. P. & Dean, T. (1998). Hierarchical solution of markov decision processes using macro-actions. In Proceedings of the Fourteenth
International Conference on Uncertainty In Anificial Intelligence.
Iba, G. A. (1989). A heuristic approach to the discovery of macro-operators. Machine Learning, 3.
Korf, R. E. (1985a). Learning to solve problems by searching for macro-operators. Research Notes
in Artificial Intelligence, 5.
Korf, R. E. (1985b). Macro-operators: A weak method for learning. Artificial Intelligence, 26, 35-77.
McCallum, R. A. (1995). Reinforcement Learning with Selective Perception and Hidden State. PhD
thesis, University of Rochester.
McGovern, A. & Sutton, R. S. (1998). Macro-actions in reinforcement learning: An empirical analysis. Technical Report 98-70, University of Massachusetts.
McGovern, A., Sutton, R. S. & Fagg, A. H. (1997). Roles of macro-actions in accelerating reinforcement learning. In 1997 Grace Hopper Celebration of Women in Computing.
Precup, D. & Sutton, R. S. (1998). Multi-time models for temporally abstract planning. In NIPS 10.
The MIT Press.
Randløv, J. & Alstrøm, P. (1998). Learning to drive a bicycle using reinforcement learning and
shaping. In Proceedings of the 15th International Conference on Machine Learning.
Rummery, G. A. (1995). Problem Solving with Reinforcement Learning. PhD thesis, Cambridge
University Engineering Department.
Rummery, G. A. & Niranjan, M. (1994). On-line Q-learning using connectionist systems. Technical
Report CUED/F-INFENG/TR 166, Engineering Department, Cambridge University.
Singh, S. P. & Sutton, R. S. (1996). Reinforcement learning with replacing eligibility traces. Machine
Learning, 22,123-158.
Sutton, R. S. (1996). Generalization in reinforcement learning: Successful examples using sparse
coarse coding. In NIPS 8. (pp. 1038-1044). The MIT Press.
Sutton, R. S. & Barto, A. G. (1998). Introduction to Reinforcement Learning. MIT Press/Bradford
Books.
Sutton, R. S., Precup, D. & Singh, S. (1998). Between MDPs and semi-MDPs: Learning, planning,
and representing knowledge at multiple temporal scales. Technical Report UM-CS-1998-074,
Department of Computer Science, UMass.
Sutton, R. S., Singh, S., Precup, D. & Ravindran, B. (1999). Improved switching among temporally
abstract actions. In NIPS II. The MIT Press.
Phase Diagram and Storage Capacity of
Sequence Storing Neural Networks
A. Düring
Dept. of Physics
Oxford University
Oxford OX1 3NP
United Kingdom
a.during1@physics.oxford.ac.uk

A. C. C. Coolen
Dept. of Mathematics
King's College
London WC2R 2LS
United Kingdom
tcoolen@mth.kcl.ac.uk

D. Sherrington
Dept. of Physics
Oxford University
Oxford OX1 3NP
United Kingdom
d.sherrington1@physics.oxford.ac.uk
Abstract
We solve the dynamics of Hopfield-type neural networks which store sequences of patterns, close to saturation. The asymmetry of the interaction matrix in such models leads to violation of detailed balance, ruling out an equilibrium statistical mechanical analysis. Using generating functional methods we derive exact closed equations for dynamical order parameters, viz. the sequence overlap and correlation and response functions, in the limit of an infinite system size. We calculate the time translation invariant solutions of these equations, describing stationary limit-cycles, which leads to a phase diagram. The effective retarded self-interaction usually appearing in symmetric models is here found to vanish, which causes a significantly enlarged storage capacity of αc ≈ 0.269, compared to αc ≈ 0.139 for Hopfield networks storing static patterns. Our results are tested against extensive computer simulations and excellent agreement is found.
1 INTRODUCTION AND DEFINITIONS
We consider a system of N neurons σ(t) = {σi(t) = ±1}, which can change their states collectively at discrete times (parallel dynamics). Each neuron changes its state with a probability p_i(t) = ½[1 − tanh(β σi(t)[Σj Jij σj(t) + θi(t)])], so that the transition matrix is

W[\sigma(s+1)|\sigma(s)] = \prod_{i=1}^{N} e^{\beta\sigma_i(s+1)\left[\sum_{j=1}^{N} J_{ij}\sigma_j(s)+\theta_i(s)\right] - \ln 2\cosh\left(\beta\left[\sum_{j=1}^{N} J_{ij}\sigma_j(s)+\theta_i(s)\right]\right)}   (1)
with the (non-symmetric) interaction strengths Jij chosen as

J_{ij} = \frac{1}{N}\sum_{\mu=1}^{p} \xi_i^{\mu+1}\xi_j^{\mu}.   (2)
The ξ_i^μ represent components of an ordered sequence of patterns to be stored.¹ The gain parameter β can be interpreted as an inverse temperature governing the noise level in the dynamics (1), and the number of patterns is assumed to scale as N, i.e. p = αN. If
the interaction matrix would have been chosen symmetrically, the model would be accessible to methods originally developed for the equilibrium statistical mechanical analysis
of physical spin systems and related models [1 , 2], in particular the replica method. For
the nonsymmetric interaction matrix proposed here this is ruled out, and no exact solution
exists to our knowledge, although both models have been first mentioned at the same time
and an approximate solution compatible with the numerical evidence at the time has been
provided by Amari [3] . The difficulty for the analysis is that a system with the interactions
(2) never reaches equilibrium in the thermodynamic sense, so that equilibrium methods
are not applicable. One therefore has to apply dynamical methods and give a dynamical
meaning to the notion of the recall state. Consequently, we will for this paper employ the
dynamical method of path integrals, pioneered for spin glasses by de Dominicis [4] and
applied to the Hopfield model by Rieger et al. [5] .
We point out that our choice of parallel dynamics for the problem of sequence recall is
deliberate in that simple sequential dynamics will not lead to stable recall of a sequence.
This is due to the fact that the number of updates of a single neuron per time unit is not
a constant for sequential dynamics. Schemes for using delayed asymmetric interactions
combined with sequential updates have been proposed (see e. g. [6] for a review), but are
outside the scope of this paper.
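To make the setup concrete, here is a small simulation sketch of the parallel dynamics (1) with the sequence couplings (2); the function name, system sizes and seed handling are illustrative assumptions.

```python
import numpy as np

def sequence_overlaps(xi, beta, theta=0.0, t_max=50, seed=0):
    """Parallel dynamics (1) with the sequence couplings (2).

    xi: (p, N) array of +/-1 patterns; J_ij = (1/N) sum_mu xi^{mu+1}_i xi^mu_j,
    with pattern indices taken modulo p.  Returns the overlap between the
    state and the pattern expected at each time step.
    """
    rng = np.random.default_rng(seed)
    p, N = xi.shape
    J = (np.roll(xi, -1, axis=0).T @ xi) / N   # asymmetric couplings of eq. (2)
    sigma = xi[0].astype(float).copy()         # start on the first pattern
    overlaps = []
    for t in range(t_max):
        h = J @ sigma + theta
        # P(sigma_i = +1) = e^{beta h}/(2 cosh beta h) = (1 + tanh beta h)/2
        sigma = np.where(rng.random(N) < 0.5 * (1 + np.tanh(beta * h)), 1.0, -1.0)
        overlaps.append(float(xi[(t + 1) % p] @ sigma) / N)
    return np.array(overlaps)
```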
Our analysis starts with the introduction of a generating functional Z[ψ] of the form

Z[\psi] = \sum_{\sigma(0)\cdots\sigma(t)} p[\sigma(0),\ldots,\sigma(t)]\; e^{-i\sum_{s<t}\sigma(s)\cdot\psi(s)},   (3)
which depends on real fields {ψi(t)}. These fields play a formal role only, allowing for the identification of interesting order parameters, such as

m_i(s) = \langle\sigma_i(s)\rangle = i\lim_{\psi\to 0}\frac{\partial Z[\psi]}{\partial\psi_i(s)}

G_{ij}(s,s') = \frac{\partial\langle\sigma_i(s)\rangle}{\partial\theta_j(s')} = i\lim_{\psi\to 0}\frac{\partial^2 Z[\psi]}{\partial\psi_i(s)\,\partial\theta_j(s')}

C_{ij}(s,s') = \langle\sigma_i(s)\sigma_j(s')\rangle = -\lim_{\psi\to 0}\frac{\partial^2 Z[\psi]}{\partial\psi_i(s)\,\partial\psi_j(s')}

¹ Upper (pattern) indices are understood to be taken modulo p unless otherwise stated.
for the average activation, response and correlation functions, respectively. Since this functional involves the probability p[σ(0),...,σ(t)] of finding a 'path' of neuron activations {σ(0),...,σ(t)}, the task of the analysis is to express this probability in terms of the macroscopic order parameters itself, to arrive at a set of closed macroscopic equations.

The first step in rewriting the path probability is to realise that (1) describes a one-step Markov process, so the path probability is just the product of the single-time transition probabilities, weighted by the probability of the initial state: p[\sigma(0),\ldots,\sigma(t)] = p(\sigma(0))\prod_{s=0}^{t-1} W[\sigma(s+1)|\sigma(s)]. Furthermore, we will in the course of the analysis frequently isolate interesting variables by introducing appropriate δ-functions, such as

1 = \int dh_i(t)\;\delta\!\left(h_i(t) - \sum_j J_{ij}\sigma_j(t) - \theta_i(t)\right).
The variable h_i(t) can be interpreted as the local field (or presynaptic potential) at site i and time t, and their introduction transforms Z[ψ] into

Z[\psi] = \sum_{\sigma(0)\cdots\sigma(t)} p(\sigma(0)) \int \prod_{s<t}\left[\frac{d\mathbf{h}(s)\,d\hat{\mathbf{h}}(s)}{(2\pi)^N}\right] e^{\sum_{s<t}\left[\beta\sigma(s+1)\cdot\mathbf{h}(s) - \sum_i \ln 2\cosh(\beta h_i(s))\right]} \times e^{i\sum_{s<t}\hat{\mathbf{h}}(s)\cdot[\mathbf{h}(s) - \mathbf{J}\sigma(s) - \theta(s)] - i\sum_{s\le t}\sigma(s)\cdot\psi(s)}.   (4)

This expression is the last general form of Z[ψ] we consider. To proceed with the analysis,
we have to make a specific ansatz for the system behaviour.
2 DYNAMIC MEAN FIELD THEORY
As sequence recall is the mode of operation we are most interested in, we make the ansatz that, for large systems, we have an overlap of order O(N⁰) between the system state and the pattern ξ^s at time s, and that all other patterns are overlapping with order O(N^{−1/2}) at most. Accordingly, we introduce the macroscopic order parameters for the condensed pattern m(s) = N^{-1}\sum_i \xi_i^s \sigma_i(s) and for the quantity k(s) = N^{-1}\sum_i \xi_i^{s+1}\hat{h}_i(s), and their noncondensed equivalents y^\mu(s) = N^{-1/2}\sum_i \xi_i^\mu \sigma_i(s) and x^\mu(s) = N^{-1/2}\sum_i \xi_i^\mu \hat{h}_i(s) (\mu \neq s), where the scaling ansatz is reflected in the normalisation constants. Introducing these objects using δ-functions, as with the local fields h_i(s), removes the product of two patterns in the last line of eq. (4), so that the exponent will be linear in the pattern bits.
Because macroscopic observables will in general not depend on the microscopic realisation of the patterns, the values of these observables do not change if we average Z[ψ] over the realisations of the patterns. Performing this average is complicated by the occurrence of some patterns in both the condensed and the noncondensed overlaps, depending on the current time index, which is an effect not occurring in the standard Hopfield model. Using some simple scaling arguments, this difficulty can be removed and we can perform the average over the noncondensed patterns. The disorder averaged Z[ψ] acquires the form

\overline{Z[\psi]} \propto \int dm\,d\hat{m}\,dk\,d\hat{k}\,dq\,d\hat{q}\,dQ\,d\hat{Q}\,dK\,d\hat{K}\; e^{N\left[\Psi + \Phi + \Omega\right]}   (5)
where we have introduced the new observables q(s,s') = \frac{1}{N}\sum_i \sigma_i(s)\sigma_i(s'), Q(s,s') = \frac{1}{N}\sum_i \hat{h}_i(s)\hat{h}_i(s'), and K(s,s') = \frac{1}{N}\sum_i \sigma_i(s)\hat{h}_i(s'), and their corresponding conjugate variables. The functions in the exponent turn out to be

\Psi[m,\hat{m},k,\hat{k},q,\hat{q},Q,\hat{Q},K,\hat{K}] = i\sum_{s<t}\left[\hat{m}(s)m(s) + \hat{k}(s)k(s) - m(s)k(s)\right] + i\sum_{s,s'<t}\left[\hat{q}(s,s')q(s,s') + \hat{Q}(s,s')Q(s,s') + \hat{K}(s,s')K(s,s')\right],   (6)

\Phi[\hat{m},k,\hat{q},\hat{Q},\hat{K}] = \frac{1}{N}\sum_i \ln\Bigg[\sum_{\sigma(0)\cdots\sigma(t)} p_i(\sigma(0)) \int\prod_{s<t}\left[\frac{dh(s)\,d\hat{h}(s)}{2\pi}\right] e^{\sum_{s<t}\left[\beta\sigma(s+1)h(s) - \ln 2\cosh(\beta h(s))\right]} \times e^{-i\sum_{s,s'<t}\left[\hat{q}(s,s')\sigma(s)\sigma(s') + \hat{Q}(s,s')\hat{h}(s)\hat{h}(s') + \hat{K}(s,s')\sigma(s)\hat{h}(s')\right]} \times e^{i\sum_{s<t}\hat{h}(s)\left[h(s) - \theta(s) - k(s)\xi_i^{s+1}\right] - i\sum_{s\le t}\sigma(s)\left[\hat{m}(s)\xi_i^s + \psi_i(s)\right]}\Bigg],   (7)

and

\Omega[q,Q,K] = \frac{1}{N}\ln\int\prod_{\mu>t}\prod_{s<t}\left[\frac{du_\mu(s)\,dv_\mu(s)}{2\pi}\right] e^{i\sum_{\mu>t}\sum_{s<t} u_{\mu+1}(s)v_\mu(s)} \times e^{-\frac{1}{2}\sum_{\mu>t}\sum_{s,s'<t}\left[u_\mu(s)Q(s,s')u_\mu(s') + u_\mu(s)K(s',s)v_\mu(s') + v_\mu(s)K(s,s')u_\mu(s') + v_\mu(s)q(s,s')v_\mu(s')\right]}.   (8)
The first of these expressions is just a result of the introduction of δ-functions, while the second will turn out to represent a probability measure given by the evolution of a single neuron under prescribed fields, and the third reflects the disorder contribution to the local fields in that single neuron measure.² We have thus reduced the original problem involving N neurons in a one-step Markov process to one involving just a single neuron, but at the cost of introducing two-time observables.
3 DERIVATION OF SADDLE POINT EQUATIONS
The integral in (5) will be dominated by saddle points, in our case by a unique saddle
point when causality is taken into account. Extremising the exponent with respect to all
occurring variables gives a number of equations, the most important of which give the
physical meanings of three observables: q(s,s') = C(s,s'), K(s,s') = iG(s,s'), and

m(s) = \lim_{N\to\infty}\frac{1}{N}\sum_i \xi_i^s \langle\sigma_i(s)\rangle   (9)

with

C(s,s') = \lim_{N\to\infty}\frac{1}{N}\sum_i \langle\sigma_i(s)\sigma_i(s')\rangle, \qquad G(s,s') = \lim_{N\to\infty}\frac{1}{N}\sum_i \frac{\partial\langle\sigma_i(s)\rangle}{\partial\theta_i(s')},   (10)

² We have assumed p(\sigma(0)) = \prod_i p_i(\sigma_i(0)).
which are the single-site correlation and response functions, respectively. The overline \overline{\cdots} is taken to represent disorder averaged values. Using also additional equations arising from the normalisation \overline{Z}[0] = 1, we can rewrite the single neuron measure Φ as

\langle f[\{\sigma\}]\rangle_* = \sum_{\sigma(0)\cdots\sigma(t)} \int\prod_{s<t}\left[\frac{dh(s)\,d\hat{h}(s)}{2\pi}\right] p(\sigma(0))\, f[\{\sigma\}]\, e^{\sum_{s<t}\left[\beta\sigma(s+1)h(s) - \ln 2\cosh(\beta h(s))\right]} \times e^{i\sum_{s<t}\hat{h}(s)\left[h(s)-\theta-m(s)\right] - \frac{\alpha}{2}\sum_{s,s'<t}\hat{h}(s)R(s,s')\hat{h}(s')}   (11)

with the short-hand R = \sum_{l\ge 0} (G^\dagger)^l C\, G^l. To simplify notation, we have here assumed that the initial probabilities p_i(\sigma_i(0)) are uniform and that the external fields θi(s) are so-called staggered ones, i.e. \theta_i(s) = \theta\,\xi_i^{s+1}, which makes the single neuron measure site-independent. This single neuron measure (11) represents the essential result of our calculations and is already properly normalised (i.e. \langle 1\rangle_* = 1).
When one compares the present form of the single neuron measure with that obtained for
the symmetric Hopfield network, one finds in the latter model an additional term which
corresponds to a retarded self-interaction. The absence of such a term here suggests that
the present model will have a higher storage capacity. It can be explained by the constant
change of state of a large number of neurons as the network goes through the sequence,
which prevents the build-up of microscopic memory of past activations.
However, as is the case for the standard Hopfield model, the measure (II) is still too complicated to find explicit equations for the observables we are interested in. Although it is
possible to evaluate the necessary integrals numerically, we instead concentrate on the interesting behaviour when transients have died out and time-translation invariance is present.
4
STATIONARY STATE
We will now concentrate on the behaviour of the network at the stage when transients have
subsided and the system is on a macroscopic limit cycle. Then the relations
=m
m(s)
C(s , s')
= C(s -
s')
G(s , s') = C(s - s').
(12)
hold and also R(s , s') = R(s - s') . We can then for simplicity shift the time origin
- 00 and the upper temporal bound to t = 00 . Note, however, that this state is not
to be confused with microscopic equilibrium in the thermodynamic sense . The stationary
versions of the measure (11) for the interesting observables are then given by the following
expressions (note that C(O) = 1):
to =
m = \int\prod_s\left[\frac{dv(s)\,dw(s)}{2\pi}\right] e^{i v\cdot w - \frac{1}{2}w\cdot R w}\,\tanh\beta\left[m + \theta + \sqrt{\alpha}\,v(0)\right]

C(\tau\neq 0) = \int\prod_s\left[\frac{dv(s)\,dw(s)}{2\pi}\right] e^{i v\cdot w - \frac{1}{2}w\cdot R w}\,\tanh\beta\left[m + \theta + \sqrt{\alpha}\,v(\tau)\right]\tanh\beta\left[m + \theta + \sqrt{\alpha}\,v(0)\right]

G(\tau) = \beta\,\delta_{\tau,1}\left[1 - \int\prod_s\left[\frac{dv(s)\,dw(s)}{2\pi}\right] e^{i v\cdot w - \frac{1}{2}w\cdot R w}\,\tanh^2\beta\left[m + \theta + \sqrt{\alpha}\,v(0)\right]\right]   (13)
and we notice that the response function is now limited to a single time step, which again reflects the influence of the uncorrelated flips induced by the sequence recall. These equations can be solved by separating the persistent and fluctuating parts of C(τ) and R(τ),
C(\tau) = \bar{q} + \tilde{C}(\tau), \qquad R(\tau) = r + \tilde{R}(\tau), \qquad \lim_{\tau\to\infty}\tilde{C}(\tau) = \lim_{\tau\to\infty}\tilde{R}(\tau) = 0.

Doing so eventually leads us to the coupled equations

\rho = \left[1 - \beta^2(1-q)^2\right]^{-1}   (14)

m = \int Dz\, \tanh\beta\left[m + \theta + z\sqrt{\alpha\rho}\right]   (15)

q = \int Dz\, \tanh^2\beta\left[m + \theta + z\sqrt{\alpha\rho}\right]   (16)

\bar{q} = \int Dz \left[\int Dx\, \tanh\beta\left[m + \theta + z\sqrt{\alpha q\rho} + x\sqrt{\alpha(1-q)\rho}\right]\right]^2   (17)
Note that the three equations (14)-(16) form a closed set, from which the persistent correlation q̄ simply follows via (17).
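A numerical sketch of iterating this closed set, assuming plain fixed-point iteration and Gauss-Hermite quadrature for the Gaussian measure Dz; the iteration counts are illustrative, and the iteration assumes β(1 − q) < 1 so that ρ stays positive.

```python
import numpy as np

def fixed_point(alpha, beta, theta=0.0, n_iter=2000, n_gauss=101):
    """Iterate the closed set (14)-(16) for (rho, m, q).

    Gauss-Hermite quadrature handles the Gaussian measure Dz.
    Returns (m, q); sequence recall corresponds to m != 0.
    """
    x, w = np.polynomial.hermite_e.hermegauss(n_gauss)
    w = w / np.sqrt(2.0 * np.pi)          # weights for a standard normal
    m, q = 0.5, 0.5                       # recall-like starting point
    for _ in range(n_iter):
        rho = 1.0 / (1.0 - beta**2 * (1.0 - q) ** 2)    # eq. (14)
        field = np.tanh(beta * (m + theta + x * np.sqrt(alpha * rho)))
        m = float(np.sum(w * field))                    # eq. (15)
        q = float(np.sum(w * field**2))                 # eq. (16)
    return m, q
```

Scanning α at fixed temperature T = 1/β and bisecting on the disappearance of the m ≠ 0 solution traces out the phase boundary discussed next.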
5 PHASE DIAGRAM AND STORAGE CAPACITY
Figure 1: Phase diagram of the sequence storage network, in which one finds two phases: a recall phase (R), characterized by {m ≠ 0, q > 0, q̄ > 0}, and a paramagnetic phase (P), characterized by {m = 0, q̄ = 0, q > 0}. The solid line separating the two phases is the theoretical prediction for the (discontinuous) phase transition. The markers represent simulation results, for systems of N = 10,000 neurons measured after 2,500 iteration steps, and obtained by bisection in α. The precision in terms of α is at least Δα = 0.005 (indicated by error bars); the values for T are exact.
The coupled equations (14)-(17) can be solved numerically for θ = 0 to find the area in the α-T plane where solutions m ≠ 0, corresponding to sequence recall, exist. The boundary of this area describes the storage capacity of the system. This theoretical curve can then be compared with computer simulations directly performing the neural dynamics
given by (1) and (2). We show the result of doing both in the same accompanying diagram. We find that there are only two types of solutions, namely a recall phase R where m ≠ 0 and q̄ ≠ 0, and a paramagnetic phase where m = q̄ = 0. Unlike the standard Hopfield model, the present model does not have a spin glass phase with m = 0 and q̄ ≠ 0. The agreement between simulations (done here for N = 10,000 neurons) and theoretical results is excellent, and separate simulations of systems with up to N = 50,000 neurons to assess finite size effects confirm that the numerical data are reliable.
6 DISCUSSION
In this paper, we have used path integral methods to solve in the infinite system size limit
the dynamics of a non-symmetric neural network model, designed to store and recall a sequence of patterns, close to saturation. This model has been known for over a decade from
numerical simulations to possess a storage capacity roughly twice that of the symmetric
Hopfield model , but no rigorous analytic results were available. We find here that in contrast to equilibrium statistical mechanical methods, which do not apply due to the absence
of detailed balance, the powerful path integral formalism provides us with a solution and
a transparent explanation of the increased storage capacity. It turns out that this higher capacity is due to the absence of a retarded self-interaction, viz. the absence of microscopic
memory of activations.
The theoretically obtained phase diagram can be compared to the results of numerical simulations and we find excellent agreement. Our confidence in this agreement is supported
by additional simulations to study the effect of finite size scaling. Full details of the calculations will be presented elsewhere [7] .
References
[1] Sherrington D and Kirkpatrick S 1975 Phys. Rev. Lett. 35 1792
[2] Amit D J, Gutfreund H, and Sompolinsky H 1985 Phys. Rev. Lett. 55 1530
[3] Amari Sand Maginu K 1988 Neural Networks 1 63
[4] de Dominicis C 1978 Phys. Rev. B 18 4913
[5] Rieger H, Schreckenberg M, and Zittartz J 1988 J. Phys. A: Math. Gen. 21 L263
[6] Kuhn R and van Hemmen J L 1991 Temporal Association ed E Domany, J L van
Hemmen , and K Schulten (Berlin, Heidelberg: Springer) p 213
[7] Düring A, Coolen A C C, and Sherrington D 1998 J. Phys. A: Math. Gen. 31 8607
Approximate Learning of Dynamic Models
Xavier Boyen
Computer Science Dept. 1A
Stanford, CA 94305-9010
xb@cs.stanford.edu
Daphne Koller
Computer Science Dept. 1A
Stanford, CA 94305-9010
koller@cs.stanford.edu
Abstract
Inference is a key component in learning probabilistic models from partially observable data. When learning temporal models, each of the
many inference phases requires a traversal over an entire long data sequence; furthermore, the data structures manipulated are exponentially
large, making this process computationally expensive. In [2], we describe
an approximate inference algorithm for monitoring stochastic processes,
and prove bounds on its approximation error. In this paper, we apply this
algorithm as an approximate forward propagation step in an EM algorithm
for learning temporal Bayesian networks. We provide a related approximation for the backward step, and prove error bounds for the combined
algorithm. We show empirically that, for a real-life domain, EM using
our inference algorithm is much faster than EM using exact inference,
with almost no degradation in quality of the learned model. We extend
our analysis to the online learning task, showing a bound on the error
resulting from restricting attention to a small window of observations.
We present an online EM learning algorithm for dynamic systems, and
show that it learns much faster than standard offline EM.
1 Introduction
In many real-life situations, we are faced with the task of inducing the dynamics of a
complex stochastic process from limited observations about its state over time. Until now,
hidden Markov models (HMMs) [12] have played the largest role as a representation for
learning models of stochastic processes. Recently, however, there has been increasing
use of more structured models of stochastic processes, such as factorial HMMs [8] or
dynamic Bayesian networks (DBNs) [4]. Such structured decomposed representations
allow complex processes over a large number of states to be encoded using a much smaller
number of parameters, thereby allowing better generalization from limited data [8, 7, 13].
Furthermore, the natural structure of such processes makes it easier for a human expert to
incorporate prior knowledge about the domain structure into the model, thereby improving
its inductive bias.
Both parameter and structure learning algorithms for dynamic models [12, 7] use probabilistic inference as a crucial component. An inference routine is called multiple times in
order to "fill in" missing data with its expected value according to the current hypothesis;
the resulting expected sufficient statistics are then used to construct a new hypothesis. The
inference step is used many times, each of which iterates over the entire sequence. This
behavior is problematic in two important respects. First, in many settings, we may not
have access to the entire sequence in advance. Second, the various structured representations of stochastic processes do not admit an effective inference procedure. The messages
propagated by exact inference algorithms include an entry for each possible state of the
system; the number of states is exponential in the size of our model, rendering this type of
computation infeasible in all but the smallest of problems. In this paper, we describe and
analyze an approach that helps us address both of these problems.
In [2], we proposed a new approach to approximate inference in stochastic processes, where
approximate distributions that admit compact representation are maintained and propagated.
Our approach can achieve exponential savings over exact inference for DBNs. We showed
empirically that, for a practical DBN [6], our approach results in a factor 15-20 reduction
in running time at only a small cost in accuracy. We also proved that the accumulated
error arising from the repeated approximations remains bounded indefinitely over time.
This result relied on an analysis showing that transition through a stochastic process is a
contraction for relative entropy (KL-divergence) [3].
Here, we apply this approach to the parameter learning task. This application is not
completely straightforward, since our algorithm of [2] and the associated analysis only
applied to the forward propagation of messages, whereas the inference used in learning
algorithms require propagation of information from the entire sequence. In this paper,
we provide an analysis of the error accumulated by an approximate inference process in
the backward propagation phase of inference. This analysis is quite different from the
contraction analysis for the forward phase. We combine these two results to prove bounds
on the error of the expected sufficient statistics relayed to the learning algorithm at each
stage. We then present empirical results for a practical DBN, illustrating the performance
of this approximate learning algorithm. We show that speedups of 15-20 can be obtained
easily, with no discernible loss in the quality of the learned hypothesis.
Our theoretical analysis also suggests a way of dealing with the problematic need to reason
about the entire sequence of temporal observations at once. Our contraction results show
that it is legitimate to ignore observations that are very far in the future. Thus, we can
compute a very accurate approximation to the backward message by considering only a
small window of observations in the future. This idea leads to an efficient online learning
algorithm. We show that it converges to a good hypothesis much faster than the standard
offline EM algorithm, even in settings favorable to the latter.
2 Preliminaries
A model for a dynamic system is specified as a tuple (B, Θ) where B represents the qualitative structure of the model, and Θ the appropriate parameterization. In a DBN, the instantaneous state of a process is specified in terms of a set of variables X1, ..., Xn. Here, B encodes a network fragment which specifies, for each time t variable X_k^(t), the set of parents Parents(X_k^(t)); an example fragment is shown in Figure 1(a). The parameters Θ define for each X_k^(t) a conditional probability table P[X_k^(t) | Parents(X_k^(t))]. For simplicity, we assume that the variables are partitioned into state variables, which are never observed, and observation variables, which are always observed. We also assume that the observation variables at time t depend only on state variables at time t. We use T to denote the transition matrix over the state variables in the stochastic process; i.e., T_{i,j} is the transition probability from state s_i to state s_j. Note that this concept is well-defined even for a DBN, although in that case, the matrix is represented implicitly via the other parameters. We use O to denote the observation matrix; i.e., O_{i,j} is the probability of observing response r_j in state s_i.
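As an illustration of the representation, a minimal container for such a two-slice fragment might look as follows; the class and field names are assumptions, not the authors' implementation.

```python
from dataclasses import dataclass, field
from typing import Dict, List
import numpy as np

@dataclass
class TwoSliceDBN:
    """A 2-slice fragment (B, Theta): parent sets plus one CPT per variable.

    Each CPT row holds the distribution of a time-t variable given one
    joint assignment of its parents (which may live in slice t-1 or t).
    """
    parents: Dict[str, List[str]] = field(default_factory=dict)
    cpts: Dict[str, np.ndarray] = field(default_factory=dict)

    def add_variable(self, name: str, pa: List[str], cpt: np.ndarray) -> None:
        assert np.allclose(cpt.sum(axis=-1), 1.0), "rows must be distributions"
        self.parents[name] = pa
        self.cpts[name] = cpt
```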
Our goal is to learn the model for the stochastic process from partially observable data. To simplify our discussion, we focus on the problem of learning parameters for a known structure using the EM (Expectation Maximization) algorithm [5]; most of our discussion applies equally to other contexts (e.g., [7]). EM is an iterative procedure that searches over the space of parameter vectors for one which is a local maximum of the likelihood function, i.e., the probability of the observed data D given Θ. We describe the EM algorithm for the task of learning HMMs; the extension to DBNs is straightforward. The EM algorithm starts with some initial (often random) parameter vector Θ, which specifies a current estimate of the transition and observation matrices of the process, T̂ and Ô. The EM algorithm computes the expected sufficient statistics (ESS) for D, using T̂ and Ô to compute the expectation. In the case of HMMs, the ESS are an average, over t, of the joint distributions ψ(t) over the variables at time t−1 and the variables at time t. A new parameter vector Θ′ can then be computed from the ESS by a simple maximum likelihood step. These two steps are iterated until an appropriate stopping condition is met.
The ψ(t) for the entire sequence can be computed by a simple forward-backward algorithm. Let r(t) be the response observed at time t, and let O_{r(t)} be its likelihood vector (O_{r(t)}(i) ≜ O_{i,r(t)}). The forward messages α(t) are propagated as follows: α(t) ∝ (α(t−1) · T) × O_{r(t)}, where × is the outer product. The backward messages β(t) are propagated as β(t) ∝ T · (β(t+1) × O_{r(t+1)}). The estimated belief at time t is now simply α(t) × β(t) (suitably renormalized); similarly, the joint belief ψ(t) is proportional to (α(t−1) × β(t) × T × O_{r(t)}).
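A compact sketch of these recursions for a plain HMM; the uniform initial state distribution and the per-step renormalization are assumptions made for numerical convenience.

```python
import numpy as np

def forward_backward(T, O, obs):
    """Forward-backward recursions above, for a plain HMM.

    T: (S, S) transition matrix; O: (S, R) observation matrix;
    obs: list of observed response indices.  Returns the pairwise
    joints psi[t] over the states at times t-1 and t.
    """
    S, L = T.shape[0], len(obs)
    alpha = np.zeros((L, S))
    beta = np.ones((L, S))
    alpha[0] = O[:, obs[0]] / S                    # uniform prior assumed
    alpha[0] /= alpha[0].sum()
    for t in range(1, L):
        alpha[t] = (alpha[t - 1] @ T) * O[:, obs[t]]
        alpha[t] /= alpha[t].sum()                 # renormalize for stability
    for t in range(L - 2, -1, -1):
        beta[t] = T @ (beta[t + 1] * O[:, obs[t + 1]])
        beta[t] /= beta[t].sum()
    psi = []
    for t in range(1, L):
        joint = np.outer(alpha[t - 1], beta[t] * O[:, obs[t]]) * T
        psi.append(joint / joint.sum())
    return psi
```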
This message passing algorithm has an obvious extension to DBNs. Unfortunately, it is
feasible only for very small DBNs. Essentially, the messages passed in this algorithm have
an entry for every possible state at time t; in a DBN, the number of states is exponential
in the number of state variables, rendering such an explicit representation infeasible in
most cases. Furthermore, even highly structured processes do not admit a more compact
representation of these messages [8, 2].
3 Belief state approximation
In [2], we described a new approach to approximate inference in dynamic systems, which
avoids the problem of explicitly maintaining distributions over large spaces. We maintain
our belief state (distribution over the current state) using some computationally tractable
representation of a distribution. We propagate the time t approximate belief state through
the transition model and condition it on our evidence at time t + 1. We then approximate the
resulting time t + I distribution using one that admits a compact representation, allowing
the algorithm to continue. We also showed that the errors arising from the repeated
approximation do not accumulate unboundedly, as the stochasticity of the process attenuates
their effect.
In particular, for DBNs we considered belief state approximations where certain subsets of
less correlated variables are grouped into distinct clusters which are approximated as being
independent. In this case, the approximation at each step consists of a simple projection
onto the relevant marginals, which are used as a factored representation of the time t + 1
approximate belief state. This algorithm can be implemented efficiently using the clique
tree algorithm [10]. To compute α̂(t+1) from α̂(t), we generate a clique tree over these two time slices of the DBN, ensuring that both the time t and time t+1 clusters appear as a subset of some clique. We then incorporate α̂(t) into the time t cliques; α̂(t+1) is obtained by calibrating the tree (doing inference) and reading off the relevant marginals from the tree (α̂(t+1) is implicitly defined as their product).
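A sketch of the projection step itself, stripped of the clique tree machinery: given an exact joint belief, the factored approximation keeps only the cluster marginals. The function name and the flat-array representation are illustrative.

```python
import numpy as np

def project_onto_clusters(joint, var_sizes, clusters):
    """Project a joint belief onto independent clusters of variables.

    joint: flat array over all state variables; var_sizes: per-variable
    cardinalities; clusters: tuples of variable indices.  Returns one
    marginal per cluster; their product is the approximate belief state.
    """
    table = np.asarray(joint).reshape(var_sizes)
    marginals = []
    for cluster in clusters:
        other = tuple(i for i in range(len(var_sizes)) if i not in cluster)
        marginals.append(table.sum(axis=other))    # marginalize the rest out
    return marginals
```

For the BAT network, the 5+5 approximation discussed below corresponds to two clusters of five state variables each.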
These results are directly applicable to the learning task, as the belief state is the forward message in the forward-backward algorithm. Thus, we can apply this approach to the forward step, with the guarantee that the approximation will not lead to a big difference in the ESS. However, this technique does not resolve our computational problems, as the backward propagation phase is as expensive as the forward phase. We can apply the same idea to the backward propagation, i.e., we maintain and propagate a compactly represented approximate backward message β̂(t). The implementation of this idea is a simple extension of our algorithm for forward messages. To compute β̂(t) from β̂(t+1), we simply incorporate β̂(t+1) into our clique tree over these two time slices, then read off the relevant marginals for computing β̂(t).
However, extending the analysis is not as straightforward: it is difficult to apply the techniques of [2] to get relative error bounds for the backward message. Furthermore, even if we have bounds on the relative entropy error of both the forward and backward messages, bounds for the error of the ψ(t) do not follow. The solution turns out to use an alternative notion of distance, which combines additively under Bayesian updating, albeit at the cost of weaker contraction rates.

Definition 1 Let p and p̂ be two positive vectors of the same dimension. Their projective distance is defined as D_Proj[p, p̂] ≜ max_{i,i′} ln[(p_i · p̂_{i′})/(p_{i′} · p̂_i)].
We note that the projective distance is a (weak) upper bound on the relative entropy.
Based on the results of [1], we show that projective distance contracts when messages are
propagated through the stochastic transition matrix, in either direction. Of course, the rate
of contraction depends on ergodicity properties of the matrix:

Lemma 2 Let k = min_{i,j,i′,j′ : T_{i,j}·T_{i′,j′} ≠ 0} √((T_{i,j′} · T_{i′,j})/(T_{i,j} · T_{i′,j′})), and define κ_T ≜ 2k/(1+k). Then D_Proj[α(t), α̂(t)] ≤ (1−κ_T) · D_Proj[α(t−1), α̂(t−1)], and D_Proj[β(t), β̂(t)] ≤ (1−κ_T) · D_Proj[β(t+1), β̂(t+1)].
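The two quantities are straightforward to compute; the following sketch, with illustrative function names, evaluates the projective distance of Definition 1 and the contraction coefficient κ_T of Lemma 2 by brute force.

```python
import numpy as np

def proj_dist(p, q):
    """Projective distance of Definition 1 for positive vectors p, q."""
    r = np.log(np.asarray(p, float)) - np.log(np.asarray(q, float))
    # max over i, i' of ln[(p_i q_i') / (p_i' q_i)] = max(r) - min(r)
    return float(r.max() - r.min())

def contraction_rate(T):
    """kappa_T of Lemma 2; returns 0 when a blocking zero pattern exists."""
    n, m = T.shape
    ratios = [np.sqrt(T[i, jp] * T[ip, j] / (T[i, j] * T[ip, jp]))
              for i in range(n) for ip in range(n)
              for j in range(m) for jp in range(m)
              if T[i, j] * T[ip, jp] > 0]
    k = min(ratios)
    return 2.0 * k / (1.0 + k)
```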
We can now show that, if our approximations do not introduce too large an error, then the
expected sufficient statistics will remain close to their correct value.
Theorem 3 Let S be the ESS computed via exact inference, and let Ŝ be its approximation. If the forward (backward) approximation step is guaranteed to introduce at most ε (δ) projective error, then D_Proj[S, Ŝ] ≤ (ε + δ)/κ_T. Therefore D_KL[S‖Ŝ] ≤ (ε + δ)/κ_T.

Note that even small fluctuations in the sufficient statistics can cause the EM algorithm to reach a different local maximum. Thus, we cannot analytically compare the quality of the resulting algorithms. However, as our experimental results show, there is no divergence between exact EM and approximate EM in practice.
We tested our algorithms on the task of learning the parameters for the BAT network shown
in Figure 1(a), used for traffic monitoring [6]. The training set was a fixed sequence
of 1000 slices, generated from the correct network distribution. Our test metric was the
average log-likelihood (per slice) of a fixed test sequence of 50 slices. All experiments
were conducted using three different random starting points for the parameters (the same
Figure 1: (a) The BAT DBN. (b) Structural approximations for batch EM.
in all the experiments). We ran EM with different types of structural approximations, and evaluated the quality of the model after each iteration of the algorithm. We used four different structural approximations: (i) exact propagation; (ii) a 5+5 clustering of the ten state variables; (iii) a 3+2+4+1 clustering; (iv) each variable in a separate cluster. The results for one random starting point are shown on Figure 1(b). As we can see, the impact of (even severe) structural approximation on learning accuracy is negligible. In all of the runs, the approximate algorithm tracked the exact one very closely, and the largest difference in the peak log-likelihood was at most 0.04. This phenomenon is rather remarkable, especially in view of the substantial savings caused by the approximations: on a Sun Ultra II, the computational cost of learning was 138 min/iteration in the exact case, vs. 6 min/iteration for the 5+5 clustering, and less than 5 min/iteration for the other two.
4 Online learning
Our analysis also gives us the tools to address another important problem with learning
dynamic models: the need to reason about the entire temporal sequence at once. One
consequence of our contraction result is that the effect of approximations done far away
in the sequence decays exponentially with the time difference. In particular, the effect
of an approximation which ignores observations that are far in the future is also limited.
Therefore, if we do inference for a time slice based on a small window of observations into
the future, the result should still be fairly accurate. More precisely, assume that we are at
time t and are considering a window of size w. We can view the uniform message as a very
bad approximation to β(t+w). But as we propagate this approximate backward message
from t + w to t, the error will decay exponentially with w.
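A sketch of this windowed backward pass for a plain HMM (our simplification; the paper's messages live in a factored DBN representation):

```python
import numpy as np

def windowed_beta(T, obs_like, t, w):
    """Approximate beta(t) from a window of w future observations.

    T: transition matrix; obs_like[s]: vector of observation likelihoods
    at slice s. The uniform message at t + w plays the role of the "very
    bad" approximation; its error contracts roughly as (1 - kappa_T)^w.
    """
    beta = np.ones(T.shape[0])
    for s in range(t + w, t, -1):        # propagate back to slice t
        beta = T @ (obs_like[s] * beta)  # standard backward recursion
        beta /= beta.sum()               # renormalize for stability
    return beta
```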
Based on these insights, we experimented with various online algorithms that use a small
window approximation. Our online algorithms are based on the approach of [11], in
which ESS are updated with an exponential decay every few data cases; the parameters
are then updated correspondingly. The main problem with frequent parameter updates in
the online setting is that they require a recomputation of the messages computed using the
old parameters. For long sequences, the computational cost of such a scheme would be
prohibitive. In our algorithms, we simply leave the forward messages unchanged, under
the assumption that the most recent time slices used parameters that are very close to the
new ones. Our contraction result tells us that the use of old parameters far back in the
sequence has a negligible effect on the message. We tried several schemes for the update
of the backward messages. In the dynamic-1000 approach, we use a backward message
computed over 1000 slices, with the closer messages recomputed very frequently as the
parameters are changed, based on cached messages that used older parameters. The 8
closest messages are updated every parameter update, the next 16 every other update, etc.

Figure 2: Temporal approximations for (a) batch setting; (b) online setting.
This approach is the closest realistic alternative to a full update of backward messages. In
the static-1000 approach, we use a very long window (1000 slices), but do not recompute
messages; when the window ends, we use the current parameters to compute the messages
for the entire next window. In the static-4 approach, we do the same, but use a very short
window of 4 slices. Finally, in the static-0 approach, there is no lookahead at all; only the
past and present evidence is used to compute the joint beliefs. The latter case is often used
(e.g., in the context of Kalman filters [9]) for online learning of the process parameters.
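All of these schemes share the exponentially decayed ESS update of [11]; schematically (the decay rate here is illustrative, not the value used in our experiments):

```python
import numpy as np

def decayed_ess_update(ess, batch_counts, decay=0.99):
    """Online sufficient-statistics update in the style of [11]:
    older counts are down-weighted so that recent evidence dominates;
    parameters are then re-estimated from the decayed counts."""
    return decay * np.asarray(ess) + np.asarray(batch_counts)
```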
To minimize the computational burden, all tests were conducted using the 5+5 structural
approximation. The running times for the various algorithms are: 0.4 sec/slice for batch
EM; 1.4 for dynamic-1000; 0.5 for static-1000 and for static-4; and 0.3 for static-0.
We evaluated these temporal approximations both in an online and in a batch setting. In the
batch experiments, we used the same 1000-step sequence used above. The results are shown
in Figure 2(a). We see that the dynamic-1000 algorithm reaches the same quality model as
standard batch EM, but converges sooner. As in [11], the difference is due to the frequent
update of the sufficient statistics based on more accurate parameters. More interestingly,
we see that the static-4 algorithm, which uses a lookahead of only 4, also reaches the same
accuracy. Thus, our approximation-ignoring evidence far in the future-is a good one,
even for a very weak notion of "far". By contrast, we see that the quality reached by the
static-0 approach is significantly lower: the sufficient statistics used by the EM algorithm
in this case are consistently worse, as they ignore all future evidence. Thus, in this network,
a window of size 4 is as good as full forward-backward, whereas one of size 0 is clearly
worse. Our online learning experiments, shown in Figure 2(b), used a single long sequence
of 40,000 slices. Again, we see that the static-4 approach is almost indistinguishable in
terms of accuracy from the dynamic-1000 approach, and that both converge more rapidly
than the static-1000 algorithm. Thus, frequent updates over short windows are better than
infrequent updates over longer ones. Finally, we see again that the static-0 algorithm
converges to a hypothesis of much lower quality. Thus, even a very short window allows
rapid convergence to the "best possible" answer, but a window of size 0 does not.
5 Conclusion and extensions
In this paper, we suggested the use of simple structural approximations in the inference
algorithm used in an E-step. Our results suggest that even severe structural approximations
have almost negligible effects on the accuracy of learning. The advantages of approximate
inference in the learning setting are even more pronounced than in the inference task [2],
as the small errors caused by approximation are negligible compared to the larger ones
induced by the learning process. Our techniques provide a new and simple approach for
learning structured models of complex dynamic systems, with the resulting advantages of
generalization and the ability to incorporate prior knowledge. We also presented a new
algorithm for the online learning task, showing that we can learn high-quality models using
a very small time window of future observations.
The work most comparable to ours is the variational approach to approximate inference
applied to learning factorial HMMs [8]. While we have not done a direct empirical
comparison, it seems likely that the variational approach would work better for densely
connected models, whereas our approach would dominate for structured models such as the
one in our experiments. Indeed, for this model, our algorithms track exact EM so closely
that any significant improvement in accuracy is unlikely. Our algorithm is also simpler and
easier to implement. Most importantly, it is applicable to the task of online learning.
The most obvious extension to our results is an integration of our ideas with structure
learning algorithm for DBNs [7] . We believe that the resulting algorithm will be able to
learn structured models for real-life complex systems.
Acknowledgements. We thank Tim Huang for providing us with the BAT network, and Nir
Friedman and Leonid Gurvits for useful discussions. This research was supported by ARO
under the MURI program "Integrated Approach to Intelligent Systems", and by DARPA
contract DACA 76-93-C-0025 under subcontract to lET, Inc.
References
[1] M. Artzrouni and X. Li. A note on the coefficient of ergodicity of a column-allowable
nonnegative matrix. Linear Algebra and its Applications, 214:93-101, 1995.
[2] X. Boyen and D. Koller. Tractable inference for complex stochastic processes. In
Proc. UAI, pages 33-42, 1998.
[3] T. Cover and J. Thomas. Elements of Information Theory. Wiley, 1991.
[4] T. Dean and K. Kanazawa. A model for reasoning about persistence and causation.
Comp. Int., 5(3), 1989.
[5] A.P. Dempster, N.M. Laird, and D.B. Rubin. Maximum-likelihood from incomplete
data via the EM algorithm. Journal of the Royal Statistical Society, B39:1-38, 1977.
[6] J. Forbes, T. Huang, K. Kanazawa, and S.J. Russell. The BATmobile: Towards a
Bayesian automated taxi. In Proc. IJCAI, 1995.
[7] N. Friedman, K. Murphy, and S.J. Russell. Learning the structure of dynamic probabilistic networks. In Proc. UAI, pages 139-147, 1998.
[8] Z. Ghahramani and M.I. Jordan. Factorial hidden Markov models. In NIPS 8, 1996.
[9] R.E. Kalman. A new approach to linear filtering and prediction problems. J. of Basic
Engineering, 82:34-45, 1960.
[10] S.L. Lauritzen and D.J. Spiegelhalter. Local computations with probabilities on graphical structures and their application to expert systems. J. Roy. Stat. Soc., B 50, 1988.
[11] R.M. Neal and G.E. Hinton. A view of the EM algorithm that justifies incremental,
sparse, and other variants. In M.I. Jordan, editor, Learning in Graphical Models .
Kluwer, 1998.
[12] L. Rabiner and B. Juang. An introduction to hidden Markov models. IEEE Acoustics,
Speech & Signal Processing, 1986.
[13] G. Zweig and S.J. Russell. Speech recognition with dynamic Bayesian networks. In
Proc. AAAI, pages 173-180, 1998.
Perceiving without Learning: from Spirals to
Inside/Outside Relations
Ke Chen* and DeLiang L. Wang
Department of Computer and Information Science and Center for Cognitive Science
The Ohio State University, Columbus, OH 43210-1277, USA
{kchen,dwang}@cis.ohio-state.edu
Abstract
As a benchmark task, the spiral problem is well known in neural networks. Unlike previous work that emphasizes learning, we approach
the problem from a generic perspective that does not involve learning.
We point out that the spiral problem is intrinsically connected to the inside/outside problem. A generic solution to both problems is proposed
based on oscillatory correlation using a time delay network. Our simulation results are qualitatively consistent with human performance, and
we interpret human limitations in terms of synchrony and time delays,
both biologically plausible. As a special case, our network without time
delays can always distinguish these figures regardless of shape, position,
size, and orientation.
1 INTRODUCTION
The spiral problem refers to distinguishing between a connected single spiral and disconnected double spirals, as illustrated in Fig. 1. Since Minsky and Papert (1969) first introduced the problem in their influential book on perceptrons, it has received much attention
and has become a benchmark task in neural networks. Many solutions have been attempted
using different learning models since Lang and Witbrock (1988) reported that the problem
could not be solved with a standard multilayer perceptron. However, resulting learning
systems are only able to produce decision regions highly constrained by the spirals defined in a training set, thus specific in shape, position, size, and orientation. Moreover,
no explanation is provided as to why the problem is difficult for human subjects to solve.
Grossberg and Wyse (1991) proposed a biologically plausible neural network architecture
for figure-ground separation and reported their network can distinguish between connected
and disconnected spirals. In their paper, however, no demonstration was given to the spiral
problem, and their model does not exhibit the limitations that humans do.
* Also with National Laboratory of Machine Perception and Center for Information Science,
Peking University, Beijing 100871, China. E-mail: chen@cis.pku.edu.cn
There is a related problem in the study of visual perception, i.e., the perception of inside/outside relations. Considering the visual input of a single closed curve, the task of
perceiving the inside/outside relation is to determine whether a specific pixel lies inside or
outside the closed curve. For the human visual system, the perception of inside/outside
relations often appears to be immediate and effortless (see an example in Fig. 2(a)). As illustrated in Fig. 2(b), however, the immediate perception is not available for humans when
the bounding contour becomes highly convoluted (Ullman 1984). Ullman (1984) suggested
the computation of spatial relation through the use of visual routines. Visual routines result
in the conjecture that the inside/outside is inherently sequential. As pointed out recently by
Ullman (1996), the processes underlying the perception of inside/outside relations are as
yet unknown and applying visual routines is simply one alternative .
Fig. 1: The spiral problem. (a) a connected single spiral. (b) disconnected double spirals (adapted from Minsky and Papert 1969, 1988).

Fig. 2: Inside/Outside relations. (a) an example (adapted from Julesz 1995). (b) another example (adapted from Ullman 1984).
Theoretical investigations of brain functions indicate that timing of neuronal activity is a
key to the construction of neuronal assemblies (Milner 1974, Malsburg 1981). In particular, the discovery of synchronous oscillations in the visual cortex (Singer & Gray 1995)
has triggered much interest to develop computational models for oscillatory correlation.
Recently, Terman and Wang (1995) proposed locally excitatory globally inhibitory oscillator networks (LEGION). They theoretically showed that LEGION can rapidly achieve both
synchronization in a locally coupled oscillator group representing each object and desynchronization among a number of oscillator groups representing different objects. More
recently, Campbell and Wang (1998) have studied time delays in networks of relaxation
oscillators and analyzed the behavior of LEGION with time delays. Their studies show
that loosely synchronous solutions can be achieved under a broad range of initial conditions and time delays. Therefore, LEGION provides a computational framework to study
the process of visual perception from a standpoint of oscillatory correlation.
We explore both the spiral problem and the inside/outside relations by oscillatory correlation in this paper. We show that computation through LEGION with time delays yields a
generic solution to these problems since time delays inevitably occur in information transmission of a biological system. This investigation indicates that perceptual performance
would be limited if local activation cannot be rapidly propagated due to time delays. As a
special case, LEGION without time delays reliably distinguishes between connected and
disconnected spirals and discriminates the inside and the outside regardless of shape, position, size, and orientation. Thus, we suggest that this kind of problems may be better
solved by a neural oscillator network rather than by sophisticated learning.
2 METHODOLOGY
The architecture of LEGION used in this paper is a two-dimensional network. Each oscillator is connected to its four nearest neighbors, and the global inhibitor (GI) receives
excitation from each oscillator on the network and in turn inhibits each oscillator (Terman
& Wang 1995). In LEGION, a single oscillator $i$ is defined as

$dx_i/dt = 3x_i - x_i^3 + 2 - y_i + I_i + S_i + \rho$    (1a)

$dy_i/dt = \epsilon\,(\lambda + \gamma \tanh(\beta x_i) - y_i)$    (1b)
Here $I_i$ represents external stimulation to the oscillator, and $S_i$ represents overall coupling
from other oscillators and the GI in the network. The symbol $\rho$ denotes the amplitude of a
Gaussian noise term. The other parameters $\epsilon$, $\beta$, $\lambda$, and $\gamma$ are chosen to control a periodic solution
of the dynamical system. The periodic solution alternates between the silent and the active
phases of near steady-state behavior (Terman & Wang 1995). The coupling term $S_i$ at time
$t$ is

$S_i = \sum_{k \in N(i)} W_{ik}\, S_\infty(x_k(t - \tau), \theta_x) - W_z\, S_\infty(z, \theta_z)$,    (2)
where $S_\infty(x, \theta) = 1/(1 + \exp[-\kappa(x - \theta)])$ and the parameter $\kappa$ controls the steepness of
the sigmoid function. $W_{ik}$ is a synaptic weight from oscillator $k$ to oscillator $i$, and $N(i)$
is the set of its immediate neighbors. $\tau$ is a time delay in interactions (Campbell & Wang
1998), and $\theta_x$ is a threshold over which an oscillator can affect its neighbors. $W_z$ is the
positive weight used for the inhibition from the global inhibitor $z$, whose activity is defined
as
$dz/dt = \phi\,(u_\infty - z)$.    (3)

where $u_\infty = 0$ if $x_i < \theta_z$ for every oscillator $i$, and $u_\infty = 1$ if $x_i(t) \ge \theta_z$ for at least
one oscillator $i$. Here $\theta_z$ represents a threshold to determine whether the GI $z$ sends inhibition to oscillators, and the parameter $\phi$ determines the rate at which the inhibitor reacts to
stimulation from oscillators.
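For concreteness, here is a forward-Euler sketch of one integration step of equations (1)-(3), with the time delay $\tau$ omitted (our own simplification; the simulations below actually use fourth-order Runge-Kutta, and the default values here are taken from the caption of Fig. 3):

```python
import numpy as np

def legion_step(x, y, z, I, W, dt, eps=0.003, beta=500.0, gamma=24.0,
                lam=21.5, rho=0.03, kappa=500.0, theta_x=-0.5,
                theta_z=0.1, phi=3.0, W_z=1.5):
    """One Euler step of a LEGION network (a sketch; tau = 0 here)."""
    def S_inf(v, theta):
        return 1.0 / (1.0 + np.exp(-kappa * (v - theta)))
    S = W @ S_inf(x, theta_x) - W_z * S_inf(z, theta_z)            # eq. (2)
    dx = 3*x - x**3 + 2 - y + I + S + rho * np.random.randn(x.size)  # eq. (1a)
    dy = eps * (lam + gamma * np.tanh(beta * x) - y)               # eq. (1b)
    u = 1.0 if np.any(x >= theta_z) else 0.0
    dz = phi * (u - z)                                             # eq. (3)
    return x + dt * dx, y + dt * dy, z + dt * dz
```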
We use pattern formation to refer to the behavior that all the oscillators representing the
same object are synchronous, while the oscillators representing different objects are desynchronous. Terman and Wang (1995) have analytically shown that such a solution can be
achieved in LEGION without time delays. However, a solution may not be achieved when
time delays are introduced. Although the loose synchrony concept has been introduced to
describe time delay behavior (Campbell & Wang 1998), it does not indicate pattern formation in an entire network even when loose synchrony is achieved because loose synchrony
is a local concept defined in terms of pairs of neighboring oscillators. Here we introduce a measure called min-max difference in order to examine whether pattern formation is
achieved. Suppose that oscillators $O_i$ and $O_j$ represent two pixels in the same object, and
the oscillator $O_k$ represents a pixel in a different object. Moreover, let $t_s$ denote the time
at which oscillator $O_s$ enters the active phase. The min-max difference measure is defined
as $|t_i - t_j| < T_{RB}$ and $|t_i - t_k| \ge T_{RB}$, where $T_{RB}$ is the time period of an active phase.
Intuitively, this measure suggests that pattern formation is achieved if any two oscillators
representing two pixels in the same object have some overlap in the active phase, while any
two oscillators representing two pixels belonging to different objects never stay in the active
phase simultaneously. This definition of pattern formation applies to both exact synchrony
in LEGION without time delays and loose synchrony with time delays.
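Given the recorded entry times into the active phase, the criterion can be checked directly (a sketch with our own names; in practice the times are read off the oscillator traces):

```python
def pattern_formed(entry_times, labels, T_RB):
    """Min-max difference test: oscillators of the same object must
    overlap in the active phase; oscillators of different objects
    must never be in the active phase simultaneously."""
    n = len(entry_times)
    for i in range(n):
        for j in range(i + 1, n):
            gap = abs(entry_times[i] - entry_times[j])
            if labels[i] == labels[j] and gap >= T_RB:
                return False
            if labels[i] != labels[j] and gap < T_RB:
                return False
    return True
```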
3 SIMULATIONS
For a given image consisting of N x N pixels, a two-dimensional LEGION network with
N x N oscillators is used so that each oscillator in the network corresponds to one pixel
in the image. In the following simulations, the equations 1-3 were numerically solved
using the fourth-order Runge-Kutta method. We illustrate stimulated oscillators with black
squares. All oscillators were initialized randomly. A large number of simulations have
been conducted with a broad range of parameter values and network sizes (Chen & Wang
1997). Here we report typical results using a specific set of parameter values.
3.1 THE SPIRAL PROBLEM
For simulations, the two images in Fig. 1 were sampled as two binary images with 29 x 29
pixels. For these images, two problems can be addressed: (I) When an image is presented,
can one determine whether it contains a single spiral or double spirals? (2) Given a point
on a two-dimensional plane, can one determine whether it is inside or outside a specific
spiral?
Fig. 3: Results of LEGION with a time delay $\tau = 0.002T$ ($T$ is the period of oscillation) for
the spiral problem. The parameter values used in this simulation are $\epsilon = 0.003$, $\beta = 500$, $\gamma = 24.0$, $\lambda = 21.5$, $W_T = 6.0$, $\rho = 0.03$, $\kappa = 500$, $\theta_x = -0.5$, $\theta_z = 0.1$, $\phi = 3.0$, $W_z = 1.5$,
$I_s = 1.0$, and $I_u = -1.0$, where $I_s$ and $I_u$ are external input to stimulated and unstimulated
oscillators, respectively.
We first applied LEGION with time delays to the single spiral image in Fig. lea). Fig. 3(a)
illustrates the visual stimulus, where black pixels correspond to the stimulated oscillators
and white oneS correspond to the unstimulated oscillators. Fig. 3(b) shows a sequence of
snapshots after the network was stabilized except for the first snapshot which shows the
random initial state of the network. These snapshots are arranged in temporal order first
from left to right and then from top to bottom. We observe from these snapshots that an
activated oscillator in the spiral propagates its activation to its two immediate neighbors
with some time delay, and the process of propagation forms a traveling wave along the
spiral. We emphasize that, at any time, only the oscillators corresponding to a portion
of the spiral stay in the active phase together, and the entire spiral can never be in the
active phase simultaneously. Thus, based on the oscillatory correlation theory, our system
cannot group the whole spiral together, which indicates that our system fails to realize that
the pixels in the spiral belong to the same pattern. Note that the convoluted part of the
background behaves similarly. Fig. 3(c) shows the temporal trajectories of the combined x
activities of the oscillators representing the spiral (S) and the background (B) as well as the
temporal activity of the GI. According to the min-max difference measure, Fig. 3(c) shows
that pattern formation cannot be achieved. In order to illustrate the effects of time delays,
we applied LEGION without time delays to the same image. Simulation results show that
pattern formation is achieved, and the single spiral can be segregated from the background
by the second period (Chen & Wang 1997). Thus, LEGION without time delays can readily
solve the spiral problem in this case. The failure to group the spiral in Fig. 3 is caused by
time delays in the coupling of neighboring oscillators.
We also applied LEGION with time delays to the double spirals image in Fig. 1(b). Fig.
4(a) shows the visual stimulus. Fig. 4(b) shows a sequence of snapshots arranged in the
same order as in Fig. 3(b). We observe from these snapshots that starting from an end of one
spiral a traveling wave is formed along the spiral and the activated oscillators representing
the spiral propagate their activation. Due to time delays, however, only the oscillators
corresponding to a portion of the spiral stay in the active phase together, and the entire
spiral is never in the active phase simultaneously. The oscillators representing the other
spiral have the same behavior. The results show that the pixels in anyone of double spirals
cannot be grouped as the same pattern. We mention that the behavior of our system for the
convoluted part of the background is similar to that for the double spirals. It is also evident
from Fig. 4(c) that the pattern formation is not achieved after the network was stabilized.
We also applied LEGION without time delays to the double spirals image for the same
purpose as described before. Simulation results also show that anyone of spirals can be
segregated from both the other spiral and the background by the second period (Chen &
Wang 1997). Once again, it indicates that the failure to group the double spirals in Fig. 4
results from time delays.
Fig. 4: Results of LEGION with a time delay $\tau = 0.002T$ for the spiral problem. The parameter values used
are the same as listed in the caption of Fig. 3. In (c), S1 and S2 represent the two disconnected spirals.
B and GI denote the background and the global inhibitor, respectively.
For the spiral problem, pattern formation means that answers can be provided to the two questions posed above: counting the number of objects, and identifying
whether two pixels belong to the same spiral or not. No such answers are available when
pattern formation is not achieved. Hence, our system cannot solve the spiral problem in
general. Only under the special condition of no time delay can our system solve the problem.
3.2 INSIDE/OUTSIDE RELATIONS
For simulations, the two pictures in Fig. 2 were sampled as binary images with 43 x 43
pixels. We first applied LEGION with time delays to the two images in Fig. 2. Figures
5(a) and 6(a) show the visual stimuli, where black pixels represent areas A and B that correspond to stimulated oscillators and white pixels represent the boundary that corresponds
to unstimulated oscillators. Figures 5(b) and 6(b) illustrate a sequence of snapshots after
networks were stabilized except for the first snapshot which shows the random initial states
of networks. Figures 5(c) and 6(c) show temporal trajectories of the combined x activities
of the oscillators representing areas A and B as well as the GI, respectively.
Fig. 5: Results of LEGION with a time delay $\tau = 0.002T$ for Fig. 2(a). The parameter values used
in this simulation are $\epsilon = 0.004$, $\gamma = 14.0$, $\lambda = 11.5$, and the other parameter values are the same
as listed in the caption of Fig. 3. In (c), A, B, and GI denote areas A, B, and the global inhibitor,
respectively.
Fig. 6: Results of LEGION with a time delay $\tau = 0.002T$ for Fig. 2(b). The parameter values used
and other statements are the same as listed in the caption of Fig. 5.
We observe from Fig. 5(b) that the activation of an oscillator can rapidly propagate through
its neighbors to other oscillators representing the same area, and eventually all the oscillators representing the same area (A or B) stay together in the active phase simultaneously,
though they generally enter the active phase at different times due to time delays. Thus, on
the basis of oscillatory correlation, our system can group an entire area (A or B) together
and recognize all the pixels in area A or B as elements of the same area. According to
the min-max difference measure, Fig. 5(c) shows that pattern formation is achieved by the
second period. In contrast, we observe from Fig. 6(b) that although an activated oscillator
rapidly propagates its activation in open regions as shown in the last three snapshots, propagation is limited once the traveling wave spreads in spiral-like regions as shown in earlier
snapshots. As a result, at any time, only the oscillators corresponding to a portion of either
area stay in the active phase together, and the oscillators representing the whole area are
never in the active phase simultaneously. Thus, on the basis of oscillatory correlation, our
system cannot group the whole area, and fails to identify the pixels of one area as belonging
to the same pattern. Furthennore, according to the min-max difference measure, Fig. 6(c)
shows that pattern fonnation is not achieved after the network was stabilized. In order to
illustrate the effects of time delays and show how to use an oscillator network to perceive
inside/outside relations, we applied LEGION without time delays to the two images in Fig.
2. Our simulations show that LEGION without time delays readily segregates two areas
in both cases by the second period (Chen & Wang 1997). Thus, the failure to group each
area in Fig. 6 is also attributed to time delays in the coupling of neighboring oscillators.
In general, the above simulations suggest that oscillatory correlation provides a way to address inside/outside relations by a neural network; when pattern formation is achieved, a
single area segregates from other areas that appear in the same image. For a specific point
on the two-dimensional plane, the inside/outside relations can be identified by examining
whether the oscillator representing the point synchronizes with the oscillators representing
a specific area or not.
4 DISCUSSION AND CONCLUSION
It has been reported that many neural network models can solve the spiral problem through
learning. However, their solutions are subject to limitations because the generalization ability of the resulting learning systems depends strongly on the training set. As pointed out by
Minsky and Papert (1969), solving the spiral problem is equivalent to detecting connectedness. They showed that connectedness cannot be computed by any diameter-limited or
order-limited perceptrons (Minsky & Papert 1969). This limitation holds for multilayer
perceptrons regardless of learning scheme (Minsky & Papert 1988, p.252). Unfortunately,
few people have discussed generality of their solutions. In contrast, our simulations have
shown that LEGION without time delays can always distinguish these figures regardless
of shape, position, size, and orientation. We emphasize that no learning is involved in LEGION. In terms of performance, we suggest that the spiral problem may be better solved
by a network of oscillators without learning.
Our system provides an alternative way to perceive inside/outside relations from a neural
computation perspective. Our method is significantly distinguished from visual routines
(Ullman 1984, 1996). First, the visual routine method is described as serial algorithms,
while our system is an inherently parallel and distributed process although its emergent behavior reflects a degree of serial nature of the problems. Second, the visual routine method
does not make a qualitative distinction between rapid effortless perception that corresponds
to simple boundaries and slow effortful perception that corresponds to convoluted boundaries - the time a visual routine, e.g. the coloring method, takes varies continuously. In
contrast, our system makes such a distinction: effortless perception with simple boundaries
corresponds to when pattern formation is achieved, and effortful perception with convoluted boundaries corresponds to when pattern formation is not achieved. Third, perhaps
more importantly conceptually, our system does not invoke high-level serial process to
solve such problems like inside/outside relations; its solution involves the same mechanism as it does for parallel image segmentation (see Wang & Terman 1997).
Acknowledgments: Authors are grateful to S. Campbell for many discussions. This work
was supported in part by an NSF grant (IRI-9423312), an ONR grant (N00014-93-1-0335),
and an ONR Young Investigator Award (N00014-96-1-0676) to DLW.
References
Campbell, S. & Wang, D.L. (1998) Relaxation oscillators with time delay coupling. Physica D
111:151-178.
Chen, K. & Wang, D.L. (1997) Perceiving without learning: from spirals to inside/outside relations.
Technical Report OSU-C1SRC-8/97-TR38, The Ohio State University.
Grossberg, S. & Wyse, L. (1991) A neural network architecture for figure-ground separation of connected scenic figures. Neural Networks 4:723-742.
Julesz, B. (1995) Dialogues on perception. MIT Press.
Lang, K. & Witbrock, M. (1988) Learning to tell two spirals apart. Proceeding of 1988 Connectionist
Models Summer School, pp. 52-59, Morgan Kaufmann.
Milner, P. (1974) A model for visual shape recognition. Psychological Review 81:512-535.
Minsky, M. & Papert, S. (1969) Perceptrons. MIT Press.
Minsky, M. & Papert, S. (1988) Perceptrons (extended version). MIT Press.
Singer, W. & Gray, C.M. (1995) Visual feature integration and the temporal correlation hypothesis.
Annual Review of Neuroscience 18:555-586.
Terman, D. & Wang, D.L. (1995) Global competition and local cooperation in a network of neural
oscillators. Physica D 81:148-176.
Ullman, S. (1984) Visual routines. Cognition 18:97-159.
Ullman, S. (1996) High-level vision. MIT Press.
von der Malsburg, C. (1981) The correlation theory of brain function. Internal Report 81-2, Max-Planck-Institute for Biophysical Chemistry.
Wang, D.L. & Terman, D. (1997) Image segmentation based on oscillatory correlation. Neural Computation 9:805-836.
Performance of a Stochastic Learning Microchip
Joshua Alspector, Bhusan Gupta,* and Robert B. Allen
Bellcore, Morristown, NJ 07960
We have fabricated a test chip in 2 micron CMOS that can perform supervised
learning in a manner similar to the Boltzmann machine. Patterns can be
presented to it at 100,000 per second. The chip learns to solve the XOR
problem in a few milliseconds. We also have demonstrated the capability to
do unsupervised competitive learning with it. The functions of the chip
components are examined and the performance is assessed.
1. INTRODUCTION
In previous work,(l] (2] we have pointed out the importance of a local learning rule,
feedback connections, and stochastic elements(3] for making learning models that are
electronically implementable. We have fabricated a test chip in 2 micron CMOS
technology that embodies these ideas and we report our evaluation of the microchip and
our plans for improvements.
Knowledge is encoded in the test chip by presenting digital patterns to it that are
examples of a desired input-output Boolean mapping. This knowledge is learned and
stored entirely on chip in a digitally controlled synapse-like element in the form of
connection strengths between neuron-like elements. The only portion of this learning
system which is off chip is the VLSI test equipment used to present the patterns.
This learning system uses a modified Boltzmann machine algorithm[3] which, if
simulated on a serial digital computer, takes enormous amounts of computer time. Our
physical implementation is about 100,000 times faster. The test chip, if expanded to a
board-level system of thousands of neurons, would be an appropriate architecture for
solving artificial intelligence problems whose solutions are hard to specify using a
conventional rule-based approach. Examples include speech and pattern recognition and
encoding some types of expert knowledge.
2. CHIP COMPONENTS
Fig. 1 is a photograph of the silicon chip. It contains various test structures, the largest of
which. in the lower left, is a neural-style learning network composed of 6 neurons, each
with its own noise amplifier, and 15 bidirectional synapses which potentially allow the
network to be fully connected. In order to study these components separately, there is a
also a noise amplifier in the upper left comer of the chip, a neuron in the upper right, and
2 synapses in the lower right.
* Permanent address: University of California, Berkeley; EE Dep't, Cory Hall; Berkeley, CA 94720
Figure 1. Photograph of Test Chip Containing a Learning Network in Lower Left.
2.1 Neuron
The electronic neuron performs the physical computation:

$\text{activation} = f\Big(\sum_j w_{ij} s_j + \text{noise}\Big) = f(\text{gain} \times net_i)$

where $f$ is a monotonic non-linear function such as tanh. In some of our computer
simulations this is a step function corresponding to a high value of gain. The signal from
other neurons to neuron $i$ is the sum of neural states $s_j$ giving input, weighted by the
connection strengths $w_{ij}$, while the noise simulates a temperature in a physical
thermodynamic system. Their sum is the effective net input $net_i$.
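In software terms, the neuron's computation amounts to the following sketch (tanh standing in for $f$; the gain value is illustrative, not a measured chip parameter):

```python
import numpy as np

def neuron_activation(w_row, s, noise_amplitude, gain=10.0):
    """Stochastic neuron: squash the noisy net input. A step
    function corresponds to the limit of very high gain."""
    net = w_row @ s + noise_amplitude * np.random.randn()
    return np.tanh(gain * net)
```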
The model neuron is a double differential amplifier as shown in Fig. 2. Noise and signal
have separate differential inputs and are summed at low gain. The differential outputs of
this summing stage are converted to a single output by a high gain stage before being fed
into a switching arrangement. This selects either the net input or an external clamping
signal which forces the neuron into a desired state. The output of the switch is then
Figure 2. Circuitry of Electronic Analog Neuron.
further amplified before driving the network. The final output approximates a two-state
binary neuron.
2.2 Noise amplifier
Figure 3. Block Diagram of Noise Amplifier.
Fig. 3 is a block diagram of the noise amplifier. The original idea was to amplify the
thermal noise in the channel of a transistor with a gain of nearly a million but to stabilize
the dc output using low-pass negative feedback in 3 stages. By controlling the feedback,
one could control both the bandpass of the noise signal as well as the gain to provide for
annealing the temperature (amount of noise) as required by the Boltzmann machine
algorithm.[3] Unfortunately this amplifier proved unstable at high gain values, leading to
oscillations of a few MHz which were highly correlated among all the noise amplifiers in
the network. In spite of this undesirable correlation in the noise signals, the network was
still able to learn (see section 3). Rather than a slow "annealing", we used a rapid
"heating" and "flash freezing" of the network to randomize it. This was done by
momentarily opening a "noise on" switch during the time allotted for annealing.
Learning was also demonstrated by clamping the free running neurons momentarily to a
pseudo-random state and then releasing them to allow the network to settle.
2.3 Synapse
Fig. 4 is a block diagram of the digitally controlled electronic synapse. The weights are
stored as a sign and four bits of magnitude in five flip-flops arranged as an up-down
counter. The correlation logic tests whether the two neurons that the synapse connects
have the same binary state (correlated) or not at the end of the anneal cycle. If the
neurons are correlated in the "teacher" phase (when the teacher is clamping the output
neurons in the correct state) and not in the "student" phase (when the output neurons are
running free), then a signal to the counter increments the weight by one. If the reverse is
true, the counter is decremented. If the "teacher" and "student" phases have the same
correlation, no change is made.
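The rule implemented by this logic can be summarized in a few lines (a software analogue, not the chip's circuitry; the counter saturates at its 4-bit-plus-sign limits):

```python
def synapse_update(w, correlated_teacher, correlated_student):
    """Boltzmann-style update of the on-chip weight counter."""
    if correlated_teacher and not correlated_student:
        w += 1
    elif correlated_student and not correlated_teacher:
        w -= 1
    return max(-15, min(15, w))  # saturate at the counter limits
```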
Figure 4. Block Diagram of Synapse.
The digital weight is converted to an analog conductance by a set of pass transistors with
graduated binary conductance ratios. Measurements confirmed that the synapse
conductance increased monotonically from a value of -15 through +15 as the counter was
incremented. The -0 value, when loaded into the synapse, disconnected that link. We
usually initialized all the weights to +0 before learning.
3. PERFORMANCE EVALUATION OF NETWORK
3.1 XOR tests
The most difficult test for our 6 neuron network was to have it learn the exclusive-OR
function. The network was arranged with 2 input neurons, 2 hidden neurons, and 1
output neuron as shown in Fig. 5. There is also a so-called 'true' neuron which is always
clamped on. The negative of the weights from that neuron provide the threshold for the
other neurons. The exclusive-OR function is of historical interest because the neural
models of the 1960's could not learn it.[4] [5] This is because those learning algorithms did
not work when there was a layer of hidden neurons. Networks with only a single layer of
modifiable weights could learn the logical OR function but not the exclusive-OR (XOR).
The truth table in Fig. 5 shows that the XOR is 1 (on or true) when either one of the two
inputs is 1, but not when both are 1. However, recent algorithms such as the Boltzmann
machine are able to learn with a hidden layer and hence can solve the XOR.
In   out
00    0
01    1
10    1
11    0
Figure 5. 2-2-1 Network to Learn XOR.
To teach a network to be an XOR, we start with a blank slate where all the weights are
zero and then present the patterns of 1's and 0's in the figure, with the teacher alternately
clamping the output to the correct state and letting it run free. On each presentation, the
network is jittered by noise and correlations are counted by each synapse. At the end of
each teacher-student cycle, weights are adjusted.
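In outline, one presentation works as follows (a hypothetical software analogue: the net object and its settle and adjust methods are our own names, not the chip's interface):

```python
def present_pattern(net, inputs, target):
    """One teacher/student cycle: settle twice under noise, then let
    each synapse compare its two correlation outcomes."""
    teacher = net.settle(inputs, clamp_output=target, noise=True)
    student = net.settle(inputs, clamp_output=None, noise=True)
    for (i, j) in net.synapses:
        net.adjust(i, j,
                   correlated_teacher=(teacher[i] == teacher[j]),
                   correlated_student=(student[i] == student[j]))
```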
Tests of the chip were conducted using an HP 8180A data generator to present digital
patterns to the chip, an HP 8182 data analyzer to capture the chip's digital outputs, and
an HP 54112A digitizing oscilloscope to capture waveforms. Analog waveforms were
generated using an HP 8770A arbitrary waveform synthesizer feeding a Comlinear E2011
amplifier. These instruments were controlled by an HP 9836 computer running UNIX
with test programs written in C.
A pattern presentation phase consisted of five subphases and hence five clock cycles of
the data generator. The input and/or output pattern to be presented to the clamped
neurons is present during all five cycles. The first cycle presents noise or an annealing
wavefonn to the network. The second cycle sends a signal to each synapse to count
correlations. The fourth cycle can be used to send a signal to each synapse to adjust
weights. This is usually done only after two 5 cycle phases, one for the "teacher" phase
and one for the "student" phase. Thus, during learning, ten digital words were used in the
data generator for each pattern presentation.
In addition to presenting patterns, digital weights can also be read into the chip with a
similar 5 cycle phase. This uses the flip-flop storage arranged as a shift register for
weight storage and readout. Because the memory of the data generator was only 1024
bits deep, we would present only 66 patterns (660 words) each time the data generator
was loaded by the control computer. The remaining memory was used to initialize the
network to its previous value after the destructive readout of weights. In this way,
performance of the network was monitored after sets of 66 pseudo-randomly selected
patterns. 100 test patterns could also be presented, without learning, to see what
performance the network achieved at that point.
For the XOR, we organized the connectivity as in Fig. 5. For example, the connections
between input and output neurons were fixed at zero. In order to test the settling of the
network, we loaded a set of synapse weights that were learned in one of the computer
simulations. We then checked the settling times of the network for various transitions of
input states. These varied from 130 to 1700 nanoseconds, with most transitions in the
250 to 600 nanosecond range. The shortest time is a simple settling of the neuron
amplifier while the longest time represents several loops of settling of the network before
a stable state is found.
For the learning trials, we initialized all weights to zero. Fig. 6 shows three learning
curves for a 2-2-1 XOR network (Fig. 5). At first the network performs at chance but it
soon learns all the patterns. The values of the weights (which have an accuracy of 4 bits
plus a sign) after learning are also shown for one of the trials.
The chip had an easier time learning the XOR function in a network with only one
hidden unit provided there were also direct connections from input to output as shown in
the inset of Fig. 7. This also demonstrates the flexibility of the connectivity on the chip
which would not be possible if we organized it as a strictly layered network. The figure
shows the learning curves at various speeds of pattern presentation from 500 to 256,000
patterns per second. The clock rate of the data generator at the highest speed was 2.56
MHz so that the time during which noise was applied was only 400 nanoseconds. The
noise amplifier often did not produce an excursion of neural states at these frequencies
(Axes: fraction correct vs. number of training patterns.)
Figure 6. Proportion Correct for On-chip Learning vs. Patterns Presented.
Figure 7. Learning Curves for 2-1-1 XOR at Various Speeds.
effectively limiting learning above this rate. We could have increased the rate by
compressing the five cycle phase to three or by random clamping of free running
neurons, but probably not by an order of magnitude. Note that noise is necessary for
learning by this system as shown by the curve at 500 Hz without noise.
Fig. 8 is an oscilloscope trace of the 4 neural states as a function of time during the
pattern presentations.
(Traces, top to bottom: output unit, hidden unit, input unit A, input unit B; annotations mark the apply-noise and adjust-weights subphases and the teacher or student phase of each presentation.)
Figure 8. Neural States during Learning.
The time during which noise is applied is apparent from the rapid changes of state in the
hidden neuron and also in the output neuron when it is not clamped. Since each pattern
presentation can take as little as 5 microseconds, the XOR function can be learned in a
few milliseconds. A pattern presentation on a 1 MIP serial computer such as a VAX
11/780 takes about 0.5 seconds with our simulation software.
3.2 Unsupervised Learning
So far, we have described only supervised learning procedures, but the chip can also do
unsupervised learning which has no teacher. Nevertheless, the network can learn to
classify input patterns according to their similarity to one another. We set the chip
connectivity as in Fig. 9 with 4 input neurons and 2 output neurons arranged so that they
strongly inhibit each other to form a 'competitive' layer. With noise, this output layer
performs a 'winner-take-all' function in that the output neuron which has the strongest
net input is on and the other is off. This is because they inhibit each other strongly (are
connected to each other with a large negative weight) so that only one can be on. The
usual supervised learning rule was effectively simplified by removing the teacher
requirement so that correlations always increment weights. Specifically, we stored a
comparison pattern in the student phase which consisted of the 'on' state for the two
competitive neurons and 'off' for all the input neurons. We then presented patterns to the
chip with the "teacher" phase signal on. This has the effect of always decrementing the
competitive connections which therefore remain at the lower limit of -15 since it is not
possible to have more correlations than the stored "student" phase correlation. On the
other hand, the stored "student" phase correlation for the weights leading from the input
Figure 9. A Competitive Learning Network.
to the competitive layer is zero. Then, the winning output neuron will always be
correlated with those input neurons which are 'on' and hence these weights will be
incremented. A decay signal decremented weights occasionally to keep them from
growing too large. The net effect of such a procedure is for the output neurons to classify
the input space among themselves, such that each responds to a particular neighborhood
of similar patterns. (2]
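To make the procedure concrete, the following is a minimal Python sketch of the
winner-take-all update described above; the integer weight bounds, the decay
schedule, and the left/right pattern generator are our own assumptions rather than
the chip's exact circuitry.

```python
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_outputs = 4, 2
# Integer weights bounded to [-15, 15], mirroring the chip's synapse range.
W = rng.integers(-15, 16, size=(n_outputs, n_inputs)).astype(float)

def competitive_step(x, W, step, decay_every=50):
    """Winner-take-all update: the output unit with the largest net input
    wins; weights from active ('on') inputs to the winner are incremented,
    with an occasional decay to keep weights from growing too large."""
    winner = int(np.argmax(W @ x))
    W[winner] += x                       # correlate winner with 'on' inputs
    if step % decay_every == 0:
        W -= np.sign(W)                  # occasional decay signal
    np.clip(W, -15, 15, out=W)           # hardware weight limits
    return winner

# Patterns never have an equal number of 'left' (inputs 0,1) and 'right'
# (inputs 2,3) units on, so one output unit comes to respond to each side.
for step in range(2000):
    x = np.zeros(n_inputs)
    heavy = rng.integers(2)
    many, few = ([0, 1], [2, 3]) if heavy == 0 else ([2, 3], [0, 1])
    x[many] = 1.0
    x[rng.choice(few)] = float(rng.integers(2))
    competitive_step(x, W, step)
```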
To demonstrate competitive learning, an input set was prepared such that the four input
bits were not quite random. We picked two input neurons to represent 'left' and the other
two to represent 'right'. Patterns were never used with an equal number of left and right
neurons on. Eventually one of the two output neurons responded to left weighted
patterns and the other to right weighted patterns. Fig. 9 shows one set of weights which
were obtained. Therefore the chip learned left from right although nothing in its wiring
predisposed it in any way.
3.3 Computer Simulations of Chip Test Conditions
Computer tests were conducted which simulated limitations of the operating chip such as
correlated noise. Table 1 presents summaries of 10 replications of 2000 pattern
presentations across 5 testing conditions. The Table reports the mean percent correct on
the last 100 patterns and, in parentheses, the number of networks which reached 100%
performance during at least one block of 100 pattern presentations. The first line of the
table shows the performance of the network with no noise. In the next four lines, two
parameters of the noise were varied yielding 4 conditions. Specifically, noise was either
correlated or uncorrelated across neurons and it was either presented as a single pulse in
a "flash freeze" schedule or following a broad annealing schedule.
Performance of a Stochastic Learning Microchip
The 2-1-1 XOR, in which the inputs are directly connected to the outputs, demonstrated
very good performance across conditions. Indeed, additional tests of the 2-1-1 in the
no-noise condition showed that within 10k patterns all networks reached 100%. This
suggests there are deterministic solutions for the 2-1-1.
TABLE 1. Results of Computer Simulations.
noise          schedule              2-1-1 XOR   2-2-1 XOR   4-4-1 parity
no noise       --                    92 (9)      67 (0)      72 (0)
correlated     flash freeze          95 (9)      83 (5)      71 (0)
correlated     anneal temperature    99 (10)     78 (2)      74 (0)
uncorrelated   flash freeze          99 (10)     84 (4)      67 (0)
uncorrelated   anneal temperature    99 (10)     85 (5)      79 (0)
no noise       anneal gain           99 (9)      81 (4)      85 (2)
The 2-2-1 networks learned to only 67% correct without noise. Learning with correlated
noise degraded performance compared to learning with uncorrelated noise. While the
chip contained only 6 neurons it was of interest to consider how limitations such as those
studied here might affect solutions to larger problems. Thus, solutions to parity
problems were considered and are included in the table.
It is worth noting that the full complexity of the chip's settling and noise distribution is
not captured in the discrete time simulations on the computer. The fact that we do not
use a circuit simulation may account for some of the differences between the simulations
and chip performance. It is interesting to note that learning by the chip was generally
faster than learning by the simulation program and that the chip seemed to require noise
for learning more than the simulator.
We also considered a system without random noise in which we annealed the inverse
gain of the neurons like a temperature through a broad annealing schedule covering the
values previously examined [2]. As shown in the last line of the Table, this performed
comparably to the temperature annealing reported above. 10 runs of a 2-2-1 XOR gave a
mean performance of 81% with 4 networks reaching 100%. On the 4-4-1 parity problem
the mean performance was better than the results of annealing temperature: the mean
performance was 85% and 2 networks reached 100%. For still larger problems, such as
6-8-1 parity, performance was comparable to annealing with noise.
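As an illustration of gain annealing, here is a small Python sketch of settling a
network of stochastic +/-1 neurons while the gain is swept from low to high (i.e.,
the inverse gain is lowered like a temperature); the geometric schedule and network
size are our own assumptions, not the chip's.

```python
import numpy as np

rng = np.random.default_rng(0)

def settle_with_gain_annealing(W, s, gains, flips_per_gain=20):
    """Asynchronously update stochastic +/-1 neurons; at gain g, a neuron
    turns on with probability 1/(1 + exp(-2*g*net)), so low gain acts like
    high temperature and high gain approaches a deterministic threshold."""
    for g in gains:
        for _ in range(flips_per_gain):
            i = rng.integers(len(s))
            net = W[i] @ s
            p_on = 1.0 / (1.0 + np.exp(-2.0 * g * net))
            s[i] = 1.0 if rng.random() < p_on else -1.0
    return s

# Toy symmetric network with zero self-connections.
W = rng.standard_normal((6, 6))
W = (W + W.T) / 2.0
np.fill_diagonal(W, 0.0)
s = rng.choice([-1.0, 1.0], size=6)
s = settle_with_gain_annealing(W, s, gains=np.geomspace(0.1, 10.0, num=8))
```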
4. FUTURE DIRECTIONS
4.1 Applications of Learning Systems
Learning systems give us a way to encode knowledge as a set of training examples rather
than as a set of rules. Learned behavior emerges from the training set in ways that
depend on the input representation. the network architecture. and the learning procedure.
This technique is suitable for problem domains where there are too many rules or where
the rules are not known. Two general categories of problems suitable for learning
757
758
Alspector, Gupta and Allen
systems are pattern recognition and some types of expert systems.
Pattern recognition of something like an oak leaf is difficult because of the many
variations a rule-based system would have to consider even when variations of scale,
rotation, and translation are accounted for. Yet, it is quite easy to give a learning system
many training examples of oak leaves. Scale, rotation, and translation invariance can be
built into the network structure. Similarly, recognition of speech sounds is difficult, but
many training examples exist. Here also, pre-processing of the auditory data is important
to obtain a useful representation. Another pattern learning task useful in
telecommunications is learning the codebook for vector quantization in a real-time visual
data compression system. [6]
Expert knowledge is often easier to encode by training examples as well. Experts often
do not know the rules they use to troubleshoot equipment or give advice. Again, it is
quite easy, by taking a history of such advice, to build a large database of training
examples. As knowledge changes, training is a more graceful way of updating a
knowledge base than changing the rules. In telephone networks, fault handling or traffic
routing are examples of problems for which training is a suitable way of encoding
knowledge.
4.2 Future Large-Scale Learning Systems
Because training takes too much computer time in a simulation, physical
implementations of learning systems such as ours are necessary for speed. It takes
several hours to train a network to recognize a few milliseconds of speech. [7] If we could
expand our system to the thousand-neuron level, it would be possible to learn simple
speech recognition in real time.
Because the chip uses Ohm's law to multiply, charge conservation to add, device physics
to create a threshold step, and a physical noise mechanism for random number
generation, we can present training patterns to this chip about 100,000 times faster than
the computer simulator. This factor, mostly due to the physical analog computation at
this small network size, will increase with the size of the system due to its inherently
parallel nature. It would also be possible to build fast special-purpose digital hardware to
perform the multiply-accumulate calculations and do fast compares in parallel. Such
hardware would take up considerably more silicon area but may be a good way to
integrate neural network calculations into existing computer systems. If we could build a
large VLSI learning system of, say, 10,000 neurons and 1,000,000 synapses, it would be
about a billion times faster than a simulator on a 1 MIP machine. Presumably, such a
system will be able to learn things beyond the capability of simulations even if they are
run on supercomputers. However, there are several challenges to building these systems.
An algorithmic problem divorced from implementation is the effect of scaling to large
size in highly connected networks. The learning time of such a system scales
exponentially with the size of the problem. [8] The traditional way of handling complexity
in large problems is to break them into smaller subpieces. An effective algorithm is yet
to be discovered for doing learning in the modular, hierarchical networks which would be
required to handle large problems.
Even from a technological viewpoint, modularity is necessary to manage the connectivity
in a typical multiple chip system. A highly connected system, even if it could be built,
would take too long to settle even considering the technology and parallel speedups
available. Constraints such as power dissipation, capacitive loading across chips, and
interchip communication are difficult to solve. If we succeed in these challenges, we will
have the problem of presenting data to the system at extremely high rates amounting to
several thousand (or more) bits every few microseconds. Biology solves these problems
in the visual system, for example, by highly parallel communication via the optic nerve.
It is unlikely that we will be able to use a million bit wide bus in our electronic system,
however.
Can one take the weights learned by a learning system and simply load them onto a much
simpler system with programmable rather than adaptive synapses? This is perhaps
possible for smaller systems where analog inaccuracies and defects can be controlled.
Modular networks provide a way of handling inaccuracies. However, for large analog
systems, adaptation mechanisms are needed to maintain accuracy. Even if the accuracy
were a few percent, a system of only a hundred neurons would be inaccurate across
chips. In biological systems, if one were to place the connection strengths found in brain
A onto the structures of brain B, the result would be chaos rather than a brain transplant
The robustness of neural systems depends on having the neurons and synapses adapt to
the particular environment they find themselves in. Nevertheless, some amount of hard-wiring is probably possible in modular systems if it is modifiable by a trainable portion of
the network. A speech recognition system may, for example, adapt in real time to the
accents and timbre of a particular speaker. It is also likely that the system would require
at least partial training beforehand for robustness.
We plan to design a larger version of our test chip containing both neurons and synapses
which can form part of a still larger multiple chip network with the addition of chips
containing only synapses. This next chip will have self-powered synapses so that each
neuron need only signal its state rather than drive an unknown number of neurons from
other chips. In addition, the noise generator will be improved so that true annealing is
possible. We may also go further toward a fully analog chip[2] by having a variable gain
neuron. Analog charge domain storage of weights and transport of states would further
reduce the silicon area necessary but the technology required is not standard.
There are many challenges in scaling learning networks up to the 10^4 neuron and 10^6
synapse range, although these large electronic learning networks will have on the order of
a billionfold speed advantage over simulations based on serial computers. Thus they may
be able to address many longstanding problems in artificial intelligence which have
resisted attack by more conventional methods.
References
1. J. Alspector & R.B. Allen, "A neuromorphic VLSI learning system", in Advanced
Research in VLSI: Proceedings of the 1987 Stanford Conference. edited by P.
Losleben (MIT Press, Cambridge, MA, 1987) pp. 313-349.
2. J. Alspector, R.B. Allen, V. Hu, & S. Satyanarayana, "Stochastic learning networks
and their electronic implementation", Neural Information Processing Systems
(Denver, Nov. 1987) pp. 9-21.
3. D.H. Ackley, G.E. Hinton, & T.J. Sejnowski, "A learning algorithm for Boltzmann
machines", Cognitive Science 9 (1985) pp. 147-169.
4. B. Widrow & M.E. Hoff, "Adaptive switching circuits". IRE WESCON Convention
Record Part 4, (1960) pp. 96-104.
5. F. Rosenblatt, Principles of neurodynamics: Perceptrons and the theory of brain
mechanisms. Spartan Books, Washington, D.C. (1961).
6. J. Alspector, "A VLSI approach to neural-style information processing", in VLSI
Signal Processing III. edited by R.W. Brodersen and H.S. Moscovitz (IEEE Press,
New York, 1988) pp. 232-243.
7. T.K. Landauer, C. Kamm, & S. Singhal, "Teaching a minimally structured backpropagation network to recognize speech sounds", Proceedings of the Cognitive
Science Society (Seattle, Aug. 1987) pp. 531-536.
8. G. Tesauro & B. Janssens, "Scaling relationships in back-propagation learning".
Complex Systems 2 (1988) pp. 39-44.
645 | 1,590 | Bayesian Modeling of Facial Similarity
Baback Moghaddam
Mitsubishi Electric Research Laboratory
201 Broadway
Cambridge, MA 02139, USA
baback@merl.com
Tony Jebara and Alex Pentland
Massachusetts Institute of Technology
20 Ames St.
Cambridge, MA 02139, USA
{jebara,sandy}@media.mit.edu
Abstract
In previous work [6, 9, 10], we advanced a new technique for direct
visual matching of images for the purposes of face recognition
and image retrieval , using a probabilistic measure of similarity
based primarily on a Bayesian (MAP) analysis of image differences, leading to a "dual" basis similar to eigenfaces [13]. The
performance advantage of this probabilistic matching technique
over standard Euclidean nearest-neighbor eigenface matching was
recently demonstrated using results from DARPA 's 1996 "FERET"
face recognition competition , in which this probabilistic matching
algorithm was found to be the top performer. We have further
developed a simple method of replacing the costly computation of
nonlinear (online) Bayesian similarity measures by the relatively
inexpensive computation of linear (offline) subspace projections
and simple (online) Euclidean norms, thus resulting in a significant
computational speed-up for implementation with very large image
databases as typically encountered in real-world applications.
1 Introduction
Current approaches to image matching for visual object recognition and image
database retrieval often make use of simple image similarity metrics such as
Euclidean distance or normalized correlation, which correspond to a template-matching approach to recognition [2, 5]. For example, in its simplest form, the
similarity measure S(I1, I2) between two images I1 and I2 can be set to be inversely
proportional to the norm ||I1 - I2||. Such a simple formulation suffers from a major
drawback: it does not exploit knowledge of which types of variation are critical
(as opposed to incidental) in expressing similarity. In this paper, we formulate a
probabilistic similarity measure which is based on the probability that the image
intensity differences, denoted by Δ = I1 - I2, are characteristic of typical variations
in appearance of the same object. For example, for purposes of face recognition,
we can define two classes of facial image variations: intrapersonal variations Ω_I
(corresponding, for example, to different facial expressions of the same individual)
and extrapersonal variations Ω_E (corresponding to variations between different
individuals). Our similarity measure is then expressed in terms of the probability

    S(I1, I2) = P(Ω_I | Δ)                                                        (1)

where P(Ω_I | Δ) is the a posteriori probability given by Bayes rule, using estimates
of the likelihoods P(Δ | Ω_I) and P(Δ | Ω_E). The likelihoods are derived from training
data [7, 8]. This Bayesian (MAP) approach can also be viewed as a generalized
nonlinear extension of Linear Discriminant Analysis (LDA) [12, 3] or "Fisher Face"
techniques [1] for face recognition. Moreover, our nonlinear generalization has
distinct computational/storage advantages over some of these linear methods for
large databases.
2 Difference Density Modeling
Consider the problem of characterizing the type of intensity differences which
occur when matching two images in a face recognition task. We have two classes
(intrapersonal Ω_I and extrapersonal Ω_E) which we will assume form Gaussian
distributions whose likelihoods can be estimated as P(Δ | Ω_I) and P(Δ | Ω_E) for a
given intensity difference Δ = I1 - I2.
Given these likelihoods we can evaluate a similarity score S(I1, I2) between a pair
of images directly in terms of the intrapersonal a posteriori probability as given by
Bayes rule:

    S = P(Ω_I | Δ) = P(Δ | Ω_I) P(Ω_I) / [P(Δ | Ω_I) P(Ω_I) + P(Δ | Ω_E) P(Ω_E)]    (2)
where the priors P(Ω) can be set to reflect specific operating conditions (e.g.,
number of test images vs. the size of the database) or other sources of a priori
knowledge regarding the two images being matched. Additionally, this particular
Bayesian formulation casts the standard face recognition task (essentially an M-ary
classification problem for M individuals) into a binary pattern classification problem
with Ω_I and Ω_E. This much simpler problem is then solved using the maximum
a posteriori (MAP) rule - i.e., two images are determined to belong to the same
individual if P(Ω_I | Δ) > P(Ω_E | Δ), or equivalently, if S(I1, I2) > 1/2.
To deal with the high-dimensionality of Δ, we make use of the efficient density
estimation method proposed by Moghaddam & Pentland [7, 8] which divides
the vector space R^N into two complementary subspaces using an eigenspace
decomposition. This method relies on a Principal Components Analysis (PCA)
[4] to form a low-dimensional estimate of the complete likelihood which can be
evaluated using only the first M principal components, where M << N.
3 Efficient Similarity Computation
Consider now a feature space of Δ vectors, the differences between two images
(Ij and Ik). The two classes of interest in this space correspond to intrapersonal
and extrapersonal variations and each is modeled as a high-dimensional Gaussian
density as in Equation 3. The densities are zero-mean since for each Δ = Ij - Ik
there exists a Δ = Ik - Ij.

    P(Δ | Ω) = exp(-Δ^T Σ^{-1} Δ / 2) / [(2π)^{D/2} |Σ|^{1/2}]                      (3)
By PCA, the Gaussians are known to only occupy a subspace of image space (face-space)
and thus, only the top few eigenvectors of the Gaussian densities are relevant
for modeling. These densities are used to evaluate the similarity score in Equation 2.
Computing the similarity score involves first subtracting a candidate image Ij from a
database entry Ik. The resulting Δ image is then projected onto the eigenvectors of
The exponentials are computed, normalized and then combined as in Equation 2.
This operation is iterated over all members of the database (many Ik images) until
the maximum score is found (i.e. the match). Thus, for large databases, this
evaluation is expensive but can be simplified by offline transformations.
To compute the likelihoods P(Δ | Ω_I) and P(Δ | Ω_E) we pre-process the Ik images
with whitening transformations. Each image is converted and stored as whitened
subspace coefficients; i for intrapersonal space and e for extrapersonal space (see
Equation 4). Here, Λ and V are matrices of the largest eigenvalues and eigenvectors
of Σ_E or Σ_I. Typically, we have used M_I = 100 and M_E = 100 for Ω_I and Ω_E,
respectively.

    i = Λ_I^{-1/2} V_I^T Δ,    e = Λ_E^{-1/2} V_E^T Δ                               (4)
After this pre-processing, evaluating the Gaussians can be reduced to simple
Euclidean distances as in Equation 5. Denominators are of course pre-computed.
These likelihoods are evaluated and used to compute the MAP similarity S in
Equation 2. Euclidean distances are computed between the 100-dimensional i
vectors as well as the 100-dimensional e vectors. Thus, roughly 2 x (M_E + M_I) =
400 arithmetic operations are required for each similarity computation, avoiding
repeated image differencing and projections.

    P(Δ | Ω_E) = exp(-||e_k||^2 / 2) / [(2π)^{D/2} |Σ_E|^{1/2}],
    P(Δ | Ω_I) = exp(-||i_k||^2 / 2) / [(2π)^{D/2} |Σ_I|^{1/2}]                     (5)

The ML similarity matching is even simpler since only the intrapersonal class is
evaluated, leading to the following modified form for the similarity measure

    S' = P(Δ | Ω_I)                                                                 (6)
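The offline/online split of Equations 4-6 can be sketched in a few lines of Python;
the toy data, dimensionalities, and function names below are our own stand-ins, not
the authors' code.

```python
import numpy as np

def whitening_basis(deltas, m):
    """Offline (Eq. 4): top-m whitened eigenbasis Lambda^{-1/2} V of the
    sample covariance of difference images `deltas` (rows = differences)."""
    cov = deltas.T @ deltas / len(deltas)
    evals, evecs = np.linalg.eigh(cov)
    top = np.argsort(evals)[::-1][:m]
    return evecs[:, top] / np.sqrt(evals[top])

def ml_similarity(img1, img2, w_intra):
    """Online (Eqs. 5-6): ML similarity S' up to the pre-computed
    normalizer -- just a Euclidean norm in whitened intrapersonal coords."""
    i_coef = w_intra.T @ (img1 - img2)
    return float(np.exp(-0.5 * (i_coef @ i_coef)))

# Toy usage with random vectors standing in for aligned face images; the
# MAP score of Eq. 2 would combine two such likelihoods with the priors.
rng = np.random.default_rng(0)
train_deltas = rng.standard_normal((500, 256))   # 500 training differences
W_I = whitening_basis(train_deltas, m=100)
s = ml_similarity(rng.standard_normal(256), rng.standard_normal(256), W_I)
```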
[Figure 1 image panels (a) and (b); Figure 2 system diagram.]
Figure 1: Examples of FERET frontal-view image pairs used for (a) the Gallery set
(training) and (b) the Probe set (testing).
Figure 2: Face alignment system [7].
4 Experimental Results
To test our recognition strategy we used a collection of images from the ARPA
FERET face database. The set of images consists of pairs of frontal-views (FA/FB)
and are divided into two subsets: the "gallery" (training set) and the "probes"
(testing set). The gallery images consisted of 74 pairs of images (2 per individual)
and the probe set consisted of 38 pairs of images, corresponding to a subset of the
gallery members. The probe and gallery datasets were captured a week apart and
exhibit differences in clothing, hair and lighting (see Figure 1).
Each of these images were affine normalized with a canonical model using an
automatic face-processing system which normalizes for translation, scale as well
as slight rotations (both in-plane and out-of-plane). This system is described in
detail in [7, 8] and uses maximum-likelihood estimation of object location (in this
case the position and scale of a face and the location of individual facial features)
to geometrically align faces into standard normalized form as shown in Figure 2.
All the faces in our experiments were geometrically aligned and normalized in this
manner prior to further analysis.
4.1
Eigenface Matching
As a baseline comparison, we first used an eigenface matching technique for
recognition [13]. The normalized images from the gallery and the probe sets were
projected onto a lOO-dimensional eigenspace similar to that shown in Figure 3 and
a nearest-neighbor rule based on a Euclidean distance measure was used to match
each probe image to a gallery image. We note that this method corresponds to
a generalized template-matching method which uses a Euclidean norm measure of
similarity which is, however, restricted to the principal subspace of the data. The
rank-1 recognition rate obtained with this method was found to be 84%.

[Figure 3 and Figure 4 image panels.]
Figure 3: Standard Eigenfaces.
Figure 4: "Dual" Eigenfaces: (a) Intrapersonal, (b) Extrapersonal.
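For comparison with the probabilistic measure, the eigenface baseline amounts to a
nearest-neighbor search in the projected space; a minimal sketch (array shapes and
names are our own assumptions):

```python
import numpy as np

def eigenface_match(probe, gallery, basis, mean):
    """Nearest-neighbor eigenface matching: project the probe and all
    gallery images onto the leading eigenvectors (columns of `basis`)
    and return the index of the closest gallery image in Euclidean
    distance. `gallery` has one flattened image per row."""
    p = basis.T @ (probe - mean)
    g = (gallery - mean) @ basis
    return int(np.argmin(np.linalg.norm(g - p, axis=1)))
```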
4.2 Bayesian Matching
For our probabilistic algorithm, we first gathered training data by computing
the intensity differences for a training subset of 74 intrapersonal differences (by
matching the two views of every individual in the gallery) and a random subset
of 296 extrapersonal differences (by matching images of different individuals in the
gallery), corresponding to the classes fh and OE, respectively, and performing a
separate PCA analysis on each.
We note that the two mutually exclusive classes Of and OE correspond to a
"dual" set of eigenfaces as shown in Figure 4. Note that the intrapersonal
variations shown in Figure 4-(a) represent subtle variations due mostly to expression
changes (and lighting) whereas the extrapersonal variations in Figure 4-(b) are more
representative of general eigenfaces which code variations such as hair color, facial
hair and glasses. These extrapersonal eigenfaces are qualitatively similar to the
standard normalized intensity eigenfaces shown in Figure 3.
We next computed the likelihood estimates P(Δ | Ω_I) and P(Δ | Ω_E) using the
PCA-based method [7, 8], using subspace dimensions of M_I = 10 and M_E = 30 for
Ω_I and Ω_E, respectively. These density estimates were then used with a default
setting of equal priors, P(Ω_I) = P(Ω_E), to evaluate the a posteriori intrapersonal
probability P(Ω_I | Δ) for matching probe images to those in the gallery. Therefore,
for each probe image we computed probe-to-gallery differences and sorted the matching
order, this time using the a posteriori probability P(Ω_I | Δ) as the similarity measure.
This probabilistic ranking yielded an improved rank-1 recognition rate of 90%.
[Figure 5 plot: cumulative match score vs. rank for the competing algorithms
(MIT Sep 96, MIT Mar 95, UMD, Excalibur, Rutgers, ARL EF), with annotations
giving the probe and gallery set sizes.]
Figure 5: Cumulative recognition rates for frontal FA/FB views for the competing
algorithms in the FERET 1996 test. The top curve (labeled "MIT Sep 96") corresponds to
our Bayesian matching technique. Note that in second place is standard eigenface matching
(labeled "MIT Mar 95").
4.3 The 1996 FERET Competition
Our Bayesian approach to recognition has yielded even more significant improvement over simple eigenface techniques with very large face databases. The
probabilistic similarity measure was tested in the September 1996 ARPA FERET
face recognition competition and yielded a surprising 95% recognition accuracy (on
nearly 1200 individuals) making it the top-performing system by a typical margin
of 10-20% over the other competing algorithms [11] (see Figure 5). A comparison
between standard eigenfaces and the Bayesian method from this test shows a 10%
gain in performance afforded by the new similarity measure. Thus we note that, in
this particular case, the probabilistic similarity measure has effectively halved the
error rate of eigenface matching .
Note that we can also use the simplified similarity measure based on the intrapersonal eigenfaces for a maximum likelihood (ML) matching technique using

    S' = P(Δ | Ω_I)                                                                 (7)

instead of the maximum a posteriori (MAP) approach defined by Equation 2.
Although this simplified measure has not been officially FERET tested, our own
internal experiments with a database of size 2000 have shown that using S' instead
of S results in only a minor (2-3%) deficit in the recognition rate while at the same
time cutting the computational cost by a further factor of 2.
5 Conclusions
The performance advantage of our probabilistic matching technique has been
demonstrated using both a small database (internally tested) as well as a large
(800+) database with an independent double-blind test as part of ARPA's September 1996 "FERET" competition, in which Bayesian similarity out-performed all
competing algorithms (at least one of which was using an LDA/Fisher type method).
We believe that these results clearly demonstrate the superior performance of
probabilistic matching over eigenface, LDA/Fisher and other existing techniques.
The results obtained with the simplified ML similarity measure (S' in Eq. 7)
suggest a computationally equivalent yet superior alternative to standard eigenface
matching. In other words, a likelihood similarity based on the intrapersonal density
P(Δ | Ω_I) alone is far superior to nearest-neighbor matching in eigenspace while
essentially requiring the same number of projections. For completeness (and a
slightly better performance) however, one should use the a posteriori similarity S
in Eq. 2, at twice the computational cost of standard eigenfaces.
This probabilistic framework is particularly advantageous in that the intra/extra
density estimates explicitly characterize the type of appearance variations which
are critical in formulating a meaningful measure of similarity. For example, the
deformations corresponding to facial expression changes (which may have high
image-difference norms) are, in fact, irrelevant when the measure of similarity is to
be based on identity. The subspace density estimation method used for representing
these classes thus corresponds to a learning method for discovering the principal
modes of variation important to the classification task.
References
[1] P.N. Belhumeur, J.P. Hespanha, and D.J. Kriegman. Eigenfaces vs. fisherfaces:
Recognition using class specific linear projection . IEEE Transactions on Pattern
Analysis and Machine Intelligence, PAMI-19(7):711-720, July 1997.
[2] R. Brunelli and T. Poggio. Face recognition : Features vs. templates. IEEE
Transactions on Pattern Analysis and Machine Intelligence, 15(10), October 1993.
[3] K. Etemad and R. Chellappa. Discriminant analysis for recognition of human faces .
In Proc. of Int 'l Conf. on Acoustics, Speech and Signal Processing, pages 2148-2151,
1996.
[4] I.T. Jolliffe. Principal Component Analysis. Springer-Verlag, New York, 1986.
[5] M. J. Jones and T. Poggio.
Model-based matching by linear combination of
prototypes. AI Memo No. 1583, Artificial Intelligence Laboratory, Massachusetts
Institute of Technology, November 1996.
[6] B. Moghaddam, C. Nastar, and A. Pentland. Bayesian face recognition using
deformable intensity differences. In Proc. of IEEE Conf. on Computer Vision and
Pattern Recognition, June 1996.
[7] B. Moghaddam and A. Pentland. Probabilistic visual learning for object detection.
In IEEE Proceedings of the Fifth International Conference on Computer Vision
(ICCV '95), Cambridge, USA , June 1995.
[8] B. Moghaddam and A. Pentland. Probabilistic visual learning for object representation. IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI19(7):696-710, July 1997.
[9] B. Moghaddam, W. Wahid, and Alex Pentland. Beyond eigenfaces: Probabilistic
matching for face recognition. In Proc. of Int'l Conf. on Automatic Face and Gesture
Recognition, pages 30-35, Nara, Japan, April 1998.
[10] C. Nastar, B. Moghaddam, and A. Pentland. Generalized image matching: Statistical
learning of physically-based deformations. In Proceedings of the Fourth European
Conference on Computer Vision (ECCV'96) , Cambridge, UK, April 1996.
[11] P. J. Phillips, H. Moon, P. Rauss , and S. Rizvi. The FERET evaluation methodology
for face-recognition algorithms. In IEEE Proceedings of Computer Vision and Pattern
Recognition, pages 137-143, June 1997.
[12] D. Swets and J. Weng. Using discriminant eigenfeatures for image retrieval. IEEE
Transactions on Pattern Analysis and Machine Intelligence, PAMI-18(8):831-836 ,
August 1996.
[13] M. Turk and A. Pentland. Eigenfaces for recognition. Journal of Cognitive
Neuroscience, 3(1), 1991.
646 | 1,591 | USING COLLECTIVE INTELLIGENCE
TO ROUTE INTERNET TRAFFIC
David H. Wolpert
NASA Ames Research Center
Moffett Field, CA 94035
dhw@ptolemy.arc.nasa.gov
Kagan Tumer
NASA Ames Research Center
Moffett Field, CA 94035
kagan@ptolemy.arc.nasa.gov
Jeremy Frank
NASA Ames Research Center
Moffett Field, CA 94035
frank@ptolemy.arc.nasa.gov
Abstract
A COllective INtelligence (COIN) is a set of interacting reinforcement learning (RL) algorithms designed in an automated fashion
so that their collective behavior optimizes a global utility function.
We summarize the theory of COINs, then present experiments using that theory to design COINs to control internet traffic routing.
These experiments indicate that COINs outperform all previously
investigated RL-based, shortest path routing algorithms.
1 INTRODUCTION
COllective INtelligences (COINs) are large, sparsely connected recurrent neural
networks, whose "neurons" are reinforcement learning (RL) algorithms. The distinguishing feature of COINs is that their dynamics involves no centralized control,
but only the collective effects of the individual neurons each modifying their behavior via their individual RL algorithms. This restriction holds even though the
goal of the COIN concerns the system's global behavior. One naturally-occurring
COIN is a human economy, where the "neurons" consist of individual humans trying to maximize their reward, and the "goal", for example, can be viewed as having
the overall system achieve high gross domestic product. This paper presents a
preliminary investigation of designing and using artificial COINs as controllers of
distributed systems. The domain we consider is routing of internet traffic.
The design of a COIN starts with a global utility function specifying the desired
global behavior. Our task is to initialize and then update the neurons' "local" utility
functions, without centralized control, so that as the neurons improve their utilities,
global utility also improves. (We may also wish to update the local topology of the
COIN.) In particular, we need to ensure that the neurons do not "frustrate" each
other as they attempt to increase their utilities. The RL algorithms at each neuron
that aim to optimize that neuron's local utility are microlearners. The learning
algorithms that update the neuron's utility functions are macrolearners.
For robustness and breadth of applicability, we assume essentially no knowledge concerning the dynamics of the full system, i.e., the macrolearning and/or microlearning
must "learn" that dynamics, implicitly or otherwise. This rules out any approach
that models the full system. It also means that rather than use domain knowledge
to hand-craft the local utilities as is done in multi-agent systems, in COINs the
local utility functions must be automatically initialized and updated using only the
provided global utility and (locally) observed dynamics.
The problem of designing a COIN has never previously been addressed in full hence the need for the new formalism described below. Nonetheless, this problem is related to previous work in many fields: distributed artificial intelligence,
multi-agent systems, computational ecologies, adaptive control, game theory [6],
computational markets [2], Markov decision theory, and ant-based optimization.
For the particular problem of routing, examples of relevant work include [4, 5, 8, 9,
10]. Most of that previous work uses microlearning to set the internal parameters
of routers running conventional shortest path algorithms (SPAs). However the microlearning occurs, they do not address the problem of ensuring that the associated
local utilities do not cause the microlearners to work at cross purposes.
This paper concentrates on COIN-based setting of local utilities rather than
macrolearning. We used simulations to compare three algorithms. The first two
are an SPA and a COIN. Both had "full knowledge" (FK) of the true rewardmaximizing path, with reward being the routing time of the associated router's
packets for the SPAs, but set by COIN theory for the COINs. The third algorithm
was a COIN using a memory-based (MB) microlearner [1] whose knowledge was
limited to local observations.
The performance of the FK COIN was the theoretical optimum. The performance
of the FK SPA was 12.5 ± 3% worse than optimum. Despite limited knowledge,
the MB COIN outperformed the FK SPA, achieving performance 36 ± 8% closer
to optimum. Note that the performance of the FK SPA is an upper bound on the
performance of any RL-based SPA. Accordingly, the performance of the MB COIN
is at least 36% superior to that of any RL-based SPA.
Section 2 below presents a cursory overview of the mathematics behind COINs.
Section 3 discusses how the network routing problem is mapped into the COIN
formalism, and introduces our experiments. Section 4 presents results of those
experiments, which establish the power of COINs in the context of routing problems.
Finally, Section 5 presents conclusions and summarizes future research directions.
2 MATHEMATICS OF COINS
The mathematical framework for COINs is quite extensive [11, 12]. This paper
concentrates on four of the concepts from that framework: subworlds, factored
systems, constraint-alignment, and the wonderful-life utility function.
We consider the state of the system across a set of discrete time steps, t ∈ {0, 1, ...}.
All characteristics of a neuron at time t - including its internal parameters at that
time as well as its externally visible actions - are encapsulated in a real-valued
vector ζ_η,t. We call this the "state" of neuron η at time t, and let ζ be the state
of all neurons across all time. World utility, G(ζ), is a function of the state of all
neurons across all time, potentially not expressible as a discounted sum.
A subworld is a set of neurons. All neurons in the same subworld w share the same
subworld utility function g_w(ζ). So when each subworld is a set of neurons that have
the most effect on each other, neurons are unlikely to work at cross-purposes - all
neurons that affect each other substantially share the same local utility.
Associated with subworlds is the concept of a (perfectly) constraint-aligned system.
In such systems any change to the neurons in subworld w at time 0 will have no
effects on the neurons outside of w at times later than 0. Intuitively, a system
is constraint-aligned if the neurons in separate subworlds do not affect each other
directly, so that the rationale behind the use of subworlds holds.
A subworld-factored system is one where for each subworld w considered by itself, a
change at time 0 to the states of the neurons in that subworld results in an increased
value for g_w(ζ) if and only if it results in an increased value for G(ζ). For a
subworld-factored system, the side effects on the rest of the system of w's increasing its own
world utility. For these systems, the separate subworlds successfully pursuing their
separate goals do not frustrate each other as far as world utility is concerned.
The desideratum of subworld-factored is carefully crafted. In particular, it does not
concern changes in the value of the utility of subworlds other than the one changing
its actions. Nor does it concern changes to the states of neurons in more than
one subworld at once. Indeed, consider the following alternative desideratum: any
change to the t = 0 state of the entire system that improves all subworld utilities
simultaneously also improves world utility. Reasonable as it may appear, one can
construct examples of systems that obey this desideratum and yet quickly evolve
to a minimum of world utility [12].
It can be proven that for a subworld-factored system, when each of the neurons'
reinforcement learning algorithms are performing as well as they can, given each
others' behavior, world utility is at a critical point. Correct global behavior corresponds to learners reaching a (Nash) equilibrium [8, 13]. There can be no tragedy
of the commons for a subworld-factored system [7, 11, 12].
Let CL_w(ζ) be defined as the vector ζ modified by clamping the states of all neurons
in subworld w, across all time, to an arbitrary fixed value, here taken to be 0. The
wonderful life subworld utility (WLU) is:

    g_w(ζ) = G(ζ) - G(CL_w(ζ))                                                      (1)
When the system is constraint-aligned, so that, loosely speaking, subworld w's "absence" would not affect the rest of the system, we can view the WLU as analogous
to the change in world utility that would have arisen if subworld w "had never existed". (Hence the name of this utility - cf. the Frank Capra movie.) Note however,
that CL is a purely mathematical operation. Indeed, no assumption is even being
made that CLw (() is consistent with the dynamics of the system. The sequence of
states the neurons in w are clamped to in the definition of the WLU need not be
consistent with the dynamical laws of the system.
This dynamics-independence is a crucial strength of the WLU. It means that to
evaluate the WLU we do not try to infer how the system would have evolved if all
neurons in w were set to 0 at time 0 and the system evolved from there. So long as
we know ζ extending over all time, and so long as we know G, we know the value
of the WLU. This is true even if we know nothing of the dynamics of the system.
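Since evaluating the WLU needs only the recorded state and G itself, it can be
computed as below; the array layout and toy world utility are our own assumptions.

```python
import numpy as np

def wonderful_life_utility(zeta, world_utility, subworld, clamp_value=0.0):
    """WLU of Eq. 1: world utility of the actual joint state minus world
    utility of the same state with the subworld's neurons clamped.
    `zeta` is an array of shape (n_neurons, n_timesteps); `subworld` is a
    list of neuron indices. No model of the system's dynamics is needed --
    clamping is a purely mathematical operation on the recorded state."""
    clamped = zeta.copy()
    clamped[subworld, :] = clamp_value
    return world_utility(zeta) - world_utility(clamped)

# Toy usage: world utility is (minus) the total squared load across neurons.
G = lambda z: -np.sum(z ** 2)
zeta = np.arange(12.0).reshape(4, 3)      # 4 neurons, 3 timesteps
print(wonderful_life_utility(zeta, G, subworld=[1, 3]))
```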
In addition to assuring the correct equilibrium behavior, there exist many other
theoretical advantages to having a system be subworld-factored. In particular, the
experiments in this paper revolve around the following fact: a constraint-aligned
system with wonderful life subworld utilities is subworld-factored. Combining this
with our previous result that subworld-factored systems are at Nash equilibrium at
critical points of world utility, this result leads us to expect that a constraint-aligned
system using WL utilities in the microlearning will approach near-optimal values
of the world utility. No such assurances accrue to WL utilities if the system is not
constraint-aligned however. Accordingly our experiments constitute an investigation of how well a particular system performs when WL utilities are used but little
attention is paid to ensuring that the system is constraint-aligned.
3 COINS FOR NETWORK ROUTING
In our experiments we concentrated on the two networks in Figure 1, both slightly
larger than those in [9]. To facilitate the analysis, traffic originated only at routers
indicated with white boxes and had only the routers indicated by dark boxes as
ultimate destinations. Note that in both networks there is a bottleneck at router 2.
[Figure 1 diagrams: (a) Network A, (b) Network B.]
Figure 1: Network Architectures.
As is standard in much of traffic network analysis [3], at any time all traffic at
a router is a real-valued number together with an ultimate destination tag. At
each timestep, each router sums all traffic received from upstream routers in this
timestep, to get a load. The router then decides which downstream router to send
its load to, and the cycle repeats.
A running average is kept of the total value of each router's load over a window of the
previous L timesteps. This average is run through a load-to-delay function, W(x),
to get the summed delay accrued at this timestep by all those packets traversing
this router at this timestep. Different routers had different W(x), to reflect the fact
that real networks have differences in router software and hardware (response time,
queue length, processing speed, etc.). In our experiments W(x) = x^3 for routers 1
and 3, and W(x) = log(x + 1) for router 2, for both networks. The global goal is
to minimize total delay encountered by all traffic.
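A minimal sketch of one router's timestep under this model (the window length,
data structures, and helper names are our own assumptions):

```python
import numpy as np
from collections import deque

def router_step(inbound_loads, window, W, L=50):
    """One router timestep, as described above: sum inbound traffic into a
    load, update the running L-step average, and pass the average through
    the load-to-delay function W. Downstream routing is chosen elsewhere."""
    load = sum(inbound_loads)
    window.append(load)
    if len(window) > L:
        window.popleft()
    avg_load = sum(window) / len(window)
    return W(avg_load)

# Per-router load-to-delay functions used in the experiments.
W_cubic = lambda x: x ** 3            # routers 1 and 3
W_log = lambda x: np.log(x + 1.0)     # bottleneck router 2

window = deque()
delay = router_step([0.6, 0.4], window, W_cubic)
```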
In terms of the COIN formalism, we identified the neurons η as individual pairs of
routers and ultimate destinations. So ζ_η,t was the vector of traffic sent along all
links exiting η's router, tagged for η's ultimate destination, at time t. Each subworld
consisted of the set of all neurons that shared a particular ultimate destination.
In the SPA each node η tries to set ζ_η,t to minimize the sum of the delays to
be accrued by that traffic on the way to its ultimate destination. In contrast, in
a COIN η tries to set ζ_η,t to optimize g_w for the subworld w containing η. For
both algorithms, "full knowledge" means that at time t all of the routers know the
window-averaged loads for all routers for time t - 1, and assume that those values
will be the same at t. For large enough L, this assumption will be arbitrarily good,
and therefore will allow the routers to make arbitrarily accurate estimates of how
best to route their traffic, according to their respective routing criteria.
In contrast, having limited knowledge, the MB COIN could only predict the WLU
value resulting from each routing decision. More precisely, for each router-ultimatedestination pair, the associated microlearner estimates the map from traffic on all
outgoing links (the inputs) to WLU-based reward (the outputs - see below). This
was done with a single-nearest-neighbor algorithm. Next, each router could send
the packets along the path that results in outbound traffic with the best (estimated)
reward. However to be conservative, in these experiments we instead had the router
randomly select between that path and the path selected by the FK SPA.
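The memory-based microlearner and the conservative path choice can be sketched as
follows; the class layout and the candidate representation are our own assumptions,
not the authors' implementation.

```python
import numpy as np

class NearestNeighborMicrolearner:
    """Single-nearest-neighbor map from outbound-traffic vectors to
    WLU-based rewards, one instance per router-destination pair."""
    def __init__(self):
        self.inputs, self.rewards = [], []

    def record(self, traffic_vec, reward):
        self.inputs.append(np.asarray(traffic_vec, dtype=float))
        self.rewards.append(float(reward))

    def predict(self, traffic_vec):
        if not self.inputs:
            return 0.0
        d = [np.linalg.norm(x - traffic_vec) for x in self.inputs]
        return self.rewards[int(np.argmin(d))]

def choose_path(candidates, learner, spa_choice, rng):
    """Conservatively pick between the reward-maximizing candidate and the
    FK SPA's choice, with equal probability, as in the experiments. Each
    candidate is a dict holding the outbound traffic it would produce."""
    best = max(candidates, key=lambda c: learner.predict(c["traffic"]))
    return best if rng.random() < 0.5 else spa_choice
```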
The load at router r at time t is determined by ζ. Accordingly, we can encapsulate the load-to-delay functions at the nodes by writing the delay at node r
at time t as W_{r,t}(ζ). In our experiments world utility was the total delay, i.e.,
G(ζ) = Σ_{r,t} W_{r,t}(ζ). So using the WLU, g_w(ζ) = Σ_{r,t} Δ_{w,r,t}(ζ), where Δ_{w,r,t}(ζ) =
[W_{r,t}(ζ) - W_{r,t}(CL_w(ζ))]. At each time t, the MB COIN used Σ_r Δ_{w,r,t}(ζ) as the
WLU-based reward signal for trying to optimize this full WLU.
In the MB COIN, evaluating this reward in a decentralized fashion was straightforward. All packets have a header containing a running sum of the ~'s encountered
in all the routers it has traversed so far. Each ultimate destination sums all such
headers it received and echoes that sum back to all routers that had routed to it.
In this way each neuron is apprised of the WLU-based reward of its subworld.
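The decentralized bookkeeping just described amounts to an accumulator in each
packet header plus an echo from the destination; a minimal sketch (names are our
own):

```python
class Packet:
    """Carries a running sum of the per-router Delta_{w,r,t} values it
    encounters on the way to its ultimate destination."""
    def __init__(self, destination):
        self.destination = destination
        self.delta_sum = 0.0

def stamp(packet, delta_at_router):
    """Called at each traversed router with that router's Delta value."""
    packet.delta_sum += delta_at_router

def echo_reward(arrived_packets):
    """At an ultimate destination: sum all headers; the total is echoed
    back to every router that routed toward this destination as the
    subworld's WLU-based reward."""
    return sum(p.delta_sum for p in arrived_packets)
```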
4 EXPERIMENTAL RESULTS
The networks discussed above were tested under light, medium and heavy traffic
loads. Table 1 shows the associated destinations (cf. fig. 1).
Table 1: Source-Destination Pairings for the Three Traffic Loads

Network   Source   Dest. (Light)   Dest. (Medium)   Dest. (Heavy)
A         4        6               6,7              6,7
A         5        7               7                6,7
B         4        7,8             7,8,9            6,7,8,9
B         5        6,9             6,7,9            6,7,8,9
In our experiments one new packet was fed to each source router at each time step.
Table 2 reports the average total delay (i.e., average per packet time to traverse the
total network) in each of the traffic regimes, for the shortest path algorithm with
full knowledge, the COIN with full knowledge, and the MB COIN. Each table entry
is based on 50 runs with a window size of 50, and the errors reported are errors
in the mean¹. All the entries in Table 2 are statistically different at the .05 level,
including FK SPA vs. MB COIN for Network A under light traffic conditions.
Table 2: Average Total Delay

Network   Load     FK SPA         FK COIN        MB COIN
A         light    0.53 ± .007    0.45 ± .001    0.50 ± .008
A         medium   1.26 ± .010    1.10 ± .001    1.21 ± .009
A         heavy    2.17 ± .012    1.93 ± .001    2.06 ± .010
B         light    2.13 ± .012    1.92 ± .001    2.05 ± .010
B         medium   4.37 ± .014    3.96 ± .001    4.19 ± .012
B         heavy    6.94 ± .015    6.35 ± .001    6.82 ± .024
Table 2 provides two important observations: First, the WLU-based COIN outperformed the SPA when both have full knowledge, thereby demonstrating the
superiority of the new routing strategy. By not having its routers greedily strive
for the shortest paths for their packets, the COIN settles into a more desirable
state that reduces the average total delay for all packets. Second, even when the
WLU is estimated through a memory-based learner (using only information available to the local routers), the performance of the COIN still surpasses that of the
FK SPA. This result not only establishes the feasibility of COIN-based routers, but
also demonstrates that for this task COINs will outperform any algorithm that can
only estimate the shortest path, since the performance of the FK SPA is a ceiling
on the performance of any such RL-based SPA.
Figure 2 shows how total delay varies with time for the medium traffic regime
(each plot is based on 50 runs). The "ringing" is an artifact caused by the starting
conditions and the window size (50). Note that for both networks the FK COIN not
only provides the shortest delays, but also settles into that solution very rapidly.
Figure 2: Total Delay. Average per-packet delay versus unit time steps (0–500) in the medium traffic regime, for (a) Network A and (b) Network B; each panel compares FK SPA, FK COIN, and MB COIN.
5 DISCUSSION
Many distributed computational tasks are naturally addressed as recurrent neural
networks of reinforcement learning algorithms (i.e., COINs). The difficulty in doing
so is ensuring that, despite the absence of centralized communication and control,
the reward functions of the separate neurons work in synchrony to foster good global
performance, rather than cause their associated neurons to work at cross-purposes.
The mathematical framework synopsized in this paper is a theoretical solution to
this difficulty. To assess its real-world applicability, we employed it to design a full-knowledge (FK) COIN as well as a memory-based (RL-based) COIN, for the task
of packet routing on a network. We compared the performance of those algorithms
to that of a FK shortest-path algorithm (SPA). Not only did the FK COIN beat the
FK SPA, but also the memory-based COIN, despite having only limited knowledge,
beat the full-knowledge SPA. This latter result is all the more remarkable in that
the performance of the FK SPA is an upper bound on the performance of previously
investigated RL-based routing schemes, which use the RL to try to provide accurate
knowledge to an SPA.
There are many directions for future work on COINs, even restricting attention
to the domain of packet routing. Within that particular domain, we are currently
extending our experiments to larger networks, using industrial event-driven network
simulators. Concurrently, we are investigating the use of macrolearning for COINbased packet-routing, i.e., the run-time modification of the neurons' utility functions
to improve the subworld-factoredness of the COIN.
PART IX: CONTROL, NAVIGATION AND PLANNING
Unsupervised Classification with Non-Gaussian Mixture Models using ICA
Te-Won Lee, Michael S. Lewicki and Terrence Sejnowski
Howard Hughes Medical Institute
Computational Neurobiology Laboratory
The Salk Institute
10010 N. Torrey Pines Road
La Jolla, California 92037, USA
{tewon,lewicki,terry}@salk.edu
Abstract
We present an unsupervised classification algorithm based on an
ICA mixture model. The ICA mixture model assumes that the
observed data can be categorized into several mutually exclusive
data classes in which the components in each class are generated
by a linear mixture of independent sources. The algorithm finds
the independent sources, the mixing matrix for each class and also
computes the class membership probability for each data point.
This approach extends the Gaussian mixture model so that the
classes can have non-Gaussian structure. We demonstrate that
this method can learn efficient codes to represent images of natural
scenes and text. The learned classes of basis functions yield a better
approximation of the underlying distributions of the data, and thus
can provide greater coding efficiency. We believe that this method
is well suited to modeling structure in high-dimensional data and
has many potential applications.
1 Introduction
Recently, Blind Source Separation (BSS) by Independent Component Analysis
(ICA) has shown promise in signal processing applications including speech enhancement systems, telecommunications and medical signal processing. ICA is a
technique for finding a linear non-orthogonal coordinate system in multivariate data.
The directions of the axes of this coordinate system are determined by the data's
second- and higher-order statistics. The goal of the ICA is to linearly transform the
data such that the transformed variables are as statistically independent from each
other as possible (Bell and Sejnowski, 1995; Cardoso and Laheld, 1996; Lee et al.,
1999a). ICA generalizes the technique of Principal Component Analysis (PCA)
and, like PCA, has proven a useful tool for finding structure in data.
One limitation of ICA is the assumption that the sources are independent. Here,
we present an approach for relaxing this assumption using mixture models. In a
mixture model (Duda and Hart, 1973), the observed data can be categorized into
several mutually exclusive classes. When the class variables are modeled as multivariate Gaussian densities, it is called a Gaussian mixture model. We generalize
the Gaussian mixture model by modeling each class with independent variables
(ICA mixture model). This allows modeling of classes with non-Gaussian (e.g.,
platykurtic or leptokurtic) structure. An algorithm for learning the parameters is
derived using the expectation maximization (EM) algorithm. In Lee et al. (1999c),
we demonstrated that this approach showed improved performance in data classification problems. Here, we apply the algorithm to learning efficient codes for
representing different types of images.
2
The ICA Mixture Model
We assume that the data were generated by a mixture density (Duda and Hart,
1973):
p(x|Θ) = Σ_{k=1}^{K} p(x|C_k, θ_k) p(C_k),   (1)

where Θ = (θ_1, ..., θ_K) are the unknown parameters for each p(x|C_k, θ_k), called
the component densities. We further assume that the number of classes, K, and
the a priori probability, p(C_k), for each class are known. In the case of a Gaussian
mixture model, p(x|C_k, θ_k) ∝ N(μ_k, Σ_k). Here we assume that the form of the
component densities is non-Gaussian and the data within each class are described
by an ICA model.
x_k = A_k s_k + b_k,   (2)
where A_k is an N × M scalar matrix (called the basis or mixing matrix) and b_k
is the bias vector for class k. The vector s_k is called the source vector (these
are also the coefficients for each basis vector). It is assumed that the individual
sources s_i within each class are mutually independent across a data ensemble. For
simplicity, we consider the case where A_k is full rank, i.e. the number of sources
(M) is equal to the number of mixtures (N). Figure 1 shows a simple example of
a dataset that can be described by an ICA mixture model. Each class was generated
from eq. 2 using a different A and b. Class (o) was generated by two uniformly
distributed sources, whereas class (+) was generated by two Laplacian distributed
sources (p(s) ∝ exp(−|s|)). The task is to model the unlabeled data points and
to determine the parameters for each class, A_k, b_k, and the probability of each
class p(C_k | x, θ_{1:K}) for each data point. A learning algorithm can be derived by an
expectation maximization approach (Ghahramani, 1994) and implemented in the
following steps:
• Compute the log-likelihood of the data for each class:

  log p(x|C_k, θ_k) = log p(s_k) − log(|det A_k|),   (3)

  where θ_k = {A_k, b_k, s_k}.

• Compute the probability for each class given the data vector x:

  p(C_k | x, θ_{1:K}) = p(x|θ_k, C_k) p(C_k) / Σ_{k'} p(x|θ_{k'}, C_{k'}) p(C_{k'}).   (4)
Figure 1: A simple example for classifying an ICA mixture model. There are
two classes (+) and (0); each class was generated by two independent variables,
two bias terms and two basis vectors. Class (0) was generated by two uniform
distributed sources as indicated next to the data class. Class (+) was generated by
two Laplacian distributed sources with a sharp peak at the bias and heavy tails.
The inset graphs show the distributions of the source variables, s_{i,k}, for each basis
vector.
• Adapt the basis functions A and the bias terms b for each class. The basis
  functions are adapted using gradient ascent:

  ΔA_k ∝ ∂/∂A_k log p(x|θ_{1:K}) = p(C_k|x, θ_{1:K}) ∂/∂A_k log p(x|C_k, θ_k).   (5)

  Note that this simply weights any standard ICA algorithm gradient by
  p(C_k|x, θ_{1:K}). The gradient can also be summed over multiple data points.
  The bias term is updated according to

  b_k = Σ_t x_t p(C_k|x_t, θ_{1:K}) / Σ_t p(C_k|x_t, θ_{1:K}),   (6)

  where t is the data index (t = 1, ..., T).
The three steps in the learning algorithm perform gradient ascent on the total
likelihood of the data in eq. 1.
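The sketch below collects the three steps into one batch update. It assumes uniform class priors p(C_k), uses the Laplacian-prior gradient of eq. 10 below for the basis update, and takes a fixed step size; it is an illustration of eqs. 3–6, not the authors' implementation (`log_prior` must return one value per data row):

```python
import numpy as np

def ica_mixture_step(X, A, b, log_prior, lr=0.01):
    """One pass of the three learning steps for the ICA mixture model.

    X: (T, N) data; A: list of K square mixing matrices; b: list of K bias
    vectors; log_prior(s) -> (T,) array of log p(s) values, row-wise."""
    T, N = X.shape
    K = len(A)
    loglik = np.zeros((T, K))
    S = []
    for k in range(K):
        W = np.linalg.inv(A[k])                      # filter matrix
        s = (X - b[k]) @ W.T                         # source estimates
        S.append(s)
        # eq. 3: log p(x|C_k) = log p(s_k) - log |det A_k|
        loglik[:, k] = log_prior(s) - np.log(abs(np.linalg.det(A[k])))
    # eq. 4 with uniform class priors p(C_k) (an assumption of this sketch)
    post = np.exp(loglik - loglik.max(axis=1, keepdims=True))
    post /= post.sum(axis=1, keepdims=True)
    for k in range(K):
        W = np.linalg.inv(A[k])
        u = S[k]
        grad = np.zeros_like(W)
        for t in range(T):
            # eq. 5: standard ICA gradient (Laplacian form, eq. 10),
            # weighted by the class posterior
            grad += post[t, k] * (np.eye(N) - np.outer(np.sign(u[t]), u[t])) @ W
        A[k] = np.linalg.inv(W + lr * grad / T)
        b[k] = (post[:, k] @ X) / post[:, k].sum()   # eq. 6
    return A, b, post
```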
The extended infomax ICA learning rule is able to blindly separate mixed sources
with sub- and super-Gaussian distributions. This is achieved by using a simple
type of learning rule first derived by Girolami (1998). The learning rule in Lee
et al. (1999b) uses the stability analysis of Cardoso and Laheld (1996) to switch
between sub- and super-Gaussian regimes. The learning rule expressed in terms of
W = A⁻¹, called the filter matrix, is:

ΔW ∝ [I − K tanh(u)u^T − uu^T] W,   (7)
where the k_i are elements of the N-dimensional diagonal matrix K and u = Wx. The
unmixed sources u are the source estimates ŝ (Bell and Sejnowski, 1995). The k_i's
are (Lee et al., 1999b)

k_i = sign(E[sech²(u_i)] E[u_i²] − E[u_i tanh(u_i)]).   (8)
The source distribution is super-Gaussian when k_i = 1 and sub-Gaussian when k_i =
−1. For the log-likelihood estimation in eq. 3 the term log p(s) can be approximated
as follows:

log p(s) ∝ −Σ_n (log cosh(s_n) + s_n²/2)   (super-Gaussian)
log p(s) ∝ +Σ_n (log cosh(s_n) − s_n²/2)   (sub-Gaussian)   (9)
Super-Gaussian densities are approximated by a density model with a heavier tail
than the Gaussian density; sub-Gaussian densities are approximated by a bimodal
density (Girolami, 1998). Although the source density approximation is crude, it
has been demonstrated to be sufficient for standard ICA problems (Lee
et al., 1999b). When learning sparse representations only, a Laplacian prior (p(s) ∝
exp(−|s|)) can be used for the weight update, which simplifies the infomax learning
rule to

ΔW ∝ [I − sign(u)u^T] W,   (10)

log p(s) ∝ −Σ_n |s_n|   (Laplacian prior)
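A compact sketch of the extended infomax update of eqs. 7–8 on a batch of zero-mean data; the step size and batching are our own illustrative choices:

```python
import numpy as np

def extended_infomax_step(W, x, lr=0.001):
    """One batch update of eq. 7, with the regime switch of eq. 8.

    x: (T, N) zero-mean data; W: current (N, N) filter matrix."""
    u = x @ W.T
    # eq. 8: k_i = sign(E[sech^2 u_i] E[u_i^2] - E[u_i tanh u_i])
    k = np.sign(np.mean(1.0 / np.cosh(u) ** 2, axis=0) * np.mean(u ** 2, axis=0)
                - np.mean(u * np.tanh(u), axis=0))
    T = x.shape[0]
    K_term = (np.tanh(u) * k).T @ u / T          # K tanh(u) u^T, batch average
    dW = (np.eye(W.shape[0]) - K_term - u.T @ u / T) @ W    # eq. 7
    return W + lr * dW
```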
3 Learning efficient codes for images
Recently, several approaches have been proposed to learn image codes that utilize a
set of linear basis functions. Olshausen and Field (1996) used a sparseness criterion
and found codes that were similar to localized and oriented receptive fields. Similar
results were presented by Bell and Sejnowski (1997) using the infomax algorithm
and by Lewicki and Olshausen (1998) using a Bayesian approach. By applying the
leA mixture model we present results which show a higher degree of flexibility in
encoding the images. We used images of natural scenes obtained from Olshausen
and Field (1996) and text images of scanned newspaper articles. The training
set consisted of 12 by 12 pixel patches selected randomly from both image types.
Figure 2 illustrates examples of those image patches. Two complete basis vectors
A_1 and A_2 were randomly initialized. Then, for each gradient in eq. 5 a stepsize
was computed as a function of the amplitude of the basis vectors and the number
of iterations. The algorithm converged after 100,000 iterations and learned two
classes of basis functions as shown in figure 3. Figure 3 (top) shows basis functions
corresponding to natural images. The basis functions show Gabor¹-like structure
as previously reported in (Olshausen and Field, 1996; Bell and Sejnowski, 1997;
Lewicki and Olshausen, 1998). However, figure 3 (bottom) shows basis functions
corresponding to text images. These basis functions resemble bars with different
lengths and widths that capture the high-frequency structure present in the text
images.

Figure 2: Example of natural scene and text image. The 12 by 12 pixel image
patches were randomly sampled from the images and used as inputs to the ICA
mixture model.

¹Gaussian-modulated sinusoidal.
3.1 Comparing coding efficiency
We have compared the coding efficiency between the ICA mixture model and similar
models using Shannon's theorem to obtain a lower bound on the number of bits
required to encode the pattern.
#bits ≥ −log₂ p(x|A) − N log₂(σ_x),   (11)
where N is the dimensionality of the input pattern x and σ_x is the coding precision
(standard deviation of the noise introduced by errors in encoding). Table 1 compares
the coding efficiency of five different methods. It shows the number of bits required
to encode three different test data sets (5000 image patches from natural scenes,
5000 image patches from text images and 5000 image patches from both image
types) using five different encoding methods (ICA mixture model, nature trained
ICA, text trained ICA, nature and text trained ICA, and PCA trained on all three
test sets). It is clear that ICA basis functions trained on natural scene images
exhibit the best encoding when only natural scenes are presented (column: nature).
The same applies to text images (column: text). Note that text training yields a
reasonable basis for both data sets but nature training gives a good basis only for
nature. The ICA mixture model shows the same encoding power for the individual
test data sets, and it gives the best encoding when both image types are present.
In this case, the encoding difference between the ICA mixture model and PCA is
significant (more than 20%). ICA mixtures yielded a small improvement over ICA
trained on both image types. We expect the size of the improvement to be greater
in situations where there are greater differences among the classes. An advantage
of the mixture model is that each image patch is automatically classified.
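As a sketch of how the bound of eq. 11 can be evaluated, assuming a caller-supplied source prior and a square, invertible basis (normalization details may differ from those used for Table 1):

```python
import numpy as np

def coding_cost_bits(X, A, log_prior, sigma_x):
    """Average per-pixel coding cost from eq. 11.

    X: (T, N) image patches; A: (N, N) basis matrix; log_prior(S) -> (T,)
    array of log p(s) values in nats, row-wise; sigma_x: coding precision."""
    T, N = X.shape
    S = X @ np.linalg.inv(A).T
    log_px = log_prior(S) - np.log(abs(np.linalg.det(A)))   # log p(x|A), nats
    bits = -log_px / np.log(2) - N * np.log2(sigma_x)       # eq. 11 per patch
    return bits.mean() / N                                   # bits per pixel
```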
4 Discussion
The new algorithm for unsupervised classification presented here is based on a
maximum likelihood mixture model using ICA to model the structure of the classes.
We have demonstrated here that the algorithm can learn efficient codes to represent
different image types such as natural scenes and text images. In this case, the
learned classes of basis functions show a 20% improvement over PCA encoding.
ICA mixture model should show better image compression rates than traditional
compression algorithm such as JPEG.
The ICA mixture model is a nonlinear model in which each class is modeled as a
linear process and the choice of class is modeled using probabilities. This model
Figure 3: (Left) Basis function class corresponding to natural images. (Right) Basis
function class corresponding to text images.
Table 1: Comparing coding efficiency

Training set and model       | Nature | Text | Nature and Text
ICA mixtures                 | 4.72   | 5.20 | 4.96
Nature-trained ICA           | 4.72   | 9.57 | 7.15
Text-trained ICA             | 5.19   | 5.00 | 5.10
Nature- and text-trained ICA | 4.83   | 5.29 | 5.07
PCA                          | 6.22   | 5.97 | 6.09

Coding efficiency (bits per pixel) of five methods is compared for three test sets.
Coding precision was set to 7 bits (Nature: σ_x = 0.016 and Text: σ_x = 0.029).
can therefore be seen as a nonlinear ICA model. Furthermore, it is one way of
relaxing the independence assumption over the whole data set. The ICA mixture
model is a conditional independence model, i.e., the independence assumption holds
only within each class and there may be dependencies among classes. A different
view of the ICA mixture model is to think of the classes as being an overcomplete
representation. Compared to the approach of Lewicki and Sejnowski (1998), the
main difference is that the basis functions learned here are mutually exclusive, i.e.
each class uses its own set of basis functions.
This method is similar to other approaches including the mixture density networks
by Bishop (1994) in which a neural network was used to find arbitrary density
functions. This algorithm reduces to the Gaussian mixture model when the source
priors are Gaussian. Purely Gaussian structure, however, is rare in real data sets.
Here we have used priors of the form of super-Gaussian and sub-Gaussian densities.
But these could be extended as proposed by Attias (1999). The proposed model was
used for learning a complete set of basis functions without additive noise. However,
the method can be extended to take into account additive Gaussian noise and an
overcomplete set of basis vectors (Lewicki and Sejnowski, 1998).
In (Lee et al., 1999c), we have performed several experiments on benchmark data
sets for classification problems. The results were comparable or improved over those
obtained by AutoClass (Stutz and Cheeseman, 1994) which uses a Gaussian mixture
514
T.-w. Lee, M. S. Lewicki and T. J. Sejnowski
model. Furthermore, we showed that the algorithm can be applied to blind source
separation in nonstationary environments. The method can switch automatically
between learned mixing matrices in different environments (Lee et al., 1999c). This
may prove to be useful in the automatic detection of sleep stages by observing EEG
signals. The method can identify these stages due to the changing source priors and
their mixing.
Potential applications of the proposed method include the problem of noise removal
and the problem of filling in missing pixels. We believe that this method provides
greater flexibility in modeling structure in high-dimensional data and has many
potential applications.
References
Attias, H. (1999) . Blind separation of noisy mixtures: An EM algorithm for independent factor analysis. Neural Computation, in press.
Bell, A. J. and Sejnowski, T . J. (1995). An Information-Maximization Approach to
Blind Separation and Blind Deconvolution. Neural Computation, 7:1129-1159.
Bell, A. J. and Sejnowski, T. J. (1997). The 'independent components' of natural
scenes are edge filters . Vision Research, 37(23):3327-3338.
Bishop, C. (1994). Mixture density networks. Technical Report, NCRG/4288.
Cardoso, J.-F. and Laheld, B. (1996) . Equivariant adaptive source separation. IEEE
Trans. on S.P., 45(2):434-444.
Duda, R. and Hart, P. (1973). Pattern classification and scene analysis. Wiley,
New York.
Ghahramani, Z. (1994). Solving inverse problems using an em approach to density
estimation. Proceedings of the 1993 Connectionist Models Summer School, pages
316-323.
Girolami, M. (1998). An alternative perspective on adaptive independent component analysis algorithms. Neural Computation, 10(8):2103-2114.
Lee, T .-W., Girolami, M., Bell, A. J., and Sejnowski, T. J. (1999a). A unifying
framework for independent component analysis. International Journal on Mathematical and Computer Models, in press.
Lee, T.-W., Girolami, M., and Sejnowski, T. J. (1999b). Independent component
analysis using an extended infomax algorithm for mixed sub-gaussian and supergaussian sources. Neural Computation, 11(2):409-433.
Lee, T.-W., Lewicki, M. S., and Sejnowski, T. J. (1999c). ICA mixture models
for unsupervised classification and automatic context switching. In International
Workshop on ICA, Aussois, in press.
Lewicki, M. and Olshausen, B. (1998). Inferring sparse, overcomplete image codes
using an efficient coding framework. In Advances in Neural Information Processing Systems 10, pages 556-562.
Lewicki, M. and Sejnowski, T. J. (1998). Learning nonlinear overcomplete representations for efficient coding. In Advances in Neural Information Processing Systems
10, pages 815-821.
Olshausen, B. and Field, D. (1996). Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381:607-609.
Stutz, J. and Cheeseman, P. (1994). Autoclass - a Bayesian approach to classification. Maximum Entropy and Bayesian Methods, Kluwer Academic Publishers.
A Randomized Algorithm for Pairwise Clustering
Yoram Gdalyahu, Daphna Weinshall, Michael Werman
Institute of Computer Science, The Hebrew University, 91904 Jerusalem, Israel
{yoram,daphna,werman}@cs.huji.ac.il
Abstract
We present a stochastic clustering algorithm based on pairwise similarity of datapoints. Our method extends existing deterministic
methods, including agglomerative algorithms, min-cut graph algorithms, and connected components. Thus it provides a common
framework for all these methods. Our graph-based method differs
from existing stochastic methods which are based on analogy to
physical systems. The stochastic nature of our method makes it
more robust against noise, including accidental edges and small
spurious clusters. We demonstrate the superiority of our algorithm
using an example with 3 spiraling bands and a lot of noise.
1 Introduction
Clustering algorithms can be divided into two categories: those that require a vectorial representation of the data, and those which use only pairwise representation.
In the former case, every data item must be represented as a vector in a real normed
space, while in the second case only pairwise relations of similarity or dissimilarity are used. The pairwise information can be represented by a weighted graph
G(V, E): the nodes V represent data items, and the positive weight Wij of an edge
(i, j) representing the amount of similarity or dissimilarity between items i and j.
The graph G might not be a complete graph. In the rest of this paper Wij represents
a similarity value.
A vectorial representation is very convenient when one has either an explicit or
an implicit parametric model for the data. An implicit model means that the
data distribution function is not known, but it is assumed, e.g., that every cluster
is symmetrically distributed around some center. An explicit model specifically
describes the shape of the distribution (e.g., Gaussian). In these cases, if a vectorial
representation is available, the clustering procedure may rely on iterative estimation
of means (e.g., [2, 8]).
In the absence of a vectorial representation, one can either try to embed the graph
of distances in a vector space, or use a direct pairwise clustering method. The
embedding problem is difficult, since it is desirable to use a representation that is
both low dimensional and has a low distortion of distances [6, 7, 3]. Moreover, even
if such embedding is achieved, it can help to cluster the data only if at least an
implicit parametric model is valid. Hence, direct methods for pairwise clustering
are of great value.
One strategy of pairwise clustering is to use a similarity threshold θ, remove edges
with weight less than θ, and identify the connected components that remain as
clusters. A transformation of weights may precede the thresholding.¹ The physically
motivated transformation in [1] uses a granular magnet model and replaces weights
by "spin correlations". Our algorithm is similar to this model, see Section 2.4.
A second pairwise clustering strategy is used by agglomerative algorithms [2], which
start with the trivial partition of N points into N clusters of size one, and continue
by subsequently merging pairs of clusters. At every step the two clusters which
are most similar are merged together, until the similarity of the closest clusters is
lower than some threshold. Different similarity measures between clusters distinguish between different agglomerative algorithms. In particular, the single linkage
algorithm defines the similarity between clusters as the maximal similarity between
two of their members , and the complete linkage algorithm uses the minimal value.
A third strategy of pairwise clustering uses the notion of cuts in a graph. A cut
(A, B) in a graph G(V, E) is a partition of V into two disjoint sets A and B. The
capacity of the cut is the sum of weights of all edges that cross the cut, namely:
c(A, B) = Σ_{i∈A, j∈B} w_ij. Among all the cuts that separate two marked vertices,
the minimal cut is the one which has minimal capacity. The minimal cut clustering
algorithm [11] divides the graph into components using a cascade of minimal cuts.²
The normalized cut algorithm [9] uses the association of A (sum of weights incident
on A) and the association of B to normalize the capacity c(A, B). In contrast with
the easy min-cut problem, the problem of finding a minimal normalized cut (Ncut)
is NP-hard, but with certain approximations it reduces to a generalized eigenvalue
problem [9].

¹For example, the mutual neighborhood clustering algorithm [10] substitutes the edge weight w_ij with a new weight w'_ij = m + n, where i is the m-th nearest neighbor of j and j is the n-th nearest neighbor of i.
²The reader who is familiar with flow theory may notice that this algorithm also belongs to the first category of methods, as it is equivalent to a weight transformation followed by thresholding. The weight transformation replaces w_ij by the maximal flow between i and j.
Other pairwise clustering methods include techniques of non parametric density estimation [4] and pairwise deterministic annealing [3]. However, the three categories
of methods above are of special importance to us, since our current work provides a
common framework for all of them. Specifically, our new algorithm may be viewed
as a randomized version of an agglomerative clustering procedure, and in the same
time it generalizes the minimal cut algorithm. It is also strongly related to the
physically motivated granular magnet model algorithm. By showing the connection
between these methods, which may seem very different at a first glance, we provide
a better understanding of pairwise clustering.
Our method is unique in its stochastic nature while provenly maintaining low complexity. Thus our method performs as well as the aforementioned methods in "easy"
cases, while keeping the good performance in "difficult" ,cases. In particular, it is
more robust against noise and pathological configurations: (i) A minimal cut algorithm is intuitively reasonable since it optimizes so that as much of the similarity
weight remains within the parts of the clusters, and as little as possible is "wasted"
between the clusters. However, it tends to fail when there is no clean separation
into 2 parts, or when there are many small spurious parts due, e.g., to noise. Our
stochastic approach avoids these problems and behaves more robustly. (ii) The
single linkage algorithm deals well with chained data, where items in a cluster are
connected by transitive relations. Unfortunately the deterministic construction of
chains can be harmful in the presence of noise, where a few points can make a
"bridge" between two large clusters and merge them together. Our algorithm inherits the ability to cluster chained data; at the same time it is robust against such
noisy bridges as long as the probability to select all the edges in the bridge remains
small.
2 Stochastic pairwise clustering
Our randomized clustering algorithm is constructed of two main steps:
1. Stochastic partition of the similarity graph into r parts (by randomized
   agglomeration). For each partition index r (r = N, ..., 1):
(a) for every pair of points, the probability that they remain in the same
part is computed;
(b) the weight of the edge between the two points is replaced by this probability;
(c) clusters are formed using connected components and threshold of 0.5.
This is described in Sections 2.1 and 2.2.
2. Selection of proper r values, which reflect "interesting" structure in our
problem. This is described in Section 2.3.
2.1 The similarity transformation
At each level r, our algorithm performs a similarity transformation followed by
thresholding. In introducing this process, our starting point is a generalization of
the minimal cut algorithm; then we show how this generalization is obtained by the
randomization of a single linkage algorithm.
First, instead of considering only the minimal cuts, let us induce a probability
distribution on the set of all cuts. We assign to each cut a probability which
decreases with increasing capacity. Hence the minimal cut is the most probable cut
in the graph, but it does not determine the graph partition on its own.
As a second generalization to the min-cut algorithm we consider multi-way cuts.
An r-way cut is a partition of G into r connected components. The capacity of an
r-way cut is the sum of weights of all edges that connect different components. In
the rest of this paper we may refer to r-way cuts simply as "cuts".
Using the distribution induced on r-way cuts, we apply the following family of
weight transformations. The weight Wij is replaced by the probability that nodes i
and j are on the same side of a random r-way cut: w_ij → p^r_ij. This transformation
is defined for every integer r between 1 and N.
Since the number of cuts in a graph is exponentially large, one must ask whether
p^r_ij is computable. Here the decay rate of the cut probability plays an essential
role. The induced probability is found to decay fast enough with the capacity, hence
p^r_ij is dominated by the low-capacity cuts. Thus, since there exists a polynomial
bound on the number of low capacity cuts in any graph [5], the problem becomes
computable.
This strong property suggests a sampling scheme to estimate the pairing probabilities. Assume that a sampling tool is available, which generates cuts according to
their probability. Under this condition, a sample of polynomial size is sufficient to
estimate the p^r_ij's.
The sampling tool that we use is called the "contraction algorithm" [5]. Its discovery
led to an efficient probabilistic algorithm for the minimal cut problem. It was shown
that for a given r, the probability that the contraction algorithm returns the minimal
r-way cut of any graph is at least N^{−2(r−1)}, and it decays with increasing capacity.³
For a graph which is really made of clusters this is a rough underestimation.

³The exact decay rate is not known, but was found experimentally to be adequate. Otherwise we would ignore cuts generated with high capacity.
The contraction algorithm can be implemented in several ways. We describe here
its simplest form, which is constructed from N−1 edge contraction steps. Each edge
contraction follows the procedure below:
• Select edge (i, j) with probability proportional to w_ij.
• Replace nodes i and j by a single node {ij}.
• Let the set of edges incident on {ij} be the union of the sets of edges incident
  on i and j, but remove self loops formed by edges originally connecting i to j.
It is shown in [5] that each step of edge contraction can be implemented in O(N)
time, hence this simple form of the contraction algorithm has complexity of O(N²).
For sparse graphs an O(N log N) implementation can be shown.
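A plain sketch of one run of this simple form of the algorithm, with illustrative dictionary-based data structures rather than the O(N) per-step implementation of [5]:

```python
import random

def contract_once(weights, n):
    """One run of the randomized contraction procedure above.

    weights: dict {frozenset({i, j}): w_ij > 0}; n: number of nodes.
    Edges are selected with probability proportional to their weight,
    endpoints are merged, self loops are dropped, and parallel edges are
    summed. Returns a list of (r, merged_pair) events, where r is the
    number of clusters just before each merge."""
    edges = dict(weights)
    members = {i: {i} for i in range(n)}   # cluster representative -> nodes
    events = []
    while len(members) > 1 and edges:
        keys = list(edges)
        pick = random.random() * sum(edges[k] for k in keys)
        acc = 0.0
        for key in keys:
            acc += edges[key]
            if acc >= pick:
                break
        i, j = tuple(key)
        events.append((len(members), (i, j)))
        members[i] |= members.pop(j)       # merge cluster j into cluster i
        merged = {}
        for pair, w in edges.items():
            a, b = tuple(pair)
            a = i if a == j else a
            b = i if b == j else b
            if a == b:
                continue                   # self loop: edge was inside {i, j}
            key2 = frozenset((a, b))
            merged[key2] = merged.get(key2, 0.0) + w
        edges = merged
    return events
```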
The contraction algorithm as described above is a randomized version of the agglomerative single linkage procedure. If the probabilistic selection rule is replaced
by a greedy selection of the maximal weight edge, the single linkage algorithm is
obtained.
In terms of similarity transformations, a single linkage algorithm which halts with
r clusters may be associated with the transformation w_ij → {0, 1} (1 if i and j are
returned in the same cluster, 0 otherwise). Our similarity transformation (p^r_ij)
uses the expected value (or the average) of this binary assignment under the
probabilistic relaxation of the selection rule.
We could estimate p^r_ij by repeating the contraction algorithm M times and averaging
these binary indicators (a better way is described below). Using the Chernoff inequality
it can be shown⁴ that if M ≥ (2 ln 2 + 4 ln N − 2 ln δ)/ε² then each p^r_ij is estimated,
with probability ≥ 1 − δ, within ε of its true value.

⁴Thanks to Ido Bergman for pointing this out.
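Taking the bound at face value, the required number of runs can be computed directly (treat the constants as indicative):

```python
import numpy as np

def num_contractions(N, eps, delta):
    """M >= (2 ln 2 + 4 ln N - 2 ln delta) / eps^2 contraction runs suffice
    to estimate every p^r_ij within eps with probability >= 1 - delta."""
    return int(np.ceil((2 * np.log(2) + 4 * np.log(N)
                        - 2 * np.log(delta)) / eps ** 2))
```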
2.2 Construction of partitions
To compute a partition at every r level, it is sufficient to know for every i-j pair
which r satisfies p^r_ij = 0.5.
This is found by repeating the contraction algorithm M times. In each iteration
there exists a single r at which the edge between points i and j is marked and the
points are merged. Denote by r_m the level r which joins i and j at the m-th iteration
(m = 1, ..., M). The median r' of the sequence {r_1, r_2, ..., r_M} is the sample estimate
for the level r that satisfies p^r_ij = 0.5. We use an on-line technique (not described
here) to estimate the median r' using constant and small memory.
Having computed the matrix r', where the entry r'_ij is the estimator for the r that
satisfies p^r_ij = 0.5, we find the connected components at a given r value after
disconnecting every edge (i, j) for which r'_ij > r. This gives the r-level partition.
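A small union-find sketch of this step, assuming the matrix of estimated levels r'_ij has already been computed:

```python
import numpy as np

def partition_at_level(r_prime, r):
    """Connected components of the graph that keeps only the edges with
    r'_ij <= r (all other edges are cut). r_prime: (N, N) symmetric matrix
    of estimated levels; returns a cluster label per node."""
    n = r_prime.shape[0]
    parent = list(range(n))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a

    for i in range(n):
        for j in range(i + 1, n):
            if r_prime[i, j] <= r:
                parent[find(i)] = find(j)
    return [find(i) for i in range(n)]
```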
2.3 Hierarchical clustering
We now address the problem of choosing "good" r values.
The transformed weight p^r_ij has the advantage of reflecting transitive relations between data items i and j. For a selected value of r (which defines a specification
level) the partition of data items into clusters is obtained by eliminating edges whose
weight (p^r_ij) is less than a fixed threshold (0.5). That is: nodes are assigned to the
same cluster if at level r their probability to be on the same side of a random r-way
cut is larger than half.
Partitions which correspond to subsequent r values might be very similar to each
other, or even identical, in the sense that only a few nodes (if at all) change the
component to which they belong. Events which are of interest, therefore, are when
the variation between subsequent partitions is of the order of the size of a cluster . This typically happens when two clusters combine to form one cluster which
corresponds to a higher scale (less resolution).
In accordance, using the hierarchical partition obtained in Section 2.2, we measure
the variation between subsequent partitions by Σ_{k=1}^{K} ΔN_k, where K is a small
constant (of the order of the number of clusters) and N_k is the size of the k-th
largest component of the partition.
2.4 The granular magnet model
Our algorithm is closely related to the successful granular magnet model recently
proposed in [1]. However, the two methods draw the random cuts effectively from
different distributions. In our case the distribution is data driven, imposed by the
contraction algorithm. The physical model imposes the Boltzmann distribution,
where a cut of capacity E is assigned a probability proportional to exp(−E/T),
and T is a temperature parameter.
The probability p^T_ij measures whether nodes i and j are on the same side of a cut
at temperature T (originally called the "spin-spin correlation function"). The magnetic
model uses the similarity transformation w_ij → p^T_ij and a threshold (0.5) to break
the graph into components. However, even if identical distributions were used, p^T_ij
is inherently different from p^r_ij since at a fixed temperature the random cuts may
have different numbers of components.
Superficially, the parameter T plays in the magnetic model a similar role to our parameter r. But the two parameterizations are quite different. First, r is a discrete
parameter while T is a continuous one. Moreover, in order to find the pairing probabilities p^T_ij for different temperatures, the stochastic process should be employed
for every T value separately. On the other hand, our algorithm estimates p^r_ij for
every 1 ≤ r ≤ N at once. For hard clustering (vs. soft clustering) it was shown
above that even this is not necessary, since we can get a direct estimation of the r which
satisfies p^r_ij = 0.5.
3 Example
Pairwise clustering has the advantage that a vectorial representation of the data
is not needed. However, graphs of distances are hard to visualize and we therefore demonstrate our algorithm using vectorial data. In spite of having a vectorial
representation, the information which is made available to the clustering algorithm
includes only the matrix of pairwise Euclidean distances⁵ d_ij. Since our algorithm
works with similarity values and not with distances, it is necessary to invert the
distances using w_ij = f(d_ij). We choose f to be similar to the function used in [1]:
w_ij = exp(−d_ij²/a²), where a is the average distance to the n-th nearest neighbor
(we used n = 10, but the results remain the same as long as a reasonable value is
selected).
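A short sketch of this distance-to-similarity transformation (the diagonal handling is our own choice):

```python
import numpy as np

def similarity_matrix(D, n_neighbors=10):
    """w_ij = exp(-d_ij^2 / a^2), with a the average distance to the
    n-th nearest neighbor. D: (N, N) symmetric distance matrix."""
    D_ = np.asarray(D, dtype=float).copy()
    np.fill_diagonal(D_, np.inf)                  # exclude self-distances
    nth = np.sort(D_, axis=1)[:, n_neighbors - 1]
    a = nth.mean()
    return np.exp(-D_ ** 2 / a ** 2)              # inf -> 0 on the diagonal
```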
Figure 1: The 2000 data points (left), and the three most pronounced hierarchical levels
of clustering (right). At r=353 the three spirals form one cluster (figure a). This cluster
splits at r=354 into two (figures b1, b2), and into three parts at r=368 (figures c1, c2, c3).
The background points form isolated clusters, usually of size 1 (not shown).
Figure 1 shows 2000 data points in the Euclidean plane. In the stochastic stage of
the algorithm we used only 200 iterations of graph contraction, during which we
estimated for every pair i–j the value of r which satisfies p^r_ij = 0.5 (see Section 2.2).
As expected, subsequent partitions are typically identical or differ only slightly from
each other (Figure 2). The variation between subsequent partitions was measured
using the 10 largest parts (K = 10, see Section 2.3). The results did not depend on
the exact value of K since the sum was dominated by its first terms.
At low r values (partition into a small number of components) a typical partition is
composed of one giant component and a few tiny components that capture isolated
noise points. The incorporation of these tiny components into the giant one produces negligible variations between subsequent partitions. At high r values all the
components are small, and therefore the variation between subsequent partitions
must decay. At intermediate r values a small number of sharp peaks appear.
The two highest peaks in Figure 2 are at r=354 and r=368; they mark meaningful
hierarchies for the data clustering, as shown in Figure 1. We compare our results
with two other methods in Figures 3 and 4.
5The vectorial representation of data points is not useful even if it was available, since
the parametric model is not known (see Section 1).
Figure 2: The variation between subsequent partitions (see text) as a function of
the number of components (r). The variation is computed for every integer r (the
spacing between peaks is not due to sparse
sampling). Outside the displayed range the
variation vanishes.
Figure 3: The best bi-partition according
to the normalized cut algorithm [9]. Since
the first partition breaks one of the spirals,
a satisfactory solution cannot be achieved
in any of the later stages.
Figure 4: A three (macroscopic) clusters partition by
a deterministic single linkage
algorithm. The probabilistic
scheme avoids the "bridging
effect" thanks to the small
probability of selecting the
particular chain of edges.
References
[1] Blatt M., Wiseman S. and Domany E., "Data clustering usmg a model granular
magnet", Neural Computation 9, 1805-1842, 1997.
[2] Duda R. and Hart P., "Pattern classification and scene analysis", Wiley-Interscience,
New York, 1973.
[3] Hofmann T. and Buhmann J., "Pairwise data clustering by deterministic annealing",
PAMI 19, 1-14, 1997.
[4] Jain A. and Dubes R., "Algorithms for clustering data", Prentice Hall, NJ, 1988.
[5] Karger D., "A new approach to the minimum cut problem", Journal of the ACM,
43(4), 1996.
[6] Klock H. and Buhmann J., "Data visualization by multidimensional scaling: a deterministic annealing approach", Technical Report IAI-TR-96-8, Institut fur Informatik
III, University of Bonn. October 1996.
[7] Linial N., London E. and Rabinovich Y., "The geometry of graphs and some of its
algorithmic applications", Combinatorica 15, 215-245, 1995.
[8] Rose K., Gurewitz E. and Fox G., "Constrained clustering as an optimization
method", PAMI 15, 785-794, 1993.
[9] Shi J. and Malik J., "Normalized cuts and image segmentation", Proc. CVPR, 731-737, 1997.
[10] Smith S., "Threshold validity for mutual neighborhood clustering", PAMI 15, 89-92,
1993.
[ll] Wu Z. and Leahy R., "An optimal graph theoretic approach to data clustering: theory
and its application to image segmentation", PAMI 15, 1l01-1113, 1993.
649 | 1,594 | Learning Nonlinear Dynamical Systems
using an EM Algorithm
Zoubin Ghahramani and Sam T. Roweis
Gatsby Computational Neuroscience Unit
University College London
London WC1N 3AR, U.K.
http://www.gatsby.ucl.ac.uk/
Abstract
The Expectation-Maximization (EM) algorithm is an iterative procedure for maximum likelihood parameter estimation from data
sets with missing or hidden variables [2]. It has been applied to
system identification in linear stochastic state-space models, where
the state variables are hidden from the observer and both the state
and the parameters of the model have to be estimated simultaneously [9]. We present a generalization of the EM algorithm for
parameter estimation in nonlinear dynamical systems. The "expectation" step makes use of Extended Kalman Smoothing to estimate
the state, while the "maximization" step re-estimates the parameters using these uncertain state estimates. In general, the nonlinear
maximization step is difficult because it requires integrating out the
uncertainty in the states. However, if Gaussian radial basis function (RBF) approximators are used to model the nonlinearities,
the integrals become tractable and the maximization step can be
solved via systems of linear equations.
1 Stochastic Nonlinear Dynamical Systems
We examine inference and learning in discrete-time dynamical systems with hidden
state x_t, inputs u_t, and outputs y_t.¹ The state evolves according to stationary
nonlinear dynamics driven by the inputs and by additive noise

x_{t+1} = f(x_t, u_t) + w    (1)
¹All lowercase characters (except indices) denote vectors. Matrices are represented by
uppercase characters.
where w is zero-mean Gaussian noise with covariance Q. The outputs are non-linearly related to the states and inputs by

y_t = g(x_t, u_t) + v    (2)

where v is zero-mean Gaussian noise with covariance R.² The vector-valued nonlinearities f and g are assumed to be differentiable, but otherwise arbitrary.
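As a concrete illustration, here is a minimal simulation sketch of the generative model (1)-(2). The tanh transition echoes the system used in the experiments of section 5, but the gain, the noise variances, and the input distribution below are illustrative guesses, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(f, g, u, Q, R, x0):
    """Sample a trajectory from x_{t+1} = f(x_t, u_t) + w,  y_t = g(x_t, u_t) + v."""
    xs, ys, x = [], [], x0
    for u_t in u:
        xs.append(x)
        # observation noise v ~ N(0, R), process noise w ~ N(0, Q)
        ys.append(g(x, u_t) + rng.multivariate_normal(np.zeros(R.shape[0]), R))
        x = f(x, u_t) + rng.multivariate_normal(np.zeros(Q.shape[0]), Q)
    return np.array(xs), np.array(ys)

f = lambda x, u: np.tanh(2.0 * x + u)   # the gain 2.0 is an assumption
g = lambda x, u: x                      # identity output map, also an assumption
xs, ys = simulate(f, g, u=rng.normal(size=(200, 1)),
                  Q=0.01 * np.eye(1), R=0.01 * np.eye(1), x0=np.zeros(1))
```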
Models of this kind have been examined for decades in various communities. Most
notably, nonlinear state-space models form one of the cornerstones of modern systems and control engineering. In this paper, we examine these models within the
framework of probabilistic graphical models and derive a novel learning algorithm
for them based on EM. With one exception,³ this is to the best of our knowledge
the first paper addressing learning of stochastic nonlinear dynamical systems of the
kind we have described within the framework of the EM algorithm.
The classical approach to system identification treats the parameters as hidden variables, and applies the Extended Kalman Filtering algorithm (described in section 2)
to the nonlinear system with the state vector augmented by the parameters [5].⁴
This approach is inherently on-line, which may be important in certain applications.
Furthermore, it provides an estimate of the covariance of the parameters at each
time step. In contrast, the EM algorithm we present is a batch algorithm and does
not attempt to estimate the covariance of the parameters.
There are three important advantages the EM algorithm has over the classical approach. First, the EM algorithm provides a straightforward and principled method
for handing missing inputs or outputs. Second, EM generalizes readily to more
complex models with combinations of discrete and real-valued hidden variables.
For example, one can formulate EM for a mixture of nonlinear dynamical systems.
Third, whereas it is often very difficult to prove or analyze stability within the
classical on-line approach, the EM algorithm is always attempting to maximize the
likelihood, which acts as a Lyapunov function for stable learning.
In the next sections we will describe the basic components of the learning algorithm.
For the expectation step of the algorithm, we infer the conditional distribution of the
hidden states using Extended Kalman Smoothing (section 2). For the maximization
step we first discuss the general case (section 3) and then describe the particular
case where the nonlinearities are represented using Gaussian radial basis function
(RBF; [6]) networks (section 4).
2 Extended Kalman Smoothing
Given a system described by equations (1) and (2), we need to infer the hidden
states from a history of observed inputs and outputs. The quantity at the heart
of this inference problem is the conditional density P(x_t | u_1, ..., u_T, y_1, ..., y_T), for
1 ≤ t ≤ T, which captures the fact that the system is stochastic and therefore our
inferences about x will be uncertain.
²The Gaussian noise assumption is less restrictive for nonlinear systems than for linear
systems since the nonlinearity can be used to generate non-Gaussian state noise.
³The authors have just become aware that Briegel and Tresp (this volume) have applied
EM to essentially the same model. Briegel and Tresp's method uses multilayer perceptrons
(MLP) to approximate the nonlinearities, and requires sampling from the hidden states to
fit the MLP. We use Gaussian radial basis functions (RBFs) to model the nonlinearities,
which can be fit analytically without sampling (see section 4) .
⁴It is important not to confuse this use of the Extended Kalman algorithm, to simultaneously estimate parameters and hidden states, with our use of EKS, to estimate just
the hidden state as part of the E step of EM.
For linear dynamical systems with Gaussian state evolution and observation noises,
this conditional density is Gaussian and the recursive algorithm for computing its
mean and covariance is known as Kalman smoothing [4, 8]. Kalman smoothing is
directly analogous to the forward-backward algorithm for computing the conditional
hidden state distribution in a hidden Markov model, and is also a special case of
the belief propagation algorithm. 5
For nonlinear systems this conditional density is in general non-Gaussian and can
in fact be quite complex. Multiple approaches exist for inferring the hidden state
distribution of such nonlinear systems, including sampling methods [7] and variational approximations [3]. We focus instead in this paper on a classic approach from
engineering, Extended Kalman Smoothing (EKS).
Extended Kalman Smoothing simply applies Kalman smoothing to a local linearization of the nonlinear system. At every point x in x-space, the derivatives of the
vector-valued functions f and g define the matrices A_x̂ ≡ ∂f/∂x |_{x=x̂} and C_x̂ ≡ ∂g/∂x |_{x=x̂},
respectively. The dynamics are linearized about x̂_t, the mean of the Kalman filter
state estimate at time t:

x_{t+1} ≈ f(x̂_t, u_t) + A_{x̂_t} (x_t − x̂_t) + w    (3)
The output equation (2) can be similarly linearized. If the prior distribution of the
hidden state at t = 1 was Gaussian, then, in this linearized system, the conditional
distribution of the hidden state at any time t given the history of inputs and outputs
will also be Gaussian. Thus, Kalman smoothing can be used on the linearized system
to infer this conditional distribution (see figure 1, left panel).
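As a rough sketch of how EKS linearizes and filters, the following implements the forward (filtering) half with numerical Jacobians; the backward Rauch-Tung-Striebel pass that turns the filter into a smoother is omitted, and all names are ours, not the paper's.

```python
import numpy as np

def jacobian(fn, x, u, eps=1e-5):
    """Finite-difference Jacobian of fn(., u) at x (f and g are assumed differentiable)."""
    fx = fn(x, u)
    J = np.zeros((fx.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps
        J[:, i] = (fn(x + dx, u) - fx) / eps
    return J

def ekf_forward(f, g, us, ys, Q, R, mu0, V0):
    """Extended Kalman filter: Kalman filtering applied to a local linearization."""
    mu, V, filtered = mu0, V0, []
    for u, y in zip(us, ys):
        C = jacobian(g, mu, u)                   # output linearization at the mean
        S = C @ V @ C.T + R
        K = V @ C.T @ np.linalg.inv(S)           # Kalman gain
        mu = mu + K @ (y - g(mu, u))             # measurement update
        V = V - K @ C @ V
        filtered.append((mu, V))
        A = jacobian(f, mu, u)                   # dynamics linearization (eq. 3)
        mu = f(mu, u)                            # time update
        V = A @ V @ A.T + Q
    return filtered
```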
3 Learning
The M step of the EM algorithm re-estimates the parameters given the observed
inputs, outputs, and the conditional distributions over the hidden states. For the
model we have described, the parameters define the nonlinearities f and g, and the
noise covariances Q and R.
Two complications arise in the M step. First, it may not be computationally feasible to fully re-estimate f and g. For example, if they are represented by neural
network regressors, a single full M step would be a lengthy training procedure using
backpropagation, conjugate gradients, or some other optimization method. Alternatively, one could use partial M steps, for example, each consisting of one or a few
gradient steps.
The second complication is that f and 9 have to be trained using the uncertain state
estimates output by the EKS algorithm. Consider fitting f, which takes as inputs
Xt and Ut and outputs Xt+l. For each t, the conditional density estimated by EKS is
a full-covariance Gaussian in (x_t, x_{t+1})-space. So f has to be fit not to a set of data
points but instead to a mixture of full-covariance Gaussians in input-output space
(Gaussian "clouds" of data). Integrating over this type of noise is non-trivial for
almost any form of f. One simple but inefficient approach to bypass this problem
is to draw a large sample from these Gaussian clouds of uncertain data and then fit
f to these samples in the usual way. A similar situation occurs with g.
In the next section we show how, by choosing Gaussian radial basis functions to
model f and g, both of these complications vanish.
5The forward part of the Kalman smoother is the Kalman filter.
4 Fitting Radial Basis Functions to Gaussian Clouds
We will present a general formulation of an RBF network from which it should be
clear how to fit special forms for f and g. Consider the following nonlinear mapping
from input vectors x and u to an output vector z:
z = Σ_{i=1}^{I} h_i ρ_i(x) + A x + B u + b + w,    (4)
where w is a zero-mean Gaussian noise variable with covariance Q. For example,
one form of f can be represented using (4) with the substitutions x ← x_t, u ← u_t,
and z ← x_{t+1}; another with x ← (x_t, u_t), u ← 0, and z ← x_{t+1}. The parameters
are: the coefficients of the I RBFs, h_i; the matrices A and B multiplying inputs
x and u, respectively; and an output bias vector b. Each RBF is assumed to be a
Gaussian in x-space, with center c_i and width given by the covariance matrix S_i:

ρ_i(x) = |2π S_i|^{-1/2} exp{ −(1/2) (x − c_i)^T S_i^{-1} (x − c_i) }    (5)
The goal is to fit this model to data (u,x,z). The complication is that the data
set comes in the form of a mixture of Gaussian distributions. Here we show how to
analytically integrate over this mixture distribution to fit the RBF model.
Assume the data set is:
P(x, z, u) = (1/J) Σ_j N_j(x, z) δ(u − u_j).    (6)
That is, we observe samples from the u variables, each paired with a Gaussian
"cloud" of data, Nj, over (x, z). The Gaussian Nj has mean /1j and covariance
matrix Cj .
Let z_θ(x, u) = Σ_{i=1}^{I} h_i ρ_i(x) + A x + B u + b, where θ is the set of parameters
θ = {h_1, ..., h_I, A, B, b}. The log likelihood of a single data point under the model
is:

−(1/2) [z − z_θ(x, u)]^T Q^{-1} [z − z_θ(x, u)] − (1/2) ln |Q| + const.
The maximum likelihood RBF fit to the mixture of Gaussian data is obtained by
minimizing the following integrated quadratic form:
min_{θ,Q} { Σ_j ∫_x ∫_z N_j(x, z) [z − z_θ(x, u_j)]^T Q^{-1} [z − z_θ(x, u_j)] dx dz + J ln |Q| }.    (7)
We rewrite this in a slightly different notation, using angled brackets ⟨·⟩_j to denote
expectation over N_j, and defining
θ ≡ [h_1 h_2 ... h_I A B b],
Φ ≡ [ρ_1(x) ρ_2(x) ... ρ_I(x) x^T u^T 1]^T.

Then, the objective can be written

min_{θ,Q} { Σ_j ⟨ (z − θΦ)^T Q^{-1} (z − θΦ) ⟩_j + J ln |Q| }.    (8)
Taking derivatives with respect to θ, premultiplying by −Q^{-1}, and setting to zero
gives the linear equations Σ_j ⟨(z − θΦ)Φ^T⟩_j = 0, which we can solve for θ and Q:

θ = [ Σ_j ⟨zΦ^T⟩_j ] [ Σ_j ⟨ΦΦ^T⟩_j ]^{-1},    Q = (1/J) Σ_j ⟨(z − θΦ)(z − θΦ)^T⟩_j.    (9)
In other words, given the expectations in the angled brackets, the optimal parameters can be solved for via a set of linear equations. In appendix A we show that these
expectations can be computed analytically. The derivation is somewhat laborious,
but the intuition is very simple: the Gaussian RBFs multiply with the Gaussian
densities Nj to form new unnormalized Gaussians in (x, y)-space. Expectations under these new Gaussians are easy to compute. This fitting algorithm is illustrated
in the right panel of figure 1.
[Figure 1 graphics: left panel, Gaussian evidence from t−1 and t+1 combining at x_t during EKS; right panel, an RBF fit to Gaussian clouds over the input dimension.]
Figure 1: Illustrations of the E and M steps of the algorithm. The left panel shows
the information used in Extended Kalman Smoothing (EKS), which infers the hidden
state distribution during the E-step. The right panel illustrates the regression technique
employed during the M-step. A fit to a mixture of Gaussian densities is required; if
Gaussian RBF networks are used then this fit can be solved analytically. The dashed line
shows a regular RBF fit to the centres of the four Gaussian densities while the solid line
shows the analytic RBF fit using the covariance information. The dotted lines below show
the support of the RBF kernels.
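A minimal sketch of the resulting M-step, assuming the moment matrices Σ_j⟨zΦ^T⟩_j, Σ_j⟨ΦΦ^T⟩_j and Σ_j⟨zz^T⟩_j have already been assembled from the analytic expectations of appendix A:

```python
import numpy as np

def rbf_m_step(sum_zPhiT, sum_PhiPhiT, sum_zzT, J):
    """Solve the linear equations of section 4 for theta and Q.

    sum_zPhiT   : sum over j of <z Phi^T>_j
    sum_PhiPhiT : sum over j of <Phi Phi^T>_j
    sum_zzT     : sum over j of <z z^T>_j
    J           : number of Gaussian clouds in the data set.
    """
    # theta = [sum <z Phi^T>] [sum <Phi Phi^T>]^{-1}
    theta = np.linalg.solve(sum_PhiPhiT.T, sum_zPhiT.T).T
    # with theta optimal, Q = (1/J)(sum <z z^T> - theta sum <Phi z^T>)
    Q = (sum_zzT - theta @ sum_zPhiT.T) / J
    return theta, Q
```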
5 Results
We tested how well our algorithm could learn the dynamics of a nonlinear system
by observing only its inputs and outputs. The system consisted of a single input,
state and output variable at each time, where the relation of the state from one time
step to the next was given by a tanh nonlinearity. Sample outputs of this system
in response to white noise are shown in figure 2 (left panel).
We initialized the nonlinear model with a linear dynamical model trained with
EM, which in turn we initialized with a variant of factor analysis. The model
was given 11 RBFs in Xt-space, which were uniformly spaced within a range which
was automatically determined from the density of points in Xt-space. After the
initialization was over, the algorithm discovered the sigmoid nonlinearity in the
dynamics within less than 10 iterations of EM (figure 2, middle and right panels).
Further experiments need to be done to determine how practical this method will
be in real domains.
[Figure 2 graphics: (a) input and (b) output time series; log likelihood vs. iterations of EM; the learned nonlinearity plotted against x(t).]
Figure 2: (left): Data set used for training (first half) and testing (rest), which consists
of a time series of inputs, Ut (a) , and outputs Yt (b) . (middle): Representative plots of
log likelihood vs iterations of EM for linear dynamical systems (dashed line) and nonlinear
dynamical systems trained as described in this paper (solid line) . Note that the actual
likelihood for nonlinear dynamical systems cannot generally be computed analytically;
what is shown here is the approximate likelihood computed by EKS. The kink in the solid
curve comes when initialization with linear dynamics ends and the nonlinearity starts to
be learned. (right): Means of (Xt , Xt+d Gaussian posteriors computed by EKS (dots) ,
along with the sigmoid nonlinearity (dashed line) and the RBF nonlinearity learned by
the algorithm. At no point does the algorithm actually observe (Xt , Xt+d pairs; these are
inferred from inputs, outputs, and the current model parameters.
6 Discussion
This paper brings together two classic algorithms, one from statistics and another
from systems engineering, to address the learning of stochastic nonlinear dynamical systems. We have shown that by pairing the Extended Kalman Smoothing
algorithm for state estimation in the E-step, with a radial basis function learning
model that permits analytic solution of the M-step, the EM algorithm is capable of
learning a nonlinear dynamical model from data. As a side effect we have derived
an algorithm for training a radial basis function network to fit data in the form of
a mixture of Gaussians.
Our initial approach has three potential limitations. First, the M-step presented
does not modify the centres or widths of the RBF kernels. It is possible to compute
the expectations required to change the centres and widths, but it requires resorting to a partial M-step. For low dimensional state spaces , filling the space with
pre-fixed kernels is feasible, but this strategy needs exponentially many RBFs in
high dimensions . Second, EM training can be slow, especially if initialized poorly.
Understanding how different hidden variable models are related can help devise
sensible initialization heuristics. For example, for this model we used a nested initialization which first learned a simple linear dynamical system, which in turn was
initialized with a variant of factor analysis. Third, the method presented here learns
from batches of data and assumes stationary dynamics. We have recently extended
it to handle online learning of nonstationary dynamics .
The belief network literature has recently been dominated by two methods for
approximate inference, Markov chain Monte Carlo [7] and variational approximations [3]. To our knowledge this paper is the first instance where extended Kalman
smoothing has been used to perform approximate inference in the E step of EM.
While EKS does not have the theoretical guarantees of variational methods, its simplicity has gained it wide acceptance in the estimation and control literatures as a
method for doing inference in nonlinear dynamical systems. We are now exploring
generalizations of this method to learning nonlinear multilayer belief networks.
Acknowledgements
ZG would like to acknowledge the support of the CITO (Ontario) and the Gatsby Charitable Fund. STR was supported in part by the NSF Center for Neuromorphic Systems
Engineering and by an NSERC of Canada 1967 Award.
A Expectations Required to Fit the RBFs
The expectations we need to compute for equation 9 are ⟨x⟩_j, ⟨z⟩_j, ⟨xx^T⟩_j, ⟨xz^T⟩_j, ⟨zz^T⟩_j,
⟨ρ_i(x)⟩_j, ⟨x ρ_i(x)⟩_j, ⟨z ρ_i(x)⟩_j, and ⟨ρ_i(x) ρ_l(x)⟩_j.

Starting with some of the easier ones that do not depend on the RBF kernel ρ:

⟨x⟩_j = μ_j^x,    ⟨z⟩_j = μ_j^z,
⟨xx^T⟩_j = μ_j^x μ_j^{x,T} + C_j^{xx},
⟨xz^T⟩_j = μ_j^x μ_j^{z,T} + C_j^{xz},
⟨zz^T⟩_j = μ_j^z μ_j^{z,T} + C_j^{zz}.

Observe that when we multiply the Gaussian RBF kernel ρ_i(x) (equation 5) and N_j we
get a Gaussian density over (x, z) with mean and covariance

μ_ij = C_ij ( C_j^{-1} μ_j + [S_i^{-1} c_i; 0] ),    C_ij = ( C_j^{-1} + [S_i^{-1}, 0; 0, 0] )^{-1},

and an extra constant (due to lack of normalization),

β_ij = (2π)^{-d_x/2} |S_i|^{-1/2} |C_j|^{-1/2} |C_ij|^{1/2} exp{−δ_ij/2},

where δ_ij = c_i^T S_i^{-1} c_i + μ_j^T C_j^{-1} μ_j − μ_ij^T C_ij^{-1} μ_ij. Using β_ij and μ_ij, we can evaluate the
other expectations:

⟨ρ_i(x)⟩_j = β_ij,    ⟨x ρ_i(x)⟩_j = β_ij μ_ij^x,    and    ⟨z ρ_i(x)⟩_j = β_ij μ_ij^z.

Finally, ⟨ρ_i(x) ρ_l(x)⟩_j = (2π)^{−d_x} |C_j|^{−1/2} |S_i|^{−1/2} |S_l|^{−1/2} |C_ilj|^{1/2} exp{−γ_ilj/2}, where

C_ilj = ( C_j^{-1} + [S_i^{-1} + S_l^{-1}, 0; 0, 0] )^{-1},    μ_ilj = C_ilj ( C_j^{-1} μ_j + [S_i^{-1} c_i + S_l^{-1} c_l; 0] ),

and γ_ilj = c_i^T S_i^{-1} c_i + c_l^T S_l^{-1} c_l + μ_j^T C_j^{-1} μ_j − μ_ilj^T C_ilj^{-1} μ_ilj.
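For illustration, a sketch of the β_ij, μ_ij, C_ij computation above. Representing the joint Gaussian over (x, z) with x in the first d_x coordinates is our storage assumption:

```python
import numpy as np

def rbf_gaussian_moments(c_i, S_i, mu_j, C_j, d_x):
    """beta_ij = <rho_i(x)>_j plus the mean/covariance of rho_i(x) * N_j."""
    d = mu_j.size
    P = np.zeros((d, d))
    P[:d_x, :d_x] = np.linalg.inv(S_i)          # the block matrix [S_i^{-1} 0; 0 0]
    Cj_inv = np.linalg.inv(C_j)
    C_ij = np.linalg.inv(Cj_inv + P)
    mu_ij = C_ij @ (Cj_inv @ mu_j + P @ np.concatenate([c_i, np.zeros(d - d_x)]))
    delta = (c_i @ np.linalg.solve(S_i, c_i) + mu_j @ Cj_inv @ mu_j
             - mu_ij @ np.linalg.solve(C_ij, mu_ij))
    beta_ij = ((2 * np.pi) ** (-d_x / 2)
               * np.linalg.det(S_i) ** -0.5 * np.linalg.det(C_j) ** -0.5
               * np.linalg.det(C_ij) ** 0.5 * np.exp(-delta / 2))
    return beta_ij, mu_ij, C_ij
```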
References
[1] T. Briegel and V. Tresp. Fisher Scoring and a Mixture of Modes Approach for Approximate Inference and Learning in Nonlinear State Space Models. In This Volume.
MIT Press, 1999.
[2] A.P. Dempster, N.M. Laird, and D.B. Rubin. Maximum likelihood from incomplete
data via the EM algorithm. J. Royal Statistical Society Series B, 39:1-38, 1977.
[3] M. I. Jordan, Z. Ghahramani, T . S. Jaakkola, and L. K. Saul. An Introduction to
variational methods in graphical models. Machine Learning, 1999.
[4] R. E. Kalman and R. S. Bucy. New results in linear filtering and prediction . Journal
of Basic Engineering (A SME) , 83D:95-108, 1961.
[5] L. Ljung and T. Soderstrom. Theory and Practice of Recursive Identification. MIT
Press, Cambridge, MA, 1983.
[6] J. Moody and C. Darken. Fast learning in networks of locally-tuned processing units.
Neural Computation, 1(2):281-294, 1989.
[7] R. M. Neal. Probabilistic inference using Markov chain monte carlo methods. Technical
Report CRG-TR-93-1, 1993.
[8] H. E. Rauch. Solutions to the linear smoothing problem. IEEE Transactions on
Automatic Control, 8:371-372, 1963.
[9] R . H. Shumway and D. S. Stoffer. An approach to time series smoothing and forecasting
using the EM algorithm. J. Time Series Analysis, 3(4):253- 264, 1982.
650 | 1,596 | Learning to Find Pictures of People
Sergey Ioffe
Computer Science Division
U.C. Berkeley
Berkeley CA 94720
ioffe@cs.berkeley.edu
David Forsyth
Computer Science Division
U.C. Berkeley
Berkeley CA 94720
daf@cs.berkeley.edu
Abstract
Finding articulated objects, like people, in pictures presents a particularly difficult object recognition problem. We show how to
find people by finding putative body segments, and then constructing assemblies of those segments that are consistent with the constraints on the appearance of a person that result from kinematic
properties. Since a reasonable model of a person requires at least
nine segments, it is not possible to present every group to a classifier. Instead, the search can be pruned by using projected versions
of a classifier that accepts groups corresponding to people. We
describe an efficient projection algorithm for one popular classifier, and demonstrate that our approach can be used to determine
whether images of real scenes contain people.
1 Introduction
Several typical collections containing over ten million images are listed in [2]. There
is an extensive literature on obtaining images from large collections using features
computed from the whole image, including colour histograms, texture measures and
shape measures; a partial review appears in [5].
However, in the most comprehensive field study of usage practices (a paper by
Enser [2] surveying the use of the Hulton Deutsch collection), there is a clear user
preference for searching these collections on image semantics. An ideal search tool
would be a quite general object recognition system that could be adapted quickly
and easily to the types of objects sought by a user. An important special case
is finding people and determining what they are doing. This is hard, because
people have many internal degrees of freedom. We follow the approach of [3],
and represent people as collections of cylinders, each representing a body segment.
Regions that could be the projections of cylinders are easily found using techniques
similar to those of [1]. Once these regions are found, they must be assembled
into collections that are consistent with the appearance of images of real people,
which are constrained by the kinematics of human joints; consistency is tested
with a classifier. Since there are many candidate segments, a brute force search
is impossible. We show how this search can be pruned using projections of the
classifier.
2 Learning to Build Segment Configurations
Suppose that N segments have been found in an image, and there are m body parts.
We will define a labeling as a set L = {(l_1, s_1), (l_2, s_2), ..., (l_k, s_k)} of pairs where
each segment s_i ∈ {1...N} is labeled with the label l_i ∈ {1...m}. A labeling is
complete if it represents a full m-segment configuration (Fig. 2(a,b)).
Assume we have a classifier C that for any complete labeling L outputs C(L) > 0
if L corresponds to a person-like configuration, and C(L) < 0 otherwise. Finding
all the possible body configurations in an image is equivalent to finding all the
complete labelings L for which C(L) > 0. This cannot be done with brute-force
search through the entire set. The search can be pruned if, for an (incomplete)
labeling L', there is no complete L ⊇ L' such that C(L) > 0. For instance, if two
segments cannot represent the upper and lower left arm, as in Figure 1a, then we
do not consider any complete labelings where they are labeled as such.
Projected classifiers make the search for body configurations efficient by pruning
labelings using the properties of smaller sub-labelings (as in [7], who use manually
determined bounds and do not learn the tests). Given a classifier C which is a
function of a set of features whose values depend on segments with labels l_1...l_m,
the projected classifier C_{l_1...l_k} is a function of all those features that depend
only on the segments with labels l_1...l_k. In particular, C_{l_1...l_k}(L') > 0 if there is
some extension L of L' such that C(L) > 0 (see figure 1). The converse need not
be true: the feature values required to bring a projected point inside the positive
volume of C may not be realized with any labeling of the current set of segments
1, ..., N. For a projected classifier to be useful, it must be easy to compute the
projection, and it must be effective in rejecting labelings at an early stage. These
are strong requirements which are not satisfied by most good classifiers; for example,
in our experience a support vector machine with a positive definite quadratic kernel
projects easily but typically yields unrestrictive projected classifiers.
2.1 Building Labelings Incrementally
Assume we have a classifier C that accepts assemblies corresponding to people and
that we can construct projected classifiers as we need them. We will now show how
to use them to construct labelings, using a pyramid of classifiers.
A pyramid of classifiers (Fig. 1(c)), determined by the classifier C and a permutation
of labels (l_1 ... l_m), consists of nodes N_{l_i...l_j} corresponding to each of the projected
classifiers C_{l_i...l_j}, i ≤ j. Each of the bottom-level nodes N_{l_i} receives the set of all
segments in the image as the input. The top node N_{l_1...l_m} outputs the set of all
complete labelings L = {(l_1, s_1) ... (l_m, s_m)} such that C(L) > 0, i.e. the set of all
assemblies in the image classified as people. Further, each node N_{l_i...l_j} outputs the
set of all sub-labelings L = {(l_i, s_i) ... (l_j, s_j)} such that C_{l_i...l_j}(L) > 0.
The nodes N_{l_i} at the bottom level work by selecting all segments s_i in the image for
which C_{l_i}({(l_i, s_i)}) > 0. Each of the remaining nodes has two parts: merging and
filtering. The merging stage of node N_{l_i...l_j} merges the outputs of its children by
computing the set of all labelings {(l_i, s_i) ... (l_j, s_j)} where {(l_i, s_i) ... (l_{j-1}, s_{j-1})}
[Figure 1 graphics: (a) two incompatible segments; (b) projection of a classifier in the (x(s_1), y(s_1, s_2)) feature plane; (c) the pyramid of classifiers built over the image segments.]
Figure 1: (a) Two segments that cannot correspond to the left upper and lower
arm. Any configuration where they do can be rejected using a projected classifier
regardless of the other segments that might appear in the configuration. (b) Projecting a classifier C_{(l_1,s_1),(l_2,s_2)}. The shaded area is the volume classified as
positive, for the feature set {x(s_1), y(s_1, s_2)}. Finding the projection C_{l_1} amounts
to projecting off the features that cannot be computed from s_1 only, i.e., y(s_1, s_2).
(c) A pyramid of classifiers. Each node outputs sub-assemblies accepted by the corresponding projected classifier. Each node except those in the bottom row works by
forming labelings from the outputs of its two children, and filtering the result using
the corresponding projected classifier. The top node outputs the set of all complete
labelings that correspond to body configurations.
and {(l_{i+1}, s_{i+1}) ... (l_j, s_j)} are in the outputs of N_{l_i...l_{j-1}} and N_{l_{i+1}...l_j}, respectively.
The filtering stage then selects, from the resulting set of labelings, those for which
C_{l_i...l_j}(·) > 0, and the resulting set is the output of N_{l_i...l_j}. It is clear, from the
definition of projected classifiers, that the output of the pyramid is, in fact, the set
of all complete L for which C(L) > 0 (note that C_{l_1...l_m} = C).
The only constraint on the order in which the outputs of nodes are computed is that
children nodes have to be applied before parents. In our implementation, we use
nodes N_{l_i...l_j} where j changes from 1 to m, and, for each j, i changes from j down to
1. This is equivalent to computing sets of labelings of the form {(l_1, s_1) ... (l_j, s_j)}
in order, where getting (j+1)-segment labelings from j-segment ones is itself an
incremental process, whereby we check labels against l_{j+1} in the order l_j, l_{j-1}, ..., l_1.
In practice, we choose the latter order on the fly for each increment step using a
greedy algorithm, to minimize the size of labeling sets that are constructed (note
that in this case the classifiers no longer form a pyramid). The order (l_1 ... l_m) in
which labels are added to an assembly needs to be fixed. We determine this order
with a greedy algorithm by running a large segment set through the labeling builder
and choosing the next label to add so as to minimize the number of labelings that
result.
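A sketch of this incremental builder. The predicate projected_ok below is hypothetical; it stands in for evaluating the projected classifier that matches the labels used in a partial labeling:

```python
def build_labelings(segments, labels, projected_ok):
    """Grow labelings one label at a time, pruning with projected classifiers.

    segments     : candidate segment ids found in the image.
    labels       : the fixed label order (l_1 ... l_m).
    projected_ok : hypothetical callable; returns True iff the projected
                   classifier accepts the partial labeling, i.e. some
                   extension of it could still be classified as a person.
    """
    partials = [[]]                       # start from the empty labeling
    for label in labels:
        grown = []
        for partial in partials:
            used = {s for _, s in partial}
            for seg in segments:
                if seg in used:           # each segment used at most once
                    continue
                candidate = partial + [(label, seg)]
                if projected_ok(candidate):
                    grown.append(candidate)
        partials = grown
    return partials                       # complete labelings accepted by C
```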
2.2 Classifiers that Project
In our problem, each segment from the set {1...N} is a rectangle in some position
and orientation. Given a complete labeling L = {(1, s_1), ..., (m, s_m)}, we want to
have C(L) > 0 iff the segment arrangement produced by L looks like a person.
Learning to Find Pictures ofPeople
785
[Figure 2 graphics: (a) segments extracted from an image; (b) the labeled configuration; (c) a bounding box (dashed) combined with a boosted classifier over features x and y, with per-region total weights marked.]
Figure 2: (a) All segments extracted for an image. (b) A labeled segment configuration corresponding to a person, where T=torso, LUA=left upper arm, etc.
The head is not marked because we are not looking for it with our method. The
single left leg segment in (a) has been broken in (b) to generate the upper and
lower leg segments. (c) (top) A combination of a bounding box (the dashed line)
and a boosted classifier, for two features x and y. Each plane in the boosted
classifier is a thick line with the positive half-space indicated by an arrow; the
associated weight β is shown next to the arrow. The shaded area is the positive volume of the classifier, which is the set of points P where Σ_f w_f(P(f)) > 1/2.
The weights w_x(·) and w_y(·) are shown along the x- and y-axes, respectively, and
the total weight w_x(P(x)) + w_y(P(y)) is shown for each region of the bounding
box. (bottom) The projected classifier, given by w_x(P(x)) > 1/2 − δ = 0.1, where
δ = max_{P(y)} w_y(P(y)) = max{0.25, 0.4, 0.15} = 0.4.
Each feature will depend on a few segments (1 to 3 in our experiments). Our
kinematic features are invariant to translation, uniform scaling or rotation of the
segment set, and include angles between segments and ratios of lengths, widths and
distances. We expect the features that correspond to human configurations to lie
within small fractions of their possible value ranges. This suggests using an axis-aligned bounding box, with bounds learned from a collection of positive labelings,
for a good first separation, and then using a boosted version of a weak classifier that
splits the feature space on a single feature value (as in [6]). This classifier projects
particularly well, using a simple algorithm described in section 2.3.
Each weak classifier (Fig. 2(c)) is defined by the feature f_j on which the split is
made, the position p_j of the splitting hyperplane, and the direction d_j ∈ {1, −1}
that determines which half-space is positive. A point P is classified as positive iff
d_j(P(f_j) − p_j) > 0, where P(f_j) is the value of feature f_j. The boosting algorithm
will associate a weight β_j with each plane (so that Σ_j β_j = 1), and the resulting
classifier will classify a point as positive iff Σ_{d_j(P(f_j)−p_j)>0} β_j > 1/2, that is, iff the
total weight of the weak classifiers that classify the point as positive is at least
half of the total weight of the classifiers. The set {f_j} may have repeating features
(which may have different p_j, d_j and β_j values), and does not need to span the
entire feature set.
By grouping together the weights corresponding to planes splitting on the same
feature, we finally rewrite the classifier as Σ_f w_f(P(f)) > 1/2, where w_f(P(f)) =
Σ_{j: f_j = f, d_j(P(f)−p_j) > 0} β_j is the weight associated with the particular value of feature
f; it is a piece-wise constant function and depends on which of the intervals given
by {p_j | f_j = f} this value falls in.
2.3 Projecting a Boosted Classifier
Given a classifier constructed as above, we need to construct classifiers that depend
on some identified subset of the features. The geometry of our classifiers, whose positive regions consist of unions of axis-aligned bounding boxes, makes
this easy to do.
Let g be the feature to be projected away, perhaps because its value depends on
a label that is not available. The projection of the classifier should classify a point
P' in the (lower-dimensional) feature space as positive iff max_P Σ_f w_f(P(f)) > 1/2,
where P is a point which projects into P' but can have any value for P(g). We can
rewrite this expression as Σ_{f≠g} w_f(P'(f)) + max_{P(g)} w_g(P(g)) > 1/2. The value
of δ = max w_g(P(g)) is readily available and independent of P'. We can see that,
with the feature projected away, we obtain Σ_{f≠g} w_f(P'(f)) > 1/2 − δ. Any number
of features can be projected away in a sequence in this fashion. An example of the
projected classifier is shown in Figure 2(c).
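A sketch of this projection for the piece-wise constant weights of section 2.2. The dictionary layout of the weight tables is our assumption about how the classifier is stored:

```python
import numpy as np

def project_away(weights, g):
    """Remove feature g from a boosted classifier and lower the threshold.

    weights : dict mapping each feature f to (thresholds, region_weights),
              the piece-wise constant w_f(.); region_weights has one entry
              per interval, i.e. len(thresholds) + 1 values.
    """
    _, region_weights = weights[g]
    delta = max(region_weights)                   # max over P(g) of w_g(P(g))
    reduced = {f: w for f, w in weights.items() if f != g}
    return reduced, delta                         # accept iff sum > 1/2 - delta

def total_weight(weights, point):
    """Evaluate sum_f w_f(P(f)) for a (possibly projected) classifier."""
    total = 0.0
    for f, (thresholds, region_weights) in weights.items():
        region = int(np.searchsorted(thresholds, point[f]))
        total += region_weights[region]
    return total
```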
The classifier C we are using allows for an efficient building of labelings, in that
the features do not need to be recomputed when we move from C_{l_1...l_k} to C_{l_1...l_{k+1}}.
We achieve this efficiency by carrying along with a labeling L = {(l_1, s_1) ... (l_k, s_k)}
the sum σ(L) = Σ_{f ∈ F(l_1...l_k)} w_f(P(f)), where F(l_1...l_k) is the set of all features
computable from the segments labeled as l_1, ..., l_k, and {P(f)} are the values of
these features. When we add another segment to get L' = {(l_1, s_1) ... (l_{k+1}, s_{k+1})},
we can compute σ(L') = σ(L) + Σ_{f ∈ F(l_1...l_{k+1}) \ F(l_1...l_k)} w_f(P'(f)). In other words,
when we add a label l_{k+1}, we need to compute only those features that require s_{k+1}
for their computation.

3 Experimental Results
We report results for a system that automatically identifies potential body segments
(using the techniques described in [4]), and then applies the assembly process described above. Images in which assemblies kinematically consistent with a
person are found are reported as having people in them. The segment finder may find either
1 or 2 segments for each limb, depending on whether it is bent or straight; because
the pruning is so effective, we can allow segments to be broken into two equal halves
lengthwise (like the left leg in Fig. 2(b)), both of which are tested.
3.1 Training
The training set included 79 images without people, selected randomly from the
COREL database, and 274 images each with a single person on uniform background.
The images with people have been scanned from books of human models [10]. All
segments in the test images were reported; in the control images, only segments
whose interior corresponded to human skin in colour and texture were reported.
Control images, both for the training and for the test set, were chosen so that all
had at least 30% of their pixels similar to human skin in colour and texture . This
gives a more realistic test of the system performance by excluding regions that are
obviously not human, and reduces the number of segments in the control images to
the same order of magnitude as those in the test images .
(a)
Features   Test   Control
367        120    28
567        120    86

(b)
Features   False Neg.   False Pos.
367        37           1~~
567        49           ~0

Table 1: (a) Number of images of people (test) and without people (control) processed
by the classifiers with 367 and 567 features. (b) False negative (images with a person
where no body configuration was found) and false positive (images with no people
where a person was detected) rates.
The models are all wearing either swim suits or no clothes, otherwise segment finding
fails; it is an open problem to segment people wearing loose clothing. There is a
wide variation in the poses of the training examples, although all body segments
are visible. The sets of segments corresponding to people were then hand-labeled.
Of the 274 images with people, segments for each body part were found in 193
images. The remaining 81 resulted in incomplete configurations, which could still
be used for computing the bounding box used to obtain a first separation. Since
we assume that if a configuration looks like a person then its mirror image would
too, we double the number of body configurations by flipping each one about a
vertical axis. The bounding box is then computed from the resulting 548 points in
the feature space, without looking at the images without people.
The boosted classifier was trained to separate two classes: the 193 x 2 = 386 points
corresponding to body configurations, and 60727 points that did not correspond to
people but lay in the bounding box, obtained by using the bounding box classifier
to incrementally build labelings for the images with no people. We added 1178
synthetic positive configurations obtained by randomly selecting each limb and the
torso from one of the 386 real images of body configurations (which were rotated
and scaled so the torso positions were the same in all of them) to give an effect
of joining limbs and torsos from different images rather like children's flip-books .
Remarkably, the boosted classifier classified each of the real data points correctly but
misclassified 976 out of the 1178 synthetic configurations as negative; the synthetic
examples were unexpectedly more similar to the negative examples than the real
positive examples were.
3.2 Results
The test dataset was separate from the training set and included 120 images with a
person on a uniform background, and varying numbers of control images , reported
in Table 1. We report results for two classifiers, one using 567 features and the
other using a subset of 367 of those features . Table 1b shows the false positive
and false negative rates achieved for each of the two classifiers. By marking 51 %
of test images and only 10% of control images, the classifier using 567 features
compares extremely favorably with that of [3], which marked 54% of test images
and 38% of control images using hand-tuned tests to form groups of four segments.
In 55 of the 59 images where there was a false negative, a segment corresponding
to a body part was missed by the segment finder, meaning that the overall system
performance significantly understates the classifier performance. There are few
signs of overfitting, probably because the features are highly redundant. Using the
larger set of features makes labeling faster (by a factor of about five), because more
configurations are rejected earlier.
4 Conclusions and Future Work
Groups of segments that satisfy kinematic constraints, learned from images of real
people, quite reliably correspond to people and can be used to identify them. Our
trick of projecting classifiers is effective at pruning an otherwise completely unmanageable correspondence search. Future issues include: fusing responses from face
finders (such as those of [11, 9]); exploiting patterns of shading on human limbs to
get better selectivity (as in [8]); determining the configuration of the person, which
might tell what they are doing; and exploiting the kinematic similarities between
humans and many animals to build systems that can find many different types of
animal without searching the classes one by one.
References
[1] J .M. Brady and H. Asada. Smoothed local symmetries and their implementation.
International Journal of Robotics Research, 3(3) , 1984.
[2] P.G.B. Enser. Query analysis in a visual information retrieval context. J. Document
and Text Management, 1(1):25-52, 1993.
[3] M. M. Fleck, D. A. Forsyth, and C. Bregler. Finding naked people. In European
Conference on Computer Vision 1996, Vol. II, pages 592-602, 1996.
[4] D.A. Forsyth and M.M. Fleck. Body plans. In IEEE Conf. on Computer Vision and
Pattern Recognition, 1997.
[5] D .A . Forsyth, J. Malik, M.M. Fleck, H. Greenspan, T. Leung, S. Belongie, C. Carson,
and C . Bregler. Finding pictures of objects in large collections of images. In Proc.
2nd International Workshop on Object Representation in Computer Vision, 1996.
[6] Y. Freund and R.E. Schapire. Experiments with a new boosting algorithm. In Machine
Learning 13, 1996.
[7] W.E.L. Grimson and T. Lozano-Perez. Localizing overlapping parts by searching the
interpretation tree. IEEE Trans. Patt. Anal. Mach. Intell. , 9(4):469-482, 1987.
[8] J. Haddon and D.A. Forsyth. Shading primitives. In Int. Conf. on Computer Vision,
1997. to appear.
[9] H.A. Rowley, S. Baluja, and T. Kanade. Human face detection in visual scenes.
In D.S. Touretzky, M.C . Mozer, and M .E. Hasselmo, editors, Advances in Neural
Information Processing 8, pages 875-881, 1996.
[10] Elte Shuppan. Pose file, volume 1-7. Books Nippan, 1993-1996. A collection of
photographs of human models, annotated in Japanese.
[11] K-K Sung and T. Poggio. Example based learning for view based face detection. Ai
memo 1521, MIT, 1994.
| 1596 |@word version:2 nd:1 open:1 shading:2 ld:1 configuration:19 selecting:2 tuned:1 document:1 current:1 nt:1 si:10 must:3 readily:1 realistic:1 visible:1 wx:2 shape:1 greedy:2 half:4 selected:1 plane:3 rch:1 lua:1 boosting:2 node:13 preference:1 five:1 along:2 constructed:2 direct:1 ik:1 j3j:1 consists:1 inside:1 automatically:1 project:5 what:2 surveying:1 finding:9 clothes:1 brady:1 sung:1 berkeley:4 every:1 classifier:53 scaled:1 brute:2 control:8 converse:1 appear:2 ice:1 positive:15 before:1 local:1 sd:4 ext:1 mach:1 joining:1 might:2 collect:1 shaded:2 suggests:1 range:1 practice:1 union:1 ance:1 area:2 significantly:1 projection:6 word:1 kern:1 get:2 cannot:4 put:1 context:1 impossible:1 equivalent:2 primitive:1 regardless:1 wit:2 splitting:2 searching:3 variation:1 increment:1 suppose:1 user:2 associate:1 trick:1 recognition:3 particularly:2 lay:1 labeled:5 bottom:4 fly:1 unexpectedly:1 region:5 wj:7 grimson:1 mozer:1 broken:2 rowley:1 trained:1 depend:3 rewrite:2 segment:47 carrying:1 division:2 efficiency:1 completely:1 easily:3 joint:1 po:1 articulated:1 describe:1 effective:3 detected:1 query:1 labeling:10 corresponded:1 tell:1 choosing:1 configura:1 quite:2 whose:3 larger:1 ive:1 otherwise:3 wg:1 maxp:2 tlw:1 gi:1 g1:1 axisaligned:1 itself:1 obviously:1 sequence:1 fea:1 aligned:1 iff:3 achieve:1 getting:1 ent:1 exploiting:2 hrough:1 parent:1 double:1 sea:1 incremental:1 rotated:1 object:6 tions:1 depending:1 pose:2 ij:2 dep:1 strong:1 c:2 deutsch:1 posit:1 thick:1 annotated:1 lfj:1 human:10 require:1 im:1 bregler:2 pl:1 clothing:1 lm:2 sought:1 early:1 proc:1 label:7 hasselmo:1 ere:3 builder:1 tool:1 mit:1 rather:1 boosted:6 varying:1 greenspan:1 ax:1 check:1 inst:1 defin:1 el:1 leung:1 entire:2 typically:1 lj:13 misclassified:1 labelings:16 selects:1 semantics:1 pixel:1 abase:1 overall:1 ill:1 orientation:1 issue:1 animal:2 constrained:1 special:1 plan:1 field:1 construct:3 once:1 having:1 equal:1 manually:1 represents:1 look:2 future:2 report:2 few:2 randomly:2 ve:1 comprehensive:1 resulted:1 national:1 intell:1 geometry:1 suit:1 freedom:1 cylinder:2 detection:2 highly:1 kinematic:4 nl:2 perez:1 partial:1 lh:1 poggio:1 tree:1 incomplete:2 classify:3 earlier:1 localizing:1 fusing:1 subset:2 uniform:3 too:1 reported:4 synthetic:3 person:12 international:1 off:1 together:1 quickly:1 satisfied:1 management:1 containing:1 choose:1 kinematically:1 conf:2 book:3 li:3 potential:1 int:3 forsyth:8 satisfy:1 depends:2 piece:1 view:1 doing:2 keley:1 minimize:2 ni:3 who:1 correspond:5 identify:1 lds:1 weak:3 rejecting:1 produced:1 straight:1 classified:4 fo:1 touretzky:1 ed:1 definition:1 associated:2 dataset:1 popular:1 wh:2 torso:4 cj:1 rim:1 appears:1 follow:1 response:1 done:1 box:8 rejected:2 stage:3 hand:2 receives:1 overlapping:1 incrementally:1 perhaps:1 indicated:1 usage:1 effect:1 building:2 contain:1 true:1 lozano:1 ll:2 width:1 whereby:1 carson:1 be1:1 complete:9 demonstrate:1 l1:1 bring:1 fj:2 image:39 wise:1 meaning:1 rotation:1 definit:1 corel:1 volume:3 million:1 he:10 interpretation:1 ai:1 consistency:1 dj:3 had:1 longer:1 similarity:1 etc:1 add:3 selectivity:1 neg:1 determine:2 redundant:1 dashed:1 ii:10 full:1 reduces:1 ing:1 faster:1 retrieval:1 bent:1 finder:3 vision:4 histogram:1 sergey:1 represent:2 pyramid:5 achieved:1 ion:2 robotics:1 background:2 want:1 remarkably:1 interval:1 probably:1 file:1 ideal:1 split:2 easy:2 ture:1 identified:1 ering:1 computable:1 det:1 ould:1 whether:2 expression:1 colour:3 filt:1 swim:1 wo:1 nine:1 useful:1 clear:2 listed:1 
amount:1 repeating:1 ten:1 processed:1 generate:1 schapire:1 sl:1 gil:3 sign:1 correctly:1 patt:1 vol:1 group:4 recomputed:1 four:1 ifj:1 pj:4 rectangle:1 fraction:1 sum:1 nce:1 angle:1 lng:1 reasonable:1 separation:2 putative:1 missed:1 scaling:1 bound:2 correspondence:1 quadratic:1 adapted:1 scanned:1 constraint:3 scene:2 span:1 extremely:1 pruned:3 marking:1 combination:1 ate:1 smaller:1 leg:3 projecting:4 invariant:1 kinematics:1 loose:1 flip:1 end:1 available:2 limb:4 away:3 hat:1 top:3 remaining:2 running:1 assembly:7 include:2 build:3 dat:1 move:1 unrestrictive:1 added:2 realized:1 arrangement:1 skin:2 flipping:1 malik:1 distance:1 separate:2 length:1 ruct:1 ratio:1 jef:1 difficult:1 favorably:1 negative:5 memo:1 implementation:2 reliably:1 anal:1 upper:4 vertical:1 sm:2 looking:2 head:1 excluding:1 smoothed:1 david:1 pair:1 required:1 accepts:2 merges:1 learned:2 assembled:1 trans:1 wy:3 pattern:2 rf:1 including:1 max:1 lengt:1 force:2 arm:3 representing:1 picture:6 identifies:1 axis:2 lk:6 text:1 review:1 literature:1 l2:1 determining:1 freund:1 expect:1 permutation:1 filtering:2 sil:2 age:1 degree:1 consistent:2 editor:1 daf:1 pi:5 translation:1 row:1 naked:1 allow:1 fall:1 wide:1 face:3 unmanageable:1 collection:7 made:1 projected:16 sj:7 pruning:3 overfitting:1 ioffe:2 belongie:1 search:7 sk:3 table:3 kanade:1 learn:1 ca:2 obtaining:1 symmetry:1 european:1 japanese:1 did:1 arrow:2 whole:1 s2:5 bounding:8 child:4 body:15 fig:4 intel:1 tl:1 fashion:1 cil:1 sub:3 position:3 fails:1 sf:1 candidate:1 lie:1 down:1 stl:1 grouping:1 consist:2 workshop:1 false:7 merging:1 ci:1 mirror:1 texture:3 magnitude:1 photograph:1 appearance:2 forming:1 visual:2 applies:1 corresponds:1 determines:1 extracted:1 marked:2 hard:1 change:2 included:2 determined:2 except:1 asada:1 baluja:1 hyperplane:1 nil:1 total:3 accepted:1 experimental:1 la:3 fleck:3 internal:1 people:27 support:1 latter:1 wearing:2 tested:2 |
651 | 1,597 | Restructuring Sparse High Dimensional Data for
Effective Retrieval
Charles Lee Isbell, Jr.
AT&T Labs
180 Park Avenue Room A255
Florham Park, NJ 07932-0971
Paul Viola
Artificial Intelligence Laboratory
Massachusetts Institute of Technology
Cambridge, MA 02139
Abstract
The task in text retrieval is to find the subset of a collection of documents relevant
to a user's information request, usually expressed as a set of words. Classically,
documents and queries are represented as vectors of word counts. In its simplest
form, relevance is defined to be the dot product between a document and a query
vector-a measure of the number of common terms. A central difficulty in text
retrieval is that the presence or absence of a word is not sufficient to determine
relevance to a query. Linear dimensionality reduction has been proposed as a technique for extracting underlying structure from the document collection. In some
domains (such as vision) dimensionality reduction reduces computational complexity. In text retrieval it is more often used to improve retrieval performance.
We propose an alternative and novel technique that produces sparse representations constructed from sets of highly-related words. Documents and queries
are represented by their distance to these sets, and relevance is measured by the
number of common clusters. This technique significantly improves retrieval performance, is efficient to compute and shares properties with the optimal linear
projection operator and the independent components of documents.
1 Introduction
The task in text retrieval is to find the subset of a collection of documents relevant to a user's information request, usually expressed as a set of words. Naturally, we would like to apply techniques
from natural language understanding to this problem. Unfortunately, the sheer size of the data to be
represented makes this difficult. We wish to process tens or hundreds of thousands of documents,
each of which may contain hundreds of thousands of different words . It is clear that any useful
approach must be time and space efficient.
Following (Salton, 1971), we adopt a modified Vector Space Model (VSM) for document representation. A document is a vector where each dimension is a count of occurrences for a different word¹.
¹In practice, suffixes are removed and counts are re-weighted by some function of their natural frequency
[Figure 1 diagram; node labels: South, Africa, Mandela, football, national, league, college.]

Figure 1: A Model of Word Generation. Independent topics give rise to specific words
according to an unknown probability distribution (line thickness indicates the likelihood of generating
a word).
A collection of documents is a matrix, D, where each column is a document vector d_i. Queries are
similarly represented.
We propose a topic based model for the generation of words in documents. Each document is
generated by the interaction of a set of independent hidden random variables called topics. When a
topic is active it causes words to appear in documents. Some words are very likely to be generated
by a topic and others less so. Different topics may give rise to some of the same words. The final set
of observed words results from a linear combination of topics . See Figure 1 for an example.
In this view of word generation, individual words are only weak indicators of underlying topics.
Our task is to discover from data those collections of words that best predict the (unknown) underlying topics. The assumption that words are neither independent of one another nor conditionally
independent of topics motivates our belief that this is possible.
Our approach is to construct a set of linear operators which extract the independent topic structure of
documents. We have explored different algorithms for discovering these operators, including independent components analysis (Bell and Sejnowski, 1995). The inferred topics are then used to represent
and compare documents.
Below we describe our approach and contrast it with Latent Semantic Indexing (LSI), a technique
that also attempts to linearly transform the documents from "word space" into one more appropriate
for comparison (Hull, 1994; Deerwester et al., 1990). We show that the LSI transformation has very
different properties than the optimal linear transformation. We characterize some of these properties
and derive an unsupervised method that searches for them. Finally, we present experiments demonstrating the robustness of this method and describe several computational and space advantages.
2 The Vector Space Model and Latent Semantic Indexing
The similarity between two documents using the VSM model is their inner product, d_i^T d_j. Queries
are just short documents, so the relevance of documents to a query, q, is D^T q. There are several
advantages to this approach beyond its mathematical simplicity. Above all, it is efficient to compute
and store the word counts. While the word-document matrix has a very large number of potential
entries, most documents do not contain very many of the possible words, so it is sparsely populated.
Thus, algorithms for manipulating the matrix only require space and time proportional to the average
number of different words that appear in a document, a number likely to be much smaller than the
full dimensionality of the document matrix (in practice, non-zero elements represent about 2% of
the total number of elements). Nevertheless, VSM makes an important tradeoff by sacrificing a great
deal of document structure, losing context that may disambiguate meaning.
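As a sketch, VSM relevance is a single sparse matrix-vector product over the stored non-zero counts; the toy numbers below are made up:

```python
import numpy as np
from scipy.sparse import csc_matrix

def vsm_scores(D, q):
    """Rank documents by the inner products D^T q (one column of D per document)."""
    return D.T @ q

# Four words, three documents; counts are invented for illustration.
D = csc_matrix(np.array([[2, 0, 0],
                         [1, 1, 0],
                         [0, 3, 1],
                         [0, 0, 2]]))
q = np.array([1, 1, 0, 0])        # a query mentioning the first two words
print(vsm_scores(D, q))           # documents sharing more terms score higher
```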
Any text retrieval system must overcome the fundamental difficulty that the presence or absence
of a word is insufficient to determine relevance. This is due to two intrinsic problems of natural
(Frakes and Baeza-Yates, 1992). We incorporate these methods; however, such details are unimportant for this
discussion.
language: synonymy and polysemy. Synonymy refers to the fact that a single underlying concept
can be represented by many different words (e.g. "car" and "automobile" refer to the same class
of objects). Polysemy refers to the fact that a single word can refer to more than one underlying
concept (e.g. "apple" is both a fruit and a computer company). Synonymy results in false negatives
and polysemy results in false positives.
Latent semantic indexing is one proposal for addressing this problem. LSI constructs a smaller
document matrix that retains only the most important information from the original, by using the
Singular Value Decomposition (SVD). Briefly, the SVD of a matrix D is U S V^T, where U and V
contain orthogonal vectors and S is diagonal (see (Golub and Loan, 1993) for further properties and
algorithms). Note that the co-occurrence matrix, D D^T, can be written as U S² U^T; U contains the
eigenvectors of the co-occurrence matrix while the diagonal elements of S (referred to as singular
values) contain the square roots of their corresponding eigenvalues. The eigenvectors with the largest
eigenvalues capture the axes of largest variation in the data.
In LSI, each document is projected into a lower dimensional space, $\hat{D} = S_k^{-1} U_k^T D$, where $S_k$ and $U_k$
contain only the largest $k$ singular values and the corresponding eigenvectors, respectively.
The resulting document matrix is of smaller size but still provably represents the most variation in the
original matrix. Thus, LSI represents documents as linear combinations of orthogonal features. It is
hoped that these features represent meaningful underlying "topics" present in the collection. Queries
are also projected into this space, so the relevance of documents to a query is $D^T U_k S_k^{-2} U_k^T q$.
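A minimal sketch of this projection, assuming dense NumPy arrays and random placeholder data; the function name is ours, not from the paper.

```python
import numpy as np

def lsi_project(D, q, k):
    """Project documents and a query into the k-dimensional LSI space
    using D = U S V^T; documents become S_k^{-1} U_k^T D."""
    U, s, _ = np.linalg.svd(D, full_matrices=False)
    Uk, Sk_inv = U[:, :k], np.diag(1.0 / s[:k])
    docs_k = Sk_inv @ Uk.T @ D     # k x n_documents
    query_k = Sk_inv @ Uk.T @ q    # k x 1
    return docs_k, query_k

rng = np.random.default_rng(0)
D = rng.random((50, 20))           # placeholder word-document counts
q = rng.random((50, 1))            # placeholder query
docs_k, query_k = lsi_project(D, q, k=5)
scores = docs_k.T @ query_k        # equals D^T U_k S_k^{-2} U_k^T q
```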
This type of dimensionality reduction is very similar to principal components analysis (peA), which
has been used in other domains, including visual object recognition (Turk and Pentland, 1991). In
practice, there is some evidence to suggest that LSI can improve retrieval performance; however, it
is often the case that LSI improves text retrieval performance by only a small amount or not at all
(see (Hull, 1994) and (Deerwester et aI., 1990) for a discussion).
3 Do Optimal Projections for Retrieval Exist?
Hypotheses abound for the success of LSI, including: i) LSI removes noise from the document
set; ii) LSI finds words that are synonyms; iii) LSI finds clusters of documents. Whatever it does,
LSI operates without knowledge of the queries that will be presented to the system. We could
instead attempt a supervised approach, searching for a matrix $P$ such that $D^T P P^T q$ results in large
values for documents in D that are known to be relevant for a particular query, q. The choice for
the structure of P embodies assumptions about the structure of D and q and what it means for
documents and queries to be related.
For example, imagine that we are given a collection of documents, D, and queries, Q. For each query
we are told which documents are relevant. We can use this information to construct an optimal P
such that $D^T P P^T Q \approx R$, where $R_{ij}$ equals 1 if document $i$ is relevant to query $j$, and 0 otherwise.
We find $P$ in two steps. First we find an $X$ minimizing $\|D^T X Q - R\|_F$, where $\|\cdot\|_F$ denotes
the Frobenius norm of a matrix.² Second, we find $P$ by decomposing $X$ into $PP^T$. Unfortunately,
this may not be simple. The matrix $PP^T$ has properties that are not necessarily shared by $X$. In
particular, while $PP^T$ is symmetric, there is no guarantee that $X$ will be (in our experiments $X$ is
far from symmetric). We can, however, take the SVD of $X = U_X S_X V_X^T$, using the matrix $U_X$ to
project the documents and $V_X$ to project the queries.
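A sketch of this two-step construction, under the assumption that plain least squares is an acceptable solver; the helper name is ours.

```python
import numpy as np

def optimal_projection(D, Q, R):
    """Solve for X with D^T X Q ~ R in two least-squares steps,
    then split X with an SVD (X need not be symmetric)."""
    # Step 1: M minimizing ||D^T M - R||_F.
    M, *_ = np.linalg.lstsq(D.T, R, rcond=None)
    # Step 2: X minimizing ||X Q - M||_F, via Q^T X^T = M^T.
    Xt, *_ = np.linalg.lstsq(Q.T, M.T, rcond=None)
    X = Xt.T
    Ux, Sx, Vxt = np.linalg.svd(X, full_matrices=False)
    return Ux, Vxt.T   # project documents with Ux, queries with Vx
```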
We can now compare LSI's projection axes, $U$, with the optimal $U_X$ computed as above. One measure
of comparison is the distribution of documents as projected onto these axes. Figure 2a shows the
distribution of Medline documents³ projected onto the first axis of $U_X$. Notice that there is a large
²First find $M$ that minimizes $\|D^T M - R\|_F$; $X$ is then the matrix that minimizes $\|XQ - M\|_F$.
³Medline is a small test collection, consisting of 1033 documents and about 8500 distinct words. We have
found similar results for other, larger collections.
Figure 2: (A) The distribution of Medline documents projected onto one of the "optimal" axes. The
kurtosis of this distribution is 44. (B) The distribution of Medline documents projected onto one
of the LSI axes. The kurtosis of this distribution is 6.9. (C) The distribution of Medline documents
projected onto one of the ICA axes. The kurtosis of this distribution is 60.
spike near zero, and a well-separated outlier spike. The kurtosis of this distribution is 44. Subsequent
axes of Ux result in similar distributions. We might hope that these axes each represent a topic shared
by a few documents. Figure 2b shows the distribution of documents projected onto the first LSI axis.
This axis yields a distribution with a much lower kurtosis of 6.9 (a normal distribution has kurtosis
3). This induces a distribution that looks nothing like a cluster: there is a smooth continuum of
values. Similar distributions result for many of the first 100 axes.
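The kurtosis figures quoted here can be reproduced with the standard fourth-moment estimator; a small sketch:

```python
import numpy as np

def kurtosis(v):
    """Fourth standardized moment: 3 for a Gaussian, much larger for
    the spiky, well-separated projections of Figure 2a."""
    v = v - v.mean()
    return np.mean(v ** 4) / np.mean(v ** 2) ** 2

# For a projection axis u and documents in the columns of D:
# kurtosis(u @ D) compares axes as in Figure 2.
```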
These results suggest that LSI-like approaches may well be searching for projections that are suboptimal. In the next section, we describe an algorithm designed to find projections that look more
like those in Figure 2a than in Figure 2b.
4 Topic Centered Representations
There are several problems with the "optimal" approach described in the previous section. Aside
from its completely supervised nature, there may be a problem of over-fitting: the number of parameters in X (the number of words squared) can be large compared to the number of documents and
queries. It is not clear how to move towards a solution that will likely have low generalization error,
our ultimate goal. Further, computing X is expensive, involving several full-rank singular value
decompositions.
On the other hand, while we may not be able to take advantage of supervision, it seems reasonable to
search for projections like those in Figure 2a. There are several unsupervised techniques we might
use. We begin with independent component analysis (Bell and Sejnowski, 1995), a technique that
has recently gained popularity. Extensions such as (Amari, Cichocki and Yang, 1996) have made
the algorithm more efficient and robust.
4.1 What are the Independent Components of Documents?
Figure 2C shows the distribution of Medline documents along one of the ICA axes (kurtosis 60). It
is representative of other axes found for that collection, and for other, larger collections.
Like the optimal axes found earlier, this axis also separates documents. This is desirable because
it means that the axes are distinguishing groups of (presumably related) documents. Still, we can
ask a more interesting question; namely, how do these axes group words? Rather than project our
documents onto the ICA space, we can project individual words (this amounts to projecting the
identity matrix onto that space) and observe how ICA redistributes them.
Figure 3 shows a typical distribution of all the words along one of the axes found by ICA on the
[Figure 3 shows a histogram of word weights along one ICA axis; the words with large magnitude include "africa," "apartheid," "anc," "transition," "mandela," "continent," "elite," "ethiopia," and "saharan."]
Figure 3: The distribution of words with large magnitude along an ICA axis from the White House
collection.
White House collection.⁴ ICA induces a highly kurtotic distribution over the words. It is also
quite sparse: most words have a value very close to zero. The histogram shows only the words with
large values, both positive and negative. One group of words is made up of highly related words;
namely, "africa," "apartheid," and "mandela." The other is made up of words that have no obvious
relationship to one another. In fact, these words are not directly related, but each co-occurs with
different individual words in the first group. For example, "saharan" and "africa" occur together
many times, but not in the context of apartheid and South Africa; rather, in documents concerning
US policy toward Africa in general. As it so happens, "saharan" acts as a discriminating word for
these subtopics.
4.2 Topic Centered Representations
It appears that ICA is finding a set of words, S, that selects for related documents, H, along with
another set of words, T, whose elements do not select for H, but co-occur with elements of S.
Intuitively, S selects for documents in a general subject area, and T removes a specific subset of
those documents, leaving a small set of highly related documents. This suggests a straightforward
algorithm to achieve the same goal directly:
foreach topic, $C_k$, you wish to define:
  - Choose a source document $d_c$ from $D$
  - Let $b$ be the documents of $D$ sorted by similarity to $d_c$
  - Divide $b$ into three groups: those assumed to be relevant, those assumed to be completely irrelevant, and those assumed to be weakly relevant
  - Let $G_k$, $B_k$, and $M_k$ be the centroid of each respective group
  - Let $C_k = f(G_k - B_k) - f(M_k - G_k)$, where $f(x) = \max(x, 0)$
The three groups of documents are used to drive the discovery of two sets of words. One set selects
for documents in a general topic area by finding the set of words that distinguish the relevant documents from documents in general, a form of global clustering. The other set of words distinguishes
the weakly related documents from the relevant documents. Assigning them negative weight results
in their removal. This leaves only a set of closely related documents. This local clustering approach
is similar to an unsupervised version of Rocchio with Query Zoning (Singhal, 1997).
⁴The White House collection contains transcripts of press releases and press conferences from 1993. There
are 1585 documents and 18675 distinct words.
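A compact sketch of the algorithm above; the group sizes are assumptions, since the paper does not state how the three groups are sized.

```python
import numpy as np

def topic_vector(D, seed_idx, n_rel=20, n_weak=50):
    """One topic vector C_k from the Section 4.2 algorithm.
    D is a dense words x documents matrix; n_rel and n_weak are
    hypothetical group sizes."""
    d = D[:, seed_idx]                        # source document
    order = np.argsort(-(D.T @ d))            # sort by similarity to d
    rel = order[:n_rel]                       # assumed relevant
    weak = order[n_rel:n_rel + n_weak]        # assumed weakly relevant
    irr = order[n_rel + n_weak:]              # assumed irrelevant
    G = D[:, rel].mean(axis=1)                # centroids G_k, B_k, M_k
    B = D[:, irr].mean(axis=1)
    M = D[:, weak].mean(axis=1)
    f = lambda x: np.maximum(x, 0.0)
    return f(G - B) - f(M - G)
```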
[Precision-recall plot comparing six methods: Baseline, LSI, Documents as Clusters, Relevant Documents as Clusters, ICA, and Topic Clustering; precision on the vertical axis, recall on the horizontal axis.]
Figure 4: A comparison of different algorithms on the Wall Street Journal
5 Experiments
In this section, we show results of experiments with the Wall Street Journal collection. It contains 42,652 documents and 89,757 words. Following convention, we measure the success of a text
retrieval system using precision-recall curves⁵. Figure 4 illustrates the performance of several algorithms:
1. Baseline: the standard inner product measure, $D^T q$.
2. LSI: Latent Semantic Indexing.
3. Documents as Clusters: each document is a projection axis. This is equivalent to a modified
inner product measure, $D^T D D^T q$.
4. Relevant Documents as Clusters: in order to simulate pseudo-relevance feedback, we use
the centroid of the top few documents returned by the $D^T q$ similarity measure.
5. ICA: Independent Component Analysis.
6. Topic Clustering: the algorithm described in Section 4.2.
In this graph, we restrict queries to those that have at least fifty relevant documents. The topic
clustering approach and ICA perform best, maintaining higher average precision over all ranges.
Unlike smaller collections such as Medline, documents from this collection do not tend to cluster
around the queries naturally. As a result, the baseline inner product measure performs poorly. Other
clustering techniques that tend to work well on collections such as Medline perform even worse.
Finally, LSI does not perform well.
Figure 5 illustrates different approaches on subsets of Wall Street Journal queries. In general, as
each query has more and more relevant documents, overall performance improves. In particular,
the simple clustering scheme using only relevant documents performs very well. Nonetheless, our
approach improves upon this standard technique with minimal additional computation.
⁵When asked to return $n$ documents, precision is the percentage of those which are relevant. Recall is the
percentage of the total relevant documents which are returned.
Figure 5: (A) Performance of various clustering techniques for those queries with more than 75
relevant documents. (B) Performance for those queries with more than 100 relevant documents.
6 Discussion
We have described typical dimension reduction techniques used in text retrieval and shown that
these techniques make strong assumptions about the form of projection axes. We have characterized another set of assumptions and derived an algorithm that enjoys significant computational and
space advantages. Further, we have described experiments that suggest that this approach is robust.
Finally, much of what we have described here is not specific to text retrieval. Hopefully, similar
characterizations will apply to other sparse high-dimensional domains.
References
Amari , S., Cichocki, A., and Yang, H. (1996). A new learning algorithm for blind source separation. In
Advances in Neural Information Processing Systems.
Bell, A. and Sejnowski, T. (1995). An information-maximization approach to blind source separation and blind
deconvolution. Neural Computation, 7:1129-1159.
Deerwester, S., Dumais, S. T., Landauer, T. K., Furnas, G. W., and Harshman, R. A. (1990). Indexing by latent
semantic analysis. Journal of the American Society for Information Science, 41(6):391-407.
Frakes, W. B. and Baeza-Yates, R., editors (1992). Information Retrieval: Data Structures and Algorithms.
Prentice-Hall.
Golub, G. H. and Loan, C. F. V. (1993). Matrix Computations. The Johns Hopkins University Press.
Hull, D. (1994). Improving text retrieval for the routing problem using latent semantic indexing. In Proceedings of the 17th ACM/SIGIR Conference, pages 282-290.
Kwok, K. L. (1996). A new method of weighting query terms for ad-hoc retrieval. In Proceedings of the 19th
ACM/SIGIR Conference, pages 187-195.
O'Brien, G. W. (1994). Information management tools for updating an svd-encoded indexing scheme. Technical Report UT-CS-94-259, University of Tennessee.
Sahami, M., Hearst, M., and Saund, E. (1996). Applying the multiple cause mixture model to text categorization. In Proceedings of the 13th International Machine Learning Conference.
Salton, G., editor (1971). The SMART Retrieval System: Experiments in Automatic Document Processing.
Prentice-Hall.
Singhal, A. (1997). Learning routing queries in a query zone. In Proceedings of the 20th International Conference on Research and Development in Information Retrieval.
Turk, M. A. and Pentland, A. P. (1991). Face recognition using eigenfaces. In IEEE Conference on Computer
Vision and Pattern Recognition, pages 586-591.
652 | 1,598 | Global Optimisation of Neural Network
Models Via Sequential Sampling
Joao FG de Freitas
Cambridge University
Engineering Department
Cambridge CB2 1PZ England
jfgf@eng.cam.ac.uk
[Corresponding author]
Arnaud Doucet
Cambridge University
Engineering Department
Cambridge CB2 1PZ England
ad2@eng.cam.ac.uk
Mahesan Niranjan
Cambridge University
Engineering Department
Cambridge CB2 1PZ England
niranjan@eng.cam.ac.uk
Andrew H Gee
Cambridge University
Engineering Department
Cambridge CB2 1PZ England
ahg@eng.cam.ac.uk
Abstract
We propose a novel strategy for training neural networks using sequential sampling-importance resampling algorithms. This global
optimisation strategy allows us to learn the probability distribution of the network weights in a sequential framework. It is well
suited to applications involving on-line, nonlinear, non-Gaussian or
non-stationary signal processing.
1 INTRODUCTION
This paper addresses sequential training of neural networks using powerful sampling
techniques. Sequential techniques are important in many applications of neural networks involving real-time signal processing, where data arrival is inherently sequential. Furthermore, one might wish to adopt a sequential training strategy to deal
with non-stationarity in signals, so that information from the recent past is lent more
credence than information from the distant past. One way to sequentially estimate
neural network models is to use a state space formulation and the extended Kalman
filter (Singhal and Wu 1988, de Freitas, Niranjan and Gee 1998). This involves local
linearisation of the output equation, which can be easily performed, since we only
need the derivatives of the output with respect to the unknown parameters. This
approach has been employed by several authors, including ourselves.
However, local linearisation leading to the EKF algorithm is a gross simplification of
the probability densities involved. Nonlinearity of the output model induces multimodality of the resulting distributions. Gaussian approximation of these densities
will lose important details. The approach we adopt in this paper is one of sampling.
In particular, we discuss the use of 'sampling-importance resampling' and 'sequential
importance sampling' algorithms, also known as particle filters (Gordon, Salmond
and Smith 1993, Pitt and Shephard 1997), to train multi-layer neural networks.
2 STATE SPACE NEURAL NETWORK MODELLING
We start from a state space representation to model the neural network's evolution
in time. A transition equation describes the evolution of the network weights, while
a measurement equation describes the nonlinear relation between the inputs and
outputs of a particular physical process, as follows:
$$w_{k+1} = w_k + d_k \qquad (1)$$
$$y_k = g(w_k, x_k) + v_k \qquad (2)$$
where $y_k \in \mathbb{R}^o$ denotes the output measurements, $x_k \in \mathbb{R}^d$ the input measurements and $w_k \in \mathbb{R}^m$ the neural network weights. The nonlinear measurement
mapping $g(\cdot)$ is approximated by a multi-layer perceptron (MLP). The measurements are assumed to be corrupted by noise $v_k$. In the sequential Monte Carlo
our examples we shall choose a zero mean Gaussian distribution with covariance
R. The measurement noise is assumed to be uncorrelated with the network weights
and initial conditions.
We model the evolution of the network weights by assuming that they depend
on the previous value Wk and a stochastic component d k. The process noise dk
may represent our uncertainty in how the parameters evolve, modelling errors or
unknown inputs. We assume the process noise to be a zero mean Gaussian process
with covariance Q, however other distributions can also be adopted. This choice of
distributions for the network weights requires further research. The process noise
is also assumed to be uncorrelated with the network weights.
The posterior density $p(W_k|Y_k)$, where $Y_k = \{y_1, y_2, \ldots, y_k\}$ and $W_k =
\{w_1, w_2, \ldots, w_k\}$, constitutes the complete solution to the sequential estimation problem. In many applications, such as tracking, it is of interest to estimate
one of its marginals, namely the filtering density $p(w_k|Y_k)$. By computing the filtering density recursively, we do not need to keep track of the complete history of
the weights. Thus, from a storage point of view, the filtering density turns out
to be more parsimonious than the full posterior density function. If we know the
filtering density of the network weights, we can easily derive various estimates of
the network weights, including centroids, modes, medians and confidence intervals.
3 SEQUENTIAL IMPORTANCE SAMPLING
In the sequential importance sampling optimisation framework, a set of representative samples is used to describe the posterior density function of the network
parameters. Each sample consists of a complete set of network parameters. More
specifically, we make use of the following Monte Carlo approximation:
$$p(W_k|Y_k) \approx \frac{1}{N} \sum_{i=1}^{N} \delta\!\left(W_k - W_k^{(i)}\right)$$
where $W_k^{(i)}$ represents the samples used to describe the posterior density and $\delta(\cdot)$
denotes the Dirac delta function. Consequently, any expectations of the form
$$E[f_k(W_k)] = \int f_k(W_k)\, p(W_k|Y_k)\, dW_k$$
may be approximated by the following estimate:
$$E[f_k(W_k)] \approx \frac{1}{N} \sum_{i=1}^{N} f_k\!\left(W_k^{(i)}\right)$$
where the samples $W_k^{(i)}$ are drawn from the posterior density function. Typically,
one cannot draw samples directly from the posterior. Yet, if we can draw samples
from a proposal density function $\pi(W_k|Y_k)$, we can transform the expectation under $p(W_k|Y_k)$ to an expectation under $\pi(W_k|Y_k)$ as follows:
$$E[f_k(W_k)] = \int f_k(W_k)\, \frac{p(W_k|Y_k)}{\pi(W_k|Y_k)}\, \pi(W_k|Y_k)\, dW_k
= \frac{\int f_k(W_k)\, q_k(W_k)\, \pi(W_k|Y_k)\, dW_k}{\int q_k(W_k)\, \pi(W_k|Y_k)\, dW_k}
= \frac{E_\pi[q_k(W_k) f_k(W_k)]}{E_\pi[q_k(W_k)]}$$
where the variables $q_k(W_k)$ are known as the unnormalised importance ratios:
$$q_k = \frac{p(Y_k|W_k)\, p(W_k)}{\pi(W_k|Y_k)} \qquad (3)$$
Hence, by drawing samples from the proposal function 7r(.), we can approximate
the expectations of interest by the following estimate:
$$E[f_k(W_k)] \approx \frac{\frac{1}{N}\sum_{i=1}^{N} f_k(W_k^{(i)})\, q_k(W_k^{(i)})}{\frac{1}{N}\sum_{i=1}^{N} q_k(W_k^{(i)})} = \sum_{i=1}^{N} f_k\!\left(W_k^{(i)}\right) \tilde{q}_k^{(i)} \qquad (4)$$
where the normalised importance ratios $\tilde{q}_k^{(i)}$ are given by:
$$\tilde{q}_k^{(i)} = \frac{q_k^{(i)}}{\sum_{j=1}^{N} q_k^{(j)}}$$
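As an illustration of equations (3)-(4), a self-contained sketch of the self-normalised estimator; the densities and proposal here are toy choices, not the network model.

```python
import numpy as np

def is_estimate(f, log_p, log_pi, sampler, n=10_000, rng=None):
    """Self-normalised importance sampling estimate of E_p[f(w)]."""
    rng = rng or np.random.default_rng(0)
    W = sampler(rng, n)                  # samples w^(i) ~ pi
    log_q = log_p(W) - log_pi(W)         # unnormalised ratios, eq. (3)
    q = np.exp(log_q - log_q.max())      # stabilise before normalising
    q_tilde = q / q.sum()                # normalised ratios
    return np.sum(q_tilde * f(W))        # eq. (4)

# Toy check: E[w^2] under N(0,1), sampling from a N(0,2) proposal.
est = is_estimate(
    f=lambda w: w ** 2,
    log_p=lambda w: -0.5 * w ** 2,       # N(0,1), up to a constant
    log_pi=lambda w: -0.25 * w ** 2,     # N(0,2), up to a constant
    sampler=lambda rng, n: rng.normal(0.0, np.sqrt(2.0), n),
)
print(est)   # close to 1
```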
It is not difficult to show (de Freitas, Niranjan, Gee and Doucet 1998) that, if we
assume $w$ to be a hidden Markov process with initial density $p(w_0)$ and transition
density $p(w_k|w_{k-1})$, various recursive algorithms can be derived. One of these
algorithms (HySIR), which we derive in (de Freitas, Niranjan, Gee and Doucet
1998), has been shown to perform well in neural network training. Here we extend
the algorithm to deal with multiple noise levels. The pseudo-code for the HySIR
algorithm with EKF updating is as follows¹:
¹We have made available the software for the implementation of the HySIR algorithm
at the following web-site: http://svr-www.eng.cam.ac.uk/~jfgf/software.html.
1. INITIALISE NETWORK WEIGHTS (k = 0).
2. For k = 1, ..., L:
   (a) SAMPLING STAGE:
       For i = 1, ..., N:
       - Predict via the dynamics equation:
         $\hat{w}_{k+1}^{(i)} = w_k^{(i)} + d_k^{(i)}$
         where $d_k^{(i)}$ is a sample from $p(d_k)$ ($N(0, Q_k)$ in our case).
       - Update the samples with the EKF equations.
       - Evaluate the importance ratios:
         $q_{k+1}^{(i)} = q_k^{(i)}\, p(y_{k+1} \mid \hat{w}_{k+1}^{(i)}) = q_k^{(i)}\, N(g(x_{k+1}, \hat{w}_{k+1}^{(i)}), R_k)$
       - Normalise the importance ratios.
   (b) RESAMPLING STAGE:
       For i = 1, ..., N:
       If $N_{\text{eff}} \geq$ Threshold:
         $w_{k+1}^{(i)} = \hat{w}_{k+1}^{(i)}$, $P_{k+1}^{(i)} = \hat{P}_{k+1}^{(i)}$, and $Q_{k+1}^{*(i)}$ is kept.
       Else:
         resample a new $w_{k+1}^{(i)}$ from $\{\hat{w}_{k+1}^{(j)}\}$ according to the normalised ratios and set $q_{k+1}^{(i)} = 1/N$.
where $K_{k+1}$ is known as the Kalman gain matrix, $I_m$ denotes the identity matrix of
size $m \times m$, and $R^*$ and $Q^*$ are two tuning parameters, whose roles are explained in
detail in (de Freitas, Niranjan and Gee 1997). $G$ represents the Jacobian matrix and,
strictly speaking, $P_k$ is an approximation to the covariance matrix of the network
weights. The resampling stage is used to eliminate samples with low probability
and multiply samples with high probability. Various authors have described efficient
algorithms for accomplishing this task in $O(N)$ operations (Pitt and Shephard 1997,
Carpenter, Clifford and Fearnhead 1997, Doucet 1998).
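The sketch below shows one such step in code. It is deliberately simplified: the EKF update that makes HySIR a hybrid is omitted, so this is the plain SIR filter for model (1)-(2), and the effective-sample-size threshold is an assumption.

```python
import numpy as np

def sir_step(W, q, x, y, g, q_var=1e-3, r_var=1e-4,
             threshold=0.5, rng=None):
    """One SIR step over N weight samples W (shape N x m) with
    importance ratios q (shape N)."""
    rng = rng or np.random.default_rng(0)
    N = W.shape[0]
    # Sampling stage: predict through the transition equation (1).
    W = W + rng.normal(0.0, np.sqrt(q_var), size=W.shape)
    # Importance ratios: Gaussian likelihood of the new observation (2).
    pred = np.array([g(w, x) for w in W])
    q = q * np.exp(-0.5 * (y - pred) ** 2 / r_var)
    q = q / q.sum()
    # Resampling stage when the effective sample size falls too low.
    n_eff = 1.0 / np.sum(q ** 2)
    if n_eff < threshold * N:
        idx = rng.choice(N, size=N, p=q)
        W, q = W[idx], np.full(N, 1.0 / N)
    return W, q
```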
4 EXPERIMENT
To assess the ability of the hybrid algorithm to estimate time-varying hidden parameters, we generated input-output data from a logistic function followed by a linear
scaling and a displacement as shown in Figure 1. This simple model is equivalent
to an MLP with one hidden neuron and an output linear neuron. We applied two
Gaussian ($N(0, 10)$) input sequences to the model and corrupted the weights and
output values with Gaussian noise ($N(0, 1\times10^{-3})$ and $N(0, 1\times10^{-4})$ respectively).
We then trained a second model with the same structure using the input-output
Figure 1: Logistic function with linear scaling and displacement used in the experiment. The weights were chosen as follows: $w_1(k) = 1 + k/100$, $w_2(k) = \sin(0.06k) - 2$, $w_3(k) = 0.1$, $w_4(k) = 1$, $w_5(k) = -0.5$.
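To reproduce the data-generating process, one can sample as below. The exact wiring of the five weights is an assumption read off Figure 1 (a logistic unit scaled by $w_4$ and displaced by $w_5$); the variances follow the text.

```python
import numpy as np

def g(w, x):
    """Assumed model: logistic unit, linear scaling, displacement."""
    w1, w2, w3, w4, w5 = w
    return w4 / (1.0 + np.exp(-(w1 * x[0] + w2 * x[1] + w3))) + w5

rng = np.random.default_rng(0)
T = 200
X = rng.normal(0.0, np.sqrt(10.0), size=(T, 2))      # N(0, 10) inputs
Y = np.empty(T)
for k in range(T):
    w = np.array([1 + k / 100.0, np.sin(0.06 * k) - 2, 0.1, 1.0, -0.5])
    w += rng.normal(0.0, np.sqrt(1e-3), size=5)       # weight noise
    Y[k] = g(w, X[k]) + rng.normal(0.0, np.sqrt(1e-4))  # output noise
```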
data generated by the first model. In so doing, we chose 100 sampling trajectories
and set $R$ to 10, $Q$ to $1\times10^{-3} I_5$, the initial weights variance to 5, $P_0$ to $100 I_5$, and
$R^*$ to $1\times10^{-5}$. The process noise parameter $Q^*$ was set to three levels: $5\times10^{-3}$,
$1\times10^{-3}$ and $1\times10^{-10}$, as shown in the plot of Figure 2 at time zero. In the training
[Plot: the three estimated noise levels versus samples.]
Figure 2: Noise level estimation with the HySIR algorithm.
phase, of 200 time steps, we allowed the model weights to vary with time. During
this phase, the HySIR algorithm was used to track the input-output training data
and estimate the latent model weights. In addition, we assumed three possible noise
variance levels at the beginning of the training session. After the 200th time step,
we fixed the values of the weights and generated another 200 input-output data
test sets from the original model. The input test data was then fed to the trained
model, using the weights values estimated at the 200-th time step. Subsequently,
the output prediction of the trained model was compared to the output data from
the original model to assess the generalisation performance of the training process.
As shown in Figure 2, the noise level of the trajectories converged to the true value
($1\times10^{-3}$). In addition, it was possible to track the network weights and obtain
accurate output predictions, as shown in Figures 3 and 4.
[Two scatter plots of model outputs against the network's output predictions, for the training and test phases.]
Figure 3: One step ahead predictions during the training phase (left) and stationary
predictions in the test phase (right).
[Plot: network weights tracked over 200 time steps, with histograms of the $w_2$ samples.]
Figure 4: Weights tracking performance with the HySIR algorithm. As indicated
by the histograms of W2, the algorithm performs a global search in parameter space.
5 CONCLUSIONS
In this paper, we have presented a sequential Monte Carlo approach for training
neural networks in a Bayesian setting. In particular, we proposed an algorithm
(HySIR) that makes use of both gradient and sampling information. HySIR can be
interpreted as a Gaussian mixture filter, in that only a few sampling trajectories
need to be employed. Yet, as the number of trajectories increases, the computational
requirements increase only linearly. Therefore, the method is also suitable as a
sampling strategy for approximating multi-modal distributions. Further avenues
of research include the design of algorithms for adapting the noise covariances R
and Q, studying the effect of different noise models for the network weights and
improving the computational efficiency of the algorithms.
ACKNOWLEDGEMENTS
Joao FG de Freitas is financially supported by two University of the Witwatersrand
Merit Scholarships, a Foundation for Research Development Scholarship (South
Africa), an ORS award and a Trinity College External Studentship (Cambridge).
References
Carpenter, J., Clifford, P. and Fearnhead, P. (1997). An improved particle filter for
non-linear problems, Technical report, Department of Statistics, Oxford University, England. Available at http://www.stats.ox.ac.uk/~clifford/index.htm.
de Freitas, J. F. G., Niranjan, M. and Gee, A. H. (1997). Hierarchical Bayesian-Kalman models for regularisation and ARD in sequential learning, Technical Report CUED/F-INFENG/TR 307, Cambridge University, http://svr-www.eng.cam.ac.uk/~jfgf.
de Freitas, J. F. G., Niranjan, M. and Gee, A. H. (1998). Regularisation in sequential
learning algorithms, in M. I. Jordan, M. J. Kearns and S. A. Solla (eds),
Advances in Neural Information Processing Systems, Vol. 10, MIT Press.
de Freitas, J. F. G., Niranjan, M., Gee, A. H. and Doucet, A. (1998). Sequential Monte Carlo methods for optimisation of neural network models, Technical Report CUED/F-INFENG/TR 328, Cambridge University, http://svr-www.eng.cam.ac.uk/~jfgf.
Doucet, A. (1998). On sequential simulation-based methods for Bayesian filtering,
Technical Report CUED/F-INFENG/TR 310, Cambridge University. Available at http://www.stats.bris.ac.uk:81/MCMC/pages/list.html.
Gordon, N. J., Salmond, D. J. and Smith, A. F. M. (1993). Novel approach
to nonlinear/non-Gaussian Bayesian state estimation, IEE Proceedings-F
140(2): 107-113.
Pitt, M. K. and Shephard, N. (1997). Filtering via simulation: Auxiliary particle
filters, Technical report, Department of Statistics, Imperial College of London,
England. Available at http://www.nuff.ox.ac.uk/economics/papers.
Singhal, S. and Wu, L. (1988). Training multilayer perceptrons with the extended
Kalman algorithm, in D. S. Touretzky (ed.), Advances in Neural Information
Processing Systems, Vol. 1, San Mateo, CA, pp. 133-140.
653 | 1,599 | Optimizing admission control while ensuring
quality of service in multimedia networks via
reinforcement learning*
Timothy X Brown†, Hui Tong†, Satinder Singh‡
†Electrical and Computer Engineering
‡Computer Science
University of Colorado
Boulder, CO 80309-0425
{timxb, tongh, baveja}@colorado.edu
Abstract
This paper examines the application of reinforcement learning to a
telecommunications networking problem . The problem requires that revenue be maximized while simultaneously meeting a quality of service
constraint that forbids entry into certain states. We present a general
solution to this multi-criteria problem that is able to earn significantly
higher revenues than alternatives.
1 Introduction
A number of researchers have recently explored the application of reinforcement learning
(RL) to resource allocation and admission control problems in telecommunications, e.g.,
channel allocation in wireless systems, network routing, and admission control in telecommunication networks [1, 6, 7, 8]. Telecom problems are attractive applications for RL
research because good, simple to implement, simulation models exist for them in the engineering literature that are both widely used and results on which are trusted, because
there are existing solutions to compare with, because small improvements over existing
methods can lead to significant savings in the long run, because they have discrete states,
and because there are many potential commercial applications. However, existing RL applications have ignored an issue of great practical importance to telecom engineers, that
of ensuring quality of service (QoS) while simultaneously optimizing whatever resource
allocation performance criterion is of interest.
This paper will focus on admission control for broadband multimedia communication networks. These networks are unlike the current internet in that voice, video, and data calls
arrive and depart over time and, in exchange for giving QoS guarantees to customers, the
network collects revenue for calls that it accepts into the network. In this environment, admission control decides what calls to accept into the network so as to maximize the earned
revenue while meeting the QoS guarantees of all carried customers.
*Timothy Brown and Hui Tong were funded by NSF CAREER Award NCR-9624791. Satinder
Singh was funded by NSF grant IIS-9711753.
Meeting QoS requires a decision function that decides when adding a new call will violate
QoS guarantees. Given the diverse nature of voice, video, and data traffic, and their often
complex underlying statistics, finding good QoS decision functions has been the subject
of intense research [2, 5]. Recent results have emphasized that robust and efficient QoS
decision functions require on-line adaptive methods [3].
Given we have a QoS decision function, deciding which of the heterogeneous arriving calls
to accept and which to reject in order to maximize revenue can be framed as a dynamic
programming problem. The rapid growth in the number of states with problem complexity has
led to reinforcement learning approaches to the problem [6].
In this paper we consider the problem of finding a control policy that simultaneously meets
QoS guarantees and maximizes the network's earned revenue. We show that the straightforward approach of mixing positive rewards for revenue with negative rewards for violating
QoS leads to sub-optimal policies. Ideally we would like to find the optimal policy from
the subset of policies that never violate the QoS constraint. But there is no a priori useful
way to characterize the space of policies that don't violate the QoS constraint. We present
a general approach to meeting such multicriteria that solves this problem and potentially
many other applications. Experiments show that incorporating QoS and RL yield significant gains over some alternative heuristics.
2 Problem Description
This section describes the admission control problem model that will be used. To emphasize the main features of the problem, networking issues such as queueing that are not
essential have been simplified or eliminated. It should be emphasized that these aspects
can readily be incorporated back into the problem.
We focus on a single network link. Users attempt to access the link over time and the
network immediately chooses to accept or reject the call. If accepted, the call generates
traffic in terms of bandwidth as a function of time. At a later time, the call terminates and
departs from the network. For each call accepted, the network receives revenue at a fixed
rate over the duration of the call. The network measures QoS metrics such as transmission
delays or packet loss rates and compares them against the guarantees given to the calls.
Thus, the problem is described by the call arrival, traffic, and departure processes; the
revenue rates; QoS metrics; QoS constraints; and link model. The choices used in this
paper are given in the next paragraph.
Calls are divided into discrete classes indexed by $i$. The calls are generated via a Poisson
arrival process (arrival rate $\lambda_i$) with exponential holding times (mean holding time $1/\mu_i$).
Within a call the bandwidth is an ON/OFF process where the traffic is either ON at rate $r_i$ or
OFF at rate zero, with mean holding times $1/\nu_i^{ON}$ and $1/\nu_i^{OFF}$. The effective immediate revenues
are $c_i$. The link has a fixed bandwidth $B$. The total bandwidth used by accepted calls varies
over time. The QoS metric is the fraction of time that the total bandwidth exceeds the link
bandwidth (i.e. the overload probability, p). The QoS guarantee is an upper limit, p*.
In previous work each call had a constant bandwidth over time so that the effect on QoS
was predictable. Variable rate traffic is safely approximated by assuming that it always
transmits at its maximum or peak rate. Such so-called peak rate allocation under-utilizes
the network; in some cases by orders of magnitude less than what is possible. Stochastic
traffic rates in real traffic, the desire for high network utilization/revenue, and the resulting
potential for QoS violations distinguish the problem in this paper.
3 Semi-Markov Decision Processes
At any given point of time, the system is in a particular configuration , x, defined by the
number of each type of ongoing calls. At random times a call arrival or a call termination
event, e, can occur. The configuration and event together determine the state of the system, s = (x, e). When an event occurs, the learner has to choose an action feasible for
that event. The choice of action, the event, and the configuration deterministically define
the next configuration and the payoff received by the learner. Then after an interval the
next event occurs, and this cycle repeats. The task of the learner is to determine a policy that maximizes the discounted sum of payoffs over an infinite horizon. Such a system
constitutes a finite state, finite action, semi-Markov decision process (SMDP).
3.1 Multi-criteria Objective
The admission control objective is to learn a policy that assigns an accept or reject decision
to each possible state of the system so as to maximize
$$J = E\left\{\int_0^{\infty} \gamma^t c(t)\, dt\right\},$$
where $E\{\cdot\}$ is the expectation operator, $c(t)$ is the total revenue rate of ongoing calls at
time $t$, and $\gamma \in (0, 1)$ is a discount factor that makes immediate profit more valuable than
future profit.¹
In this paper we restrict the maximization to policies that never enter states that violate QoS
guarantees. In general SMDP, due to stochastic state transitions, meeting such constraints
may not be possible (e.g. from any state no matter what actions are taken there is a possibility of entering restricted states). In this problem service quality decreases with more
calls in the system and adding calls is strictly controlled by the admission controller so that
meeting this QoS constraint is possible.
3.2 Q-learning
RL methods solve SMDP problems by learning good approximations to the optimal value
function , J*, given by the solution to the Bellman optimality equation which takes the
following form for the dynamic call admission problem:
$$J^*(s) = \max_{a \in A(s)} E_{\Delta t, s'}\left\{ c(s, a, \Delta t) + \gamma(\Delta t)\, J^*(s') \right\} \qquad (1)$$
where $A(s)$ is the set of actions available in the current state $s$, $\Delta t$ is the random time
until the next event, $c(s, a, \Delta t)$ is the effective immediate payoff with the discounting, and
$\gamma(\Delta t)$ is the effective discount for the next state $s'$.
We learn an approximation to J* using Watkin's Q-learning algorithm. To focus on the
dynamics of this paper's problem and not on the confounding dynamics of function approximation, the problem state space is kept small enough so that table lookup can be used.
Bellman's equation can be rewritten in Q-values as
$$J^*(s) = \max_{a \in A(s)} Q^*(s, a) \qquad (2)$$
Call Arrival: When a call arrives, the Q-value of accepting the call and the Q-value of
rejecting the call are determined. If rejection has the higher value, we drop the call. Else, if
acceptance has the higher value, we accept the call.
Call Termination: No action needs to be taken.
Whatever our decision, we update our value function as follows: on a transition from state
$s$ to $s'$ on action $a$ in time $\Delta t$,
$$Q(s, a) \leftarrow (1 - \alpha)\, Q(s, a) + \alpha \left( c(s, a, \Delta t) + \gamma(\Delta t) \max_{b \in A(s')} Q(s', b) \right) \qquad (3)$$
¹Since we will compare policies based on total reward rather than discounted sum of reward, we
can use the Tauberian approximation [4], i.e., $\gamma$ is chosen to be sufficiently close to 1.
where $\alpha \in [0, 1]$ is the learning rate.
In order for Q-learning to perform well, all potentially important state-action pairs $(s, a)$
must be explored. At each state, with probability $\epsilon$ we apply an action that will lead to a
less visited configuration, instead of the action recommended by the Q-values. However, to
update Q-values we still use the action $b$ recommended by Q-learning.
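A minimal sketch of this tabular update, assuming the effective discount takes the usual continuous-time form $\gamma(\Delta t) = \gamma^{\Delta t}$ (the paper leaves this unspecified):

```python
from collections import defaultdict

GAMMA, ALPHA = 0.999, 0.1
Q = defaultdict(float)           # table over (state, action) pairs

def q_update(s, a, c, dt, s_next, actions_next):
    """Semi-Markov Q-learning update of eq. (3)."""
    best_next = max(Q[(s_next, b)] for b in actions_next)
    Q[(s, a)] = (1 - ALPHA) * Q[(s, a)] + ALPHA * (c + GAMMA ** dt * best_next)
```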
4 Combining Revenue and Quality of Service
The primary question addressed in this paper is how to combine the QoS constraint
with the objective of maximizing revenue within this constraint. Let $p(s, a, \Delta t)$ and
$q(s, a, \Delta t)$ be the revenue and measured QoS components of the reward, $c(s, a, \Delta t)$. Ideally $c(s, a, \Delta t) = p(s, a, \Delta t)$ when the QoS constraint is met and $c(s, a, \Delta t) = -\text{Large}$
(where $-\text{Large}$ is any large negative value) when QoS is not met. If the QoS parameters
could be accurately measured between each state transition then this approach would be a
valid solution to the problem. In network systems, the QoS metrics contain a high degree
of variability. For example, overload probabilities can be much smaller than $10^{-3}$ while
the interarrival periods can be only a few ON/OFF cycles, so that except for states with the
most egregious QoS violations, most interarrival periods will have no overloads.
If the reward is a general function of revenue and QoS:
$$c(s, a, \Delta t) = f(p(s, a, \Delta t), q(s, a, \Delta t)), \qquad (4)$$
a sufficient and necessary condition for inducing the optimal policy with the QoS constraint is
given by:
$$E\{f(p(s, a, \Delta t), q(s, a, \Delta t))\} = \begin{cases} E\{p(s, a, \Delta t)\} & \text{if } E\{q(s, a, \Delta t)\} < p^* \\ -\text{Large} & \text{otherwise} \end{cases} \qquad (5)$$
For $f(\cdot)$ satisfying this condition, states that violate QoS will be highly penalized and never
visited. The actions for states that are visited will be based solely on revenue.
The Appendix gives a simple example showing that finding a fO that yields the optimal
policy is unlikely without significant prior knowledge about each state. Several attempts at
using (4) to combine QoS and revenue into the reward either violated QoS or had significantly lower reward.
A straightforward alternative exists for meeting the multiple criteria, formulated as follows.
For each criterion, $j$, we estimate a separate set of Q-factors, $Q_j(s, a)$. Each is updated via
on-line Q-learning. These are then combined post facto at the time of decision via some
function $\mathcal{Q}(\cdot)$ so that:
$$Q(s, a) = \mathcal{Q}(\{Q_j(s, a)\}). \qquad (6)$$
For example, in this paper the two criteria are estimated separately as $Q^p$ and $Q^q$, and
$$Q(s, a) = \mathcal{Q}(Q^p(s, a), Q^q(s, a)) = \begin{cases} Q^p(s, a) & \text{if } Q^q(s, a) < p^* \\ -\text{Large} & \text{otherwise} \end{cases} \qquad (7)$$
The structure of this problem allows us to estimate $Q^q$ without using (3). As stated, the
QoS is an intrinsic property of a state and not of future states, so it is independent of the
policy. This allows us to collect QoS statistics about each state and treat them in a principled
way (e.g. computing confidence intervals on the estimates). Using these QoS estimates,
the set of allowable states contracts monotonically over time eventually converging to a
fixed set of allowable states. Since the QoS constraint is guaranteed to reach a fixed point
asymptotically, the Q-learned policy also approaches a fixed point at the optimal policy via
standard Q-learning proofs. A related scheme is analyzed in [4], suggesting that similar
cases will also converge to optimal policies.
Many other QoS criteria do depend on the policy and require using (3). A constraint on the
expected overload probability with a given policy is an example.
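Putting (6)-(7) together, the runtime decision reduces to a few lines; a sketch, with the (assumed) convention that unexplored pairs default to a safe QoS estimate:

```python
P_STAR = 1e-3

def admit(s, Qp, Qq):
    """Admission decision from the two Q-tables: rule out actions whose
    estimated QoS exceeds p*, then pick the best revenue among the rest."""
    allowed = [a for a in (0, 1) if Qq.get((s, a), 0.0) < P_STAR]
    if not allowed:
        return 0                 # reject when no action is safe
    return max(allowed, key=lambda a: Qp.get((s, a), 0.0))
```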
5 Simulation Results
The experiment uses the following model. The total bandwidth is normalized to 1.0 unit of
traffic per unit time. The target overflow probability is $p^* = 10^{-3}$. Two source types are
considered with the properties shown in Table 1. As noted before, call holding times are
exponential and the arrivals are Poisson. For the first experiment, the ON/OFF holding times
are exponentially distributed, while for the second experiment, they are Pareto distributed.
The Pareto distribution is considered to be a more accurate representation of data traffic.
Table 1: Experimental parameters

Parameter                          Source Type I   Source Type II
ON rate, $r$                       0.08            0.2
Mean ON period, $1/\nu^{ON}$       5               5
Mean OFF period, $1/\nu^{OFF}$     15              45
Hyperbolic exponent, $a + 1$       2.08            2.12
Call arrival rate, $\lambda$       0.2             0.067
Call holding time, $1/\mu$         60              60
Immediate payoff, $c$              5               1
In the experiments, for each state-action pair $(s, a)$, $Q^p(s, a)$ is updated using (3). As
stated, in this case the update of $Q^q(s, a)$ does not need to use (3). Since random exploration is employed to ensure that all potentially important state-action pairs are tried, it naturally enables us to collect statistics that can be used to estimate QoS at these state-action
pairs, $Q^q(s, a)$. As the number of visits to each state-action pair increases, the estimated
$Q^q(s, a)$ becomes more and more accurate and, with confidence, we can gradually eliminate those state-action pairs that will violate the QoS requirement. As a consequence, $Q^p(s, a)$
is updated in a gradually correct subset of the state-action space, in the sense that QoS is met
for any action within this subspace. Initial Q-values for RL were artificially set such that
Q-learning started with the greedy policy (the greedy policy always accepts).
After training is completed, we apply a test data set to compare the policy obtained through
RL with alternative heuristic policies. The final QoS measurements obtained at the end of
the RL training while learning QoS are used for testing different policies. To test the RL
policies, when there is a new call arrival, the algorithm first determines if accepting this
call will violate QoS. If it will, the call is rejected; else the action is chosen according to
$a = \arg\max_{a \in A(s)} Q(s, a)$, where $A(s) = \{1 = \text{accept},\ 0 = \text{reject}\}$. For the QoS constraint
we use three cases: peak rate allocation; a statistical multiplexing function learned on-line,
denoted QoS learned; and a statistical multiplexing function given a priori, denoted QoS given.
We examine six different cases: (1) RL: QoS given; (2) RL: QoS learned; (3) RL: peak
rate; (4) a heuristic that only accepts calls from the most valuable class, i.e., type I, with
given QoS; (5) Greedy: QoS given; (6) Greedy: peak rate.
From the results shown in Fig. 1, it is clear that simultaneously doing Q-learning and
QoS learning converges correctly to the RL policy obtained by giving the QoS a priori and
doing standard Q-learning only. We see significant gains (about 15%) due to statistical
multiplexing: (6) vs (5), and (3) vs (1). The gains due to RL are about 25%: (6) vs (3),
and (5) vs (2). Together they yield about a 45% increase in revenue over conservative peak
rate allocation in this example. It is also clear from the figure that the RL policies perform
better than the heuristic policies. Fig. 2 shows the rejection ratios for different policies.
[Plot: total reward, normalized by the greedy total reward, versus training time steps ($\times 10^6$), for six policies: 1. RL: QoS given; 2. RL: QoS learned; 3. RL: peak rate; 4. Greedy: type I only; 5. Greedy: QoS given; 6. Greedy: peak rate.]
Figure 1: Comparison of total rewards of
RL while learning QoS, RL with given
QoS measurements, RL with peak rate,
greedy policies and peak rate allocation,
normalized by the greedy total reward exponential ON/OFF.
Figure 2: Comparison of rejection ratios
for the policies learned in Fig. 1.
We repeat the above experiments with Pareto distributed ON and OFF periods, using the
same parameters listed in Table 1. The results are shown in Figs. 3-4. Clearly, the different
ON/OFF distributions yield similar gains for RL.
6 Conclusion
This paper shows that a QoS constraint can be incorporated into an RL solution for maximizing a network's revenue, using a vector-valued Q-learning function. The formulation
is quite general and can be applied to many possible constraints. The approach, when applied to a simple networking problem, increases revenue by up to 45%. Future research
includes: using neural networks or other function approximators to deal with more complex problems for which lookup tables are infeasible; and extending admission control to
multi-link routing.
7 Appendix: Simple One-State Example
A simple example will show that a function with property (5) is unlikely. Consider a link
that can accept only one type of call and it can accept no more than one call. With no actions
possible when carrying a call there is only one state. Only two rewards are possible, c(R)
for reject and c(A) for accept. To fix the value function let c(R) = 0 and let p and q be
the random revenue and QoS experienced. Analysis of (1) and (2) shows that the accept
action will be chosen if and only if $E\{f(p, q)\} > 0$.
In this example, the revenues are random and possibly negative (e.g. if they are net after
cost of billing and transport). The call should be accepted if E {p} > 0 and E {q} < p*.
Therefore the correct reward function has the property:
$$E\{f(p, q)\} > 0 \quad \text{if } E\{p\} > 0 \text{ and } E\{q\} < p^* \qquad (8)$$
The point of the example is that an f(?) satisfying (8) requires prior knowledge about the
distributions of the revenue and the QoS as a function of the state. Even if it were possible
[Plot: total reward, normalized by the greedy total reward, versus training time steps ($\times 10^6$), comparing RL: QoS learned, Greedy: QoS given, and Greedy: peak rate under Pareto ON/OFF traffic.]
Figure 3: Comparison of total rewards of
RL while learning QoS, greedy policy and
peak rate allocation, normalized by the
greedy total reward - Pareto ON/OFF.
II
Figure 4: Comparison of rejection ratios
for the policies learned in Fig. 3.
for this example, setting up constraints such as (8) for a real problem with a huge state
space would be non-trivial because p and q are functions of the many state and action pairs.
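To make this concrete, here is a minimal Monte Carlo sketch of the one-state link; the function names and the example distributions are illustrative assumptions, not part of the formulation above. The correct rule needs the moments E{p} and E{q}, while a fixed f(·) must combine p and q sample by sample:

```python
import random

def correct_decision(samples, p_star):
    # Accept iff E{p} > 0 and E{q} < p*, with the moments estimated from data.
    mean_p = sum(p for p, _ in samples) / len(samples)
    mean_q = sum(q for _, q in samples) / len(samples)
    return mean_p > 0 and mean_q < p_star

def fixed_f_decision(samples, f):
    # Accept iff E{f(p, q)} > 0, as a function with property (8) would require.
    return sum(f(p, q) for p, q in samples) / len(samples) > 0

p_star = 0.5
samples = [(random.gauss(0.2, 1.0), random.gauss(0.4, 0.1)) for _ in range(20000)]
# A plausible fixed f: revenue minus a stiff penalty on per-sample QoS excess.
f = lambda p, q: p - 30.0 * max(0.0, q - p_star)
print(correct_decision(samples, p_star))  # True: E{p} > 0 and E{q} < p*
print(fixed_f_decision(samples, f))       # False: the per-sample penalty fires
# even though the QoS constraint holds in expectation, so this f must be tuned
# to the joint distribution of (p, q) -- exactly the prior knowledge at issue.
```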
654 | 16 |
Network Generality, Training Required,
and Precision Required
John S. Denker and Ben S. Wittner¹
AT&T Bell Laboratories
Holmdel, New Jersey 07733
Keep your hand on your wallet.
- Leon Cooper, 1987
Abstract
We show how to estimate (1) the number of functions that can be implemented by a
particular network architecture, (2) how much analog precision is needed in the connections in the network, and (3) the number of training examples the network must see
before it can be expected to form reliable generalizations.
Generality versus Training Data Required
Consider the following objectives: First, the network should be very powerful and versatile, i.e., it should implement any function (truth table) you like, and secondly, it
should learn easily, forming meaningful generalizations from a small number of training
examples. Well, it is information-theoretically impossible to create such a network. We
will present here a simplified argument; a more complete and sophisticated version can
be found in Denker et al. (1987).
It is customary to regard learning as a dynamical process: adjusting the weights (etc.)
in a single network. In order to derive the results of this paper, however, we take
a different viewpoint, which we call the ensemble viewpoint. Imagine making a very
large number of replicas of the network. Each replica has the same architecture as the
original, but the weights are set differently in each case. No further adjustment takes
place; the "learning process" consists of winnowing the ensemble of replicas, searching
for the one(s) that satisfy our requirements.
Training proceeds as follows: We present each item in the training set to every network
in the ensemble. That is, we use the abscissa of the training pattern as input to the
network, and compare the ordinate of the training pattern to see if it agrees with the
actual output of the network. For each network, we keep a score reflecting how many
times (and how badly) it disagreed with a training item. Networks with the lowest score
are the ones that agree best with the training data. If we had complete confidence in
the reliability of the training set, we could at each step simply throw away all networks
that disagree.

¹Currently at NYNEX Science and Technology, 500 Westchester Ave., White Plains, NY 10604
© American Institute of Physics 1988
For definiteness, let us consider a typical network architecture, with N_0 input wires and
N_l units in each processing layer l, for l ∈ {1, ..., L}. For simplicity we assume N_L = 1.
We recognize the importance of networks with continuous-valued inputs and outputs,
but we will concentrate for now on training (and testing) patterns that are discrete,
with N ≡ N_0 bits of abscissa and N_L = 1 bit of ordinate. This allows us to classify the
networks into bins according to what Boolean input-output relation they implement,
and simply consider the ensemble of bins.
There are 2^(2^N) possible bins. If the network architecture is completely general and
powerful, all 2^(2^N) functions will exist in the ensemble of bins. On average, one expects
that each training item will throw away at most half of the bins. Assuming maximal
efficiency, if m training items are used, then when m ≈ 2^N there will be only one bin
remaining, and that must be the unique function that consistently describes all the
data. But there are only 2^N possible abscissas using N bits. Therefore a truly general
network cannot possibly exhibit meaningful generalization: 100% of the possible data
is needed for training.
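The winnowing picture can be made concrete for tiny N by enumerating every bin explicitly. A sketch (names are illustrative, and this is feasible only for very small N, since there are 2^(2^N) bins):

```python
from itertools import product

def winnow(bins, training_items):
    # Ensemble viewpoint: discard every function bin that disagrees with any
    # training item. A bin is a truth table, i.e. a tuple of 2**N outputs.
    for x, y in training_items:
        bins = [tt for tt in bins if tt[x] == y]
    return bins

N = 3
all_bins = list(product([0, 1], repeat=2 ** N))  # 2**(2**3) = 256 bins
data = [(0, 1), (3, 0), (5, 1)]                  # (abscissa index, ordinate)
print(len(winnow(all_bins, data)))               # 32: each item halved the count
# Only after all 2**N abscissas are labeled does a single bin remain, so a
# completely general architecture generalizes to none of the unseen patterns.
```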
Now suppose that the network is not completely general, so that even with all possible
settings of the weights we can only create functions in 2^(S_0) bins, where S_0 < 2^N. We call
S_0 the initial entropy of the network. A more formal and general definition is given in
Denker et al. (1987). Once again, we can use the training data to winnow the ensemble,
and when m ≈ S_0, there will be only one remaining bin. That function will presumably
generalize correctly to the remaining 2^N - m possible patterns. Certainly that function
is the best we can do with the network architecture and the training data we were given.
The usual problem with automatic learning is this: If the network is too general, S_0
will be large, and an inordinate amount of training data will be required. The required
amount of data may be simply unavailable, or it may be so large that training would be
prohibitively time-consuming. This shows the critical importance of building a network
that is not more general than necessary.
Estimating the Entropy
In real engineering situations, it is important to be able to estimate the initial entropy
of various proposed designs, since that determines the amount of training data that will
be required. Calculating S_0 directly from the definition is prohibitively difficult, but we
can use the definition to derive useful approximate expressions. (You wouldn't want to
calculate the thermodynamic entropy of a bucket of water directly from the definition,
either.)
Suppose that the weights in the network at each connection i were not continuously
adjustable real numbers, but rather were specified by a discrete code with b_i bits. Then
the total number of bits required to specify the configuration of the network is

    B = Σ_i b_i.                                                (1)
Now the total number of functions that could possibly be implemented by such a network
architecture would be at most 2^B. The actual number will always be smaller than this,
since there are various ways in which different settings of the weights can lead to identical
functions (bins). For one thing, for each hidden layer l ∈ {1, ..., L-1}, the numbering of
the hidden units can be permuted, and the polarity of the hidden units can be flipped,
which means that 2^(S_0) is less than 2^B by a factor (among others) of ∏_l N_l! 2^(N_l). In
addition, if there is an inordinately large number of bits b_i at each connection, there
will be many settings where small changes in the connection will be immaterial. This
will make 2^(S_0) smaller by an additional factor. We expect ∂S_0/∂b_i ≈ 1 when b_i is small,
and ∂S_0/∂b_i ≈ 0 when b_i is large; we must now figure out where the crossover occurs.
The number of "useful and significant" bits of precision, which we designate b*, typically
scales like the logarithm of the number of connections to the unit in question. This can be
understood as follows: suppose there are N connections into a given unit, and an input
signal to that unit of some size A is observed to be significant (the exact value of A
drops out of the present calculation). Then there is no point in having a weight with
magnitude much larger than A, nor much smaller than A/N. That is, the dynamic
range should be comparable to the number of connections. (This argument is not exact,
and it is easy to devise exceptions, but the conclusion remains useful.) If only a fraction
1/S of the units in the previous layer are active (nonzero) at a time, the needed dynamic
range is reduced. This implies b* ~ log(N/S).
Note: our calculation does not involve the dynamics of the learning process. Some
numerical methods (including versions of back propagation) commonly require a number
of temporary "guard bits" on each weight, as pointed out by llichard Durbin (private
communication). Another log N bits ought to suffice. These bits are not needed after
learning is complete, and do not contribute to S_0.
If we combine these ideas and apply them to a network with N units in each layer, fully
connected, we arrive at the following expression for the number of different Boolean
functions that can be implemented by such a network:
(2)
where
    B ≈ L N^2 log N                                             (3)
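A back-of-the-envelope version of this estimate is easy to compute. The sketch below uses b* ≈ log2(fan-in/S) useful bits per connection, as suggested above; the function name and the clamping are our own conventions, not from the text:

```python
import math

def config_bits(layer_sizes, s=1.0):
    # Upper-bound the bits needed to specify a fully connected network, with
    # roughly b* = log2(fan_in / s) "useful and significant" bits per weight.
    total = 0.0
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        b_star = max(1.0, math.log2(max(2.0, n_in / s)))
        total += n_in * n_out * b_star
    return total

# L weight layers of N units each gives B on the order of L * N**2 * log N,
# matching (3).
print(config_bits([100, 100, 100, 100]))  # about 2e5 bits for L = 3, N = 100
```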
These results depend on the fact that we are considering only a very restricted type of
processing unit: the output is a monotone function of a weighted sum of inputs. Cover
(1965) discussed in considerable depth the capabilities of such units. Valiant (1986) has
explored the learning capabilities of various models of computation.
Abu-Mostafa has emphasized the principles of information and entropy and applied
them to measuring the properties of the training set. At this conference, formulas
similar to equation 3 arose in the work of Baum, Psaltis, and Venkatesh, in the context
of calculating the number of different training patterns a network should be able to
memorize. We originally proposed equation 2 as an estimate of the number of patterns
the network would have to memorize before it could form a reliable generalization. The
basic idea, which has numerous consequences, is to estimate the number of (bins of)
networks that can be realized.
References
1. Yaser Abu-Mostafa, these proceedings.
2. Eric Baum, these proceedings.
3. T. M. Cover, "Geometrical and statistical properties of systems of linear inequalities with applications in pattern recognition," IEEE Trans. Elec. Comp., EC-14,
326-334, (June 1965)
4. John Denker, Daniel Schwartz, Ben Wittner, Sara Solla, John Hopfield, Richard
Howard, and Lawrence Jackel, Complex Systems, in press (1987).
5. Demetri Psaltis, these proceedings.
6. L. G. Valiant, SIAM J. Comput. 15(2), 531 (1986), and references therein.
7. Santosh Venkatesh, these proceedings.
655 | 160 |
ANALYZING THE ENERGY LANDSCAPES
OF DISTRIBUTED
WINNER-TAKE-ALL NETWORKS
David S. Touretzky
School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213
ABSTRACT
DCPS (the Distributed Connectionist Production System) is a neural
network with complex dynamical properties. Visualizing the energy
landscapes of some of its component modules leads to a better intuitive
understanding of the model, and suggests ways in which its dynamics
can be controlled in order to improve performance on difficult cases.
INTRODUCTION
Competition through mutual inhibition appears in a wide variety of network designs.
This paper discusses a system with unusually complex competitive dynamics. The
system is DCPS, the Distributed Connectionist Production System of Touretzky
and Hinton (1988). DCPS is a Boltzmann machine composed of five modules,
two of which, labeled "Rule Space" and "Bind Space," are winner-take-all (WTA)
networks. These modules interact via their effects on two attentional modules called
clause spaces. Clause spaces are another type of competitive architecture based on
mutual inhibition, but they do not produce WTA behavior. Both clause spaces
provide evidential input to both WTA nets, but since connections are symmetric
they also receive top-down "guidance" from the WTA nets. Thus, unlike most
other competitive architectures, in DCPS the external input to a WTA net does
not remain constant as its state evolves. Rather, the present output of the WTA
net helps to determine which evidence will become visible in the clause spaces in the
future. This dynamic attentional mechanism allows rule and bind spaces to work
together even though they are not directly connected.
DCPS actually uses a distributed version of winner-take-all networks whose operating characteristics differ slightly from the non-distributed version. Analyzing the
energy landscapes of DWTA networks has led to a better intuitive understanding
of their dynamics. For a complete discussion of the role of DWTA nets in DCPS,
and the ways in which insights gained from visualization led to improvements in
the system's stochastic search behavior, see [Touretzky, 1989].
DISTRIBUTED WINNER-TAKE-ALL NETWORKS
In classical WTA nets [Feldman & Ballard, 1982], a unit's output value is a continuous quantity that reflects its activation level. In this paper we analyze a stochastic,
distributed version of winner-take-all dynamics using Boltzmann machines, whose
units have only binary outputs [Hinton & Sejnowski, 1986]. The amount of evidential input to these units determines its energy gap [Hopfield, 1982], which in turn
determines its probability of being active. The network's degree of confidence in
a hypothesis is thus reflected in the amount of time the unit spends in the active
state. A good instantaneous approximation to strength of support can be obtained
by representing each hypothesis with a clique of k independent units looking at a
common evidence pool. The number of active units in a clique reflects the strength
of that hypothesis. DCPS uses cliques of size 40. Units in rival cliques compete via
inhibitory connections.
If all units in a clique have identical receptive fields, the result is an "ensemble"
Boltzmann machine [Derthick & Tebelskis, 1988]. In DCPS the units have only
moderately sized, but highly overlapped, receptive fields, so the amount of evidence
individual units perceive is distributed binomially. Small excitatory weights between
sibling units help make up for variations in external evidence. They also make states
where all the units in a single clique are active be powerful attractors.
Energy tours in a DWTA take one of four basic shapes. Examples may be seen in
Figure 1a. Let e be the amount of external evidence available to each unit, θ the
unit's threshold, k the clique size, and w_s the excitatory weight between siblings.
The four shapes are:
Eager vee: the evidence is above threshold (e > θ). The system is eager to
turn units on; energy decreases as the number of active units goes up. We
have a broad, deep energy well, which the system will naturally fall into given
the chance.
Reluctant vee: the evidence is below threshold, but a little bit of sibling
influence (fewer than k/2 siblings) is enough to make up the difference and
put the system over the energy barrier. We have e < θ < e + w_s(k-1)/2. The
system is initially reluctant to turn units on because that causes the energy to
go up, but once over the hump it willingly turns on more units. With all units
in the clique active, the system is in an energy well whose energy is below
zero.
Dimpled peak: with higher thresholds the total energy of the network may
remain above zero even when all units are on. This happens when more than
half of the siblings must be active to boost each unit above threshold, i.e.,
e + w_s(k-1) > θ > e + w_s(k-1)/2. The system can still be trapped in
the small energy well that remains, but only at low temperatures. The well
is hard to reach since the system must first cross a large energy barrier by
traveling far uphill in energy space. Even if it does visit the well, the system
may easily bounce out of it again if the well is shallow.
Smooth peak: when θ > e + w_s(k-1), units will be below threshold even
with full sibling support. In this case there is no energy well, only a peak.
The system wants to turn all units off.
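As a check on these four regimes, the energy of a single clique with n of its k units active (rivals all off, identical evidence e per unit) can be written down directly. This sketch assumes the standard Hopfield energy E = -Σ_i (e - θ) s_i - Σ_{i<j} w_s s_i s_j, so the regime boundaries are exactly the inequalities above:

```python
def tour_energy(n, e, theta, w_s):
    # Energy with n clique units active: each active unit contributes its
    # (evidence - threshold) gap, and each active pair one sibling weight.
    return -n * (e - theta) - w_s * n * (n - 1) / 2.0

def tour_shape(e, theta, w_s, k):
    if e > theta:
        return "eager vee"
    if theta < e + w_s * (k - 1) / 2.0:
        return "reluctant vee"
    if theta < e + w_s * (k - 1):
        return "dimpled peak"
    return "smooth peak"

# DCPS-like numbers: k = 40, sibling weight +2, evidence 100 vs. 40 vs. 5.
for e in (100.0, 40.0, 5.0):
    print(e, tour_shape(e, theta=69.0, w_s=2.0, k=40))
# 100 -> eager vee, 40 -> reluctant vee, 5 -> dimpled peak (at threshold 69)
```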
VISUALIZING ENERGY LANDSCAPES
Let's examine the energy landscape of one WTA space when there is ample evidence
in the clause spaces for the winning hypothesis. We select three hypotheses, A, B,
and C, with disjoint evidence populations. Let hypothesis B be the best supported
one with evidence 100, and let A have evidence 40 and C have evidence 5. We will
simplify the situation slightly by assuming that all units in a clique perceive exactly
the same evidence. In the left half of Figure 1b we show the energy curves for A,
B, and C, using a value of 69 for the unit thresholds.¹ Each curve is generated by
starting with all units turned off; units for a particular hypothesis are turned on one
at a time until all 40 are on; then they are turned off again one at a time, making
the curve symmetric. Since the evidence for hypothesis A is a bit below threshold,
its curve is of the "reluctant vee" type. The evidence for hypothesis B is well above
threshold, so its curve is an "eager vee." Hypothesis C has almost no evidence; its
"dimpled peak" shape is due almost entirely to sibling support. (Sibling weights
have a value of +2; rival weights a value of -2.)
Note that the energy well for B is considerably deeper than for A. This means at
moderate temperature the model can pop out of A's energy well, but it is more
likely to remain in B's well. The well for B is also somewhat broader than the well
for A, making it easier for the B attractor to capture the model; its attractor region
spans a larger portion of state space.
The energy tours for hypotheses A, B, and C correspond to traversing three orthogonal edges extending from a corner of a 40 x 40 x 40 cube. A point at location
(x, y, z) in this cube corresponds to x A units, y B units, and z C units being
active. During the stochastic search, A and B units will be flickering on and off
simultaneously, so the model will also visit internal points of the cube not covered
in the energy tour diagram. To see these points we will use two additional graphic
representations of energy landscapes. First, note that hypothesis C gets so little
support that we safely can ignore it and concentrate on A and B. This allows us
to focus on just the front face of the state space cube. In Figure 2a, the number
of active A units runs from zero to forty along the vertical axis, and the number of
active B units runs from zero to forty along the horizontal axis. The arrows at each
point on the graph show legal state transitions at zero temperature. For example,
at the point where there are are 38 active B units and 3 active A units there are
two arrows, pointing down and to the right. This means there are two states the
model could enter next: it could either turn off one of the active A units, or turn
on one more B unit, respectively. At nonzero temperatures other state transitions
are possible, corresponding to uphill moves in energy space, but these two remain
the most probable.

¹All the weights and thresholds used in this paper are actual DCPS values taken from [Touretzky
& Hinton, 1988].
The points in the upper left and lower right corners of Figure 2a are marked by
"Y" shapes. These represent point attractors at the bottoms of energy wells; the
model will not move out of these states unless the temperature is greater than zero.
Other points in state space are said to be within the region of a particular attractor
if all legal transition sequences (at T = 0) from those points lead eventually to the
attractor. The attractor regions of A and B are outlined in the figure. Note that
the B attractor covers more area than A, as predicted by its greater breadth in
the energy tour diagram. Note also that there is a small ridge between the two
attractor regions. From starting points on the ridge the model can end up in either
final state.
Figure 2b shows the depths of the two attractors. The energy well for B is substantially deeper than the well for A. Starting at the point in the lower left corner where
there are zero A units and zero B units active, the energy falls off immediately when
moving in the B direction (right), but rises initially in the A direction (left) before
dropping into a modest energy well when most of the A units are on. Points in
the interior of the diagram, representing a combination of A and B units active,
have higher energies than points along the edges due to the inhibitory connections
between units in rival cliques.
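The same bookkeeping reproduces the interior of this state plane. A small sketch, assuming every unit in a clique sees identical evidence, with the sibling weight +2 and rival weight -2 quoted earlier:

```python
def clique_energy(n, e, theta=69.0, w_sib=2.0):
    return -n * (e - theta) - w_sib * n * (n - 1) / 2.0

def state_energy(x, y, eA=40.0, eB=100.0, w_riv=-2.0):
    # x active A units, y active B units; every rival pair adds -w_riv = +2.
    return clique_energy(x, eA) + clique_energy(y, eB) - w_riv * x * y

def downhill_moves(x, y):
    # Zero-temperature transitions: flip one unit, accept only energy decreases.
    e0 = state_energy(x, y)
    steps = [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]
    return [(a, b) for a, b in steps
            if 0 <= a <= 40 and 0 <= b <= 40 and state_energy(a, b) < e0]

print(state_energy(0, 40), state_energy(40, 0))  # B's well is much deeper than A's
print(downhill_moves(3, 38))  # [(2, 38), (3, 39)]: turn an A unit off, or a B unit on
```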
We can see from Figures 1b and 2 that the attractor for A, although narrower and
shallower than the one for B, is still sizable. This is likely to mislead the model, so
that some of the time it will get trapped in the wrong energy well. The fact that
there is an attractor for A at all is due largely to sibling support, since the raw
evidence for A is less than the rule unit threshold.
We can eliminate the unwanted energy well for A by choosing thresholds that exceed
the maximum sibling support of 2 x 39 = 78. DCPS uses a value of 119. However,
early in the stochastic search the evidence visible in the clause spaces will be lower
than at the conclusion of the search; high thresholds combined with low evidence
would make the B attractor small and very hard to find. (See the right half of
Figure 1c, and Figure 3.) Under these conditions the largest attractor is the one
with all units turned off: the null hypothesis.
DISCUSSION
Our analysis of energy landscapes pulls us in two directions: we need low thresholds
so the correct attractor is broad and easy to find, but we need high thresholds to
eliminate unwanted attractors associated with local energy minima. Two solutions
have been investigated. The first is to start out with low thresholds and raise them
gradually during the stochastic search. This "pulls the rug out from under" poorlysupported hypotheses while giving the model time to find the desired winner. The
second solution involves clipping a corner from the state space hypercube so that
the model may never have fewer than 40 units active at a time. This prevents the
model from falling into the null attractor. When it attempts to drop the number of
active units below 40 it is kicked away from the clipped edge by forcing it to turn
on a few inactive units at random.
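A minimal sketch of that corner-clipping rule (the names and the use of Python's random module are our own assumptions):

```python
import random

def clip_corner(active, units, min_active=40, rng=random):
    # If the search tries to drop below min_active units, kick the state off
    # the clipped edge by turning a few randomly chosen inactive units back on.
    inactive = [u for u in units if u not in active]
    while len(active) < min_active and inactive:
        active.add(inactive.pop(rng.randrange(len(inactive))))
    return active
```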
Although DCPS is a Boltzmann machine it does not search the state space by
simulated annealing in the usual sense. True annealing implies a slow reduction
in temperature over many update cycles. Stochastic search in DCPS takes place
at a single temperature that has been empirically determined to be the model's
approximate "melting point." The search is only allowed to take a few cycles;
typically it takes less than 10. Therefore the shapes of energy wells and the dynamics
of the search are particularly important, as they determine how likely the model is
to wander into particular attractor regions.
The work reported here suggests that stochastic search dynamics may be improved
by manipulating parameters other than just absolute temperature and cooling rate.
Threshold growing and corner clipping appear useful in the case of DWTA nets.
Additional details are available in [Touretzky, 1989].
Acknowledgments
This research was supported by the Office of Naval Research under contract N00014-86-K-0678, and by National Science Foundation grant EET-8716324. I thank Dean
Pomerleau, Roni Rosenfeld, Paul Gleichauf, and Lokendra Shastri for helpful comments, and Geoff Hinton for his collaboration in the development of DCPS.
References
[1] Derthick, M. A., & Tebelskis, J. M. (1988) "Ensemble" Boltzmann machines
have collective computational properties like those of Hopfield and Tank neurons. In D. Z. Anderson (ed.), Neural Information Processing Systems. New
York: American Institute of Physics.
[2] Feldman, J. A., & Ballard, D. H. (1982) Connectionist models and their properties. Cognitive Science 6:205-254.
[3] Hinton, G. E., & Sejnowski, T. J. (1986) Learning and relearning in Boltzmann
machines. In D. E. Rumelhart and J. L. McClelland (eds.), Parallel Distributed
Processing: Explorations in the Microstructure of Cognition, volume 1. Cambridge, MA: Bradford Books/The MIT Press.
[4] Hopfield, J. J. (1982) Neural networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences
USA, 79:2554-2558.
[5] Touretzky, D. S., & Hinton, G. E. (1988) A distributed connectionist product.ion
system. Cognitive Science 12(3):423-466.
[6] Touretzky, D. S. (1989) Controlling search dynamics by manipulating energy
landscapes. Technical report CMU-CS-89-113, School of Computer Science,
Carnegie Mellon University, Pittsburgh, PA.
[Figure 1: (a) four basic shapes for DWTA energy tours; (b) comparison of low
vs. high thresholds (69 and 119) in energy tours where there is a high degree of
evidence for hypothesis B (evidence: A = 40, B = 100, C = 5); (c) corresponding
tours with low evidence for B (evidence: A = 40, B = 60, C = 5).]
[Figure 2: low thresholds and high evidence, as in the left half of Figure 1b
(evidence: A = 40, B = 100; threshold = 69). (a) Legal state transitions at zero
temperature; (b) the corresponding energy surface.]

[Figure 3: the corresponding state-transition and energy-surface diagrams for the
high-threshold case.]
656 | 1,600 | Non-linear PI Control Inspired by
Biological Control Systems
Lyndon J. Brown
Gregory E. Gonye
James S. Schwaber *
Experimental Station, E. I. DuPont de Nemours & Co., Wilmington, DE 19880
Abstract
A non-linear modification to PI control is motivated by a model
of a signal transduction pathway active in mammalian blood pressure regulation. This control algorithm, labeled PII (proportional
with intermittent integral), is appropriate for plants requiring exact set-point matching and disturbance attenuation in the presence
of infrequent step changes in load disturbances or set-point. The
proportional aspect of the controller is independently designed to
be a disturbance attenuator and set-point matching is achieved
by intermittently invoking an integral controller. The mechanisms
observed in the Angiotensin II/AT1 signaling pathway are used to
control the switching of the integral control. Improved performance
over PI control is shown on a model of cyclopentenol production.
A sign change in plant gain at the desirable operating point causes
traditional PI control to result in an unstable system. Application of this new approach to this problem results in stable exact
set-point matching for achievable set-points.
Biological processes have evolved sophisticated mechanisms for solving difficult control problems. By analyzing and understanding these natural systems it is possible
that principles can be derived which are applicable to general control systems. This
approach has already been the basis for the field of artificial neural networks, which
are loosely based on a model of the electrical signaling of neurons. A suitable candidate system for analysis is blood pressure control. Tight control of blood pressure
is critical for survival of an animal. Chronically high levels can lead to premature
death. Low blood pressure can lead to oxygen and nutrient deprivation, and sudden
load changes must be quickly responded to or loss of consciousness can result. The
baroreflex, reflexive change of heart rate in response to blood pressure challenge,
has been previously studied in order to develop some insights into biological control
systems [1, 2, 3].
*Lyndon.J.Brown@usa.dupont.com (address correspondence to this author),
Gregory.E.Gonye-PHD@usa.dupont.com, James.S.Scwhaber@usa.dupont.com
Neurons exhibit complex dynamic behavior that is not directly revealed by their
electrical behavior, but is incorporated in biochemical signal transduction pathways. This is an important basis for plasticity of neural networks. The area of the
brain to which the baroreceptor afferents project is the nucleus of the tractus solitarius
(NTS). The neurons in the NTS are rich with diverse receptors for signaling pathways. It is logical that this richness and diversity play a crucial role in the signal
processing that occurs here. Hormonal and neurotransmitter signals can activate
signal transduction pathways in the cell, which result in physical modification of
some components of a cell, or altered gene regulation. Fuxe et al [4] have shown the
presence of the angiotensin II/AT1 receptor pathway in NTS neurons, and Herbert
[5] has demonstrated its ability to affect the baroreflex.
To develop understanding of the effects of biochemical pathways, a detailed kinetic
model of the angiotensin/AT1 pathway was developed. Certain features of this
model and the baroreflex have interesting characteristics from a control engineering
perspective. These features have been used to develop a novel control strategy.
The resulting control algorithm utilizes a proportional controller that intermittently
invokes integral action to achieve set-point matching. Thus the controller will be
labeled PII.
The use of integral control is popular as it guarantees cancellation of offsets and
ensures exact set-point matching. However, the use of integral control does have
drawbacks. It introduces significant lag in the feedback system, which limits the
bandwidth of the system. Increasing the integral gain, in order to improve response
time, can lead to systems with excessive overshoot, excessive settling times, and
less robustness to plant changes or uncertainty. Many processes in the chemical
industry have a steady-state response curve with a maximum and frequently, the
optimal operating condition is at this peak. Unfortunately, any controller with true
integral action will be unstable at this operating point.
In a crude sense, the integrator learns the constant control action required to achieve
set-point matching. If the integral control is viewed as a simple learning device, than
a logical step is to remove it from the feedback loop once the necessary offset has
been learned. If the offset is being successfully compensated for, only noise remains
as a source for learning. It has been well established that learning based on nothing
but noise leads to undesirable results. The maxim, 'garbage in, garbage out' will
apply. Without integral control, the proportional controller can be made more aggressive while maintaining stability margins and/or control actions at similar levels.
This control strategy will be appropriate for plants with infrequent step changes in
set-points or loads. The challenge becomes deciding when, and how to perform this
switching so that the resulting controller provides significant improvements.
1 Angiotensin II/AT1 Receptor Signal Transduction Model
Regulation of blood pressure is a vital control problem in mammals. Blood pressure
is sensed by stretch sensitive cells in the aortic arch and carotid sinus. These cells
transmit signals to neurons in the NTS which are combined with other signals from
the central nervous system (CNS) resulting in changes to the cardiac output and
vascular tone [6]. This control is implemented by two parallel systems in the CNS,
the sympathetic and parasympathetic nervous systems. The sympathetic system
primarily affects the vascular tone and the parasympathetic system affects cardiac
output [7]. Cardiac control can have a larger and faster effect, but long term
application of this control is injurious to the overall health of the animal. Pottman
et al [2] have suggested that these two systems separately control for long term
set-point control and fast disturbance rejection.
One receptor in NTS neuronal cells is the AT1 receptor which binds Angiotensin
II. The NTS is located in the brain stem where much of the processing of the autonomic regulatory systems reside. Angiotensin infusion in this region of the brain
has been shown to significantly affect blood pressure control. In order to understand this aspect of neuronal behavior, a detailed kinetic model of this signaling
pathway was developed. The pathway is presented in Figure 2. The outputs can
be considered to be the concentrations of Gq·GTP, Gβγ, activated protein kinase
C, and/or calmodulin-dependent protein kinase.
Several reactions in the cascade are of interest. The binding of phospholipase C is
significantly slower than the other steps in the reaction. This can be modeled as
a first order transfer function with a long time constant or as a pure integrator.
The IP3 receptor is a ligand-gated channel on the membrane of the endoplasmic
reticulum (ER). As Figure 2 shows, when IP3 binds to this receptor, calcium is
released from the ER into the cell's cytoplasm. However, the IP3 receptor also
has 2 binding sites on its cytoplasmic domain for binding calcium. The first has
relatively fast dynamics and causes a substantial increase in the channel opening.
The second calcium binding site has slower dynamics and inactivates the channel.
The effect of this first binding site is to introduce positive feedback into the model.
In traditional control literature, positive feedback is generally undesirable. Thus it
is very interesting to see positive feedback in neuronal control systems.
A typical surface response for the model, comparing the time response of activated
calmodulin versus the peak concentration of a pulse of angiotensin, is shown in
Figure 1. The results are consistent with behavior of cells measured by Li and
Guyenet [8]. The output level is seen to abruptly rise after a delay, which is a
decreasing function of the magnitude of the input. Unlike a linear system, both the
magnitude and speed of the response of the system are functions of the magnitude
of the input. Further, the relaxing of the system to its equilibrium is a very slow
response as compared to its activation. This behavior can be attributed to the
positive feedback response inherent to the IP3 receptor. The effect of the slow
dynamics of the phospholipase C binding, and the IP3 receptor dynamics results in
an activation behavior similar to a threshold detector on the integrated input signal.
However, removal of the input results in a slow recovery back to zero. The activation
of the calcium calmodulin dependent protein kinase can lead to phosphorilation of
channels that result in synaptic conductance changes that are functionally related
to the amount of activated kinase. The activation of calcium calmodulin can also
lead to changes in gene regulation that could potentially result in long term changes
in the neurons synaptic conductances.
2 Proportional with Intermittent Integral Control
Key features from the model that are incorporated in the control law are:
1. separate controllers for set-point control and disturbance attenuation;
2. activation of set-point controller when integrated error exceeds threshold;
3. strength of integral action when activated will be a function of the speed
with which activation was achieved;
4. smooth removal of integral action, without disruption of control action.
[Figure 1: Schematic and Surface Responses of Angiotensin II/AT1 Model.]

The PII controller begins initially as a proportional controller with a nominal offset
added to its output. The integrated error is monitored. The integral controller
is turned on when the integrated error exceeds a threshold. Once the integral
control action is activated, it remains active as long as the error is excessive. Once
the error is not significant, the integral control action can be removed in a
smooth manner. This has been achieved by allowing the value of the integral gain,
Ki, to decay exponentially. It is important that this is done in such a manner
as not to affect the actual control signal. This can be achieved by adjusting the
offset appropriately. Since u = K_p e + K_i ∫e ds and dK_i/dt ∝ -K_i, u can be made
constant for constant e by adding an offset K_o with dK_o/dt ∝ K_i ∫e ds. The integral
action is completely removed once Ki has decayed to the point where it is no longer
significant. In order to make the effect of activation of the integrator correspond
to the behavior of the angiotensin model, the integrated error is scaled by the time
spent reaching the threshold when the integrator is turned on. This corresponds to
point 3 above.
If the error undergoes significant change when the integrator is already fully active
the system will behave similarly to a system with a PI controller whose gains have
been set too high. This may result in significant overshoot and possibly instability.
There is a small chance that even with infrequent step changes, the residual error, or
random disturbance could trigger the integrator immediately before a step change.
In a biological control system, control does not rest in one neuron or necessarily in
one signal transduction pathway but in multiple pathways. Furthermore, study of
individual cells shows a great deal of variability in the details of their behavior. By
implementing the intermittent integral control as a sum of many equivalent controllers, as in left side of Figure 2, with variability in their threshold parameters,
a controller can be developed that is not subject to the chance of being fully activated by random disturbance or residual error. During steady-state operation these
integrators will quickly deactivate when noise or small disturbances trigger them,
as the error will be less than the threshold. However, an actual step change in the
error signal will result in all or most of the integrators activating, and remaining
active until the error is compensated for.
The block diagram on the right side of Figure 2 and the time-dependent definitions in
Table 1 precisely define the control algorithm for the single-integrator case.
[Figure 2: Block Diagrams for Control Algorithm Implementations. Left: five
parallel intermittent integrators summed with the proportional path, with
k_i^s = k_i1 + k_i2 + k_i3 + k_i4 + k_i5 and staggered thresholds
x_u1 < x_u2 < x_u3 < x_u4 < x_u5 and e_u1 < e_u2 < e_u3 < e_u4 < e_u5.
Right: the single-integrator case.]
If                                      then
t = t_0                                 x(t_0) = 0, K_i(t_0) = 0, K_o(t_0) = K_o^*, t_l(t_0) = t_0
K_i(t) = 0 and |x(t)| > x_u             K_i(t+) = K_i^*,  x(t+) = x(t) / max(1, K_s (t - t_l))
|K_i(t)| > K̄_i and |e(t)| < e_u         dK_i/dt = -K_decay K_i,  dK_o/dt = K_decay K_i x
0 < |K_i(t)| < K̄_i                      K_o(t+) = K_o(t) + K_i(t) x(t), K_i(t+) = 0, x(t+) = 0, t_l(t+) = t
Otherwise                               dK_i/dt = 0,  dK_o/dt = 0,  dx/dt = e

Table 1: Definition of Gains for PII Control
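A discrete-time sketch of the single-integrator law follows. The sample time dt, the constant Ks, and the deactivation level Ki_min (standing in for K̄_i) are assumptions made to render the garbled entries of Table 1, not values from the paper:

```python
class PIIController:
    """Proportional with intermittent integral (PII), single-integrator case."""

    def __init__(self, Kp, Ki_star, Kdecay, Ks, x_u, e_u, Ki_min, Ko0=0.0):
        self.Kp, self.Ki_star, self.Kdecay, self.Ks = Kp, Ki_star, Kdecay, Ks
        self.x_u, self.e_u, self.Ki_min = x_u, e_u, Ki_min
        self.x = 0.0    # integrated error
        self.Ki = 0.0   # intermittent integral gain
        self.Ko = Ko0   # offset, initialized to the nominal value K_o^*
        self.t = 0.0    # current time
        self.t_l = 0.0  # time of the last integrator reset

    def update(self, e, dt):
        """One sample step: return u = K_p e + K_i x + K_o for error e."""
        self.t += dt
        if self.Ki == 0.0 and abs(self.x) > self.x_u:
            # Activate: scale the accumulated error by how long the threshold
            # crossing took, so fast activations give strong integral action.
            self.x /= max(1.0, self.Ks * (self.t - self.t_l))
            self.Ki = self.Ki_star
        elif abs(self.Ki) > self.Ki_min and abs(e) < self.e_u:
            # Bleed the integral gain into the offset; K_i x + K_o stays
            # constant for constant x, so the control signal is undisturbed.
            self.Ko += self.Kdecay * self.Ki * self.x * dt
            self.Ki -= self.Kdecay * self.Ki * dt
        elif 0.0 < abs(self.Ki) <= self.Ki_min:
            # Decay finished: fold the remainder into the offset and rearm.
            self.Ko += self.Ki * self.x
            self.Ki, self.x, self.t_l = 0.0, 0.0, self.t
        else:
            self.x += e * dt  # otherwise keep integrating the error
        return self.Kp * e + self.Ki * self.x + self.Ko
```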
3 Control of CSTR Reactor for Cyclopentenol Production
The model of the CSTR reactor is taken from [9]. The basic process converts
cyclopentadiene to cyclopentenol. Cyclopentenol can undergo a further undesirable
reaction to form cyclopentadiol, and cyclopentadiene can undergo an alternative
reaction to form dicyclopentadiene. The rates of the reactions are temperature
dependent. Inputs to the model are the flow rate and the jacket temperature. The first
input is the control input, and the jacket temperature is an unmeasured disturbance,
with a root mean square deviation of 0.1 C about a nominal value of 130 C. The
regulated output will be the cyclopentenol concentration in the outflow.
The steady-state response of this process is shown in Figure 3. Operation in the
region labeled II up to the peak of the curve labeled VIII has been considered. At
the point labeled VIII, the steady-state gain of the plant goes to 0. Plants with
steady-state gains which change sign cannot be stably controlled with PI control.
An additional complicating factor is that the plant has significant inverse response
in this region.
Criteria for this control design problem, in order of importance, are
- operate between 45 and 60 l/hour with reasonable high frequency gain
- minimize the overshoot
- minimize rise time
- minimize the inverse response
Satisfying the first and last criteria should ensure a robust controller. Precise numerical performance criteria for the rise time have not been specified as no fixed
values are reasonable for the entire region.
A PI controller, as well as a PII controller, have been designed and the results are
displayed in Figure 3. The controller parameters were Kp = 75, Ki = 7500 for
the standard PI controller. The PII controller used 5 equally weighted parallel
integrators with Kp = 125, total Ki* = 10000 and Kdecay = 100. The threshold
parameters were chosen as eu = [4 3 2 1 1] * 0.00025 and xu = [16 8 4 2 1] * 0.00004.
Figure 3: Steady-State Response of Cyclopentenol CSTR Reactor and Output Concentration from CSTR Reactor.
The set-point was chosen to be a series of smoothed steps. Smoothing was performed with a first-order, low-pass filter with unity DC gain and a time constant of
30 hours⁻¹. While operating in the region of design from 0 to 4.8 hours and 5.4 to 7
hours, the PII-controlled system, as compared to the PI-controlled system, had reduced inverse response, less worst-case overshoot, similar response times and greater
disturbance attenuation. A closer examination of the PI-controlled system, during
the interval 4.8-5.4 hours, showed that at this extreme operating point, oscillations
of a fixed period begin to appear. This indicates the existence of poorly damped
poles. The PII-controlled system did not show this degradation of performance.
The set-point was raised to nearly the maximum achievable concentration. This
allows examination of the behavior of the controller when operating near regions
of uncertainty in the sign of the plant gain. This operating point achieves the
maximum possible conversion to cyclopentenol and thus has significant economic
advantages. In the region from 7.2 to 10 hours, there is a 10% reduction in the disturbance response with the PII controller. At this operating point, the PI-controlled
system can be shown to be locally stable. However, the effects of integrated noise
easily allow the system trajectory to escape the region of attraction. As expected,
the PI-controlled system went unstable. The PII-controlled system remains well
behaved. The simulation was run for a total simulated time of 43 hours at this operating point, and repeated many times without seeing any loss of stability with the PII
controller. With PI control, the system went unstable within 10 hours for each trial.
Thus, PII control allows operation at set-points closer to maxima or minima.
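The smoothed set-point sequence can be reproduced with a discrete version of the first-order low-pass filter described above (a sketch; the sampling interval and names are assumptions):

    def smooth_setpoint(raw_steps, tau, dt):
        """First-order low-pass with unity DC gain: tau * dy/dt = u - y."""
        y, out = raw_steps[0], []
        for u in raw_steps:
            y += (dt / tau) * (u - y)   # forward-Euler update
            out.append(y)
        return out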
4 Conclusion
The mechanisms that biological control systems employ to successfully control nonlinear, time varying, multivariable physiological systems under very demanding performance requirements are likely to have application in process control problems. In
addition to neural networks already incorporated in advanced controllers, cells process information through biochemical signal transduction networks that may also
contain useful non-linear mechanisms. A model of one such pathway has been developed, and features have been identified which can be used to develop an improved
control system.
The fundamental idea is to design two separate control laws, one intermittently
used for cancelling infrequently changing but mostly predictable disturbances, and
another for attenuating white disturbances. The first controller learns the simple
characteristics of the predictable disturbance. When the predictable disturbance is
learned, it can be canceled with an open loop controller, and no further learning
takes place. However if it appears that the open loop controller is not cancelling the
disturbance, further learning takes place until the disturbance is again successfully
cancelled. The second controller is designed strictly for fast disturbance attenuation.
Without the lag inherent in integration, the controller can be made more aggressive
resulting in better performance. The two controllers can be integrated by applying
the threshold and switching mechanisms identified in the signal transduction model.
References
[1] M. A. Henson, B. A. Ogunnaike, J. S. Schwaber, and F. J. Doyle III, "The baroreceptor reflex: A biological control system with applications in chemical process control," I&EC Research, vol. 33, pp. 2453-2465, 1994.
[2] M. Pottman, M. A. Henson, B. A. Ogunnaike, and J. S. Schwaber, "A parallel control strategy abstracted from the baroreceptor reflex," Chemical Engineering Science, vol. 51, pp. 931-945, 1996.
[3] H. S. Kwatra, F. J. Doyle III, and J. S. Schwaber, "Dynamic gain scheduled process control," Chemical Engineering Science, 1997.
[4] K. Fuxe, B. B., et al., "Pre- and post-synaptic features of the central angiotensin systems: Indications for a role of angiotensin peptides in volume transmission and for interactions with central monoamine neurons," Clin Exp Hypertens [Theory Pract], vol. A10, pp. 143-168, 1988.
[5] J. Herbert, "Studying the central actions of angiotensin using the expression of immediate-early genes: Expectations and limitations," Regulatory Peptides, vol. 66, pp. 13-18, 1996.
[6] K. M. Spyer, "The central nervous organization of reflex circulatory control," in Central Regulation of Autonomic Functions (A. D. Loewy and K. M. Spyer, eds.), p. 168, New York: Oxford University Press, 1990.
[7] M. N. Kumada, N. Terui, and T. Kuwaki, "Arterial baroreceptor reflex: Its central and peripheral neural mechanisms," Progr. Neurophysiol., vol. 35, p. 331, 1988.
[8] Y. Li and P. G. Guyenet, "Angiotensin II decreases a resting K+ conductance in rat bulbospinal neurons of the C1 area," Circulation Research, vol. 78, pp. 274-282, 1996.
[9] B. Ogunnaike and W. H. Ray, Process Dynamics, Modeling and Control. New York: Oxford University Press, 1995.
657 | 1,601 | The Role of Lateral Cortical Competition
in Ocular Dominance Development
Christian Piepenbrock and Klaus Obermayer
Dept. of Computer Science, Technical University of Berlin
FR 2-1; Franklinstr. 28-29; 10587 Berlin, Germany
{piep,oby}@cs.tu-berlin.de; http://www.ni.cs.tu-berlin.de
Abstract
Lateral competition within a layer of neurons sharpens and localizes the
response to an input stimulus. Here, we investigate a model for the activity-dependent development of ocular dominance maps which allows
to vary the degree of lateral competition. For weak competition, it resembles a correlation-based learning model and for strong competition,
it becomes a self-organizing map. Thus, in the regime of weak competition the receptive fields are shaped by the second order statistics of the
input patterns, whereas in the regime of strong competition, the higher
moments and "features" of the individual patterns become important.
When correlated localized stimuli from two eyes drive the cortical development we find (i) that a topographic map and binocular, localized
receptive fields emerge when the degree of competition exceeds a critical
value and (ii) that receptive fields exhibit eye dominance beyond a second critical value. For anti-correlated activity between the eyes, the second order statistics drive the system to develop ocular dominance even
for weak competition, but no topography emerges. Topography is established only beyond a critical degree of competition.
1 Introduction
Several models have been proposed in the past to explain the activity-dependent development of ocular dominance (OD) in the visual cortex. Some models make the ansatz of
linear interactions between cortical model neurons [2, 7], other approaches assume competitive winner-take-all dynamics with intracortical interactions [3, 5]. The mechanisms
that lead to ocular dominance critically depend on this choice. In linear activity models,
second order correlations of the input patterns determine the receptive fields. Nonlinear
competitive models like the self-organizing map, however, use higher order statistics of the
input stimuli and map their features. In this contribution, we introduce a general nonlinear
Figure 1: Model for OD development: the input patterns P_i^Lμ and P_i^Rμ in the LGN drive the Hebbian modification of the cortical afferent synaptic weights S_xi^L and S_xi^R. Cortical neurons are in competition and interact with effective strengths I_xy. Locations in the LGN are indexed i or j, cortical locations are labeled x or y.
Hebbian development rule which interpolates the degree of lateral competition and allows
us to systematically study the role of non-linearity in the lateral interactions on pattern formation and the transition between two classes of models.
2 Ocular Dominance Map Development by Hebbian Learning
Figure 1 shows our basic model framework for ocular dominance development. We consider two input layers in the lateral geniculate nucleus (LGN). The input patterns μ = 1, ..., U on these layers originate from the two eyes and completely characterize the input statistics (the mean activity P̄ is identical for all input neurons). The afferent synaptic connection strengths of cortical cells develop according to a generalized Hebbian learning rule with learning rate η,
ΔS_xi^Lμ = η Σ_y I_xy O_y^μ P_i^Lμ   (1)
An analogous rule is used for the connections from the right eyes S_xi^R. We use ν = 2 in the following and rescale the length of each neuron's receptive field weight vector to a constant length after a learning step. The model includes effective cortical interactions I_xy for the development of smooth cortical maps that spread the output activities O_y^μ in the neighborhood of neuron x (with a mean Ī = (1/N) Σ_y I_xy for N output neurons). The cortical output
signals are connectionist neurons with a nonlinear activation function g(.),
O_x^μ = g(H_x^μ) = exp(β H_x^μ) / Σ_z exp(β H_z^μ),  with  H_y^μ = Σ_j (S_yj^L P_j^Lμ + S_yj^R P_j^Rμ)   (2)
which models the effect of cortical response sharpening and competition for an input stimulus. The degree of competition is determined by the parameter β. Such dynamics may result as an effect of local excitation and long range inhibition within the cortical layer [6, 1], and in the limits of weak and strong competition, we recover two known types of developmental models: the correlation-based learning model and the self-organizing map.
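For concreteness, one learning step under equations 1 and 2 can be sketched as follows (our own illustration: array names, the numerically stabilized softmax, and the explicit renormalization are assumptions):

    import numpy as np

    def learning_step(SL, SR, pL, pR, I, beta, eta):
        """One Hebbian step with soft cortical competition (equations 1-2).
        SL, SR: (N, M) weights for N cortical and M LGN units;
        pL, pR: (M,) input pattern; I: (N, N) cortical interactions."""
        H = SL @ pL + SR @ pR                    # summed input H_x (eq. 2)
        O = np.exp(beta * (H - H.max()))         # softmax competition ...
        O /= O.sum()                             # ... O_x of equation 2
        drive = eta * (I @ O)                    # eta * sum_y I_xy O_y
        SL += np.outer(drive, pL)                # equation 1, left eye
        SR += np.outer(drive, pR)                # same rule, right eye
        norm = np.sqrt((SL ** 2 + SR ** 2).sum(axis=1, keepdims=True))
        SL /= norm                               # rescale each receptive field
        SR /= norm                               # to constant length (nu = 2)
        return SL, SR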
2.1 From Linear Neurons to Winner-take-all Networks
In the limit β → 0 of weak cortical competition, the output O_x^μ becomes a linear function of the input. A Taylor series expansion around β = 0 yields a correlation-based learning (CBL) rule in the average over all patterns,
ΔS_xi^L ∝ ηβ Σ_{z,j} (I_xz − Ī)(S_zj^L C_ji^LL + S_zj^R C_ji^RL) + const.,
where C_ji^LL = (1/U) Σ_μ P_j^Lμ P_i^Lμ is the correlation function of the input patterns. Ocular dominance development under this rule requires correlated activity between inputs from
Figure 2: The network response for different degrees of cortical competition (CBL limit, β = 2.5, β = 32, SOM limit): the plots show the activity rates Σ_y I_xy O_y^μ for a network of cortical output neurons (the plots are scaled to have equal maxima). Each gridpoint represents the activity of one neuron on a 16 × 16 grid. The interactions I_xy are Gaussian (variance 2.25 grid points) and all neurons are stimulated with the same Gaussian stimulus (variance 2.25). The neurons have Gaussian receptive fields (variance σ² = 4.5) in a topographic map with additive noise (uniformly distributed with amplitude 10 times the maximum weight value).
within one eye and anti-correlated activity (or uncorrelated activity with synaptic competition) between the two eyes [2,4]. It is important to note, however, that CBL models cannot
explain the emergence of a topographic projection. The topography has to be hard-wired
from the outset of the development process which is usually implemented by an "arbor
function" that forces all non-topographic synaptic weights to zero.
Strong competition with β → ∞, on the other hand, leads to a self-organizing map [3, 5],
ΔS_xi^Lμ = η I_x,q(μ) P_i^Lμ  with  q(μ) = argmax_y Σ_j (S_yj^L P_j^Lμ + S_yj^R P_j^Rμ).
Models of this type use the higher order statistics of the input patterns and map the important features of the input. In the SOM limit, the output activity pattern is identical in shape
for all input stimuli. The input influences only the location of the activity on the output
layer but does not affect its shape.
For intermediate values of β, the shape of the output activity patterns depends on the input. The activity of neurons with receptive fields that match the input stimulus better than others is amplified, whereas the activity of poorly responding neurons is further suppressed, as shown in figure 2. On the one hand, the resulting output activity profiles for intermediate β may be biologically more realistic than the winner-take-all limit case. On the other hand, the difference between the linear response case (low β) and the nonlinear competition (intermediate β) is important in the Hebbian development process: it yields qualitatively different results, as we show in the next section.
2.2 Simulations of Ocular Dominance Development
In the following, we study the transition from linear CBL models to winner-take-all SOM networks for intermediate values of β. We consider input patterns that are localized and show ocular dominance,
show ocular dominance
p.LJl
l
= 0.5 + eye L (11)
2rr(J'2
exp (_ (i -IOC(f-L))2)
with
eye L (f-L)
= -eyeR (f-L)
(3)
2(J'2
Each stimulus μ is of Gaussian shape centered on a random position loc(μ) within the input layer, and the neuron index i is interpreted as a two-dimensional location vector in the input layer. The parameter eye(μ) sets the eye dominance for each stimulus: eye = 0 produces binocular stimuli and eye = ±1/2 results in uncorrelated left and right eye activities.
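A sketch of this stimulus ensemble (equation 3), with the parameter values used in the simulations below; the function name, the periodic wrap-around of distances, and the flattening of the grid are our assumptions. The returned vectors can be fed directly to the learning step sketched above.

    import numpy as np

    def make_stimulus(m=16, sigma2=2.25, eye_amp=0.35, rng=np.random):
        """One localized two-eye stimulus P^L, P^R on an m x m grid (eq. 3)."""
        loc = rng.uniform(0, m, size=2)              # random blob center
        eye = eye_amp * rng.choice([-1.0, 1.0])      # eye^L(mu) = -eye^R(mu)
        yy, xx = np.mgrid[0:m, 0:m]
        dx = np.minimum(np.abs(xx - loc[0]), m - np.abs(xx - loc[0]))  # periodic
        dy = np.minimum(np.abs(yy - loc[1]), m - np.abs(yy - loc[1]))
        g = np.exp(-(dx ** 2 + dy ** 2) / (2 * sigma2)) / (2 * np.pi * sigma2)
        return ((0.5 + eye) * g).ravel(), ((0.5 - eye) * g).ravel()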
We have simulated the development of receptive fields and cortical maps according to equations 1 and 2 (see figure 3) for square grids of model neurons with periodic boundary conditions, Gaussian cortical interactions, and OD stimuli (equation 3). The learning
[Figure 3, panels: receptive field size (left) and mean OD value (right) as a function of log2 β, from the CBL limit to the SOM limit; the analytic prediction β* = 1.783 is marked, regimes A, B, C are labeled, and curves are shown with and without initial OD.]
Figure 3: Simulation of ocular dominance development for a varying degree of cortical competition β in a network of 16 × 16 neurons in each layer. The figure shows receptive field sizes (left) and mean OD value (right) as a function of cortical competition β. Each point in the figure represents one simulation with 30000 pattern presentations. The cortical interactions are Gaussian with a variance of γ² = 2.25 grid points. The Gaussian input stimuli are 5.66 times stronger in one eye than in the other (equation 3 with σ² = 2.25, eye(μ) = ±0.35). The synaptic weights are initialized with a noisy topographic map (curves labeled "no OD") and additionally with ocular dominance stripes (curves labeled "with OD"). To determine the receptive field size we have applied a Gaussian fit to all receptive field profiles S_x^L and S_x^R and averaged the standard deviation (in grid points) over all neurons x. The mean OD value is given by (1/N) Σ_x |Σ_i (S_xi^L − S_xi^R) / Σ_i (S_xi^L + S_xi^R)|.
rate is set at the first stimulus presentation to change the weights of the best responding neuron by half a percent. After each learning step the weights are rescaled to enforce the constraint from equation 1.
The simulations yield the results expected in the CBL and SOM limit cases (small and large β) for initially constant synaptic weight values with 5 percent additional noise. In the CBL limit, our particular choice of input patterns does not lead to the development of ocular dominance, because the necessary conditions for the input pattern correlations are not satisfied: the pattern correlations and interactions are all positive. Instead, the learning rule has only one fixpoint with uniform synaptic weights, i.e. unstructured receptive fields that cover the whole input layer. In the SOM limit, our set of stimuli leads to the emergence of a topographic projection with localized receptive fields and ocular dominance stripes. The topographic maps often develop defects which can be avoided by an annealing scheme. Instead of annealing β or the cortical interaction range, however, we initialize the
weights with a topographic projection and some additive noise. This is a common assumption in cortical development models [2], because the fibers from the LGN first innervate
the visual cortex already in a coarsely topographic order.
For intermediate degrees of cortical competition, we find sharp transitions between the CBL and SOM states and distinguish three parameter regimes (see figure 3). For weak competition (A) all receptive fields are unstructured and cover the whole input layer. At some critical β*, the receptive fields begin to form a topographic projection from the geniculate to the cortical layer. This projection (B) has no stable ocular dominance stripes, but a small degree of ocular dominance that fluctuates continuously. For yet stronger competition (C), a cortical map with stable ocular dominance stripes emerges.
The simulations, however, show that a topographic map without ocular dominance remains
a stable attractor of the learning dynamics (C). For increasing competition its basin of attraction becomes smaller, and smaller learning rates are necessary in order to remain within the binocular state. On the one hand, simulations with slowly increasing β lead to a
[Figure 4, panels: mean ocular dominance (left) and cost (right) as a function of log2 β, from the CBL limit to the SOM limit; the analytic prediction β* = 2.002 is marked, and curves are labeled "topogr. map with OD" and "local minimum: no OD".]
Figure 4: Simulations for the learning equation 5. The figure shows the mean ocular dominance (left) and the cost (right) as a function of β. The parameters are identical to figure 3 and eye(μ) = ±0.425.
topographic map, and ocular dominance stripes suddenly pop up somewhere in regime C, for small learning rates later than for large ones. On the other hand, in simulations with decreasing β and an initially topographic map with ocular dominance, we find a second critical β+ at which the OD map becomes unstable.
To understand the system's properties better, we analytically predict the value β* (the point where structured receptive fields emerge) and discuss the relation to cost functions to get some intuition about the value β+ in the following paragraph.
2.3 Analysis of the Emergence of Structured Receptive Fields
For β < β* the system shows basically CBL properties, in our case constant weights and unstructured receptive fields. It is possible to study the stability of this state analytically. We consider the learning equation 1 under a hard renormalization constraint that enforces Σ_{i=1}^{M} (S_xi^L)² + (S_xi^R)² = 2M S̄² by rescaling the weights after each learning step. A linear perturbation analysis of the learning rule around constant weights yields a critical degree of competition β* = (S̄ λ_max^C λ_max^I)^{-1}, where S̄ is the strength of the constant synaptic weights. λ_max^C is the largest eigenvalue of the input covariance matrix
(1/P̄) ([C_ji^LL  C_ji^LR; C_ji^RL  C_ji^RR] − P̄²),
which has to be diagonalized with respect to L and R, as well as with respect to i and j. The input correlation functions for the patterns from equation 3 are given by C_ji^LL = C_ji^RR = (1/4 + 2 eye²) G(i − j, 2σ²) and C_ji^LR = C_ji^RL = (1/4 − 2 eye²) G(i − j, 2σ²), where G(r, σ²) is a two-dimensional Gaussian with variance σ². The eigenvalues with respect to L and R in this symmetric case are the sum and difference terms of the correlation functions, K_ji^sum = (1/P̄)(C_ji^LL + C_ji^LR − P̄²) and K_ji^diff = (1/P̄)(C_ji^LL − C_ji^LR). The term K^sum is larger for positive input correlations, and in the next step we have to find the eigenvalues of this matrix. For periodic boundary conditions and in the limit of large networks, we can approximate the eigenvalue by the Fourier transform of the Gaussian and finally obtain λ_max^C = exp(−(2πσ/m)²) (for a square grid of M = m × m neurons). λ_max^I is the largest eigenvalue of (I_xz − Ī), and Gaussian cortical interactions I_xy with variance γ² on N = n × n output neurons yield λ_max^I = exp(−(1/2)(2πγ/n)²). Stronger competition beyond the point β* leads to the formation of structured receptive fields. It is interesting to note that the critical β* does not depend on eye(μ), the strength of ocularity in the input patterns. The predicted value for β* is plotted in figure 3 and matches the transition found in the simulations.
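As a quick arithmetic check (ours, not in the paper): with the simulation parameters σ² = γ² = 2.25 and m = n = 16,

    \lambda^{C}_{\max} = e^{-(2\pi \cdot 1.5/16)^{2}} \approx 0.707, \qquad
    \lambda^{I}_{\max} = e^{-\frac{1}{2}(2\pi \cdot 1.5/16)^{2}} \approx 0.841,

so the reported β* = 1.783 = (S̄ λ_max^C λ_max^I)^{-1} corresponds to a constant weight strength S̄ ≈ 0.94; this last value is inferred from the formula, not stated in the text.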
2.4 Hebbian Development With a Global Objective Function
The learning equation 1 does not optimize a global cost function [5]. To understand the dynamics of the OD development better and to interpret the transition at β*, we derive a learning rule very similar to equation 1 that minimizes the global cost function E,
E = (1/U) Σ_{μ,x} O_x^μ cost_x^μ  with  cost_x^μ = −Σ_{y,j} I_xy (S_yj^L P_j^Lμ + S_yj^R P_j^Rμ).   (4)
We minimize this cost function in a stochastic network of binary output neurons O_x^μ that compete for the input stimuli, i.e. one output neuron is active at any given time. The probability for a neuron y to become active in response to pattern μ depends on its advantage in cost over the currently active neuron x:
P(O_x^μ = 1 → O_y^μ = 1) = exp[−β(cost_y^μ − cost_x^μ)] / Σ_z exp[−β(cost_z^μ − cost_x^μ)]
This type of output dynamics leads to a Boltzmann probability distribution for the state of the system. We marginalize over all possible network outputs and derive a learning rule by gradient descent on the log likelihood of a particular set of synaptic connections (subject to Σ_i (S_xi^L)^ν + (S_xi^R)^ν = const.),
ΔS_xi^L = η ∂/∂S_xi^L log Prob({S_xi^L, S_xi^R}) = η ∂/∂S_xi^L log Σ_{{O_x^μ}} (1/Z) exp(−βE).
Finally, we obtain a learning rule that contains the expectation values Ō_x^μ (or mean fields) of the binary outputs,
ΔS_xi^Lμ = η Σ_y I_xy Ō_y^μ P_i^Lμ  with  Ō_x^μ = exp(β Σ_{y,j} I_xy (S_yj^L P_j^Lμ + S_yj^R P_j^Rμ)) / Σ_z exp(β Σ_{y,j} I_zy (S_yj^L P_j^Lμ + S_yj^R P_j^Rμ)).   (5)
This learning rule is almost identical to equation 1; it only contains an additional cortical interaction inside the output term Ō_x^μ, but it has the advantage of an underlying cost function.
Figure 4 shows the development of ocular dominance according to equation 5, and the associated cost is plotted for each state of the system. The value β* of the first transition is calculated analogously to the previous section, and λ_max^I becomes the maximum eigenvalue of the matrix (Σ_y I_xy I_yz − Ī), which is λ_max^I = exp(−(2πγ/m)²). Around β+ a topographic map without ocular dominance is a stable state and it remains stable for larger β. In addition, a different minimum of the cost function equation 4 emerges at β+: an ocular dominance map with a lower associated cost. This shows that an ocular dominance map becomes the preferred state of the system beyond β+, although the binocular topographic map is still stable. In the SOM limit β → ∞ the binocular topographic map becomes unstable and ocular dominance stripes develop.
The value β+ marks the first emergence of an ocular dominance map. For the simulations in figures 3 and 4 we have used positive correlations between the two eyes, a realistic assumption for OD map development. For weaker correlations (eye(μ) approaches ±1/2), β+ decreases. For anti-correlated stimuli, an ocular dominance map develops even in the CBL limit [4] (this, however, requires additional model assumptions like inhibition between the layers within the LGN). Such a map has no topographic structure (if not imposed by an arbor function) but mostly monocular receptive fields. The value β* is not affected directly by those changes, and the monocular receptive fields localize if β* is exceeded. Consequently, the "feature" OD emerges if it is dominant in the relevant pattern statistics: for anti-correlated eyes around β = 0, and for positive between-eye correlations only in the regime of higher order moments at β+.
3 Conclusions
We have introduced a model for cortical development with a variable degree of cortical
competition. For weak competition it has CBL models, for strong competition the SOM
as limit cases. Localized stimuli with ocular dominance require a minimum degree of cortical competition to develop a topographic map, and a stronger degree of competition for
the emergence of ocular dominance stripes. Anti-correlated activity between the two eyes
lets OD emerge for weak competition and localized fields only beyond a critical degree of
competition.
A Taylor series expansion of the learning equation 1 yields a CBL model that uses only second order input statistics. For increasing β the higher order terms, which consist of the higher moments of the input patterns, become significant. In this contribution, we have
used only simple activity blobs in two eyes, but it is well known that in the winner-take-all
limit features like orientation selectivity can emerge as well [3].
The soft cortical competition in our model implements a mechanism of response sharpening in which the input patterns do still influence the output pattern shape. This should relax the biologically implausible assumption of winner-take-all dynamics of SOM models and yields similar ocular dominance maps. Cortical microcircuits (local cortical amplifiers) have been proposed as a cortical module of computation [6]. Our model suggests that such circuits may be important to sharpen the responses during development and to permit the emergence of feature-mapping simple cell receptive fields.
Our model shows that small changes in the degree of cortical competition may result in
qualitative changes of the emerging receptive fields and cortical maps. Such changes in
competition could be a result of the maturation of the intra-cortical connectivity. A slowly
increasing degree of cortical competition could make the cortical neurons sensitive to more
and more complex features of the input stimuli.
Acknowledgements
This work was supported by the Boehringer Ingelheim Fonds (C. Piepenbrock) and by
DFG grant Ob 102/2-1.
References
[1] S. Amari. Dynamics of pattern formation in lateral-inhibition type neural fields. Biol. Cyb., 27:77-87, 1977.
[2] K. D. Miller, J. B. Keller, and M. P. Stryker. Ocular dominance column development: Analysis and simulation. Science, 245:605-615, 1989.
[3] K. Obermayer, H. Ritter, and K. Schulten. A principle for the formation of the spatial structure of cortical feature maps. Proc. Nat. Acad. Sci. USA, 87:8345-8349, 1990.
[4] C. Piepenbrock, H. Ritter, and K. Obermayer. The joint development of orientation and ocular dominance: Role of constraints. Neur. Comp., 9:959-970, 1997.
[5] M. Riesenhuber, H.-U. Bauer, and T. Geisel. Analyzing phase transitions in high-dimensional self-organizing maps. Biol. Cyb., 75:397-407, 1996.
[6] D. C. Somers, S. B. Nelson, and M. Sur. An emergent model of orientation selectivity in cat visual cortical simple cells. J. Neurosci., 15:5448-5465, 1995.
[7] A. L. Yuille, J. A. Kolodny, and C. W. Lee. Dimension reduction, generalized deformable models and the development of ocularity and orientation. Neur. Netw., 9:309-319, 1996.
658 | 1,602 | Boxlets: a Fast Convolution Algorithm for
Signal Processing and Neural Networks
Patrice Y. Simard*, Leon Bottou, Patrick Haffner and Yann LeCun
AT&T Labs-Research
100 Schultz Drive, Red Bank, NJ 07701-7033
patrice@microsoft.com
{leonb,haffner,yann}@research.att.com
Abstract
Signal processing and pattern recognition algorithms make extensive use of convolution. In many cases, computational accuracy is
not as important as computational speed. In feature extraction,
for instance, the features of interest in a signal are usually quite
distorted. This form of noise justifies some level of quantization in
order to achieve faster feature extraction . Our approach consists
of approximating regions of the signal with low degree polynomials, and then differentiating the resulting signals in order to obtain
impulse functions (or derivatives of impulse functions). With this
representation, convolution becomes extremely simple and can be
implemented quite effectively. The true convolution can be recovered by integrating the result of the convolution. This method
yields substantial speed up in feature extraction and is applicable
to convolutional neural networks.
1 Introduction
In pattern recognition, convolution is an important tool because of its translation
invariance properties. Feature extraction is a typical example: the distance between a small pattern (i.e. a feature) and a larger one is computed at all positions (i.e. translations) inside the larger one. The resulting "distance image" is typically obtained by convolving the
feature template with the larger pattern. In the remainder of this paper we will use
the terms image and pattern interchangeably (because of the topology implied by
translation invariance).
There are many ways to convolve images efficiently. For instance, a multiplication
of images of the same size in the Fourier domain corresponds to a convolution of
the two images in the original space. Of course this requires K N log N operations (where N is the number of pixels of the image and K is a constant) just to go in and
out of the Fourier domain. These methods are usually not appropriate for feature
extraction because the feature to be extracted is small with respect to the image.
For instance, if the image and the feature have respectively 32 x 32 and 5 x 5 pixels,
* Now with Microsoft, One Microsoft Way, Redmond, WA 98052
the full convolution can be done in 25 × 1024 multiply-adds. In contrast, it would require 2 × K × 1024 × 10 operations just to go in and out of the Fourier domain.
Fortunately, in most pattern recognition applications, the interesting features are
already quite distorted when they appear in real images. Because of this inherent
noise, the feature extraction process can usually be approximated (to a certain degree) without affecting the performance. For example, the result of the convolution
is often quantized or thresholded to yield the presence and location of distinctive
features [1]. Because precision is typically not critical at this stage (features are
rarely optimal, thresholding is a crude operation), it is often possible to quantize
the signals before the convolution with negligible degradation of performance.
The subtlety lies in choosing a quantization scheme which can speed up the convolution while maintaining the same level of performance. We now introduce the
convolution algorithm, from which we will deduce the constraints it imposes on
quantization.
The main algorithm introduced in this paper is based on a fundamental property of convolutions. Assuming that f and g have finite support and that f^n denotes the n-th integral of f (or the n-th derivative if n is negative), we can write the following convolution identity:
(f * g)^n = f^n * g = f * g^n   (1)
where * denotes the convolution operator. Note that f or g are not necessarily differentiable. For instance, the impulse function (also called the Dirac delta function), denoted δ, verifies the identity:
δ_a^n * δ_b^m = δ_{a+b}^{n+m}   (2)
where δ_a^n denotes the n-th integral of the delta function, translated by a (δ_a(x) = δ(x − a)).
of polynomial of degree n - 1. Convolution between a signal 1 and the filter 9 can
be written as
I*g =
*g-n
(3)
where
is the n-th integral of the signal, and the n-th derivative of the filter
9 can be written exclusively with delta functions (resulting from differentiating
n - 1 degree polynomials n times). Since convolving with an impulse function is
a trivial operation, the computation of Equation 3 can be carried out effectively.
Unfortunately, Heckbert's algorithm is limited to simple polynomial filters and is
only interesting when the filter is wide and when the Fourier transform is unavailable
(such as in variable length filters).
In contrast, in feature extraction, we are interested in small and arbitrary filters
(the features). Under these conditions, the key to fast convolution is to quantize
the images to combinations of low degree polynomials, which are differentiated,
convolved and then integrated. The algorithm is summarized by equation:
1 * 9 ~ F * C = (F- n * C-m)m+n
(4)
r
r
where F and C are polynomial approximation of 1 and g, such that F- n and
C- m can be written as sums of impulse functions and their derivatives. Since the
convolution F- n *C- m only involves applying Equation 2, it can be computed quite
effectively. The computation of the convolution is illustrated in Figure 1. Let 1
and 9 be two arbitrary I-dimensional signals (top of the figure). Let's assume that
1 and 9 can both be approximated by partitions of polynomials, F and C. On
the figure , the polynomials are of degree 0 (they are constant), and are depicted in
the second line. The details on how to compute F and C will be explained in the
next section. In the next step, F and C are differentiated once, yielding successions
of impulse functions (third line in the figure). The impulse representation has the
advantage of having a finite support, and of being easy to convolve. Indeed two
impulse functions can be convolved using Equation 2 (4 x 3 = 12 multiply-adds on
the figure). Finally the result of the convolution must be integrated twice to yield
F
*C =
(F- 1
* C- 1 )2
(5)
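The whole 1-D pipeline fits in a few lines. The sketch below is our own (names and the error threshold K are assumptions); it quantizes both signals with the greedy degree-0 rule of the next section, differentiates them into impulse trains, convolves the sparse impulses via equation 2, and integrates twice:

    import numpy as np

    def quantize_pw_const(x, K):
        """Greedy piecewise-constant (degree-0 boxlet) cover of a 1-D signal:
        each segment is grown while its squared error stays below K."""
        x = np.asarray(x, dtype=float)
        out, i, n = np.empty_like(x), 0, len(x)
        while i < n:
            j = i + 1
            while j < n and ((x[i:j + 1] - x[i:j + 1].mean()) ** 2).sum() < K:
                j += 1
            out[i:j] = x[i:j].mean()
            i = j
        return out

    def boxlet_conv1d(f, g, K=100.0):
        """Approximate f*g as in equation 5: differentiate the quantized
        signals, convolve the impulse trains (equation 2), integrate twice."""
        dF = np.diff(quantize_pw_const(f, K), prepend=0.0, append=0.0)
        dG = np.diff(quantize_pw_const(g, K), prepend=0.0, append=0.0)
        out = np.zeros(len(dF) + len(dG) - 1)
        ib = np.flatnonzero(dG)                # impulse positions in G'
        for a in np.flatnonzero(dF):           # impulse positions in F'
            out[a + ib] += dF[a] * dG[ib]      # delta_a * delta_b = delta_{a+b}
        return np.cumsum(np.cumsum(out))[:len(f) + len(g) - 1]

With K = 0 the quantization is exact and the result coincides with np.convolve(f, g); larger K trades accuracy for sparser impulse trains and fewer multiply-adds.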
[Figure 1: Example of convolution between 1-dimensional functions f and g, where the approximations of f and g are piecewise constant; the rows show the original signals, their quantization, differentiation, convolution, and double integration.]

2 Quantization: from Images to Boxlets
The goal of this section is to suggest efficient ways to approximate an image f by a cover of polynomials of degree d suited for convolution. Let S be the space on which f is defined, and let C = {c_i} be a partition of S (c_i ∩ c_j = ∅ for i ≠ j, and ∪_i c_i = S). For each c_i, let p_i be a polynomial of degree d which minimizes the equation:
e_i = Σ_{(x,y) ∈ c_i} (f(x,y) − p_i(x,y))²   (6)
The uniqueness of p_i is guaranteed if c_i is convex. The problem is to find a cover C which minimizes both the number of c_i and Σ_i e_i. Many different compromises are possible, but since the computational cost of the convolution is proportional to the number of regions, it seemed reasonable to choose the largest regions with a maximum error bounded by a threshold K. Since each region will be differentiated and integrated along the directions of the axes, the boundaries of the c_i are restricted to be parallel to the axes, hence the appellation boxlet. There are still many
ways to compute valid partitions of boxlets and polynomials. We have investigated
two very different approaches which both yield a polynomial cover of the image in
reasonable time. The first algorithm is greedy. It uses a procedure which, starting
from a top left corner, finds the biggest boxlet c_i which satisfies e_i < K without overlapping another boxlet. The algorithm starts with the top left corner of the
image, and keeps a list of all possible starting points (uncovered top left corners)
sorted by X and Y positions. When the list is exhausted, the algorithm terminates.
Surprisingly, this algorithm can run in O(d(N + P log N)), where N is the number of pixels, P is the number of boxlets and d is the order of the polynomials p_i. Another much simpler algorithm consists of recursively splitting boxlets, starting from a boxlet which encompasses the whole image, until e_i < K for all the leaves of the tree. This algorithm runs in O(dN), is much easier to implement, and is
faster (better time constant). Furthermore , even though the first algorithm yields
a polynomial coverage with fewer boxlets, the second algorithm yields fewer impulse functions after differentiation because more impulse functions can be combined (see
next section). Both algorithms rely on the fact that Equation 6 can be computed
P. Y. Simard, L. Bottou, P. Haffner and Y. Le Cun
574
Figure 2: Effects of boxletization: original (top left), greedy (bottom left) with a threshold of 10,000, and recursive (top and bottom right) with a threshold of 10,000.
in constant time. This computation requires the following quantities
Σ f(x,y), Σ f(x,y)²  (degree 0);  Σ f(x,y)x, Σ f(x,y)y, Σ f(x,y)xy, ...  (degree 1)   (7)
to be pre-computed over the whole image, for the greedy algorithm, or over recursively embedded regions, for the recursive algorithm. In the case of the recursive
algorithm these quantities are computed bottom up and very efficiently. To prevent the sums from becoming too large, a limit can be imposed on the maximum size of c_i. The coefficients of the polynomials are quickly evaluated by solving a small linear system using the first two sums for polynomials of degree 0 (constants), the first 5 sums for polynomials of degree 1, and so on.
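A sketch of the recursive variant for degree-0 boxlets (our own illustration; the power-of-two square image and all names are assumptions). The two sums of equation 7 are precomputed as summed-area tables, so each split test costs O(1):

    import numpy as np

    def boxletize(img, K):
        """Recursively split until each region's squared error e_i < K.
        Returns (y, x, h, w, mean) boxlets; assumes a square 2^k image."""
        pad = ((1, 0), (1, 0))
        S1 = np.pad(img, pad).cumsum(0).cumsum(1)        # running sum of f
        S2 = np.pad(img ** 2, pad).cumsum(0).cumsum(1)   # running sum of f^2

        def rect(S, y, x, h, w):                         # O(1) rectangle sum
            return S[y + h, x + w] - S[y, x + w] - S[y + h, x] + S[y, x]

        boxes = []

        def split(y, x, h, w):
            s1, s2, n = rect(S1, y, x, h, w), rect(S2, y, x, h, w), h * w
            if s2 - s1 ** 2 / n < K or h == 1 or w == 1:  # e_i for the mean fit
                boxes.append((y, x, h, w, s1 / n))
            else:
                for dy in (0, h // 2):
                    for dx in (0, w // 2):
                        split(y + dy, x + dx, h // 2, w // 2)

        split(0, 0, *img.shape)
        return boxes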
Figure 2 illustrates the results of the quantization algorithms. The top left corner is a fraction of the original image. The bottom left image illustrates the boxletization of the greedy algorithm, with polynomials of degree 1 and e_i ≤ 10,000 (13000 boxlets, 62000 impulse functions and their derivatives). The top right image illustrates the boxletization of the recursive algorithm, with polynomials of degree 0 and e_i ≤ 10,000 (47000 boxlets, 58000 impulse functions). The bottom right is the same as the top right without displaying the boxlet boundaries. In this case the pixel to impulse function ratio is 5.8.
3 Differentiation: from Boxlets to Impulse Functions
If p_i is a polynomial of degree d, its (d + 1)-th derivative can be written as a sum of impulse function derivatives, which are zero everywhere but at the corners of c_i. These impulse functions summarize the boundary conditions and completely characterize p_i. They can be represented by four (d + 1)-dimensional vectors associated with the 4 corners of c_i. Figure 3 (top) illustrates the impulse functions at the 4
[Figure 3: Differentiation of a constant polynomial in 2D (top): the polynomial (constant), its X derivative, and the Y derivative of the X derivative. Combining the derivative of adjacent polynomials (bottom): polynomial covering (constants), derivatives, combined impulses, and sorted list representation.]
corners when the polynomial is a constant (degree zero). Note that the polynomial
must be differentiated d + 1 times (in this example the polynomial is a constant,
so d = 0), with respect to each dimension of the input space. This is illustrated at
the top of Figure 3. The cover C being a partition, boundary conditions between
adjacent squares do simplify, that is, the same derivatives of impulse functions at the same location can be combined by adding their coefficients. It is very advantageous to do so because it will reduce the computation of the convolution in the next step. This is illustrated in Figure 3 (bottom). This combining of impulse functions is one of the reasons why the recursive algorithm for the quantization is preferred to the greedy algorithm. In the recursive algorithm, the boundaries of boxlets are often aligned, so that the impulse functions of adjacent boxlets can be combined. Typically, after simplification, there are only 20% more impulse functions than there are boxlets. In contrast, the greedy algorithm generates up to 60% more impulse functions than boxlets, due to the fact that there are no alignment constraints. For the same threshold the recursive algorithm generates 20% to 30% fewer impulse functions than the greedy algorithm.
Finding which impulse functions can be combined is a difficult task because the
recursive representation returned by the recursive algorithm does not provide any
means for matching the bottom of squares on one line, with the top of squares
from below that line. Sorting takes O(P log P) computational steps (where P is the
number of impulse functions) and is therefore too expensive. A better algorithm is
to visit the recursive tree and accumulate all the top corners into sorted (horizontal)
lists. A similar procedure sorts all the bottom corners (also into horizontal lists).
The horizontal lists corresponding to the same vertical positions can then be merged
in O(P) operations. The complete algorithm which quantizes an image of N pixels
and returns sorted lists of impulse functions runs in O(dN) (where d is the degree
of the polynomials).
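The combination and sorting step can be sketched as follows; this simplification accumulates corner coefficients in a dictionary instead of the paper's O(P) merge of per-row sorted lists, and the boxlet tuple layout matches the sketch above:

    from collections import defaultdict

    def combine_impulses(boxes):
        """Sum the impulse coefficients of degree-0 boxlets. Each boxlet of
        mean m contributes +m, -m, -m, +m at its four corners; coefficients
        at shared corners of aligned boxlets merge or cancel."""
        acc = defaultdict(float)
        for (y, x, h, w, m) in boxes:
            for cy, cx, s in ((y, x, +1), (y, x + w, -1),
                              (y + h, x, -1), (y + h, x + w, +1)):
                acc[(cy, cx)] += s * m
        rows = defaultdict(list)               # sorted horizontal list per row
        for (cy, cx), c in sorted(acc.items()):
            if c != 0.0:
                rows[cy].append((cx, c))
        return rows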
4 Results
The convolution speed of the algorithm was tested with feature extraction on the
image shown on the top left of Figure 2. The image is quantized, but the feature
is not. The feature is tabulated in kernels of sizes 5 x 5, 10 x 10, 15 x 15 and
20 x 20 . If the kernel is decomposable, the algorithm can be modified to do two 1D
convolutions instead of the present 2D convolution.
The quantization of the image is done with constant polynomials, and with thresholds varying from 1,000 to 40,000. This corresponds to varying the pixel to impulse
function ratio from 2.3 to 13.7. Since the feature is not quantized , these ratios
correspond exactly to the ratios of number of multiply-adds for the standard convolution versus the boxlet convolution (excluding quantization and integration). The
P Y. Simard, L. Bottou, P Haffner and Y. Le Cun
576
Table 1: Convolution speed-up factors (partially recovered values: 8.4, 12.5, 13.4, 13.8).
[Figure 4: Run-length X convolution: (a) horizontal convolution, (b) convolution of runs, (c) summation of the convolutions of runs, (d) impulse-function representation of the runs.]
actual speed up factors are summarized in Table 1. The four last columns indicate
the measured time ratios between the standard convolution and the boxlet convolution. For each threshold value, the top line indicates the time ratio of standard
convolution versus quantization, convolution and integration time for the boxlet
convolution. The bottom line does not take into account the quantization time.
The feature size was varied from 5 x 5 to 20 x 20. Thus with a threshold of 10,000
and a 5 x 5 kernel, the quantization ratio is 5.8, and the speed up factor is 2.8.
The loss in image quality can be seen by comparing the top left and the bottom
right images. If several features are extracted, the quantization time of the image
is shared amongst the features and the speed up factor is closer to 4.7.
It should be noted that these speed up factors depend on the quantization level
which depends on the data and affects the accuracy of the result. The good news is
that for each application the optimal threshold (the maximum level of quantization
which has negligible effect on the result) can be evaluated quickly. Once the optimal
threshold has been determined, one can enjoy the speed up factor. It is remarkable
that with a quantization factor as low as 2.3, the speed up ratio can range from
1.5 to 2.3, depending on the number of features. We believe that this method is
directly applicable to forward propagation in convolutional neural nets (although
no results are available at this time) .
The next application shows a case where quantization has no adverse effect on the
accuracy of the convolution, and yet large speed ups are obtained.
5 Binary images and run-length encoding
The quantization steps described in Sections 2 and 3 become particularly simple
when the image is binary. If the threshold is set to zero, and if only the X derivative is considered, the impulse representation is equivalent to run-length encoding.
Indeed the position of each positive impulse function codes the beginning of a run,
while the position of each negative impulse codes the end of a run. The horizontal
convolution can be computed effectively using the boxlet convolution algorithm.
This is illustrated in Figure 4. In (a), the distance between two binary images must
be evaluated for every horizontal position (horizontal translation invariant distance).
The result is obtained by convolving each horizontal line and by computing the sum
of each of the convolution functions. The convolution of two runs, is depicted in
(b), while the summation of all the convolutions of two runs is depicted in (c). If
an impulse representation is used for the runs (a first derivative) , each summation
of a convolution between two runs requires only 4 additions of impulse functions,
as depicted in (d). The result must be integrated twice, according to Equation 5.
The speed up factors can be considerable depending on the width of the images (an
order of magnitude if the width is 40 pixels), and there is no accuracy penalty.
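The following Python sketch (not from the paper; array sizes and run positions are arbitrary) mimics this computation: each run is represented by its two derivative impulses, the convolution of two runs costs the advertised 4 impulse additions, and integrating twice recovers the piecewise linear result:

```python
import numpy as np

def run_impulses(start, end):
    # First derivative of the indicator of [start, end): a +1 impulse at
    # the run start and a -1 impulse at the run end.
    return [(start, +1.0), (end, -1.0)]

def convolve_runs(run_a, run_b, length):
    # The second derivative of the convolution of two runs is the
    # convolution of their derivatives: only 4 impulse additions.
    d2 = np.zeros(2 * length)
    for pa, sa in run_impulses(*run_a):
        for pb, sb in run_impulses(*run_b):
            d2[pa + pb] += sa * sb
    return np.cumsum(np.cumsum(d2))   # integrate twice (Equation 5)

# Cross-check against a dense convolution of the two indicator signals.
length = 16
a = np.zeros(length); a[3:9] = 1.0    # run [3, 9)
b = np.zeros(length); b[5:12] = 1.0   # run [5, 12)
dense = np.convolve(a, b)
sparse = convolve_runs((3, 9), (5, 12), length)
assert np.allclose(dense, sparse[:len(dense)])
```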
Figure 5: Binary image (left) and compact impulse function encoding (right).
This speed-up also generalizes to 2-dimensional encoding of binary images. The gain comes from the frequent cancellations of impulse functions of adjacent boxlets. The number of impulse functions is proportional to the contour length of the binary shapes. In this case, the boxlet computation is mostly an efficient algorithm for 2-dimensional run-length encoding. This is illustrated in Figure 5. As with run-length encoding, a considerable speed-up is obtained for convolution, at no cost in accuracy.
6 Conclusion
When convolutions are used for feature extraction, precision can often be sacrificed for speed with negligible degradation of performance. The boxlet convolution method combines quantization and convolution to offer a continuously adjustable trade-off between accuracy and speed. In some cases (such as relatively simple binary images) large speed-ups come with no adverse effects. The algorithm is directly applicable to forward propagation in convolutional neural networks and to pattern matching when translation invariance results from the use of convolution.
The effect of eligibility traces on finding optimal memoryless policies in partially observable Markov decision processes
John Loch
Department of Computer Science
University of Colorado
Boulder, CO 80309-0430
loch@cs.colorado.edu
Abstract
Agents acting in the real world are confronted with the problem of making good decisions with limited knowledge of the environment. Partially observable Markov decision processes (POMDPs) model decision problems in which an agent tries to maximize its reward in the face of limited sensor feedback. Recent work has shown empirically that a reinforcement learning (RL) algorithm called Sarsa(λ) can efficiently find optimal memoryless policies, which map current observations to actions, for POMDP problems (Loch and Singh 1998). The Sarsa(λ) algorithm uses a form of short-term memory called an eligibility trace, which distributes temporally delayed rewards to the observation-action pairs which lead up to the reward. This paper explores the effect of eligibility traces on the ability of the Sarsa(λ) algorithm to find optimal memoryless policies. A variant of Sarsa(λ) called k-step truncated Sarsa(λ) is applied to four test problems taken from the recent work of Littman; Littman, Cassandra and Kaelbling; Parr and Russell; and Chrisman. The empirical results show that eligibility traces can be significantly truncated without affecting the ability of Sarsa(λ) to find optimal memoryless policies for POMDPs.
1 Introduction
Agents which operate in the real world, such as mobile robots, must use sensors which at best give only partial information about the state of the environment. Information about the robot's surroundings is necessarily incomplete due to noisy and/or imperfect sensors, occluded objects, and the inability of the robot to know precisely where it is. Such agent-environment systems can be modeled as partially observable Markov decision processes, or POMDPs (Sondik, 1978).
A variety of algorithms have been developed for solving POMDPs (Lovejoy, 1991). However, most of these techniques do not scale well to problems involving more than a few dozen states, due to the computational complexity of the solution methods (Cassandra, 1994; Littman 1994). Therefore, finding efficient reinforcement learning
methods for solving POMDPs is of great practical interest to the Artificial Intelligence and engineering fields.
Recent work has shown empirically that the Sarsa(λ) algorithm can efficiently find the best deterministic memoryless policy for several POMDP problems from the recent literature (Loch and Singh 1998). The empirical results from Loch and Singh (1998) suggest that eligibility traces are necessary for finding the best or optimal memoryless policy. For this reason, a variant of Sarsa(λ) called k-step truncated Sarsa(λ) is formulated to explore the effect of eligibility traces on the ability of Sarsa(λ) to find the best memoryless policy.
The main contribution of this paper is to show empirically that a variant of Sarsa(λ) using truncated eligibility traces can find the optimal memoryless policy for several POMDP problems from the literature. Specifically, we show that the k-step truncated Sarsa(λ) method can find the optimal memoryless policy for the four POMDP problems tested when k ≤ 2.
2 Sarsa(λ) and POMDPs
An environment is defined by a finite set of states S, the agent can choose from a finite set of actions A, and the agent's sensors provide it observations from a finite set X. On executing action a ∈ A in state s ∈ S the agent receives expected reward r_sa and the environment transitions to a state s' ∈ S with probability p^a_ss'. The probability of the agent observing x ∈ X given that the state is s is O(x|s).
A straightforward way to extend RL algorithms to POMDPs is to learn Q-value functions of observation-action pairs, i.e. to simply treat the agent's observations as states. Below we describe the standard Sarsa(λ) algorithm applied to POMDPs. At time step t the Q-value function is denoted Q_t; the eligibility trace function is denoted η_t; and the reward received is denoted r_t. On experiencing transition ⟨x_t, a_t, r_t, x_{t+1}⟩ the following updates are performed in order:

η_t(x_t, a_t) = 1;   η_t(x, a) = γλ η_{t-1}(x, a) for all x ≠ x_t and a ≠ a_t;
Q_{t+1}(x, a) = Q_t(x, a) + α δ_t η_t(x, a) for all x and a,

where δ_t = r_t + γ Q_t(x_{t+1}, a_{t+1}) − Q_t(x_t, a_t) and α is the step-size (learning rate). The eligibility traces are initialized to zero, and in episodic tasks they are reinitialized to zero after every episode. The greedy policy at time step t assigns to each observation x the action a = argmax_b Q_t(x, b).
2.1 Sarsa(λ) Using Truncated Eligibility Traces
Sarsa(λ) with truncated eligibility traces uses a parameter k which sets the eligibility trace for an observation-action pair to zero if that observation-action pair was not visited within the last k−1 time steps. Thus 1-step truncated Sarsa(λ) is equivalent to Sarsa(0), and 2-step truncated Sarsa(λ) updates the Q-values of the current observation-action pair and the immediately preceding observation-action pair.
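As an illustration only (the paper gives no code), here is a minimal Python sketch of k-step truncated Sarsa(λ) with replacing traces; the `env` object, with `reset()` returning an observation and `step(a)` returning `(obs, reward, done)`, is an assumed interface, not something defined in the paper:

```python
import numpy as np
from collections import defaultdict

def truncated_sarsa_lambda(env, k, n_obs, n_act, episodes=1000,
                           alpha=0.1, gamma=0.95, lam=1.0, eps=0.35):
    Q = np.zeros((n_obs, n_act))
    def policy(x):
        # Epsilon-greedy action selection over observation Q-values.
        return np.random.randint(n_act) if np.random.rand() < eps \
               else int(Q[x].argmax())
    for _ in range(episodes):
        trace = defaultdict(float)   # eligibility traces e(x, a)
        last = {}                    # last visit time of each (x, a)
        x, t, done = env.reset(), 0, False
        a = policy(x)
        while not done:
            x2, r, done = env.step(a)
            a2 = policy(x2)
            delta = r + (0.0 if done else gamma * Q[x2, a2]) - Q[x, a]
            trace[(x, a)] = 1.0      # replacing trace for the visited pair
            last[(x, a)] = t
            for xa in list(trace):
                Q[xa] += alpha * delta * trace[xa]
                trace[xa] *= gamma * lam
                if t - last[xa] >= k - 1:   # zero stale traces: k-step truncation
                    del trace[xa]
            x, a, t = x2, a2, t + 1
    return Q
```

With k = 1 every trace is deleted immediately after its update, so only the current pair is updated, recovering Sarsa(0) as stated above.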
3 Empirical Results
The truncated Sarsa(λ) algorithm was applied in an identical manner to four POMDP problems taken from the recent literature. Complete descriptions of the states, actions, observations, and rewards for each problem are provided in Loch and Singh (1998). Here we describe the aspects of the empirical results common to all four problems. At each step, the agent selected a random action with a probability equal to the exploration rate parameter and selected a greedy action otherwise. An initial exploration rate of 35% was used, decreasing linearly with each action (step) until the 350,000th action; from there onward the exploration rate remained fixed at 0%. Q-values were initialized to 0. Both the step-size α and the λ value are held constant in each experiment. A discount factor γ of 0.95 and a λ value of 1.0 were used for all four problems.
3.1 Sutton's Grid World
Sutton's grid world (Littman 1994) is an agent-environment system with 46 states, 30
observations, and 4 actions. State transitions and observations are deterministic.
The 1-step truncated eligibility trace, equivalent to Sarsa(0), was able to find a policy which could only reach the goal from start states within 7 steps of the goal state, as shown in Figure 1. The optimal memoryless policy, yielding 416 total steps to the goal state, was found by the 2-step, 4-step and 8-step truncated eligibility trace methods shown in Figure 1.
Figure 1: Sutton's grid world (from Littman, 1994). Total steps to goal as a function of the number of learning steps for 1-, 2-, 4-, and 8-step eligibility traces.
3.2 Chrisman's Shuttle Problem
Chrisman's shuttle problem is an agent-environment system with 8 states, 5
observations, and 3 actions. State transitions and observations are stochastic.
The 1-step truncated eligibility trace, equivalent to Sarsa(0), was unable to find a policy which could reach the goal state (Figure 2). The optimal memoryless policy, yielding an average reward per step of 1.02, was found by the 2-step, 4-step, and 8-step truncated eligibility trace methods shown in Figure 2.
Figure 2: Chrisman's shuttle problem. Average reward per step as a function of the number of learning steps for 1-, 2-, 4-, and 8-step eligibility traces.
3.3 Littman, Cassandra, and Kaelbling's 89 State Office World
Littman et al.'s 89 state office world (Littman et al., 1995) is an agent-environment system with 89 states, 17 observations, and 5 actions. State transitions and observations are stochastic.
The 1-step truncated eligibility trace, equivalent to Sarsa(0), was able to find a policy which could reach the goal state in only 51% of the 251 trials (Figure 3). The 2-step, 4-step and 8-step truncated eligibility trace methods converged to the best memoryless policy found by Loch & Singh (1998), yielding a 77% success rate in reaching the goal state (Figure 3).
Figure 3: Littman et al.'s 89 state office world. Percent of successful trials in reaching the goal as a function of the number of learning steps for 1-, 2-, 4-, and 8-step eligibility traces.
3.4 Parr & Russell's Grid World
Parr and Russell's grid world (Parr and Russell 1995) is an agent-environment system with 11 states, 6 observations, and 4 actions. State transitions are stochastic while observations are deterministic.
The optimal memoryless policy, yielding an average reward per step of 0.024, was found by both the 1-step and 2-step truncated eligibility trace methods (Figure 4). Policies found by the 4-step and 8-step methods were not optimal. This result can be attributed to the sharp eligibility trace cutoff, as this effect was not observed with smoothly decaying eligibility traces.
Figure 4: Parr & Russell's grid world. Average reward per step as a function of the number of learning steps for 1-, 2-, 4-, and 8-step eligibility traces.
3.5 Discussion
In all the empirical results presented above, we have shown that the k-step truncated Sarsa(λ) algorithm was able to find the best or the optimal deterministic memoryless policy when k = 2.
This result is surprising, since it was expected that the length of the eligibility trace required to find a good or optimal policy would vary widely depending on problem-specific factors such as landmark (unique observation) spacing and the delay between critical decisions and rewards. Several additional POMDP problems were formulated in an attempt to create a POMDP which would require a k value greater than 2 to find the optimal policy. However, for all trial POMDPs tested the optimal memoryless policy could be found with k ≤ 2.
4 Conclusions and Future Work
The ability of the Sarsa(λ) algorithm and the k-step truncated Sarsa(λ) algorithm to find optimal deterministic memoryless policies for a class of POMDP problems is important for several reasons. For POMDPs with good memoryless policies the Sarsa(λ) algorithm provides an efficient method for finding the best policy in that space.
If the performance of the memoryless policy is unsatisfactory, the observation and action spaces of the agent can be modified so as to produce an agent with a good memoryless policy. The designer of the autonomous system or agent can modify the observation
space of the agent by either adding sensors or making finer distinctions in the current sensor values. In addition, the designer can add attributes from past observations into the current observation space. The action space can be modified by adding lower-level actions and by adding new actions to the space. Thus one method for designing a capable agent is to iterate between selecting an observation and action space for the agent, using Sarsa(λ) to find the best memoryless policy in that space, and repeating until satisfactory performance is achieved.
This suggests a future line of research into how to automate the process of observation and action space selection so as to achieve an acceptable performance level. Other avenues of research include an exploration of the theoretical reasons why Sarsa(λ) and k-step truncated Sarsa(λ) are able to solve POMDPs. In addition, further research needs to be conducted as to why short (k ≤ 2) eligibility traces work well over a wide class of POMDPs.
References
Cassandra, A. (1994). Optimal policies for partially observable Markov decision processes. Technical Report CS-94-14, Brown University, Department of Computer Science, Providence, RI.
Littman, M. (1994). The Witness Algorithm: Solving partially observable Markov decision processes. Technical Report CS-94-40, Brown University, Department of Computer Science, Providence, RI.
Littman, M., Cassandra, A., & Kaelbling, L. (1995). Learning policies for partially observable environments: Scaling up. In Proceedings of the Twelfth International Conference on Machine Learning, pages 362-370, San Francisco, CA, 1995. Morgan Kaufmann.
Loch, J., & Singh, S. (1998). Using eligibility traces to find the best memoryless policy in partially observable Markov decision processes. To appear in Proceedings of the Fifteenth International Conference on Machine Learning, Madison, WI, 1998. Morgan Kaufmann. (Available from http://www.cs.colorado.edu/~baveja/papers.html)
Lovejoy, W. S. (1991). A survey of algorithmic methods for partially observable Markov decision processes. Annals of Operations Research, 28: 47-66.
Parr, R. & Russell, S. (1995). Approximating optimal policies for partially observable stochastic domains. In Proceedings of the International Joint Conference on Artificial Intelligence.
Sondik, E. J. (1978). The optimal control of partially observable Markov decision processes over the infinite horizon: Discounted costs. Operations Research, 26(2).
Sutton, R. S. (1990). Integrated architectures for learning, planning, and reacting based on approximating dynamic programming. In Proceedings of the Seventh International Conference on Machine Learning, pages 216-224, San Mateo, CA. Morgan Kaufmann.
Littman, M. (1994). Memoryless policies: theoretical limitations and practical results. In From Animals to Animats 3: Proceedings of the Third International Conference on Simulation of Adaptive Behavior, Cambridge, MA. MIT Press.
A Theory of Mean Field Approximation
T. Tanaka
Department of Electronics and Information Engineering
Tokyo Metropolitan University
1-1, Minami-Osawa, Hachioji, Tokyo 192-0397, Japan
Abstract
I present a theory of mean field approximation based on information geometry. This theory includes in a consistent way the naive mean field
approximation, as well as the TAP approach and the linear response theorem in statistical physics, giving clear information-theoretic interpretations to them.
1 INTRODUCTION
Many problems of neural networks, such as learning and pattern recognition, can be cast into the framework of a statistical estimation problem. How difficult it is to solve a particular problem depends on the statistical model one employs in solving the problem. For Boltzmann machines[1], for example, it is computationally very hard to evaluate expectations of state variables from the model parameters.
Mean field approximation[2], which originated in statistical physics, has been frequently used in practical situations in order to circumvent this difficulty. In the context of statistical physics several advanced theories have been known, such as the TAP approach[3], the linear response theorem[4], and so on. For neural networks, application of mean field approximation has been mostly confined to that of the so-called naive mean field approximation, but there are also attempts to utilize those advanced theories[5, 6, 7, 8].
In this paper I present an information-theoretic formulation of mean field approximation. It is based on information geometry[9], which has been successfully applied to several problems in neural networks[10]. This formulation includes the naive mean field approximation as well as the advanced theories in a consistent way. I give the formulation for Boltzmann machines, but its extension to wider classes of statistical models is possible, as described elsewhere[11].
2 BOLTZMANN MACHINES
A Boltzmann machine is a statistical model with N binary random variables s_i ∈ {−1, 1}, i = 1, ..., N. The vector s = (s_1, ..., s_N) is called the state of the Boltzmann machine.
The state s is also a random variable, and its probability law is given by the Boltzmann-Gibbs distribution

p(s) = e^{−E(s) − ψ(p)},   (1)

where E(s) is the "energy" defined by

E(s) = −Σ_i h_i s_i − Σ_{(ij)} w_{ij} s_i s_j   (2)

with h_i and w_{ij} the parameters, and −ψ(p) is determined by the normalization condition and is called the Helmholtz free energy of p. The notation (ij) means that the summation should be taken over all distinct pairs.
Let η_i(p) ≡ ⟨s_i⟩_p and η_{ij}(p) ≡ ⟨s_i s_j⟩_p, where ⟨·⟩_p means the expectation with respect to p. The following problem is essential for Boltzmann machines:

Problem 1. Evaluate the expectations η_i(p) and η_{ij}(p) from the parameters h_i and w_{ij} of the Boltzmann machine p.
3 INFORMATION GEOMETRY
3.1 ORTHOGONAL DUAL FOLIATIONS
The whole set M of Boltzmann-Gibbs distributions (1) realizable by a Boltzmann machine is regarded as an exponential family. Let us use shorthand notations I, J, ..., to represent distinct pairs of indices, such as ij. The parameters h_i and w_I constitute a coordinate system of M, called the canonical parameters of M. The expectations η_i and η_I constitute another coordinate system of M, called the expectation parameters of M.
Let F_0 be a subset of M on which the w_I are all equal to zero. I call F_0 the factorizable submodel of M, since p(s) ∈ F_0 can be factorized with respect to the s_i. On F_0 the problem is easy: since the w_I are all zero, the s_i are statistically independent of the others, and therefore η_i = tanh h_i and η_{ij} = η_i η_j hold.
Mean field approximation systematically reduces the problem onto the factorizable submodel F_0. For this reduction, I introduce dual foliations F and A onto M. The foliation F = {F(w)}, M = ∪_w F(w), is parametrized by w ≡ (w_I), and each leaf F(w) is defined as

F(w) = {p(s) | w_I(p) = w_I}.   (3)

The leaf F(0) is the same as F_0, the factorizable submodel. Each leaf F(w) is again an exponential family with h_i and η_i the canonical and the expectation parameters, respectively. A pair of dual potentials is defined on each leaf: one is the Helmholtz free energy ψ(p), and another is its Legendre transform, or the Gibbs free energy,

φ(p) = Σ_i h_i(p) η_i(p) − ψ(p),   (4)

and the parameters of p ∈ F(w) are given by

η_i(p) = ∂_i ψ(p),   h_i(p) = ∂^i φ(p),   (5)

where ∂_i ≡ ∂/∂h_i and ∂^i ≡ ∂/∂η_i. Another foliation A = {A(m)}, M = ∪_m A(m), is parametrized by m ≡ (m_i), and each leaf A(m) is defined as

A(m) = {p(s) | η_i(p) = m_i}.   (6)
Each leaf A(m) is not an exponential family, but again a pair of dual potentials ψ̃ and φ̃ is defined on each leaf; the former is given by

ψ̃(p) = ψ(p) − Σ_i h_i(p) η_i(p)   (7)

and the latter by its Legendre transform as

φ̃(p) = Σ_I w_I(p) η_I(p) − ψ̃(p),   (8)

and the parameters of p ∈ A(m) are given by

η_I(p) = ∂_I ψ̃(p),   w_I(p) = ∂^I φ̃(p),   (9)

where ∂_I ≡ ∂/∂w_I and ∂^I ≡ ∂/∂η_I. These two foliations form orthogonal dual foliations, since the leaves F(w) and A(m) are orthogonal at their intersecting point. I introduce still another coordinate system on M, called the mixed coordinate system, on the basis of the orthogonal dual foliations. It uses a pair (m, w) of the expectation and the canonical parameters to specify a single element p ∈ M. The m part specifies the leaf A(m) on which p resides, and the w part specifies the leaf F(w).
3.2 REFORMULATION OF PROBLEM
Assume that a target Boltzmann machine q is given by specifying its parameters h_i(q) and w_I(q). Problem 1 is restated as follows: evaluate its expectations η_i(q) and η_I(q) from those parameters. To evaluate the η_i, mean field approximation translates the problem into the following one:

Problem 2. Let F(w) be the leaf on which q resides. Find the p ∈ F(w) which is closest to q.

At first sight this problem is trivial, since one immediately finds the solution p = q. However, solving this problem with respect to η_i(p) is nontrivial, and it is the key to understanding mean field approximation, including the advanced theories.
Let us measure the proximity of p to q by the Kullback divergence

D(p‖q) = Σ_s p(s) log [p(s)/q(s)];   (10)

then solving Problem 2 reduces to finding a minimizer p ∈ F(w) of D(p‖q) for a given q. For p, q ∈ F(w), D(p‖q) is expressed in terms of the dual potentials ψ and φ of F(w) as

D(p‖q) = ψ(q) + φ(p) − Σ_i h_i(q) η_i(p).   (11)

The minimization problem is thus equivalent to minimizing

G(p) = φ(p) − Σ_i h_i(q) η_i(p),   (12)

since ψ(q) in eq. (11) does not depend on p. Solving the stationary condition ∂^i G(p) = 0 with respect to η_i(p) will give the correct expectations η_i(q), since the true minimizer is p = q. However, this scenario is in general intractable, since φ(p) cannot be given explicitly as a function of η_i(p).
3.3 PLEFKA EXPANSION
The problem is easy if w_I = 0. In this case φ(p) is given explicitly as a function of m_i ≡ η_i(p) as

φ(p) = (1/2) Σ_i [(1 + m_i) log((1 + m_i)/2) + (1 − m_i) log((1 − m_i)/2)].   (13)
Minimization of G(p) with respect to m_i gives the solution m_i = tanh h_i, as expected. When w_I ≠ 0 the expression (13) is no longer exact, but to compensate for the error one may use, leaving the convergence problem aside, the Taylor expansion of φ(w) ≡ φ(p) with respect to w = 0:
φ(w) = φ(0) + Σ_I (∂_I φ(0)) w_I + (1/2) Σ_{IJ} (∂_I ∂_J φ(0)) w_I w_J + (1/6) Σ_{IJK} (∂_I ∂_J ∂_K φ(0)) w_I w_J w_K + ⋯.   (14)
This expansion has been called the Plefka expansion[12] in the literature of spin glasses. Note that in considering the expansion one should temporarily assume that m is fixed: one can rely on the solution m evaluated from the stationary condition ∂G(p) = 0 only if the expansion does not change the value of m.
The coefficients in the expansion can be efficiently computed by fully utilizing the orthogonal dual structure of the foliations. First, we have the following theorem:
Theorem 1. The coefficients of the expansion (14) are given by the cumulant tensors of the corresponding orders, defined on A(m).
Because φ = −ψ̃ holds, one can consider derivatives of ψ̃ instead of those of φ. The first-order derivatives ∂_I ψ̃ are immediately given by the property of the potential of the leaf A(m) (eq. (9)), yielding

∂_I ψ̃(0) = η_I(p_0),   (15)

where p_0 denotes the distribution on A(m) corresponding to w = 0. The coefficients of the lowest orders, including the first-order one, are given by the following theorem.
Theorem 2. The first-, second-, and third-order coefficients of the expansion (14) are given by:

∂_I ψ̃(0) = η_I(p_0),
∂_I ∂_J ψ̃(0) = ⟨(∂_I ℓ)(∂_J ℓ)⟩_{p_0},
∂_I ∂_J ∂_K ψ̃(0) = ⟨(∂_I ℓ)(∂_J ℓ)(∂_K ℓ)⟩_{p_0},   (16)

where ℓ ≡ log p_0.
The proofs will be found in [11]. It should be noted that, although these results happen to be the same as the ones which would be obtained by regarding A(m) as an exponential family, they are not the same in general, since actually A(m) is not an exponential family; for example, they are different for the fourth-order coefficients.
The explicit formulas for these coefficients for Boltzmann machines are given as follows:
• For the first order,

∂_I ψ̃(0) = m_i m_{i'}   (I = ii').   (17)
• For the second order,

(∂_I)² ψ̃(0) = (1 − m_i²)(1 − m_{i'}²)   (I = ii'),   (18)

and

∂_I ∂_J ψ̃(0) = 0   (I ≠ J).   (19)
• For the third order,

(∂_I)³ ψ̃(0) = 4 m_i m_{i'} (1 − m_i²)(1 − m_{i'}²)   (I = ii'),   (20)

and for I = ij, J = jk, K = ik with three distinct indices i, j, and k,

∂_I ∂_J ∂_K ψ̃(0) = (1 − m_i²)(1 − m_j²)(1 − m_k²).   (21)

For other combinations of I, J, and K,

∂_I ∂_J ∂_K ψ̃(0) = 0.   (22)
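As a quick numerical cross-check (ours, not part of the paper), one can compute the Gibbs free energy of a 2-unit Boltzmann machine directly as a Legendre transform and differentiate it with respect to the single coupling w; since φ = −ψ̃, the first and second derivatives at w = 0 should equal eqs. (17) and (18) up to a sign flip:

```python
import numpy as np
from scipy.optimize import fsolve

def phi(m, w):
    # Gibbs free energy of a 2-unit Boltzmann machine at fixed means m:
    # find the fields h realizing <s_i> = m_i, then phi = h.m - psi.
    s = np.array([(s1, s2) for s1 in (-1, 1) for s2 in (-1, 1)], float)
    def gap(h):
        e = s @ h + w * s[:, 0] * s[:, 1]
        p = np.exp(e - e.max()); p /= p.sum()
        return p @ s - m
    h = fsolve(gap, np.zeros(2))
    e = s @ h + w * s[:, 0] * s[:, 1]
    return h @ m - np.log(np.exp(e).sum())

m, eps = np.array([0.3, -0.2]), 1e-3
d1 = (phi(m, eps) - phi(m, -eps)) / (2 * eps)
d2 = (phi(m, eps) - 2 * phi(m, 0.0) + phi(m, -eps)) / eps**2
assert np.allclose(d1, -m[0] * m[1], atol=1e-6)                 # cf. (17)
assert np.allclose(d2, -(1 - m[0]**2) * (1 - m[1]**2), atol=1e-4)  # cf. (18)
```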
4 MEAN FIELD APPROXIMATION
4.1 MEAN FIELD EQUATION
Truncating the Plefka expansion (14) at the n-th order term gives the n-th order approximations φ_n(p) and G_n(p) ≡ φ_n(p) − Σ_i h_i(q) m_i. The Weiss free energy, which is used in the naive mean field approximation, is given by φ_1(p). The TAP approach picks up all relevant terms of the Plefka expansion[12], and for the SK model it gives the second-order approximation φ_2(p).
The stationary condition ∂^i G_n(p) = 0 gives the so-called mean field equation, from which a solution of the approximate minimization problem is to be determined. For n = 1 it takes the following familiar form,

tanh⁻¹ m_i − h_i − Σ_{j≠i} w_{ij} m_j = 0,   (23)

and for n = 2 it includes the so-called Onsager reaction term,

tanh⁻¹ m_i − h_i − Σ_{j≠i} w_{ij} m_j + Σ_{j≠i} (w_{ij})² (1 − m_j²) m_i = 0.   (24)

Note that all of these are expressed as functions of the m_i.
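A minimal NumPy sketch of iterating these equations (ours; a plain fixed-point iteration, which may need damping to converge for strong couplings):

```python
import numpy as np

def mean_field(h, W, order=2, iters=500):
    # Fixed-point iteration of eq. (23) (order=1, naive mean field) or
    # eq. (24) (order=2, with the Onsager reaction term) for a Boltzmann
    # machine with fields h and symmetric couplings W (zero diagonal).
    m = np.zeros(len(h))
    for _ in range(iters):
        field = h + W @ m
        if order == 2:
            field -= m * (W**2 @ (1.0 - m**2))   # Onsager correction
        m = np.tanh(field)
    return m
```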
Geometrically, the mean field equation approximately represents the "surface" h_i(p) = h_i(q) in terms of the mixed coordinate system of M, since for the exact Gibbs free energy G the stationary condition ∂^i G(p) = 0 gives h_i(p) − h_i(q) = 0. Accordingly, the approximate relation h_i(p) = ∂^i φ_n(p), for fixed m, represents the n-th order approximate expression of the leaf A(m) in the canonical coordinate system. The fit of this expression to the true leaf A(m) around the point w = 0 becomes better as the order of approximation gets higher, as seen in Fig. 1. Such behavior is well expected, since the Plefka expansion is essentially a Taylor expansion.
4.2 LINEAR RESPONSE
For estimating η_I(p) one can utilize the linear response theorem. In the information-geometrical framework it is represented as a trivial identity relation for the Fisher information on the leaf F(w). The Fisher information matrix (g_{ij}), or the Riemannian metric tensor, on the leaf F(w), and its inverse (g^{ij}), are given by

g_{ij}(p) = ∂_i ∂_j ψ(p) = η_{ij}(p) − η_i(p) η_j(p)   (25)
Figure 1: Approximate expressions of A(m) by mean field approximations of several orders (0th-, 1st-, and 2nd-order curves shown) for a 2-unit Boltzmann machine, with (m_1, m_2) = (0.5, 0.5) (left), and their magnified view (right).
Figure 2: Relation between "naive" approximation and present theory.
and

g^{ij}(p) = ∂^i ∂^j φ(p),   (26)

respectively. In the framework here, the linear response theorem states the trivial fact that these are the inverses of each other. In mean field approximation, one substitutes an approximation φ_n(p) in place of φ(p) in eq. (26) to get an approximate inverse of the metric, (g̃^{ij}). The derivatives in eq. (26) can be calculated analytically, and therefore (g̃^{ij}) can be numerically evaluated by substituting into it a solution m̃_i of the mean field equation. Equating its inverse to (g_{ij}) gives an estimate of η_{ij}(p) by using eq. (25). So far, Problem 1 has been solved within the framework of mean field approximation, with η_i and η_{ij} obtained by the mean field equation and the linear response theorem, respectively.
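A sketch of this linear response computation at second order (ours; the entries of g̃^{ij} below are our own differentiation of the second-order Gibbs free energy φ_2, and should be treated as an illustration rather than the paper's exact algorithm):

```python
import numpy as np

def linear_response_eta(W, m):
    # Approximate metric g~^{ij} = d^2 phi_2 / dm_i dm_j at the mean
    # field solution m (W symmetric, zero diagonal); by eq. (25) its
    # inverse plays the role of the covariance eta_ij - m_i m_j.
    G = -W - 2.0 * W**2 * np.outer(m, m)                    # off-diagonal
    np.fill_diagonal(G, 1.0 / (1.0 - m**2) + W**2 @ (1.0 - m**2))
    C = np.linalg.inv(G)               # estimated covariance matrix
    eta = C + np.outer(m, m)           # eta_ij = C_ij + m_i m_j
    np.fill_diagonal(eta, 1.0)         # s_i^2 = 1 holds exactly
    return eta
```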
5 DISCUSSION
Following the framework presented so far, one can in principle construct algorithms of mean field approximation of desired orders. The first-order algorithm with linear response was first proposed and examined by Kappen and Rodríguez[7, 8]. Tanaka[13] has formulated second- and third-order algorithms and explored them by computer simulations.
It is also possible to extend the present formulation so that it is applicable to higher-order Boltzmann machines. Tanaka[14] discusses an extension of the present formulation to third-order Boltzmann machines: it is possible to extend the linear response theorem to higher orders, and this allows us to treat higher-order correlations within the framework of mean field approximation.
The common understanding about the "naive" mean field approximation is that it minimizes the Kullback divergence D(p_0‖q) with respect to p_0 ∈ F_0 for a given q. It can be shown that this view is consistent with the theory presented in this paper. Assume that q ∈ F(w) and p_0 ∈ A(m), and let p be the distribution corresponding to the intersecting point of the leaves F(w) and A(m). Because of the orthogonality of the two foliations F and A, the following "Pythagorean law[9]" holds (Fig. 2):

D(p_0‖q) = D(p_0‖p) + D(p‖q).   (27)

Intuitively, D(p_0‖p) measures the squared distance between F(w) and F_0, and is a second-order quantity in w. It should be ignored in the first-order approximation, and thus D(p_0‖q) ≈ D(p‖q) holds. Under this approximation, minimization of the former with respect to p_0 is equivalent to that of the latter with respect to p, which establishes the relation between the "naive" approximation and the present theory. It can also be checked directly that the first-order approximation of D(p‖q) exactly gives D(p_0‖q), the Weiss free energy.
The present theory provides an alternative view on the validity of mean field approximation: as opposed to a common "belief" that mean field approximation is good when N is sufficiently large, one can state from the present formulation that it is good whenever the higher-order contributions of the Plefka expansion vanish, regardless of whether N is large or not. This provides a theoretical basis for the observation that mean field approximation often works well for small networks.
The author would like to thank the Telecommunications Advancement Foundation for financial support.
References
[1] Ackley, D. H., Hinton, G. E., and Sejnowski, T. J. (1985) A learning algorithm for Boltzmann machines. Cognitive Science 9: 147-169.
[2] Peterson, C., and Anderson, J. R. (1987) A mean field theory learning algorithm for neural networks. Complex Systems 1: 995-1019.
[3] Thouless, D. J., Anderson, P. W., and Palmer, R. G. (1977) Solution of 'Solvable model of a spin glass'. Phil. Mag. 35 (3): 593-601.
[4] Parisi, G. (1988) Statistical Field Theory. Addison-Wesley.
[5] Galland, C. C. (1993) The limitations of deterministic Boltzmann machine learning. Network 4 (3): 355-379.
[6] Hofmann, T. and Buhmann, J. M. (1997) Pairwise data clustering by deterministic annealing. IEEE Trans. Pattern Anal. & Machine Intell. 19 (1): 1-14; Errata, ibid. 19 (2): 197 (1997).
[7] Kappen, H. J. and Rodríguez, F. B. (1998) Efficient learning in Boltzmann machines using linear response theory. Neural Computation 10 (5): 1137-1156.
[8] Kappen, H. J. and Rodríguez, F. B. (1998) Boltzmann machine learning using mean field theory and linear response correction. In M. I. Jordan, M. J. Kearns, and S. A. Solla (Eds.), Advances in Neural Information Processing Systems 10, pp. 280-286. The MIT Press.
[9] Amari, S.-I. (1985) Differential-Geometrical Method in Statistics. Lecture Notes in Statistics 28, Springer-Verlag.
[10] Amari, S.-I., Kurata, K., and Nagaoka, H. (1992) Information geometry of Boltzmann machines. IEEE Trans. Neural Networks 3 (2): 260-271.
[11] Tanaka, T. Information geometry of mean field approximation. Preprint.
[12] Plefka, P. (1982) Convergence condition of the TAP equation for the infinite-ranged Ising spin glass model. J. Phys. A: Math. Gen. 15 (6): 1971-1978.
[13] Tanaka, T. (1998) Mean field theory of Boltzmann machine learning. Phys. Rev. E 58 (2): 2302-2310.
[14] Tanaka, T. (1998) Estimation of third-order correlations within mean field approximation. In S. Usui and T. Omori (Eds.), Proc. Fifth International Conference on Neural Information Processing, vol. 1, pp. 554-557.
PART IV: ALGORITHMS AND ARCHITECTURE
Tight Bounds for the VC-Dimension of Piecewise Polynomial Networks
Akito Sakurai
School of Knowledge Science
Japan Advanced Institute of Science and Technology
Nomi-gun, Ishikawa 923-1211, Japan.
CREST, Japan Science and Technology Corporation.
ASakurai@jaist.ac.jp
Abstract
O(ws(s log d + log(dqh/s))) and O(ws((h/s) log q + log(dqh/s))) are upper bounds for the VC-dimension of a set of neural networks of units with piecewise polynomial activation functions, where s is the depth of the network, h is the number of hidden units, w is the number of adjustable parameters, q is the maximum number of polynomial segments of the activation function, and d is the maximum degree of the polynomials; also Ω(ws log(dqh/s)) is a lower bound for the VC-dimension of such a network set. These bounds are tight for the cases s = Θ(h) and s constant. For the special case q = 1, the VC-dimension is Θ(ws log d).
1 Introduction
In spite of its importance, we had been unable to obtain VC-dimension values for practical types of networks until fairly tight upper and lower bounds were obtained ([6], [8], [9], and [10]) for linear threshold element networks, in which all elements perform a threshold function on a weighted sum of inputs. Roughly, the lower bound for these networks is (1/2)w log h and the upper bound is w log h, where h is the number of hidden elements and w is the number of connecting weights (for the one-hidden-layer case w ≈ nh, where n is the input dimension of the network).
In many applications, though, sigmoidal functions, specifically a typical sigmoid function 1/(1 + exp(−x)), or piecewise linear functions for economy of calculation, are used instead of the threshold function. This is mainly because the differentiability of the functions is needed to perform backpropagation or other learning algorithms. Unfortunately, explicit bounds obtained so far for the VC-dimension of sigmoidal networks exhibit large gaps (O(w²h²) ([3]), Ω(w log h) for bounded depth
and Ω(wh) for unbounded depth) and are hard to improve. For the piecewise linear case, Maass obtained the result that the VC-dimension is O(w² log q), where q is the number of linear pieces of the function ([5]).
Recently Koiran and Sontag ([4]) proved a lower bound Ω(w²) for the piecewise polynomial case, and they claimed this solved an open problem posed by Maass, namely whether there is a matching w² lower bound for this type of network. But something remains to be done, since they showed it only for the case w = Θ(h) with the number of hidden layers unbounded; also the O(w²) bound has room for improvement.
In this paper we improve the bounds obtained by Maass, Koiran and Sontag, and consequently show the role of polynomials, which cannot be played by linear functions, and the role of the constant functions that can appear in the piecewise polynomial case, which cannot be played by polynomial functions.
After submission of the draft, we found that Bartlett, Maiorov, and Meir had obtained similar results prior to ours (also in this proceedings). Our advantage is that we clarified the roles played by the degree and the number of segments in both bounds.
2 Terminology and Notation
log stands for the logarithm base 2 throughout the paper.
The depth of a network is the length of the longest path from its external inputs to its external output, where the length is the number of units on the path. Likewise we can assign a depth to each unit in a network as the length of the longest path from the external inputs to the output of the unit. A hidden layer is a set of units at the same depth other than the depth of the network. Therefore a depth-L network has L − 1 hidden layers.
In many cases w will stand for a vector composed of all the connection weights in the network (including threshold values for the threshold units), and w is the length of w. The number of units in the network, excluding "input units," will be denoted by h; in other words, it is the number of hidden units plus one, or sometimes just the number of hidden units. A function whose range is {0, 1} (the set of 0 and 1) is called a Boolean-valued function.
3 Upper Bounds
To obtain upper bounds for the VC-dimension we use a region counting argument, developed by Goldberg and Jerrum [2]. The VC-dimension of the network, that is, the VC-dimension of the function set {f_G(w; ·) | w ∈ R^w}, is upper bounded by

max{ N | 2^N ≤ max_{x_1,...,x_N} N_cc( R^w − ∪_{i=1}^N N(f_G(·; x_i)) ) },   (3.1)

where N_cc(·) is the number of connected components and N(f) is the set {w | f(w) = 0}.
The following two theorems are convenient. Refer to [11] and [7] for the first theorem. The lemma that follows is easily proven.
Theorem 3.1. Let f_G(w; x_i) (1 ≤ i ≤ N) be real polynomials in w, each of degree d or less. The number of connected components of the set ∩_{i=1}^N {w | f_G(w; x_i) = 0} is bounded from above by 2(2d)^w, where w is the length of w.
Lemma 3.2. If m ≥ w(log C + log log C + 1), then 2^m > (mC/w)^w for C ≥ 4.
First let us consider the polynomial activation function case.
Theorem 3.3. Suppose that the activation functions are polynomials of degree at most d. O(ws log d) is an upper bound on the VC-dimension for networks of depth s. When s = Θ(h) the bound is O(wh log d). More precisely, ws(log d + log log d + 2) is an upper bound. Note that if we allow a polynomial as the input function, d₁d₂ will replace d above, where d₁ is the maximum degree of the input functions and d₂ is that of the activation functions.
The theorem is clear from the facts that the network function (f_G in (3.1)) is a polynomial of degree at most d^s + d^{s−1} + ⋯ + d, together with Theorem 3.1 and Lemma 3.2.
For the piecewise polynomial case, we have two types of bounds. The first one is suitable for bounded-depth cases (i.e. depth s = o(h)) and the second one for the unbounded-depth case (i.e. s = Θ(h)).
Theorem 3.4. Suppose that the activation functions are piecewise polynomials with at most q segments of polynomials of degree at most d. O(ws(s log d + log(dqh/s))) and O(ws((h/s) log q + log(dqh/s))) are upper bounds for the VC-dimension, where s is the depth of the network. More precisely, ws((s/2) log d + log(qh)) and ws((h/s) log q + log d) are asymptotic upper bounds. Note that if we allow a polynomial as the input function then d₁d₂ will replace d above, where d₁ is the maximum degree of the input functions and d₂ is that of the activation functions.
Proof. We have two different ways to calculate the bounds. First,

∏_{i=1}^{s} (8eNqh_i s(d^{i−1} + ⋯ + d + 1)d)^{w_1+⋯+w_i} ≤ (8eNqd^{(s+1)/2}(h/s))^{ws},

where h_i is the number of hidden units in the i-th layer and ∘ is an operator to form a new vector by concatenating the two. From this we get an asymptotic upper bound ws((s/2) log d + log(qh)) for the VC-dimension. A second calculation gives the asymptotic upper bound ws((h/s) log q + log d) for the VC-dimension. Combining these two bounds we get the result. Note that the s in log(dqh/s) is introduced to eliminate an unduly large term emerging when s = Θ(h). □
4 Lower Bounds for Polynomial Networks
Theorem 4.1. Let us consider the case where the activation functions are polynomials of degree at most d. Ω(ws log d) is a lower bound on the VC-dimension for networks of depth s. When s = Θ(h) the bound is Ω(wh log d). More precisely, (1/16)w(s − 6) log d is an asymptotic lower bound, where d is the degree of the activation functions and is a power of two, and h is restricted to O(n²) for input dimension n.
The proof consists of several lemmas. The network we are constructing will have two parts: an encoder and a decoder. We deliberately fix the N input points. The decoder part has a fixed underlying architecture with fixed connecting weights, whereas the encoder part has variable weights, so that for any given binary outputs for the input points the decoder can output the specified value from the codes in which the output value is encoded by the encoder.
First we consider the decoder, which has two real inputs and one real output. One of the two inputs, y, holds a code of a binary sequence b_1, b_2, ..., b_m and the other, x, holds a code of a binary sequence c_1, c_2, ..., c_m. The elements of the latter sequence are all 0's except for c_j = 1, where c_j = 1 orders the decoder to output b_j from it and consequently from the network.
We show two types of networks: one has activation functions of degree at most two and VC-dimension w(s−1); the other has activation functions of degree d, a power of two, and VC-dimension w(s−5) log d.
We use for convenience two functions: H_θ(x) = 1 if x ≥ θ and 0 otherwise, and H_{θ,φ}(x) = 1 if x ≥ φ, 0 if x ≤ θ, and undefined otherwise. Throughout this section we will use the simple logistic function ρ(x) = (16/3)x(1 − x), which has the following property.
Lemma 4.2. For any binary sequence b_1, b_2, ..., b_m, there exists an interval [x_1, x_2] such that b_i = H_{1/4,3/4}(ρ^i(x)) and 0 ≤ ρ^i(x) ≤ 1 for any x ∈ [x_1, x_2].
The next lemmas are easily proven.
Lemma 4.3. For any binary sequence c_1, c_2, ..., c_m which are all 0's except for c_j = 1, there exists x_0 such that c_i = H_{1/4,3/4}(ρ^i(x_0)). Specifically we will take x_0 = ρ_L^{−(j−1)}(1/4), where ρ_L^{−1}(x) is the inverse of ρ(x) on [0, 1/2]. Then ρ^{j−1}(x_0) = 1/4, ρ^j(x_0) = 1, ρ^i(x_0) = 0 for all i > j, and ρ^{j−i}(x_0) ≤ (1/4)^i for all positive i ≤ j.
Proof. Clear from the fact that ρ(x) ≥ 4x on [0, 1/4]. □
Lemma 4.4. For any binary sequence b_1, b_2, ..., b_m, take y such that b_i = H_{1/4,3/4}(ρ^i(y)) and 0 ≤ ρ^i(y) ≤ 1 for all i, and x_0 = ρ_L^{−(j−1)}(1/4); then H_{7/12,3/4}(Σ_{i=1}^m ρ^i(x_0)ρ^i(y)) = b_j, i.e. H_0(Σ_{i=1}^m ρ^i(x_0)ρ^i(y) − 2/3) = b_j.
Proof. If b_j = 0, Σ_{i=1}^m ρ^i(x_0)ρ^i(y) = Σ_{i=1}^j ρ^i(x_0)ρ^i(y) ≤ ρ^j(y) + Σ_{i≥1}(1/4)^i < ρ^j(y) + 1/3 ≤ 7/12. If b_j = 1, Σ_{i=1}^m ρ^i(x_0)ρ^i(y) > ρ^j(x_0)ρ^j(y) ≥ 3/4. □
By the above lemmas, the network in Figure 1 (left) has the following function: suppose that a binary sequence b_1, ..., b_m and an integer j are given. Then we can present y, which depends only on b_1, ..., b_m, and x_0, which depends only on j, such that b_j is output from the decoder. Note that we use (x + y)² − (x − y)² = 4xy to realize a multiplication unit.
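As a numerical illustration of Lemmas 4.2-4.4 (ours, not part of the paper), the following Python sketch encodes a bit string in the ρ-iterates of a single real number y by backward preimages, builds x_0 by repeated left inverses of ρ, and decodes any chosen bit with the 2/3 threshold:

```python
import numpy as np

def rho(x):
    return (16.0 / 3.0) * x * (1.0 - x)

def rho_inv(y, branch):
    # The two preimages of y under rho: the left one lies in [0, 1/4]
    # and the right one in [3/4, 1] whenever y is in [0, 1].
    r = np.sqrt(1.0 - 0.75 * y)
    return 0.5 * (1.0 - r) if branch == 0 else 0.5 * (1.0 + r)

def encode(bits):
    # Lemma 4.2: choose y so that rho^i(y) lands in [0, 1/4] or [3/4, 1]
    # according to bit i (built by backward preimages).
    z = 0.1 if bits[-1] == 0 else 0.9
    for b in reversed(bits[:-1]):
        z = rho_inv(z, b)
    return rho_inv(z, 0)

def decode(y, j, m):
    # Lemma 4.4: x0 = rho_L^{-(j-1)}(1/4) selects bit j; the threshold
    # unit H_0(sum - 2/3) then outputs it.
    x0 = 0.25
    for _ in range(j - 1):
        x0 = rho_inv(x0, 0)
    s, px, py = 0.0, x0, y
    for _ in range(m):
        px, py = rho(px), rho(py)
        s += px * py
    return int(s - 2.0 / 3.0 >= 0.0)

bits = [1, 0, 1, 1, 0, 1, 0, 0]
y = encode(bits)
assert [decode(y, j + 1, len(bits)) for j in range(len(bits))] == bits
```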
For the case of degree higher than two we have to construct a slightly more complicated network, using another simple logistic function μ(x) = (36/5)x(1 − x). We need the next lemma.
Lemma 4.5. Take x_0 = μ_L^{−(j−1)}(1/6), where μ_L^{−1}(x) is the inverse of μ(x) on [0, 1/2]. Then μ^{j−1}(x_0) = 1/6, μ^j(x_0) = 1, μ^i(x_0) = 0 for all i > j, and μ^{j−i}(x_0) ≤ (1/6)^i for all i > 0 and ≤ j.

Figure 1: Network architecture consisting of polynomials of order two (left) and those of order of power of two (right).
Proof. Clear from the fact that μ(x) ≥ 6x on [0, 1/6]. □
Lemma 4.6. For any binary sequence b_1, b_2, ..., b_k, b_{k+1}, b_{k+2}, ..., b_{2k}, ..., b_{(m−1)k+1}, ..., b_{mk}, take y such that b_i = H_{1/4,3/4}(ρ^i(y)) and 0 ≤ ρ^i(y) ≤ 1 for all i. Moreover, for any 1 ≤ j ≤ m and any 1 ≤ l ≤ k, take x_1 = μ_L^{−(j−1)}(1/6) and x_0 = μ_L^{−(l−1)}(1/6^k). Then for z = Σ_{i=1}^m ρ^{ik}(y) μ^{ik}(x_1),

H_0(Σ_{i=0}^{l−1} ρ^i(z) μ^i(x_0) − 1/2) = b_{kj+l}

holds.
Lemma 4.7. If 0 < ρ^i(x) < 1 for any 0 < i ≤ l, take an ε such that (16/3)^l ε < 1/4. Then ρ^l(x) − (16/3)^l ε < ρ^l(x + ε) < ρ^l(x) + (16/3)^l ε.
Proof. There are four cases, depending on whether ρ^{l−1}(x + ε) is on the uphill or downhill of ρ and whether x is on the uphill or downhill of ρ^{l−1}. The proofs are done by induction. First suppose that the two are on the uphill. Then ρ^l(x + ε) = ρ(ρ^{l−1}(x + ε)) < ρ(ρ^{l−1}(x) + (16/3)^{l−1}ε) < ρ^l(x) + (16/3)^l ε. Secondly suppose that ρ^{l−1}(x + ε) is on the uphill but x is on the downhill. Then ρ^l(x + ε) = ρ(ρ^{l−1}(x + ε)) > ρ(ρ^{l−1}(x) − (16/3)^{l−1}ε) > ρ^l(x) − (16/3)^l ε. The other two cases are similar. □
Proof of Lemma 4.6.
We will show that the difference between piHl(y)
and E~==-ol p'(z)J-Li(xo) is sufficiently small. Clearly Z = E:1 J-Lik(X1)pik(y) =
E{=l J-Lik(X1)pik(y) $ pik(y)+ E{~i(1/6k)i < pik(y)+1/(6 k -1) and pik(y) < z. If
Z is on the uphill of pI then by using the above lemma, we get E~==-Ol pi(z)J-Li(xO) =
E~=o p'(z)J-Li(xo) < pl(z) + 1/(6 k - 1) < piHl(y) + (1 + (16/3)1)(1/(6 k - 1)) <
pik+1(y) + 1/4 (note that 1 $ k - 1 and k ~ 2). If z is on the downhill of pI then
by using the above lemma, we get E~==-Ol pi(Z)J-Li(xo) = E~=o pi(z)J-Li(xo) > pl(z) >
pl(pik(y)) _ (16/3)1(1/(6 k - 1)) > pik+l(y) - 1/4.
0
Next we show the encoding scheme we adopted. We show only the case w = 8(h 2 )
since the case w = 8(h) or more generally w = O(h2) is easily obtained from this.
Theorem 4.8 There is a network of2n inputs, 2h hidden units with h 2 weights w,
A. Sakurai
328
and h 2 sets of input values Xl, ... ,Xh2 such that for any set of values
we can chose W to satisfy Yi = fG(w; Xi).
Y1, ... , Yh2
Proof. We extensively utilize the fact that monomials obtained by choosing at most
k variables from n variables with repetition allowed (say X~X2X6) are all linearly
independent ([1]). Note that the number of monomials thus formed is (n~m).
Suppose for simplicity that we have 2n inputs and 2h main hidden units (we have
other hidden units too), and h = (n~m). By using multiplication units (in fact each
is a composite of two squaring units and the outputs are supposed to be summed up
as in Figure 1), we can form h = (n~m) linearly independent monomials composed
of variables Xl, . ?? ,X n by using at most (m -l)h multiplication units (or h nominal
units when m = 1). In the same way, we can form h linearly independent monomials
composed of variables Xn+ll . .? , X2n. Let us denote the monomials by U1, ?.? , Uh
and V1, . .. , Vh.
We form a subnetwork to calculate 2:7=1 (2:7=1 Wi,jUi)Vj by using h multiplication
units. Clearly the calculated result Y is the weighted sum of monomials described
above where the weights are Wi,j for 1 $ i, j $ h.
Since y = fG(w; x) is a linear combination of linearly independent terms, if we
choose appropriately h 2 sets of values Xll . . . , Xh2 for X = (Xl, .. ? , X2n) , then for
any assignment of h 2 values Y1, ... ,Yh2 to Y we have a set of weights W such that
Yi = f(xi, w).
0
Proof of Theorem -4.1. The whole network consists of the decoder and the encoder.
The input points are the Cartesian product of the above Xl, ... ,Xh2 and {xo defined
in Lemma 4.4 for bj = 111 $ j :$ 8'} for some h where 8' is the number of bits to
be encoded. This means that we have h 2 s points that can be shattered.
Let the number of hidden layers of the decoder be 8. The number of units used
for the decoder is 4(8 - 1) + 1 (for the degree 2 case which can decode at most 8
bits) or 4(8 - 3) + 4(k - 1) + 1 (for the degree 2k case which can decode at most
(8 - 2)k bits). The number of units used for the encoder is less than 4h; we though
have constraints on 8 (which dominates the depth of the network) and h (which
dominates the number of units in the network) that h :$ (n~m) and m = O(s) or
roughly log h = 0(8) be satisfied.
Let us chose m = 2 (m = log 8 is a better choise). As a result, by using 4h + 4(s I} + 1 (or 4h + 4(8 - 3) + 4(k -1) + 1) units in s + 2 layers, we can shatter h 2 8 (or
h 2 (8 - 2) log d) points; or asymptotically by using h units 8 layers we can shatter
(1/16)w( 8 - 3) (or (1/16)w( 8 - 5) log d) points.
0
5
Piecewise Polynomial Case
Theorem 5.1. Let us consider a set of networks of units with linear input functions and piecewise polynomial (with q polynomial segments) activation functions .
Q(W8 log( dqh/ 8)) is a lower bound of the VC-dimension, where 8 is the depth of the
network and d is the maximum degree of the activation functions. More precisely,
(1/16)w(s - 6)(10gd+ log(h/s) + logq) is an asymptotic lower bound.
For the scarcity of space, we give just an outline of the proof. Our proof is based
on that of the polynomial networks. We will use h units with activation function
of q ~ 2 polynomial segments of degree at most d in place of each of pk unit in the
decoder, which give the ability of decoding log dqh bits in one layer and slog dqh
bits in total by 8( 8h) units in total. If h designates the total number of units, the
329
Tight Bounds for the VC-Dimension of Piecewise Polynomial Networks
number of the decodable bits is represented as log(dqh/s).
In the following for simplicity we suppose that dqh is a power of 2. Let pk(x) be
the k composition of p(x) as usual i.e. pk(x) = p(pk-l(x)) and pl(X) = p(x). Let
plogd,/(x) = /ogd(,X/(x)), where 'x(x) = 4x if x $ 1/2 and 4 - 4x otherwise, which
by the way has 21 polynomial segments.
Now the pk unit in the polynomial case is replaced by the array
h units that is defined as follows:
(i)
(ii)
/ogd,logq,logh(x)
of
is an array of two units; one is plogd,logq(,X+(x)) where ,X+(x) =
4x if x $ 1/2 and 0 otherwise and the other is plog d,log q ('x - (x)) where ,X - (x) = 0
if x $ 1/2 and 4 - 4x otherwise.
plogd,logq,l(X)
plog d,log q,m~x) is the array
plogd,logq(,X ( . .? ('x?(x)) . . . ))
of 2m units, each with one of the functions
where ,X?( ... ('x?(x)) .. ?) is the m composition
of 'x+(x) or 'x - (x). Note that ,X?( ... ('x?(x)) ... ) has at most three linear segments (one is linear and the others are constant 0) and the sum of 2m possible
combinations t(,X?(. . . ('x?(x)) ? . . )) is equal to t(,Xm(x)) for any function f
such that f(O) = O.
Then lemmas similar to the ones in the polynomial case follow.
References
[1] Anthony, M: Classification by polynomial surfaces, NeuroCOLT Technical Report Series, NC-TR-95-011 (1995).
[2] Goldberg, P. and M. Jerrum: Bounding the Vapnik-Chervonenkis dimension
of concept classes parameterized by real numbers, Proc. Sixth Annual ACM
Conference on Computational Learning Theory, 361-369 (1993).
[3] Karpinski, M. and A. Macintyre, Polynomial bounds for VC dimension of sigmoidal neural networks, Proc. 27th ACM Symposium on Theory of Computing,
200-208 (1995) .
[4] Koiran, P. and E. D. Sontag: Neural networks with quadratic VC dimension,
Journ. Compo Syst. Sci., 54, 190-198(1997).
[5] Maass, W . G.: Bounds for the computational power and learning complexity of
analog neural nets, Proc. 25th Annual Symposium of the Theory of Computing,
335-344 (1993).
[6] Maass, W. G.: Neural nets with superlinear VC-dimension, Neural Computation, 6, 877-884 (1994)
[7] Milnor, J.: On the Betti numbers of real varieties, Proc. of the AMS, 15,
275-280 (1964).
[8] Sakurai, A.: Tighter Bounds of the VC-Dimension of Three-layer Networks,
Proc. WCNN'93, III, 540-543 (1993).
[9] Sakurai, A.: On the VC-dimension of depth four threshold circuits and the
complexity of Boolean-valued functions, Proc. ALT93 (LNAI 744), 251-264
(1993) ; refined version is in Theoretical Computer Science, 137, 109-127 (1995).
[10] Sakurai, A. : On the VC-dimension of neural networks with a large number of
hidden layers, Proc. NOLTA'93, IEICE, 239-242 (1993).
[11] Warren, H. E.: Lower bounds for approximation by nonlinear manifolds, Trans .
AMS, 133, 167-178, (1968) .
| 1605 |@word version:1 polynomial:32 open:1 tr:1 series:1 chervonenkis:1 ours:1 activation:14 realize:1 compo:1 draft:1 clarified:1 sigmoidal:3 unbounded:2 shatter:2 c2:2 symposium:2 consists:2 uphill:4 roughly:2 ol:4 ifm:1 bounded:4 notation:1 underlying:1 moreover:1 circuit:1 cm:2 emerging:1 developed:1 corporation:1 whlogd:2 w8:1 xd:1 unit:35 appear:1 positive:1 encoding:1 path:3 plus:1 chose:2 range:1 bi:6 practical:1 backpropagation:1 composite:1 convenient:1 matching:1 word:1 spite:1 jui:1 get:5 cannot:1 convenience:1 superlinear:1 operator:1 simplicity:2 array:3 qh:2 suppose:7 flj:1 nominal:1 decode:2 ogd:3 goldberg:2 element:4 submission:1 logq:7 role:3 wcnn:1 solved:1 calculate:2 region:1 wj:4 connected:2 complexity:2 tight:6 segment:7 uh:1 easily:3 represented:1 choosing:1 refined:1 whose:1 encoded:2 posed:1 valued:2 say:1 otherwise:5 encoder:5 ability:1 jerrum:2 maiorov:1 advantage:1 sequence:8 net:2 ment:1 product:1 combining:1 supposed:1 pll:1 ac:1 vcdimension:1 school:1 qd:1 vc:22 assign:1 fix:1 tighter:1 secondly:2 pl:16 hold:3 sufficiently:1 exp:1 bj:4 koiran:3 proc:7 iw:1 wl:1 repetition:1 weighted:2 clearly:2 hj:1 logh:1 longest:2 mainly:1 am:2 economy:1 squaring:1 shattered:1 eliminate:1 lnai:1 w:13 hidden:14 journ:1 classification:1 denoted:1 special:1 fairly:1 summed:1 equal:1 construct:1 x2n:2 ishikawa:1 others:1 report:1 piecewise:13 b2k:1 composed:3 decodable:1 replaced:1 consisting:1 xh2:3 slog:1 undefined:1 bki:1 xy:1 logarithm:1 theoretical:1 boolean:2 sakurai:7 assignment:1 plog:2 monomials:6 too:1 gd:1 decoding:1 connecting:2 satisfied:1 choose:1 external:3 li:6 japan:3 syst:1 b2:4 satisfy:1 depends:2 piece:1 h1:1 ogc:1 complicated:1 formed:1 likewise:1 mc:1 pihl:2 sixth:1 proof:12 di:1 proved:1 wh:1 knowledge:1 yh2:2 cj:3 higher:1 follow:1 done:1 though:2 just:2 until:1 nonlinear:1 logistic:2 ieice:1 concept:1 deliberately:1 maass:5 sin:1 ll:3 outline:1 vo:4 cp:1 l1:1 logd:1 recently:1 sigmoid:1 jp:1 nh:1 analog:1 refer:1 composition:2 had:2 surface:1 base:1 something:1 showed:1 claimed:1 binary:8 yi:2 ii:1 lik:3 technical:1 calculation:1 concerning:1 karpinski:1 sometimes:1 whereas:1 interval:1 appropriately:1 integer:1 counting:1 iii:1 variety:1 architecture:2 bmk:1 whether:1 bartlett:1 sontag:3 generally:1 clear:3 extensively:1 differentiability:1 rw:2 meir:1 macintyre:1 dqh:11 four:2 terminology:1 threshold:6 pj:1 utilize:1 v1:1 asymptotically:1 sum:3 inverse:2 parameterized:1 place:1 throughout:2 pik:9 bit:7 bound:37 layer:10 hi:1 followed:1 played:3 fl:3 quadratic:1 annual:2 precisely:4 constraint:1 x2:2 u1:1 combination:2 wi:2 restricted:1 xo:28 needed:1 adopted:1 plogd:4 ho:1 uj:1 bl:6 fa:1 usual:1 exhibit:1 subnetwork:1 unable:1 neurocolt:1 sci:1 decoder:10 gun:1 manifold:1 induction:1 length:5 code:3 nc:1 unfortunately:1 adjustable:1 perform:2 xll:1 upper:12 excluding:1 y1:2 introduced:1 bk:3 specified:1 nomi:1 connection:1 h9:2 unduly:1 nee:1 trans:1 xm:1 max:1 including:1 power:5 suitable:1 advanced:1 scheme:1 improve:3 technology:2 jaist:1 vh:1 prior:1 multiplication:4 asymptotic:5 proven:2 h2:1 degree:17 nolta:1 pi:30 lo:1 l_:1 warren:1 allow:2 institute:1 fg:6 dimension:29 depth:17 stand:2 xn:1 calculated:1 bm:5 far:1 crest:1 iayer:1 xi:3 designates:1 betti:1 cl:2 constructing:1 fli:3 vj:1 anthony:1 pk:5 main:1 linearly:4 whole:1 bounding:1 allowed:1 x1:2 en:1 downhill:4 explicit:1 concatenating:1 xl:8 theorem:11 xt:1 dominates:2 exists:2 vapnik:1 importance:1 ci:1 cartesian:1 gap:1 argu:1 acm:2 consequently:2 room:1 replace:2 hard:1 
specifically:2 typical:1 except:2 lemma:18 called:1 total:3 milnor:1 latter:1 scarcity:1 |
662 | 1,606 | Tight Bounds for the VC-Dimension of
Piecewise Polynomial Networks
Akito Sakurai
School of Knowledge Science
Japan Advanced Institute of Science and Technology
Nomi-gun, Ishikawa 923-1211, Japan.
CREST, Japan Science and Technology Corporation.
ASakurai@jaist.ac.jp
Abstract
O(ws(s log d+log(dqh/ s))) and O(ws((h/ s) log q) +log(dqh/ s)) are
upper bounds for the VC-dimension of a set of neural networks of
units with piecewise polynomial activation functions, where s is
the depth of the network, h is the number of hidden units, w is
the number of adjustable parameters, q is the maximum of the
number of polynomial segments of the activation function, and d is
the maximum degree of the polynomials; also n(wslog(dqh/s)) is
a lower bound for the VC-dimension of such a network set, which
are tight for the cases s = 8(h) and s is constant. For the special
case q = 1, the VC-dimension is 8(ws log d).
1
Introduction
In spite of its importance, we had been unable to obtain VC-dimension values for
practical types of networks, until fairly tight upper and lower bounds were obtained
([6], [8], [9], and [10]) for linear threshold element networks in which all elements
perform a threshold function on weighted sum of inputs. Roughly, the lower bound
for the networks is (1/2)w log h and the upper bound is w log h where h is the number
of hidden elements and w is the number of connecting weights (for one-hidden-Iayer
case w ~ nh where n is the input dimension of the network).
In many applications, though, sigmoidal functions, specifically a typical sigmoid
function 1/ (1 + exp( -x)), or piecewise linear functions for economy of calculation,
are used instead of the threshold function. This is mainly because the differentiability of the functions is needed to perform backpropagation or other learning
algorithms. Unfortunately explicit bounds obtained so far for the VC-dimension of
sigmoidal networks exhibit large gaps (O(w2h2) ([3]), n(w log h) for bounded depth
A. Sakurai
324
and f!(wh) for unbounded depth) and are hard to improve. For the piecewise linear
case, Maass obtained a result that the VO-dimension is O(w210g q), where q is the
number of linear pieces of the function ([5]).
Recently Koiran and Sontag ([4]) proved a lower bound f!(w 2 ) for the piecewise
polynomial case and they claimed that an open problem that Maass posed if there
is a matching w 2 lower bound for the type of networks is solved. But we still have
something to do, since they showed it only for the case w = 8(h) and the number
of hidden layers being unboundedj also O(w 2 ) bound has room to improve.
We in this paper improve the bounds obtained by Maass, Koiran and Sontag and
consequently show the role of polynomials, which can not be played by linear functions, and the role of the constant functions that could appear for piecewise polynomial case, which cannot be played by polynomial functions.
After submission of the draft, we found that Bartlett, Maiorov, and Meir had obtained similar results prior to ours (also in this proceedings). Our advantage is that
we clarified the role played by the degree and number of segments concerning the
both bounds.
2
Terminology and Notation
log stands for the logarithm base 2 throughout the paper.
The depth of a network is the length of the longest path from its external inputs to
its external output, where the length is the number of units on the path. Likewise
we can assign a depth to each unit in a network as the length of the longest path
from the external input to the output of the unit. A hidden layer is a set of units at
the same depth other than the depth of the network. Therefore a depth L network
has L - 1 hidden layers.
In many cases W will stand for a vector composed of all the connection weights in
the network (including threshold values for the threshold units) and w is the length
of w. The number of units in the network, excluding "input units," will be denoted
by hj in other words, the number of hidden units plus one, or sometimes just the
number of hidden units. A function whose range is {O, 1} (a set of 0 and 1) is
called a Boolean-valued function.
3
Upper Bounds
To obtain upper bounds for the VO-dimension we use a region counting argu.ment,
developed by Goldberg and Jerrum [2]. The VO-dimension of the network, that is,
the VO-dimension of the function set {fG(wj . ) IW E'RW} is upper bounded by
max {N 12N
~ Xl~.~N Nee ('Rw - UJ: 1.N'(fG(:Wj x?))) }
where NeeO is the number of connected components and .N'(f)
{w I f(w) = O}.
(3.1)
IS
the set
The following two theorems are convenient. Refer [11] and [7] for the first theorem.
The lemma followed is easily proven.
Theorem 3.1. Let fG(wj Xi) (1 ~ i ~ N) be real polynomials in w, each of degree
d or less. The number of connected components of the set n~l {w I fG(wj xd = O}
is bounded from above by 2(2d)W where w is the length of w.
325
Tight Bounds for the VC-Dimension of Piecewise Polynomial Networks
Lemma 3.2. Ifm ~ w(1ogC + loglogC + 1), then 2m
> (mC/w)W
for C ~ 4.
First let us consider the polynomial activation function case.
Theorem 3.3. Suppose that the activation function are polynomials of degree at
most d. O( ws log d) is an upper bound of the VC-dimension for the networks with
depth s. When s = 8(h) the bound is O(whlogd). More precisely ws(1ogd +
log log d + 2) is an upper bound. Note that if we allow a polynomial as the input
function, d 1d 2 will replace d above where d 1 is the maximum degree of the input
functions and d 2 is that of the activation functions.
The theorem is clear from the facts that the network function (fa in (3.1)) is a
polynomial of degree at most d S + d s- 1 + ... + d, Theorem 3.1 and Lemma 3.2.
For the piecewise linear case, we have two types of bounds. The first one is suitable
for bounded depth cases (i. e. the depth s = o( h)) and the second one for the
unbounded depth case (i.e . s = 8(h)).
Theorem 3.4. Suppose that the activation functions are piecewise polynomials with
at most q segments of polynomials degree at most d. O(ws(slogd + log(dqh/s)))
and O(ws((h/s)logq) +log(dqh/s)) are upper bounds for the VC-dimension, where
s is the depth of the network. More precisely, ws((s/2)logd + log(qh)) and
ws( (h/ s) log q + log d) are asymptotic upper bounds. Note that if we allow a polynomial as the input function then d 1d 2 will replace d above where d 1 is the maximum
degree of the input functions and d 2 is that of the activation functions.
Proof. We have two different ways to calculate the bounds. First
S
i=1
<
s
-p
J=1
(8eNQhs(di-1
+ .. . + d + l)d) 'l?l+'''+ W;
Wl+"'+W'
J
::; (8eN qd(s:)/2(h/S)) ws
where hi is the number of hidden units in the i-th layer and 0 is an operator to
form a new vector by concatenating the two. From this we get an asymptotic upper
bound ws((s/2) log d + log(qh)) for the VC-dimension.
Secondly
From this we get an asymptotic upper bound ws((h/s)logq + log d) for the VCdimension. Combining these two bounds we get the result. Note that sin log( dqh/ s)
in it is introduced to eliminate unduly large term emerging when s = 8(h) .
0
4
Lower Bounds for Polynomial Networks
Theorem 4.1 Let us consider the case that the activation function are polynomials
of degree at most d . n( ws log d) is a lower bound of the VC-dimension for the
networks with depth s. When s = 8(h) the bound is n(whlogd), More precisely,
326
A. Sakurai
(1/16)w( 5 - 6) log d is an asymptotic lower bound where d is the degree of activation
functions and is a power of two and h is restricted to O(n 2) for input dimension n.
The proof consists of several lemmas. The network we are constructing will have
two parts: an encoder and a decoder. We deliberately fix the N input points. The
decoder part has fixed underlying architecture but also fixed connecting weights
whereas the encoder part has variable weights so that for any given binary outputs
for the input points the decoder could output the specified value from the codes in
which the output value is encoded by the encoder.
First we consider the decoder, which has two real inputs and one real output. One
of the two inputs y holds a code of a binary sequence bl , b2, ... ,bm and the other x
holds a code of a binary sequence Cl, C2, ... ,Cm . The elements of the latter sequence
are all O's except for Cj = 1, where Cj = 1 orders the decoder to output bj from it
and consequently from the network.
We show two types of networks; one of which has activation functions of degree at
most two and has the VC-dimension w(s-l) and the other has activation functions
of degree d a power of two and has the VC-dimension w( s - 5) log d.
?
We use for convenience two functions 'H9(X) = 1 if x 2:: 0 and
otherwise and
'H9,t/J (x) = 1 if x 2:: cp, if x ::; 0, and undefined otherwise. Throughout this section
we will use a simple logistic function p(x) = (16/3)x(1- x) which has the following
property.
?
Lemma 4.2. For any binary sequence bl , b2, . .. , bm , there exists an interval [Xl, X2]
such that bi = 'H l / 4,3/4(pi(x)) and :S /(x) ::; 1 for any x E [Xl, X2]'
?
The next lemmas are easily proven.
Lemma 4.3. For any binary sequence Cl, C2,"" Cm which are all O's except for
= 1, there exists Xo such that Ci = 'H l / 4,3/4(pi(xo)). Specifically we will take Xo =
p~(j-l)(1/4), where PLl(x) is the inverse of p(x) on [0,1/2]. Then pi-l(xo) = 1/4,
pi(xo) = 1, pi(xo) = for all i > j, and pj-i(xo) ::; (1/4)i for all positive i ::; j.
Cj
?
Proof. Clear from the fact that p(x) 2:: 4x on [0,1/4].
o
Lemma 4.4. For any binary sequence bl , b2, ... , bm , take y such that bi
'H 1 / 4,3/4(pi(y)) and
pi(y) ::; 1 for all i and Xo = p~(j-l)(1/4), then
'H 7 / 12 ,3/4 (l::l pi(xo)pi(y)} = bi' i.e. 'Ho (l::l pi(xo)pi(y) - 2/3} = bi'
? :;
Proof. If bj = 0, l::l pi(xo)pi(y) = l:1=1 pi(xo)pi(y) :S pi(y) + l:1:::(1/4)i <
pi(y) + (1/3)::; 7/12. If bj = 1, l::l pi(xo)pi(y) > pi(xo)pi(y) 2:: 3/4.
0
By the above lemmas, the network in Figure 1 (left) has the following function:
Suppose that a binary sequence bl , ... ,bm and an integer j is given. Then we
can present y that depends only on bl , ?? ? ,bm and Xo that depends only on j
such that bi is output from the decoder.
Note that we use (x
+ y)2
- (x - y)2 = 4xy to realize a multiplication unit.
For the case of degree of higher than two we have to construct a bit more complicated
one by using another simple logistic function fL(X) = (36/5)x(1- x). We need the
next lemma.
Lemma 4.5. Take Xo = fL~(j-l)(1/6), where fLLl(X) is the inverse of fL(X) on
[0,1/2]. Then fLi-1(xo) = 1/6, fLj(XO) = 1, fLi(xo) = for all i > j, and fLi-i(xo) =
?
327
Tight Bounds for the VC-Dimension of Piecewise Polynomial Networks
L--_L...-_---L_...L..-_ X.
~A?l
f?i~~]
i~-!~
,----_ .. ...... ..
'. .......
x,
y
__
__
Figure 1: Network architecture consisting of polynomials of order two (left) and
those of order of power of two (right).
(1/6)i for all i > 0 and $ j.
Proof. Clear from the fact that J-L(x)
~
6x on [0,1/6].
any binary sequence bl. b2, . .. , bk ,
take y such that bi = 1-l1/4,3/4(pi(y))
for all i. Moreover for any 1 $ j $ m and any 1 $
J-LL(j-1)(1/6), and Xo = J-LL(I-1)(1/6 k ). Then for Z =
Lemma
4.6.
For
0
... , b(m-1)H1,'''' bmk
1-lo (E~==-Ol pi(z)J-Li(xo) - (1/2))
bk+b bk+2, . .. ,b2k ,
and 0 $ pi(y) $ 1
1 $ k take Xl =
E:1 pik(Y)J-Lik(xt),
= bki+l holds.
Lemma 4.7. If 0 < pi(x) < 1 for any 0 < i $1, take an ? such that (16/3)1?
Then pl(x) - (16/3)1? < pl(x + ?) < pl(x) + (16/3)1?.
< 1/4.
?)
Proof.. There are four cases ~epending on ~hether pl- ~ (x + is on the uphill or
downhIll of p and whether x IS on the uphlll or downhIll of p -1 . The proofs are
done by induction.
First suppose that the two are on the uphill. Then pl(x + ?) = p(pl-1\X + f)) <
p(pl-1(X) + (16/3)1-1?)) < pl(x) + (16/3)1?. Secondly suppose that p -l(x + ?)
is on the uphill but x is on the downhill. Then pl(x + ?) = p(pl-1(x + f)) >
p(pl-1(x) - (16/3)1-1?)) > pl(x) - (16/3)1?. The other two cases are similar.
0
Proof of Lemma 4.6.
We will show that the difference between piHl(y)
and E~==-ol p'(z)J-Li(xo) is sufficiently small. Clearly Z = E:1 J-Lik(X1)pik(y) =
E{=l J-Lik(X1)pik(y) $ pik(y)+ E{~i(1/6k)i < pik(y)+1/(6 k -1) and pik(y) < z. If
Z is on the uphill of pI then by using the above lemma, we get E~==-Ol pi(z)J-Li(xO) =
E~=o p'(z)J-Li(xo) < pl(z) + 1/(6 k - 1) < piHl(y) + (1 + (16/3)1)(1/(6 k - 1)) <
pik+1(y) + 1/4 (note that 1 $ k - 1 and k ~ 2). If z is on the downhill of pI then
by using the above lemma, we get E~==-Ol pi(Z)J-Li(xo) = E~=o pi(z)J-Li(xo) > pl(z) >
pl(pik(y)) _ (16/3)1(1/(6 k - 1)) > pik+l(y) - 1/4.
0
Next we show the encoding scheme we adopted. We show only the case w = 8(h 2 )
since the case w = 8(h) or more generally w = O(h2) is easily obtained from this.
Theorem 4.8 There is a network of2n inputs, 2h hidden units with h 2 weights w,
A. Sakurai
328
and h 2 sets of input values Xl, ... ,Xh2 such that for any set of values
we can chose W to satisfy Yi = fG(w; Xi).
Y1, ... , Yh2
Proof. We extensively utilize the fact that monomials obtained by choosing at most
k variables from n variables with repetition allowed (say X~X2X6) are all linearly
independent ([1]). Note that the number of monomials thus formed is (n~m).
Suppose for simplicity that we have 2n inputs and 2h main hidden units (we have
other hidden units too), and h = (n~m). By using multiplication units (in fact each
is a composite of two squaring units and the outputs are supposed to be summed up
as in Figure 1), we can form h = (n~m) linearly independent monomials composed
of variables Xl, . ?? ,X n by using at most (m -l)h multiplication units (or h nominal
units when m = 1). In the same way, we can form h linearly independent monomials
composed of variables Xn+ll . .? , X2n. Let us denote the monomials by U1, ?.? , Uh
and V1, . .. , Vh.
We form a subnetwork to calculate 2:7=1 (2:7=1 Wi,jUi)Vj by using h multiplication
units. Clearly the calculated result Y is the weighted sum of monomials described
above where the weights are Wi,j for 1 $ i, j $ h.
Since y = fG(w; x) is a linear combination of linearly independent terms, if we
choose appropriately h 2 sets of values Xll . . . , Xh2 for X = (Xl, .. ? , X2n) , then for
any assignment of h 2 values Y1, ... ,Yh2 to Y we have a set of weights W such that
Yi = f(xi, w).
0
Proof of Theorem -4.1. The whole network consists of the decoder and the encoder.
The input points are the Cartesian product of the above Xl, ... ,Xh2 and {xo defined
in Lemma 4.4 for bj = 111 $ j :$ 8'} for some h where 8' is the number of bits to
be encoded. This means that we have h 2 s points that can be shattered.
Let the number of hidden layers of the decoder be 8. The number of units used
for the decoder is 4(8 - 1) + 1 (for the degree 2 case which can decode at most 8
bits) or 4(8 - 3) + 4(k - 1) + 1 (for the degree 2k case which can decode at most
(8 - 2)k bits). The number of units used for the encoder is less than 4h; we though
have constraints on 8 (which dominates the depth of the network) and h (which
dominates the number of units in the network) that h :$ (n~m) and m = O(s) or
roughly log h = 0(8) be satisfied.
Let us chose m = 2 (m = log 8 is a better choise). As a result, by using 4h + 4(s I} + 1 (or 4h + 4(8 - 3) + 4(k -1) + 1) units in s + 2 layers, we can shatter h 2 8 (or
h 2 (8 - 2) log d) points; or asymptotically by using h units 8 layers we can shatter
(1/16)w( 8 - 3) (or (1/16)w( 8 - 5) log d) points.
0
5
Piecewise Polynomial Case
Theorem 5.1. Let us consider a set of networks of units with linear input functions and piecewise polynomial (with q polynomial segments) activation functions .
Q(W8 log( dqh/ 8)) is a lower bound of the VC-dimension, where 8 is the depth of the
network and d is the maximum degree of the activation functions. More precisely,
(1/16)w(s - 6)(10gd+ log(h/s) + logq) is an asymptotic lower bound.
For the scarcity of space, we give just an outline of the proof. Our proof is based
on that of the polynomial networks. We will use h units with activation function
of q ~ 2 polynomial segments of degree at most d in place of each of pk unit in the
decoder, which give the ability of decoding log dqh bits in one layer and slog dqh
bits in total by 8( 8h) units in total. If h designates the total number of units, the
329
Tight Bounds for the VC-Dimension of Piecewise Polynomial Networks
number of the decodable bits is represented as log(dqh/s).
In the following for simplicity we suppose that dqh is a power of 2. Let pk(x) be
the k composition of p(x) as usual i.e. pk(x) = p(pk-l(x)) and pl(X) = p(x). Let
plogd,/(x) = /ogd(,X/(x)), where 'x(x) = 4x if x $ 1/2 and 4 - 4x otherwise, which
by the way has 21 polynomial segments.
Now the pk unit in the polynomial case is replaced by the array
h units that is defined as follows:
(i)
(ii)
/ogd,logq,logh(x)
of
is an array of two units; one is plogd,logq(,X+(x)) where ,X+(x) =
4x if x $ 1/2 and 0 otherwise and the other is plog d,log q ('x - (x)) where ,X - (x) = 0
if x $ 1/2 and 4 - 4x otherwise.
plogd,logq,l(X)
plog d,log q,m~x) is the array
plogd,logq(,X ( . .? ('x?(x)) . . . ))
of 2m units, each with one of the functions
where ,X?( ... ('x?(x)) .. ?) is the m composition
of 'x+(x) or 'x - (x). Note that ,X?( ... ('x?(x)) ... ) has at most three linear segments (one is linear and the others are constant 0) and the sum of 2m possible
combinations t(,X?(. . . ('x?(x)) ? . . )) is equal to t(,Xm(x)) for any function f
such that f(O) = O.
Then lemmas similar to the ones in the polynomial case follow.
References
[1] Anthony, M: Classification by polynomial surfaces, NeuroCOLT Technical Report Series, NC-TR-95-011 (1995).
[2] Goldberg, P. and M. Jerrum: Bounding the Vapnik-Chervonenkis dimension
of concept classes parameterized by real numbers, Proc. Sixth Annual ACM
Conference on Computational Learning Theory, 361-369 (1993).
[3] Karpinski, M. and A. Macintyre, Polynomial bounds for VC dimension of sigmoidal neural networks, Proc. 27th ACM Symposium on Theory of Computing,
200-208 (1995) .
[4] Koiran, P. and E. D. Sontag: Neural networks with quadratic VC dimension,
Journ. Compo Syst. Sci., 54, 190-198(1997).
[5] Maass, W . G.: Bounds for the computational power and learning complexity of
analog neural nets, Proc. 25th Annual Symposium of the Theory of Computing,
335-344 (1993).
[6] Maass, W. G.: Neural nets with superlinear VC-dimension, Neural Computation, 6, 877-884 (1994)
[7] Milnor, J.: On the Betti numbers of real varieties, Proc. of the AMS, 15,
275-280 (1964).
[8] Sakurai, A.: Tighter Bounds of the VC-Dimension of Three-layer Networks,
Proc. WCNN'93, III, 540-543 (1993).
[9] Sakurai, A.: On the VC-dimension of depth four threshold circuits and the
complexity of Boolean-valued functions, Proc. ALT93 (LNAI 744), 251-264
(1993) ; refined version is in Theoretical Computer Science, 137, 109-127 (1995).
[10] Sakurai, A. : On the VC-dimension of neural networks with a large number of
hidden layers, Proc. NOLTA'93, IEICE, 239-242 (1993).
[11] Warren, H. E.: Lower bounds for approximation by nonlinear manifolds, Trans .
AMS, 133, 167-178, (1968) .
On-Line Learning with Restricted
Training Sets:
Exact Solution as Benchmark
for General Theories
H.C. Rae
hamish.rae@kcl.ac.uk
P. Sollich
psollich@mth.kcl.ac.uk
A.C.C. Coolen
tcoolen@mth.kcl.ac.uk
Department of Mathematics
King's College London
The Strand
London WC2R 2LS, UK
Abstract
We solve the dynamics of on-line Hebbian learning in perceptrons
exactly, for the regime where the size of the training set scales
linearly with the number of inputs. We consider both noiseless
and noisy teachers. Our calculation cannot be extended to nonHebbian rules, but the solution provides a nice benchmark to test
more general and advanced theories for solving the dynamics of
learning with restricted training sets.
1
Introduction
Considerable progress has been made in understanding the dynamics of supervised
learning in layered neural networks through the application of the methods of statistical mechanics. A recent review of work in this field is contained in [1 J. For
the most part, such theories have concentrated on systems where the training set is
much larger than the number of updates. In such circumstances the probability that
a question will be repeated during the training process is negligible and it is possible
to assume for large networks, via the central limit theorem, that the local field distribution is Gaussian. In this paper we consider restricted training sets; we suppose
that the size of the training set scales linearly with N, the number of inputs. The
probability that a question will reappear during the training process is no longer
negligible, the assumption that the local fields have Gaussian distributions is not
tenable, and it is clear that correlations will develop between the weights and the
317
Learning with Restricted Training Sets: Exact Solution
questions in the training set as training progresses. In fact, the non-Gaussian character of the local fields should be a prediction of any satisfactory theory of learning
with restricted training sets, as this is clearly demanded by numerical simulations.
Several authors [2, 3, 4, 5, 6, 7] have discussed learning with restricted training sets
but a general theory is difficult. A simple model of learning with restricted training
sets which can be solved exactly is therefore particularly attractive and provides
a yardstick against which more difficult and sophisticated general theories can, in
due course, be tested and compared. We show how this can be accomplished for
on-line Hebbian learning in perceptrons with restricted training sets and we obtain exact solutions for the generalisation error and the training error for a class of
noisy teachers and students with arbitrary weight decay. Our theory is in excellent
agreement with numerical simulations and our prediction of the probability density
of the student field is a striking confirmation of them, making it clear that we are
indeed dealing with local fields which are non-Gaussian.
2
Definitions
We study on-line learning in a student percept ron S, which tries to perform a task
defined by a teacher percept ron characterised by a fixed weight vector B* E ~N.
We assume, however, that the teacher is noisy and that the actual teacher output
T and the corresponding student response S are given by
T: {-I, I}N ~ {-I, I}
T(e) = sgn[B?
eL
S: {-I, I}N ~ {-I, I}
S(e) = sgn[J? e]'
where the vector B is drawn independently of with probability p(B} which may
depend explicitly on the correct teacher vector B*. Of particular interest are the
following two choices, described in literature as output noise and Gaussian input
noise, respectively:
e
p(B}
where >.
~
= >. 6(B+B*} + (1->.) 6(B-B*}
(1)
0 represents the probability that the teacher output is incorrect, and
N
( B)
P
= [~] T
211'~2
e
-I:f(B-Bo)2/'E 2
.
(2)
The variance ~2 / N has been chosen so as to achieve appropriate scaling for N ~
CXl.
Our learning rule will be the on-line Hebbian rule, i.e.
J(f+l) =
(1- ~)J(f) + ~ e(f) sgn[B(f)? e(f)]
(3)
where the non-negative parameters, and fJ are the decay rate and the learning rate ,
respectively. At each iteration step f an input vector e(f) is picked at random from
E {-I, I} N, f..L =
a training set consisting of p = aN randomly drawn vectors
1, . . . p. This set remains unchanged during the learning dynamics. At the same
time the teacher selects at random, and independently of e(f}, the vector B(?),
according to the probability distribution p(B} . Iterating equation (3) gives
e?
J(m) =
(1 - ~)
mJ
o
+
~ ~ (1 _~) m-l-Ie(e) sgn[B(f) . e(f)]
(4)
(=0
We assume that the (noisy) teacher output is consistent in the sense that if a
question reappears at some stage during the training process the teacher makes
the same choice of B in both cases , i.e. if e(e) = e(f') then also B(f) = B(e') . This
consistency allows us to define a generalised training set iJ by including with the p
e
H. C. Rae, P. Sollich and A. C. C. Coo/en
318
questions the corresponding teacher vectors:
D = {(e,B 1), ... ,(e,BP)}
There are two sources of randomness in this problem. First of all there is the random
realisation of the 'path' n = ((e(O), B(O)), (e(l), B(l)), ... , (e(f), B (f)), ... }. This
is simply the randomness of the stochastic process that gives the evolution of the
vector J. Averages over this process will be denoted as ( ... ). Secondly there is the
randomness in the composition of the training set. We will write averages over all
training sets as ( ... )sets. We note that
p
(J[e(f), B(e))) =
~L
p
f(e, Btl)
(for all e)
tL=1
and that averages over all possible realisations of the training set are given by
(J[(e, B1), (e, B2), ... , (e, BP)])sets
L ... L 2~P J[IT p(BIl) dBIl] f[(e, B1), (e, B2), ... ,(e, BP)]
e e e
=L
1
e
tL=l
where
E {-I, l}N. We normalise B* so that [B*]2 = 1 and choose the time unit
t miN. We finally assume that J o and B* are statistically independent of the
training vectors ell, and that they obey Ji(O), B; = O(N-~) for all i.
=
3
Explicit Microscopic Expressions
At the m-th stage of the learning process the two simple scalar observables Q[J] =
J2 and R[J] = B* . J, and the joint distribution of fields x = J . e, y = B* . e, z =
B . e (calculated over the questions in the training set D), are given by
Q[J(m)] = J2(m)
R[J(m)] = B* . J(m)
(5)
1 P
Pix, y, z; J(m)] = o[x - J(m) . e] o[y - B* . ell] o[z - Bil . ell]
p 11=1
L
(6)
For infinitely large systems one can prove that the fluctuations in mean-field observables such as {Q, R, P}, due to the randomness in the dynamics, will vanish [6].
Furthermore one assumes, with convincing support from numerical simulations, that
for N -r (Xl the evolution of such observables, observed for different random realisations of the training set, will be reproducible (i.e. the sample-to-sample fluctuations
will also vanish, which is called 'self-averaging'). Both properties are central ingredients of all current theories. We are thus led to the introduction of the averages of
the observables in (5,6), with respect to the dynamical randomness and with respect
to the randomness in the training set (to be carried out in precisely this order):
Q(t)
= N-+oo
lim ( (Q[J(tN)))
Pt(x,y,z) =
)set.s
R(t) =
lim ( (R[J(tN)])
N-+oo
lim ?P[x,y,z;J(tN)])
N-+oo
)sets
)sets
( 7)
(8)
A fundamental ingredient of our calculations will be the average (~i sgn(B ?e))(e , B),
calculated over all realisations of (e, B). We find, for a wide class of p(B), that
(9)
where, for example,
Learning with Restricted Training Sets: Exact Solution
P_
P=
if
f!.
1
(output noise)
(10)
(Gaussian input noise)
(11)
(1-2>.)
- V-; V1 + 'f,2
4
3/9
Averages of Simple Scalar Observables
Calculation of Q(t) and R(t) using (4, 5, 7, 9) to execute the path average and the
average over sets is relatively straightforward, albeit tedious. We find that
-"Yt(l
-"Yt)
2
Q(t) = e-2""(tQo + 21}PRo e
-e
+ ~(1_e-2"Yt)
"(
2,
( 1_e - "Yt)2 1
+1}2
(_+p2)
(12)
a
"(2
and that
(13)
where p is given by equations (10, 11) in the examples of output noise and Gaussian
input noise, respectively. We note that the generalisation error is given by
Eg =
~arccos [R(t)/v'Q(t)]
(14)
All models of the teacher noise which have the same p will thus have the same
generalisation error at any time. This is true, in particular, of output noise and
Gaussian input noise when their respective parameters>. and 'f, are related by
1 - 2>' =
1
(15)
V1 + 'f,2
With each type of teacher noise for which (9) holds, one can thus associate an
effective output noise parameter >.. Note, however, that this effective teacher error
probability>. will in general not be identical to the true teacher error probability
associated with a given p(B), as can immediately be seen by calculating the latter
for the Gaussian input noise (2).
5
Average of the Joint Field Distribution
The calculation of the average of the joint field distribution starting from equation
(8) is more difficult. Writing a = (l-,IN) , and expressing the 6 functions in terms
of complex exponentials, we find that
P, (x y z)
t
,
,
= jdidydZ
ei(xHyy+zi)
871"3
lim (e-i[xe-"YtJo ?el+i;B ? .e+zBl.el]
N-400
X
fi:[~ te-[i1)XN-
1
/TtN-t(e 1 f')
sg~(B""f')l])
(16)
p v=l
sets
In this expression we replace e 1 bye and Bl by B, and abbreviate S = I1~~0[' ?l
Upon writing the latter product in terms of the auxiliary variables Vv = (e 1 V ) I IN
and Wv == B V ? C, we find that for large N
?=0
?e
.
logS", X(x sgn[B? e],t) where
Ul, U2
A
t1}XUl
"(
2 A2
(l_e - "Yt) _
are the random variables given by
1}
x u2(1_e- 2"Y t )
4"(
(17)
H. C. Rae. P. Sollich and A. C. C. Coolen
320
Ul
=
1 '""'
.jN
~ Vv sgn(w v ),
a N v>l
and with
U2
= -1 '""'
~ Vv 2 .
P v>l
it
X(w, t) = -1
ds [e- [-Y(.-t)]
11]We
1]
a 0
and U2 shows that limN --700 U2 = 1, and that
A study of the statistics of Ul
(N
~
(18)
00),
where U is a Gaussian random variable with mean equal to zero and variance unity.
On the basis of these results and equations (16, 17) we find that
P, (x y z) = jdXdfjdi ei(x:Hyy+==)_~x2[Q - R2- e -2-yt(Qo-R6)]+ ~dx sgn [=] ,t) -ixy(R-Roe-> ' )
t
,
,
87f3
(19)
where Q and R are given by the expressions (12,13) (note: Q - R2 is independent
of p, i.e. of the distribution p(B)). Let Xo = J o .~, y = B* .~, z = B . ~.
We assume that, given y, z is independent of Xo. This condition, which reflects in
some sense the property that the teacher noise preserves the perceptron structure.
is certainly satisfied for the models which we are considering and is probably true
of all reasonable noise models. The joint probability density then has the form
p( Xo, y, z) = p( Xo Y )p(y , z). Equation (19) then leads to the following expression for
the conditional probability of x, given y and z:
J
P,t(xJy, z)
=j
~! eiX[x-Ry]-~x2[Q-R2J+x(x
sgn[z),t)
(20)
We observe that this probability distribution is the same for all models with the
same p and that the dependence on z is through r = sgn[ z], a directly observable
quantity. The training error and the student field probability density are given by
E tr
=j
dxdy
L
B( -xr)P,t (xJy , r)P(rJy)P,(y)
(21 )
T=?l
P,t(x)
=j
L
dy
P,t(xJy, r)P,(rJy)P(y)
(22)
T=?l
1
1
2
in which P,(y) = (27f)-2e- 2Y . We note that the dependence of E tr and P,t{x) on
the specific noise model arises solely through P,( rJy) which we find is given by
P(rJy) = )"B( -ry)
+ (1
- )..)B(ry)
1
P{rJy) = 2(1
+ rerf[y/J2~])
in the output noise and Gaussian input noise models, respectively. In order to simplify the numerical computation of the remaining integrals one can further reduce
the number of integrations analytically. Details will be reported elsewhere.
6
Comparison with Numerical Simulations
It will be clear that there is a large number of parameters that one could vary in
order to generate different simulation experiments with which to test our theory.
Here we have to restrict ourselves to presenting a number of representative results.
Figure 1 shows, for the output noise model, how the probability density Pdx) of
321
Learning with Restricted Training Sets: Exact Solution
0.2 , - - - -- -----,
0.1
f
0.0 '-L-_~~.
-10
o
10 -10
X
o
X
10 -10
o
X
10 -10
o
10
X
Figure 1: Student field distribution P(x) for the case of output noise, at different
times (left to right: t= 1,2,3,4), for a=,=~, 10 =1}= 1, A=0.2. Histograms:
distributions measured in simulations, (N = 10,000). Lines: theoretical predictions.
the student field x = J . ~ develops in time, starting as a Gaussian at t = 0
and evolving to a highly non-Gaussian distribution with a double peak by time
t = 4. The theoretical results give an extremely satisfactory account of the numerical
simulations . Figure 2 compares our predictions for the generalisation and training
errors Eg and E tr with the results of numerical simulations, for different initial
conditions, Eg(O) = 0 and Eg(O) = 0.5, and for different choices of the two most
important parameters A (which controls the amount of teacher noise) and a (which
measures the relative size of the training set). The theoretical results are again in
excellent agreement with the simulations. The system is found to have no memory of
its past (which will be different for some other learning rules), the asymptotic values
of Eg and E tr being independent of the initial student vector. In our examples Eg is
consistently larger than E tr , the difference becoming less pronounced as a increases.
Note, however, that in some circumstances E tr can also be larger then E g . Careful
inspection shows that for Hebbian learning there are no true overfitting effects, not
even in the case of large A and small, (for large amounts of teacher noise, without
regularisation via weight decay). Minor finite time minima of the generalisation
error are only found for very short times (t < 1), in combination with special
choices for parameters and initial conditions.
7
Discussion
Starting from a microscopic description of Hebbian on-line learning in perceptrons
with restricted training sets, of size p = aN where N is the number of inputs,
we have developed an exact theory in terms of macroscopic observables which has
enabled us to predict the generalisation error and the training error, as well as the
probability density of the student local fields in the limit N ~ 00. Our results are in
execellent agreement with numerical simulations (as carried out for systems of size
N = 5,000) in the case of output noise; our predictions for the Gaussian input noise
model are currently being compared with the results of simulations. Generalisations
of our calculations to scenarios involving, for instance, time-dependent learning
rates or time-dependent decay rates are straightforward. Although it will be clear
that our present calculations cannot be extended to non-Hebbian rules, since they
H. C. Rae, P Sollich and A. C. C. Coo/en
322
0.5
0.4
0.3
a=O.5
b,.
a=0.5
r
I~
f
a=4.0
~
0.2
0.1
~~
~
-~
""V'
a=0.5
Jhj
'v-
~
a=O.5
l
0.0
1
a=4.0
0.5
0.4 ~
A=D.25
A:=0.25
)..=0.0
0.3
~
A=D.O
0.2
~~
A:=0.25
~
0.1
)..=0.25
"'..t>
~
"\.T
~
J
-~
)..=0.0
0.0
o
10
20
30
A.---0.0
40
o
10
t
20
30
j,
40
t
Figure 2: Generalisation errors (diamonds/lines) and training errors (circles/li.nes)
as observed during on-line Hebbian learning, as functions of time. Upper two graphs:
A = 0.2 and a E {0.5,4.0} (upper left: Eg(O) = 0.5, upper right: Eg(O) = 0). Lower
two graphs: a = 1 and A E {O.O, 0.25} (lower left: Eg(O) = 0.5. lower right:
Eg(O) = 0.0). Markers: simulation results for an N = 5,000 system. Solid lines:
predictions of the theory. In all cases Jo = 'f} = 1 and 'Y = 0.5 .
ultimately rely on our ability to write down the microscopic weight vector J at
any time in explicit form (4), they do indeed provide a significant yardstick against
which more sophisticated and more general theories can be tested. In particular.
they have already played a valuable role in assessing the conditions under which a
recent general theory of learning with restricted training sets, based on a dynamical
version of the replica formalism, is exact [6, 7].
References
[1] Mace C .W.H.and Coolen A.C.C. (1998) Statistics and Computing 8 , 55
[2] Horner H. (1992a) , Z.Phys . B 86.291; (1992b) , Z.Phys . B 87,371
[3] Krogh A. and Hertz J.A. (1992) IPhys . A: Math. Gen. 25, 1135
[4] Sollich P. and Barber D. (1997) Europhys. Lett. 38 , 477
[5] SoUich P. and Barber D. (1998) Advances in N eural Information Processing
Systems 10, Eds. Jordan M., Kearns M. and Solla S. (Cambridge: MIT)
[6] Cool en A.C.C. and Saad D. , King's College London preprint KCL-MTH-98-08
[7] Coolen A .C.C. and Saad D. (1998) (in preparation)
| 1606 |@word version:2 polynomial:32 tedious:1 open:1 simulation:12 ttn:1 tr:7 solid:1 zbl:1 initial:3 series:1 chervonenkis:1 ours:1 past:1 current:1 activation:14 dx:1 realize:1 numerical:8 reproducible:1 update:1 inspection:1 reappears:1 short:1 compo:1 draft:1 provides:2 math:1 clarified:1 ron:2 sigmoidal:3 unbounded:2 shatter:2 c2:2 symposium:2 incorrect:1 consists:2 prove:1 uphill:4 indeed:2 roughly:2 mechanic:1 ry:3 ol:4 ifm:1 actual:1 considering:1 bounded:4 notation:1 underlying:1 moreover:1 circuit:1 cm:2 emerging:1 developed:2 corporation:1 whlogd:2 w8:1 xd:1 exactly:2 uk:4 control:1 unit:36 appear:1 positive:1 negligible:2 generalised:1 local:5 t1:1 limit:2 encoding:1 path:5 fluctuation:2 solely:1 becoming:1 plus:1 chose:2 p_:1 range:1 bi:6 statistically:1 practical:1 backpropagation:1 reappear:1 xr:1 evolving:1 composite:1 convenient:1 matching:1 word:1 spite:1 jui:1 get:5 cannot:3 convenience:1 superlinear:1 operator:1 layered:1 writing:2 yt:6 straightforward:2 starting:3 l:1 independently:2 simplicity:2 immediately:1 rule:5 eix:1 array:3 enabled:1 qh:2 suppose:8 flj:1 nominal:1 decode:2 exact:7 ogd:3 goldberg:2 pt:1 agreement:3 associate:1 element:4 particularly:1 submission:1 logq:7 observed:2 role:4 wcnn:1 preprint:1 solved:2 calculate:2 region:1 wj:4 connected:2 solla:1 valuable:1 complexity:2 hyy:1 dynamic:5 ultimately:1 depend:1 tight:6 segment:7 solving:1 upon:1 xul:1 observables:6 uh:1 basis:1 easily:3 joint:4 represented:1 kcl:4 effective:2 london:3 choosing:1 refined:1 europhys:1 whose:1 encoded:2 posed:1 valued:2 solve:1 say:1 larger:3 otherwise:5 encoder:5 ability:2 statistic:2 jerrum:2 noisy:4 maiorov:1 advantage:1 sequence:8 net:2 ment:1 product:2 j2:3 combining:1 gen:1 achieve:1 supposed:1 description:1 pll:1 pronounced:1 double:1 assessing:1 oo:3 develop:1 ac:4 vcdimension:1 measured:1 ij:1 minor:1 school:1 progress:2 krogh:1 p2:1 auxiliary:1 cool:1 r2j:1 qd:1 correct:1 stochastic:1 vc:22 sgn:10 wc2r:1 assign:1 fix:1 tighter:1 secondly:3 pl:16 hold:4 sufficiently:1 exp:1 bj:4 predict:1 koiran:3 vary:1 a2:1 proc:7 iw:1 coolen:4 currently:1 wl:1 repetition:1 weighted:2 reflects:1 mit:1 clearly:3 gaussian:14 hj:1 tcoolen:1 logh:1 longest:2 consistently:1 mainly:1 am:2 sense:2 economy:1 dependent:2 el:3 squaring:1 shattered:1 eliminate:1 lnai:1 w:13 hidden:14 journ:1 mth:3 selects:1 i1:2 classification:1 denoted:2 arccos:1 special:2 fairly:1 summed:1 ell:3 equal:2 construct:1 field:14 f3:1 x2n:2 integration:1 identical:1 ishikawa:1 represents:1 others:1 report:1 piecewise:13 realisation:4 simplify:1 bil:2 develops:1 randomly:1 b2k:1 composed:3 preserve:1 decodable:1 replaced:1 consisting:2 ourselves:1 interest:1 rae:5 highly:1 xjy:3 certainly:1 xh2:3 slog:1 undefined:1 bki:1 integral:1 xy:1 respective:1 logarithm:1 circle:1 theoretical:4 instance:1 formalism:1 boolean:2 sakurai:7 assignment:1 plog:2 monomials:6 too:1 reported:1 teacher:18 gd:1 density:5 fundamental:1 peak:1 ie:1 decoding:1 connecting:2 jo:1 again:1 central:2 satisfied:2 choose:2 external:3 li:7 japan:3 syst:1 account:1 b2:6 student:9 satisfy:1 explicitly:1 depends:2 piece:1 h1:1 try:1 picked:1 ogc:1 complicated:1 formed:1 variance:2 likewise:1 percept:2 mc:1 randomness:6 pihl:2 phys:2 ed:1 sixth:1 definition:1 against:2 proof:12 di:1 associated:1 proved:1 wh:1 knowledge:1 yh2:2 lim:4 cj:3 sophisticated:2 higher:1 supervised:1 follow:1 response:1 done:1 though:2 execute:1 furthermore:1 just:2 stage:2 until:1 correlation:1 d:1 ei:2 qo:1 nonlinear:1 marker:1 logistic:2 ieice:1 jhj:1 effect:1 
concept:1 true:4 deliberately:1 evolution:2 analytically:1 satisfactory:2 maass:5 eg:10 attractive:1 sin:1 ll:3 during:5 self:1 ixy:1 presenting:1 outline:1 vo:4 tn:3 cp:1 l1:1 fj:1 logd:1 pro:1 recently:1 fi:1 sigmoid:1 ji:1 jp:1 nh:1 analog:1 discussed:1 refer:1 composition:3 expressing:1 significant:1 cambridge:1 consistency:1 mathematics:1 had:2 longer:1 surface:1 base:1 something:1 showed:1 recent:2 scenario:1 claimed:1 binary:8 wv:1 xe:1 yi:2 accomplished:1 seen:1 minimum:1 dxdy:1 ii:1 lik:3 hebbian:7 technical:1 calculation:7 concerning:1 prediction:6 involving:1 noiseless:1 circumstance:2 karpinski:1 sometimes:1 iteration:1 roe:1 histogram:1 whereas:1 interval:1 source:1 limn:1 macroscopic:1 appropriately:1 saad:2 probably:1 jordan:1 integer:1 counting:1 iii:1 variety:1 zi:1 architecture:2 restrict:1 reduce:1 bmk:1 whether:1 expression:4 bartlett:1 ul:3 sontag:3 generally:1 iterating:1 clear:7 amount:2 extensively:1 concentrated:1 differentiability:1 rw:2 generate:1 meir:1 macintyre:1 dqh:11 write:2 four:2 terminology:1 threshold:6 drawn:2 pj:1 btl:1 utilize:1 replica:1 v1:3 tenable:1 asymptotically:1 graph:2 sum:3 pix:1 inverse:2 parameterized:1 striking:1 place:1 throughout:2 reasonable:1 pik:9 scaling:1 dy:1 bit:7 bound:37 layer:10 hi:1 followed:1 played:4 fl:3 quadratic:1 annual:2 precisely:5 constraint:1 bp:3 x2:4 u1:1 min:1 extremely:1 relatively:1 department:1 pdx:1 according:1 combination:3 hertz:1 sollich:5 character:1 unity:1 wi:2 making:1 coo:2 restricted:13 xo:32 equation:5 remains:1 needed:1 adopted:1 obey:1 observe:1 appropriate:1 plogd:4 ho:1 jn:1 assumes:1 remaining:1 calculating:1 uj:1 unchanged:1 bl:7 question:6 quantity:1 already:1 fa:1 dependence:2 usual:1 exhibit:1 subnetwork:1 microscopic:3 unable:1 neurocolt:1 sci:1 decoder:10 normalise:1 gun:1 manifold:1 barber:2 induction:1 length:5 code:3 convincing:1 nc:1 difficult:3 unfortunately:1 negative:1 adjustable:1 perform:3 xll:1 upper:15 diamond:1 benchmark:2 finite:1 extended:2 excluding:1 y1:2 arbitrary:1 introduced:1 bk:3 specified:1 nomi:1 connection:1 h9:2 unduly:1 nee:1 horner:1 trans:1 dynamical:2 xm:1 regime:1 including:2 max:1 memory:1 power:5 suitable:1 rely:1 abbreviate:1 advanced:2 scheme:1 improve:3 technology:2 jaist:1 ne:1 carried:2 vh:1 prior:1 nice:1 understanding:1 review:1 literature:1 multiplication:4 sg:1 asymptotic:6 relative:1 regularisation:1 proven:2 ingredient:2 h2:1 degree:17 consistent:1 nolta:1 pi:30 lo:1 course:1 elsewhere:1 bye:1 l_:1 warren:1 allow:2 vv:3 perceptron:1 institute:1 wide:1 fg:6 dimension:29 depth:17 stand:2 xn:2 calculated:3 lett:1 author:1 made:1 bm:5 far:1 crest:1 observable:1 dealing:1 overfitting:1 b1:2 iayer:1 xi:3 demanded:1 designates:1 cxl:1 betti:1 mj:1 confirmation:1 mace:1 excellent:2 cl:2 complex:1 constructing:1 fli:3 vj:1 anthony:1 pk:5 main:1 linearly:6 whole:1 bounding:1 noise:23 scarcity:1 allowed:1 repeated:1 x1:2 eural:1 representative:1 en:4 tl:2 downhill:4 explicit:3 concatenating:1 xl:9 exponential:1 r6:1 vanish:2 theorem:12 down:1 xt:1 specific:1 r2:2 decay:4 dominates:2 exists:2 vapnik:1 albeit:1 importance:1 ci:1 te:1 cartesian:1 gap:1 argu:1 led:1 simply:1 infinitely:1 strand:1 contained:1 bo:1 scalar:2 u2:5 acm:2 conditional:1 king:2 consequently:2 careful:1 room:1 replace:3 considerable:1 hard:1 specifically:2 typical:1 except:2 generalisation:8 characterised:1 averaging:1 lemma:18 kearns:1 called:2 total:3 milnor:1 perceptrons:3 college:2 support:1 latter:3 arises:1 yardstick:2 preparation:1 tested:2 |
663 | 1,607 | Improved Switching
among Temporally Abstract Actions
Richard S. Sutton Satinder Singh
AT&T Labs
Florham Park, NJ 07932
{sutton,baveja}@research.att.com
Doina Precup Balaraman Ravindran
University of Massachusetts
Amherst, MA 01003-4610
{dprecup,ravi}@cs.umass.edu
Abstract
In robotics and other control applications it is commonplace to have a preexisting set of controllers for solving subtasks, perhaps hand-crafted or
previously learned or planned, and still face a difficult problem of how to
choose and switch among the controllers to solve an overall task as well as
possible. In this paper we present a framework based on Markov decision
processes and semi-Markov decision processes for phrasing this problem,
a basic theorem regarding the improvement in performance that can be obtained by switching flexibly between given controllers, and example applications of the theorem. In particular, we show how an agent can plan with
these high-level controllers and then use the results of such planning to find
an even better plan, by modifying the existing controllers, with negligible
additional cost and no re-planning. In one of our examples, the complexity
of the problem is reduced from 24 billion state-action pairs to less than a
million state-controller pairs.
In many applications, solutions to parts of a task are known, either because they were handcrafted by people or because they were previously learned or planned. For example, in
robotics applications, there may exist controllers for moving joints to positions, picking up
objects, controlling eye movements, or navigating along hallways. More generally, an intelligent system may have available to it several temporally extended courses of action to choose
from. In such cases, a key challenge is to take full advantage of the existing temporally extended actions, to choose or switch among them effectively, and to plan at their level rather
than at the level of individual actions.
Recently, several researchers have begun to address these challenges within the framework of
reinforcement learning and Markov decision processes (e.g., Singh, 1992; Kaelbling, 1993;
Dayan & Hinton, 1993; Thrun and Schwartz, 1995; Sutton, 1995; Dietterich, 1998; Parr &
Russell, 1998; McGovern, Sutton & Fagg, 1997). Common to much of this recent work is
the modeling of a temporally extended action as a policy (controller) and a condition for
terminating, which we together refer to as an option (Sutton, Precup & Singh, 1998). In
this paper we consider the problem of effectively combining given options into one overall
policy, generalizing prior work by Kaelbling (1993). Sections 1-3 introduce the framework;
our new results are in Sections 4 and 5.
1067
Improved Switching among Temporally Abstract Actions
1 Reinforcement Learning (MDP) Framework
In a Markov decision process (MDP), an agent interacts with an environment at some discrete, lowest-level time scale t = 0, 1, 2, ... On each time step, the agent perceives the state of the environment, s_t ∈ S, and on that basis chooses a primitive action, a_t ∈ A. In response to each action, a_t, the environment produces one step later a numerical reward, r_{t+1}, and a next state, s_{t+1}. The one-step model of the environment consists of the one-step state-transition probabilities and the one-step expected rewards,

p^a_{ss'} = Pr{ s_{t+1} = s' | s_t = s, a_t = a }   and   r^a_s = E{ r_{t+1} | s_t = s, a_t = a },
for all s, s' ∈ S and a ∈ A. The agent's objective is to learn an optimal Markov policy, a mapping from states to probabilities of taking each available primitive action, π : S × A → [0, 1], that maximizes the expected discounted future reward from each state s:

V^π(s) = E{ r_{t+1} + γ r_{t+2} + ... | s_t = s, π } = Σ_{a ∈ A_s} π(s, a) [ r^a_s + γ Σ_{s'} p^a_{ss'} V^π(s') ],
where π(s, a) is the probability with which the policy π chooses action a ∈ A_s in state s, and γ ∈ [0, 1] is a discount-rate parameter. V^π(s) is called the value of state s under policy π, and V^π is called the state-value function for π. The optimal state-value function gives the value of a state under an optimal policy: V*(s) = max_π V^π(s) = max_{a ∈ A_s} [ r^a_s + γ Σ_{s'} p^a_{ss'} V*(s') ]. Given V*, an optimal policy is easily formed by choosing in each state s any action that achieves the maximum in this equation. A parallel set of value functions, denoted Q^π and Q*,
and Bellman equations can be defined for state-action pairs, rather than for states. Planning
in reinforcement learning refers to the use of models of the environment to compute value
functions and thereby to optimize or improve policies.
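As a concrete illustration of these Bellman equations, the sketch below performs synchronous policy evaluation on tabular arrays. It is our own minimal example, not code from the paper; the array layout (P indexed as [s, a, s'], R as [s, a]) is an assumption made for the illustration.

```python
import numpy as np

def evaluate_policy(pi, P, R, gamma, tol=1e-8):
    """Iterate V(s) = sum_a pi[s,a] (R[s,a] + gamma sum_s' P[s,a,s'] V(s')) to a fixed point."""
    V = np.zeros(P.shape[0])
    while True:
        Q = R + gamma * (P @ V)          # Q[s, a] = R[s, a] + gamma * E[V(s') | s, a]
        V_new = (pi * Q).sum(axis=1)     # average over actions drawn from pi(s, .)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new
```

Replacing the averaging step with a maximum over actions turns the same loop into value iteration for V*.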
2 Options
We use the term options for our generalization of primitive actions to include temporally
extended courses of action. Let h_{t,T} = s_t, a_t, r_{t+1}, s_{t+1}, a_{t+1}, ..., r_T, s_T be the history sequence from time t ≤ T to time T, and let Ω denote the set of all possible histories in the given MDP. Options consist of three components: an initiation set I ⊆ S, a policy π : Ω × A → [0, 1], and a termination condition β : Ω → [0, 1]. An option o = (I, π, β) can be taken in state s if and only if s ∈ I. If o is taken in state s_t, the next action a_t is selected according to π(s_t, ·). The environment then makes a transition to s_{t+1}, where o terminates with probability β(h_{t,t+1}), or else continues, determining a_{t+1} according to π(h_{t,t+1}, ·), and transitioning to state s_{t+2}, where o terminates with probability β(h_{t,t+2}), etc. We call the general options defined above semi-Markov because π and β depend on the history sequence; in Markov options π and β depend only on the current state. Semi-Markov options allow "timeouts", i.e., termination after some period of time has elapsed, and other extensions which cannot be handled by Markov options.
The initiation set and termination condition of an option together limit the states over which the option's policy must be defined. For example, a hand-crafted policy π for a mobile robot to dock with its battery charger might be defined only for states I in which the battery charger is within sight. The termination condition β would be defined to be 1 outside of I and when the robot is successfully docked.
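A Markov option maps directly onto a small data structure. The sketch below is our own illustration (not from the paper): the history argument is simplified to the current state, so it covers Markov options only, and the env.step interface is an assumption.

```python
from dataclasses import dataclass
from typing import Callable, Set

@dataclass
class Option:
    initiation_set: Set[int]             # I: states where the option may be taken
    policy: Callable[[int], int]         # pi: state -> primitive action
    termination: Callable[[int], float]  # beta: state -> probability of terminating

def run_option(env, option, s, gamma, rng):
    """Execute a Markov option to termination; return (discounted reward, duration, final state)."""
    total, k = 0.0, 0
    while True:
        s, r = env.step(s, option.policy(s))  # assumed API: (state, action) -> (next state, reward)
        total += (gamma ** k) * r
        k += 1
        if rng.random() < option.termination(s):
            return total, k, s
```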
We can now define policies over options. Let the set of options available in state s be denoted O_s; the set of all options is denoted O = ∪_{s∈S} O_s. When initiated in a state s_t, the Markov policy over options μ : S × O → [0, 1] selects an option o ∈ O_{s_t} according to the probability distribution μ(s_t, ·). The option o is then taken in s_t, determining actions until it terminates in s_{t+k}, at which point a new option is selected, according to μ(s_{t+k}, ·), and so on. In this way a policy over options, μ, determines a (non-stationary) policy over actions, or flat policy, π = f(μ). We define the value of a state s under a general flat policy π as the expected return
if the policy is started in s:

V^π(s) ≝ E{ r_{t+1} + γ r_{t+2} + ... | ε(π, s, t) },

where ε(π, s, t) denotes the event of π being initiated in s at time t. The value of a state under a general policy (i.e., a policy over options) μ can then be defined as the value of the state under the corresponding flat policy: V^μ(s) ≝ V^{f(μ)}(s). An analogous definition can be used for the option-value function, Q^μ(s, o). For semi-Markov options it is useful to define Q^μ(h, o) as the expected discounted future reward after having followed option o through history h.
3 SMDP Planning
Options are closely related to the actions in a special kind of decision problem known as a
semi-Markov decision process, or SMDP (Puterman, 1994; see also Singh, 1992; Bradtke &
Duff, 1995; Mahadevan et al., 1997; Parr & Russell, 1998). In fact, any MDP with a fixed
set of options is an SMDP. Accordingly, the theory of SMDPs provides an important basis for
a theory of options. In this section, we review the standard SMDP framework for planning,
which will provide the basis for our extension.
Planning with options requires a model of their consequences. The form of this model is given by prior work with SMDPs. The reward part of the model of o for state s ∈ S is the total reward received along the way:

r^o_s = E{ r_{t+1} + γ r_{t+2} + ... + γ^{k-1} r_{t+k} | ε(o, s, t) },

where ε(o, s, t) denotes the event of o being initiated in state s at time t. The state-prediction part of the model is

p^o_{ss'} = Σ_{k=1}^∞ p(s', k) γ^k,

for all s' ∈ S, where p(s', k) is the probability that the option terminates in s' after k steps.
We call this kind of model a multi-time model because it describes the outcome of an option
not at a single time but at potentially many different times, appropriately combined.
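Because each outcome is weighted by γ^k, both parts of a multi-time model can be estimated by simple Monte Carlo over option executions. This sketch is ours, not a procedure from the paper, and it reuses the run_option helper sketched above.

```python
from collections import defaultdict

def estimate_option_model(env, option, s, gamma, rng, n_runs=10_000):
    """Monte Carlo estimates of the multi-time model (r_s^o, p_ss'^o) at one state s."""
    r_hat, p_hat = 0.0, defaultdict(float)
    for _ in range(n_runs):
        reward, k, s_end = run_option(env, option, s, gamma, rng)
        r_hat += reward / n_runs                # expected discounted reward along the way
        p_hat[s_end] += (gamma ** k) / n_runs   # gamma^k-weighted termination frequencies
    return r_hat, dict(p_hat)
```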
Using multi-time models we can write Bellman equations for general policies and options.
For any general Markov policy μ, its value functions satisfy the equations:

V^μ(s) = Σ_{o ∈ O_s} μ(s, o) [ r^o_s + Σ_{s'} p^o_{ss'} V^μ(s') ]

and

Q^μ(s, o) = r^o_s + Σ_{s'} p^o_{ss'} V^μ(s').
Let us denote a restricted set of options by O and the set of all policies selecting only from options in O by Π(O). Then the optimal value function given that we can select only from O is V*_O(s) = max_{o ∈ O_s} [ r^o_s + Σ_{s'} p^o_{ss'} V*_O(s') ]. A corresponding optimal policy, denoted μ*_O, is any policy that achieves V*_O, i.e., for which V^{μ*_O}(s) = V*_O(s) in all states s ∈ S. If V*_O and the models of the options are known, then μ*_O can be formed by choosing in any proportion among the maximizing options in the equation above for V*_O.
It is straightforward to extend MDP planning methods to SMDPs. For example, synchronous value iteration with options initializes an approximate value function V_0(s) arbitrarily and then updates it by:

V_{k+1}(s) ← max_{o ∈ O_s} [ r^o_s + Σ_{s' ∈ S} p^o_{ss'} V_k(s') ],   ∀s ∈ S.
Note that this algorithm reduces to conventional value iteration in the special case in which O = A. Standard results from SMDP theory guarantee that such processes converge for general semi-Markov options: lim_{k→∞} V_k(s) = V*_O(s) for all s ∈ S, o ∈ O, and for all O. The policies found using temporally abstract options are approximate in the sense that they achieve only V*_O, which is typically less than the maximum possible, V*.
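With the models stored as nested dictionaries, the synchronous update above is only a few lines. The layout below (r[s][o] for reward models, p[s][o][s'] for the γ^k-weighted transition models) is our own assumption; note that no explicit discount factor appears because it is already folded into the models.

```python
def smdp_value_iteration(states, options_at, r, p, tol=1e-8):
    """Synchronous SMDP value iteration over a set of options.

    Assumes every non-terminal state has at least one option available.
    """
    V = {s: 0.0 for s in states}
    while True:
        V_new = {s: max(r[s][o] + sum(pr * V[s2] for s2, pr in p[s][o].items())
                        for o in options_at[s])
                 for s in states}
        if max(abs(V_new[s] - V[s]) for s in states) < tol:
            return V_new
        V = V_new
```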
4 Interrupting Options
We are now ready to present the main new insight and result of this paper. SMDP methods apply to options, but only when they are treated as opaque indivisible units. Once an
option has been selected, such methods require that its policy be followed until the option
terminates. More interesting and potentially more powerful methods are possible by looking
inside options and by altering their internal structure (e.g. Sutton, Precup & Singh, 1998).
In particular, suppose we have determined the option-value function Q^μ(s, o) for some policy μ and for all state-option pairs (s, o) that could be encountered while following μ. This function tells us how well we do while following μ committing irrevocably to each option, but it can also be used to re-evaluate our commitment on each step. Suppose at time t we are in the midst of executing option o. If o is Markov in s, then we can compare the value of continuing with o, which is Q^μ(s_t, o), to the value of interrupting o and selecting a new option according to μ, which is V^μ(s) = Σ_{o'} μ(s, o') Q^μ(s, o'). If the latter is more highly valued, then why not interrupt o and allow the switch? This new way of behaving is indeed better, as shown below.

We can characterize the new way of behaving as following a policy μ' that is the same as the original one, but over new options, i.e., μ'(s, o') = μ(s, o), for all s ∈ S. Each new option o' is the same as the corresponding old option o except that it terminates whenever switching seems better than continuing according to Q^μ. We call such a μ' an interrupted policy of μ.
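Operationally, interruption is a one-line comparison made before each step of the current option: terminate and re-select whenever Q^μ(s, o) falls below V^μ(s) = Σ_{o'} μ(s, o') Q^μ(s, o'). The sketch below is our own; Q is assumed given as a nested dict, and mu[s] as a dict from options to probabilities.

```python
def interrupted_choice(s, current, mu, Q, rng):
    """Option to follow from state s under the interrupted policy mu'."""
    v = sum(prob * Q[s][o] for o, prob in mu[s].items())  # V^mu(s)
    if current is None or Q[s][current] < v:
        # Interrupt: switching looks better than continuing, so re-select from mu(s, .).
        options, probs = zip(*mu[s].items())
        current = rng.choices(options, weights=probs, k=1)[0]
    return current
```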
We will now state a general theorem, which extends the case described above, in that options
may be semi-Markov (instead of Markov) and interruption is optional at each state where it
could be done. The latter extension lifts the requirement that Q^μ be completely known, since
the interruption can be restricted to states for which this information is available.
Theorem 1 (Interruption) For any MDP, any set of options O, and any Markov policy μ : S × O → [0, 1], define a new set of options, O', with a one-to-one mapping between the two option sets as follows: for every o = (I, π, β) ∈ O we define a corresponding o' = (I, π, β') ∈ O', where β' = β except that for any history h in which Q^μ(h, o) < V^μ(s), where s is the final state of h, we may choose to set β'(h) = 1. Any histories whose termination conditions are changed in this way are called interrupted histories. Let μ' be the policy over O' corresponding to μ: μ'(s, o') = μ(s, o), where o is the option in O corresponding to o', for all s ∈ S. Then

1. V^{μ'}(s) ≥ V^μ(s) for all s ∈ S.

2. If from state s ∈ S there is a non-zero probability of encountering an interrupted history upon initiating μ' in s, then V^{μ'}(s) > V^μ(s).
Proof: The idea is to show that, for an arbitrary start state s, executing the option given by the termination-improved policy μ' and then following policy μ thereafter is no worse than always following policy μ. In other words, we show that the following inequality holds:

Σ_{o'} μ'(s, o') [ r^{o'}_s + Σ_{s'} p^{o'}_{ss'} V^μ(s') ] ≥ V^μ(s) = Σ_o μ(s, o) [ r^o_s + Σ_{s'} p^o_{ss'} V^μ(s') ].   (1)

If this is true, then we can use it to expand the left-hand side, repeatedly replacing every occurrence of V^μ(x) on the left by the corresponding Σ_{o'} μ'(x, o') [ r^{o'}_x + Σ_{x'} p^{o'}_{xx'} V^μ(x') ]. In the limit, the left-hand side becomes V^{μ'}, proving that V^{μ'} ≥ V^μ. Since μ'(s, o') = μ(s, o) ∀s ∈ S, we need to show that

r^{o'}_s + Σ_{s'} p^{o'}_{ss'} V^μ(s') ≥ r^o_s + Σ_{s'} p^o_{ss'} V^μ(s').   (2)
Let Γ denote the set of all interrupted histories: Γ = {h ∈ Ω : β(h) ≠ β'(h)}. Then, the left-hand side of (2) can be re-written as

E{ r + γ^k V^μ(s') | ε(o', s), h_{ss'} ∉ Γ } + E{ r + γ^k V^μ(s') | ε(o', s), h_{ss'} ∈ Γ },

where s', r, and k are the next state, cumulative reward, and number of elapsed steps following option o from s (h_{ss'} is the history from s to s'). Trajectories that end because of encountering a history h_{ss'} ∉ Γ never encounter a history in Γ, and therefore also occur with the same probability and expected reward upon executing option o in state s. Therefore, we can re-write the right-hand side of (2) as

E{ r + γ^k V^μ(s') | ε(o', s), h_{ss'} ∉ Γ } + E{ β(s')[r + γ^k V^μ(s')] + (1 − β(s'))[r + γ^k Q^μ(h_{ss'}, o)] | ε(o', s), h_{ss'} ∈ Γ }.

This proves (1) because for all h_{ss'} ∈ Γ, Q^μ(h_{ss'}, o) ≤ V^μ(s'). Note that strict inequality holds in (2) if Q^μ(h_{ss'}, o) < V^μ(s') for at least one history h_{ss'} ∈ Γ that ends a trajectory generated by o' with non-zero probability. ◊
As one application of this result, consider the case in which μ is an optimal policy for a given set of Markov options O. The interruption theorem gives us a way of improving over μ*_O with just the cost of checking (on each time step) if a better option exists, which is negligible compared to the combinatorial process of computing Q*_O or V*_O. Kaelbling (1993) and Dietterich (1998) demonstrated a similar performance improvement by interrupting temporally extended actions in a different setting.
5 Illustration
Figure 1 shows a simple example of the gain that can be obtained by interrupting options.
The task is to navigate from a start location to a goal location within a continuous twodimensional state space. The actions are movements of length 0.01 in any direction from the
current state. Rather than work with these low-level actions, infinite in number, we introduce
seven landmark locations in the space. For each landmark we define a controller that takes us
to the landmark in a direct path. Each controller is only applicable within a limited range of
states, in this case within a certain distance of the corresponding landmark. Each controller
then defines an option: the circular region around the controller's landmark is the option's
initiation set, the controller itself is the policy, and the arrival at the target landmark is the
termination condition. We denote the set of seven landmark options by O. Any action within
0.01 of the goal location transitions to the terminal state, γ = 1, and the reward is −1 on all transitions, which makes this a minimum-time task.
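Each landmark controller fits the option template directly: the initiation set is the disk around the landmark, the policy steps 0.01 toward it, and termination fires on arrival. A rough sketch under those assumptions (ours; the radius and arrival tolerance are illustrative):

```python
import math

def landmark_option(landmark, radius, step=0.01, eps=0.01):
    """Option for one landmark in the continuous navigation task."""
    lx, ly = landmark

    def in_initiation_set(s):
        return math.hypot(s[0] - lx, s[1] - ly) <= radius

    def policy(s):
        # Only queried while not terminated, so the distance d is at least eps > 0.
        d = math.hypot(lx - s[0], ly - s[1])
        return ((lx - s[0]) / d * step, (ly - s[1]) / d * step)

    def beta(s):
        return 1.0 if math.hypot(s[0] - lx, s[1] - ly) < eps else 0.0

    return in_initiation_set, policy, beta
```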
One of the landmarks coincides with the goal, so it is possible to reach the goal while picking
only from O. The optimal policy within Π(O) runs from landmark to landmark, as shown
by the thin line in Figure 1. This is the optimal solution to the SMDP defined by 0 and is
indeed the best that one can do while picking only from these options. But of course one can
do better if the options are not followed all the way to each landmark. The trajectory shown
by the thick line in Figure 1 cuts the corners and is shorter. This is the interrupted policy
with respect to the SMDP-optimal policy. The interrupted policy takes 474 steps from start
to goal which, while not as good as the optimal policy (425 steps), is much better than the
SMDP-optimal policy, which takes 600 steps. The state-value functions, V^{μ*_O} and V^{μ'}, for
the two policies are also shown in Figure 1.
Figure 2 presents a more complex, mission planning task. A mission is a flight from base to
observe as many of a given set of sites as possible and to return to base without running out
of fuel. The local weather at each site flips from cloudy to clear according to independent
1 We note that the same proof would also apply for switching to other options (not selected by μ) if they improved over continuing with o. That result would be more general and closer to conventional policy improvement. We prefer the result given here because it emphasizes its primary application.
[Figure 1 graphic: trajectories through the space of landmarks, comparing the interrupted solution (474 steps) with the SMDP solution (600 steps), together with surface plots of the SMDP value function and the values with interruption.]
Figure 1: Using interruption to improve navigation with landmark-directed controllers. The task (left)
is to navigate from S to G in minimum time using options based on controllers that run each to one
of seven landmarks (the black dots). The circles show the region around each landmark within which
the controllers operate. The thin line shows the optimal behavior that uses only these controllers run to
termination, and the thick line shows the corresponding interrupted behavior, which cuts the corners.
The right panels show the state-value functions for the SMDP-optimal and interrupted policies.
Poisson processes. If the sky at a given site is cloudy when the plane gets there, no observation is made and the reward is 0. If the sky is clear, the plane gets a reward, according to the importance of the site. The positions, rewards, and mean time between two weather changes for each site are given in Figure 2. The plane has a limited amount of fuel, and it consumes one unit of fuel during each time tick. If the fuel runs out before reaching the base, the plane crashes and receives a reward of −100.
The primitive actions are tiny movements in any direction (there is no inertia). The state of
the system is described by several variables: the current position of the plane, the fuel level,
the sites that have been observed so far, and the current weather at each of the remaining sites.
The state-action space has approximately 24.3 billion elements (assuming 100 discretization
levels of the continuous variables) and is intractable by normal dynamic programming methods. We introduced options that can take the plane to each of the sites (including the base),
from any position in the state space. The resulting SMDP has only 874,800 elements and it
is feasible to exactly determine V*_O(s') for all sites s'. From this solution and the model of the options, we can determine Q_O(s, o) = r^o_s + Σ_{s'} p^o_{ss'} V*_O(s') for any option o and any state s in the whole space.
We performed asynchronous value iteration using the options in order to compute the optimal option-value function, and then used the interruption approach based on the values computed. The policies obtained by both approaches were compared to the results of a static planner,
which exhaustively searches for the best tour assuming the weather does not change, and
then re-plans whenever the weather does change. The graph in Figure 2 shows the reward
obtained by each of these methods, averaged over 100 independent simulated missions. The
policy obtained by interruption performs significantly better than the SMDP policy, which in
turn is significantly better than the static planner. 2
6 Closing
This paper has developed a natural, even obvious, observation-that one can do better by
continually re-evaluating one 's commitment to courses of action than one can by committing irrevocably to them. Our contribution has been to formulate this observation precisely
enough to prove it and to demonstrate it empirically. Our final example suggests that this
technique can be used in applications far too large to be solved at the level of primitive actions. Note that this was achieved using exact methods, without function approximators to
represent the value function. With function approximators and other reinforcement learning techniques, it should be possible to address problems that are substantially larger still.
2 In preliminary experiments, we also used interruption on a crudely learned estimate of Q_O. The
performance of the interrupted solution was very close to the result reported here.
[Figure 2 graphic: map of the mission planning task showing the base and the observation sites, each labeled with a reward and a mean time between weather changes (e.g., reward 15, mean time 25), plus a bar chart of expected reward per mission under high and low fuel.]
Figure 2: The mission planning task and the performance of policies constructed by SMDP methods, interruption of the SMDP policy, and an optimal static re-planner that does not take into account possible changes in weather conditions.
Acknowledgments
The authors gratefully acknowledge the substantial help they have received from many colleagues, including especially Amy McGovern, Andrew Barto, Ron Parr, Tom Dietterich,
Andrew Fagg, Leo Zelevinsky and Manfred Huber. We also thank Paul Cohen, Robbie Moll,
Mance Harmon, Sascha Engelbrecht, and Ted Perkins for helpful reactions and constructive
criticism. This work was supported by NSF grant ECS-9511805 and grant AFOSR-F4962096-1-0254, both to Andrew Barto and Richard Sutton. Satinder Singh was supported by NSF
grant IIS-9711753.
References
Bradtke, S. J. & Duff, M. O. (1995). Reinforcement learning methods for continuous-time Markov
decision problems. In NIPS 7 (393-500). MIT Press.
Dayan, P. & Hinton, G. E. (1993). Feudal reinforcement learning. In NIPS 5 (271-278). MIT Press.
Dietterich, T. G. (1998). The MAXQ method for hierarchical reinforcement learning. In Proceedings
of the Fifteenth International Conference on Machine Learning. Morgan Kaufmann.
Kaelbling, L. P. (1993). Hierarchical learning in stochastic domains: Preliminary results. In Proceedings of the Tenth International Conference on Machine Learning (167-173). Morgan Kaufmann.
Mahadevan, S., Marchallek, N., Das, T. K. & Gosavi, A. (1997). Self-improving factory simulation
using continuous-time average-reward reinforcement learning. In Proceedings of the Fourteenth
International Conference on Machine Learning (202-210). Morgan Kaufmann.
McGovern, A., Sutton, R. S., & Fagg, A. H. (1997). Roles of macro-actions in accelerating reinforcement learning. In Grace Hopper Celebration of Women in Computing (13-17) .
Parr, R. & Russell, S. (1998). Reinforcement learning with hierarchies of machines. In NIPS 10. MIT
Press.
Puterman, M. L. (1994). Markov Decision Processes: Discrete Stochastic Dynamic Programming.
Wiley.
Singh, S. P. (1992). Reinforcement learning with a hierarchy of abstract models. In Proceedings of the
Tenth National Conference on Artificial Intelligence (202-207). MIT/AAAI Press.
Sutton, R. S. (1995). TD models: Modeling the world as a mixture of time scales. In Proceedings of
the Twelfth International Conference on Machine Learning (531-539). Morgan Kaufmann.
Sutton, R. S., Precup, D. & Singh, S. (1998). Intra-option learning about temporally abstract actions. In
Proceedings of the Fifteenth International Conference on Machine Learning. Morgan Kaufmann.
Sutton, R. S., Precup, D. & Singh, S. (1998). Between MDPs and Semi-MDPs: learning, planning,
and representing knowledge at multiple temporal scales. TR 98-74, Department of Compo Sci.,
University of Massachusetts, Amherst.
Thrun, S. & Schwartz, A. (1995). Finding structure in reinforcement learning. In NIPS 7 (385-392).
MIT Press.
664 | 1,608 | Exploring Unknown Environments with
Real-Time Search or Reinforcement Learning
Sven Koenig
College of Computing, Georgia Institute of Technology
skoenig@cc.gatech.edu
Abstract
Learning Real-Time A* (LRTA*) is a popular control method that interleaves planning and plan execution and has been shown to solve search problems in known
environments efficiently. In this paper, we apply LRTA * to the problem of getting to
a given goal location in an initially unknown environment. Uninformed LRTA * with
maximal lookahead always moves on a shortest path to the closest unvisited state,
that is, to the closest potential goal state. This was believed to be a good exploration
heuristic, but we show that it does not minimize the worst-case plan-execution time
compared to other uninformed exploration methods. This result is also of interest to
reinforcement-learning researchers since many reinforcement learning methods use
asynchronous dynamic programming, interleave planning and plan execution, and
exhibit optimism in the face of uncertainty, just like LRTA *.
1 Introduction
Real-time (heuristic) search methods are domain-independent control methods that interleave planning and plan execution. They are based on agent-centered search [Dasgupta et
al., 1994; Koenig, 1996], which restricts the search to a small part of the environment that
can be reached from the current state of the agent with a small number of action executions.
This is the part of the environment that is immediately relevant for the agent in its current
situation. The most popular real-time search method is probably the Learning Real-Time
A* (LRTA*) method [Korf, 1990]. It has a solid theoretical foundation and the following
advantageous properties: First, it allows for fine-grained control over how much planning
to do between plan executions and thus is an any-time contract algorithm [Russell and Zilberstein, 1991]. Second, it can use heuristic knowledge to guide planning, which reduces
planning time without sacrificing solution quality. Third, it can be interrupted at any state
and resume execution at a different state. Fourth, it amortizes learning over several search
episodes, which allows it to find plans with suboptimal plan-execution time fast and then
improve the plan-execution time as it solves similar planning tasks, until its plan-execution
time is optimal. Thus, LRTA * always has a small sum of planning and plan-execution
Figure 1: Uninformed LRTA*

Initially, u(s) = 0 for all s ∈ S.
1. s_current := s_start.
2. If s_current ∈ G, then stop successfully.
3. Generate a local search space S_lss ⊆ S with s_current ∈ S_lss and S_lss ∩ G = ∅.
4. Update u(s) for all s ∈ S_lss (Figure 2).
5. a := one-of argmin_{a ∈ A(s_current)} u(succ(s_current, a)).
6. Execute action a.
7. s_current := succ(s_current, a).
8. If s_current ∈ S_lss, then go to 5.
9. Go to 2.

Figure 2: Value-Update Step

1. For all s ∈ S_lss: u(s) := ∞.
2. If u(s) < ∞ for all s ∈ S_lss, then return.
3. s' := one-of argmin_{s ∈ S_lss : u(s) = ∞} min_{a ∈ A(s)} u(succ(s, a)).
4. If min_{a ∈ A(s')} u(succ(s', a)) = ∞, then return.
5. u(s') := 1 + min_{a ∈ A(s')} u(succ(s', a)).
6. Go to 2.
time, and it minimizes the plan-execution time in the long run in case similar planning tasks
unexpectedly repeat. This is important since no search method that executes actions before
it has solved a planning task completely can guarantee to minimize the plan-execution time
right away.
Real-time search methods have been shown to be efficient alternatives to traditional search
methods in known environments. In this paper, we investigate real-time search methods
in unknown environments. In such environments, real-time search methods allow agents
to gather information early. This information can then be used to resolve some of the
uncertainty and thus reduce the amount of planning done for unencountered situations.
We study robot-exploration tasks without actuator and sensor uncertainty, where the sensors
on-board the robot can uniquely identify its location and the neighboring locations. The
robot does not know the map in advance, and thus has to explore its environment sufficiently
to find the goal and a path to it. A variety of methods can solve these tasks, including LRTA *.
The proceedings of the AAAI-97 Workshop on On-Line Search [Koenig et al., 1997] give
a good overview of some of these techniques. In this paper, we study whether uninformed
LRTA * is able to minimize the worst-case plan-execution time over all state spaces with the
same number of states provided that its lookahead is sufficiently large. Uninformed LRTA *
with maximal lookahead always moves on a shortest path to the closest unvisited state, that is, to the closest potential goal state - it exhibits optimism in the face of uncertainty [Moore and Atkeson, 1993]. We show that this exploration heuristic is not as good as it was believed to be. This solves the central problem left open in [Pemberton and Korf, 1992] and improves our understanding of LRTA*. Our results also apply to learning control for tasks other than robot exploration, for example the control tasks studied in [Davies et al., 1998]. They are also of interest to reinforcement-learning researchers since many reinforcement learning methods use asynchronous dynamic programming, interleave planning and plan execution, and exhibit optimism in the face of uncertainty, just like LRTA* [Barto et al., 1995; Kearns and Singh, 1998].
2 LRTA*
We use the following notation to describe LRTA *: S denotes the finite set of states of the
environment, s_start ∈ S the start state, and ∅ ≠ G ⊆ S the set of goal states. The number of states is n := |S|. A(s) ≠ ∅ is the finite, nonempty set of actions that can be executed in
state s E S. succ( s, a) denotes the successor state that results from the execution of action
a E A(s) in state s E S. We also use two operators with the following semantics: Given
a set X, the expression "one-of X" returns an element of X according to an arbitrary rule.
A subsequent invocation of "one-of X" can return the same or a different element. The
expression "arg minxEx !(x)" returns the elements x E X that minimize !(x), that is, the
minx'Ex !(x ' )}.
set {x E XI!(x)
=
We model environments (topological maps) as state spaces that correspond to undirected
graphs, and assume that it is indeed possible to reach a goal state from the start state. We
measure the distances and thus plan-execution time in action executions, which is reasonable
if every action can be executed in about the same amount of time. The graph is initially
unknown. The robot can always observe whether its current state is a goal state, how many
actions can be executed in it, and which successor states they lead to but not whether the
successor states are goal states. Furthermore, the robot can identify the successor states
when it observes them again at a later point in time. This assumption is realistic, for
example, if the states look sufficiently different or the robot has a global positioning system
(GPS) available.
LRTA * learns a map of the environment and thus needs memory proportional to the number
of states and actions observed. It associates a small amount of information with the states
in its map. In particular, it associates a u-value u(s) with each state s E S. The u-values
approximate the goal distances of the states. They are updated as the search progresses and
used to determine which actions to execute. Figure 1 describes LRTA *: LRTA * first checks
whether it has already reached a goal state and thus can terminate successfully (Line 2). If
not, it generates the local search space S_lss ⊆ S (Line 3). While we require only that the
current state is part of the local search space and the goal states are not [Barto et al., 1995],
in practice LRTA* constructs S_lss by searching forward from the current state. LRTA* then
updates the u-values of all states in the local search space (Line 4), as shown in Figure 2.
The value-update step assigns each state its goal distance under the assumption that the
u-values of all states outside of the local search space correspond to their correct goal
distances. Formally, if u(s) ∈ [0, ∞] denotes the u-values before the value-update step and û(s) ∈ [0, ∞] denotes the u-values afterwards, then û(s) = 1 + min_{a ∈ A(s)} û(succ(s, a)) for all s ∈ S_lss and û(s) = u(s) otherwise. Based on these u-values, LRTA* decides which
action to execute next (Line 5). It greedily chooses the action that minimizes the u-value of
the successor state (ties are broken arbitrarily) because the u-values approximate the goal
distances and LRTA * attempts to decrease its goal distance as much as possible. Finally,
LRTA * executes the selected action (Line 6) and updates its current state (Line 7). Then, if
the new state is still part of the local search space used previously, LRTA * selects another
action for execution based on the current u-values (Line 8). Otherwise, it iterates (Line 9).
(The behavior of LRTA * with either minimal or maximal lookahead does not change if
Line 8 is deleted.)
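The following Python transcription of Figures 1 and 2 is our own sketch, not code from the paper. For simplicity it assumes the whole graph is given up front as a dict succ mapping each state to its successors (i.e., it simulates LRTA* rather than modeling the robot's partial knowledge), and it builds the local search space by a breadth-limited forward search.

```python
import math
from collections import deque

def lrta_star(succ, s_start, goals, lookahead=1):
    """Uninformed LRTA*; u-values default to 0 for states not yet stored."""
    u, s, path = {}, s_start, [s_start]
    while s not in goals:
        # Line 3: local search space = non-goal states within `lookahead` steps of s.
        lss, frontier = {s}, deque([(s, 0)])
        while frontier:
            x, d = frontier.popleft()
            if d < lookahead:
                for y in succ[x]:
                    if y not in lss and y not in goals:
                        lss.add(y)
                        frontier.append((y, d + 1))
        value_update(succ, u, lss)              # Line 4 (Figure 2)
        while s in lss and s not in goals:      # Lines 5-8: act greedily on u-values
            s = min(succ[s], key=lambda y: u.get(y, 0.0))
            path.append(s)
    return path

def value_update(succ, u, lss):
    """Figure 2: set each u(s) in lss to its goal distance, trusting outside u-values."""
    for x in lss:
        u[x] = math.inf
    while True:
        open_states = [x for x in lss if u[x] == math.inf]
        if not open_states:
            return
        x = min(open_states, key=lambda x: min(u.get(y, 0.0) for y in succ[x]))
        best = min(u.get(y, 0.0) for y in succ[x])
        if best == math.inf:
            return
        u[x] = 1 + best
```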
3 Plan-Execution Time of LRTA * for Exploration
In this section, we study the behavior of LRTA* with minimal and maximal lookaheads in
unknown environments. We assume that no a-priori heuristic knowledge is available and,
thus, that LRTA * is uninformed. In this case, the u-values of all unvisited states are zero
and do not need to be maintained explicitly.
Minimal Lookahead: The lookahead of LRTA* is minimal if the local search space contains only the current state. LRTA* with minimal lookahead performs almost no planning
between plan executions. Its behavior in initially known and unknown environments is
identical. Figure 3 shows an example.
Let gd(s) denote the goal distance of state s. Then, according to one of our previous results,
uninformed LRTA* with any lookahead reaches a goal state after at most Σ_{s∈S} gd(s) action executions [Koenig and Simmons, 1995]. Since Σ_{s∈S} gd(s) ≤ Σ_{i=0}^{n-1} i = 1/2 n² − 1/2 n,
[Figure 3 graphic: an example graph, shown once for LRTA* with minimal lookahead and once for LRTA* with maximal lookahead. Legend: visited vertex (known not to be a goal vertex); unvisited but known vertex (unknown whether it is a goal vertex); current vertex of the robot; u-value of the vertex; edge traversed in at least one direction; untraversed edge; local search space.]

Figure 3: Example
[Figure 4 graphic: a rectangular grid-world with marked start and goal vertices; all edge lengths are one.]

Figure 4: A Planar Undirected Graph
uninformed LRTA* with any lookahead reaches a goal state after O(n²) action executions. This upper bound on the plan-execution time is tight in the worst case for uninformed LRTA* with minimal lookahead, even if the number of actions that can be executed in any state is bounded from above by a small constant (here: three). Figure 4, for example, shows a rectangular grid-world for which uninformed LRTA* with minimal lookahead reaches a goal state in the worst case only after Θ(n²) action executions. In particular, LRTA* can
traverse the state sequence that is printed by the following program in pseudo code. The
scope of the for-statements is shown by indentation.
for i := n-3 downto n/2 step 2
    for j := 1 to i step 2
        print j
    for j := i+1 downto 2 step 2
        print j
for i := 1 to n-1 step 2
    print i
In this case, LRTA* executes 3n²/16 − 3/4 actions before it reaches the goal state (for n ≥ 2 with n mod 4 = 2). For example, for n = 10, it traverses the state sequence s1, s3, s5, s7, s8, s6, s4, s2, s1, s3, s5, s6, s4, s2, s1, s3, s5, s7, and s9.
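The pseudocode can be checked numerically. The following transcription is our own; it asserts that the number of action executions (transitions in the printed state sequence) equals 3n²/16 − 3/4 for several valid n.

```python
def worst_case_sequence(n):
    """State indices visited by LRTA* in the grid-world of Figure 4 (n mod 4 == 2)."""
    seq = []
    for i in range(n - 3, n // 2 - 1, -2):
        seq += list(range(1, i + 1, 2))      # forward sweep: 1, 3, ..., i
        seq += list(range(i + 1, 1, -2))     # backward sweep: i+1, i-1, ..., 2
    seq += list(range(1, n, 2))              # final sweep: 1, 3, ..., n-1
    return seq

for n in (6, 10, 14):
    # action executions = transitions = 3n^2/16 - 3/4, an integer for n mod 4 == 2
    assert len(worst_case_sequence(n)) - 1 == (3 * n * n - 12) // 16
```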
Maximal Lookahead: As we increase the lookahead of LRTA*, we expect that its plan-execution time tends to decrease because LRTA* uses more information to decide which
[Figure 5 graphic: a stem with attached branches (m = 3); the start is at one end of the stem and the goal at the end of the single edge of the longest branch. Annotations mark branches of length 3, the order in which the remaining unvisited vertices are visited, and the point where LRTA* is now. Legend: visited vertex; unvisited vertex; edge traversed in at least one direction; untraversed edge.]

Figure 5: Another Planar Undirected Graph (m = 3)
action to execute next. This makes it interesting to study LRTA* with maximal lookahead.
The lookahead of LRTA * is maximal in known environments if the local search space
contains all non-goal states. In this case, LRTA * performs a complete search without
interleaving planning and plan execution and follows a shortest path from the start state to
a closest goal state. Thus, it needs gd( Sst art ) action executions. No other method can do
better than that.
The maximal lookahead of LRTA* is necessarily smaller in initially unknown environments than in known environments because its value-update step can only search the known part of the environment. Therefore, the lookahead of LRTA* is maximal in unknown environments if the local search space contains all visited non-goal states. Figure 3 shows an example.
Uninformed LRTA * with maximal lookahead always moves on a shortest path to the
closest unvisited state, that is, to the closest potential goal state. This appears to be a
good exploration heuristic. [Pemberton and Korf, 1992] call this behavior "incremental
best-first search," but were not able to prove or disprove whether this locally optimal
search strategy is also globally optimal. Since this exploration heuristic has been used
on real mobile robots [Thrun et al., 1998], we study how well its plan-execution time
compares to the plan-execution time of other uninformed exploration methods. We show
that the worst-case plan-execution time of uninformed LRTA* with maximal lookahead in unknown environments is Ω(n log n / log log n) action executions and thus grows faster than linearly
in the number of states n. It follows that the plan-execution time of LRTA * is not optimal
in the worst case, since depth-first search needs a number of action executions in the worst
case that grows only linearly in the number of states.
Consider the graph shown in Figure 5, which is a variation of a graph in [Koenig and Smirnov, 1996]. It consists of a stem with several branches. Each branch consists of two parallel paths of the same length that connect the stem to a single edge. The length of the branch is the length of each of the two paths. The stem has length m^m for some integer m ≥ 3 and consists of the vertices v_0, v_1, ..., v_{m^m}. For each integer i with 1 ≤ i ≤ m there are m^{m−i} branches of length Σ_{j=0}^{i−1} m^j each (including branches of length zero). These branches attach to the stem at the vertices v_{j·m^i}, for integers j; if i is even, then 0 ≤ j ≤ m^{m−i} − 1, otherwise 1 ≤ j ≤ m^{m−i}. There is one additional single edge that attaches to vertex v_0.
v_{m^m} is the starting vertex. The vertex at the end of the single edge of the longest branch is the goal vertex. Notice that the graph is planar. This is a desirable property since non-planar graphs are, in general, rather unrealistic models of maps.
Uninformed LRTA* with maximal lookahead can traverse the stem repeatedly forward and
backward, and the resulting plan-execution time is large compared to the number of vertices
that are necessary to mislead LRTA * into this behavior. In particular, LRTA * can behave
as follows: It starts at vertex Vmm and traverses the whole stem and all branches, excluding
the single edges at their end, and finally traverses the additional edge attached to vertex
vo, as shown in Figure 5. At this point, LRTA* knows all vertices. It then traverses the
whole stem, visiting the vertices at the ends of the single edges of the branches of length O.
It then switches directions and travels along the whole stem in the opposite direction, this
time visiting the vertices at the end of the single edges of the branches of length m, and so
forth, switching directions repeatedly. It succeeds when it finally uses the longest branch
and discovers the goal vertex. To summarize, the vertices at the ends of the branches are
tried out in the order indicated in Figure 5. The total number of edge traversals is.o.( mm+l )
since the stem of length mm is traversed m + 1 times. To be precise, the total number of
edge traversal~ is (mm+3 +3m m+ 2 _8m m+ 1 +2m2 -m+3)/(m 2 -2m+ 1). It holds that
n = 8(mm) smcen = (3m m+2_5m m+l_mm +mm-l +2m2-2m+2)/(m2-2m+l) .
This implies that m = .0.( IO~~; n) since it holds that, for k > 1 and all sufficiently large m
(to be precise: m with m ~ k)
1
10Ik m+IOlk logk m
mlOlk m
.1.+ logk
m
logk m
mlogk m
<
-
1
I":Ui+o = m.
m
=
Put together, it follows that the total number of edge traversals is .o.(mm+!)
.o.(m n) =
.0.( IO:~; n n). (We also performed a simulation that confirmed our theoretical results.)
The graph from Figure 5 can be modified to cause LRTA * to behave similarly even if the
assumptions of the capabilities of the robot or the environment vary from our assumptions
here, including the case where the robot can observe only the actions that lead to unvisited
states but not the states themselves.
4 Future Work
Our example provided a lower bound on the plan-execution time of uninformed LRTA *
with maximal lookahead in unknown environments. The lower bound is barely super-linear in the number of states. A tight bound is currently unknown, although upper bounds are known. A trivial upper bound, for example, is O(n²) since LRTA* executes at most n − 1 actions before it visits another state that it has not visited before and there are only n states to visit. A tighter upper bound follows directly from [Koenig and Smirnov, 1996]. It was
surprisingly difficult to construct our example. It is currently unknown, and therefore a
topic of future research, for which classes of graphs the worst-case plan-execution time of
LRTA * is optimal up to a constant factor and whether these classes of graphs correspond to
interesting and realistic environments. It is also currently unknown how the bounds change
as LRTA * becomes more informed about where the goal states are.
5 Conclusions
Our work provides a first analysis of uninformed LRTA * in unknown environments. We
studied versions of LRTA * with minimal and maximal lookaheads and showed that their
worst-case plan-execution time is not optimal, not even up to a constant factor. The worst-case plan-execution time of depth-first search, for example, is smaller than that of LRTA* with either minimal or maximal lookahead. This is not to say that one should always prefer
depth-first search over LRTA * since, for example, LRTA * can use heuristic knowledge to
direct its search towards the goal states. LRTA * can also be interrupted at any location and
get restarted at a different location. If the batteries of the robot need to get recharged during
exploration, for instance, LRTA * can be interrupted and later get restarted at the charging
station. While depth-first search could be modified to have these properties as well, it would
lose some of its simplicity.
Acknowledgments
Thanks to Yury Smirnov for our collaboration on previous work which this paper extends. Thanks also
to the reviewers for their suggestions for improvements and future research directions. Unfortunately,
space limitations prevented us from implementing all of their suggestions in this paper.
References
(Barto et al., 1995) Barto, A.; Bradtke, S.; and Singh, S. 1995. Learning to act using real-time dynamic programming. Artificial Intelligence 73(1):81-138.
(Dasgupta et al., 1994) Dasgupta, P.; Chakrabarti, P.; and DeSarkar, S. 1994. Agent searching in a
tree and the optimality of iterative deepening. Artificial Intelligence 71 : 195-208.
(Davies et al., 1998) Davies, S.; Ng, A.; and Moore, A. 1998. Applying online search techniques
to reinforcement learning. In Proceedings of the National Conference on Artificial Intelligence .
753-760.
(Kearns and Singh, 1998) Kearns, M. and Singh, S. 1998. Near-optimal reinforcement learning in
polynomial time. In Proceedings of the International Conference on Machine Learning. 260-268.
(Koenig and Simmons, 1995) Koenig, S. and Simmons, R. G. 1995. Real-time search in nondeterministic domains. In Proceedings of the International Joint Conference on Artificial Intelligence. 1660-1667.
(Koenig and Smirnov, 1996) Koenig, S. and Smirnov, Y. 1996. Graph learning with a nearest neighbor approach. In Proceedings of the Conference on Computational Learning Theory. 19-28.
(Koenig et al., 1997) Koenig, S.; Blum, A.; Ishida, T.; and Korf, R., editors 1997. Proceedings of
the AAAI-97 Workshop on On-Line Search. AAAI Press.
(Koenig, 1996) Koenig, S. 1996. Agent-centered search: Situated search with small look-ahead. In
Proceedings of the National Conference on Artificial Intelligence . 1365.
(Korf,1990) Korf, R. 1990. Real-time heuristic search. Artificial Intelligence 42(2-3):189-211.
(Moore and Atkeson, 1993) Moore, A. and Atkeson, C. 1993. Prioritized sweeping: Reinforcement
learning with less data and less time. Machine Learning 13:103-130.
(Pemberton and Korf, 1992) Pemberton, J. and Korf, R. 1992. Incremental path planning on graphs
with cycles. In Proceedings of the International Conference on Artificial Intelligence Planning
Systems. 179-188.
(Russell and Zilberstein, 1991) Russell, S. and Zilberstein, S. 1991. Composing real-time systems. In Proceedings of the International Joint Conference on Artificial Intelligence. 212-217.
(Thrun et al., 1998) Thrun, S.; Bücken, A.; Burgard, W.; Fox, D.; Fröhlinghaus, T.; Hennig, D.;
Hofmann, T.; Krell, M.; and Schmidt, T. 1998. Map learning and high-speed navigation in rhino.
In Kortenkamp, D.; Bonasso, R.; and Murphy, R., editors 1998, Artificial Intelligence Based
Mobile Robotics: Case Studies of Successful Robot Systems. MIT Press. 21-52.
665 | 1,609 | Support Vector Machines Applied to Face
Recognition
P. Jonathon Phillips
National Institute of Standards and Technology
Bldg 225/ Rm A216
Gaithersburg. MD 20899
Tel 301.975.5348; Fax 301.975.5287
jonathon@nist.gov
Abstract
Face recognition is a K class problem, where K is the number of known individuals, and support vector machines (SVMs) are a binary classification method. By reformulating the face recognition problem and reinterpreting the output of the SVM classifier, we developed a SVM-based face recognition algorithm. The face recognition problem is formulated as a problem in difference space, which models dissimilarities between two facial images. In difference space we formulate face recognition as a two class problem. The classes are: dissimilarities between faces of the same person, and dissimilarities between faces of different people. By modifying the interpretation of the decision surface generated by SVM, we generated a similarity metric between faces that is learned from examples of differences between faces. The SVM-based algorithm is compared with a principal component analysis (PCA) based algorithm on a difficult set of images from the FERET database. Performance was measured for both verification and identification scenarios. The identification performance for SVM is 77-78% versus 54% for PCA. For verification, the equal error rate is 7% for SVM and 13% for PCA.
1 Introduction
Face recognition has developed into a major research area in pattern recognition and computer vision. Face recognition is different from classical pattern-recognition problems such
as character recognition. In classical pattern recognition. there are relatively few classes,
and many samples per class. With many samples per class. algorithms can classify samples
not previously seen by interpolating among the training samples. On the other hand, in
face recognition, there are many individuals (classes), and only a few images (samples) per
person, and algorithms must recognize faces by extrapolating from the training samples.
In numerous applications there can be only one training sample (image) of each person.
Support vector machines (SVMs) are formulated to solve a classical two class pattern
recognition problem. We adapt SVM to face recognition by modifying the interpretation
of the output of a SVM classifier and devising a representation of facial images that is
concordant with a two class problem. Traditional SVM returns a binary value, the class of
the object. To train our SVM algorithm, we formulate the problem in a difference space,
which explicitly captures the dissimilarities between two facial images. This is a departure
from traditional face space or view-based approaches, which encodes each facial image as
a separate view of a face.
In difference space, we are interested in the following two classes: the dissimilarities be-
tween images of the same individual, and dissimilarities between images of different people. These two classes are the input to a SVM algorithm. A SVM algorithm generates a
decision surface separating the two classes. For face recognition, we re-interpret the decision surface to produce a similarity metric between two facial images. This allows us to
construct face-recognition algorithms. The work of Moghaddam et al. [3] uses a Bayesian
method in a difference space, but they do not derive a similarity distance from both positive
and negative samples.
We demonstrate our SVM-based algorithm on both verification and identification applications. In identification, the algorithm is presented with an image of an unknown person.
The algorithm reports its best estimate of the identity of an unknown person from a database
of known individuals. In a more general response, the algorithm will report a list of the most
similar individuals in the database. In verification (also referred to as authentication), the
algorithm is presented with an image and a claimed identity of the person. The algorithm
either accepts or rejects the claim. Or, the algorithm can return a confidence measure of the
validity of the claim.
To provide a benchmark for comparison, we compared our algorithm with a principal component analysis (PCA) based algorithm. We report results on images from the FERET
database of images, which is the de facto standard in the face recognition community. From
our experience with the FERET database, we selected harder sets of images on which to
test the algorithms. Thus, we avoided saturating the performance of either algorithm and provided a robust comparison between the algorithms. To test the ability of our algorithm to
2 Background
In this section we will give a brief overview of SVM to present the notation used in this
paper. For details of SVM see Vapnik [7], or for a tutorial see Burges [1]. SVM is a binary
classification method that finds the optimal linear decision surface based on the concept of
structural risk minimization. The decision surface is a weighted combination of elements
of the training set. These elements are called support vectors and characterize the boundary
between the two classes. The input to a SVM algorithm is a set {(xi, yi)} of labeled training
data, where xi is the data and yi = −1 or 1 is the label. The output of a SVM algorithm is
a set of Ns support vectors si, coefficient weights αi, class labels yi of the support vectors,
and a constant term b. The linear decision surface is
    w · z + b = 0,    where    w = Σ_{i=1}^{Ns} αi yi si.
SVM can be extended to nonlinear decision surfaces by using a kernel K(·,·) that satisfies
Mercer's condition [1, 7]. The nonlinear decision surface is

    Σ_{i=1}^{Ns} αi yi K(si, z) + b = 0.
A facial image is represented as a vector p ∈ R^N, where R^N is referred to as face space.
Face space can be the original pixel values vectorized or another feature space; for example,
projecting the facial image on the eigenvectors generated by performing PCA on a training
set of faces [6] (also referred to as eigenfaces).
We write p1 ~ p2 if p1 and p2 are images of the same face, and p1 ≁ p2 if they are
images of different faces. To avoid confusion we adopted the following terminology for
identification and verification. The gallery is the set of images of known people and a
probe is an unknown face that is presented to the system. In identification, the face in
a probe is identified. In verification, a probe is the facial image presented to the system
whose identity is to be verified. The set of unknown faces is called the probe set.
3 Verification as a two class problem
Verification is fundamentally a two class problem. A verification algorithm is presented
with an image P and a claimed identity. Either the algorithm accepts or rejects the claim.
A straightforward method for constructing a classifier for person X is to feed a SVM algorithm a training set with one class consisting of facial images of person X and the other
class consisting of facial images of other people. A SVM algorithm will generate a linear
decision surface, and the identity of the face in image p is accepted if
    w · p + b ≤ 0,

otherwise the claim is rejected.
This classifier is designed to minimize the structural risk. Structural risk is an overall
measure of classifier performance. However, verification performance is usually measured
by two statistics, the probability of correct verification, Pv, and the probability of false
acceptance, PF. There is a tradeoff between Pv and PF. At one extreme all claims are
rejected and Pv = PF = 0; and at the other extreme, all claims are accepted and Pv =
PF = 1. The operating values for Pv and PF are dictated by the application.

Unfortunately, the decision surface generated by a SVM algorithm produces a single performance point for Pv and PF. To allow for adjusting Pv and PF, we parameterize the SVM
decision surface by Δ. The parametrized decision surface is

    w · z + b = Δ,

and the identity of the face in image p is accepted if

    w · p + b ≤ Δ.

If Δ = −∞, then all claims are rejected and Pv = PF = 0; if Δ = +∞, all claims
are accepted and Pv = PF = 1. By varying Δ between negative and positive infinity, all
possible combinations of Pv and PF are found.

Nonlinear parametrized decision surfaces are described by

    Σ_{i=1}^{Ns} αi yi K(si, z) + b = Δ.
4 Representation
In a canonical face recognition algorithm, each individual is a class and the distribution of
each face is estimated or approximated. In this method, for a gallery of K individuals, the
identification problem is a K class problem, and the verification problem is K instances
of a two class problem. To reduce face recognition to a single instance of a two class
problem, we introduce a new representation. We model the dissimilarities between faces.
Let T = {t1, ..., tM} be a training set of faces of K individuals, with multiple images of
each of the K individuals. From T, we generate two classes. The first is the within-class
differences set, which are the dissimilarities in facial images of the same person. Formally,
the within-class difference set is

    C1 = {ti − tj | ti ~ tj}.

The set C1 contains within-class differences for all K individuals in T, not dissimilarities
for one of the K individuals in the training set. The second is the between-class differences
set, which are the dissimilarities among images of different individuals in the training set.
Formally,

    C2 = {ti − tj | ti ≁ tj}.
Classes C1 and C2 are the inputs to our SVM algorithm, which generates a decision surface. In the pure SVM paradigm, given the difference between facial images p1 and
p2, the classifier estimates if the faces in the two images are from the same person. In
the modification described in section 3, the classification returns a measure of similarity
δ = w · (p1 − p2) + b. This similarity measure is the basis for the SVM-based verification
and identification algorithms presented in this paper.
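As a concrete illustration of the difference-space construction, the following sketch builds the two classes from a labeled training set. This is not the authors' code; the array shapes and the NumPy-based helper are our assumptions.

```python
import numpy as np

def difference_sets(faces, labels):
    """Build the difference-space classes from face vectors.

    faces:  (M, N) array, one face-space vector t_i per row.
    labels: length-M sequence of person identities.
    Returns C1 (within-class differences) and C2 (between-class differences).
    """
    within, between = [], []
    for i in range(len(faces)):
        for j in range(len(faces)):
            if i == j:
                continue
            d = faces[i] - faces[j]
            (within if labels[i] == labels[j] else between).append(d)
    return np.array(within), np.array(between)

# The SVM trainer then receives C1 with one label and C2 with the other;
# in the experiments of section 7, C2 is additionally subsampled to 50 elements.
```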
5 Verification
In verification, there is a gallery {gj} of m known individuals. The algorithm is presented
with a probe p and a claim to be person j in the gallery. The first step of the verification
algorithm computes the similarity score

    δ = Σ_{i=1}^{Ns} αi yi K(si, gj − p) + b.

The second step accepts the claim if δ ≤ Δ. Otherwise, the claim is rejected. The value of
Δ is set to meet the desired tradeoff between Pv and PF.
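To make the two-step rule concrete, here is a minimal sketch of the verification decision. The kernel choice and all names are assumptions; sv, alpha, y and b stand for the trained SVM's support vectors, coefficient weights, labels and constant term.

```python
import numpy as np

def rbf_kernel(a, b, gamma=1e-3):
    # radial basis kernel, as used in the experiments of section 7
    return np.exp(-gamma * np.sum((a - b) ** 2))

def verify(probe, gallery_j, sv, alpha, y, b, delta_max, kernel=rbf_kernel):
    """Accept the claim 'probe is person j' iff the similarity score <= Delta."""
    score = sum(a_i * y_i * kernel(s_i, gallery_j - probe)
                for a_i, y_i, s_i in zip(alpha, y, sv)) + b
    return score <= delta_max
```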
6 Identification
In identification, there is a gallery {gj} of m known individuals. The algorithm is presented
with a probe p to be identified. The first step of the identification algorithm computes
a similarity score between the probe and each of the gallery images. The similarity score
between p and gj is

    δj = Σ_{i=1}^{Ns} αi yi K(si, gj − p) + b.

In the second step, the probe is identified as the person j that has minimum similarity score
δj. An alternative method of reporting identification results is to order the gallery by the
similarity measure δj.
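The identification step can be rendered the same way; this sketch (same hypothetical names as above) ranks the gallery by the similarity score and returns the best match.

```python
import numpy as np

def identify(probe, gallery, sv, alpha, y, b, kernel):
    """Return the gallery index with minimum similarity score delta_j,
    together with all scores (which can also be used to rank the gallery)."""
    scores = np.array([
        sum(a_i * y_i * kernel(s_i, g - probe)
            for a_i, y_i, s_i in zip(alpha, y, sv)) + b
        for g in gallery])
    return int(scores.argmin()), scores
```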
Figure 1: (a) Original image from the FERET database. (b) Image after preprocessing.
7 Experiments
We demonstrate our SVM-based verification and identification algorithms on 400 frontal
images from the FERET database of facial images [5]. To provide a benchmark for algorithm performance, we provide performance for a PCA-based algorithm on the same set of
images. The PCA algorithm identifies faces with an L2 nearest neighbor classifier. For the
SVM-based algorithms, a radial basis kernel was used.
The 400 images consisted of two images of 200 individuals, and were divided into disjoint
training and testing sets. Each set consisted of two images of 100 people. All 400 images
were preprocessed to normalize geometry and illumination, and to remove background and
hair (figure 1). The preprocessing procedure consisted of manually locating the centers
of the eyes; translating, rotating, and scaling the faces to place the center of the eyes on
specific pixels; masking the faces to remove background and hair; histogram equalizing
the non-masked facial pixels; and scaling the non-masked facial pixels to have zero mean
and unit variance.
PCA was performed on 100 preprocessed images (one image of each person in the training
set). This produced 99 eigenvectors {ei} and eigenvalues {λi}. The eigenvectors were
ordered so that λi ≥ λj when i < j. Thus, the low order eigenvectors encode the majority
of the variance in the training set. The faces were represented by projecting them on a
subset of the eigenvectors and this is the face space. We varied the dimension of face space
by changing the number of eigenvectors in the representation.
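A minimal sketch of this PCA step follows (our rendering, not the authors' code; the SVD route is one standard way to obtain the eigenvectors of the sample covariance).

```python
import numpy as np

def pca_projector(train_faces, n_eigenfeatures=30):
    """Fit eigenfaces on the training images (one vectorized face per row)
    and return a function projecting a face onto the first n eigenvectors."""
    mean = train_faces.mean(axis=0)
    centered = train_faces - mean
    # Rows of vt are the covariance eigenvectors, ordered by decreasing eigenvalue.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_eigenfeatures]
    return lambda face: basis @ (face - mean)   # the eigenfeature vector

# The PCA baseline identifies a probe as the gallery face that is nearest
# in this space under the L2 norm.
```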
In all experiments, the SVM training set consisted of the same images. The SVM training
set T consisted of two images of 50 individuals from the general training set of 100 individuals. The set C1 consisted of all 50 within-class differences from faces of the same
individuals. The set C2 consisted of 50 randomly selected between-class differences.
The verification and identification algorithms were tested on a gallery consisting of 100
images from the test set, with one image per person. The probe set consisted of the remaining
images in the test set (100 individuals, with one image per person).
We report results for verification on a face space that consisted of the first 30 eigenfeatures
(an eigenfeature is the projection of the image onto an eigenvector). The results are reported as a receiver operator curve (ROC) in figure 2. The ROC in figure 2 was computed
[Figure: ROC curves for the SVM algorithm and the PCA algorithm; x-axis: probability of false acceptance (0 to 0.6), y-axis: probability of correct verification.]
Figure 2: ROC for verification (using first 30 eigenfeatures).
by averaging the ROC for each of the 100 individuals in the gallery. For person gj, the
probe set consisted of one image of person gj and 99 faces of different people. A summary
statistic for verification is the equal error rate. The equal error rate is the point where the
probability of false acceptance is equal to the probability of false verification, or mathematically, PF = 1 − Pv. For the SVM-based algorithm the equal error rate is 0.07, and
for the PCA-based algorithm it is 0.13.
For identification, the algorithm estimated the identity of each of the probes in the probe
set. We compute the probability of correctly identifying the probes for a set of face spaces
parametrized by the number of eigenfeatures. We always use the first n eigenfeatures, thus
we are slowly increasing the amount of information, as measured by variance, available to
the classifier. Figure 3 shows probability of identification as a function of representing faces
by the first n eigenfeatures. PCA achieves a correct identification rate of 54% and SVM
achieves an identification rate of 77-78%. (The PCA results we report are significantly
lower than those reported in the literature [2, 3]. This is because we selected a set of images
that are more difficult to recognize. The results are consistent with experimentation in our
group with PCA-based algorithms on the FERET database [4]. We selected this set of
images so that performance of neither the PCA nor the SVM algorithm is saturated.)
8 Conclusion
We introduced a new technique for applying SVM to face recognition. We demonstrated
the algorithm on both verification and identification applications. We compared the performance of our algorithm to a PCA-based algorithm. For verification, the equal error rate
of our algorithm was almost half that of the PCA algorithm, 7% versus 13%. For identification, the error of SVM was half that of PCA, 22-23% versus 46%. This indicates that
SVM is making more efficient use of the information in face space than the baseline PCA
algorithm.
One of the major concerns in practical face recognition applications is the ability of the
[Figure: probability of identification versus the number of eigenfeatures (0 to 100) for the SVM score and the PCA score.]
Figure 3: Probability of identification as a function of the number of eigenfeatures.
algorithm to generalize from a training set of faces to faces outside of the training set. We
demonstrated the ability of the SVM-based algorithm to generalize by training and testing
on separate sets.
Future research directions include varying the kernel K, changing the representation space,
and expanding the size of the gallery and probe set. There is nothing in our method that is
specific to faces, and it should generalize to other biometrics such as fingerprints.
References
[1] C. J. C. Burges. A tutorial on support vector machines for pattern recognition. Data
Mining and Knowledge Discovery, (submitted), 1998.
[2] B. Moghaddam and A. Pentland. Face recognition using view-based and modular
eigenspaces. In Proc. SPIE Conference on Automatic Systems for the Identification
and Inspection of Humans, volume SPIE Vol. 2277, pages 12-21, 1994.
[3] B. Moghaddam, W. Wahid, and A. Pentland. Beyond eigenfaces: probabilistic matching
for face recognition. In 3rd International Conference on Automatic Face and Gesture
Recognition, pages 30-35, 1998.
[4] H. Moon and P. J. Phillips. Analysis of PCA-based face recognition algorithms. In
K. W. Bowyer and P. J. Phillips, editors, Empirical Evaluation Techniques in Computer
Vision. IEEE Computer Society Press, Los Alamitos, CA, 1998.
[5] P. J. Phillips, H. Wechsler, J. Huang, and P. Rauss. The FERET database and evaluation procedure for face-recognition algorithms. Image and Vision Computing Journal,
16(5):295-306, 1998.
[6] M. Turk and A. Pentland. Eigenfaces for recognition. J. Cognitive Neuroscience,
3(1):71-86,1991.
[7] V. Vapnik. The nature of statistical learning theory. Springer. New York, 1995.
666 | 161 | 264
NEURAL APPROACH FOR TV IMAGE COMPRESSION
USING A HOPFIELD TYPE NETWORK
Martine NAILLON
Jean-Bernard THEETEN
Laboratoire d'Electronique et de Physique Appliquee *
3 Avenue DESCARTES, BP 15
94451 LIMEIL BREVANNES Cedex FRANCE.
ABSTRACT
A self-organizing Hopfield network has been
developed in the context of Vector Quantization, aiming at compression of television
images. The metastable states of the spin
glass-like network are used as an extra
storage resource using the Minimal Overlap
learning rule (Krauth and Mezard 1987) to
optimize the organization of the attractors.
The self-organizing scheme that we have
devised results in the generation of an
adaptive codebook for any given TV image.
INTRODUCTION
The ability of an Hopfield network (Little, 1974;
Hopfield, 1982, 1986; Amit et al., 1987; Personnaz et
al., 1985; Hertz, 1988) to behave as an associative memory
usually assumes a priori knowledge of the patterns to be
stored. As in many applications they are unknown, the aim
of this work is to develop a network capable to learn how
to select its attractors. TV image compression using
Vector Quantization (V.Q.) (Gray, 1984), a key issue for
HDTV transmission, is a typical case, since the non
neural algorithms which generate the list of codes (the
codebook) are suboptimal. As an alternative to the
promising neural compression techniques (Jackel et al.,
1987; Kohonen, 1988; Grossberg, 1987; Cottrell et al.,
1987), our idea is to use the metastability in a spin
glass-like net as an additional storage resource and to
derive, after a "classical" clustering algorithm, a
self-organizing scheme for generating adaptively the
codebook. We present the illustrative case of 2D-vectors.
* LEP : A member of the Philips Research Organization.
NON NEURAL APPROACH
In V.Q., the image is divided into blocks, named vectors,
of N pixels (typically 4 x 4 pixels). Given the codebook,
each vector is coded by associating it with the nearest
element of the list (Nearest Neighbour Classifier)
(figure 1).
[Figure: block diagram of the encoder — the input vector is compared with the codebook, the index of the nearest code vector is transmitted, and the decoder uses the same codebook to output the reconstructed vector.]
Figure 1 : Basic scheme of a vector quantizer.
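For reference, the nearest-neighbour coding step of figure 1 can be written in a few lines. This is a generic sketch; the 4 x 4 blocks are assumed to be already flattened into vectors.

```python
import numpy as np

def vq_encode(vectors, codebook):
    """Map each input vector to the index of the nearest code vector
    (squared Euclidean distortion).  vectors: (M, N); codebook: (K, N)."""
    dist = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return dist.argmin(axis=1)

def vq_decode(indices, codebook):
    return codebook[indices]        # reconstructed vectors
```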
For designing an optimal codebook, a clustering algorithm
is applied to a training set of vectors (figure 2), the
criterion of optimality being a distortion measure
between the training set and the codebook. The algorithm
is actually suboptimal, especially for a non connex
training set, as it is based on an iterative computation
of centers of gravity which tends to overcode the dense
regions of points whereas the light ones are undercoded
(figure 2).
[Figure: scatter plot of the two-pixel training vectors (pixel 1 on the horizontal axis, grey levels roughly 110 to 230) together with the codebook computed by the non neural algorithm.]
Figure 2: Training set of two-pixel vectors and the
associated codebook computed by a non neural clustering
algorithm: overcoding of the dense regions (pixel 1 ≈ 148)
and subcoding of the light ones.
NEURAL APPROACH
In a Hopfield neural network, the code vectors are the
attractors of the net and the neural dynamics (resolution
phase) is substituted for the nearest neighbour
classification.
When patterns — referred to as "prototypes" and named
here "explicit memory" — are prescribed in a spin
glass-like net, other attractors — referred to as
"metastable states" — are induced in the net (Sherrington
and Kirkpatrick, 1975; Toulouse, 1977; Hopfield, 1982;
Mezard et al., 1984). We consider those induced
attractors as additional memory, named here "implicit
memory", which can be used by the network to code the
previously mentioned light regions of points. This
provides a higher flexibility to the net during the
self-organization process, as it can choose, in a large
basis of explicit and implicit attractors, the ones which
will optimize the coding task.
NEURAL NOTATION
A vector of 2 pixels with 8 bits per pel is a vector of
2 dimensions in an Euclidean space where each dimension
corresponds to 256 grey levels. To preserve the Euclidean
distance, we use the well-known thermometric notation:
256 neurons for 256 levels per dimension, the number of
neurons set to one, with a regular ordering, giving the
pixel luminance, e.g. 2 = (1 1 -1 -1 ... -1). For vectors of
dimension 2, 512 neurons will be used, e.g. v = (2,3) =
(1 1 -1 ... -1, 1 1 1 -1 ... -1).
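A small sketch of this thermometric code (our rendering; the function names are assumptions):

```python
import numpy as np

def thermometer(level, n_levels=256):
    """Code one pixel: the first `level` neurons are set to +1, the rest to -1,
    with a regular ordering, e.g. 2 -> (1, 1, -1, ..., -1)."""
    v = -np.ones(n_levels)
    v[:level] = 1.0
    return v

def encode_vector(pixels, n_levels=256):
    """A 2-pixel vector becomes a 512-neuron state; the regular ordering keeps
    the distance between codes monotone in the grey-level difference."""
    return np.concatenate([thermometer(p, n_levels) for p in pixels])
```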
INDUCTION PROCESS
The induced implicit memory depends on the prescription
rule. We have compared the Projection rule (Personnaz and
al., 1985) and the Minimal Overlap rule (Krauth and
Mezard, 1987).
The metastable states are detected by relaxing any point
of the training set of figure 2 to its
corresponding prescribed or induced attractor, marked in
figure 3 with a small diamond.
For the two rules, the induction process is rather
deterministic, generating an orthogonal mesh: if two
prototypes (P11, P12) and (P21, P22) are prescribed, a
metastable state is induced at the cross-points, namely
(P11, P22) and (P21, P12) (figure 3).
[Figure: two panels (pixel 1 vs. pixel 2), one per prescription rule, showing the prescribed and induced states.]
Figure 3: Comparison of the induction process for 2
prescription rules. The prescribed states are the full
squares, the induced states the open diamonds.
What differs between the two rules is the number of
induced attractors. For 50 prototypes and a training set
of 2000 2d-vectors, the projection rule induces about
1000 metastable states (ratio 1000/50 = 20) whereas Min
Over induces only 234 (ratio 4.6). This is due to the
different stability of the prescribed and the induced
states in the case of Min Over (Naillon and Theeten, to
be published).
GENERALIZED ATTRACTORS
Some attractors are induced out of the image space
(figure 4), as the 512-neuron space has 2^512
configurations to be compared with the (2^8)^2 = 2^16 image
configurations.
We extend the image space by defining a "generalized
attractor" as the class of patterns having the same
number of neurons set to one for each pixel, whatever
their ordering. Such a notation corresponds to a random
thermometric neural representation. The simulation has
shown that the generalized attractors correspond to
acceptable states (figure 4), i.e. they are located at the
place where one would like to obtain a normal attractor.
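In this random thermometric view, the class of a state is determined only by counts, as in the following sketch (an assumption of ours about how the class would be computed):

```python
def generalized_attractor(state, n_levels=256):
    """Generalized-attractor class of a 512-neuron state: the number of neurons
    set to +1 in each pixel's block, irrespective of their ordering."""
    n_pixels = len(state) // n_levels
    return tuple(int(sum(1 for s in state[k * n_levels:(k + 1) * n_levels] if s > 0))
                 for k in range(n_pixels))
```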
[Figure: two panels labelled "no generalization" and "with generalization" (pixel 1 vs. pixel 2); arrows mark the basins of attraction of the training vectors.]
Figure 4: The induced basins of attraction are
represented with arrows. In the left plot, some training
vectors have no attractor in the image space. After
generalization (random thermometric notation), the right
plot shows their corresponding attractors.
ADAPTIVE NEURAL CODEBOOK LEARNING
An iterative self-organizing process has been developed
to optimize the codebook. For a given TV image, the
codebook is defined, at each step of the process, as the
set of prescribed and induced attractors selected by the
training set of vectors. The self-organizing scheme is
controlled by a cost function, the distortion measure
between the training set and the codebook. Having a
target of 50 code vectors, we have to prescribe at each
step, as discussed above, typically 50/4.6 = 11
prototypes. As seen in figure 5a, we choose 11 initial
prototypes uniformly distributed along the bisecting
line. Using the training set of vectors of figure 2,
the induced metastable states are detected with their
corresponding basins of attraction. The 11 most
frequent, prescribed or induced, attractors are selected
and the 11 centers of gravity of their basins of
attraction are taken as new prototypes (figure 5b).
After 3 iterations, the distortion measure stabilizes
(Table 1).
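One possible rendering of this loop is sketched below; the helpers `prescribe` (Min Over prescription) and `relax` (network relaxation to an attractor) are placeholders for the spin-glass machinery and are not the authors' code.

```python
def adapt_codebook(training_vectors, prescribe, relax,
                   n_prototypes=11, n_iterations=3):
    # initial prototypes: uniformly distributed along the bisecting line
    step = 256 // (n_prototypes + 1)
    prototypes = [(g, g) for g in range(step, 256, step)][:n_prototypes]
    for _ in range(n_iterations):
        net = prescribe(prototypes)               # Min Over learning rule
        basins = {}                               # attractor -> training vectors
        for v in training_vectors:
            basins.setdefault(relax(net, v), []).append(v)
        # keep the most frequent prescribed or induced attractors ...
        kept = sorted(basins, key=lambda a: len(basins[a]), reverse=True)
        kept = kept[:n_prototypes]
        # ... and replace each by the center of gravity of its basin
        prototypes = [tuple(sum(c) / len(basins[a]) for c in zip(*basins[a]))
                      for a in kept]
    return prototypes                             # codebook after convergence
```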
[Figure: training vectors with the 11 initial prototypes placed along the bisecting line.]
Figure 5a: Initialization of the self-organizing scheme.
[Figure: prototypes and induced attractors after the first iteration (fast organization).]
Figure 5b: First iteration of the self-organizing scheme.
iteration                  1      2      3      4      5
global distortion          1571   1031   97     97     98
codebook size              53     57     79     84     68
generalized attractors     0      4      20     20     15

Table 1: Evolution of the distortion measure versus the
iterations of the self-organizing scheme. It stabilizes
in 3 iterations.
Forty lines of a TV image (the port of Baltimore) of 8
bits per pel have been coded with an adaptive neural
codebook of 50 2D-vectors. The coherence of the coding is
visible from the apparent continuity of the image
(figure 6).
The coded image has 2.5 bits per pel.
Figure 6: Neural coded image with 2.5 bits per pel.
CONCLUSION

Using a "classical" clustering algorithm, a
self-organizing scheme has been developed in a Hopfield
network for the adaptive design of a codebook of small
dimension vectors in a Vector Quantization technique. It
has been shown that, using the Minimal Overlap
prescription rule, the metastable states induced in a
spin glass-like network can be used as extra codes. The
optimal organization of the prescribed and induced
attractors has been defined as the limit organization
obtained from the iterative learning process. It is an
example of "learning by selection" as already proposed by
physicists and biologists (Toulouse et al., 1986).
Hardware implementation on the neural VLSI circuit
currently designed at LEP should allow for on-line
codebook computations.

We would like to thank J.J. Hopfield who has inspired
this study, as well as H. Bosma and W. Kreuwels from Philips
Research Laboratories, Eindhoven, who made it possible to
initiate this research.
REFERENCES
1  - J.J. Hopfield, Proc. Nat. Acad. Sci. USA, 79, 2554-2558
     (1982); J.J. Hopfield and D.W. Tank, Science 233, 625
     (1986); W.A. Little, Math. Biosci., 19, 101-120 (1974).
2  - D.J. Amit, H. Gutfreund, and H. Sompolinsky, Phys. Rev. A 32;
     Ann. Phys. 173, 30 (1987).
3  - L. Personnaz, I. Guyon and G. Dreyfus, J. Phys. Lett. 46,
     L359 (1985).
4  - J.A. Hertz, 2nd International Conference on "Vector and
     parallel computing", Tromso, Norway, June (1988).
5  - M.A. Virasoro, Disordered Systems and Biological Organization,
     ed. E. Bienenstock, Springer, Berlin (1985); H. Gutfreund
     (Racah Institute of Physics, Jerusalem) (1986); C. Cortes,
     A. Krogh and J.A. Hertz, J. of Phys. A (1986).
6  - R.M. Gray, IEEE ASSP Magazine 5 (Apr. 1984).
7  - L.D. Jackel, R.E. Howard, J.S. Denker, W. Hubbard and
     S.A. Solla, Applied Optics, Vol. 26, 12 (1987).
8  - T. Kohonen, Helsinki University of Technology, Finland,
     Tech. Rep. No. TKK-F-A601; T. Kohonen, Neural Networks, 1,
     number 1 (1988).
9  - S. Grossberg, Cognitive Sci., 11, 23-63 (1987).
10 - G.W. Cottrell, P. Munro and D. Zipser, Institute of
     Cognitive Science, Report 8702 (1987).
11 - D. Sherrington and S. Kirkpatrick, Phys. Rev. Lett. 35,
     1792 (1975); G. Toulouse, Commun. Phys. 2, 115-119 (1977);
     M. Mezard, G. Parisi, N. Sourlas, G. Toulouse and
     M. Virasoro, Phys. Rev. Lett., 52, 1156-1159 (1984).
12 - W. Krauth and M. Mezard, J. Phys. A: Math. Gen. 20,
     L745-L752 (1987).
13 - M. Naillon and J.B. Theeten, to be published.
14 - G. Toulouse, S. Dehaene and J.P. Changeux, Proc. Natl. Acad.
     Sci. USA, 83, 1695 (1986).
667 | 1,610 | Linear Hinge Loss and Average Margin
Claudio Gentile
DSI, Universita' di Milano,
Via Comelico 39,
20135 Milano. Italy
gentile@dsi.unimi.it
Manfred K. Warmuth*
Computer Science Department,
University of California,
95064 Santa Cruz, USA
manfred@cse.ucsc.edu
Abstract
We describe a unifying method for proving relative loss bounds for online linear threshold classification algorithms, such as the Perceptron and
the Winnow algorithms. For classification problems the discrete loss is
used, i.e., the total number of prediction mistakes. We introduce a continuous loss function, called the "linear hinge loss", that can be employed
to derive the updates of the algorithms. We first prove bounds w.r.t. the
linear hinge loss and then convert them to the discrete loss. We introduce a notion of "average margin" of a set of examples . We show how
relative loss bounds based on the linear hinge loss can be converted to
relative loss bounds i.t.o. the discrete loss using the average margin.
1
Introduction
Consider the classical Perceptron algorithm. The hypothesis of this algorithm at trial t
is a linear threshold function determined by a weight vector wt ∈ R^n. For an instance
xt ∈ R^n the linear activation at = wt · xt is passed through a threshold function σr,
which is −1 on arguments less than the threshold r and +1 otherwise. Thus the prediction
of the algorithm is binary and −1, +1 denote the two classes. The Perceptron algorithm
is aimed at learning a classification problem where the examples have the form (xt, yt) ∈
R^n × {−1, +1}.

After seeing T examples (xt, yt), 1 ≤ t ≤ T, the algorithm predicts with ŷT+1 = σr(wT+1 ·
xT+1) on the next instance xT+1. If the algorithm's prediction ŷT+1 agrees with the label
yT+1 on the instance xT+1, then its loss is zero. If the prediction and the label disagree,
then the loss is one. We call this loss the discrete loss.
The convergence of the Perceptron algorithm is established in the Perceptron convergence
theorem. There is a second, by now classical, algorithm for learning with linear threshold
functions: the Winnow algorithm of Nick Littlestone [Lit88]. This algorithm also maintains
a weight vector and predicts with the same linear threshold function defined by the current
weight vector wt. However, the update of the weight vector wt = (wt,1, ..., wt,n)

* Supported by NSF grant CCR-9700201.
performed by the two algorithms is radically different:

    Perceptron:  wt+1 := wt − η δt xt
    Winnow:      ln wt+1,i := ln wt,i − η δt xt,i

The Perceptron algorithm performs a simple additive update. The parameter η is a positive
learning rate and δt equals (ŷt − yt)/2, which lies in {−1, 0, +1}. When δt = 0 the prediction of the algorithm is correct and no update occurs. Both the Perceptron algorithm and
Winnow update conservatively, i.e., they update only when the prediction of the algorithm
is wrong. If ŷt = +1 and yt = −1 then the algorithm overshot and δt = +1. This causes
the Perceptron to subtract η xt from the current weight wt. Similarly, if ŷt = −1 and
yt = +1 then the algorithm undershot and δt = −1. Now the Perceptron adds η xt to the
current weight wt. We will later interpret δt xt as a gradient of a loss function. Winnow
uses the same gradient but the update is done through the componentwise logarithm of the
weight vector. One can also rewrite Winnow's update as

    wt+1,i := wt,i exp(−η δt xt,i),   i = 1, ..., n,

so that the gradient appears in the exponents of factors that multiply the old weights. The
factors are now used to correct the weights in the right direction when the algorithm under
or overshot.
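In code, the two updates differ in a single line; the sketch below (ours, with hedged defaults) makes the shared gradient δt xt explicit:

```python
import numpy as np

def delta(w, x, y, r=0.0):
    """delta_t = (yhat_t - y_t)/2 in {-1, 0, +1}; sigma_r is -1 below r, +1 otherwise."""
    y_hat = -1.0 if w @ x < r else 1.0
    return (y_hat - y) / 2.0

def perceptron_step(w, x, y, eta=1.0, r=0.0):
    return w - eta * delta(w, x, y, r) * x              # additive update

def winnow_step(w, x, y, eta=1.0, r=0.0):
    return w * np.exp(-eta * delta(w, x, y, r) * x)     # update on ln w
```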
The algorithms are good for different purposes and, generally speaking, incomparable (see
[KWA97] for a discussion). In [KW97] a framework was introduced for deriving simple
on-line learning updates. This framework has been applied to a variety of different learning
algorithms and differentiable loss functions [HKW95, KW98]. The updates are always
derived by approximately solving the following minimization problem

    wt+1 := argmin_w U(w),   where   U(w) = d(w, wt) + η loss(yt, σr(w · xt)).     (1)

Here loss denotes the chosen loss function. In our setting this would be the discrete loss.
What is different now is that the prediction of the algorithm ŷt = σr(wt · xt) and the
discrete loss are discontinuous in the weight vector wt. We will return to this point later
after discussing the other parts of the above minimization problem. The parameter η is the
learning rate mentioned above and, most importantly, d(w, wt) is a divergence measuring
how far w is from wt. The divergence function has two purposes. It motivates the update
and it becomes the potential function in the amortized analysis used to prove loss bounds
for the corresponding algorithm.
The use of an amortized analysis in the context of learning essentially goes back to [Lit89]
and the method for deriving updates based on the divergence was introduced in [KW97].
The divergence may be seen as a regularization term and may also serve as a barrier function in the optimization problem (1) for the purpose of keeping the weights in a particular
region. The additive algorithms, such as gradient descent and the Perceptron algorithm, use
d(w, wt) = ‖w − wt‖²/2 as the divergence. This can be used as a potential function for
the proof of the Perceptron convergence theorem. Multiplicative update algorithms such as
Winnow and various exponentiated gradient algorithms use entropy-based divergences as
potential functions [HKW95, KW98]. The function U in (1) is minimized by differentiating w.r.t. w. This works very well when the loss function is convex and differentiable. For
example for linear regression, when the loss function is the square loss (wt · xt − yt)²/2,
then minimizing U(w) with the divergence ‖w − wt‖²/2 gives the Widrow-Hoff update:

    wt+1 := wt − η(wt+1 · xt − yt) xt ≈ wt − η(wt · xt − yt) xt.

Various exponentiated gradient algorithms [KW97] can be derived in the same way when
entropic divergences are used instead. However, in our case we cannot differentiate the
discrete loss since it is discontinuous.
We asked ourselves which loss function motivates the Perceptron and Winnow algorithms
in this framework. We will see that the loss function that achieves this is continuous and
its gradient w.r.t. wt is δt xt, where δt ∈ {−1, 0, +1}. We call this loss the (linear) hinge
loss (HL) and we believe this is the key tool for understanding linear threshold algorithms
such as the Perceptron and Winnow. However, in the process of changing the discrete
loss to the HL we also changed our learning problem from a classification to a regression
problem. There are now two versions of each algorithm, a classification version and a
regression version. The classification version predicts with a binary label using its linearly
thresholded prediction. The loss function is the discrete loss. The regression version, on
the other hand, predicts on the next instance xt with its linear activation ât = wt · xt. In the
classification problem the labels yt of the examples are −1 and +1, while in the regression
problem the labels at are −∞ and +∞. We will see that both versions of each algorithm
use the same rule to update the weight vector wt.
Another strong hint that the HL is related to Perceptron and Winnow comes from the fact
that this loss may be seen as a limiting case of the entropic loss used in logistic regression.
In logistic regression the threshold function σr is replaced by the smooth tanh function.
There is a technical way of associating a "matching loss function" with a given increasing
transfer function [HKW95]. The matching loss for the tanh transfer function is the entropic loss. We will show that by making this transfer function steeper and by taking the
right viewpoint of the matching loss, the entropic loss converges to the HL. In the limiting
case the slope of the transfer function is infinite, i.e., it becomes the threshold function σr.
The question is whether this introduction of the HL buys us anything. We believe so.
We can prove a unifying meta-theorem for the whole class of general additive algorithms
[GLS97, KW98], when defined w.r.t. the HL. The bounds for the regression versions of the
Perceptron and Winnow are simple special cases. These loss bounds can then be converted
to loss bounds for the corresponding classification problems w.r.t. the discrete loss. This
conversion is carried out through working with the "average margin" of a set of examples
relative to a linear threshold classifier. The conversion of the HL described in this paper
can then be considered a principled way of deriving average margin-based mistake bounds.
The average margin reveals the inner structure of mistake bound results that have been
proven thus far for conservative on-line algorithms. Previously used definitions, such as
the deviation [FS98] and the attribute error [Lit91], can easily be related to the average
margin or reinterpreted in terms of the HL and the average margin.
2
Preliminaries and the linear hinge loss
We define two subsets of R^n: the weight domain W and the instance domain X. The
weights w maintained by the algorithms always lie in the weight domain and the instances
x of the examples always lie in the instance domain. We require W to be convex.
A general additive algorithm and a divergence are defined in terms of a link function f.
Such a function is a vector valued function from the interior int W of the weight domain
W onto R^n, with the property that its Jacobian is strictly positive definite everywhere in
int W. A link function f has a unique inverse f⁻¹: R^n → int W. We assume that f
is the gradient of a (potential) function P_f from int W to R, i.e., f(w) = ∇P_f(w) for
w ∈ int W. It is easy to extend the domain of P_f such that it includes the boundary of W.
For any link function f, a (Bregman) divergence function d_f: W × int W → [0, ∞) is
defined as [Bre67]:

    d_f(u, w) = P_f(u) − P_f(w) − (u − w) · f(w).                      (2)

Thus d_f(u, w) is the difference between P_f(u) and its first order Taylor expansion around
w. Since f has a strictly positive definite Jacobian everywhere in int W, the potential P_f is
strictly convex over W. Thus d_f(u, w) ≥ 0 with equality holding iff u = w.
The Perceptron algorithm is motivated by the identity link f(w) = w, with weight domain
W = R^n. The corresponding divergence is d_f(u, w) = ‖u − w‖²/2. For Winnow the
Figure 1: HLr(â, a) as a function of â for the two
cases σr(a) = −1, +1.
Figure 2: The matching loss ML_{σ⁻¹}(y, ŷ).
weight domain is W = [0, ∞)^n. The link function is the componentwise logarithm. The
divergence related to this link function is the un-normalized relative entropy d_f(u, w) =
Σ_{i=1}^n (u_i ln(u_i/w_i) + w_i − u_i). Note that now u ∈ W, but w must lie in int W.

The following key property immediately follows from the definition of the divergence d_f.
Lemma 1 [KW98] For any u ∈ W and w1, w2 ∈ int W:

    d_f(u, w1) − d_f(u, w2) + d_f(w1, w2) = (u − w1) · (f(w2) − f(w1)).
In this paper we focus on a single neuron using a hard threshold as the transfer function (see
beginning of the introduction). We will view such a neuron in two ways. In the standard
view the neuron is used for binary classification. It outputs ŷ = σr(â), trying to predict the
desired label y using a threshold r. In the new view the neuron is a regressor. It outputs the
linear activation â ∈ R and is trying to predict a ∈ R.
For classification we use the discrete loss DL(y, ŷ) = ½|ŷ − y| ∈ {0, 1}. For regression
we use the linear hinge loss (HL) parameterized by a threshold r:

    For any â, a ∈ R:   HLr(â, a) := ½(σr(â) − σr(a))(â − r) = DL(y, ŷ) |â − r|.

Note that the arguments in the two losses DL and HLr are switched. This is intentional and
will be discussed later on.
It can be easily shown that HLr(w · x, a) is convex in w and that the gradient of this
loss w.r.t. w is ∇w HLr(w · x, a) = ½(σr(w · x) − σr(a)) x. Note that δ = (σr(w · x) −
σr(a))/2 can only take the three values 0, −1, and +1 mentioned in the introduction.
Strictly speaking, this gradient is not defined when w · x equals the threshold r. But we
will show in the subsequent sections that even in that case δ x has the properties we need.
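A short sketch of the loss and its gradient in the notation of this section (hedged: at the non-differentiable point w · x = r we simply return the same expression, which is all the analysis needs):

```python
import numpy as np

def sigma(a, r=0.0):
    return -1.0 if a < r else 1.0                  # hard threshold sigma_r

def hinge_loss(a_hat, a, r=0.0):
    """HL_r(a_hat, a) = 1/2 (sigma_r(a_hat) - sigma_r(a)) (a_hat - r)."""
    return 0.5 * (sigma(a_hat, r) - sigma(a, r)) * (a_hat - r)

def hinge_gradient(w, x, a, r=0.0):
    """Gradient of HL_r(w.x, a) w.r.t. w: delta * x, delta in {-1, 0, +1}."""
    d = 0.5 * (sigma(w @ x, r) - sigma(a, r))
    return d * x
```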
Figure 1 provides a graphical representation of HLr. The threshold function σr "transfers"
the linear activation â = w · x to a prediction ŷ which is a hard classification in {−1, +1}.
(For the remainder of this section we can assume with no loss of generality that
the threshold r is 0.) Smooth transfer functions such as the tanh are commonly used
in neural networks, e.g., ŷ = tanh(â), and relative loss bounds have been proven when
the comparison class consists of single neurons with any increasing differentiable transfer
function σ [HKW95, KW98]. However, for this to work a loss function that "matches" the
transfer function has to be used. This loss is defined¹ as follows [HKW95] (see Figure 2):

    ML_{σ⁻¹}(y, ŷ) := ∫_{σ⁻¹(ŷ)}^{σ⁻¹(y)} (σ(z) − y) dz = d_{σ⁻¹}(y, ŷ).
The matching loss for σ(z) = z is the square loss (linear regression) and the matching
loss for σ(z) = tanh(z) is the entropic loss (logistic regression), which is defined as:

¹In [HKW95] the notation L_σ(y, ŷ) is used for the matching loss ML_{σ⁻¹}(y, ŷ). We use here the
subscript σ⁻¹ instead of σ to stress a connection between the matching loss and the divergence that
is discussed at the end of this section.
    ML_{σ⁻¹}(y, ŷ) = ½(1 − y) ln((1 − y)/(1 − ŷ)) + ½(1 + y) ln((1 + y)/(1 + ŷ)).

The entropic loss is finite when y ∈
[−1, +1] and ŷ = tanh(â) ∈ (−1, +1). These are the ranges for y and ŷ needed for
logistic regression. We now want to use this type of loss for classification with linear
threshold functions, i.e., when y, ŷ ∈ {−1, +1} and the slope s of the tanh function is
increased until in the limit it becomes the hard threshold σ0. Obviously, σ⁻¹(−1) = −∞
and σ⁻¹(+1) = +∞ for any slope s. Thus the matching loss is infinite for all slopes.
Also, the known relative loss bounds based on the above notion of matching loss grow with
the slope of the transfer function. Thus it seems to be impossible to use the matching loss
when the transfer function is the hard threshold σ0. However, we can still make sense of
the matching loss by viewing the neuron as a regressor. The matching loss is now rewritten
as another Bregman divergence:
    ML_σ(â, a) = ∫_a^â (σ(z) − σ(a)) dz = P_σ(â) − P_σ(a) − (â − a) σ(a) = d_σ(â, a),      (3)

where P_σ is any function such that P_σ′(a) = σ(a). We now increase the slope of the transfer
function tanh while keeping â and a fixed. In the limiting case (hard threshold σ0) the
above loss becomes twice the linear hinge loss with threshold zero, i.e., ML_{σ0}(â, a) =
2 HL0(â, a) = (σ0(â) − σ0(a))(â − 0). Finally, observe that the two views of the neuron
are related to a duality property [AW98] of Bregman divergences:

    d_σ(â, a) = d_{σ⁻¹}(σ(a), σ(â)).                                   (4)
3 The algorithms

In this paper we always associate two general additive algorithms with a given
link function: a classification algorithm and a regression algorithm. Such algorithms, given next, correspond to the two views of a linear threshold neuron discussed in the last section. For brevity, we will call the two algorithms "the classification algorithm" and "the regression algorithm", respectively.

Gen. add. classification algorithm:
For t = 1, 2, ...
    Instance:   xt ∈ R^n
    Prediction: ŷt = σr(wt · xt)
    Label:      yt ∈ {−1, +1}
    Update:     wt+1 = f⁻¹(f(wt) − (η/2)(ŷt − yt) xt)
    Discrete loss: DL(yt, ŷt) = ½|ŷt − yt|

Gen. add. regression algorithm:
For t = 1, 2, ...
    Instance:   xt ∈ R^n
    Prediction: ât = wt · xt
    Label:²     at = yt ∞
    Update:     wt+1 = f⁻¹(f(wt) − (η/2)(σr(ât) − σr(at)) xt)
    Linear hinge loss: HLr(ât, at) = ½(σr(ât) − σr(at))(ât − r)

The classification algorithm receives a label yt ∈ {−1, +1}, while the regression algorithm receives the infinite label at with the sign of yt. This assures that yt = σr(at). The
classification algorithm predicts with ŷt = σr(ât), and the regression algorithm with its
linear activation ât. The loss for the classification algorithm is the discrete loss DL(yt, ŷt),
while for the regression algorithm we use HLr(ât, at). The updates of the two algorithms
are equivalent. The update of the regression algorithm is motivated by the minimization
problem:

    wt+1 := argmin_w U(w)   where   U(w) = d_f(w, wt) + η HLr(w · xt, at).

By setting the gradient of U(w) w.r.t. w to zero we get the following equilibrium equation
that holds at the minimum of U(w):

    wt+1 = f⁻¹(f(wt) − (η/2)(σr(wt+1 · xt) − σr(at)) xt).

We approximately solve this equation by replacing wt+1 · xt by ât = wt · xt, i.e.,

    wt+1 = f⁻¹(f(wt) − (η/2)(σr(ât) − σr(at)) xt).

²This is a short-hand meaning at = +∞ if yt = +1 and at = −∞ if yt = −1.

Both versions of the Perceptron and Winnow are obtained by using the link functions
f(w) = w and f(w) = (ln w1, ..., ln wn), respectively.
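The whole family can be coded once, with the link f passed in; the sketch below (our rendering, not the paper's) recovers the Perceptron with the identity link and Winnow with the componentwise logarithm:

```python
import numpy as np

def run_classification(examples, f, f_inv, w, eta=1.0, r=0.0):
    """General additive classification algorithm for a link function f."""
    mistakes = 0
    for x, y in examples:                       # y in {-1, +1}
        y_hat = -1.0 if w @ x < r else 1.0
        if y_hat != y:                          # conservative: update only on mistakes
            mistakes += 1
            w = f_inv(f(w) - (eta / 2.0) * (y_hat - y) * x)
    return w, mistakes

# Perceptron: run_classification(S, lambda w: w, lambda z: z, w0)
# Winnow:     run_classification(S, np.log, np.exp, w0)  # w0 with positive entries
```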
4 Relative loss bounds
The following lemma relates the hinge loss of the regression algorithm to the hinge loss of
an arbitrary linear predictor u.
Lemma 2 For all u ∈ W, wt ∈ int W, xt ∈ X, at, r ∈ R and η > 0:

    HLr(ât, at) − HLr(u · xt, at) + HLr(u · xt, ât)
        = (1/η) (d_f(u, wt) − d_f(u, wt+1) + d_f(wt, wt+1))
        = ½(ŷt − yt)(ât − u · xt).                                      (5)
Proof. We have d_f(u, wt) − d_f(u, wt+1) + d_f(wt, wt+1) = (u − wt) · (f(wt+1) −
f(wt)) = (wt − u) · (η/2)(σr(ât) − σr(at)) xt = (η/2)(σr(ât) − σr(at))(ât − u · xt) =
η (HLr(ât, at) − HLr(u · xt, at) + HLr(u · xt, ât)). The first equality follows from Lemma 1
and the second follows from the update rule of the regression algorithm. The last equality
uses HLr(ât, at) as a divergence d_{σr}(ât, at) (see (4)) and again Lemma 1. □
By summing the first equality of (5) over all trials t we could relate the total HLr of the
regression algorithm to the total HLr of the regressor u. However, our goal is to obtain
bounds on the number of mistakes of the classification algorithm. It is therefore natural
to interpret u too as a linear threshold classifier, with the same threshold r used by the
classification algorithm. We use the second equality of (5) and sum up over all T trials:

    Σ_{t=1}^T ½(ŷt − yt)(ât − u · xt) = (1/η) (d_f(u, w1) − d_f(u, wT+1) + Σ_{t=1}^T d_f(wt, wt+1)).

Note that the sums in the above equality are unaffected by trials in which no mistake occurs.
In such trials, ŷt = yt and wt+1 = wt. Thus the above is equivalent to the following, where
M is the set of trials in which a mistake occurs:

    Σ_{t∈M} ½(ŷt − yt)(ât − u · xt) = (1/η) (d_f(u, w1) − d_f(u, wT+1) + Σ_{t∈M} d_f(wt, wt+1)).

Since ½(ŷt − yt) = −yt when t ∈ M and d_f(u, wT+1) ≥ 0 we get the following theorem:

Theorem 3 Let M ⊆ {1, ..., T} be the set of trials in which the classification algorithm
makes a mistake. Then for every u ∈ W we have

    Σ_{t∈M} yt (u · xt − ât) ≤ (1/η) (d_f(u, w1) + Σ_{t∈M} d_f(wt, wt+1)).  □
Throughout the rest of this section the classification algorithm is compared to the performance of a linear threshold classifier u with threshold r = 0. We now apply Theorem 3 to
the Perceptron algorithm with w1 = 0, giving a bound in terms of the average margin of a linear
threshold classifier u with threshold 0 on a trial sequence M:

    γu,M := (1/|M|) Σ_{t∈M} yt u · xt.

Since yt ât ≤ 0 for t ∈ M, the l.h.s. of the inequality of Theorem 3 is at least |M| γu,M.
By the update rule, Σ_{t∈M} d_f(wt, wt+1) = Σ_{t∈M} (η²/2) ‖xt‖² ≤ (η²/2) |M| X²,
where ‖xt‖₂ ≤ X for t ∈ M. Since in Theorem 3 u is an arbitrary vector, we replace
u by λu therein, and set λ = X²η/γu,M. When we solve the resulting inequality for |M| the
dependence on η cancels out. This gives us the following bound on the number of mistakes:

    |M| ≤ (‖u‖₂ X / γu,M)².
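The quantities in this bound are easy to compute; a small sketch (ours) for checking it on a run of the Perceptron algorithm:

```python
import numpy as np

def average_margin(u, mistake_trials):
    """gamma_{u,M} = (1/|M|) sum_{t in M} y_t (u . x_t)."""
    return float(np.mean([y * (u @ x) for x, y in mistake_trials]))

def perceptron_bound(u, mistake_trials, X):
    """|M| <= (||u||_2 X / gamma_{u,M})^2, valid when the average margin is positive."""
    gamma = average_margin(u, mistake_trials)
    assert gamma > 0, "the bound requires gamma_{u,M} > 0"
    return (np.linalg.norm(u) * X / gamma) ** 2
```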
Note that in the usual mistake bound for the Perceptron algorithm the average γu,M is replaced by min_{t∈M} yt u · xt.³ Also, observe that the predictions of the Perceptron algorithm
with r = 0 and w1 = 0 are not affected by η. Hence the previous bound holds for any
η > 0.
Next, we apply Theorem 3 to a normalized version of Winnow. This version of Winnow
keeps weights in the probability simplex and is obtained by a slight modification of Winnow's link function. We assume r = 0 and choose X = {x ∈ R^n : ‖x‖∞ ≤ X∞}.
Unlike the Perceptron algorithm, a Winnow-like algorithm heavily depends on the learning
rate, so a careful tuning is needed. One can show (details omitted due to space limitations)
that if η is such that η γu,M + η X∞ − ln((e^{2ηX∞} + 1)/2) > 0, then this normalized version of
Winnow achieves the bound

    |M| ≤ d_f(u, w1) / (η γu,M + η X∞ − ln((e^{2ηX∞} + 1)/2)),

where d_f(u, w1) is the relative entropy between the two probability vectors u and w1.
Conclusions: In the full paper we study the case when there is no consistent linear threshold
classifier u more carefully and give more involved bounds for the Winnow and normalized Winnow
algorithms as well as for the p-norm Perceptron algorithm [GLS97].
References
[AW98]   K. Azoury and M. K. Warmuth. Relative loss bounds and the exponential
         family of distributions. 1998. Unpublished manuscript.

[Bre67]  L.M. Bregman. The relaxation method of finding the common point of convex
         sets and its application to the solution of problems in convex programming.
         USSR Computational Mathematics and Physics, 7:200-217, 1967.

[FS98]   Y. Freund and R. Schapire. Large margin classification using the perceptron
         algorithm. In 11th COLT, pp. 209-217, ACM, 1998.

[GLS97]  A. J. Grove, N. Littlestone, and D. Schuurmans. General convergence results
         for linear discriminant updates. In 10th COLT, pp. 171-183, ACM, 1997.

[HKW95]  D. P. Helmbold, J. Kivinen, and M. K. Warmuth. Worst-case loss bounds for
         sigmoided linear neurons. In NIPS 1995, pp. 309-315, MIT Press, 1995.

[KW97]   J. Kivinen and M. K. Warmuth. Additive versus exponentiated gradient updates
         for linear prediction. Inform. and Comput., 132(1):1-64, 1997.

[KW98]   J. Kivinen and M. K. Warmuth. Relative loss bounds for multidimensional regression problems. In NIPS 10, pp. 287-293, MIT Press, 1998.

[KWA97]  J. Kivinen, M. K. Warmuth, and P. Auer. The perceptron algorithm vs. winnow:
         linear vs. logarithmic mistake bounds when few input variables are relevant.
         Artificial Intelligence, 97:325-343, 1997.

[Lit88]  N. Littlestone. Learning quickly when irrelevant attributes abound: A new linear-threshold algorithm. Machine Learning, 2:285-318, 1988.

[Lit89]  N. Littlestone. Mistake Bounds and Logarithmic Linear-threshold Learning
         Algorithms. PhD thesis, University of California Santa Cruz, 1989.

[Lit91]  N. Littlestone. Redundant noisy attributes, attribute errors, and linear threshold
         learning using Winnow. In 4th COLT, pp. 147-156, Morgan Kaufmann, 1991.
³The average margin $\gamma_{u,M}$ may be positive even though $u$ is not consistent.
668 | 1,611 | Synergy and redundancy among brain
cells of behaving monkeys
Itay Gat
Institute of Computer Science and
Center for Neural Computation
The Hebrew University, Jerusalem 91904, Israel
Naftali Tishby†
NEC Research Institute
4 Independence Way
Princeton NJ 08540
Abstract
Determining the relationship between the activity of a single nerve
cell to that of an entire population is a fundamental question that
bears on the basic neural computation paradigms. In this paper
we apply an information theoretic approach to quantify the level
of cooperative activity among cells in a behavioral context. It is
possible to discriminate between synergetic activity of the cells vs .
redundant activity, depending on the difference between the information they provide when measured jointly and the information
they provide independently. We define a synergy value that is positive in the first case and negative in the second and show that the
synergy value can be measured by detecting the behavioral mode of
the animal from simultaneously recorded activity of the cells. We
observe that among cortical cells positive synergy can be found,
while cells from the basal ganglia , active during the same task, do
not exhibit similar synergetic activity.
{itay,tishby}@cs.huji.ac.il
†Permanent address: Institute of Computer Science and Center for Neural Computation, The Hebrew University, Jerusalem 91904, Israel.
1
Introduction
Measuring ways by which several neurons in the brain participate in a specific
computational task can shed light on fundamental neural information processing
mechanisms . While it is unlikely that complete information from any macroscopic
neural tissue will ever be available, some interesting insight can be obtained from
simultaneously recorded cells in the cortex of behaving animals. The question we
address in this study is the level of synergy, or the level of cooperation , among brain
cells, as determined by the information they provide about the observed behavior
of the animal.
1.1
The experimental data
We analyze simultaneously recorded units from behaving monkeys during a delayed
response behavioral experiment. The data was collected at the high brain function
laboratory of the Hadassah Medical School of the Hebrew University [1, 2]. In this
task the monkey had to remember the location of a visual stimulus and respond by
touching that location after a delay of 1-32 sec. Correct responses were rewarded
by a drop of juice. In one set of recordings six micro-electrodes were inserted
simultaneously into the frontal or prefrontal cortex [1, 3]. In another set of experiments
the same behavioral paradigm was used and recordings were taken from the striatum, which is the first station in the basal ganglia (a sub-cortical ganglion) [2]. The cells recorded in the striatum were the tonically active neurons [2], which are known to
be the cholinergic inter-neurons of the striatum. These cells are known to respond
to reward.
The monkeys were trained to perform the task in two alternating modes, "Go" and
"No-Go" [1]. Both sets of behavioral modes can be detected from the recorded spike
trains using several statistical modeling techniques that include Hidden Markov
Models (HMM) and Post Stimulus Histograms (PSTH). The details of these detection methods are reported elsewhere [4, 5]. For this paper it is important to know
that we can significantly detect the correct behavior; for example, in the "Go" vs. "No-Go" case, correct detection is achieved about 90% of the time, where chance
is 50% and the monkey's average performance is 95% correct on this task.
2
Theoretical background
Our measure of synergy level among cells is information theoretic and was recently
proposed by Brenner et. aZ. [6] for analysis of spikes generated by a single neuron.
This is the first application of this measure to quantify cooperativity among neurons.
2.1
Synergy and redundancy
A fundamental quantity in information theory is the mutual information between
two random variables X and Y. It is defined as the cross-entropy (Kullback-Leibler
divergence) between the joint distribution of the variables , p(x, y), and the product
of the marginal distributions p(x)p(y). As such it measures the statistical dependence of the variables X and Y. It is symmetric in X and Y and has the following
familiar relations to their entropies [7]:
$$I(X;Y) = \sum_{x,y} P(x,y)\log\left(\frac{P(x,y)}{P(x)P(y)}\right) = D_{KL}\big[P(X,Y)\,\|\,P(X)P(Y)\big]$$
$$= H(X) + H(Y) - H(X,Y) = H(X) - H(X|Y) = H(Y) - H(Y|X). \qquad (1)$$
When given three random variables $X_1$, $X_2$ and $Y$, one can consider the mutual information between the joint variables $(X_1, X_2)$ and the variable $Y$, $I(X_1, X_2; Y)$ (notice the position of the semicolon), as well as the mutual informations $I(X_1; Y)$ and $I(X_2; Y)$. Similarly, one can consider the mutual information between $X_1$ and $X_2$ conditioned on a given value of $Y = y$, $I(X_1; X_2|y) = D_{KL}[P(X_1, X_2|y)\,\|\,P(X_1|y)P(X_2|y)]$, as well as its average, the conditional mutual information,
$$I(X_1; X_2|Y) = \sum_y P(y)\, I_y(X_1; X_2).$$
Following Brenner et al. [6] we define the synergy level of $X_1$ and $X_2$ with respect to the variable $Y$ as
$$Syn_Y(X_1, X_2) = I(X_1, X_2; Y) - \big( I(X_1; Y) + I(X_2; Y) \big), \qquad (2)$$
with the natural generalization to more than two variables $X$. This expression can be rewritten in terms of entropies and conditional information as follows:
$$Syn_Y(X_1, X_2) = H(X_1, X_2) - H(X_1, X_2|Y) - \big( (H(X_1) - H(X_1|Y)) + (H(X_2) - H(X_2|Y)) \big) \qquad (3)$$
$$= \underbrace{H(X_1|Y) + H(X_2|Y) - H(X_1, X_2|Y)}_{\text{depends on } Y} \;-\; \underbrace{\big( H(X_1) + H(X_2) - H(X_1, X_2) \big)}_{\text{independent of } Y}$$
When the variables exhibit a positive synergy value with respect to the variable $Y$, they jointly provide more information on $Y$ than when considered independently, as expected in synergetic cases. Negative synergy values correspond to redundancy: the variables do not provide independent information about $Y$. A zero synergy value is obtained when the variables are independent of $Y$ or when there is no change in their dependence when conditioned on $Y$. We claim that this is a useful measure of cooperativity among neurons in a given computational task.
It is clear from Eq. (3) that
$$I_y(X_1; X_2) = I(X_1; X_2)\;\; \forall y \in Y \;\Longrightarrow\; Syn_Y(X_1, X_2) = 0, \qquad (4)$$
since in that case $\sum_y P(y)\, I_y(X_1; X_2) = I(X_1; X_2)$.
In other words, the synergy value is nonzero only if the statistical dependence, hence the mutual information between the variables, is affected by the value of $Y$. It is positive when the mutual information increases, on average, when conditioned on $Y$, and negative if this conditional mutual information decreases. Notice that the value of synergy can be both positive and negative since information, unlike entropy, is not sub-additive in the $X$ variables.
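As an illustration of Eq. (2), the following sketch computes the synergy value directly from a discrete joint probability table; the function names and the numpy representation are assumptions made for illustration, not part of the original analysis.

```python
import numpy as np

def mutual_information(pxy):
    """I(X;Y) in bits from a joint probability table pxy[x, y]."""
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                                   # avoid log(0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

def synergy(p):
    """Syn_Y(X1, X2) = I(X1,X2;Y) - I(X1;Y) - I(X2;Y) for p[x1, x2, y];
    positive values indicate synergy, negative values redundancy."""
    n1, n2, ny = p.shape
    i_joint = mutual_information(p.reshape(n1 * n2, ny))  # (X1,X2) as one variable
    i_x1 = mutual_information(p.sum(axis=1))              # marginalize out X2
    i_x2 = mutual_information(p.sum(axis=0))              # marginalize out X1
    return i_joint - i_x1 - i_x2
```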
3
Synergy among neurons
Our measure of synergy among the units is based on the ability to detect the
behavioral mode from the recorded activity, as we discuss below. As discussed
above, synergy among neurons is possible only if their statistical dependence change
with time. An important case where synergy is not expected is pure "population
coding" [8]. In this case the cells are expected to fire independently, each with its
own fixed tuning curve. Our synergy value can thus be used to test if the recorded
units are indeed participating in a pure population code of this kind, as hypothesized
for certain motor cortical activity.
Theoretical models of the cortex that clearly predict nonzero synergy include attractor neural networks (ANN) [9] and synfire chain models (SFC) [3]. Both these
models predict changes in the collective activity patterns, as neurons move between
attractors in the ANN case, or when different synfire-chains of activity are born
or disappear in the SFC case. To the extent that such changes in the collective
activity depend on behavior, nonzero synergy values can be detected. It remains
an interesting theoretical challenge to estimate the quantitative synergy values for
such models and compare it to observed quantities.
3.1
Time-dependent cross correlations
In our previous studies[4] we demonstrated, using hidden Markov models of the
activity, that the pairwise cross-correlations in the same data can change significantly with time, depending on the underlying collective state of activity. These
states, revealed by the hidden Markov model, in turn depend on the behavior and
enable its prediction . Dramatic and fast changes in the cross-correlation of cells
has also been shown by others [10]. This finding indicates directly that the statistical
dependence of the neurons can change (rapidly) with time, in a way correlated to
behavior. This clearly suggests that nonzero synergy should be observed among
these cortical units , relative to this behavior. In the present study this theoretical
hypothesis is verified.
3.2
Redundancy cases
If on the other hand the conditional mutual information equals zero for all behavioral modes, i.e. $I_y(X_1; X_2) = 0\;\forall y \in Y$, while $I(X_1; X_2) > 0$, we expect to get negative synergy, or redundancy among the cells, with respect to the behavior variable $Y$.
We observed clear redundancy in another part of the brain, the basal ganglia, during the same experiment, when the behavior was the pre-reward and post-reward
activity. In this case different cells provide exactly the same information, which
yields negative synergy values.
4
4.1
Experimental results
Synergy measurement in practice
To evaluate the synergy value among different cells, it is necessary to estimate
the conditional distribution $p(y|x)$, where $y$ is the current behavior and $x$ represents a single trial of spike trains of the considered cells. Estimating this probability,
however, requires an underlying statistical model, or a representation of the spike trains. Otherwise there is never enough data, since cortical spike trains are never exactly reproducible. In this work we choose the rate representation, which is the simplest to evaluate. The estimation of $p(y|x)$ goes as follows (a sketch of this procedure in code is given below):
- For each of the $M$ behavioral modes $(y_1, y_2, \ldots, y_M)$ collect spike train samples (the training data set).
- Using the training sample, construct a Post Stimulus Time Histogram (PSTH), i.e. the rate as a function of time, for each behavioral mode.
- Given a spike train outside of the training set, compute its probability of resulting from each of the $M$ modes.
- The spike train is considered correctly classified if the most probable mode is in fact the true behavioral mode, and incorrectly otherwise.
- The fraction of correct classifications, over all spike trains of a given behavioral mode $y_i$, is taken as the estimate of $P(y_i|x)$ and denoted $P_{c_i}$, where $c_i$ is the identity of the cells used in the computation.
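The following is a minimal sketch of the PSTH-based classification step, under the additional (hedged) assumption that the rate representation is read as an inhomogeneous Poisson model of binned spike counts; the paper does not commit to a specific likelihood, so this is one concrete instantiation.

```python
import numpy as np

def psth(trials):
    """PSTH: mean spike count per time bin over the training trials of one
    behavioral mode; `trials` has shape (n_trials, n_bins)."""
    return trials.mean(axis=0)

def classify_mode(trial, psths):
    """Assign a binned spike train to the behavioral mode whose PSTH,
    read as an inhomogeneous Poisson rate, gives the highest log-likelihood."""
    scores = []
    for rate in psths:                       # one PSTH per behavioral mode
        rate = np.clip(rate, 1e-9, None)     # guard against log(0)
        scores.append(np.sum(trial * np.log(rate) - rate))
    return int(np.argmax(scores))
```

Running classify_mode over all held-out trials of mode $y_i$ and taking the fraction classified correctly yields the estimate $P_{c_i}$ used in the expression below.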
For the case of only two categories of behavior and for a uniform distribution of the different categories, the value of the entropy $H(Y)$ is the same for all combinations of cells, and is simply $H(Y) = -\sum_y p(y)\log_2(p(y)) = \log_2 2 = 1$. The full expression (in bits) for the synergy value can thus be written as a comparison of
$$1 + \sum_x p(x) \sum_y P_{c_1 c_2} \log_2(P_{c_1 c_2}) \quad\text{with}\quad \left(1 + \sum_x p(x) \sum_y P_{c_1} \log_2(P_{c_1})\right) + \left(1 + \sum_x p(x) \sum_y P_{c_2} \log_2(P_{c_2})\right). \qquad (5)$$
If the first expression is larger than the second then there is (positive) synergy, and vice versa for redundancy. However, there is one very important caveat. As we saw, the computation of the mutual information is not done exactly, and what one really computes is only a lower bound. If the bound is tighter for the multiple-cell calculation, the method could falsely infer positive synergy, and if the bound is tighter for the single-cell computation, the method could falsely infer negative synergy. In previous work we have shown that the method we use for this estimation is quite reasonable and robust [5]; therefore, we believe that we have a conservative (i.e. less positive) estimate of synergy.
4.2
Observed synergy values
In the first set of experiments we tried to detect the behavioral mode during the
delay-period of correct trials. In this case the two types of behavior were the
"Go" and the "No-Go" described in the introduction. An example of this detection
problem is given in Figure 1A. In this figure there are 100 examples of multi-electrode
recording of spike trains during the delay period. On the left is the "Go-mode" data
and on the right the "No-Go mode", for two cells. On the lower part there is an
example of two single spike trains that need to be classified by the mode models.
[Figure 1 image: panel A shows raster displays for the "Go mode" and "No-Go mode" data; panel B shows raster displays for the "Pre-reward" and "Post-reward" periods; below each panel, example single trials (single trial no. 1 and no. 2) are shown.]
Figure 1: Raster displays of simultaneously recorded cells in the two different areas; in each area there were two behavioral modes.
Table 1 gives some examples of detection results obtained by using two cells independently, and by using their joint combination. It can be seen that the synergy is
positive and significant. We examined 19 recording session of the same behavioral
modes for two different animals and evaluated the synergy value. In 18 out of the
19 sessions there was at least one example of significant positive synergy among the
cells.
For comparison we analyzed another set of experiments in which the data was
recorded from the striatum in the basal ganglia. An example of this detection is shown in Figure 1B. The behavioral modes were the "pre-reward" vs. the "post-reward" periods. Nine recording sessions from the two different monkeys were examined using the same detection technique. Although the detection results improve when the number of cells increases, in none of these recordings was a positive synergy value found. For most of the data the synergy value was close to zero, i.e. the
mutual information among two cells jointly was close to the sum of the mutual information of the independent cells, as expected when the cells exhibit (conditionally)
independent activity.
The prevailing difference between the synergy measurements in the cortex and in the TANs of the basal ganglia is also strengthened by the different mechanisms underlying those cells. The TANs are assumed to be global mediators of information in the
striatum, a relatively simple task, whereas the information processed in the frontal
cortex in this task is believed to be much more collective and complicated. Here we
suggest a first handle for quantitative detection of such different neuronal activities.
Acknowledgments
Special thanks are due to Moshe Abeles for his encouragement and support, and to
William Bialek for suggesting the idea to look for the synergy among cortical cells.
We would also like to thank A. Raz, Hagai Bergman, and Eilon Vaadia for sharing
their data with us. The research at the Hebrew university was supported in part
by a grant from the United States-Israel Binational Science Foundation (BSF).
Table 1: Examples of synergy among cortical neurons. For each example the mutual
information of each cell separately is given together with the mutual information
of the pair. In parentheses the matching detection probability (average over $p(y|x)$)
is also given. The last column gives the percentage of increase from the mutual
information of the single cells to the mutual information of the pair. The table gives
only those pairs for which the percentage was larger than 20% and the detection
rate higher than 60%.
Session   Cells   Cell 1          Cell 2          Both cells      Syn (%)
b116b     5,6     0.068 (64.84)   0.083 (66.80)   0.209 (76.17)   38
bl21b     1,4     0.201 (73.74)   0.118 (69.70)   0.497 (87.88)   56
bl21b     3,4     0.082 (66.67)   0.118 (69.70)   0.240 (77.78)   20
bl26b     0,3     0.062 (62.63)   0.077 (66.16)   0.198 (75.25)   42
bl26b     1,2     0.030 (60.10)   0.051 (63.13)   0.148 (72.22)   82
cl77b     2,3     0.054 (62.74)   0.013 (61.50)   0.081 (68.01)   20
cr38b     0,2     0.074 (65.93)   0.058 (63.19)   0.160 (73.08)   21
cr38b     0,4     0.074 (65.93)   0.042 (62.09)   0.144 (71.98)   24
cr38b     3,4     0.051 (62.09)   0.042 (62.09)   0.111 (69.23)   20
cr43b     0,1     0.070 (65.00)   0.063 (64.44)   0.181 (74.44)   36
References
[1] M. Abeles, E. Vaadia, H. Bergman. Firing patterns of single units in the prefrontal cortex and neural-network models. Network, 1 (1990).
[2] E. Raz et al. Neuronal synchronization of tonically active neurons in the striatum of normal and parkinsonian primates. J. Neurophysiol., 76:2083-2088 (1996).
[3] M. Abeles. Corticonics. (Cambridge University Press, 1991).
[4] I. Gat, N. Tishby and M. Abeles. Hidden Markov modeling of simultaneously recorded cells in the associative cortex of behaving monkeys. Network, 8:297-322 (1997).
[5] I. Gat, N. Tishby. Comparative study of different supervised detection methods of simultaneously recorded spike trains. In preparation.
[6] N. Brenner, S.P. Strong, R. Koberle, W. Bialek, and R. de Ruyter van Steveninck. The Economy of Impulses and the Stiffness of Spike Trains. NEC Research Institute Technical Note (1998).
[7] T.M. Cover and J.A. Thomas. Elements of Information Theory. (Wiley NY, 1991).
[8] A.P. Georgopoulos, A.B. Schwartz, R.E. Kettner. Neuronal Population Coding of Movement Direction. Science, 233:1416-1419 (1986).
[9] D.J. Amit. Modeling Brain Function. (Cambridge University Press, 1989).
[10] E. Ahissar et al. Dependence of Cortical Plasticity on Correlated Activity of Single Neurons and on Behavioral Context. Science, 257:1412-1415 (1992).
669 | 1,612 | Sparse Code Shrinkage: Denoising by
Nonlinear Maximum Likelihood Estimation
Aapo Hyvarinen, Patrik Hoyer and Erkki Oja
Helsinki University of Technology
Laboratory of Computer and Information Science
P.O. Box 5400, FIN-02015 HUT, Finland
aapo.hyvarinen@hut.fi,patrik.hoyer@hut.fi,erkki.oja@hut.fi
http://www.cis.hut.fi/projects/ica/
Abstract
Sparse coding is a method for finding a representation of data in
which each of the components of the representation is only rarely
significantly active. Such a representation is closely related to redundancy reduction and independent component analysis, and has
some neurophysiological plausibility. In this paper, we show how
sparse coding can be used for denoising. Using maximum likelihood
estimation of nongaussian variables corrupted by gaussian noise, we
show how to apply a shrinkage nonlinearity on the components of
sparse coding so as to reduce noise. Furthermore, we show how to
choose the optimal sparse coding basis for denoising. Our method
is closely related to the method of wavelet shrinkage, but has the
important benefit over wavelet methods that both the features and
the shrinkage parameters are estimated directly from the data.
1
Introduction
A fundamental problem in neural network research is to find a suitable representation for the data. One of the simplest methods is to use linear transformations of the
observed data. Denote by $x = (x_1, x_2, \ldots, x_n)^T$ the observed $n$-dimensional random vector that is the input data (e.g., an image window), and by $s = (s_1, s_2, \ldots, s_n)^T$
the vector of the linearly transformed component variables. Denoting further the
n x n transformation matrix by W, the linear representation is given by
s=Wx.
(1)
We assume here that the number of transformed components equals the number of
observed variables, but this need not be the case in general.
An important representation method is given by (linear) sparse coding [1 , 10], in
which the representation of the form (1) has the property that only a small number
of the components Si of the representation are significantly non-zero at the same
time. Equivalently, this means that a given component has a 'sparse' distribution .
A random variable Si is called sparse when Si has a distribution with a peak at zero,
and heavy tails, as is the case, for example, with the double exponential (or Laplace)
distribution [6]; for all practical purposes , sparsity is equivalent to supergaussianity
or leptokurtosis [8]. Sparse coding is an adaptive method, meaning that the matrix
W is estimated for a given class of data so that the components Si are as sparse as
possible; such an estimation procedure is closely related to independent component
analysis [2J.
Sparse coding of sensory data has been shown to have advantages from both physiological and information processing viewpoints [1] . However, thorough analyses of
the utility of such a coding scheme have been few. In this paper, we introduce and
analyze a statistical method based on sparse coding. Given a signal corrupted by
additive gaussian noise, we attempt to reduce gaussian noise by soft thresholding
('shrinkage') of the sparse components. Intuitively, because only a few of the components are significantly active in the sparse code of a given data point, one may
assume that the activities of components with small absolute values are purely noise
and set them to zero, retaining just a few components with large activities. This
method is closely connected to the wavelet shrinkage method [3]. In fact, sparse
coding may be viewed as a principled way for determining a wavelet-like basis and
the corresponding shrinkage nonlinearities, based on data alone.
2
Maximum likelihood estimation of sparse components
The starting point of a rigorous derivation of our denoising method is the fact that
the distributions of the sparse components are nongaussian. Therefore, we shall
begin by developing a general theory that shows how to remove gaussian noise from
nongaussian variables, making minimal assumptions on the data.
Denote by $s$ the original nongaussian random variable (corresponding here to a noise-free version of one of the sparse components $s_i$), and by $\nu$ gaussian noise of zero mean and variance $\sigma^2$. Assume that we only observe the random variable $y$:
$$y = s + \nu \qquad (2)$$
and we want to estimate the original $s$. Denoting by $p$ the probability density of $s$, and by $f = -\log p$ its negative log-density, the maximum likelihood (ML) method gives the following estimator for $s$:
$$\hat{s} = \arg\min_u \frac{1}{2\sigma^2}(y - u)^2 + f(u). \qquad (3)$$
Assuming $f$ to be strictly convex and differentiable, this can be solved [6] to yield $\hat{s} = g(y)$, where the function $g$ can be obtained from the relation
$$g^{-1}(u) = u + \sigma^2 f'(u), \qquad (4)$$
which follows by setting the derivative of (3) with respect to $u$ to zero. This nonlinear estimator forms the basis of our method.
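As a concrete (hypothetical) illustration of Eqs. (3)-(4), the estimator can be evaluated by numerically inverting the first-order optimality condition; the sketch below assumes a strictly convex, differentiable $f$ whose derivative is supplied by the user, and a root bracket wide enough to contain the solution.

```python
import numpy as np
from scipy.optimize import brentq

def shrinkage_estimator(y, f_prime, sigma2, lo=-1e3, hi=1e3):
    """Evaluate s_hat = g(y) by inverting g^{-1}(u) = u + sigma^2 f'(u)
    from Eq. (4), i.e. solving the optimality condition of Eq. (3)."""
    def h(u):
        return u + sigma2 * f_prime(u) - y   # monotone when f is convex
    return brentq(h, lo, hi)

# Example with a smooth log-cosh prior, f(u) = log cosh(u), f'(u) = tanh(u):
# s_hat = shrinkage_estimator(y=1.5, f_prime=np.tanh, sigma2=0.3)
```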
"'-~~-----r\--~-~~---,
'.'.
..' ...
, "
Figure 1: Shrinkage nonlinearities and associated probability densities. Left: Plots
of the different shrinkage functions. Solid line: shrinkage corresponding to Laplace
density. Dashed line: typical shrinkage function obtained from (6). Dash-dotted
line: typical shrinkage function obtained from (8). For comparison, the line x = y is
given by dotted line. All the densities were normalized to unit variance, and noise
variance was fixed to .3. Right: Plots of corresponding model densities of the sparse
components. Solid line: Laplace density. Dashed line: a typical moderately supergaussian density given by (5). Dash-dotted line: a typical strongly supergaussian
density given by (7). For comparison, gaussian density is given by dotted line.
3
Parameterizations of sparse densities
To use the estimator defined by (3) in practice, the densities of the Si need to
be modelled with a parameterization that is rich enough. We have developed two
parameterizations that seem to describe very well most of the densities encountered
in image denoising. Moreover, the parameters are easy to estimate, and the inversion
in (4) can be performed analytically. Both models use two parameters and are thus
able to model different degrees of supergaussianity, in addition to different scales,
i.e. variances. The densities are here assumed to be symmetric and of zero mean.
The first model is suitable for supergaussian densities that are not sparser than the Laplace distribution [6], and is given by the family of densities
$$p(s) = C \exp(-a s^2/2 - b|s|), \qquad (5)$$
where $a, b > 0$ are parameters to be estimated, and $C$ is an irrelevant scaling constant. The classical Laplace density is obtained when $a = 0$, and gaussian densities correspond to $b = 0$. A simple method for estimating $a$ and $b$ was given in [6]. For this density, the nonlinearity $g$ takes the form:
$$g(u) = \frac{1}{1 + \sigma^2 a}\,\mathrm{sign}(u)\max(0,\, |u| - b\sigma^2) \qquad (6)$$
where $\sigma^2$ is the noise variance. The effect of the shrinkage function in (6) is to reduce the absolute value of its argument by a certain amount, which depends on the parameters, and then rescale. Small arguments are thus set to zero. Examples of the obtained shrinkage functions are given in Fig. 1.
The second model describes densities that are sparser than the Laplace density:
$$p(s) = \frac{1}{2d}\, \frac{(\alpha + 2)\,[\alpha(\alpha + 1)/2]^{(\alpha/2 + 1)}}{\left[\sqrt{\alpha(\alpha + 1)/2} + |s/d|\right]^{(\alpha + 3)}}. \qquad (7)$$
When $\alpha \to \infty$, the Laplace density is obtained as the limit. A simple consistent method for estimating the parameters $d, \alpha > 0$ in (7) can be obtained from the relations $d = \sqrt{E\{s^2\}}$ and $\alpha = (2 - k + \sqrt{k(k+4)})/(2k - 1)$ with $k = d^2 p_s(0)^2$; see [6]. The resulting shrinkage function can be obtained as [6]
$$g(u) = \mathrm{sign}(u)\max\!\left(0,\; \frac{|u| - ad}{2} + \frac{1}{2}\sqrt{(|u| + ad)^2 - 4\sigma^2(\alpha + 3)}\right) \qquad (8)$$
where $a = \sqrt{\alpha(\alpha + 1)/2}$, and $g(u)$ is set to zero in case the square root in (8) is
imaginary. This is a shrinkage function that has a certain hard-thresholding flavor,
as depicted in Fig. 1.
Examples of the shapes of the densities given by (5) and (7) are given in Fig. 1,
together with a Laplace density and a gaussian density. For illustration purposes,
the densities in the plot are normalized to unit variance, but these parameterizations
allow the variance to be chosen freely.
Choosing whether model (5) or (7) should be used can be based on moments of
the distributions; see [6]. Methods for estimating the noise variance $\sigma^2$ are given in
[3,6].
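For concreteness, here is a sketch of the two shrinkage nonlinearities (6) and (8) as reconstructed above; the vectorized numpy form and the handling of the imaginary-root case are implementation choices, and the formulas themselves were restored from garbled text, so treat this as a best-effort reading rather than the authors' code.

```python
import numpy as np

def shrink_model5(u, a, b, sigma2):
    """Shrinkage nonlinearity of Eq. (6) for the density model (5)."""
    return np.sign(u) * np.maximum(0.0, np.abs(u) - b * sigma2) / (1.0 + sigma2 * a)

def shrink_model7(u, alpha, d, sigma):
    """Shrinkage nonlinearity of Eq. (8) for the sparser model (7);
    the output is set to zero where the square root would be imaginary."""
    a = np.sqrt(alpha * (alpha + 1.0) / 2.0)
    disc = (np.abs(u) + a * d) ** 2 - 4.0 * sigma**2 * (alpha + 3.0)
    inner = 0.5 * (np.abs(u) - a * d) + 0.5 * np.sqrt(np.maximum(disc, 0.0))
    return np.where(disc < 0.0, 0.0, np.sign(u) * np.maximum(0.0, inner))
```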
4
Sparse code shrinkage
The above results imply the following sparse code shrinkage method for denoising.
Assume that we observe a noisy version x = x + v of the data x, where v is gaussian
white noise vector. To denoise x, we transform the data to a sparse code, apply the
above ML estimation procedure component-wise, and then transform back to the
original variables. Here, we constrain the transformation to be orthogonal; this is
motivated in Section 5. To summarize:
1. First, using a noise-free training set of $x$, use some sparse coding method for determining the orthogonal matrix $W$ so that the components $s_i$ in $s = Wx$ have as sparse distributions as possible. Estimate a density model $p_i(s_i)$ for each sparse component, using the models in (5) and (7).
2. Compute for each noisy observation $\tilde{x}(t)$ of $x$ the corresponding noisy sparse components $y(t) = W\tilde{x}(t)$. Apply the shrinkage nonlinearity $g_i(\cdot)$, as defined in (6) or in (8), on each component $y_i(t)$, for every observation index $t$. Denote the obtained components by $\hat{s}_i(t) = g_i(y_i(t))$.
3. Invert the relation (1) to obtain estimates of the noise-free $x$, given by $\hat{x}(t) = W^T \hat{s}(t)$.
To estimate the sparsifying transform $W$, we assume that we have access to a noise-free realization of the underlying random vector. This assumption is not unrealistic
on many applications: for example, in image denoising it simply means that we
can observe noise-free images that are somewhat similar to the noisy image to be
treated, i.e., they belong to the same environment or context. This assumption can
be, however, relaxed in many cases, see [7]. The problem of finding an optimal
sparse code in step 1 is treated in the next section.
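Putting steps 1-3 together, a minimal sketch of the denoising pipeline might look as follows; the orthogonal $W$ and the per-component shrinkage functions are assumed to have been fitted on the noise-free training set, and the data layout (one sample per row) is an assumption of this sketch.

```python
import numpy as np

def sparse_code_shrinkage(X_noisy, W, shrink_fns):
    """Denoise samples (rows of X_noisy) with an orthogonal sparse coding
    matrix W and one fitted shrinkage nonlinearity per component."""
    Y = X_noisy @ W.T                                # step 2: y(t) = W x(t)
    S_hat = np.column_stack(
        [g(Y[:, i]) for i, g in enumerate(shrink_fns)])  # shrink each component
    return S_hat @ W                                 # step 3: x_hat(t) = W^T s_hat(t)
```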
In fact, it turns out that the shrinkage operation given above is quite similar to
the one used in the wavelet shrinkage method derived earlier by Donoho et al [3]
from a very different approach. Their estimator consisted of applying the shrinkage
operator in (6) , with different values for the parameters, on the coefficients of the
wavelet transform. There are two main differences between the two methods. The
first is the choice of the transformation. We choose the transformation using the
statistical properties of the data at hand, whereas Donoho et al use a predetermined
wavelet transform. The second important difference is that we estimate the shrinkage nonlinearities by the ML principle, again adapting to the data at hand, whereas
Donoho et al use fixed thresholding operators derived by the minimax principle.
5
Choosing the optimal sparse code
Different measures of sparseness (or nongaussianity) have been proposed in the literature [1, 4, 8, 10]. In this section, we show which measures are optimal for our
method. We shall here restrict ourselves to the class of linear, orthogonal transformations. This restriction is justified by the fact that orthogonal transformations
leave the gaussian noise structure intact, which makes the problem more simply
tractable. This restriction can be relaxed, however, see [7].
A simple, yet very attractive principle for choosing the basis for sparse coding is
to consider the data to be generated by a noisy independent component analysis
(ICA) model [10, 6, 9] :
x = As+v,
(9)
where the $s_i$ are now the independent components, and $\nu$ is multivariate gaussian noise. We could then estimate $A$ using ordinary maximum likelihood estimation of the ICA model. Under the restriction that $A$ is constrained to be orthogonal, estimation of the noise-free components $s_i$ then amounts to the above method of shrinking the values of $A^T x$; see [6]. In this ML sense, the optimal transformation matrix is thus given by $W = A^T$. In particular, using this principle means that
ordinary ICA algorithms can be used to estimate the sparse coding basis. This
is very fortunate since the computationally efficient methods for ICA estimation
enable the basis estimation even in spaces of rather high dimensions [8, 5].
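The projection of an ICA estimate onto the set of orthogonal matrices, used in the experiments below, can be done with the standard symmetric orthogonalization; this one-liner is an illustrative sketch, not the paper's code.

```python
import numpy as np

def orthogonalize(W):
    """Nearest orthogonal matrix to W: (W W^T)^{-1/2} W, computed via the SVD."""
    U, _, Vt = np.linalg.svd(W)
    return U @ Vt
```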
An alternative principle for determining the optimal sparsifying transformation is to minimize the mean-square error (MSE). In [6], a theorem is given that shows that the optimal basis in the minimum MSE sense is obtained by maximizing $\sum_{i=1}^{n} I_F(w_i^T x)$, where $I_F(s) = E\{[p'(s)/p(s)]^2\}$ is the Fisher information of the density of $s$, and the $w_i^T$ are the rows of $W$. Fisher information of a density [4] can be considered as a measure of its nongaussianity. It is well-known [4] that in the set of probability densities of unit variance, Fisher information is minimized by the gaussian density, and the minimum equals 1. Thus the theorem shows that the more nongaussian (sparse) $s$ is, the better we can reduce noise. Note, however, that Fisher information is not scale-invariant.
The former (ML) method of determining the basis matrix usually gives sparser
components than the latter method based on minimizing MSE. In the case of image
denoising, however, these two methods give essentially equivalent bases if a perceptually weighted MSE is used [6]. Thus we luckily avoid the classical dilemma of
choosing between these two optimality criteria.
6
Experiments
Image data seems to fulfill the assumptions inherent in sparse code shrinkage: It is
possible to find linear representations whose components have sparse distributions,
using wavelet-like filters [10]. Thus we performed a set of experiments to explore the
utility of sparse code shrinkage in image denoising. The experiments are reported
in more detail in [7].
Data. The data consisted of real-life images, mainly natural scenes. The images
were randomly divided into two sets. The first set was used in estimating the
matrix W that gives the sparse coding transformation, as well as in estimating the
shrinkage nonlinearities. The second set was used as a test set. It was artificially
corrupted by Gaussian noise, and sparse code shrinkage was used to reduce the
noise. The images were used in the method in the form of subwindows of 8 x 8
pixels.
Methods. The sparse coding matrix W was determined by first estimating the
ICA model for the image windows (with DC component removed) using the FastICA
algorithm [8, 5], and projecting the obtained estimate on the space of orthogonal
matrices. The training images were also used to estimate the parametric density
models of the sparse components. In the first series of experiments, the local variance was equalized as a preprocessing step [7]. This implied that the density in
(5) was a more suitable model for the densities of the sparse components; thus the
shrinkage function in (6) was used. In the second series, no such equalization was
made, and the density model (7) and the shrinkage function (8) were used [7].
Results. Fig. 2 shows, on the left, a test image which was artificially corrupted
with Gaussian noise with standard deviation 0.5 (the standard deviations of the
original images were normalized to 1). The result of applying our denoising method
(without local variance equalization) on that image is shown on the right. Visual
comparison of the images in Fig. 2 shows that our sparse code shrinkage method
cancels noise quite effectively. One sees that contours and other sharp details are
conserved quite well, while the overall reduction of noise is quite strong, which is in contrast to methods based on low-pass filtering. This result is in line with those
obtained by wavelet shrinkage [3]. More experimental results are given in [7].
7
Conclusion
Sparse coding and ICA can be applied for image feature extraction, resulting in a
wavelet-like basis for image windows [10]. As a practical application of such a basis,
we introduced the method of sparse code shrinkage. It is based on the fact that in
sparse coding the energy of the signal is concentrated on only a few components,
which are different for each observed vector. By shrinking the absolute values of the
sparse components towards zero, noise can be reduced. The method is also closely
connected to modeling image data with noisy independent component analysis [9].
We showed how to find the optimal sparse coding basis for denoising, and we developed families of probability densities that allow the shrinkage nonlinearities to
adapt accurately to the data at hand. Experiments on image data showed that the
performance of the method is very appealing. The method reduces noise without
blurring edges or other sharp features as much as linear low-pass or median filtering.
This is made possible by the strongly non-linear nature of the shrinkage operator
that takes advantage of the inherent statistical structure of natural images.
670 | 1,613 | An Integrated Vision Sensor for the
Computation of Optical Flow Singular Points
Charles M. Higgins and Christof Koch
Division of Biology, 139-74
California Institute of Technology
Pasadena, CA 91125
[chuck,koch]@klab.caltech.edu
Abstract
A robust, integrative algorithm is presented for computing the position of
the focus of expansion or axis of rotation (the singular point) in optical
flow fields such as those generated by self-motion. Measurements are
shown of a fully parallel CMOS analog VLSI motion sensor array which
computes the direction of local motion (sign of optical flow) at each pixel
and can directly implement this algorithm. The flow field singular point
is computed in real time with a power consumption of less than 2 mW.
Computation of the singular point for more general flow fields requires
measures of field expansion and rotation, which it is shown can also be
computed in real-time hardware, again using only the sign of the optical
flow field. These measures, along with the location of the singular point,
provide robust real-time self-motion information for the visual guidance
of a moving platform such as a robot.
1
INTRODUCTION
Visually guided navigation of autonomous vehicles requires robust measures of self-motion
in the environment. The heading direction, which corresponds to the focus of expansion
in the visual scene for a fixed viewing angle, is one of the primary sources of guidance
information. Psychophysical experiments [WH88] show that humans can determine their
heading direction very precisely. In general, the location of the singular point in the visual
field provides important self-motion information.
Optical flow, representing the motion seen in each local area of the visual field, is particularly compute-intensive to process in real time. We have previously shown [DHK97] a
fully parallel, low power, CMOS analog VLSI vision processor for computing the local
direction of motion. With onboard photoreceptors, each pixel computes in continuous time
a vector corresponding to the sign of the local normal flow. In this article, we show how
these motion vectors can be integrated in hardware to compute the singular point of the
optical flow field. While each individual pixel suffers from transistor mismatch and spatial
variability with respect to its neighbors, the integration of many pixels serves to average out
these irregularities and results in a highly robust computation. This compact, low power
self-motion processor is well suited for autonomous vehicle applications.
Extraction of self-motion information has been a topic of research in the machine vision
community for decades, and has generated volumes of research; see [FA97] for a good
review. While many algorithms exist for determining flow field singular points in complex
self-motion situations, few are suitable for real-time implementation. Integrated hardware
attempts at self-motion processing have only begun recently, with the work of Indiveri et
al. [IKK96]. The zero crossing in a 1D array of CMOS velocity sensors was used to detect
one component of the focus of expansion. In a separate chip, the sum of a radial array
of velocity sensors was used to compute the rate of flow field expansion, from which the
time-to-contact can be calculated. McQuirk [McQ96] built a CCD-based image processor
which used an iterative algorithm to locate consistent stable points in the image, and thus
the focus of expansion. More recently, Deutschmann et al. [DW98] have extended Indiveri
et al.'s work to 2D by summing rows and columns in a 2D CMOS motion sensor array and
using software to detect zero crossings and find the flow field singular point.
2
SINGULAR POINT ALGORITHM
In order to compute the flow field singular point, we compute the sum of the sign of optical
flow over the entire field of view. Let the field of view be centered at (0,0) and bounded
by ±L in both spatial dimensions; then (vector quantities are indicated in boldface)
$$S = \int_{-L}^{L}\!\int_{-L}^{L} U(x, y)\, dx\, dy \qquad (1)$$
where $U(x,y) = (U_x(x,y), U_y(x,y)) = \mathrm{sgn}(V(x,y))$ and $V(x,y)$ is the optical flow
field. Consider a purely expanding flow field with the focus of expansion (FOE) at the
center of the visual field. Intuitively, the vector sum of the sign of optical flow will be zero,
because each component is balanced by a spatially symmetric component with opposite
sign. As the FOE moves away from the center of the visual field, the sum will increase or
decrease depending on the FOE position.
An expanding flow field may be expressed as
$$V_e(x, y) = A(x, y) \cdot \big((x - X_e),\, (y - Y_e)\big) \qquad (2)$$
where $A(x, y)$ denotes the local rate of expansion and $(X_e, Y_e)$ is the focus of expansion. The integral (1) applied to this flow field yields
$$S = -4L \cdot (X_e, Y_e)$$
as long as $A$ is positive. Note that, due to the use of optical flow sign only, this quantity is
independent of the speed of the flow field components. We will discuss in Section 5 how
the positivity requirement of A can be relaxed somewhat.
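A quick numerical sketch (the function name and grid resolution are arbitrary choices) showing that a discretized version of the sum (1), applied to a synthetic expanding field, recovers approximately $-4L(X_e, Y_e)$:

```python
import numpy as np

def flow_sign_sum(U, cell_area):
    """Discrete approximation of Eq. (1): sum the sign-of-flow field
    U (shape (H, W, 2)) over the field of view, weighted by pixel area."""
    return U.sum(axis=(0, 1)) * cell_area

L, n = 1.0, 201
xs = np.linspace(-L, L, n)
xg, yg = np.meshgrid(xs, xs)
Xe, Ye = 0.3, -0.2                        # focus of expansion
U = np.sign(np.stack([xg - Xe, yg - Ye], axis=-1))
S = flow_sign_sum(U, (xs[1] - xs[0]) ** 2)
print(S)                                  # approx. [-4*L*Xe, -4*L*Ye] = [-1.2, 0.8]
```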
Similarly, a clockwise rotating flow field may be expressed as
$$V_r(x, y) = B(x, y) \cdot \big((y - Y_r),\, -(x - X_r)\big) \qquad (3)$$
where $B(x, y)$ denotes the local rate of rotation and $(X_r, Y_r)$ is the axis of rotation (AOR). The integral (1) applied to this flow field yields
$$S = -4L \cdot (Y_r, -X_r)$$
as long as B is positive.
Let us now consider the case of a combination of these expanding and rotating fields (2)
and (3):
$$V(x, y) = \alpha V_e + (1 - \alpha) V_r \qquad (4)$$
This flow field is spiral in shape; the parameter $\alpha$ defines the mix of the two field types. The sum in this case is more complex to evaluate, but for $\alpha$ small (rotation dominating),
$$S = -4L \cdot (C X_e + Y_r,\; C Y_e - X_r) \qquad (5)$$
and for $\alpha$ large (expansion dominating),
$$S = -4L \cdot (X_e + (1/C) Y_r,\; Y_e - (1/C) X_r) \qquad (6)$$
where $C = \frac{\alpha A}{(1 - \alpha) B}$. Since it is mathematically impossible to recover both the FOE and AOR with only two equations,¹ let us equate the FOE and AOR and concentrate on recovering the
unique singular point of this spiral flow field. In order to do this, we need a measurement
of the quantity C, which reflects the relative mix and strength of the expanding and rotating
flow fields.
2.1
COEFFICIENTS OF EXPANSION AND ROTATION
Consider a contour integral around the periphery of the visual field of the sign of optical
flow components normal to the contour of integration. If we let this contour be a square of
size 2L centered at (0,0), we can express this integral as
$$8L\, C_{exp} = \int_{-L}^{L} \big( U_y(x, L) - U_y(x, -L) \big)\, dx + \int_{-L}^{L} \big( U_x(L, y) - U_x(-L, y) \big)\, dy \qquad (7)$$
This integral can be considered as a 'template' for expanding flow fields. The quantity
Cexp reaches unity for a purely expanding flow field with FOE within the visual field, and
reaches zero for a purely rotating flow field. A similar quantity for rotation may be defined
by an integral of the sign of optical flow components parallel to the contour of integration:
$$8L\, C_{rot} = \int_{-L}^{L} \big( U_x(x, L) - U_x(x, -L) \big)\, dx + \int_{-L}^{L} \big( U_y(-L, y) - U_y(L, y) \big)\, dy \qquad (8)$$
It can be shown that for $\alpha$ small (rotation dominating), $C_{exp} \approx C$. As $\alpha$ increases, $C_{exp}$ saturates at unity. Similarly, for $\alpha$ large (expansion dominating), $C_{rot} \approx 1/C$. As $\alpha$ decreases, $C_{rot}$ saturates at unity. This suggests the following approximation to equations (5) and (6), letting $X_s = X_e = X_r$ and $Y_s = Y_e = Y_r$:
$$S = -4L \cdot (C_{exp} X_s + C_{rot} Y_s,\; C_{exp} Y_s - C_{rot} X_s) \qquad (9)$$
(9)
from which equation the singular point (X s , Ys ) may be uniquely calculated. Note that this
generalized expression also covers contracting and counterclockwise rotating fields (for
which the quantities C exp and Crot would be negative).
1 In
fact, if A and B are constant, there exists no unique solution for the FOE and AOR.
C. M. Higgins and C. Koch
702
3 HARDWARE IMPLEMENTATION
The real-time hardware implementation of the above algorithm utilizes a fully parallel
14x 13 CMOS analog VLSI motion sensor array. The elementary motion detectors are
briefly described below. Each pixel in the array creates a local motion vector when crossed
by a spatial edge; this vector is represented by two currents encoding the x and y components. These currents persist for an adjustable period of time after stimulation. By using
the serial pixel scanners at the periphery of the chip (normally used to address each pixel
individually), it is possible to connect all of these currents to the same output wire, thus implementing the sum required by the algorithm. In this mode, the current outputs of the chip
directly represent the sum S in equation (1), and power consumption is less than 2 mW.
A similar sum combining sensor row and column outputs around the periphery of the chip
could be used to implement the quantities Cexp and Crot in equations (7) and (8). Due
to the sign changes necessary, this sum cannot be directly implemented with the present
implementation. However, it is possible to emulate this sum by scanning off the vector
field and performing the sum in real-time software.
3.1 ELEMENTARY MOTION DETECTOR
The 1D elementary motion detector used in this processor is the ITI (Inhibit, Trigger, and Inhibit) sensor. Its basic operation is described in Figure 1; see [DHK97] for details. The sensor is edge sensitive, approximately invariant to stimulus contrast above 20%, and functions over a stimulus velocity range from 10-800 pixels/sec.
[Figure 1 schematic: pixels A, B, and C with intensity traces, direction voltages Vright and Vleft, and output current Iout over time]
Figure 1: ITI sensor: a spatial edge crossing the sensor from left to right triggers direction voltages for both directions Vright and Vleft in pixel B. The same edge subsequently crossing pixel C inhibits the null direction voltage Vleft. The output current is continuously computed as the difference between Vright and Vleft; the resulting positive output current Iout indicates rightward motion. Pixels B and A interact similarly to detect leftward motion, resulting in a negative output current.
The output of each 1D ITI sensor represents the order in which the three involved photoreceptors were crossed by a spatial edge. Like all local motion sensors, it suffers from the
aperture problem, and thus can only respond to the optical flow normal to the local gradients of intensity. The final result of this computation is the sign of the projection of the
normal flow onto the sensor orientation. Two such sensors placed orthogonally effectively
compute the sign of the normal flow vector.
[Figure 2 surface plots: (a) X output and (b) Y output vs. FOE X coordinate and FOE Y coordinate]
Figure 2: Hardware FOE computation: the chip was presented with a computer-generated
image of high-contrast expanding circles; the FOE location was varied under computer
control on a 2D grid. The measured chip current output has been scaled by a factor of
6 × 10^5 chip radii per Ampere. All FOE locations are shown in chip radii, where a radius
of 1.0 corresponds to the periphery of the sensor array. Data shown is the mean output over
one stimulus period; RMS variation is 0.27 chip radii.
4 SENSOR MEASUREMENTS
In Figure 2, we demonstrate the hardware computation of the FOE. To generate this data,
the chip was presented with a computer-generated image of high-contrast expanding circles. The focus of expansion was varied on a 2D grid under computer control, and the
mean of the chip's output current over one period of the stimulus was calculated for each
FOE position. This output varies periodically with the stimulus because each motion sensor stops generating output while being crossed by a stimulus edge. The RMS value of
this variation for the expanding circles stimulus is 0.27 chip radii; this variation can be
decreased by increasing the resolution of the sensor array. The data shows that the FOE
is precisely located when it is within the chip's visual field. Each component of the chip
output is virtually independent of the other. When the FOE is outside the chip's visual field,
the chip output saturates, but continues to indicate the correct direction towards the FOE.
The chip's AOR response to a rotating 'wagon wheel' stimulus is qualitatively and quantitatively very similar, and is not shown for lack of space.
In Figure 3, the coefficients of expansion and rotation are shown for the same expanding
circles stimulus used in Figure 2. Since these coefficients cannot be calculated directly by
the present hardware, the flow field was scanned out of the chip and these quantities were
calculated in real-time software. While the FOE is on the chip, Cexp remains near unity, dropping off as the FOE leaves the chip. As expected, Crot remains near zero regardless
of the FOE position. Note that, because these coefficients are calculated by integrating a
ring of only 48 sensors near the chip periphery, they have more spatial noise than the FOE
calculation which integrates all 182 motion sensors.
In Figure 4, a spiral stimulus is presented, creating an equal combination of expansion and
rotation (α = 0.5 in equation (4)). The singular point is calculated from equation (9) using
the optical flow field scanned from the chip . Due to the combination of the coefficients with
the sum computation, more spatial noise has been introduced than was seen in the FOE
case. However, the singular point is still clearly located when within the chip. When the
[Figure 3 surface plots: (a) Cexp and (b) Crot vs. FOE X coordinate and FOE Y coordinate]
Figure 3: Coefficients of expansion and rotation: again using the computer-generated expanding circles stimulus, the FOE was varied on a 2D grid. All FOE locations are shown
in chip radii, where a radius of 1.0 corresponds to the periphery of the sensor array. Data
shown is the mean output over one stimulus period.
[Figure 4 surface plots: (a) X output and (b) Y output vs. singular point X and Y coordinates]
Figure 4: Singular point calculation: the chip was shown a computer-generated image of a
rotating spiral; the singular point location was varied under computer control on a 2D grid.
All singular point locations are shown in chip radii, where a radius of 1.0 corresponds to
the periphery of the sensor array. Data shown is the mean output over one stimulus period.
singular point leaves the chip, the calculated position drops towards zero as the algorithm
can no longer compute the mix of expansion and rotation.
5 DISCUSSION
We have presented a simple, robust algorithm for computing the singular point of an optical
flow field and demonstrated a real-time hardware implementation. Due to the use of the
sign of optical flow only, the solution is independent of the relative velocities of components
of the flow field. Because a large number of individual sensors are integrated to produce
this output, it is quite robust to the spatial variability of the individual motion sensors. We
have also shown how coefficients indicating the mix of expansion and rotation may be
computed in hardware. A motion sensor array which directly computes these coefficients,
as well as the flow field singular point, is currently in fabrication.
In order to derive the equations relating the flow field sums to the FOE, it was necessary
in Section 2 to make the unrealistic assumption that the optical flow field contains no areas
of zero optical flow. Due to the persistence time of the motion sensor used, it is possible to
relax this assumption significantly. As long as all parts of the visual field receive stimulation
within the persistence time of the motion output, the optical flow field seen by the motion
sensor array will contain no zeros and the singular point output will remain correct. This
is a simple example of temporal motion integration. In fact, it is possible in practice to
relax this assumption even further: as long as the location of zeros in the optical flow field
is spatially random, the magnitude of the output will be reduced but it will continue to
provide a clear error signal pointing towards the flow field singular point.
Because of the fully parallel design of the motion sensor array, larger arrays may be obtained by simply replicating pixels. The FOE summing algorithm is not affected by this
increase in the number of pixels. As the number of pixels is increased, the average power
consumption will increase sublinearly, because the sum output current (the dominant source
of prolonged power consumption) can be maintained at approximately the same absolute
value regardless of the number of pixels integrated. However, the periodic variation of
the output with the stimulus will be decreased, the precision of the FOE output will be
improved, and the need for temporal averaging will be reduced.
Acknowledgments
This research was supported by the Caltech Center for Neuromorphic Systems Engineering
as a part of the National Science Foundation's Engineering Research Center program, as
well as by the Office of Naval Research. The authors wish to thank Rainer Deutschmann
for stimulating discussions.
References
[DHK97] R. Deutschmann, C. Higgins, and C. Koch. Real-time analog VLSI sensors for 2-D direction of motion. In Proceedings of the Int. Conf. on Artificial Neural Networks, pages 1163-1168. Springer Verlag, 1997.
[DW98] R. A. Deutschmann and O. G. Wenisch. Compressive computation in analog VLSI motion sensors. In Proceedings of Deutsche Arbeitsgemeinschaft für Mustererkennung, 1998.
[FA97] C. Fermüller and Y. Aloimonos. On the geometry of visual correspondence. International Journal of Computer Vision, 21(3):233-247, 1997.
[IKK96] G. Indiveri, J. Kramer, and C. Koch. Parallel analog VLSI architectures for computation of heading direction and time-to-contact. In D. S. Touretzky, M. C. Mozer, and M. E. Hasselmo, editors, Advances in Neural Information Processing Systems, volume 8, pages 720-726, Cambridge, MA, 1996. MIT.
[McQ96] I. McQuirk. An analog VLSI chip for estimating the focus of expansion. Technical Report 1577, Massachusetts Institute of Technology, Artificial Intelligence Laboratory, 1996.
[WH88] W. Warren and D. Hannon. Direction of self-motion is perceived from optical flow. Nature, 336(6195):162-163, 1988.
Reinforcement Learning based on On-line EM Algorithm
Masa-aki Sato†
†ATR Human Information Processing Research Laboratories
Seika, Kyoto 619-0288, Japan
masaaki@hip.atr.co.jp
Shin Ishii‡†
‡Nara Institute of Science and Technology
Ikoma, Nara 630-0101, Japan
ishii@is.aist-nara.ac.jp
Abstract
In this article, we propose a new reinforcement learning (RL)
method based on an actor-critic architecture. The actor and
the critic are approximated by Normalized Gaussian Networks
(NGnet), which are networks of local linear regression units. The
NGnet is trained by the on-line EM algorithm proposed in our previous paper. We apply our RL method to the task of swinging-up
and stabilizing a single pendulum and the task of balancing a double pendulum near the upright position. The experimental results
show that our RL method can be applied to optimal control problems having continuous state/action spaces and that the method
achieves good control with a small number of trial-and-errors.
1 INTRODUCTION
Reinforcement learning (RL) methods (Barto et al., 1990) have been successfully
applied to various Markov decision problems having finite state/action spaces, such
as the backgammon game (Tesauro, 1992) and a complex task in a dynamic environment (Lin, 1992). On the other hand, applications to continuous state/action
problems (Werbos, 1990; Doya, 1996; Sofge & White, 1992) are much more difficult
than the finite state/action cases. Good function approximation methods and fast
learning algorithms are crucial for successful applications.
In this article, we propose a new RL method that has the above-mentioned two
features. This method is based on an actor-critic architecture (Barto et al., 1983),
although the detailed implementations of the actor and the critic are quite different from those in the original actor-critic model. The actor and the critic in our
method estimate a policy and a Q-function, respectively, and are approximated by Normalized Gaussian Networks (NGnet) (Moody & Darken, 1989). The NGnet is a
network of local linear regression units. The model softly partitions the input space
by using normalized Gaussian functions, and each local unit linearly approximates
the output within its partition. As pointed out by Sutton (1996), local models such
as the NGnet are more suitable than global models such as multi-layered perceptrons, for avoiding serious learning interference in on-line RL processes. The NGnet
is trained by the on-line EM algorithm proposed in our previous paper (Sato &
Ishii, 1998). It was shown that this on-line EM algorithm is faster than a gradient
descent algorithm. In the on-line EM algorithm, the positions of the local units
can be adjusted according to the input and output data distribution. Moreover,
unit creation and unit deletion are performed according to the data distribution.
Therefore , the model can be adapted to dynamic environments in which the input
and output data distribution changes with time (Sato & Ishii, 1998).
We have applied the new RL method to optimal control problems for deterministic
nonlinear dynamical systems. The first experiment is the task of swinging-up and
stabilizing a single pendulum with a limited torque (Doya, 1996) . The second
experiment is the task of balancing a double pendulum where a torque is applied
only to the first pendulum. Our RL method based on the on-line EM algorithm
demonstrated good performances in these experiments.
2 NGNET AND ON-LINE EM ALGORITHM
In this section, we review the on-line EM algorithm for the NGnet proposed in our
previous paper (Sato & Ishii, 1998). The NGnet (Moody & Darken, 1989) , which
transforms an N-dimensional input vector x to a D-dimensional output vector y , is
defined by the following equations.
N_i(x) = G_i(x) / Σ_{j=1}^{M} G_j(x)    (1a)
y = Σ_{i=1}^{M} N_i(x) (W_i x + b_i)    (1b)
M denotes the number of units, and the prime (′) denotes a transpose. G_i(x) is an N-dimensional Gaussian function, which has an N-dimensional center μ_i and an (N×N)-dimensional covariance matrix Σ_i. W_i and b_i are a (D×N)-dimensional linear regression matrix and a D-dimensional bias vector, respectively. Subsequently, we use the notations W̃_i ≡ (W_i, b_i) and x̃ ≡ (x′, 1)′.
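As a reading aid, here is a minimal NumPy sketch of the NGnet output (1); the function name and the data layout are our own illustration, not the authors' code.

import numpy as np

# NGnet forward pass: mu[i] (N,), Sigma[i] (N,N), W[i] (D,N), b[i] (D,)
def ngnet_forward(x, mu, Sigma, W, b):
    M = len(mu)
    G = np.empty(M)
    for i in range(M):
        d = x - mu[i]
        G[i] = (np.linalg.det(2.0 * np.pi * Sigma[i]) ** -0.5
                * np.exp(-0.5 * d @ np.linalg.solve(Sigma[i], d)))
    N_act = G / G.sum()                 # normalized Gaussian activations
    # y = sum_i N_i(x) (W_i x + b_i)
    return sum(N_act[i] * (W[i] @ x + b[i]) for i in range(M))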
The NGnet can be interpreted as a stochastic model, in which a pair of an input and
an output, (x, y), is a stochastic event. For each event, a unit index i ∈ {1, ..., M}
is assumed to be selected, which is regarded as a hidden variable. The stochastic
model is defined by the probability distribution for a triplet (x, y , i), which is called
a complete event:
P(x, y, i|θ) = (2π)^{−(D+N)/2} σ_i^{−D} |Σ_i|^{−1/2} M^{−1} exp[ −(1/2)(x − μ_i)′ Σ_i^{−1} (x − μ_i) − (1/(2σ_i²)) (y − W̃_i x̃)² ].    (2)
Here, θ ≡ {μ_i, Σ_i, σ_i², W̃_i | i = 1, ..., M} is a set of model parameters. We can easily prove that the expectation value of the output y for a given input x, i.e., E[y|x] ≡ ∫ y P(y|x, θ) dy, is identical to equation (1). Namely, the probability distribution (2) provides a stochastic model for the NGnet.
From a set of T events (observed data) (X, Y) ≡ {(x(t), y(t)) | t = 1, ..., T}, the model parameter θ of the stochastic model (2) can be determined by the maximum likelihood estimation method, in particular, by the EM algorithm (Dempster et al., 1977). The EM algorithm repeats the following E- and M-steps.
E (Estimation) step: Let θ̄ be the present estimator. By using θ̄, the posterior probability that the i-th unit is selected for (x(t), y(t)) is given as

P(i|x(t), y(t), θ̄) = P(x(t), y(t), i|θ̄) / Σ_{j=1}^{M} P(x(t), y(t), j|θ̄).    (3)
M (Maximization) step: Using the posterior probability (3), the expected log-likelihood L(θ|θ̄, X, Y) for the complete events is defined by

L(θ|θ̄, X, Y) = Σ_{t=1}^{T} Σ_{i=1}^{M} P(i|x(t), y(t), θ̄) log P(x(t), y(t), i|θ).    (4)
Since an increase of L(θ|θ̄, X, Y) implies an increase of the log-likelihood for the observed data (X, Y) (Dempster et al., 1977), L(θ|θ̄, X, Y) is maximized with respect to θ. A solution of the necessity condition ∂L/∂θ = 0 is given by (Xu et al., 1995):
μ_i = ⟨x⟩_i(T) / ⟨1⟩_i(T)    (5a)
Σ_i^{−1} = [ ⟨xx′⟩_i(T) / ⟨1⟩_i(T) − μ_i(T) μ_i′(T) ]^{−1}    (5b)
W̃_i = ⟨y x̃′⟩_i(T) [ ⟨x̃ x̃′⟩_i(T) ]^{−1}    (5c)
σ_i² = (1/D) [ ⟨|y|²⟩_i(T) − Tr( W̃_i ⟨x̃ y′⟩_i(T) ) ] / ⟨1⟩_i(T),    (5d)
where ⟨·⟩_i denotes a weighted mean with respect to the posterior probability (3) and it is defined by

⟨f(x, y)⟩_i(T) ≡ (1/T) Σ_{t=1}^{T} f(x(t), y(t)) P(i|x(t), y(t), θ̄).    (6)
The EM algorithm introduced above is based on batch learning (Xu et al., 1995), namely, the parameters are updated after seeing all of the observed data. We introduce here an on-line version (Sato & Ishii, 1998) of the EM algorithm. Let θ(t) be the estimator after the t-th observed datum (x(t), y(t)). In this on-line EM algorithm, the weighted mean (6) is replaced by

⟨⟨f(x, y)⟩⟩_i(T) ≡ η(T) Σ_{t=1}^{T} ( ∏_{s=t+1}^{T} λ(s) ) f(x(t), y(t)) P(i|x(t), y(t), θ(t−1)).    (7)
The parameter λ(t) ∈ [0, 1] is a discount factor, which is introduced for forgetting the effect of earlier inaccurate estimators. η(T) ≡ ( Σ_{t=1}^{T} ∏_{s=t+1}^{T} λ(s) )^{−1} is a normalization coefficient and it is iteratively calculated by η(t) = (1 + λ(t)/η(t−1))^{−1}.
The modified weighted mean ⟨⟨·⟩⟩_i can be obtained by the step-wise equation:

⟨⟨f(x, y)⟩⟩_i(t) = ⟨⟨f(x, y)⟩⟩_i(t−1) + η(t) [ f(x(t), y(t)) P_i(t) − ⟨⟨f(x, y)⟩⟩_i(t−1) ],    (8)
where P_i(t) ≡ P(i|x(t), y(t), θ(t−1)). Using the modified weighted mean, the new parameters are obtained by the following equations:

Λ̃_i(t) = (1/(1 − η(t))) [ Λ̃_i(t−1) − P_i(t) Λ̃_i(t−1) x̃(t) x̃′(t) Λ̃_i(t−1) / ( (1/η(t) − 1) + P_i(t) x̃′(t) Λ̃_i(t−1) x̃(t) ) ]    (9a)
μ_i(t) = ⟨⟨x⟩⟩_i(t) / ⟨⟨1⟩⟩_i(t)    (9b)
W̃_i(t) = W̃_i(t−1) + η(t) P_i(t) (y(t) − W̃_i(t−1) x̃(t)) x̃′(t) Λ̃_i(t)    (9c)
σ_i²(t) = (1/D) [ ⟨⟨|y|²⟩⟩_i(t) − Tr( W̃_i(t) ⟨⟨x̃ y′⟩⟩_i(t) ) ] / ⟨⟨1⟩⟩_i(t),    (9d)

where Λ̃_i(t) ≡ [ ⟨⟨x̃ x̃′⟩⟩_i(t) ]^{−1}.
(9d)
It can be proved that this on-line EM algorithm is equivalent to the stochastic
approximation for finding the maximum likelihood estimator, if the time course of
the discount factor A(t) is given by
A(t) t~ 1 - (1 - a)/(at + b),
where a (1
(11)
> a > 0) and b are constants (Sato & Ishii, 1998).
We also employ dynamic unit manipulation mechanisms in order to efficiently allocate the units (Sato & Ishii, 1998). The probability P(x(t), y(t), i I (}(t-1)) indicates
how probable the i-th unit produces the datum (x(t) , y(t)) with the present parameter {)( t - 1) . If the probability for every unit is less than some threshold value , a
new unit is produced to account for the new datum. The weighted mean ? 1 ? i (t)
indicates how much the i-th unit has been used to account for the data until t. If
the mean becomes less than some threshold value, this unit is deleted.
In order to deal with a singular input distribution, a regularization for Σ_i^{−1}(t) is introduced as follows:

Σ_i^{−1}(t) = [ ( ⟨⟨xx′⟩⟩_i(t) − μ_i(t) μ_i′(t) ⟨⟨1⟩⟩_i(t) + α ⟨⟨Δ_i²⟩⟩_i(t) I_N ) / ⟨⟨1⟩⟩_i(t) ]^{−1}    (12a)
⟨⟨Δ_i²⟩⟩_i(t) = ( ⟨⟨|x|²⟩⟩_i(t) − |μ_i(t)|² ⟨⟨1⟩⟩_i(t) ) / N,    (12b)

where I_N is the (N×N)-dimensional identity matrix and α is a small constant. The corresponding Λ̃_i(t) can be calculated in an on-line manner using an equation similar to (9a) (Sato & Ishii, 1998).
3 REINFORCEMENT LEARNING
In this section, we propose a new RL method based on the on-line EM algorithm
described in the previous section. In the following, we consider optimal control problems for deterministic nonlinear dynamical systems having continuous state/action
spaces. It is assumed that there is no knowledge of the controlled system. An
actor-critic architecture (Barto et al., 1983) is used for the learning system. In the
original actor-critic model, the actor and the critic approximated the probability
of each action and the value function, respectively, and were trained by using the
TD-error. The actor and the critic in our RL method are different from those in
the original model as explained later.
For the current state, x_c(t), of the controlled system, the actor outputs a control signal (action) u(t), which is given by the policy function Ω(·), i.e., u(t) = Ω(x_c(t)). The controlled system changes its state to x_c(t + 1) after receiving the control signal u(t). Subsequently, a reward r(x_c(t), u(t)) is given to the learning system. The objective of the learning system is to find the optimal policy function that maximizes the discounted future return defined by

V(x_c) ≡ Σ_{t=0}^{∞} γ^t r(x_c(t), Ω(x_c(t))) |_{x_c(0) = x_c},    (13)
where 0 < γ < 1 is a discount factor. V(x_c), which is called the value function, is defined for the current policy function Ω(·) employed by the actor. The Q-function is defined by

Q(x_c, u) ≡ γ V(x_c(t + 1)) + r(x_c(t), u(t)),    (14)

where x_c(t) = x_c and u(t) = u are assumed. The value function can be obtained from the Q-function:

V(x_c) = Q(x_c, Ω(x_c)).    (15)

The Q-function should satisfy the consistency condition

Q(x_c(t), u(t)) = γ Q(x_c(t + 1), Ω(x_c(t + 1))) + r(x_c(t), u(t)).    (16)
In our RL method, the policy function and the Q-function are approximated by the
NGnets, which are called the actor-network and the critic-network, respectively. In
the learning phase, a stochastic actor is necessary in order to explore a better policy.
For this purpose, we employ a stochastic model defined by (2) , corresponding to
the actor-network. A stochastic action is generated in the following way. A unit
index i is selected randomly according to the conditional probability P(i|x_c) for a given state x_c. Subsequently, an action u is generated randomly according to the conditional probability P(u|x_c, i) for a given x_c and the selected i. The value
function can be defined for either the stochastic policy or the deterministic policy.
Since the controlled system is deterministic, we use the value function defined for
the deterministic policy which is given by the actor-network.
The learning process proceeds as follows. For the current state x_c(t), a stochastic action u(t) is generated by the stochastic model corresponding to the current actor-network. At the next time step, the learning system gets the next state x_c(t + 1) and the reward r(x_c(t), u(t)). The critic-network is trained by the on-line EM algorithm. The input to the critic-network is (x_c(t), u(t)). The target output is given by the right hand side of (16), where the Q-function and the deterministic policy function Ω(·) are calculated using the current critic-network and the current actor-network, respectively. The actor-network is also trained by the on-line EM algorithm. The input to the actor-network is x_c(t). The target output is given by using the gradient of the critic-network (Sofge & White, 1992):

u_target = Ω(x_c(t)) + ε ∂Q(x_c(t), u)/∂u |_{u = Ω(x_c(t))},    (17)

where the Q-function and the deterministic policy function Ω(·) are calculated using the modified critic-network and the current actor-network, respectively. ε is a small constant. This target output gives a better action, which increases the Q-function value for the current state x_c(t), than the current deterministic action Ω(x_c(t)).
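Schematically, one interaction step of this scheme might look as follows; Q, Omega and train stand for the critic-network, the actor-network and the on-line EM update, all invented names, and the Q-gradient in (17) is approximated by a finite difference for a scalar action.

def actor_critic_step(xc, u, xc_next, r, Q, Omega, train,
                      gamma=0.95, eps=0.1, h=1e-3):
    # critic target: right hand side of the consistency condition (16)
    q_target = gamma * Q(xc_next, Omega(xc_next)) + r
    train(Q, inputs=(xc, u), target=q_target)
    # actor target (17): current action nudged along the Q-gradient
    a = Omega(xc)
    dq_du = (Q(xc, a + h) - Q(xc, a - h)) / (2.0 * h)
    train(Omega, inputs=xc, target=a + eps * dq_du)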
In the above learning scheme, the critic-network and the actor-network are updated
concurrently. One can consider another learning scheme. In this scheme, the learning system tries to control the controlled system for a given period of time by using
the fixed actor-network. In this period, the critic-network is trained to estimate the
Q-function for the fixed actor-network . The state trajectory in this period is saved.
At the next stage, the actor-network is trained along the saved trajectory using the
critic-network modified in the first stage.
4 EXPERIMENTS
The first experiment is the task of swinging-up and stabilizing a single pendulum
with a limited torque (Doya, 1996). The state of the pendulum is represented by x_c = (φ̇, φ), where φ and φ̇ denote the angle from the upright position and the angular velocity of the pendulum, respectively. The reward r(x_c(t), u(t)) is assumed to be given by f(x_c(t + 1)), where

f(x_c) = exp( −φ̇² / (2v₁²) − φ² / (2v₂²) ).    (18)

v₁ and v₂ are constants. The reward (18) encourages the pendulum to stay high.
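In code, the reward (18) is simply a Gaussian-shaped bonus around the upright, motionless state; the widths below are illustrative placeholders, not the values used in the experiments.

import numpy as np

def reward(phi_dot, phi, v1=1.0, v2=1.0):   # v1, v2: illustrative constants
    return np.exp(-phi_dot**2 / (2.0 * v1**2) - phi**2 / (2.0 * v2**2))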
After releasing the pendulum from a vicinity of the upright position , the control
and the learning process of the actor-critic network is conducted for 7 seconds. This
is a single episode. The reinforcement learning is done by repeating these episodes.
After 40 episodes, the system is able to make the pendulum achieve an upright
position from almost every initial state. Even from a low initial position , the system
swings the pendulum several times and stabilizes it at the upright position. Figure
1 shows a control process, i.e., stroboscopic time-series of the pendulum, using the
deterministic policy after training. According to our previous experiment, in which
both of the actor- and critic- networks are the NGnets with fixed centers trained
by the gradient descent algorithm, a good control was obtained after about 2000
episodes. Therefore , our new RL method is able to obtain a good control much
faster than that based on the gradient descent algorithm.
The second experiment is the task of balancing a double pendulum near the upright position. A torque is applied only to the first pendulum. The state of the
pendulum is represented by x_c = (φ̇₁, φ̇₂, φ₁, φ₂), where φ₁ and φ₂ are the first pendulum's angle from the upright direction and the second pendulum's angle from the first pendulum's direction, respectively. φ̇₁ (φ̇₂) is the angular velocity of the first
(second) pendulum. The reward is given by the height of the second pendulum's
end from the lowest position. After 40 episodes, the system is able to stabilize
the double pendulum. Figure 2 shows the control process using the deterministic
policy after training. The upper two figures show stroboscopic time-series of the
pendulum. The dashed, dotted, and solid lines in the bottom figure denote φ₁/π, φ₂/π, and the control signal u produced by the actor-network, respectively. After
a transient period, the pendulum is successfully controlled to stay near the upright
position.
The numbers of units in the actor- (critic-) networks after training are 50 (109) and
96 (121) for the single and double pendulum cases, respectively. The RL method
using center-fixed NGnets trained by the gradient descent algorithm employed 441
(= 21²) actor units and 18,081 (= 21² × 41) critic units, for the single pendulum task. For the double pendulum task, this scheme did not work even when 14,641 (= 11⁴) actor units and 161,051 (= 11⁴ × 11) critic units were prepared. The numbers of
units in the NGnets trained by the on-line EM algorithm scale moderately as the
input dimension increases.
5 CONCLUSION
In this article, we proposed a new RL method based on the on-line EM algorithm .
We showed that our RL method can be applied to the task of swinging-up and
M. Sato and S. Ishii
1058
stabilizing a single pendulum and the task of balancing a double pendulum near
the upright position. The number of trial-and-errors needed to achieve good control
was found to be very small in the two tasks. In order to apply a RL method
to continuous state/action problems, good function approximation methods and
fast learning algorithms are crucial. The experimental results showed that our RL
method has both features.
References
Barto, A. G., Sutton, R. S., & Anderson, C. W. (1983). IEEE Transactions on Systems, Man, and Cybernetics, 13, 834-846.
Barto, A. G., Sutton, R. S., & Watkins, C. J. C. H. (1990). Learning and Computational Neuroscience: Foundations of Adaptive Networks (pp. 539-602), MIT Press.
Dempster, A. P., Laird, N. M., & Rubin, D. B. (1977). Journal of the Royal Statistical Society B, 39, 1-22.
Doya, K. (1996). Advances in Neural Information Processing Systems 8 (pp. 1073-1079), MIT Press.
Lin, L. J. (1992). Machine Learning, 8, 293-321.
Moody, J., & Darken, C. J. (1989). Neural Computation, 1, 281-294.
Sato, M., & Ishii, S. (1998). ATR Technical Report, TR-H-243, ATR.
Sofge, D. A., & White, D. A. (1992). Handbook of Intelligent Control (pp. 259-282), Van Nostrand Reinhold.
Sutton, R. S. (1996). Advances in Neural Information Processing Systems 8 (pp. 1038-1044), MIT Press.
Tesauro, G. J. (1992). Machine Learning, 8, 257-278.
Werbos, P. J. (1990). Neural Networks for Control (pp. 67-95), MIT Press.
Xu, L., Jordan, M. I., & Hinton, G. E. (1995). Advances in Neural Information Processing Systems 7 (pp. 633-640), MIT Press.
[Figures 1 and 2: stroboscopic time sequences of the inverted pendulum under the learned deterministic policy; horizontal axis: time (sec.)]
Regularizing AdaBoost
Gunnar Rätsch, Takashi Onoda, Klaus-R. Müller
GMD FIRST, Rudower Chaussee 5, 12489 Berlin, Germany
{raetsch, onoda, klaus }@first.gmd.de
Abstract
Boosting methods maximize a hard classification margin and are
known as powerful techniques that do not exhibit overfitting for low
noise cases. Also for noisy data boosting will try to enforce a hard
margin and thereby give too much weight to outliers, which then
leads to the dilemma of non-smooth fits and overfitting. Therefore
we propose three algorithms to allow for soft margin classification
by introducing regularization with slack variables into the boosting
concept: (1) AdaBoost_reg and regularized versions of (2) linear
and (3) quadratic programming AdaBoost. Experiments show the
usefulness of the proposed algorithms in comparison to another soft
margin classifier: the support vector machine.
1 Introduction
Boosting and other ensemble methods have been used with success in several applications, e.g. OCR [13, 8]. For low noise cases several lines of explanation have
been proposed as candidates for explaining the well functioning of boosting methods. (a) Breiman proposed that during boosting also a "bagging effect" takes place
[3] which reduces the variance and effectively limits the capacity of the system and
(b) Freund et al. [12] show that boosting classifies with large margins, since the
error function of boosting can be written as a function of the margin and every
boosting step tries to minimize this function by maximizing the margin [9, 11].
Recently, studies with noisy patterns have shown that boosting does indeed overfit
on noisy data, this holds for boosted decision trees [10], RBF nets [11] and also
other kinds of classifiers (e.g. [7]). So it is clearly a myth that boosting methods
will not overfit. The fact that boosting is trying to maximize the margin, is exactly
also the argument that can be used to understand why boosting must necessarily
overfit for noisy patterns or overlapping distributions and we give asymptotic arguments for this statement in section 3. Because the hard margin (smallest margin in
the trainings set) plays a central role in causing overfitting, we propose to relax the
hard margin classification and allow for misclassifications by using the soft margin
classifier concept that has been applied to support vector machines successfully [5].
?permanent address: Communication & Information Research Lab. CRIEPI, 2-11-1
Iwado kita, Komae-shi, Tokyo 201-8511, Japan.
Our view is that the margin concept is central for the understanding of both support vector machines and boosting methods. So far it is not clear what the optimal
margin distribution should be that a learner has to achieve for optimal classification
in the noisy case. For data without noise a hard margin might be the best choice.
However, for noisy data there is always the trade-off in believing in the data or
mistrusting it, as the very data point could be an outlier. In general (e.g. neural
network) learning strategies this leads to the introduction of regularization which
reflects the prior that we have about a problem. We will also introduce a regularization strategy (analogous to weight decay) into boosting. This strategy uses
slack variables to achieve a soft margin (section 4). Numerical experiments show
the validity of our regularization approach in section 5 and finally a brief conclusion
is given .
2
AdaBoost Algorithm
Let {ht(x) : t = 1, ... ,T} be an ensemble of T hypotheses defined on input vector
x and c = [c₁ ... c_T] their weights satisfying c_t > 0 and |c| = Σ_t c_t = 1. In the binary classification case, the output is one of two class labels, i.e. h_t(x) = ±1.
The ensemble generates the label which is the weighted majority of the votes:
sgn(Σ_t c_t h_t(x)). In order to train this ensemble of T hypotheses {h_t(x)} and c, several algorithms have been proposed: bagging, where the weighting is simply c_t = 1/T [2], and AdaBoost/Arcing, where the weighting scheme is more complicated [12]. In the following we give a brief description of AdaBoost/Arcing. We use
a special form of Arcing, which is equivalent to AdaBoost [4]. In the binary classification case we define the margin for an input-output pair z_i = (x_i, y_i), i = 1, ..., l, by

mg(z_i, c) = y_i Σ_{t=1}^{T} c_t h_t(x_i),    (1)
which is between −1 and 1, if |c| = 1. The correct class is predicted, if the margin at z is positive. When the positivity of the margin value increases, the decision correctness becomes larger. AdaBoost maximizes the margin by (asymptotically) minimizing a function of the margin mg(z_i, c) [9, 11]

g(b) = Σ_{i=1}^{l} exp{ −(|b|/2) mg(z_i, c) },    (2)
where b = [b₁ ... b_T] and |b| = Σ_t b_t (starting from b = 0). Note that b_t is the unnormalized weighting of the hypothesis h_t, whereas c is simply a normalized version of b, i.e. c = b/|b|. In order to find the hypothesis h_t the learning examples
Zi are weighted in each iteration t with Wt(Zi). Using a bootstrap on this weighted
sample we train h t ; alternatively a weighted error function can be used (e.g. weighted
MSE). The weights w_t(z_i) are computed according to¹

w_t(z_i) = exp{ −|b_{t−1}| mg(z_i, c_{t−1})/2 } / Σ_{j=1}^{l} exp{ −|b_{t−1}| mg(z_j, c_{t−1})/2 }    (3)

and the training error ε_t of h_t is computed as ε_t = Σ_{i=1}^{l} w_t(z_i) I(y_i ≠ h_t(x_i)), where I(true) = 1 and I(false) = 0. For each given hypothesis h_t we have to find a weight b_t, such that g(b) is minimized. One can optimize this parameter by a line search
¹This direct way for computing the weights is equivalent to the update rule of AdaBoost.
or directly by analytic minimization [4], which gives b_t = log(1 − ε_t) − log ε_t. Interestingly, we can write

w_t(z_i) = [ ∂g(b_{t−1})/∂mg(z_i, b_{t−1}) ] / Σ_{j=1}^{l} [ ∂g(b_{t−1})/∂mg(z_j, b_{t−1}) ],    (4)
as a gradient of g(b_{t−1}) with respect to the margins. The weighted minimization with w_t(z_i) will give a hypothesis h_t which is an approximation to the best possible hypothesis h_t* that would be obtained by minimizing g directly. Note that the weighted minimization (bootstrap, weighted LS) will not necessarily give h_t*, even if ε_t is minimized [11]. AdaBoost is therefore an approximate gradient descent method which minimizes g asymptotically.
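For concreteness, a compact Python sketch of this training loop follows; fit_weighted stands for any weighted base learner (an invented name), and the multiplicative re-weighting is equivalent to the weights (3) up to normalization.

import numpy as np

def adaboost(X, y, fit_weighted, T):
    l = len(y)
    w = np.full(l, 1.0 / l)                   # initial pattern weights
    hyps, b = [], []
    for _ in range(T):
        h = fit_weighted(X, y, w)             # base hypothesis h_t, returns +/-1
        mistakes = (h(X) != y)
        eps = w[mistakes].sum()               # weighted error; 0 < eps < 0.5 assumed
        bt = np.log(1.0 - eps) - np.log(eps)  # analytic minimizer of g
        hyps.append(h); b.append(bt)
        w = w * np.exp(bt * mistakes)         # re-weighting, cf. Eq. (3)
        w /= w.sum()
    c = np.array(b) / np.sum(b)               # normalized weights c = b/|b|
    return hyps, c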
3 Hard margins
A decrease of g(c, |b|) := g(b) is predominantly achieved by improvements of the margin mg(z_i, c). If the margin mg(z_i, c) is negative, then the error g(c, |b|) takes clearly a big value, which is additionally amplified by |b|. So, AdaBoost tries to decrease the negative margin efficiently to improve the error g(c, |b|).
Now, let us consider the asymptotic case, where the number of iterations and therefore also |b| take large values [9]. In this case, when the values of all mg(z_i, c), i = 1, ..., l, are almost the same but have small differences, these differences are amplified strongly in g(c, |b|). Obviously the function g(c, |b|) is asymptotically very sensitive to small differences between margins. Therefore, the margins
mg(z_i, c) of the training patterns from the margin area (boundary area between classes) should asymptotically converge to the same value. From Eq. (3), when |b| takes a very big value, AdaBoost learning becomes a "hard competition" case: only the pattern with smallest margin will get high weights, the other patterns are effectively neglected in the learning process. In order to confirm that the above reasoning is correct, Fig. 1 shows margin distributions after 10⁴ AdaBoost iterations for a toy example [9] at different noise levels generated by the uniform distribution U(0.0, σ²) (left). From this figure, it becomes apparent that the margin distribution
asymptotically makes a step at a fixed size of the margin for training patterns which
are in the margin area. In previous studies [9, 11] we observed that those patterns
exhibit a large overlap to support vectors in support vector machines. The numerical results support our theoretical asymptotic analysis. The property of AdaBoost
to produce a big margin area (no pattern in the area, i.e. a hard margin), will not
always lead to the best generalization ability (d. [5, 11]). This is especially true,
Figure 1: Margin distributions for AdaBoost (left) for different noise levels (σ² = 0% (dotted), 9% (dashed), 16% (solid)) with fixed number of RBF centers for the base hypothesis, and typical overfitting behaviour in the generalization error as a function of the number of iterations (middle) and a typical decision line (right) generated by AdaBoost using RBF networks in the case with noise (here: 30 centers and σ² = 16%; smoothed)
if the training patterns have classification or input noise. In our experiments with
noisy data, we often observed that AdaBoost made overfitting (for a high number
of boosting iterations). Fig. 1 (middle) shows a typical overfitting behaviour in the
generalization error for AdaBoost: after only 80 boosting iterations the best generalization performance is already achieved. Quinlan [10] and Grove et al. [7] also
observed overfitting and that the generalization performance of AdaBoost is often
worse than that of the single classifier, if the data has classification noise .
The first reason for overfitting is the increasing value of |b|: noisy patterns (e.g. badly
labelled) can asymptotically have an "unlimited" influence to the decision line leading to overfitting (cf. Eq. (3)). Another reason is the classification with a hard
margin, which also means that all training patterns will asymptotically be correctly
classified (without any capacity limitation!). In the presence of noise this will certainly be not the right concept, because the best decision line (e.g. Bayes) usually
will not give a training error of zero. So, the achievement of large hard margins for
noisy data will produce hypotheses which are too complex for the problem.
4 How to get Soft Margins
Changing AdaBoost's error function
In order to avoid overfitting, we introduce slack variables, which are similar to those of the support vector algorithm
[5, 14], into AdaBoost.
We know that all training patterns will get non-negative stabilities after many iterations (see Fig. 1 (left)), i.e. mg(z_i, c) ≥ ρ for all i = 1, ..., l, where ρ is the minimum margin of the patterns. Due to this fact, AdaBoost often produces high weights for the difficult training patterns by enforcing a non-negative margin ρ ≥ 0 (for every pattern including outliers) and this property will eventually lead to overfitting, as observed in Fig. 1. Therefore, we introduce some variables ξ_i^t (the slack variables) and get

mg(z_i, c) ≥ ρ − C ξ_i^t,    ξ_i^t ≥ 0.    (5)

In these inequalities, the ξ_i^t are positive and if a training pattern has high weights in the previous iterations, the ξ_i^t should be increasing. In this way, for example, we do
allow for some errors. In this sense we get a trade-off between the margin and the
importance of a pattern in the training process (depending on the constant C 2: 0).
If we choose C = 0 in Eq. (5), the original AdaBoost algorithm is retrieved. If C is
chosen too high, the data is not taken seriously. We adopt a prior on the weights
w_r(z_i) that punishes large weights in analogy to weight decay and choose

ξ_i^t = ( Σ_{r=1}^{t} c_r w_r(z_i) )²    (6)
where the inner sum is the cumulative weight of the pattern in the previous iterations
(we call it the influence of a pattern, similar to Lagrange multipliers in SVMs). By this ξ_i^t, AdaBoost is not changed for easily classifiable patterns, but is changed for
difficult patterns. From Eq. (5), we can derive a new error function:
g_reg(c_t, |b_t|) = Σ_{i=1}^{l} exp{ −(|b_t|/2) ( mg(z_i, c_t) − C ξ_i^t ) }    (7)
By this error function, we can control the trade-off between the weights, which the pattern had in the last iterations, and the achieved margin. The weight w_t(z_i) of a pattern is computed as the derivative of Eq. (7) with respect to mg(z_i, b_{t−1}) (cf. Eq. (4)) and is given by

w_t(z_i) = exp{ −|b_{t−1}| ( mg(z_i, c_{t−1}) − C ξ_i^{t−1} )/2 } / Σ_{j=1}^{l} exp{ −|b_{t−1}| ( mg(z_j, c_{t−1}) − C ξ_j^{t−1} )/2 }.    (8)
Table 1: Pseudocode description of the algorithms. All three start by running AdaBoost on the dataset Z to get T hypotheses h_t and their weights c, and construct the loss matrix L_{i,t} = −1 if h_t(x_i) ≠ y_i, and +1 otherwise.

LP-AdaBoost(Z, T):
    minimize  −ρ
    s.t.      Σ_{t=1}^{T} c_t L_{i,t} ≥ ρ,    c_t ≥ 0,    Σ_{t=1}^{T} c_t = 1

LPreg-AdaBoost(Z, T, C):
    minimize  −ρ + C Σ_i ξ_i
    s.t.      Σ_{t=1}^{T} c_t L_{i,t} ≥ ρ − ξ_i,    c_t ≥ 0,    Σ_{t=1}^{T} c_t = 1,    ξ_i ≥ 0

QPreg-AdaBoost(Z, T, C):
    minimize  ||b||² + C Σ_i ξ_i
    s.t.      Σ_{t=1}^{T} b_t L_{i,t} ≥ 1 − ξ_i,    b_t ≥ 0,    ξ_i ≥ 0
Thus we can get an update rule for the weight of a training pattern [11]

w_t(z_i) = w_{t−1}(z_i) exp{ b_{t−1} I(y_i ≠ h_{t−1}(x_i)) + C ξ_i^{t−2} |b_{t−2}| − C ξ_i^{t−1} |b_{t−1}| }.    (9)
It is more difficult to compute the weight b_t of the t-th hypothesis analytically. However, we can get b_t by a line search procedure over Eq. (7), which has a unique solution because ∂²g_reg/∂b_t² > 0 is satisfied. This line search can be implemented very efficiently. With this line search, we can now also use real-valued outputs of the base hypotheses, while the original AdaBoost algorithm could not (cf. also [6]).
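The regularized re-weighting can be written directly from (6) and (8); in the sketch below, the squared cumulative influence in slack() is our reading of Eq. (6) and should be treated as an assumption, and b_t itself would be obtained by a bounded 1-D line search over Eq. (7).

import numpy as np

def slack(influence):
    # Eq. (6): xi_i^t from the cumulative influence sum_{r<=t} c_r w_r(z_i);
    # the square is an assumption of this sketch
    return influence ** 2

def soft_margin_weights(mg, xi, babs, C):
    # Eq. (8): mg = margins mg(z_i, c_{t-1}), xi = slacks, babs = |b_{t-1}|
    a = np.exp(-babs * (mg - C * xi) / 2.0)
    return a / a.sum()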
Optimizing a given ensemble
In Grove et al. [7], it was shown how to use
linear programming to maximize the minimum margin for a given ensemble and
LP-AdaBoost was proposed (table 1 left). This algorithm maximizes the minimum margin on the training patterns. It achieves a hard margin (as AdaBoost
asymptotically does) for a small number of iterations. By the reasoning of Section 3 this cannot generalize well. If we introduce slack variables into LP-AdaBoost, one gets the algorithm LPreg-AdaBoost (table 1 middle) [11]. This modification allows some patterns to have lower margins than ρ (especially lower than 0). There is a trade-off: (a) make all margins bigger than ρ and (b) maximize ρ. This trade-off is controlled by the constant C.
Another formulation of an optimization problem can be derived from the support vector algorithm. The optimization objective of an SVM is to find a function h_w which minimizes a functional of the form E = ||w||² + C Σ_i ξ_i, where y_i h(x_i) ≥ 1 − ξ_i and the norm of the parameter vector w is the measure for the complexity of the hypothesis h_w [14]. For ensemble learning we do not have such a measure of complexity and so we use the norm of the hypothesis weight vector b. For |b| = 1 this is a small value if the elements are approximately equal (analogy to bagging), and has high values when there are some strongly emphasized hypotheses (far away from bagging). Experimentally, we found that ||b||² is often larger for more complex hypotheses. Thus, we can apply the optimization principles of SVMs to AdaBoost
and get the algorithm QPreg-AdaBoost (table 1 right). We effectively use a linear
SVM on top of the results of the base hypotheses.
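As an illustration of the middle column of Table 1, LPreg-AdaBoost can be posed directly as a linear program, e.g. with SciPy; the decision vector stacks [c, ξ, ρ] and L is the loss matrix defined in the table. This is a sketch under those assumptions, not the authors' implementation.

import numpy as np
from scipy.optimize import linprog

def lpreg_adaboost(L, C):
    l, T = L.shape
    obj = np.concatenate([np.zeros(T), C * np.ones(l), [-1.0]])  # min -rho + C*sum(xi)
    A_ub = np.hstack([-L, -np.eye(l), np.ones((l, 1))])          # rho - L c - xi <= 0
    b_ub = np.zeros(l)
    A_eq = np.concatenate([np.ones(T), np.zeros(l), [0.0]])[None, :]  # sum(c) = 1
    bounds = [(0, None)] * (T + l) + [(None, None)]                   # c, xi >= 0; rho free
    res = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0], bounds=bounds)
    return res.x[:T], res.x[T:T + l], res.x[-1]                       # c, xi, rho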
5 Experiments
In order to evaluate the performance of our new algorithms, we make a comparison among the single RBF classifier, the original AdaBoost algorithm, AdaBoost_reg (with RBF nets), L/QPreg-AdaBoost and a Support Vector Machine (with RBF kernel). We use ten artificial and real world datasets from the UCI and DELVE
benchmark repositories: banana (toy dataset as in [9, 11]), breast cancer, image segment, ringnorm, flare sonar, splice, new-thyroid, titanic, twonorm, waveform. Some of
the problems are originally not binary classification problems, hence a (random)
partition into two classes was used. At first we generate 20 partitions into training
and test set (mostly ~ 60% : 40%). On each partition we train the classifier and
get its test set error. The performance is averaged and we get table 2.
Table 2: Comparison among the six methods: single RBF classifier (RBF), AdaBoost (AB), AdaBoost_reg (ABreg), L/QPreg-AdaBoost (LPR/QPR) and a Support Vector Machine (SVM): estimation of generalization error in % on 10 datasets (best method in bold face). Clearly, AdaBoost_reg gives the best overall performance. For further explanation see text.
            RBF        AB         ABreg      LPR        QPR        SVM
Banana      10.9±0.5   12.3±0.7   10.1±0.5   10.8±0.4   10.9±0.5   11.5±4.7
Cancer      28.7±5.3   30.5±4.5   26.3±4.3   31.0±4.2   26.2±4.7   26.1±4.8
Image        2.8±0.7    2.5±0.7    2.5±0.7    2.6±0.6    2.4±0.5    2.9±0.7
Ringnorm     1.1±0.3    2.0±0.2    1.1±0.2    2.2±0.4    1.9±0.2    1.1±0.1
FSonar      34.6±2.1   35.6±1.9   33.6±1.7   35.7±4.5   36.2±1.7   32.5±1.1
Splice      10.0±0.3   10.1±0.3    9.5±0.2   10.2±1.6   10.1±0.5   10.9±0.7
Thyroid      4.8±2.4    4.4±1.9    4.4±2.1    4.4±2.0    4.4±2.2    4.8±2.2
Titanic     23.4±1.7   22.7±1.2   22.5±1.0   22.9±1.9   22.7±1.0   22.4±1.0
Twonorm      2.8±0.2    3.1±0.3    2.1±2.1    3.4±0.6    3.0±0.3    3.0±0.2
Waveform    10.7±1.0   10.8±0.4    9.9±0.9   10.6±1.0   10.1±0.5    9.8±0.3
Mean %       6.7        9.6        1.0       11.1        4.7        6.3
Winner %    16.4        8.2       28.5       15.0       15.3       16.6
We used RBF nets with adaptive centers (some conjugate gradient iterations to
optimize positions and widths of the centers) as base hypotheses as described in
[1, 11]. In all experiments, we combined 200 hypotheses. Clearly, this number of
hypotheses may not be optimal; however, AdaBoost with optimal early stopping is not better than AdaBoost_reg. The parameter C of the regularized versions of AdaBoost and the parameters (C, σ) of the SVM are optimized on the first five
training datasets. On each training set 5-fold-cross validation is used to find the
best model for this dataset 2 . Finally, the model parameters are computed as the
median of the five estimations. This way of estimating the parameters is surely
not possible in practice, but will make this comparison more robust and the results
more reliable. The last but one line in Tab. 2 shows the line 'Mean %', which is
computed as follows: For each dataset the average error rate of all classifier types
are divided by the minimum error rate and 1 is subtracted. These resulting numbers are averaged over the 10 datasets. The last line shows the probabilities that a
method wins, i.e. gives the smallest generalization error, on the basis of our experiments (averaged over all ten datasets) . Our experiments on noisy data show that
(a) the results of AdaBoost are in almost all cases worse than the single classifier
(clear overfitting effect) and (b) the results of AdaBoost_reg are in all cases (much) better than those of AdaBoost and better than that of the single classifier. Furthermore, we see clearly that (c) the single classifier wins as often as the SVM, (d) L/QPreg-AdaBoost improves the results of AdaBoost, (e) AdaBoost_reg wins most often. L/QPreg-AdaBoost improves the results of AdaBoost in almost all cases due to the established soft margin. But the results are not as good as the results of AdaBoost_reg and the SVM, because the hypotheses generated by AdaBoost (aimed to construct a hard margin) may not be the appropriate ones to generate a good soft
margin. We also observe that quadratic programming gives slightly better results
than linear programming. This may be due to the fact that the hypotheses coefficients generated by LPreg-AdaBoost are more sparse (smaller ensemble). Bigger
ensembles may have a better generalization ability (due to the reduction of variance
[3]). The worse performance of SVM compared to AdaBoost_reg and the unexpected tie between SVM and RBF net may be explained by (a) the fixed σ of the RBF kernel (losing multi-scale information), (b) coarse model selection, (c) a worse error function of the SV algorithm (noise model). Summarizing, AdaBoost is useful for low noise cases, where the classes are separable (as shown for OCR [13, 8]). AdaBoost_reg
extends the applicability of boosting to "difficult separable" cases and should be
applied, if the data is noisy.
2The parameters are only near-optimal. Only 10 values for each parameter are tested.
6 Conclusion
We introduced three algorithms to alleviate the overfitting problems of boosting algorithms for high-noise data: (1) direct incorporation of the regularization term into the error function (Eq. (7)), and use of (2) linear and (3) quadratic programming with constraints given by the slack variables. The essence of our proposal is to introduce slack variables for regularization in order to allow for soft margin classification, in contrast to the hard margin classification used before. The slack variables basically allow us to control how much we trust the data, so we are permitted to ignore outliers which would otherwise have spoiled our classification. This generalization is very much in the spirit of support vector machines, which also trade off the maximization of the margin and the minimization of the classification errors in the slack variables.
In our experiments, AdaBoost_reg showed a better overall generalization performance than all other algorithms, including the Support Vector Machines. We conjecture that this unexpected result is mostly due to the fact that the SVM can only use one σ and therefore loses scaling information. AdaBoost does not have this limitation. So far we balance our trust in the data and the margin maximization by cross-validation. It would be better if we knew the "optimal" margin distribution achievable for classifying noisy patterns; then we could of course balance the errors and the margin sizes optimally. In future work, we plan to establish more connections between AdaBoost and SVMs.
Acknowledgements: We thank A. Smola, B. Schölkopf, T. Frieß and D. Schuurmans for valuable discussions. Partial funding from EC STORM project grant number 25387 is gratefully acknowledged. The breast cancer domain was obtained from the University Medical Centre, Inst. of Oncology, Ljubljana, Yugoslavia. Thanks go to M. Zwitter and M. Soklic for providing the data.
References
[1] C. M. Bishop. Neural Networks for Pattern Recognition. Clarendon, 1995.
[2] L. Breiman. Bagging predictors. Machine Learning, 26(2):123-140, 1996.
[3] L. Breiman. Arcing classifiers. Tech. Rep. 460, Berkeley Stat. Dept., 1997.
[4] L. Breiman. Prediction games and arcing algorithms. Tech. Rep. 504, Berkeley Stat. Dept., 1997.
[5] C. Cortes and V. Vapnik. Support vector networks. Machine Learning, 20:273-297, 1995.
[6] R. Schapire and Y. Singer. Improved boosting algorithms using confidence-rated predictions. In Proc. of COLT'98.
[7] A. J. Grove and D. Schuurmans. Boosting in the limit: Maximizing the margin of learned ensembles. In Proc. 15th Nat. Conf. on AI, 1998. To appear.
[8] Y. LeCun et al. Learning algorithms for classification: A comparison on handwritten digit recognition. Neural Networks, pages 261-276, 1995.
[9] T. Onoda, G. Rätsch, and K.-R. Müller. An asymptotic analysis of AdaBoost in the binary classification case. In Proc. of ICANN'98, April 1998.
[10] J. Quinlan. Boosting first-order learning. In Proc. of the 7th Internat. Workshop on Algorithmic Learning Theory, LNAI 1160, pages 143-155. Springer.
[11] G. Rätsch. Soft margins for AdaBoost. Technical Report NC-TR-1998-021, Royal Holloway College, August 1998. Submitted to Machine Learning.
[12] R. Schapire, Y. Freund, P. Bartlett, and W. Lee. Boosting the margin: A new explanation for the effectiveness of voting methods. Mach. Learn., 148-156, 1998.
[13] H. Schwenk and Y. Bengio. AdaBoosting neural networks: Application to on-line character recognition. In ICANN'97, LNCS 1327, pages 967-972, 1997. Springer.
[14] V. Vapnik. The Nature of Statistical Learning Theory. Springer, 1995.
673 | 1,616 | [This record's raw text is OCR-garbled beyond recovery. Only scattered fragments of its reference list remain legible, citing among others MacKay and Peto's hierarchical Dirichlet language model, Good's work on population frequencies of species, Ristad's natural law of succession, Singer's adaptive mixtures of probabilistic transducers, Willems et al.'s context tree weighting, and Witten and Bell's zero-frequency problem.]
674 | 1,617 | Tractable Variational Structures for
Approximating Graphical Models
David Barber
Wim Wiegerinck
{davidb,wimw}@mbfys.kun.nl
RWCP* Theoretical Foundation SNN† University of Nijmegen
6525 EZ Nijmegen, The Netherlands.
*Real World Computing Partnership
†Foundation for Neural Networks
Abstract
Graphical models provide a broad probabilistic framework with applications in speech recognition (Hidden Markov Models), medical
diagnosis (Belief networks) and artificial intelligence (Boltzmann
Machines). However, the computing time is typically exponential
in the number of nodes in the graph. Within the variational framework for approximating these models, we present two classes of distributions, decimatable Boltzmann Machines and Tractable Belief
Networks that go beyond the standard factorized approach. We
give generalised mean-field equations for both these directed and
undirected approximations. Simulation results on a small benchmark problem suggest using these richer approximations compares
favorably against others previously reported in the literature.
1 Introduction
Graphical models provide a powerful framework for probabilistic inference [1] but suffer intractability when applied to large scale problems. Recently, variational approximations have been popular [2, 3, 4, 5], and have the advantage of providing rigorous bounds on quantities of interest, such as the data likelihood, in contrast to other approximate procedures such as Monte Carlo methods [1]. One of the original models in the neural networks community, the Boltzmann machine (BM), belongs to the class of undirected graphical models. The lack of a suitable algorithm has hindered its application to larger problems. The deterministic BM algorithm [6], a variational procedure using a factorized approximating distribution, speeds up the learning of BMs, although the simplicity of this approximation can lead to undesirable effects [7]. Factorized approximations have also been successfully applied to
sigmoid belief networks[4]. One approach to producing a more accurate approximation is to go beyond the class of factorized approximating models by using, for
example, mixtures of factorized models. However, it may be that very many mixture components are needed to obtain a significant improvement beyond using the
factorized approximation[5]. In this paper, after describing the variational learning framework, we introduce two further classes of non-factorized approximations,
one undirected (decimatable BMs in section (3)) and the other, directed (Tractable
Belief Networks in section (4)). To demonstrate the potential benefits of these
methods, we include results on a toy benchmark problem in section (5) and discuss
their relation to other methods in section (6).
2 Variational Learning
We assume the existence of a graphical model P with known qualitative structure
but for which the quantitative parameters of the structure remain to be learned from
data. Given that the variables can be considered as either visible (V) or hidden
(H), one approach to learning is to carry out maximum likelihood on the visible
variables for each example in the dataset. Considering the KL divergence between
the true distribution P(H|V) and a distribution Q(H),
$$\mathrm{KL}(Q(H), P(H|V)) = \sum_H Q(H) \ln \frac{Q(H)}{P(H|V)} \geq 0$$
and using $P(H|V) = P(H,V)/P(V)$ gives the bound
$$\ln P(V) \geq -\sum_H Q(H) \ln Q(H) + \sum_H Q(H) \ln P(H,V) \qquad (1)$$
Betraying the connection to statistical physics, the first term is termed the "entropy" and the second the "energy". One typically chooses a variational distribution Q so that the entropic term is "tractable". We assume that the energy E(Q)
is similarly computable, perhaps with recourse to some extra variational bound (as
in section (5)). By tractable, we mean that all necessary marginals and desired
quantities are computationally feasible, regardless of the issue of the scaling of the
computational effort with the graph size. Learning consists of two iterating steps:
first optimize the bound (1) with respect to the parameters of Q, and then with
respect to the parameters of P(H, V). We concentrate here on the first step. For
clarity, we present our approach for the case of binary variables $s_i \in \{0, 1\}$, $i = 1 \ldots N$.
We now consider two classes of approximating distributions Q.
3 Undirected Q: Decimatable Boltzmann Machines
Boltzmann machines describe probability distributions parameterized by a symmetric weight matrix J
$$Q(s) = \frac{1}{Z} \exp \phi, \qquad \phi \equiv \sum_{ij} J_{ij} s_i s_j = s \cdot Js \qquad (2)$$
where the normalization constant, or "partition function", is $Z = \sum_s \exp \phi$. For convenience we term the diagonals of J the "biases", $h_i = J_{ii}$. Since $\ln Z(J, h)$ is a
generating function for the first and second order statistics of the variables s, the
entropy is tractable provided that Z is tractable. For general connection structures,
J, computing Z is intractable as it involves a sum over $2^N$ states; however, not all Boltzmann machines are intractable. A class of tractable structures is described by a set of so-called decimation rules in which nodes from the graph can be removed one by one, fig(1). Provided that appropriate local changes are made to the BM parameters, the partition function of the reduced graph remains unaltered (see e.g. [2]). For example, node c in fig(1) can be removed, provided that the weight matrix J and bias h are transformed, $J \to J'$, $h \to h'$, with $J'_{ac} = J'_{bc} = h'_c = 0$ and
$$J'_{ab} = J_{ab} + \frac{1}{2} \ln \frac{(1 + e^{h_c})(1 + e^{h_c + 2(J_{ac}+J_{bc})})}{(1 + e^{h_c + 2J_{ac}})(1 + e^{h_c + 2J_{bc}})}, \qquad h'_{a/b} = h_{a/b} + \ln \frac{1 + e^{h_c + 2J_{a/b,c}}}{1 + e^{h_c}} \qquad (3)$$
Figure 1: A decimation rule for BMs. We can remove the upper node on the left so
that the partition function of the reduced graph is the same. This requires a simple
change in the parameters J, h coupling the two nodes on the right (see text).
By repeatedly applying such rules, Z is calculable in time linear in N.
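A minimal sketch of this rule in code, assuming (as in fig(1)) that the removed node c is connected only to a and b, and storing the biases in a separate vector h rather than on the diagonal of J:

import numpy as np

def decimate_node(J, h, c, a, b):
    # Remove node c, transforming (J, h) -> (J', h') according to Eq. (3)
    # so that the partition function of the reduced graph is unchanged.
    J, h = J.copy(), h.copy()
    f = lambda x: np.log1p(np.exp(x))   # f(x) = ln(1 + e^x)
    J[a, b] += 0.5 * (f(h[c]) + f(h[c] + 2 * (J[a, c] + J[b, c]))
                      - f(h[c] + 2 * J[a, c]) - f(h[c] + 2 * J[b, c]))
    J[b, a] = J[a, b]
    h[a] += f(h[c] + 2 * J[a, c]) - f(h[c])
    h[b] += f(h[c] + 2 * J[b, c]) - f(h[c])
    J[c, :] = 0.0
    J[:, c] = 0.0
    h[c] = 0.0
    return J, h

Once only disconnected nodes remain, each contributes a factor $(1 + e^{h_i})$ to Z, so $\ln Z$ is accumulated in time linear in N.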
3.1 Fixed point (Mean Field) Equations
Using (2) in (1), the bound we wish to optimize with respect to the parameters
$\theta = (J, h)$ of Q has the form ($\langle\cdots\rangle$ denotes averages with respect to Q)
$$B(\theta) = -\langle\phi\rangle + \ln Z + E(\theta) \qquad (4)$$
where $E(\theta)$ is the energy. Differentiating (4) with respect to $J_{ij}$ ($i \neq j$) gives
$$\frac{\partial B}{\partial J_{ij}} = -\sum_{kl} F_{ij,kl}\,\theta_{kl} + \frac{\partial E}{\partial J_{ij}} \qquad (5)$$
where $F_{ij,kl} = \langle s_i s_j s_k s_l\rangle - \langle s_i s_j\rangle\langle s_k s_l\rangle$ is the Fisher information matrix. A similar expression holds for the bias parameters, h, so that we can form a linear fixed point equation in the total parameter set $\theta$ where the derivatives of the bound vanish. This suggests the iterative solution, $\theta_{\text{new}} = F^{-1}\nabla_\theta E$, where the right hand side is evaluated at the current parameter values, $\theta_{\text{old}}$.
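For small graphs the required statistics can be checked by brute force; the following sketch (exponential in N, for verification only, with our own naming) computes the second moments and the Fisher matrix of a BM:

import itertools
import numpy as np

def bm_moments(J):
    # Brute-force <s_i s_j> and F_{ij,kl} for Q(s) proportional to exp(s.Js),
    # with the biases stored on the diagonal of J. Only viable for small N.
    N = J.shape[0]
    states = np.array(list(itertools.product([0, 1], repeat=N)), dtype=float)
    logp = np.einsum('si,ij,sj->s', states, J, states)
    p = np.exp(logp - logp.max())
    p /= p.sum()
    m2 = np.einsum('s,si,sj->ij', p, states, states)          # <s_i s_j>
    m4 = np.einsum('s,si,sj,sk,sl->ijkl', p, states, states,
                   states, states)                            # fourth moments
    F = m4 - np.multiply.outer(m2, m2)                        # Fisher matrix
    return m2, F

A single fixed point step then solves the linear system $F\,\theta_{\text{new}} = \nabla_\theta E$ over the flattened parameter indices, e.g. with np.linalg.lstsq.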
4 Directed Q: Tractable Belief Networks
Belief networks are products of conditional probability distributions,
$$Q(H) = \prod_{i \in H} Q(H_i | \pi_i) \qquad (6)$$
in which $\pi_i$ denotes the parents of node i (see, for example, [1]). The efficiency
of computation depends on the underlying graphical structure of the model and is
exponential in the maximal clique size (of the moralized triangulated graph [1]). We
now assume that our model class consists of belief networks with a fixed, tractable
graphical structure. The entropy can then be computed efficiently since it decouples
into a sum of averaged entropies per site i ($Q(\pi_i) \equiv 1$ if $\pi_i = \emptyset$),
$$-\sum_H Q(H) \ln Q(H) = -\sum_{i \in H} \sum_{\pi_i} Q(\pi_i) \sum_{H_i} Q(H_i|\pi_i) \ln Q(H_i|\pi_i) \qquad (7)$$
Note that the conditional entropy at each site i is trivial to compute since the values
required can be read off directly from the definition of Q (6). By assumption, the
marginals $Q(\pi_i)$ are tractable, and can be found by standard methods, for example using the Junction Tree Algorithm [1].
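In code, (7) reduces to a few lines once the parent marginals are available; the dictionary-based representation below is our own convention:

import math

def bn_entropy(q, parent_marginals):
    # Entropy of a tractable belief network, Eq. (7).
    # q[i][pi] = Q(H_i = 1 | pi_i = pi) and
    # parent_marginals[i][pi] = Q(pi_i = pi), with pi a tuple of parent
    # states (the empty tuple, with marginal 1, if node i has no parents).
    H = 0.0
    for i in q:
        for pi, p1 in q[i].items():
            for p in (p1, 1.0 - p1):
                if p > 0.0:
                    H -= parent_marginals[i][pi] * p * math.log(p)
    return H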
To optimize the bound (1), we parameterize Q via its conditional probabilities,
$q_i(\pi_i) \equiv Q(H_i = 1|\pi_i)$. The remaining probability $Q(H_i = 0|\pi_i)$ follows from
normalization. We therefore have a set $\{q_i(\pi_i) \mid \pi_i = (0 \ldots 0), \ldots, (1 \ldots 1)\}$ of variational parameters for each node in the graph. Setting the gradient of the bound with respect to the $q_i(\pi_i)$'s equal to zero yields the equations
(8)
with
(9)
where $\sigma(z) = 1/(1 + e^{-z})$. The gradient $\nabla_{i\pi_i}$ is with respect to $q_i(\pi_i)$. The explicit evaluation of the gradients can be performed efficiently, since all that need to be differentiated are at most scalar functions of quantities that depend again only linearly on the parameters $q_i(\pi_i)$. To optimize the bound, we iterate (8) till convergence, analogously to using factorized models [4]. However, the more powerful class of approximating distributions described by belief networks should enable a much tighter bound on the likelihood of the visible units.
5 Application to Sigmoid Belief Networks
We now describe an application of these non-factorized approximations to a particular class of directed graphical models, sigmoid belief networks [8], for which the conditional distributions have the form
$$P(S_i = 1 | \pi_i) = \sigma\Big(\sum_j w_{ij} S_j + k_i\Big), \qquad w_{ij} = 0 \text{ if } j \notin \pi_i \qquad (10)$$
The joint distribution then has the form
$$P(H, V) = \prod_i \exp\left[z_i s_i - \ln(1 + e^{z_i})\right] \qquad (11)$$
where $z_i = \sum_j w_{ij} s_j + k_i$. In (11) it is to be understood that the visible units are set to their observed values. In the lower bound (1), unfortunately, the average of $\ln P(H, V)$ is not tractable, since $\langle \ln[1 + e^{z_i}]\rangle$ does not decouple into a polynomial number of single site averages. Following [4] we therefore use the bound
$$\langle \ln(1 + e^{z_i})\rangle \leq \xi_i \langle z_i\rangle + \ln\left\langle e^{-\xi_i z_i} + e^{(1-\xi_i) z_i}\right\rangle \qquad (12)$$
where $\xi_i$ is a variational parameter in [0, 1]. We can then define the energy function
$$E(Q, \xi) = \sum_{ij} w_{ij} \langle s_i s_j\rangle + \sum_i \tilde{k}_i \langle s_i\rangle - \sum_i k_i \xi_i - \sum_i \ln\left\langle e^{-\xi_i z_i} + e^{(1-\xi_i) z_i}\right\rangle \qquad (13)$$
where $\tilde{k}_i = k_i - \sum_j \xi_j w_{ji}$. Except for the final term, the energy is a function of first or second order statistics of the variables. When using a BM as the variational distribution, the final terms of (13), $\langle e^{-\xi_i z_i}\rangle = \sum_H e^{\phi - \xi_i z_i}/Z$, are simply the ratio of two partition functions, with the one in the numerator having a shifted bias. This is therefore tractable, provided that we use a tractable BM Q.
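The following sketch makes this partition function ratio explicit. The brute-force Z is for checking only and would be replaced by the decimation rules of section 3 for a decimatable Q; the function names are ours:

import itertools
import numpy as np

def partition(J):
    # Brute-force Z = sum_s exp(s.Js), biases on the diagonal of J.
    N = J.shape[0]
    return sum(np.exp(s @ J @ s)
               for s in map(np.array, itertools.product([0, 1], repeat=N)))

def exp_term(J, w_i, k_i, xi_i):
    # <exp(-xi_i z_i)> under the BM Q, with z_i = sum_j w_i[j] s_j + k_i:
    # a ratio of two partition functions, where the numerator BM has each
    # bias (diagonal entry) shifted by -xi_i * w_i[j].
    J_shift = J.copy()
    J_shift[np.diag_indices_from(J_shift)] -= xi_i * w_i
    return np.exp(-xi_i * k_i) * partition(J_shift) / partition(J)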
Similarly, if we are using a Belief Network as the variational distribution, all but the
last term in (13) is trivially tractable, provided that Q is tractable. We write the
terms $\langle e^{-\xi_i z_i}\rangle = e^{-\xi_i k_i} \sum_H R(H)$, where $R(H) = \prod_j R(H_j|\pi_j)$ and $R(H_j|\pi_j) \equiv Q(H_j|\pi_j) \exp(-\xi_i w_{ij} H_j)$.
Figure 2: (a) Directed graph toy problem; hidden units are black. (b) Decimatable BM: 25 parameters, mean 0.0020. (c) Disconnected ('standard mean field'): 16 parameters, mean 0.01571, max. clique size 1. (d) Chain: 19 parameters, mean 0.01529, max. clique size 2. (e) Trees: 20 parameters, mean 0.0089, max. clique size 2. (f) Network: 28 parameters, mean 0.00183, max. clique size 3. Panel (a) shows the Sigmoid Belief Network for which we approximate ln P(V); (b) is the BM approximation; (c,d,e,f) are the structures of the directed approximations on H. For each structure, a histogram of the relative error between the true log likelihood and the lower bound is plotted, with the horizontal scale fixed to [0, 0.05] in all plots. The maximum clique size refers to the complexity of computation for each approximation, which is exponential in this quantity. The number of parameters includes the vector ξ.
R and Q have the same graphical structure and we can therefore use message propagation techniques again to compute $\langle e^{-\xi_i z_i}\rangle$.
To test our methods numerically, we generated 500 networks with parameters $\{w_{ij}, k_j\}$ drawn randomly from the uniform distribution over [-1, 1]. The lower bounds $F_V$ for several approximating structures are compared with the true log likelihood, using the relative error $\epsilon = F_V/\ln P(V) - 1$, fig. 2. These show that considerable improvements can be obtained when non-factorized variational distributions are used. Note that a 5 component mixture model ($\approx$ 80 variational parameters) yields $\epsilon = 0.01139$ on this problem [5]. These results therefore suggest that exploiting knowledge of the graphical structure of the model is useful. For instance, the chain (fig. 2(d)), with no graphical overlap with the original graph, shows hardly any improvement over the standard mean field approximation. On the other hand, the tree model (fig. 2(e)), which has about the same number of parameters but a larger overlap with the original graph, does improve considerably over the mean field approximation (and even over the 5 component mixture model). By increasing the overlap, as in fig. 2(f), the improvement gained is even greater.
6 Discussion
In this section, we briefly explain the relationship of the introduced methods to
other, "non-factorized" methods in the literature, namely node-elimination[9] and
substructure variation [10].
6.1 Graph Partitioning and Node Elimination
A further class of approximating distributions Q that could be considered are those
in which the nodes can be partitioned into clusters, with independencies between
the clusters. For expositional clarity, consider two partitions, $s = (s_1, s_2)$, and define Q to be factorized over these partitions², $Q = Q_1(s_1) Q_2(s_2)$. Using this Q in (1), we obtain (with obvious notational simplifications)
$$\ln P(V) \geq -\langle\ln Q_1\rangle_1 - \langle\ln Q_2\rangle_2 + \langle\ln P\rangle_{1,2} \qquad (14)$$
A functional derivative with respect to $Q_1$ and $Q_2$ gives the optimal forms:
$$Q_2 = \exp\langle\ln P\rangle_1 / Z_2 \qquad (15)$$
If we substitute this form for $Q_2$ in (14) and use $Z_2 = \sum_2 \exp\langle\ln P\rangle_1$, we obtain
$$\ln P(V) \geq -\langle\ln Q_1\rangle_1 + \ln\sum_2 \exp\langle\ln P\rangle_1 \qquad (16)$$
In general, the final term may not have a simple form. In the case of approximating a BM P, $\ln P = s_1 \cdot J_{11} s_1 + 2 s_1 \cdot J_{12} s_2 + s_2 \cdot J_{22} s_2 - \ln Z_P$. Used in (16), we get:
$$\ln P(V) \geq -\langle\ln Q_1\rangle_1 - \ln Z_P + \langle s_1 \cdot J_{11} s_1\rangle_1 + \ln\sum_2 \exp\left(s_2 \cdot J_{22} s_2 + 2 s_2 \cdot J_{21} \langle s_1\rangle_1\right) \qquad (17)$$
so that the final term of (17) is the normalizing constant of a BM with connection matrix $J_{22}$ and whose diagonals are shifted by $J_{21} \langle s_1\rangle_1$. One can therefore identify a
set of nodes $s_1$ which, when eliminated, reveal a tractable structure on the nodes $s_2$. The nodes that were removed are compensated for by using a variational distribution $Q_1(s_1)$. If P is a BM, then the optimal $Q_1$ has its weights fixed to those of P restricted to variables $s_1$, but with variable biases shifted by $J_{12} \langle s_2\rangle_2$. Restricting
Q1 to factorized models, we recover the node elimination bound [9] which can
readily be improved by considering non-factorized distributions Q1 (for example
those introduced in this paper), see fig(3). Note, however, that there is no a priori guarantee that using such partitioned approximations will lead to a better approximation than that obtained from a tractable variational distribution defined on the whole graph, but which does not have such a product form. Using a product
of conditional distributions over clusters of nodes is developed more fully in [11].
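For concreteness, the following is a minimal numerical check of a bound of the form (16), applied to the log normalizer of a small randomly coupled BM. The sizes, couplings, and the particular factorized $Q_1$ are assumptions made purely for this sketch, and the brute-force enumeration is for verification only, not a tractable implementation:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n1, n2 = 3, 3                          # assumed partition sizes for s = (s1, s2)
J = rng.uniform(-1.0, 1.0, (n1 + n2, n1 + n2))
J = (J + J.T) / 2.0                    # symmetric couplings, ln P~(s) = s.J.s

def log_ptilde(s):                     # unnormalized log probability
    return float(s @ J @ s)

S1 = [np.array(c) for c in itertools.product([-1, 1], repeat=n1)]
S2 = [np.array(c) for c in itertools.product([-1, 1], repeat=n2)]

# exact log normalizer, ln Z = ln sum_s exp(ln P~(s))
logZ = np.logaddexp.reduce(
    [log_ptilde(np.concatenate([a, b])) for a in S1 for b in S2])

q = rng.uniform(0.1, 0.9, n1)          # an arbitrary factorized Q1 over s1
def Q1(a):
    return float(np.prod(np.where(a > 0, q, 1.0 - q)))

def avg_lnP(b):                        # <ln P~>_1 as a function of s2
    return sum(Q1(a) * log_ptilde(np.concatenate([a, b])) for a in S1)

H1 = -sum(Q1(a) * np.log(Q1(a)) for a in S1)                 # -<ln Q1>_1
bound = H1 + np.logaddexp.reduce([avg_lnP(b) for b in S2])   # form of (16)

print(f"exact ln Z = {logZ:.4f}, partitioned lower bound = {bound:.4f}")
```

Running it confirms that the partitioned bound sits below the exact value while improving on a fully factorized one whenever the $s_2$ cluster captures strong couplings.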
6.2 Substructure Variation
The process of using a $Q$ defined on the whole graph but for which only a subset of the connections are adaptive is termed substructure variation [10]. In the context of BMs, Saul et al. [2] identified weights in the original intractable distribution $P$ that, if set to zero, would lead to a tractable graph $Q(s) = P(s \mid h, J, J_{\text{intractable}} = 0)$. To compensate for these removed weights they allowed the biases in $Q$ to vary such that the KL divergence between $Q$ and $P$ is minimized. In general, this is a weaker method than one in which potentially all the parameters in the approximating network are adaptive, such as using a decimatable BM.
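A rough sketch of this bias-adaptation idea on a toy BM follows; the network size, the choice of which weights are removed, and the use of a generic optimizer are all assumptions for illustration (the distributions are brute-forced here rather than exploited for tractability):

```python
import itertools
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n = 4
J = np.triu(rng.uniform(-1.0, 1.0, (n, n)), 1)   # upper-triangular couplings
keep = np.triu(rng.random((n, n)) < 0.5, 1)      # weights retained in Q

S = np.array(list(itertools.product([-1, 1], repeat=n)))

def dist(Jm, h):                                 # normalized BM distribution
    lp = np.einsum('si,ij,sj->s', S, Jm, S) + S @ h
    return np.exp(lp - np.logaddexp.reduce(lp))

p = dist(J, np.zeros(n))                         # the "intractable" target P

def kl(h):                                       # KL(Q || P); Q = P with some
    q = dist(J * keep, h)                        # weights zeroed, biases h free
    return float(np.sum(q * (np.log(q + 1e-15) - np.log(p + 1e-15))))

res = minimize(kl, np.zeros(n))
print(f"KL with adapted biases: {res.fun:.4f}  vs. unadapted: {kl(np.zeros(n)):.4f}")
```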
²In the case of fully connected BMs, for computing with a $Q$ which is the product of $K$ partitions (each of which is fully connected, say), the computing time reduces from $2^N$ for the "intractable" $P$ to $K \cdot 2^{N/K}$ for $Q$, which can be a considerable reduction.
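The saving claimed in the footnote is easy to make concrete; the sizes below are assumed example values, not figures from the paper:

```python
N, K = 24, 4   # assumed example sizes
print(f"intractable P:            2**N          = {2**N:,} terms")
print(f"product of K partitions:  K * 2**(N//K) = {K * 2**(N // K):,} terms")
```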
[Figure 3 panels: (a) Intractable Model; (b) "Naive" mean field; (c) Node elimination; (d) Partitioning.]

Figure 3: (a) A non-decimatable 5 node BM. (b) The standard factorized approximation. (c) Node elimination. (d) Partitioning, where a richer distribution is considered on the eliminated nodes. A solid line denotes a weight fixed to those in the original graph. A solid node is fixed, and an open node represents a variable bias.
7 Conclusion
Finding accurate, controllable approximations of graphical models is crucial if their application to large scale problems is to be realised. We have elucidated two general classes of tractable approximations, both based on the Kullback-Leibler divergence. Future interesting directions include extending the class of distributions to higher order Boltzmann Machines (for which the class of decimation rules is greater), and to mixtures of these approaches. Higher order perturbative approaches are considered in [12]. These techniques therefore extend the approximating power of tractable models, which can lead to a considerable improvement in performance.
References

[1] E. Castillo, J. M. Gutierrez, and A. S. Hadi. Expert Systems and Probabilistic Network Models. Springer, 1997.
[2] L. K. Saul and M. I. Jordan. Boltzmann Chains and Hidden Markov Models. In G. Tesauro, D. S. Touretzky, and T. K. Leen, editors, Advances in Neural Information Processing Systems, pages 435-442. MIT Press, 1995. NIPS 7.
[3] T. Jaakkola. Variational Methods for Inference and Estimation in Graphical Models. PhD thesis, Massachusetts Institute of Technology, 1997.
[4] L. K. Saul, T. Jaakkola, and M. I. Jordan. Mean Field Theory for Sigmoid Belief Networks. Journal of Artificial Intelligence Research, 4:61-76, 1996.
[5] C. M. Bishop, N. Lawrence, T. Jaakkola, and M. I. Jordan. Approximating Posterior Distributions in Belief Networks using Mixtures. MIT Press, 1998. NIPS 10.
[6] C. Peterson and J. R. Anderson. A Mean Field Theory Learning Algorithm for Neural Networks. Complex Systems, 1:995-1019, 1987.
[7] Conrad C. Galland. The limitations of deterministic Boltzmann machine learning. Network: Computation in Neural Systems, 4:355-379, 1993.
[8] R. Neal. Connectionist learning of Belief Networks. Artificial Intelligence, 56:71-113, 1992.
[9] T. S. Jaakkola and M. I. Jordan. Recursive Algorithms for Approximating Probabilities in Graphical Models. MIT Press, 1996. NIPS 9.
[10] L. K. Saul and M. I. Jordan. Exploiting Tractable Substructures in Intractable Networks. MIT Press, 1996. NIPS 8.
[11] W. Wiegerinck and D. Barber. Mean Field Theory based on Belief Networks for Approximate Inference. 1998. ICANN 98.
[12] D. Barber and P. van de Laar. Variational Cumulant Expansions for Intractable Distributions. Journal of Artificial Intelligence Research, 1998. Accepted.
675 | 1,618 | A V1 Model of Pop Out and Asymmetry in Visual Search

Zhaoping Li
University College London, z.li@ucl.ac.uk
Abstract

Visual search is the task of finding a target in an image against a background of distractors. Unique features of targets enable them to pop out against the background, while targets defined by lacks of features or conjunctions of features are more difficult to spot. It is known that the ease of target detection can change when the roles of figure and ground are switched. The mechanisms underlying the ease of pop out and asymmetry in visual search have been elusive. This paper shows that a model of segmentation in V1 based on intracortical interactions can explain many of the qualitative aspects of visual search.
1 Introduction
Visual search is closely related to visual segmentation, and therefore can be used to diagnose the mechanisms of visual segmentation. For instance, a red dot can pop out against a background of green distractor dots instantaneously, suggesting that only pre-attentive mechanisms are necessary (Treisman et al, 1990). On the other hand, it is much more difficult to search for a red 'X' among green 'X's and red 'O's: the time it takes to detect the target's presence increases with the number of background distractors, suggesting some form of attentive serial search. Sometimes, the search times change when the roles of the figure (target) and ground (distractors) are switched, giving asymmetry in visual search. For instance, it is easier to find a longer bar in a background of shorter bars than vice versa.

It has been unclear which visual areas or neural mechanisms are responsible for the pop out and asymmetry in visual search. There are, however, psychophysical theories (Treisman et al 1990, Treisman and Gormican 1988) which argue that visual inputs are coded in a number of primitive or basic feature dimensions: orientation, color, brightness, motion direction, disparity, line ends, line intersections, and closure. A target can pop out preattentively if it has a feature in one of these dimensions, such as a particular color or orientation, which is absent in the distractors. Hence, a red dot pops out among green ones. However, red 'X' is difficult
to spot among green 'X's and red 'O's because neither being red nor being 'X' is unique for the target, and therefore serial search is required. While a vertical line pops out of horizontal ones and vice versa without any search asymmetry, search asymmetry will arise when a single feature in which target and distractors differ is present in one of the two and absent or reduced in the other. Hence, a long line is more easily spotted among short lines than the reverse. This theory has been very helpful in understanding search phenomena. However, it has to make assumptions about what are the primitive feature dimensions, as well as what constitutes larger or smaller values along a given dimension. For instance, to explain that a curved line is more easily spotted among straight lines than the reverse, the theory has to define straightness as the default or standard, and curvaciousness as the deviation from this standard and thus an added feature. Empirically, other pairs of standard and deviant properties include vertical versus tilted, parallel versus convergent, short vs long lines, circle vs ellipse, and complete versus incomplete circles. The basis behind these assumptions is not completely clear. Other related theories have similar problems. For instance, Julesz's texton theory (Julesz 1981) for visual segmentation or pop out starts off by assuming a complete set of special features that constitute textons.
This paper proposes and demonstrates in a model that pre-attentive mechanisms in V1 can qualitatively explain many of the phenomena of visual search. It is assumed that the ease of search is determined by the relative saliencies of the target and distractors. Intracortical interactions in V1 alter the saliencies of targets and distractors according to their own image features as well as those of the distractor or target images that form the context. Hence, the relative saliency depends on the particular target-distractor pair involved. In particular, asymmetry is a natural consequence of contextual influences.
2 The V1 model
We use a V1 model of pre-attentive visual segmentation which has been shown to be able to detect and highlight smooth contours in noisy backgrounds and find boundaries between texture regions in images (Li 1998a, 1998b). Its behavior agrees with physiological observations (Knierim and van Essen 1992, Kapadia et al 1995). Without loss of generality, the model ignores color, motion, and stereo dimensions, includes mainly layer 2-3 orientation selective cells, and ignores the intra-hypercolumnar mechanism by which their receptive fields are formed. Inputs to the model are images filtered by the edge- or bar-like local receptive fields (RFs) of V1 cells.¹ The cells influence each other contextually via horizontal intra-cortical connections (Rockland and Lund 1983, Gilbert, 1992), transforming patterns of inputs to patterns of cell responses. Fig. 1 shows the elements of the model and their interactions. At each location $i$ there is a model V1 hypercolumn composed of $K$ neuron pairs. Each pair $(i, \theta)$ has RF center $i$ and preferred orientation $\theta = k\pi/K$ for $k = 1, 2, \ldots, K$, and is called (the neural representation of) an edge segment. Based on experimental data (White, 1989, Douglas and Martin 1990), each edge segment consists of an excitatory and an inhibitory neuron that are interconnected, and each model cell represents a collection of local cells of similar types. The excitatory cell receives the visual input; its output is used as a measure of the response or salience of the edge segment and projects to higher visual areas. The inhibitory cells are treated as interneurons. Based on observations by Gilbert, Lund and their colleagues (Rockland and Lund, 1983, Gilbert 1992), horizontal connections $J_{i\theta,j\theta'}$

¹The terms 'edge' and 'bar' will be used interchangeably.
[Figure 1 appears here. Panel labels: A: Visual space, edge detectors, and their interactions. B: Neural connection pattern (solid: $J$, dashed: $W$). C: Model neural elements: edge outputs to higher visual areas; inputs $I_c$ to inhibitory cells; an interconnected neuron pair for edge segment $i\theta$; inhibitory interneurons; excitatory neurons; and visual inputs, filtered through the receptive fields, to the excitatory cells.]
Figure 1: A: Visual inputs are sampled in a discrete grid of edge/bar detectors. Each grid point $i$ has $K$ neuron pairs (see C), one per bar segment, tuned to different orientations $\theta$ spanning 180°. Two segments at different grid points can interact with each other via monosynaptic excitation $J$ (the solid arrow from one thick bar to another) or disynaptic inhibition $W$ (the dashed arrow to a thick dashed bar). See also C. B: A schematic of the neural connection pattern from the center (thick solid) bar to neighboring bars within a few sampling unit distances. $J$'s contacts are shown by thin solid bars. $W$'s are shown by thin dashed bars. The connection pattern is translation and rotation invariant. C: An input bar segment is directly processed by an interconnected pair of excitatory and inhibitory cells; each cell models abstractly a local group of cells of the same type. The excitatory cell receives visual input and sends output $g_x(x_{i\theta})$ to higher centers. The inhibitory cell is an interneuron. Visual space is taken as having periodic boundary conditions.
(respectively $W_{i\theta,j\theta'}$) mediate contextual influences via monosynaptic excitation (respectively disynaptic inhibition) from $j\theta'$ to $i\theta$, which have nearby but different RF centers, $i \neq j$, and similar orientation preferences, $\theta \approx \theta'$. The membrane potentials follow the equations:

$$\dot{x}_{i\theta} = -\alpha_x x_{i\theta} - \sum_{\Delta\theta} \psi(\Delta\theta)\, g_y(y_{i,\theta+\Delta\theta}) + J_0\, g_x(x_{i\theta}) + \sum_{j \neq i, \theta'} J_{i\theta,j\theta'}\, g_x(x_{j\theta'}) + I_{i\theta} + I_0$$

$$\dot{y}_{i\theta} = -\alpha_y y_{i\theta} + g_x(x_{i\theta}) + \sum_{j \neq i, \theta'} W_{i\theta,j\theta'}\, g_x(x_{j\theta'}) + I_c$$

where $\alpha_x x_{i\theta}$ and $\alpha_y y_{i\theta}$ model the decay to resting potential, $g_x(x)$ and $g_y(y)$ are sigmoid-like functions modeling cells' firing rates in response to membrane potentials $x$ and $y$, respectively, $\psi(\Delta\theta)$ is the spread of inhibition within a hypercolumn, $J_0 g_x(x_{i\theta})$ is self excitation, and $I_c$ and $I_0$ are background inputs, including noise and inputs modeling the general and local normalization of activities (see Li (1998b) for more details). Visual input $I_{i\theta}$ persists after onset, and initializes the activity levels $g_x(x_{i\theta})$. The activities are then modified by the contextual influences. Depending on the visual input, the system often settles into an oscillatory state (Gray
and Singer, 1989; see the details in Li 1998b). Temporal averages of $g_x(x_{i\theta})$ over several oscillation cycles are used as the model's output. The nature of the computation performed by the model is determined largely by the horizontal connections $J$ and $W$, which are local (spanning only a few hypercolumns), and translation and rotation invariant (Fig. 1B).
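As an illustration of these dynamics, here is a minimal Euler-integration sketch for a single excitatory-inhibitory pair. The lateral terms ($J$, $W$, $\psi$) are omitted for brevity and the parameter values are invented for the sketch; they are not those of the paper:

```python
import numpy as np

alpha_x, alpha_y, J0, Ic, I0 = 1.0, 1.0, 0.8, 1.0, 0.85  # illustrative values

def g_x(x):  # sigmoid-like firing rate of the excitatory cell
    return 1.0 / (1.0 + np.exp(-4.0 * (x - 1.0)))

def g_y(y):  # activation function of the inhibitory interneuron
    return np.maximum(0.0, y)

x, y, dt, I_input = 0.0, 0.0, 0.01, 2.0   # I_input stands in for I_{i theta}
trace = []
for _ in range(2000):
    dx = -alpha_x * x - g_y(y) + J0 * g_x(x) + I_input + I0
    dy = -alpha_y * y + g_x(x) + Ic
    x, y = x + dt * dx, y + dt * dy
    trace.append(g_x(x))

print(f"response g_x(x) after integration = {trace[-1]:.3f}")
```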
[Figure 2 appears here. Each panel shows the input ($I_{i\theta}$) and the model output. A: Pop out, $(r, z) = (2.5, 3.3)$. B: No pop out, $(r, z) = (0.38, -0.9)$. C: Cross among bars, $(r, z) = (2.4, 7.1)$. D: Bar among crosses, $(r, z) = (1.5, 0.8)$.]
Figure 2: Visual search examples plotted by the model inputs and outputs. A: A single distinctive feature, the horizontal bar in the target, enables pop out. This target is the most salient (measured as the saliency of the horizontal bar in the target) spot in the image. B: The target does not pop out since neither of its features, a horizontal and a 45° bar, is unique in the image. The target is less salient than average in the image. C and D demonstrate the asymmetry in a target-distractor pair. C: The cross is the most salient (measured by the saliency of the horizontal bar) spot in the image. The pop out strength is stronger than in A. D: The target bar does not pop out.
The model was applied to a variety of input patterns, as shown in the examples in the figures. The input values $I_{i\theta}$ are the same for all visible bars in each example. The differences in the outputs are caused by intracortical interactions. They become significant about one membrane time constant after the initial neural response (Li, 1998b). The widths of the bars in the figures are proportional to input and output strengths. The plotted region in each picture is often a small region of an extended image. The same model parameters (e.g. the dependence of the synaptic weights on distances and orientations, the thresholds and gains in the functions $g_x()$ and $g_y()$, and the level of input noise in $I_0$) are used for all the simulation examples.

We define the net saliency $S_i$ at each grid point $i$ as that of the most activated bar. Let $\bar{S}$ and $\sigma_S$ be the mean and standard deviation of the saliencies of all grid points with visible stimuli. Let $r_i \equiv S_i / \bar{S}$ and $z_i \equiv (S_i - \bar{S}) / \sigma_S$. A highly salient point $i$ should have large values of $(r_i, z_i)$; in particular, both $r_i$ and $z_i$ should be larger than 1. For larger targets that occupy more than one grid point, the relative saliency measure of the target is that of the most salient grid point on the target.
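A small sketch of how these scores are computed; the saliency values below are invented purely for illustration:

```python
import numpy as np

# hypothetical saliency map: S_i of the most active bar at each grid point
S = np.array([0.42, 0.45, 0.44, 0.43, 0.95, 0.41, 0.46, 0.44])
target_index = 4                       # assumed location of the target

S_bar, sigma_S = S.mean(), S.std()
r = S[target_index] / S_bar            # r_i = S_i / mean saliency
z = (S[target_index] - S_bar) / sigma_S  # z_i = (S_i - mean) / std
print(f"(r, z) = ({r:.2f}, {z:.2f})")  # both well above 1 => target pops out
```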
Fig. 2A,B compare the state of the target (a horizontal bar plus a 45° bar) in two different contexts. Against a texture of 45° bars it is highly salient because of its unique horizontal bar. Against distractors containing both 45° and horizontal bars it is much less salient, because only the conjunction of the horizontal and 45° bars distinguishes it. Fig. 2C,D exhibit search asymmetry. The horizontal bar in the target is unique in the image of Fig. 2A,C, which leads to pop out, and each target sits at the most salient location in the respective images. On the other hand, no feature in the targets of Fig. 2B,D is unique. These examples are consistent with the psychophysical
[Figure 3 appears here: five columns of input stimuli, with figure and ground switched between the top and bottom rows. A: closed vs open. B: parallel vs convergent. C: short vs. long. D: straight vs. curved. E: circle vs ellipse. The $(r, z)$ scores shown include $(1.02, 0.4)$, $(1.1, 9.7)$, $(0.99, -0.06)$, $(1.02, 0.3)$, $(0.89, -1.4)$, $(1.05, 0.7)$, $(1.17, 1.9)$, $(1.06, 1.07)$, $(1.09, 1.12)$, and $(1.13, 2.8)$.]
Figure 3: Five typical examples, one column each, of visual search asymmetry as simulated in the model. The input stimuli are plotted; the target saliency $r$, $z$ scores are indicated below each of them. All input bars are of the same intermediate input contrast. The roles of figure and ground are switched from the top to the bottom rows.
theories mentioned in the introduction. Further, we note that because intracortical interactions link mostly neurons preferring similar orientations, two very different orientations can be viewed as independent features. The pop out is stronger in Fig. 2C than Fig. 2A since horizontal differs more from vertical (90°) than from 45°. The V1 orientation selective RFs and orientation specific horizontal connections provide the neural basis for orientation as one of the primitive feature dimensions. In fact, the contextual influences between image features imply that saliency values depend on detailed geometrical relationships between features within and between a target or distractor and its nearby targets or distractors (see Fig. 2B). The relative ease of searches varies continuously from extreme pop out to slow serial searches depending on the specific stimuli, as suggested by Duncan and Humphreys (1989).

Further interesting examples of search asymmetry include cases for which neither target nor distractors have a primitive feature (such as color or orientation) that is absent in the other. Asymmetry is much weaker but still present. Figure 3 shows some typical examples. Although the saliencies of the more salient targets are only fractionally higher than the average feature saliency in the rest of the image, this fraction is significant when the standard deviation $\sigma_S$ of the saliencies is small or when $z$ is large enough, thus making the search task easier.
3 Summary and Discussion
Early psychophysical studies (Treisman et al 1990) suggested that most aspects of
visual search involve mechanisms of early vision. However, it has never been clear
which visual areas or neural mechanisms might be responsible. To the best of my
knowledge, this model is the first non-phenomenological model to understand the
Figure 4: Four examples of model performance under various inputs. Each plots the visual input image at the top and the most activated bars in V1 cell outputs (using a threshold) at the bottom. Every visible bar in a given input image has the same input strength. A, B, and C demonstrate that the texture region boundaries have the highest output saliencies. D shows that the smooth contours are detected as the most salient against a background of noise.
neural bases of visual search phenomena (see Rubenstein and Sagi (1990) for a model of asymmetry using variances of the local image filter responses). This paper has shown that intra-cortical interactions in V1 can account for the qualitative phenomena of pop-out and asymmetry in visual search, assuming that the ease of detection is directly determined by the saliencies of targets. Of course, the task of search requires decision making and often visual attention, especially when the target does not spontaneously pop out. The quantitative search times can only be modeled on the basis of an assumption of specific mechanisms for attention and decision making. Our model suggests, nevertheless, that pre-attentive V1 mechanisms play a significant and controlling role in such tasks. Furthermore, it suggests that some otherwise intractable phenomena can be understood without resorting to additional concepts such as textons (Julesz 1981) or defining certain image properties (such as closure and straightness) as having standard or reference values.

Our current implementation of V1 is still very simplistic. We have not yet included color, motion, or stereo inputs, nor multiscale sampling. Further, our input sampling density is very low. Consequently, the model cannot simulate many of the more complex input stimuli used in psychophysical experiments (Treisman and Gormican, 1988). An extended implementation is needed to test whether V1 mechanisms alone can qualitatively account for all or most types of search pop-out and asymmetries. Physiological evidence (Gilbert 1992) suggests that intracortical connections tend to link neurons with similar selectivities in other dimensions, such as color and stereo, in addition to orientation. This supports the idea that color, motion, and disparity are also primitive visual coding dimensions like orientation. We believe that the example in Fig. 2A,B demonstrating pop-out versus serial search would be more convincing if color were included to simulate, for instance, a red 'X' among green 'X's with and without red 'O's. Our current model does not explain why a slightly tilted line pops out more readily from vertical line distractors than the reverse. This is because our V1 model idealistically assumes rotational symmetry, and so vertical is not distinguished from other orientations. Neither our visual environment nor our visual system is in fact rotationally invariant.

The V1 model was originally proposed to account for pre-attentive contour enhancement and visual segmentation (Li 1998a, 1998b). The contextual influences mediated by the intracortical interactions enable each V1 neuron to process inputs from a local image area larger than its classical receptive field. This enables cortical neurons to detect image locations where translation invariance in the input image breaks down, and highlight these image locations with higher neural activities, making them conspicuous. These highlights mark candidate locations for image region (or object surface) boundaries, smooth contours, and small figures against backgrounds, serving the purpose of pre-attentive segmentation. Fig. 4 demonstrates the performance of the model for pre-attentive segmentation. In each example, the visual inputs and the most salient outputs are shown. All examples are simulated using exactly the same model parameters as those used in the examples of visual search.

It is not too surprising that a model of pre-attentive segmentation in V1 can explain visual search phenomena. Indeed, pop out has been commonly understood as a sign of pre-attentive segmentation. Our model further suggests that asymmetry in visual search is partly a side-effect of pre-attentive segmentation. Our V1 model can in turn be improved using visual search as a diagnostic tool.
References

[1] R. J. Douglas and K. A. Martin (1990) "Neocortex" in Synaptic Organization of the Brain, ed. G. M. Shepherd. (Oxford University Press), 3rd Edition, pp. 389-438.
[2] Duncan J. and Humphreys G. Psychological Review 96: 1-26 (1989).
[3] C. D. Gilbert (1992) Neuron. 9(1), 1-13.
[4] C. M. Gray and W. Singer (1989) Proc. Natl. Acad. Sci. USA 86, 1698-1702.
[5] B. Julesz (1981) Nature 290, 91-97.
[6] M. K. Kapadia, M. Ito, C. D. Gilbert, and G. Westheimer (1995) Neuron. 15(4), 843-56.
[7] J. J. Knierim and D. C. van Essen (1992) J. Neurophysiol. 67, 961-980.
[8] Z. Li (1998a) in Theoretical Aspects of Neural Computation, Eds. Wong, K. Y. M., King, I., and D.-Y. Yeung, Springer-Verlag, 1998.
[9] Z. Li (1998b) Neural Computation 10(4), 903-940.
[10] K. S. Rockland and J. S. Lund (1983) J. Comp. Neurol. 216, 303-318.
[11] Rubenstein B. and Sagi D. "…asymmetries." J. Opt. Soc. Am. A 9: 1632-1643 (1990).
[12] Treisman A., Cavanagh, P., Fischer B., Ramachandran V. S., and R. von der Heydt in Visual Perception, the Neurophysiological Foundations, Eds. L. Spillmann and J. S. Werner, 1990, Academic Press.
[13] Treisman A. and Gormican S. (1988) Psychological Rev. 95, 15-48.
[14] E. L. White (1989) Cortical Circuits (Birkhauser).
676 | 1,619 | Source Separation as a By-Product of Regularization

Sepp Hochreiter
Fakultät für Informatik
Technische Universität München
80290 München, Germany
hochreit@informatik.tu-muenchen.de

Jürgen Schmidhuber
IDSIA
Corso Elvezia 36
6900 Lugano, Switzerland
juergen@idsia.ch
Abstract
This paper reveals a previously ignored connection between two
important fields: regularization and independent component analysis (ICA). We show that at least one representative of a broad
class of algorithms (regularizers that reduce network complexity)
extracts independent features as a by-product. This algorithm is
Flat Minimum Search (FMS), a recent general method for finding
low-complexity networks with high generalization capability. FMS
works by minimizing both training error and required weight precision. According to our theoretical analysis the hidden layer of
an FMS-trained autoassociator attempts at coding each input by
a sparse code with as few simple features as possible. In experiments the method extracts optimal codes for difficult versions of
the "noisy bars" benchmark problem by separating the underlying
sources, whereas ICA and PCA fail. Real world images are coded
with fewer bits per pixel than by ICA or PCA.
1 INTRODUCTION
In the field of unsupervised learning several information-theoretic objective functions (OFs) have been proposed to evaluate the quality of sensory codes. Most OFs focus on properties of the code components; we refer to them as code component-oriented OFs, or COCOFs. Some COCOFs explicitly favor near-factorial, minimally redundant codes of the input data [2, 17, 23, 7, 24] while others favor local codes [22, 3, 15]. Recently there has also been much work on COCOFs encouraging biologically plausible sparse distributed codes [19, 9, 25, 8, 6, 21, 11, 16].

While COCOFs express desirable properties of the code itself they neglect the costs of constructing the code from the data. E.g., coding input data without redundancy may be very expensive in terms of information required to describe the code-generating network, which may need many finely tuned free parameters. We believe that one of sensory coding's objectives should be to reduce the cost of code generation through data transformations, and postulate that an important scarce resource is the bits required to describe the mappings that generate and process the codes.

Hence we shift the point of view and focus on the information-theoretic costs of code generation. We use a novel approach to unsupervised learning called "low-complexity coding and decoding" (LOCOCODE [14]). Without assuming particular goals such as data compression, subsequent classification, etc., but in the spirit of research on minimum description length (MDL), LOCOCODE generates so-called lococodes that (1) convey information about the input data, (2) can be computed from the data by a low-complexity mapping (LCM), and (3) can be decoded by an LCM. We will see that by minimizing coding/decoding costs LOCOCODE can yield efficient, robust, noise-tolerant mappings for processing inputs and codes.
Lococodes through regularizers. To implement LOCOCODE we apply regularization to an autoassociator (AA) whose hidden layer activations represent the code.
The hidden layer is forced to code information about the input data by minimizing
training error; the regularizer reduces coding/decoding costs. Our regularizer of
choice will be Flat Minimum Search (FMS) [13].
2
FLAT MINIMUM SEARCH: REVIEW AND ANALYSIS
FMS is a general gradient-based method for finding low-complexity networks with
high generalization capability. FMS finds a large region in weight space such that
each weight vector from that region has similar small error. Such regions are called
"flat minima". In MDL terminology, few bits of information are required to pick a
weight vector in a "flat" minimum (corresponding to a low-complexity network) the weights may be given with low precision. FMS automatically prunes weights
and units, and reduces output sensitivity with respect to remaining weights and
units. Previous FMS applications focused on supervised learning [12, 13].
Notation. Let $O$, $H$, $I$ denote index sets for output, hidden, and input units, respectively. For $l \in O \cup H$, the activation $y^l$ of unit $l$ is $y^l = f(s_l)$, where $s_l = \sum_m w_{lm} y^m$ is the net input of unit $l$ ($m \in H$ for $l \in O$ and $m \in I$ for $l \in H$), $w_{lm}$ denotes the weight on the connection from unit $m$ to unit $l$, $f$ denotes the activation function, and for $m \in I$, $y^m$ denotes the $m$-th component of an input vector. $W = |(O \times H) \cup (H \times I)|$ is the number of weights.
Algorithm. FMS's objective function $E$ features an unconventional error term:

$$B = \sum_{i,j:\, i \in O \cup H} \log \sum_{k \in O} \left( \frac{\partial y^k}{\partial w_{ij}} \right)^2 + W \log \sum_{k \in O} \left( \sum_{i,j:\, i \in O \cup H} \frac{ \left| \partial y^k / \partial w_{ij} \right| }{ \sqrt{ \sum_{k \in O} \left( \partial y^k / \partial w_{ij} \right)^2 } } \right)^2$$

$E = E_q + \lambda B$ is minimized by gradient descent, where $E_q$ is the training set mean squared error (MSE), and $\lambda$ a positive "regularization constant" scaling $B$'s influence. Choosing $\lambda$ corresponds to choosing a tolerable error level (there is no a priori "optimal" way of doing so). $B$ measures the weight precision (number of bits needed to describe all weights in the net). Given a constant number of output units, FMS can be implemented efficiently, namely, with standard backprop's order of computational complexity [13].
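A numpy sketch of $B$, based on the formula as reconstructed above rather than the authors' code; `dy_dw[k, m]` is assumed to hold the Jacobian entry $\partial y^k / \partial w_m$ for output unit $k$ and weight index $m$:

```python
import numpy as np

def fms_B(dy_dw):
    """Sketch of the FMS error term B; dy_dw has shape (K outputs, W weights)."""
    eps = 1e-12
    col = (dy_dw ** 2).sum(axis=0) + eps        # sum_k (dy^k/dw)^2, per weight
    T1 = np.log(col).sum()                      # first term of B
    unit = np.abs(dy_dw) / np.sqrt(col)         # normalized sensitivities
    T2 = dy_dw.shape[1] * np.log((unit.sum(axis=1) ** 2).sum() + eps)
    return T1 + T2

rng = np.random.default_rng(0)
print(fms_B(rng.normal(size=(3, 20))))          # 3 outputs, W = 20 weights
```

In practice the Jacobian entries would come from backprop through the trained autoassociator; the random matrix here only exercises the arithmetic.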
2.1 FMS: A Novel Analysis
Simple basis functions (BFs). A BF is the function determining the activation of a code component in response to a given input. Minimizing $B$'s term

$$T_1 := \sum_{i,j:\, i \in O \cup H} \log \sum_{k \in O} \left( \frac{\partial y^k}{\partial w_{ij}} \right)^2$$

obviously reduces output sensitivity with respect to weights (and therefore units). $T_1$ is responsible for pruning weights (and, therefore, units). $T_1$ is one reason why low-complexity (or simple) BFs are preferred: weight precision (or complexity) is mainly determined by $\partial y^k / \partial w_{ij}$.
Sparseness. Because $T_1$ tends to make unit activations decrease to zero it favors sparse codes. But $T_1$ also favors a sparse hidden layer in the sense that few hidden units contribute to producing the output. $B$'s second term

$$T_2 := W \log \sum_{k \in O} \left( \sum_{i,j:\, i \in O \cup H} \frac{ \left| \partial y^k / \partial w_{ij} \right| }{ \sqrt{ \sum_{k \in O} \left( \partial y^k / \partial w_{ij} \right)^2 } } \right)^2$$

punishes units with similar influence on the output. We reformulate it:

$$T_2 = W \log \sum_{i,j:\, i \in O \cup H} \, \sum_{u,v:\, u \in O \cup H} \, \sum_{k \in O} \frac{ \left| \partial y^k / \partial w_{ij} \right| \left| \partial y^k / \partial w_{uv} \right| }{ \sqrt{ \sum_{k \in O} \left( \partial y^k / \partial w_{ij} \right)^2 } \sqrt{ \sum_{k \in O} \left( \partial y^k / \partial w_{uv} \right)^2 } }$$

See intermediate steps in [14]. We observe: (1) an output unit that is very sensitive with respect to two given hidden units will heavily contribute to $T_2$ (compare the numerator in the last term of $T_2$). (2) This large contribution can be reduced by making both hidden units have large impact on other output units (see denominator in the last term of $T_2$).
Few separated basis functions. Hence FMS tries to figure out a way of using
(1) as few BFs as possible for determining the activation of each output unit, while
simultaneously (2) using the same BFs for determining the activations of as many
output units as possible (common BFs). (1) and T1 separate the BFs: the force towards simplicity (see T1) prevents input information from being channelled through
a single BF; the force towards few BFs per output makes them non-redundant. (1)
and (2) cause few BFs to determine all outputs.
Summary. Collectively T1 and T2 (which make up B) encourage sparse codes
based on few separated simple basis functions producing all outputs. Due to space
limitations a more detailed analysis (e.g. linear output activation) had to be left to
a TR [14] (on the WWW).
3 EXPERIMENTS
We compare LOCOCODE to "independent component analysis" (ICA, e.g., [5, 1,
4, 18]) and "principal component analysis" (PCA, e.g., [20]). ICA is realized by
Cardoso's JADE algorithm, which is based on whitening and subsequent joint diagonalization of 4th-order cumulant matrices. To measure the information conveyed
by resulting codes we train a standard backprop net on the training set used for
code generation. Its inputs are the code components; its task is to reconstruct the
original input. The test set consists of 500 off-training set exemplars (in the case
of real world images we use a separate test image). Coding efficiency is the average
number of bits needed to code a test set input pixel. The code components are
scaled to the interval [0,1] and partitioned into discrete intervals. Assuming independence of the code components we estimate the probability of each discrete code
value by Monte Carlo sampling on the training set. To obtain the test set codes'
bits per pixel (Shannon's optimal value) the average sum of all negative logarithms
of code component probabilities is divided by the number of input components. All
details necessary for reimplementation are given in [14].
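A sketch of this coding-efficiency estimate follows; the log base (2, for bits), the clipping of bin indices, and the tiny smoothing constant are assumptions filled in for the sketch:

```python
import numpy as np

def bits_per_pixel(train_codes, test_codes, n_pixels, n_bins=10):
    """Estimate bits per input pixel: discretize each code component into
    n_bins intervals, estimate interval probabilities on the training
    codes, then average the negative log2-probabilities of test codes."""
    total = 0.0
    for j in range(train_codes.shape[1]):   # components assumed independent
        lo, hi = train_codes[:, j].min(), train_codes[:, j].max()
        def binned(c):
            idx = ((c - lo) / (hi - lo + 1e-12) * n_bins).astype(int)
            return np.clip(idx, 0, n_bins - 1)
        p = np.bincount(binned(train_codes[:, j]), minlength=n_bins) + 1e-12
        p = p / p.sum()
        total += -np.log2(p[binned(test_codes[:, j])]).sum()
    return total / (test_codes.shape[0] * n_pixels)

rng = np.random.default_rng(0)
codes_tr, codes_te = rng.random((500, 10)), rng.random((500, 10))  # toy stand-ins
print(bits_per_pixel(codes_tr, codes_te, n_pixels=25))
```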
Noisy bars (adapted from [10, 11]). The input is a 5 × 5 pixel grid with horizontal and vertical bars at random positions. The task is to extract the independent features (the bars). Each of the 10 possible bars appears with probability 1/5. In contrast to [10, 11] we allow for bar type mixing; this makes the task harder. Bar intensities vary in [0.1, 0.5]; input units that see a pixel of a bar are activated correspondingly, others adopt activation −0.5. We add Gaussian noise with variance 0.05 and mean 0 to each pixel. For ICA and PCA we have to provide information about the number (ten) of independent sources (tests with n assumed sources will be denoted by ICA-n and PCA-n). LOCOCODE does not require this; using 25 hidden units (HUs) we expect LOCOCODE to prune the 15 superfluous HUs.
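A possible generator for these inputs, as a hedged sketch: the per-bar probability p = 1/5 is the reconstructed value from the text above, and the handling of overlapping bars (last write wins) is an assumption:

```python
import numpy as np

def noisy_bars(n=500, size=5, p=0.2, seed=0):
    """Generate n noisy-bars inputs on a size x size grid, flattened."""
    rng = np.random.default_rng(seed)
    data = np.full((n, size, size), -0.5)
    for img in data:
        for r in range(size):                      # horizontal bars
            if rng.random() < p:
                img[r, :] = rng.uniform(0.1, 0.5)
        for c in range(size):                      # vertical bars (mixing allowed)
            if rng.random() < p:
                img[:, c] = rng.uniform(0.1, 0.5)
    data += rng.normal(0.0, np.sqrt(0.05), data.shape)  # pixel noise, var 0.05
    return data.reshape(n, size * size)

print(noisy_bars().shape)                          # (500, 25)
```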
Results. See Table 1. While the reconstruction errors of all methods are similar, LOCOCODE has the best coding efficiency. 15 of the 25 HUs are indeed automatically pruned: LOCOCODE finds an optimal factorial code which exactly mirrors the pattern generation process. PCA codes and ICA-15 codes, however, are unstructured and dense. While ICA-10 codes are almost sparse and do recognize some sources, the sources are not clearly separated like with LOCOCODE; compare the weight patterns shown in [14].
Real world images. Now we use more realistic input data, namely subsections of: 1) the aerial shot of a village, 2) an image of wood cells, and 3) an image of a striped piece of wood. Each image has 150 × 150 pixels, each taking on one of 256 gray levels. 7 × 7 (5 × 5 for village) pixel subsections are randomly chosen as training inputs. Test sets stem from images similar to 1), 2), and 3).

Results. For the village image LOCOCODE discovers on-center-off-surround hidden units forming a sparse code. For the other two images LOCOCODE also finds appropriate feature detectors; see the weight patterns shown in [14]. Using its compact, low-complexity features it always codes more efficiently than ICA and PCA.
exp.      field  meth.  comp.  rec.err  code    10     20     50     100
bars      5x5    LOC    10     1.05     sparse  0.584  0.836  1.163  1.367
bars      5x5    ICA    10     1.02     sparse  0.811  1.086  1.446  1.678
bars      5x5    PCA    10     1.03     dense   0.796  1.062  1.418  1.655
bars      5x5    ICA    15     0.71     dense   1.189  1.604  2.142  2.502
bars      5x5    PCA    15     0.72     dense   1.174  1.584  2.108  2.469
village   5x5    LOC    8      1.05     sparse  0.436  0.622  0.895  1.068
village   5x5    ICA    8      1.04     sparse  0.520  0.710  0.978  1.165
village   5x5    PCA    8      1.04     dense   0.474  0.663  0.916  1.098
village   5x5    ICA    10     1.11     sparse  0.679  0.934  1.273  1.495
village   5x5    PCA    10     0.97     dense   0.578  0.807  1.123  1.355
village   7x7    LOC    10     8.29     sparse  0.250  0.368  0.547  0.688
village   7x7    ICA    10     7.90     dense   0.318  0.463  0.652  0.796
village   7x7    PCA    10     9.21     dense   0.315  0.461  0.648  0.795
village   7x7    ICA    15     6.57     dense   0.477  0.694  0.981  1.198
village   7x7    PCA    15     8.03     dense   0.474  0.690  0.972  1.189
cell      7x7    LOC    11     0.840    sparse  0.457  0.611  0.814  0.961
cell      7x7    ICA    11     0.871    sparse  0.468  0.622  0.829  0.983
cell      7x7    PCA    11     0.722    sparse  0.452  0.610  0.811  0.960
cell      7x7    ICA    15     0.360    sparse  0.609  0.818  1.099  1.315
cell      7x7    PCA    15     0.329    dense   0.581  0.798  1.073  1.283
piece     7x7    LOC    4      0.831    sparse  0.207  0.269  0.347  0.392
piece     7x7    ICA    4      0.856    sparse  0.207  0.276  0.352  0.400
piece     7x7    PCA    4      0.830    sparse  0.207  0.269  0.348  0.397
piece     7x7    ICA    10     0.716    sparse  0.535  0.697  0.878  1.004
piece     7x7    PCA    10     0.534    sparse  0.448  0.590  0.775  0.908

Table 1: Overview of experiments: name of experiment, input field size, coding method, number of relevant code components (code size), reconstruction error, nature of code observed on the test set. PCA's and ICA's code sizes need to be prewired. LOCOCODE's, however, are found automatically (we always start with 25 HUs). The final 4 columns show the coding efficiency measured in bits per pixel, assuming the real-valued HU activations are partitioned into 10, 20, 50, and 100 discrete intervals. LOCOCODE codes most efficiently.
4 CONCLUSION
According to our analysis LOCOCODE attempts to describe single inputs with as few and as simple features as possible. Given the statistical properties of many visual inputs (with few defining features), this typically results in sparse codes. Unlike objective functions of previous methods, however, LOCOCODE's does not contain an explicit term enforcing, say, sparse codes; sparseness or independence are not viewed as good things a priori. Instead we focus on the information-theoretic complexity of the mappings used for coding and decoding. The resulting codes typically compromise between conflicting goals. They tend to be sparse and exhibit low but not minimal redundancy, if the cost of minimal redundancy is too high.

Our results suggest that LOCOCODE's objective may embody a general principle of unsupervised learning going beyond previous, more specialized ones. We see that there is at least one representative (FMS) of a broad class of algorithms (regularizers that reduce network complexity) which (1) can do optimal feature extraction as a by-product, (2) outperforms traditional ICA and PCA on visual source separation tasks, and (3) unlike ICA does not even need to know the number of independent sources in advance. This reveals an interesting, previously ignored connection between regularization and ICA, and may represent a first step towards unification of regularization and unsupervised learning.
More. Due to space limitations, much additional theoretical and experimental
analysis had to be left to a tech report (29 pages, 20 figures) on the WWW: see
[14].
Acknowledgments. This work was supported by DFG grant SCHM 942/3-1 and
DFG grant BR 609/10-2 from "Deutsche Forschungsgemeinschaft".
References

[1] S. Amari, A. Cichocki, and H. H. Yang. A new learning algorithm for blind signal separation. In David S. Touretzky, Michael C. Mozer, and Michael E. Hasselmo, editors, Advances in Neural Information Processing Systems 8, pages 757-763. The MIT Press, Cambridge, MA, 1996.
[2] H. B. Barlow, T. P. Kaushal, and G. J. Mitchison. Finding minimum entropy codes. Neural Computation, 1(3):412-423, 1989.
[3] H. G. Barrow. Learning receptive fields. In Proceedings of the IEEE 1st Annual Conference on Neural Networks, volume IV, pages 115-121. IEEE, 1987.
[4] A. J. Bell and T. J. Sejnowski. An information-maximization approach to blind separation and blind deconvolution. Neural Computation, 7(6):1129-1159, 1995.
[5] J.-F. Cardoso and A. Souloumiac. Blind beamforming for non Gaussian signals. IEE Proceedings-F, 140(6):362-370, 1993.
[6] P. Dayan and R. Zemel. Competition and multiple cause models. Neural Computation, 7:565-579, 1995.
[7] G. Deco and L. Parra. Nonlinear features extraction by unsupervised redundancy reduction with a stochastic neural network. Technical report, Siemens AG, ZFE ST SN 41, 1994.
[8] D. J. Field. What is the goal of sensory coding? Neural Computation, 6:559-601, 1994.
[9] P. Földiák and M. P. Young. Sparse coding in the primate cortex. In M. A. Arbib, editor, The Handbook of Brain Theory and Neural Networks, pages 895-898. The MIT Press, Cambridge, Massachusetts, 1995.
[10] G. E. Hinton, P. Dayan, B. J. Frey, and R. M. Neal. The wake-sleep algorithm for unsupervised neural networks. Science, 268:1158-1161, 1995.
[11] G. E. Hinton and Z. Ghahramani. Generative models for discovering sparse distributed representations. Philosophical Transactions of the Royal Society B, 352:1177-1190, 1997.
[12] S. Hochreiter and J. Schmidhuber. Simplifying nets by discovering flat minima. In G. Tesauro, D. S. Touretzky, and T. K. Leen, editors, Advances in Neural Information Processing Systems 7, pages 529-536. MIT Press, Cambridge MA, 1995.
[13] S. Hochreiter and J. Schmidhuber. Flat minima. Neural Computation, 9(1):1-42, 1997.
[14] S. Hochreiter and J. Schmidhuber. LOCOCODE. Technical Report FKI-222-97, Revised Version, Fakultät für Informatik, Technische Universität München, 1998.
[15] T. Kohonen. Self-Organization and Associative Memory. Springer, second ed., 1988.
[16] M. S. Lewicki and B. A. Olshausen. Inferring sparse, overcomplete image codes using an efficient coding framework. In M. I. Jordan, M. J. Kearns, and S. A. Solla, editors, Advances in Neural Information Processing Systems 10, 1998. To appear.
[17] R. Linsker. Self-organization in a perceptual network. IEEE Computer, 21:105-117, 1988.
[18] L. Molgedey and H. G. Schuster. Separation of independent signals using time-delayed correlations. Physical Review Letters, 72(23):3634-3637, 1994.
[19] M. C. Mozer. Discovering discrete distributed representations with iterative competitive learning. In R. P. Lippmann, J. E. Moody, and D. S. Touretzky, editors, Advances in Neural Information Processing Systems 3, pages 627-634. San Mateo, CA: Morgan Kaufmann, 1991.
[20] E. Oja. Neural networks, principal components, and subspaces. International Journal of Neural Systems, 1(1):61-68, 1989.
[21] B. A. Olshausen and D. J. Field. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381(6583):607-609, 1996.
[22] D. E. Rumelhart and D. Zipser. Feature discovery by competitive learning. In Parallel Distributed Processing, pages 151-193. MIT Press, 1986.
[23] J. Schmidhuber. Learning factorial codes by predictability minimization. Neural Computation, 4(6):863-879, 1992.
[24] S. Watanabe. Pattern Recognition: Human and Mechanical. Wiley, New York, 1985.
[25] R. S. Zemel and G. E. Hinton. Developing population codes by minimizing description length. In J. D. Cowan, G. Tesauro, and J. Alspector, editors, Advances in Neural Information Processing Systems 6, pages 11-18. San Mateo, CA: Morgan Kaufmann, 1994.
677 | 162 |
Mapping Classifier Systems
Into Neural Networks
Lawrence Davis
BBN Laboratories
BBN Systems and Technologies Corporation
10 Moulton Street
Cambridge, MA 02238
January 16, 1989
Abstract
Classifier systems are machine learning systems incorporating a genetic algorithm as the learning mechanism. Although they respond to inputs that neural networks can respond to, their internal structure, representation formalisms, and learning mechanisms differ markedly from those employed by neural network researchers in the same sorts of domains. As a result, one might conclude that these two types of machine learning formalisms are intrinsically different. This is one of two papers that, taken together, prove instead that classifier systems and neural networks are equivalent. In this paper, half of the equivalence is demonstrated through the description of a transformation procedure that will map classifier systems into neural networks that are isomorphic in behavior. Several alterations to the commonly-used paradigms employed by neural network researchers are required in order to make the transformation work. These alterations are noted and their appropriateness is discussed. The paper concludes with a discussion of the practical import of these results, and with comments on their extensibility.
1 Introduction
Classifier systems are machine learning systems that have been developed since the
1970s by John Holland and, more recently, by other members of the genetic algorithm
research community as well l . Classifier systems are varieties of genetic algorithms
- algorithms for optimization and learning. Genetic algorithms employ techniques
inspired by the process of biological evolution in order to "evolve" better and better
1 This paper has benefited from discussions with Wayne Mesard, Rich Sutton, Ron Williams, Stewart
Wilson, Craig Shaefer, David Montana, Gil Syswerda and other members of BARGAIN, the Boston Area
Research Group in Genetic Algorithms and Inductive Networks.
individuals that are taken to be solutions to problems such as optimizing a function,
traversing a maze, etc. (For an explanation of genetic algorithms, the reader is
referred to [Goldberg 1989].) Classifier systems receive messages from an external
source as inputs and organize themselves using a genetic algorithm so that they will
"learn" to produce responses for internal use and for interaction with the external
source.
This paper is one of two papers exploring the question of the formal relationship
between classifier systems and neural networks. As normally employed, the two sorts
of algorithms are probably distinct, although a procedure for translating the operation
of neural networks into isomorphic classifier systems is given in [Belew and Gherrity
1988]. The technique Belew and Gherrity use does not include the conversion of the
neural network learning procedure into the classifier system framework, and it appears
that the technique will not support such a conversion. Thus, one might conjecture that
the two sorts of machine learning systems employ learning techniques that cannot be
reconciled, although if there were a subsumption relationship, Belew and Gherrity's
result suggests that the set of classifier systems might be a superset of the set of
neural networks.
The reverse conclusion is suggested by consideration of the inputs that each sort
of learning algorithm processes. When viewed as "black boxes", both mechanisms
for learning receive inputs, carry out self-modifying procedures, and produce outputs.
The class of inputs that are traditionally processed by classifier systems - the class
of bit strings of a fixed length - is a subset of the class of inputs that have been
traditionally processed by neural networks. Thus, it appears that classifier systems
operate on a subset of the inputs that neural networks can process, when viewed as
mechanisms that can modify their behavior.
In fact, both these impressions are correct. One can translate classifier systems
into neural networks, preserving their learning behavior, and one can translate neural
networks into classifier systems, again preserving learning behavior. In order to do
so, however, some specializations of each sort of algorithm must be made. This
paper deals with the translation from classifier systems to neural networks and with
those specializations of neural networks that are required in order for the translation
to take place. The reverse translation uses quite different techniques, and is treated
in [Davis 1989].
The following sections contain a description of classifier systems, a description of
the transformation operator, discussions of the extensibility of the proof, comments
on some issues raised in the course of the proof, and conclusions.
2 Classifier Systems
A classifier system operates in the context of an environment that sends messages to
the system and provides it with reinforcement based on the behavior it displays. A
classifier system has two components - a message list and a population of rule-like
entities called classifiers. Each message on the message list is composed of bits, and
each has a pointer to its source (messages may be generated by the environment or
by a classifier.) Each classifier in the population of classifiers has three components:
a match string made up of the characters 0,1, and # (for "don't care"); a message
made up of the characters 0 and 1; and a strength. The top-level description of a
classifier system is that it contains a population of production rules that attempt to
match some condition on the message list (thus "classifying" some input) and post
their message to the message list, thus potentially affecting the envirorunent or other
classifiers. Reinforcement from the environment is used by the classifier system to
modify the strengths of its classifiers. Periodically, a genetic algorithm is invoked
to create new classifiers, which replace certain members of the classifier set. (For
an explanation of classifier systems, their potential as machine learning systems, and
their formal properties, the reader is referred to [Holland et al. 1986].)
Let us specify these processing stages more precisely. A classifier system operates
by cycling through a fixed list of procedures. In order, these procedures are:
Message List Processing. 1. Clear the message list. 2. Post the environmental messages to the message list. 3. Post messages to the message list from classifiers in the post set of the previous cycle. 4. Implement environmental reinforcement by analyzing the messages on the message list and altering the strength of classifiers in the post set of the previous cycle.
Form the Bid Set. 1. Determine which classifiers match a message in the
message list. A classifier matches a message if each bit in its match field matches its
corresponding message bit. A 0 matches a 0, a 1 matches a 1, and a # matches either
bit. The set of all matching classifiers forms the current bid set. 2. Implement bid
taxes by subtracting a portion of the strength of each classifier c in the bid set. Add
the strength taken from c to the strength of the classifier or classifiers that posted
messages matched by c in the prior step.
Form the Post Set. 1. If the bid set is larger than the maximum post set size,
choose classifiers stochastically to post from the bid set, weighting them in proportion
to the magnitude of their bid taxes. The set of classifiers chosen is the post set.
Reproduction. Reproduction generally does not occur on every cycle. When it
does occur, these steps are carried out: 1. Create n children from parents. Use
crossover and/or mutation, choosing parents stochastically but favoring the strongest
ones. (Crossover and mutation are two of the operators used in genetic algorithms.)
2. Set the strength of each child to equal the average of the strength of that child's
parents. (Note: this is one of many ways to set the strength of a new classifier.
The transformation will work in analogous ways for each of them.) 3. Remove n
members of the classifier population and add the n new children to the classifier
population.
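To make the matching step of this cycle concrete, here is a minimal sketch in Python (the helper names and the dictionary-based classifier representation are ours, not the paper's):

```python
def matches(match_field, message):
    """True if a classifier's match field matches a message.

    match_field: string over {'0', '1', '#'}, e.g. "01#"
    message:     string over {'0', '1'} of the same length.
    A 0 matches a 0, a 1 matches a 1, and a # matches either bit.
    """
    return all(f == '#' or f == b for f, b in zip(match_field, message))

def form_bid_set(classifiers, message_list):
    """Step 1 of 'Form the Bid Set': classifiers matching any message."""
    return [c for c in classifiers
            if any(matches(c['match'], msg) for msg in message_list)]
```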
3 Mapping Classifiers Into Classifier Networks
The mapping operator that I shall describe maps each classifier into a classifier
network. Each classifier network has links to environmental input units, links to
other classifier networks, and match, post, and message units. The weights on the
links leading to a match node and leaving a post node are related to the fields in
the match and message lists in the classifier. An additional link is added to provide
a bias term for the match node. (Note: it is assumed here that the environment
posts at most one message per cycle. Modifications to the transformation operator to
accommodate multiple environmental messages are described in the final comments
of this paper.)
Given a classifier system CS with n classifiers, each matching and sending messages of length m, we can construct an isomorphic neural network composed of n
classifier networks in the following way. For each classifier c in CS, we construct its
corresponding classifier network, composed of n match nodes, 1 post node, and m
message nodes. One match node (the environmental match node) has links to inputs
from the environment. Each of the other match nodes is linked to the message and
post node of another classifier network. The reader is referred to Figure 1 for an example of such a transformation.
Each match node in a classifier network has m + 1 incoming links. The weights on the first m links are derived by applying the following transformation to the m elements of c's match field: 0 is associated with weight -1, 1 is associated with weight 1, and # is associated with weight 0. The weight of the final link is set to m + 1 - l, where l is the number of links with weight 1. Thus, a classifier with match field (1 0 # 0 1) would have an associated network with weights on the links leading to its match nodes of 1, -1, 0, -1, 1, and 4. A classifier with match field (1 0 #) would have weights of 1, -1, 0, and 3.
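A sketch of this weight construction (the function name is ours); the assertions reproduce the two examples in the text:

```python
def match_weights(match_field):
    """Map a match field over {0, 1, #} to the m link weights plus the
    final bias-link weight m + 1 - l described above."""
    table = {'0': -1, '1': 1, '#': 0}
    weights = [table[ch] for ch in match_field]
    m = len(match_field)
    l = weights.count(1)          # number of links with weight 1
    weights.append(m + 1 - l)     # final link
    return weights

assert match_weights("10#01") == [1, -1, 0, -1, 1, 4]
assert match_weights("10#") == [1, -1, 0, 3]
```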
The weights on the links to each message node in the classifier network are set to equal the corresponding element of the classifier's message field. Thus, if the message field of the classifier were (0 1 0), the weights on the links leading to the three message nodes in the corresponding classifier network would be 0, 1, and 0. The weights on all other links in the classifier network are set to 1.
Each node in a classifier network uses a threshold function to determine its activation level. Match nodes have thresholds of m + 0.9. All other nodes have thresholds of 0.9. If a node's threshold is exceeded, the node's activation level is set to 1. If not, it is set to 0.
Each classifier network has an associated quantity called strength that may be
altered when the network is run, during the processing cycle described below.
A cycle of processing of a classifier system CS maps onto the following cycle of
processing in a set of classifier networks:
Message List Processing. 1. Compute the activation level of each message
node in CS. 2. If the environment supplies reinforcement on this cycle, divide that
reinforcement by the number of post nodes that are currently active, plus 1 if the
environment posted a message on the preceding cycle, and add the quotient to the
strength of each active post node's classifier network. 3. If there is a message on this
cycle from the environment, map it onto the first m environment nodes so that each
node associated with a 0 is off and each node associated with a 1 is on. Turn the final
environmental node on. If there is no environmental message, turn all environmental
nodes off.
Form the Bid Set. 1. Compute the activation level of each match node in
each classifier network. 2. Compute the activation level of each bid node in each
classifier network (the set of classifier networks with an active bid node is the bid
set). 3. Subtract a fixed proportion of the strength of each classifier network cn in
the bid set. Add this amount to the strength of those networks connected to an active
match node in cn. (Strength given to the environment passes out of the system.)
Form the Post Set. 1. If the bid set is larger than the maximum post set size,
choose networks stochastically to post from the bid set, weighting them in proportion
to the magnitude of their bid taxes. The set of networks chosen is the post set. (This
might be viewed as a stochastic n-winners-take-all procedure).
Reproduction. If this is a cycle on which reproduction would occur in the
classifier system, carry out its analog in the neural network in the following way.
1. Create n children from parents. Use crossover and/or mutation, choosing parents
stochastically but favoring the strongest ones. The ternary alphabet composed of -1, 1, and 0 is used instead of the classifier alphabet of 0, 1, and #. After each operator
is applied, the final member of the match list is set to m + 1 - l. 2. Write over the
weights on the match links and the message links of n classifier networks to match
the weights in the children. Choose networks to be re-weighted stochastically, so that
the weakest ones are most likely to be chosen. Set the strength of each re-weighted
classifier network to be the average of the strengths of its parents.
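A sketch of this genetic step over the ternary weight vectors (single-point crossover and the small mutation rate are our illustrative choices, not the paper's):

```python
import random

def crossover(parent_a, parent_b):
    """Single-point crossover of the first m (ternary) match weights."""
    point = random.randrange(1, len(parent_a))
    return parent_a[:point] + parent_b[point:]

def mutate(weights, rate=0.02):
    """Point mutation over the ternary alphabet {-1, 0, 1}."""
    return [random.choice((-1, 0, 1)) if random.random() < rate else w
            for w in weights]

def make_child(parent_a, parent_b):
    """Create one child and recompute its final (bias) link weight."""
    child = mutate(crossover(parent_a, parent_b))
    m, l = len(child), child.count(1)
    return child + [m + 1 - l]
```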
It is simple to show that a classifier network match node will match a message
in just those cases in which its associated classifier matched a message. There are
three cases to consider. If the original match character was a #, then it matched any
message bit. The corresponding link weight is set to 0, so the state of the node it
comes from will not affect the activation of the match node it goes to. If the original
match character was a 1, then its message bit had to be a 1 for the message to be
matched. The corresponding link weight is set to 1, and we see by inspection of the
weight on the final link, the match node threshold, and the fact that no other type
of link has a positive weight, that every link with weight 1 must be connected to an
active node for the match node to be activated. Finally, the link weight corresponding
to a 0 is set to -1. If any of these links is connected to a node that is active, then the
effect is that of turning off a node connected to a link with weight 1, and we have
just seen that this will cause the match node to be inactive.
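This argument can also be checked exhaustively; a brute-force sketch using the matches and match_weights helpers above (threshold m + 0.9, with the bias link driven by an always-on unit):

```python
from itertools import product

def neural_match(match_field, bits):
    """Threshold-unit version of matching for one message."""
    w = match_weights(match_field)
    m = len(match_field)
    total = sum(wi * bi for wi, bi in zip(w[:m], bits)) + w[-1]
    return total > m + 0.9

for field in map("".join, product("01#", repeat=3)):
    for bits in product((0, 1), repeat=3):
        msg = "".join(str(b) for b in bits)
        assert neural_match(field, bits) == matches(field, msg)
```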
Given this correspondence in matching behavior, one can verify that a set of
classifier networks associated with a classifier system has the following properties:
During each cycle of processing of the classifier system, a classifier is in the bid set
in just those cases in which its associated network has an active bid node. Assuming
that both systems use the same randomizing technique, initialized in the same way,
the classifier is in the post set in just those cases when the network is in the post
set. Finally, the parents that are chosen for reproduction are the transforms of those
chosen in the classifier system, and the children produced are the transformations of
the classifier system parents. The two systems are isomorphic in operation, assuming
that they use the same random number generator.
[Figure 1 diagram: two classifier networks (strengths 49.3 and 21.95), each with message nodes (threshold 0.9), a post node (threshold 0.9), match nodes (threshold 3.9), and shared environment input nodes; the drawing itself is not recoverable from the text extraction.]
Figure 1: Result of mapping a classifier system with two classifiers into a neural network. Classifier 1 has match field (0 1 #), message field (1 1 0), and strength 49.3. Classifier 2 has match field (1 1 #), message field (0 1 1), and strength 21.95.
4 Concluding Comments
The transformation procedure described above will map a classifier system into a
neural network that operates in the same way. There are several points raised by the
techniques used to accomplish the mapping. In closing, let us consider four of them.
First, there is some excess complexity in the classifier networks as they are shown
here. In fact, one could eliminate all non-environmental match nodes and their
links, since one can determine whenever a classifier network is reweighted whether it matches the message of each other classifier network in the system. If so, one could introduce a link directly from the post node of the other classifier network to the post node of the new network. The match nodes to the environment are necessary, as
long as one cannot predict what messages the environment will post. Message nodes
are necessary as long as messages must be sent out to the environment. If not, they
and their incoming links could be eliminated as well. These simplifications have not
been introduced here because the extensions discussed next require the complexity
of the current architecture.
Second, on the genetic algorithm side, the classifier system considered here is an
extremely simple one. There are many extensions and refinements that have been
used by classifier system researchers. I believe that such refinements can be handled
by expanded mapping procedures and by modifications of the architecture of the
classifier networks. To give an indication of the way such modifications would go,
let us consider two sample cases. The first is the case of an environment that may
produce multiple messages on each cycle. To handle multiple messages, an additional
link must be added to each environmental match node with weight set to the match
node's threshold. This link will latch the match node. An additional match node
with links to the environment nodes must be added, and a latched counting node
must be attached to it. Given these two architectural modifications, the cycle is
modified as follows: During the message matching cycle, a series of subcycles is
carried out, one for each message posted by the environment. In each subcycle, an
environmental message is input and each environmental match node computes its
activation. The environmental match nodes are latched, so that each will be active
if it matched any environmental message. The count nodes will record how many
were matched by each classifier network. When bid strength is paid from a classifier
network to the posters of messages that it matched, the divisor is the number of
environmental messages matched as recorded by the count node, plus the number
of other messages matched. Finally, when new weights are written onto a classifier
network's links, they are written onto the match node connected to the count node
as well. A second sort of complication is that of pass-through bits - bits that
are passed from a message that is matched to the message that is posted. This
sort of mechanism can be implemented in an obvious fashion by complicating the
structure of the classifier network. Similar complications are produced by considering
multiple-message matching, negation, messages to effectors, and so forth. It is an
open question whether all such cases can be handled by modifying the architecture
and the mapping operator, but I have not yet found one that cannot be so handled.
Third, the classifier networks do not use the sigmoid activation functions that support hill-climbing techniques such as back-propagation. Further, they are recurrent networks rather than strict feed-forward networks. Thus, one might wonder whether
the fact that one can carry out such transformations should affect the behavior of
researchers in the field. This point is one that is taken up at greater length in the
companion paper. My conclusion there is that several of the techniques imported into
the neural network domain by the mapping appear to improve the performance of neural networks. These include tracking strength in order to guide the learning process,
using genetic operators to modify the network makeup, and using population-level
measurements in order to determine what aspects of a network to use in reproduction.
The reader is referred to [Montana and Davis 1989] for an example of the benefits
to be gained by employing these techniques.
Finally, one might wonder what the import of this proof is intended to be. In
my view, this proof and the companion proof suggest some exciting ways in which
one can hybridize the learning techniques of each field. One such approach and its
successful application to a real-world problem is characterized in [Montana and Davis
1989].
References
[1] Belew, Richard K. and Michael Gherrity, "Back Propagation for the Classifier
System", in preparation.
[2] Davis, Lawrence, "Mapping Neural Networks into Classifier Systems", submitted to the 1989 International Conference on Genetic Algorithms.
[3] Goldberg, David E. Genetic Algorithms in Search, Optimization, and Machine
Learning, Addison Wesley 1989.
[4] Holland, John H., Keith J. Holyoak, Richard E. Nisbett, and Paul R. Thagard,
Induction, MIT Press, 1986.
[5] Montana, David J. and Lawrence Davis, "Training Feedforward Neural Networks Using Genetic Algorithms", submitted to the 1989 International Joint
Conference on Artificial Intelligence.
678 | 1,620 | Temporally Asymmetric Hebbian Learning,
Spike Timing and Neuronal Response Variability
L.F. Abbott and Sen Song
Volen Center and Department of Biology
Brandeis University
Waltham MA 02454
Abstract
Recent experimental data indicate that the strengthening or weakening of
synaptic connections between neurons depends on the relative timing of
pre- and postsynaptic action potentials. A Hebbian synaptic modification
rule based on these data leads to a stable state in which the excitatory and
inhibitory inputs to a neuron are balanced, producing an irregular pattern
of firing. It has been proposed that neurons in vivo operate in such a
mode.
1 Introduction
Hebbian modification of network interconnections plays a central role in the study of learning in neural networks (Rumelhart and McClelland, 1986; Hertz et al., 1991). Most work
on Hebbian learning involves network models in which the activities of the individual units
are represented by continuous variables. A Hebbian learning rule, in this context, is specified by describing how network weights change as a function of the activities of the units
that transmit and receive signals across a given network connection. While analyses of
Hebbian learning along these lines have provided important results, direct application of
these ideas to neuroscience is hindered by the fact that real neurons cannot be adequately
described by continuous activity variables such as firing rates. Instead, the inputs and outputs of neurons are sequences of action potentials or spikes. All the information conveyed
by one neuron to another over any appreciable distance is carried by the temporal patterns
of action potential sequences. Rules by which synaptic connections between real neurons
are modified in a Hebbian manner should properly be expressed as functions of the relative
timing of the action potentials fired by the input (presynaptic) and output (postsynaptic)
neurons. Until recently, little information has been available about the exact dependence of
synaptic modification on pre- and postsynaptic spike timing (see, however, Levy and Steward, 1983; Gustafsson et al., 1987). New experimental results (Markram et al., 1997; Bell et al., 1997; Debanne et al., 1998; Zhang et al., 1998; Bi and Poo, 1999) have changed
this situation dramatically, and these allow us to study Hebbian learning in a manner that
is much more realistic and relevant to biological neural networks. The results may find
application in artificial neural networks as well.
2 Temporally Asymmetric LTP and LTD
The biological substrate for Hebbian learning in neuroscience is provided by long-term
potentiation (LTP) and long-term depression (LTD) of the synaptic connections between
neurons (see for example, Malenka and Nicoll, 1993). LTP is a long-lasting strengthening of synaptic efficacy associated with paired pre- and postsynaptic activity. LTD is
a long-lasting weakening of synaptic strength. In recent experiments on neocortical slices (Markram et al., 1997), hippocampal cells in culture (Bi and Poo, 1999), and in vivo studies of tadpole tectum (Zhang et al., 1998), induction of LTP required that presynaptic action potentials preceded postsynaptic firing by no more than about 20 ms. Maximal LTP occurred when presynaptic spikes preceded postsynaptic action potentials by less than a few
milliseconds. If presynaptic spikes followed postsynaptic action potentials, long-term depression rather than potentiation resulted. These results are summarized schematically in
Figure 1.
Figure 1: A model of the change in synaptic strength Δg produced by paired pre- and postsynaptic spikes occurring at times t_pre and t_post respectively. Positive changes correspond to LTP and negative to LTD. There is an abrupt transition at t_pre - t_post = 0. The units for Δg are arbitrary in this figure, but data indicate a maximum change of approximately 0.5% per spike pair.
The curve in Figure 1 is a caricature used to model the weight changes arising from pairings
of pre- and postsynaptic action potentials separated by various intervals of time. This curve
resembles the data from all three preparations discussed above, but a couple of assumptions have been made in its construction. The data indicate that there is a rapid transition
from LTP to LTD depending on whether the time difference between pre- and postsynaptic
spiking is positive or negative, but the existing data cannot resolve exactly what happens
at the transition point. We have assumed that there is a discontinuous jump from LTP to
LTD at this point. In addition, we assume that the area under the LTP side of the curve is
slightly less than the area under the LTD side. In Figure 1, this difference is imposed by
making the magnitude of LTD slightly greater than the magnitude of LTP, while both sides
of the curve have equal exponential fall-offs away from zero time difference. Alternately,
we could have given the LTD side a slower exponential fall-off and equal amplitude. The
data do not support either assumption unambiguously, nor do they indicate which area is
larger. The assumption that the area under the LTD side of the curve is larger than that under the LTP side is critical if the resulting synaptic modification rule is to be stable against
uncontrolled growth of synaptic strengths.
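A sketch of the curve just described (the amplitudes and time constant below are illustrative assumptions; the data constrain only a window of roughly 20 ms, a maximum change near 0.5% per pair, and slightly more total LTD than LTP, and which side the point dt = 0 falls on is our choice):

```python
import numpy as np

A_LTP, A_LTD = 0.005, 0.00525   # LTD amplitude made slightly larger
TAU = 20.0                      # ms; equal decay on both sides

def delta_g(t_pre, t_post):
    """Fractional change in synaptic strength for one spike pair."""
    dt = t_pre - t_post
    if dt < 0:                               # pre before post: LTP
        return A_LTP * np.exp(dt / TAU)
    return -A_LTD * np.exp(-dt / TAU)        # pre after post: LTD
```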
Hebb (1949) postulated that a synapse should be strengthened when the presynaptic neuron
is frequently involved in making the postsynaptic neuron fire an action potential. Causality
is an important element in Hebb's statement; synaptic potentiation should occur only if
there is a causal relationship between the pre- and postsynaptic spiking. The LTP/LTD rule
summarized in Figure 1 imposes causality through a tight timing requirement. The narrow
windows for LTP and LTD seen in the data, and the abrupt transition from potentiation to
depression near zero separation between pre- and postsynaptic spike times impose a strict
causality condition for LTP induction.
3 Response Variability
What are the implications of the synaptic modification rule summarized in Figure 1? To address this question, we introduce another topic that has been discussed extensively within
the computational neuroscience community in recent years, the origin of response variability (Softky and Koch, 1992 & 1994; Shadlen and Newsome, 1994 & 1998; Tsodyks
and Sejnowski, 1995; Amit and Brunel, 1997; Troyer and Miller, 1997a & b; Bugmann et al., 1997; van Vreeswijk and Sompolinsky, 1996 & 1998). Neurons can respond to
multiple synaptic inputs in two different modes of operation. Figure 2 shows membrane
potentials of a model neuron receiving 1000 excitatory and 200 inhibitory synaptic inputs.
Each input consists of an independent Poisson spike train driving a synaptic conductance.
The integrate-and-fire model neuron used in this example integrates these synaptic conductances as a simple capacitor-resistor circuit. To generate action potentials in this model,
we monitor the membrane potential and compare it to a threshold voltage. Whenever the
membrane potential reaches the threshold an action potential is "pasted" onto the membrane potential trace and the membrane potential is reset to a prescribed value.
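A minimal sketch of this model (every parameter value below is an illustrative assumption, not taken from the paper):

```python
import numpy as np

def simulate_lif(g_exc=0.01, g_inh=0.05, rate_hz=10.0, T=1000.0, dt=0.1,
                 n_exc=1000, n_inh=200, tau_m=20.0, v_rest=-70.0,
                 v_thresh=-54.0, v_reset=-60.0, e_exc=0.0, e_inh=-70.0):
    """Integrate-and-fire neuron driven by 1000 excitatory and 200
    inhibitory Poisson inputs; returns postsynaptic spike times (ms)."""
    p = rate_hz * dt / 1000.0          # spike probability per input per step
    v, spikes = v_rest, []
    for i in range(int(T / dt)):
        ge = g_exc * np.random.binomial(n_exc, p)   # excitatory drive
        gi = g_inh * np.random.binomial(n_inh, p)   # inhibitory drive
        v += dt * (-(v - v_rest) + ge * (e_exc - v) + gi * (e_inh - v)) / tau_m
        if v >= v_thresh:              # "paste" a spike and reset
            spikes.append(i * dt)
            v = v_reset
    return np.array(spikes)
```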
[Figure 2 panels A and B: membrane potential traces (mV) versus t (ms); the plots themselves are not recoverable from the text extraction.]
Figure 2: Regular and irregular firing modes of a model integrate-and-fire neuron. Upper panels
show the model with action potentials deactivated, and the dashed lines show the action potential
threshold. The lower figures show the model with action potentials activated. A) In the regular firing
mode, the average membrane potential without spikes is above threshold and the firing pattern is fast
and regular (note the different time scale in the lower panel). B) In the irregular firing mode, the
average membrane potential without spikes is below threshold and the firing pattern is slower and
irregular.
Figures 2A and 2B illustrate the two modes of operation. The upper panels of Figure 2 show
the membrane potential with the action potential generation mechanism of the model turned
off, and the lower panels show the membrane potential and spike sequences that result when
the action potential generation is turned on. In Figure 2A, the effect of the excitatory inputs
is strong enough relative to that of the inhibitory inputs so that the average membrane
potential, when action potential generation is blocked, is above the spike threshold of the
model. When the action potential mechanism is turned back on (lower panel of Figure
2A), this produces a fairly regular pattern of action potentials at a relatively high rate.
The total synaptic input attempts to charge the neuron above the threshold, but every time
the potential reaches the threshold it gets reset and starts charging again. In this regular
L. F Abbott and S. Song
72
firing mode of operation, the timing of the action potentials is determined primarily by the
charging rate of the cell, which is controlled by its membrane time constant. Since this does
not vary as a function of time, the firing pattern is regular despite the fact that the synaptic
input is varying.
Figure 2B shows the other mode of operation that produces an irregular firing pattern. In
the irregular firing mode, the average membrane potential is more hyperpolarized than the threshold
for action potential generation (upper panel of Figure 2B). In this mode, action potentials
are only generated when there is a fluctuation in the total synaptic current strong enough to
make the membrane potential cross the threshold. This results in slower and more irregular
firing (lower panel of Figure 2B). The irregular firing mode has a number of interesting
features (Shadlen and Newsome, 1994 & 1998; Tsodyks and Sejnowski, 1995; Amit and Brunel, 1997; Troyer and Miller, 1997a & b; Bugmann et al., 1997; van Vreeswijk and Sompolinsky, 1996 & 1998). First, it generates irregular firing patterns that are far closer
to the firing patterns seen in vivo than the patterns produced in the regular firing mode.
Second, responses to changes in the synaptic input are much more rapid in this mode, being
limited only by the synaptic rise time rather than the membrane time constant. Finally, the
timing of action potentials in the irregular firing mode is related to the timing of fluctuations
in the synaptic input rather than being determined primarily by the membrane time constant
of the cell.
[Figure 3 panels A and B: relative-probability histograms versus t_pre - t_post (ms), over roughly -20 to 20 ms; the plots themselves are not recoverable from the text extraction.]
Figure 3: Histograms indicating the relative probability of finding pre- and postsynaptic spikes separated by the indicated time interval. A) Regular firing mode. The probability is essentially flat and at the chance level of one. B) Irregular firing mode. There is an excess of presynaptic spikes shortly before a postsynaptic spike.
An important difference between the regular and irregular firing modes is illustrated in the
cross-correlograms shown in Figure 3 (Troyer and Miller, 1997b; Bugmann et al. 1997).
These indicate the probability that an action potential fired by the postsynaptic neuron is preceded or followed by a presynaptic spike separated by various intervals. The histogram
has been normalized so its value for pairings that are due solely to chance is one. The
histogram when the model is in the regular firing mode (Figure 3A) takes a value close to
one for almost all input-output spike time differences. This is a reflection of the fact that the
timing of individual action potentials in the regular firing mode is relatively independent
of the timing of the presynaptic inputs. In contrast, the histogram for a model neuron in
the irregular firing mode (Figure 3B) shows a much larger excess of presynaptic spikes
occurring shortly before the postsynaptic neuron fires. This excess reflects the fluctuations
in the total synaptic input that push the membrane potential up to the threshold and produce
a spike in the irregular firing mode. It indicates that, in this mode, there is a tight temporal
correlation between the timing of such fluctuations and output spikes.
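The histograms of Figure 3 can be estimated from spike trains as follows (a sketch; the bin width, window, and chance normalization are our reading of the text, and pre and post are assumed to be NumPy arrays of spike times in ms over a trial of length T):

```python
import numpy as np

def pre_post_histogram(pre, post, window=20.0, bin_ms=1.0, T=1000.0):
    """Normalized histogram of t_pre - t_post for all spike pairs within
    the window; chance pairings give a value of one."""
    diffs = []
    for t_post in post:
        near = pre[np.abs(pre - t_post) <= window]
        diffs.extend(near - t_post)
    bins = np.arange(-window, window + bin_ms, bin_ms)
    counts, edges = np.histogram(diffs, bins=bins)
    chance = len(pre) * len(post) * bin_ms / T   # expected pairs per bin
    return counts / chance, edges
```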
For a neuron to operate in the irregular firing mode, there must be an appropriate balance
between the strength of its excitatory and inhibitory inputs. The excitatory input must be
weak enough, relative to the inhibitory input, so that the average membrane potential in the
absence of spikes is below the action potential threshold to avoid regular firing. However,
excitatory input must be sufficiently strong to keep the average potential close enough to
the threshold so that fluctuations can reach it and cause the cell to fire. How is this balance
achieved?
4 Asymmetric LTP/LTD Leads to an Irregular Firing State
A comparison of the LTP/LTD synaptic modification rule illustrated in Figure 1, and the
presynaptic/postsynaptic timing histogram shown in Figure 3, reveals that a temporally
asymmetric synaptic modification rule based on the curve in Figure 1 can automatically
generate the balance of excitation and inhibition needed to produce an irregular firing state.
Suppose that we start a neuron model in a regular firing mode by giving it relatively strong
excitatory synaptic strengths. We then apply the LTP/LTD rule of Figure 1 to the excitatory synapses while holding the inhibitory synapses at constant values. Recall that Figure 1 has
been adjusted so that the area under the LTD part of the curve is greater than that under the
LTP part. This means that if there is an equal probability of a presynaptic spike to either
precede or follow a postsynaptic spike the net effect will be a weakening of the excitatory
synapses. This is exactly what happens in the regular firing mode, where the relationship
between the timing of pre- and postsynaptic spikes is approximately random (Figure 3A).
As the LTPILTD rule weakens the excitatory synapses, the average membrane potential
drops and the neuron enters the irregular firing mode. In the irregular firing mode, there is
a higher probability for a presynaptic spike to precede than to follow a postsynaptic spike
(Figure 3B). This compensates for the fact that the rule we use produces more LTD than
LTP. Equilibrium will be reached when the asymmetry of the LTP/LTD modification curve
of Figure 1 is matched by the asymmetry of the presynaptic/postsynaptic timing histogram
of Figure 3B. The equilibrium state corresponds to a balanced, irregular firing mode of
operation, and it is automatically produced by the temporally asymmetric learning rule.
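One way to sketch this relaxation pairs every postsynaptic spike with nearby presynaptic spikes through the delta_g window above; the all-pairs bookkeeping, the hard bounds, and the multiplicative (rather than additive) application of the fractional change are all our choices, since the text does not specify them:

```python
import numpy as np

def update_weights(g, pre_trains, post_spikes, g_max=0.02):
    """Scale each excitatory synapse by (1 + delta_g) once per pre/post
    spike pair within the window, then clip to [0, g_max]. Repeated over
    trials, this drives the neuron toward the balanced, irregular regime."""
    for k, pre in enumerate(pre_trains):       # one spike array per synapse
        for t_post in post_spikes:
            for t_pre in pre[np.abs(pre - t_post) <= 3 * TAU]:
                g[k] *= 1.0 + delta_g(t_pre, t_post)
        g[k] = min(max(g[k], 0.0), g_max)
    return g
```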
Figure 4A shows a transition from a regular to an irregular firing state mediated by the temporally asymmetric LTP/LTD modification rule. The irregularity of the postsynaptic spike
train has been quantified by plotting the coefficient of variation (CV), the standard deviation over the mean of the interspike intervals, of the model neuron as a function of time.
Initially, the neuron was in a regular firing state with a low CV value. After the synaptic
modification rule reached an equilibrium state, the CV took a value near one indicating that
the neuron has been transformed into an irregular firing mode. The solid curve in Figure 4B
shows that temporally asymmetric LTP/LTD can robustly generate irregular output firing
for a wide range of input firing rates.
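The CV statistic used here is simply the standard deviation over the mean of the interspike intervals:

```python
import numpy as np

def coefficient_of_variation(spike_times):
    """CV of the interspike intervals: near 0 for regular firing,
    near 1 for Poisson-like irregular firing."""
    isi = np.diff(np.sort(np.asarray(spike_times)))
    return isi.std() / isi.mean()
```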
[Figure 4 panels A and B: coefficient of variation versus time steps (A) and versus input rate (Hz) (B); the plots themselves are not recoverable from the text extraction.]
Figure 4: Coefficient of variation (CV) of the output spike train of the model neuron. A) Transition from a regular to an irregular firing state as temporally asymmetric LTP/LTD modifies synaptic strengths. The units of time in this plot are arbitrary because they depend on the magnitude of LTP and LTD used in the model. B) Equilibrium CV values as a function of the firing rates of excitatory inputs to the model neuron. The solid curve gives the results when temporally asymmetric LTP/LTD is active. The dashed curve shows the results if the synaptic strengths that arose for 5 Hz inputs are left unmodified.
5 Discussion
Temporally asymmetric LTP/LTD provides a Hebbian-type learning rule with interesting properties (Kempter et al., 1999). In the past, temporally asymmetric Hebbian learning rules have been studied and applied to problems of temporal sequence generation (Minai and Levy, 1993), navigation (Blum and Abbott, 1996; Gerstner and Abbott, 1997), motor learning (Abbott and Blum, 1996), and detection of spike synchrony (Gerstner et al., 1996). In these studies, two different LTP/LTD window sizes were assumed: either of order 100 ms (Minai and Levy, 1993; Blum and Abbott, 1996; Gerstner and Abbott, 1997; Abbott and Blum, 1996) or around 1 ms (Gerstner et al., 1996). The new data (Markram et al., 1997; Bell et al., 1997; Zhang et al., 1998; Bi and Poo, 1999) give a window size of order 10 ms. For a 1 ms window size, temporally asymmetric LTP/LTD is sensitive to precise spike
timing. When the window size is of order 100 ms, changes in stimuli or motor actions on a
behavioral level become relevant for LTP and LTD. A window size of 10 ms, as supported
by the recent data, suggests that LTP and LTD are sensitive to firing correlations relevant
to neuronal circuitry, such as input-output correlations, which vary over this time scale.
Temporally asymmetric LTP/LTD has some interesting properties that distinguish it from
Hebbian learning rules based on correlations or covariances in pre- and postsynaptic rates.
We have found that the rule used here is not sensitive to input firing rates or to variability
in input rates. If we split the excitatory inputs of the model into two groups and give these
two input sets different rates, we see no difference in the distribution of synaptic strengths
arising from the learning rule. Similarly, if one group is given a steady firing rate and
the other group has firing rates that vary in time, no difference in synaptic strengths is
apparent. The most effective way to induce LTP in a set of inputs is to synchronize some of
their spikes. Inputs with synchronized spikes are slightly more effective at firing the neuron
than unsynchronized spikes. This means that such inputs will precede postsynaptic spikes
more frequently and thus will get stronger. This suggests that spike synchrony may be a
signal that marks a set of inputs for learning. Even when this synchrony has no particular
functional effect, so that it has little impact on the firing pattern of the postsynaptic neuron,
it can lead to dramatic shifts in synaptic strength. Thus, spike synchronization may be a
mechanism for inducing LTP and LTD.
Acknowledgments
Research supported by the National Science Foundation (DMS-9503261), the Sloan Center for Theoretical Neurobiology at Brandeis University, a Howard Hughes Predoctoral
Fellowship, and the W.M. Keck Foundation.
References
Abbott, LF & Blum, KI (1996) Functional significance of long-term potentiation for sequence learning and prediction. Cerebral Cortex 6:406-416.
Amit, DJ & Brunel N (1997) Global spontaneous activity and local structured (learned)
delay activity in cortex. Cerebral Cortex 7:237-252.
Bell CC, Han VZ, Sugawara Y & Grant K (1997) Synaptic plasticity in a cerebellum-like
structure depends on temporal order. Nature 387:278-281.
Bi G-q & Poo M-m (1999) Activity-induced synaptic modifications in hippocampal culture:
dependence on spike timing, synaptic strength and cell type. J. Neurophysiol. (in press).
Blum, KI & Abbott, LF (1996) A model of spatial map formation in the hippocampus of
the rat. Neural Comp. 8:85-93.
Bugmann, G, Christodoulou, C & Taylor, JG (1997) Role of temporal integration and
fluctuation detection in the highly irregular firing of a leaky integrator neuron model
with partial reset. Neural Comp. 9:985-1000.
Debanne D, Gahwiler BH, Thompson SM (1998) Long-term synaptic plasticity between
pairs of individual CA3 pyramidal cells in rat hippocampal slices. J. Physiol. 507:237-247.
Gerstner, W & Abbott, LF (1997) Learning navigational maps through potentiation and
modulation of hippocampal place cells. J. Computational Neurosci. 4:79-94.
Gerstner W, Kempter R, van Hemmen JL & Wagner, H (1996) A neural learning rule for
sub-millisecond temporal coding. Nature 383:76-78.
Gustafsson B, Wigstrom H, Abraham WC & Huang Y-Y (1987) Long-term potentiation
in the hippocampus using depolarizing current pulses as the conditioning stimulus to
single volley synaptic potentials. J. Neurosci. 7:774-780.
Hebb, DO (1949) The Organization of Behavior: A Neuropsychological Theory. New
York:Wiley.
Hertz, JA, Palmer, RG & Krogh, A (1991) Introduction to the Theory of Neural Computation. New York:Addison-Wesley.
Kempter R, Gerstner W & van Hemmen JL (1999) Hebbian learning and spiking neurons.
(submitted).
Levy WB & Steward O (1983) Temporal contiguity requirements for long-term associative
potentiation/depression in the hippocampus. Neurosci. 8:791-797.
Malenka, RC & Nicoll, RA (1993) NMDA-receptor-dependent synaptic plasticity: Multiple forms and mechanisms. Trends Neurosci. 16:521-527.
Minai, AA & Levy, WB (1993) Sequence learning in a single trial. INNS World Congress
on Neural Networks 11:505-508.
Markram H, Lubke J, Frotscher M & Sakmann B (1997) Regulation of synaptic efficacy
by coincidence of postsynaptic APs and EPSPs. Science 275:213-215.
Rumelhart, DE & McClelland, JL, editors (1986) Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Volumes I & II. Cambridge, MA:MIT Press.
Shadlen, MN & Newsome, WT (1994) Noise, neural codes and cortical organization. Current Opinion in Neurobiology 4:569-579.
Shadlen, MN & Newsome, WT (1998) The Variable Discharge of Cortical Neurons: Implications for Connectivity, Computation, and Information Coding. Journal of Neuroscience 18:3870-3896.
Softky, WR & Koch, C (1992) Cortical cells should spike regularly but do not. Neural
Computation 4:643-646.
Softky, WR & Koch, C (1994) The highly irregular firing of cortical cells is inconsistent
with temporal integration of random EPSPs. Journal of Neuroscience 13:334-350.
Troyer, TW & Miller, KD (1997a) Physiological gain leads to high ISI variability in a simple model of a cortical regular spiking cell. Neural Comp. 9:971-983.
Troyer, TW & Miller, KD (1997b) Integrate-and-fire neurons matched to physiological F-I curves yield high input sensitivity and wide dynamic range. Computational Neuroscience, Trends in Research, JM Bower, ed. New York: Plenum, pp. 197-201.
Tsodyks, M & Sejnowski, TJ (1995) Rapid switching in balanced cortical network models.
Network 6:1-14.
van Vreeswijk, C & Sompolinsky, H (1996) Chaos in neuronal networks with balanced
excitatory and inhibitory activity. Science 274:1724-1726.
van Vreeswijk, C & Sompolinsky, H (1998) Chaotic balanced state in a model of cortical
circuits. Neural Comp. 10:1321-1327.
Zhang LI, Tao, HW, Holt CE, Harris WA & Poo M-m (1998) A critical window for cooperation and competition among developing retinotectal synapses. Nature 395:37-44.
679 | 1,621 | Optimizing Correlation Algorithms for
Hardware-based Transient Classification
R. Timothy Edwards¹, Gert Cauwenberghs¹, and Fernando J. Pineda²

¹ Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD 21218
² Applied Physics Laboratory, Johns Hopkins University, Laurel, MD 20723
e-mail: {tim, gert, fernando}@bach.ece.jhu.edu
Abstract
The performance of dedicated VLSI neural processing hardware depends
critically on the design of the implemented algorithms. We have previously proposed an algorithm for acoustic transient classification [1].
Having implemented and demonstrated this algorithm in a mixed-mode
architecture, we now investigate variants on the algorithm, using time
and frequency channel differencing, input and output normalization, and
schemes to binarize and train the template values, with the goal of achieving optimal classification performance for the chosen hardware.
1 Introduction
At the NIPS conference in 1996 [1], we introduced an algorithm for classifying acoustic
transient signals using template correlation. While many pattern classification systems use
template correlation [2], our system differs in directly addressing the issue of efficient implementation in analog hardware, to overcome the area and power consumption drawbacks
of equivalent digital systems. In the intervening two years, we have developed analog circuits and built VLSI hardware implementing both the template correlation and the frontend
acoustic processing necessary to map the transient signal into a time-frequency representation corresponding to the template [3, 4]. In the course of hardware development, we have
been led to reevaluate the algorithm in the light of the possibilities and the limitations of
the chosen hardware.
The general architecture is depicted in Figure 1(a), and excellent agreement between simulations and experimental output from a prototype is illustrated in Figure 1(b). Issues of implementation efficiency and circuit technology aside, this paper specifically addresses
further improvements in classification performance achievable by algorithmic modifications, tailored to the constraints and strengths of the implementation medium.
[Figure: two panels — (a) system block diagram with an N×M template correlator and a shift-and-accumulate output stage; (b) simulated vs. measured correlator output over time.]

Figure 1: (a) System architecture of the acoustic transient classifier. (b) Demonstration of
accurate computation in the analog correlator on a transient classification task.
2 The transient classification algorithm
The core of our architecture performs the running correlation between an acoustic input
and a set of templates for distinguishing between Z distinct classes. A simple template
correlation equation for the acoustic transient classification can be written:
c_z[t] = K_z Σ_{m=1}^{M} Σ_{n=1}^{N} x[t−n, m] p_z[n, m]    (1)
where M is the number of frequency channels of the input, N is the maximum number of
time bins in the window, and x is the array of input signals representing the energy content
in each of the M bandpass frequency channels. The inputs x are normalized across channels using an L-1 normalization so that the correlation is less affected by volume changes
in the input. The matrix p_z contains the template pattern values for pattern z out of a total
of Z classes; K_z is a constant gain coefficient for class z, and t is the current time. This
formula produces a running correlation c_z[t] of the input array with the template for class
z. A signal is classified as belonging to class z when the output c_z exceeds the output for
all other classes at a point in time t determined by simple segmentation of the input.
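As a concrete illustration, the following is a minimal sketch of this running correlation, assuming NumPy; the array shapes and function name are our own, and the paper's analog hardware computes the same sum with its shift-and-accumulate correlator rather than an explicit loop.

```python
import numpy as np

def correlate_templates(x, templates, gains):
    """Running correlation of Eq. (1).

    x: (T, M) array of normalized channel energies.
    templates: (Z, N, M) array of per-class template values p_z[n, m].
    gains: (Z,) array of per-class gains K_z.
    """
    T, M = x.shape
    Z, N, _ = templates.shape
    c = np.zeros((Z, T))
    for t in range(N, T):
        window = x[t - N:t][::-1]            # rows are x[t-1], ..., x[t-N]
        for z in range(Z):
            c[z, t] = gains[z] * np.sum(window * templates[z])
    return c
```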
To train and evaluate the system, we used a database of 22 recorded samples of 10 different
classes of "everyday" transients such as the sounds made by aluminum cans, plastic tubs,
handclaps, and the like.
Each example transient recording was processed through a thirty-two channel constant-Q
analog cochlear filter with output taps spaced on a logarithmic frequency scale [6]. For
the simulations, the frontend system outputs were sampled and saved to disk, then digitally
rectified and smoothed with a lowpass filter function with a 2 ms time constant. These
thirty-two channel outputs representing short-term average energy in each frequency band
were decimated to 500 Hz and normalized with the function
x[t, m] = y[t, m] / Σ_{k=1}^{M+1} y[t, k],    (2)
where y[t, M+1] is a constant-valued input added to the system in order to suppress noise
in the normalized outputs during periods of silence. The additional output x[t, M+1]
becomes maximum during the periods of silence and minimum during presentation of a
transient event. This extra output can be used to detect onsets of transients, but is not used
in the correlation computation of equation (1).
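A sketch of this normalization, assuming NumPy; `silence_level` is a hypothetical stand-in for the constant (M+1)-th input y[t, M+1].

```python
import numpy as np

def l1_normalize(y, silence_level=1.0):
    """L-1 normalization of Eq. (2); y is a (T, M) array of channel energies."""
    y_ext = np.hstack([y, np.full((len(y), 1), silence_level)])
    return y_ext / y_ext.sum(axis=1, keepdims=True)   # each row sums to one
```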
Template values p_z are learned by automatically aligning all examples of the same class in
the training set using a threshold on the normalization output x[t, M+1], and averaging
the values together over N samples, starting a few samples before the point of alignment.
Class outputs are normalized relative to one another by multiplying each output by a gain
factor K_z, computed from the template values using the L-2 norm function

K_z = Σ_{m=1}^{M} Σ_{n=1}^{N} p_z[n, m]².    (3)
We evaluated the accuracy of the system with a cross-validation loop in which we train the
system on all of the database except one example of one class, then test on that remaining
example, repeating the test for each of the 220 examples in the database. The baseline
algorithm gives a classification accuracy of 96.4%.
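A sketch of the template learning and gain computation, assuming NumPy; the onset-alignment step is simplified here to a threshold test on the silence channel, and all names are ours.

```python
import numpy as np

def learn_template(examples, N, onset_thresh=0.5, lead=4):
    """examples: list of (T, M+1) normalized arrays for one class."""
    windows = []
    for x in examples:
        onset = int(np.argmax(x[:, -1] < onset_thresh))  # silence channel drops
        start = max(onset - lead, 0)                     # a few samples earlier
        windows.append(x[start:start + N, :-1])
    # assumes each recording is long enough for a full N-sample window
    p = np.mean(windows, axis=0)                         # (N, M) template
    return p, np.sum(p ** 2)                             # template and Eq. (3) gain
```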
3 Single-bit template values
A major consideration for hardware implementations (both digital and analog) is the memory storage required by the templates, one of which is required for each class. Minimal
storage space in terms of bits per template is practical only if the algorithm can be proved
to perform acceptably well under decreased levels of quantization of the template values.
At one bit per template location (i.e., M × N bits per template), the complexity of the hardware is greatly simplified, but it is no longer obvious what method is best to use for learning the template values, or for calculating the per-class gains. The choice of the method is
guided by knowledge about the acoustic transients themselves, and simulation to evaluate
its effect on the accuracy of a typical classification task.
4 Simulations of different zero-mean representations
One bit per template value is a desirable goal, but realizing this goal requires reevaluating
the original correlation equation. The input values to be correlated represent band-limited
energy spectra, and range from zero to some maximum determined by the L-l normalization. To determine the value of a template bit, the averaged value over all examples of the
class in the training set must be compared to a threshold (which itself must be determined),
or else the input itself must be transformed into a form with zero average mean value. In
the latter method, the template value is determined by the sign of the transformed input,
averaged over all examples of the class in the training set.
The obvious transformations of the input which provide a vector of zero-mean signals to the
correlator are the time derivative of each input channel, and the difference between neighboring channels. Certain variations of these are possible, such as a center-surround computation of channel differences, and zero-mean combinations of time and channel differences.
While there is evidence that center-surround mechanisms are common to neurobiological
signal processing of various sensory modalities in the brain, including processing in the
mammalian auditory cortex [5], time derivatives of the input are also plausible in light of
the short time base of acoustic transient events. Indeed, there is no reason to assume a
priori that channel differences are even meaningful on the time scale of transients.
Table 1 shows simulation results, where classification accuracy on the cross-validation test
is given for different combinations of continuous-valued and binary inputs and templates,
and different zero-mean transformations of the input.

Table 1: Simulation results with different architectures.

Method               Both Cont.   Binary Input   Both Binary   Binary (1,-1) Template   Binary (1,0) Template
One-to-One           96.40%       --             --            --                       --
Time Difference      85.59%       65.32%         59.46%        82.43%                   81.98%
Channel Difference   90.54%       53.60%         95.05%        94.59%                   94.14%
Center-Surround      92.79%       53.60%         95.05%        92.34%                   92.34%

There are several significant points
to the results of these classification tasks. The first is to note that in spite of the fact that
acoustic transient events are short-term and the time steps between the bins in the template
are as low as 2 ms, using time differences between samples does not yield reliable classification
when either the input or the template or both is reduced to binary form. However, reliability
remains high when the correlation is performed using channel differences. The implication
is that even the shortest transient events have stable and reliable structure in the frequency
domain, a somewhat surprising conclusion given the impulsive nature of most transients.
Another interesting point is that we observe no significant difference between the use of
pairwise channel differences and the more complicated center-surround mechanism (twice
the channel value minus the value of the two neighboring channels). The slight decrease in
accuracy for the center-surround in some instances is most likely due only to the fact that
one less channel contributes information to the correlator than in the pairwise channel difference computation. When accuracy is constant, a hardware implementation will always
prefer the simpler mechanism.
Very little difference in accuracy is seen between the use of a binary (1, -1) representation
and a binary (1,0) representation, in spite of the fact that all zero-valued template positions
do not contribute to the correlation output. This lack of difference is a result of the choice
of the L-1 normalization across the input vector, which ensures that the part of the correlation due to positive template values is roughly the same magnitude as that due to negative
template values, leading to a redundant representation which can be removed without affecting classification results. In analog hardware, particularly current-mode circuits, the
(1,0) template representation is much simpler to implement.
Time differencing of the input can be efficiently realized in analog hardware by commuting
the time-difference calculation to the end of the correlation computation and implementing
it with a simple switch-capacitor circuit. Taking differences between input channel values,
on the other hand, is not so easily reduced to a simple hardware form. To find a reasonable
solution, we simulated a number of different combinations of channel differencing and
binarization. Table 2 shows a few examples. The first row is our standard implementation
of channel differences using binary (1,0) templates and continuous-valued input. The
drawback of this method in analog hardware is the matching between negative and positive
parts of the correlation sum. We found two ways to get around this problem without greatly
compromising the system performance: The first, shown in the second row of Table 2 is to
add to the correlation sum only if the channel difference is positive and the template value
is 1 (one-quadrant multiplication). Another (shown in the last row) is to add the maximum
of each pair of channels if the template value is 1, which is preferable in that it uses the
input values directly and does not require computing a difference at all. Unfortunately,
it also adds a large component to the output which is related only to the total energy of
the input and therefore is common to all class outputs, reducing the dynamic range of the
system.
Table 2: Simulation results for different methods of computing channel differences.

Method                  Accuracy
channel difference      94.14%
one-quadrant multiply   92.34%
maximum channel         93.69%

5 Optimization of the classifier using per-class gains
The per-class gain values K_z in equation (1) are optimal for the baseline algorithm when using the L-2 normalization. The same normalization applied to the binary templates (when
the template value is assumed to be either +1 or -1) yields the same K_z value for all
classes. This unity gain on all class outputs is assumed in all the simulations of the previous section. A careful evaluation of errors from several runs indicated the possibility that
different gains on each channel could improve recognition rates, and simple experiments
with values tweaked by hand proved this suspicion to be true.
To automate the process of gain optimization, we consider the templates, as determined by
averaging together examples of each class in the training set, to be fixed. Then we compute
the correlation between each template and the aligned, averaged inputs for each class which
were used to generate the templates. The result is a Z x Z matrix, which we denote C, of
expected values for the correlation between a typical example of a transient input and the
template for its own class (diagonal elements C ii ) and the templates for all other classes
(off-diagonal elements Cij, i '=I j). Each column of C is like the correlator outputs on
which we make a classification decision by choosing the maximum. Therefore we wish to
maximize Cii with respect to all other elements in the same column. The only degree of
freedom for adjusting these values is to multiply the correlation output of each template z
by a constant coefficient K z . This corresponds to multiplying each row of C by K z . This
per-class gain mechanism is easily transferred to the analog hardware domain.
In the case of continuous-valued templates, an optimal solution can be directly evaluated
and yields the L-2 normalization. However, for all binary forms of the template and/or
the input, direct evaluation is impossible and the solution must be found by choosing an
error function E to minimize or maximize. The error function must assign a large error to
any off-diagonal element in a column that approaches or exceeds the diagonal element in
that column, but must not force the cross-correlations to arbitrarily low negative values. A
minimizing function that fits this description is
E=
L L exp (KjCji i
KiCii ).
(4)
#i
This function unfortunately has no closed-form solution for the coefficients K_i, which must
be determined numerically using Newton-Raphson or some other iterative method.
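For instance, E can be minimized numerically as in the sketch below, assuming SciPy; a derivative-free simplex search stands in here for the Newton-Raphson iteration the text mentions.

```python
import numpy as np
from scipy.optimize import minimize

def optimize_gains(C):
    """C[i, j]: expected correlation of template i with the averaged class-j input."""
    Z = C.shape[0]
    def energy(K):                                    # Eq. (4)
        E = 0.0
        for i in range(Z):
            for j in range(Z):
                if j != i:
                    E += np.exp(K[j] * C[j, i] - K[i] * C[i, i])
        return E
    return minimize(energy, np.ones(Z), method="Nelder-Mead").x
```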
Improvements in the recognition rates of the classification task using this optimization of
per-class gains are shown in Table 3, where we have considered only the case of inputs and
templates encoding channel differences. Although the database is small, the gains of 2 to
4% for the quantized cases are significant. For this particular simulation we used a different
type of frontend section to verify that the performance of the correlation algorithm was
not linked to a specific frontend architecture. To generate these performance values, we
used sixteen channels with the inputs digitally processed through a constant-Q bandpass
filter having a Q of 5.0 and with center frequencies spaced on a mel scale from 100 Hz
to 4500 Hz. The bandpass filtering was followed by rectification and smoothing with a
lowpass filter function with a cutoff frequency scaled logarithmically across channels, from
60 Hz to 600 Hz. The channel output data were decimated to a 500 Hz rate. Half of the
database was used to train the system, and half used to test. Performance is similar to that
reported in the previous section in spite of the fact that the number of channels was cut in
half, and the number of training examples was also cut in half. Slight gains in performance
are most likely due to the cleaner digital filtering of the recorded data.
Table 3: System accuracy with and without per-class normalization.

Binarization        Accuracy, optimized   Accuracy, non-optimized
none                100%                  100%
template only       93%                   91%
template & input    95%                   91%
6 System Robustness
We performed several additional experiments beyond those covered in the previous
sections. One of these was an evaluation of recognition accuracy as a function of the template length N (number of time bins), to determine what is a proper size for the templates.
The result is shown in Figure 2 (a). This curve reaches a reliable maximum at about 50 time
bins, from which our chosen size for the hardware implementation of 64 bins provides a
safe margin of error. However, it is interesting to note that recognition accuracy does not
drop to that of random chance until only two time bins are used (64 bits per template), and
accuracy is nearly 50% with only 3 time bins (96 bits per template).
[Figure: two panels — (a) accuracy (%) vs. correlator length N; (b) accuracy (%) vs. SNR (dB).]
Figure 2: (a) Effect of decreasing the number of time-bins. (b) Effect of white noise added
to the correlator inputs.
We made one evaluation of the robustness of the algorithm in the presence of noise by
introducing additional white noise at the correlator inputs. The graph of Figure 2 (right)
shows that accuracy remains high until the signal-to-noise ratio is roughly 0 dB.
An interesting question to ask about the L-1 normalization at the frontend is how the added
constant normalization channel (y[t, M + 1]) affects the classification performance. If this
channel is omitted, then the total instantaneous value of all outputs must equal the same
value, even during periods of silence, in which low-level noise gets amplified. The nominal
value of this channel was chosen to match the levels of noise in the transient recordings.
For one of the cases of Table 1 (real input, binary (1,0) template, channel differencing at
the input), we tried two other tests, one with the normalization constant doubled, and one
with it omitted (zero). Doubling the normalization constant had no effect on the error rate,
while omitting it caused the accuracy to drop only from 94.1% to 92.3%. The conclusion
is that for large templates, random noise has a low probability of producing a spurious
positive correlation that would be classified as a transient. The classification algorithm is
not largely dependent on input signal normalization.
7 Conclusions
Starting from a template correlation architecture for acoustic transient classification targeted for high-density, low-power analog VLSI implementation, we have investigated several variants on the correlation algorithms, accounting for the strengths and constraints
of the VLSI implementation medium while maintaining acceptable classification performance.
Reduction of input and templates to binary form does not significantly affect performance,
as long as they are transformed to encode the difference in neighboring channels of the
original filterbank frontend outputs. This suggests that acoustic transient classification is
not only amenable to implementation in simple analog hardware, but also in reasonably
simple digital hardware.
In looking for zero-mean representations of the input compatible with a binary template,
we found that computing pairwise differences between channels gives a more robust representation than a time-differential form, as was reported previously in [1]. We have found
that computing a center-surround function of the inputs yields virtually the same results
as taking pairwise channel differences. Where hardware implementation is the goal, the
pairwise difference function is preferred due to its greater simplicity.
We have additionally shown that cross-correlations between aligned, averaged inputs and
templates can be used with an iterative method to solve for optimal gain coefficients per
class output, which yield better classification performance. This is a method which can be
applied in general to all template correlation systems.
References
[1] F. J. Pineda, G. Cauwenberghs, R. T. Edwards, "Bangs, Clicks, Snaps, Thuds, and
Whacks: An Architecture for Acoustic Transient Processing," Neural Information
Processing Systems (NIPS), Denver, 1996.
[2] K. P. Unnikrishnan, J. J. Hopfield, and D. W. Tank, "Connected-Digit Speaker-Dependent Speech Recognition Using a Neural Network with Time-Delayed Connections," IEEE Transactions on Signal Processing, 39, pp. 698-713, 1991.
[3] R. T. Edwards, G. Cauwenberghs, and F. J. Pineda, "A Mixed-Signal Correlator for
Acoustic Transient Classification," International Symposium on Circuits and Systems
(ISCAS), Hong Kong, June 1997.
[4] R. T. Edwards and G. Cauwenberghs, "A Second-Order Log-Domain Bandpass Filter
for Audio Frequency Applications," International Symposium on Circuits and Systems
(ISCAS), Monterey, CA, June 1998.
[5] K. Wang and S. Shamma, "Representation of Acoustic Signals in the Primary Auditory
Cortex," IEEE Trans. Audio and Speech Processing, 3(5), pp. 382-395, 1995.
[6] F. J. Pineda, K. Ryals, D. Steigerwald, and P. Furth, "Acoustic Transient Processing using the Hopkins Electronic Ear," World Conference on Neural Networks, Washington,
D.C., 1995.
680 | 1,622 | General Bounds on Bayes Errors for
Regression with Gaussian Processes
Manfred Opper
Neural Computing Research Group
Dept. of Electronic Engineering and Computer Science
Aston University, Birmingham B4 7ET, United Kingdom
opperm@aston.ac.uk

Francesco Vivarelli
Centro Ricerche Ambientali Montecatini
via Ciro Menotti 48, 48023 Marina di Ravenna, Italy
fvivarelli@cramont.it
Abstract
Based on a simple convexity lemma, we develop bounds for different types of Bayesian prediction errors for regression with Gaussian
processes. The basic bounds are formulated for a fixed training set.
Simpler expressions are obtained for sampling from an input distribution which equals the weight function of the covariance kernel,
yielding asymptotically tight results. The results are compared
with numerical experiments.
1 Introduction
Nonparametric Bayesian models which are based on Gaussian priors on function
spaces are becoming increasingly popular in the Neural Computation Community
(see e.g. [2, 3, 4, 7, 1]). Since the model classes considered in this approach are
infinite dimensional, the application of Vapnik-Chervonenkis types of methods to
determine bounds for the learning curves is nontrivial and has not been performed
so far (to our knowledge). In these methods, the target function to be learnt is
fixed and input data are drawn independently at random from a fixed (unknown)
distribution. The approach of this paper is different. Here, we assume that the target
is actually drawn at random from a known prior distribution, and we are interested
in developing simple bounds on the average prediction performance (with respect
to the prior) which hold for a fixed set of inputs. Only at a later stage, an average
over the input distribution is made.
2 Regression with Gaussian processes
To explain the Gaussian process scenario for regression problems [4], we assume that
observations y ∈ R at input points x ∈ R^D are corrupted values of a function θ(x)
by independent Gaussian noise with variance σ². The appropriate stochastic
model is given by the likelihood

p_θ(y|x) = (2πσ²)^{−1/2} exp(−(y − θ(x))² / (2σ²)).    (1)
The goal of a learner is to give an estimate of the function θ(x), based on a set of
observed example data D_t = ((x_1, y_1), ..., (x_t, y_t)). As the prior information about
the unknown function θ(x), we assume that θ is a realization of a Gaussian random
field with zero mean and covariance

C(x, x') = E[θ(x) θ(x')].    (2)
It is useful to expand the random functions as

θ(x) = Σ_{k=0}^{∞} w_k φ_k(x)    (3)

in a complete set of deterministic functions φ_k(x) with random Gaussian coefficients
w_k. As is well known, if the φ_k are chosen as orthonormal eigenfunctions of the
integral equation

∫ C(x, x') φ_k(x') p(x') dx' = λ_k φ_k(x),    (4)

with eigenvalues λ_k and a nonnegative weight function p(x), the a priori statistics
of the w_k are simple: they are independent Gaussian variables which satisfy E[w_k w_l] = λ_k δ_kl.
3 Prediction and Bayes error
Usually, the posterior mean of θ(x) is chosen as the prediction θ̂(x) at a new point
x based on a dataset D_n = ((x_1, y_1), ..., (x_n, y_n)). Its explicit form can be easily
derived by using the expansion θ(x) = Σ_k w_k φ_k(x), and the fact that for Gaussian
random variables the mean coincides with the most probable value. Maximizing
the log posterior with respect to the w_k, one finds for the infinite-dimensional vector
ŵ = (ŵ_k)_{k=0,...,∞} the result ŵ = (σ² I + ΛV)^{−1} b, where V_kl = Σ_{i=1}^{n} φ_k(x_i) φ_l(x_i),
Λ_kl = λ_k δ_kl, and b_k = λ_k Σ_{i=1}^{n} y_i φ_k(x_i). Fixing the set of inputs x^n, the Bayesian
prediction error at a point x is given by

ε(x|x^n) ≐ E[(θ(x) − θ̂(x))²].    (5)

Evaluating (5) yields, after some work, the expression

ε(x|x^n) = σ² Tr{ (σ² I + ΛV)^{−1} Λ U(x) }    (6)

with the matrix U_kl(x) = φ_k(x) φ_l(x). U has the properties that Σ_{i=1}^{n} U(x_i) = V
and ∫ dx p(x) U(x) = I. We define the Bayesian training error as the empirical
average of the error (5) at the n data points of the training set, and the Bayesian
generalization error as the average error over all x weighted by the function p(x).
We get
ε_t = (1/n) Tr{ ΛV (I + ΛV/σ²)^{−1} },    (7)
ε_g = Tr{ Λ (I + ΛV/σ²)^{−1} }.    (8)
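A sketch of these two traces for a truncated spectrum, assuming NumPy; here `Phi[i, k] = φ_k(x_i)` and `lam` holds the first K eigenvalues (both hypothetical names).

```python
import numpy as np

def bayes_errors(lam, Phi, sigma2):
    """Training and generalization errors of Eqs. (7)-(8) for fixed inputs."""
    n, K = Phi.shape
    V = Phi.T @ Phi                       # V_kl = sum_i phi_k(x_i) phi_l(x_i)
    A = np.diag(lam) @ V                  # Lambda V
    M = np.linalg.inv(np.eye(K) + A / sigma2)
    eps_t = np.trace(A @ M) / n           # Eq. (7)
    eps_g = np.trace(np.diag(lam) @ M)    # Eq. (8)
    return eps_t, eps_g
```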
4 Entropic error
In order to understand the next type of error [9], we assume that the data arrive
sequentially, one after the other. The predictive distribution p̂ after t−1 training data
at the new input x_t is the posterior expectation of the likelihood (1).
Let L_t be the Bayesian average of the relative entropy (or Kullback-Leibler divergence) between the predictive distribution and the true distribution p_θ from which
the data were generated, i.e. L_t = E[D_KL(p_θ || p̂)]. It can also be shown that
L_t = ½ ln(1 + ε(x_t|x^{t−1})/σ²). Hence, when the prediction error is small, we will have

L_t ≈ ε(x_t|x^{t−1}) / (2σ²).    (9)
(9)
The cumulative entropic error Ee (xn) is defined by summing up all the losses (which
gives an integrated learning curve) from t = 1 up to time n and one can show that
E(x n) = tLt(Xt,Dt-d
t=l
= lEDKL (Pellpn) = ~Trln (I + AV/(12)
(10)
where P; = rr=l Pe(yilxd and pn = lE[n~=l Pe(Yilxd]. The first equality may be
found e.g. in [9], and the second follows from direct calculation.
5
Bounds for fixed set of inputs
In order to get bounds on (7),(8) and (10), we use a lemma, which has been used in
Quantum Statistical Mechanics to get bounds on the free energy. The lemma (for
the special function f(x) = e-.B X ) was proved by Sir Rudolf Peierls in 1938 [10]. In
order to keep the paper self contained, we have included the proof in the appendix.
Lemma 1 Let H be a real symmetric matrix and f a convex real function. Then
Tr f(H) ~ L::k f(Hkk).
By noting, that for concave functions the bound goes in the other direction, we
immediately get
ct
Cg
<
(12
-
n
>
<
k
L
k
E{xn)
L
Ak Vk k
(12
+ Ak Vkk
(12 Ak
(12
+ Ak Vkk ~
<(1
2
-
k
L
k
L
AkVk
(12
+ nAkVk
(12 Ak
(12
+ nAkVk
~ LIn (1 + V kk A k/(12) ~ ~ LIn (1 + nVkAk/(12)
k
(11)
(12)
(13)
k
where in the rightmost inequalities, we assume that all n inputs are in a compact
region V, and we define Vk = sUPxE'D 4>~(x). 1
IThe entropic case may also be proved by Hadamard's inequality.
6 Average case bounds
Next, we assume that the input data are drawn at random and denote by ⟨···⟩ the
expectations with respect to the distribution. We do not have to assume independence here, but only the fact that all marginal distributions for the n inputs are
identical! Using Jensen's inequality,

⟨ε_t(x^n)⟩ ≤ σ² Σ_k λ_k u_k / (σ² + n λ_k u_k)    (14)

⟨ε_g(x^n)⟩ ≥ Σ_k σ² λ_k / (σ² + n λ_k u_k)    (15)

⟨E(x^n)⟩ ≤ ½ Σ_k ln(1 + n u_k λ_k/σ²)    (16)
where now u_k = ⟨φ_k²(x)⟩. This result is especially simple when the weighting
function p(x) is a probability density and the inputs have the marginal distribution
p(x). In this case we simply have u_k = 1, and training and generalization
error sandwich the bound

ε_b = σ² Σ_k λ_k / (σ² + n λ_k).    (17)
We expect that the bound ε_b becomes asymptotically exact when n → ∞. This
should be intuitively clear, because training and generalization error approach each
other asymptotically. This fact may also be understood from (9), which shows that
the cumulative entropic error is, within a factor of 1/(2σ²), asymptotically equal to
the cumulative generalization error. By integrating the lower bound (17) over n, we
obtain precisely the upper bound on E up to a factor of 2, showing that upper and
lower bounds display the same behaviour.
7 Simulations
We have compared our bounds with simulations for the average training error and
generalization error for the case that the data are drawn from p( x). Results for the
entropic error will be given elsewhere.
We have specialized to the case where the covariance kernel is of the RBF form
C(x, x') = exp[−(x − x')²/λ²] and p(x) = (2π)^{−1/2} e^{−x²/2}, for which, following Zhu
et al. (1997), the k-th eigenvalue of the spectrum (k = 0, ..., ∞) can be written
as λ_k = a b^k, where a = √c, b = c/λ², c = 2(1 + 2/λ² + √(1 + 4/λ²))^{−1}, and λ
is the lengthscale of the process.
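A sketch of this spectrum combined with the sandwich bound of Eq. (17), assuming NumPy and our reading of the garbled constants above; the geometric decay λ_k = a b^k makes a modest truncation sufficient.

```python
import numpy as np

def rbf_spectrum(lengthscale, K=200):
    """Eigenvalues lam_k = a * b**k of the RBF kernel under a standard normal p(x)."""
    c = 2.0 / (1.0 + 2.0 / lengthscale**2 + np.sqrt(1.0 + 4.0 / lengthscale**2))
    a, b = np.sqrt(c), c / lengthscale**2
    return a * b ** np.arange(K)

def eps_b(n, lam, sigma2):
    """Sandwich bound of Eq. (17)."""
    return sigma2 * np.sum(lam / (sigma2 + n * lam))
```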
We estimated the average generalisation error for each training set based on the exact
analytical expressions (8) and (7) over the distribution of the datasets by using a Monte
Carlo approximation. To begin with, let us consider x ∈ R. We sampled the 1-dimensional
input space, generating 100 training sets whose data points were normally distributed
around zero with unit variance. For each generation, the expected training and
generalisation errors for a GP have been evaluated using up to 1000 data points. We set
the value of the lengthscale² λ to 0.1 and we let the noise level σ² assume several values
(σ² = 10⁻⁴, 10⁻³, 10⁻², 10⁻¹, 1). Figure 1 shows the results we obtained when σ² = 0.1
(Figure 1(a)) and σ² = 1 (Figure 1(b)).

²The value of the lengthscale λ has the effect of stretching the training and learning
curves; thus the results of the experiments performed with different λ are qualitatively
similar to those presented.
[Figure: two log-scale panels of error vs. n — (a) λ = 0.1, σ² = 0.1; (b) λ = 0.1, σ² = 1.]

Figure 1: Training and learning curves with their bound ε_b(n), obtained with λ = 0.1; the
noise level is set to 0.1 in Figure 1(a) and to 1 in Figure 1(b). In all the graphs, ε_t and
ε_g(n) are drawn as solid lines and their 95% confidence intervals are marked by dotted
curves. The bound ε_b(n) is drawn as dash-dotted lines.
The bound ε_b(n) lies within the
training and learning curves, being an upper bound for ε_t(n) and a lower bound for
ε_g(n). This bound is tighter for the processes with higher noise level; in particular,
for large datasets the error bars on the curves ε_t(n) and ε_g(n) overlap the bound
ε_b(n). The curves ε_t(n), ε_g(n) and ε_b(n) approach zero as O(log(n)/n).
Our bounds can also be applied to higher dimensions D > 1 using the covariance

C(x, x') = exp(−‖x − x'‖²/λ²)    (18)

for x, x' ∈ R^D. Obviously the integral kernel C is just a direct product of RBF
kernels, one for each coordinate of x and x'. The eigenvalue problem (4) can be
immediately reduced to the one for a single variable. Eigenfunctions and eigenvalues
are simply products of those for the single-coordinate problems. Hence, using a bit
of combinatorics, the bound ε_b can be written as

ε_b = Σ_{k=0}^{∞} C(k+D−1, k) σ² a^D b^k / (σ² + n a^D b^k),    (19)
where a and b have been defined above and C(k+D−1, k) is the binomial coefficient.
We performed experiments with x ∈ R² and x ∈ R⁵. The correlation lengths along
each direction of the input space have been set to 1 and the noise level was σ² = 1.0.
The graphs of the curves, with their error bars, are reported in Figure 2(a) (for
x ∈ R²) and in Figure 2(b) (for x ∈ R⁵).
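A sketch of the D-dimensional bound of Eq. (19), assuming NumPy and SciPy; the binomial factor counts the number of product eigenfunctions of total degree k.

```python
import numpy as np
from scipy.special import comb

def eps_b_dim(n, a, b, sigma2, D, K=200):
    """D-dimensional sandwich bound of Eq. (19)."""
    k = np.arange(K)
    lam = a**D * b**k
    return np.sum(comb(k + D - 1, k) * sigma2 * lam / (sigma2 + n * lam))
```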
8 Discussion
Based on the minimal requirements on training inputs and covariances, we conjecture that our bounds cannot be improved much without making more detailed
assumptions on models and distributions. We can observe from the simulations
that the tightness of the bound ε_b(n) depends on the dimension of the input space.
In particular, for large datasets ε_b(n) is tighter for small dimension of the input
space; Figure 2(a) shows this quite clearly, since ε_b(n) overlaps the error bars of the
training and learning curves for large n.
[Figure: two log-scale panels of error vs. n — (a) d = 2; (b) d = 5.]

Figure 2: Training and learning curves with their bound ε_b(n), obtained with the
squared-exponential covariance function with λ = 1 and σ² = 1; the input space is R²
(Figure 2(a)) and R⁵ (Figure 2(b)). In all the figures, ε_t and ε_g(n) are drawn as solid
lines and their 95% confidence intervals are marked by dotted curves. The bound ε_b(n)
is drawn as dash-dotted lines.
Numerical simulations performed using
modified Bessel covariance functions of order r (describing random processes r − 1
times mean-square differentiable) have shown that the bound ε_b(n) becomes tighter
for smoother processes.
Acknowledgement: We are grateful for many inspiring discussions with C.K.I.
Williams. M.O. would like to thank Peter Sollich for his conjecture that (17) is an
exact lower bound on the generalization error, which motivated part of this work.
F. V. was supported by a studentship of British Aerospace.
9 Appendix: Proof of Lemma 1
Let {ξ^{(i)}} be a complete set of orthonormal eigenvectors and {E_i} the corresponding set of eigenvalues of H, i.e. we have the properties Σ_l H_kl ξ_l^{(i)} = E_i ξ_k^{(i)},
Σ_k ξ_k^{(i)} ξ_k^{(j)} = δ_ij, and Σ_i ξ_k^{(i)} ξ_l^{(i)} = δ_kl. Then we get

Tr f(H) = Σ_i f(E_i) = Σ_k Σ_i (ξ_k^{(i)})² f(E_i) ≥ Σ_k f(Σ_i (ξ_k^{(i)})² E_i) = Σ_k f(H_kk).

The second equality follows from orthonormality, because Σ_k (ξ_k^{(i)})² = 1. The
inequality uses the fact that, by completeness, for any k we have Σ_i (ξ_k^{(i)})² = 1,
so we may regard the (ξ_k^{(i)})² as probabilities, such that by convexity Jensen's
inequality can be used. After using the eigenvalue equation, the sum over i was
carried out with the help of the completeness relation, in order to obtain the last
equality.
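A quick numerical check of the lemma, assuming NumPy; for a random symmetric H and the convex f(x) = eˣ, the trace of f(H) should dominate the diagonal sum.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
H = (A + A.T) / 2                            # random real symmetric matrix
E = np.linalg.eigvalsh(H)                    # eigenvalues of H
assert np.exp(E).sum() >= np.exp(np.diag(H)).sum()   # Tr f(H) >= sum_k f(H_kk)
```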
References
[1] D. J. C. Mackay, Gaussian Processes, A Replacement for Neural Networks,
NIPS tutorial 1997. May be obtained from
http://wol.ra.phy.cam.ac.uk/pub/mackay/.
[2] R. Neal, Bayesian Learning for Neural Networks, Lecture Notes in Statistics,
Springer (1996).
[3] C. K. I. Williams, Computing with Infinite Networks, in Neural Information
Processing Systems 9, M. C. Mozer, M. I. Jordan and T. Petsche, eds., 295-301.
MIT Press (1997).
[4] C. K. I. Williams and C. E. Rasmussen, Gaussian Processes for Regression, in
Neural Information Processing Systems 8, D. S. Touretzky, M. C. Mozer and
M. E. Hasselmo eds., 514-520, MIT Press (1996).
[5] R. M. Neal, Monte Carlo Implementation of Gaussian Process Models for
Bayesian Regression and Classification, Technical Report CRG-TR-97-2, Dept.
of Computer Science, University of Toronto (1997).
[6] M. N. Gibbs and D. J. C. Mackay, Variational Gaussian Process Classifiers,
Preprint Cambridge University (1997).
[7] D. Barber and C. K. I. Williams, Gaussian Processes for Bayesian Classification
via Hybrid Monte Carlo, in Neural Information Processing Systems 9, M. C.
Mozer, M. I. Jordan and T. Petsche, eds., 340-346. MIT Press (1997).
[8] C. K. I. Williams and D. Barber, Bayesian Classification with Gaussian Processes, Preprint Aston University (1997).
[9] D. Haussler and M. Opper, Mutual Information, Metric Entropy and Cumulative Relative Entropy Risk, The Annals of Statistics, Vol 25, No 6, 2451
(1997).
[10] R. Peierls, Phys. Rev. 54, 918 (1938).
[11] H. Zhu, C. K. I. Williams, R. Rohwer and M. Morciniec, Gaussian Regression and Optimal Finite Dimensional Linear Models, Technical report
NCRG /97/011, Aston University (1997).
| 1622 |@word simulation:4 covariance:7 tr:6 solid:2 phy:1 united:1 chervonenkis:1 pub:1 rightmost:1 dx:2 written:2 numerical:2 kj0:1 manfred:1 completeness:2 toronto:1 simpler:1 dn:1 along:1 direct:2 ra:1 expected:1 mechanic:1 becomes:2 begin:1 pel:1 akl:1 concave:1 xd:2 abk:1 classifier:1 uk:4 normally:1 unit:1 yn:1 engineering:1 understood:1 morciniec:1 ak:7 becoming:1 signed:2 au:1 eb:2 lf:1 empirical:1 confidence:2 integrating:1 get:5 cannot:1 risk:1 deterministic:1 yt:1 maximizing:1 go:1 williams:6 independently:1 convex:1 immediately:2 haussler:1 orthonormal:2 datapoints:1 his:1 coordinate:2 annals:1 target:2 exact:3 us:1 observed:1 ft:1 preprint:2 region:1 mozer:3 convexity:2 cam:1 grateful:1 tight:1 ithe:1 predictive:2 learner:1 easily:1 lengthscale:3 monte:3 whose:1 quite:1 tightness:1 statistic:3 gp:1 obviously:1 eigenvalue:6 rr:1 analytical:1 differentiable:1 product:2 hadamard:1 realization:1 requirement:1 generating:1 help:1 oo:1 develop:1 ac:2 fixing:1 ij:1 direction:2 stochastic:1 vc:1 wol:1 hx:1 behaviour:1 generalization:6 probable:1 tighter:3 crg:1 hold:1 around:1 considered:1 exp:2 cb:4 entropic:5 birmingham:1 hasselmo:1 weighted:1 mit:3 clearly:1 gaussian:20 vkl:1 modified:1 pn:1 derived:1 vk:2 likelihood:2 cg:3 integrated:1 relation:1 expand:1 interested:1 classification:3 priori:1 special:1 mackay:3 mutual:1 marginal:2 equal:2 field:1 ukl:1 sampling:1 identical:1 r5:1 report:2 divergence:1 replacement:1 sandwich:1 yielding:1 integral:2 minimal:1 vkk:2 reported:1 corrupted:1 learnt:1 density:1 yl:1 squared:1 li:2 hkk:2 wk:3 coefficient:1 satisfy:1 combinatorics:1 depends:1 performed:4 later:1 bayes:5 xlx:2 square:1 variance:2 stretching:1 yield:1 correspond:1 bayesian:10 carlo:3 explain:1 phys:1 touretzky:1 ed:4 rohwer:1 energy:1 proof:2 di:1 sampled:1 dataset:1 proved:2 popular:1 knowledge:1 actually:1 higher:2 dt:1 improved:1 evaluated:1 just:1 stage:1 correlation:1 ei:4 effect:1 true:1 orthonormality:1 hence:2 equality:2 symmetric:1 leibler:1 neal:2 eg:1 ll:1 self:1 coincides:1 complete:2 variational:1 fi:1 specialized:1 b4:1 ncrg:1 cambridge:1 gibbs:1 llx:1 rd:2 posterior:3 italy:1 scenario:1 inequality:5 yi:1 ricerche:1 determine:1 bessel:1 smoother:1 ing:1 technical:2 calculation:1 lin:2 marina:1 prediction:6 regression:10 basic:1 expectation:2 metric:1 kernel:4 interval:2 eigenfunctions:2 jordan:2 ee:1 noting:1 independence:1 expression:3 motivated:1 peter:1 useful:1 clear:1 detailed:1 ylx:1 eigenvectors:1 nonparametric:1 inspiring:1 reduced:1 http:1 tutorial:1 dotted:4 estimated:1 vol:1 group:1 drawn:8 kuk:2 asymptotically:4 graph:4 sum:1 arrive:1 electronic:1 appendix:2 bit:1 bound:34 ct:3 dash:2 nonnegative:1 nontrivial:1 precisely:1 x2:1 centro:1 conjecture:2 developing:1 verage:1 sollich:1 increasingly:1 wi:1 rev:1 making:1 intuitively:1 equation:2 describing:1 vivarelli:4 observe:1 appropriate:1 petsche:2 peierls:2 especially:1 thank:1 barber:2 length:1 kk:1 kingdom:1 implementation:1 unknown:2 upper:3 av:6 observation:1 francesco:1 datasets:3 finite:1 community:1 bk:1 kl:1 aerospace:1 nip:1 bar:3 usually:1 hkl:1 overlap:2 hybrid:1 zhu:2 aston:3 lk:3 carried:1 prior:4 acknowledgement:1 relative:2 sir:1 loss:1 expect:1 lecture:1 generation:1 elsewhere:1 supported:1 last:1 free:1 rasmussen:1 understand:1 fg:1 distributed:1 regard:1 curve:12 opper:5 xn:6 evaluating:1 cumulative:4 dimension:3 quantum:1 fb:2 studentship:1 made:1 qualitatively:1 far:1 tlt:1 compact:1 kullback:1 keep:1 sequentially:1 summing:1 xi:2 spectrum:1 expansion:1 noise:5 explicit:1 
exponential:1 xl:2 lie:1 pe:4 weighting:1 british:1 xt:3 showing:1 jensen:2 r2:3 vapnik:1 entropy:3 lt:1 simply:2 contained:1 u2:1 springer:1 goal:1 formulated:1 rbf:2 included:1 infinite:3 generalisation:2 lemma:5 rudolf:1 dept:2 |
681 | 1,623 | Basis Selection For Wavelet Regression
Kevin R. Wheeler
Caelum Research Corporation
NASA Ames Research Center
Mail Stop 269-1
Moffett Field, CA 94035
kwheeler@mail.arc.nasa.gov
Atam P. Dhawan
College of Engineering
University of Toledo
2801 W. Bancroft Street
Toledo, OH 43606
adhawan@eng.utoledo.edu
Abstract
A wavelet basis selection procedure is presented for wavelet regression. Both the basis and threshold are selected using cross-validation. The method includes the capability of incorporating
prior knowledge on the smoothness (or shape of the basis functions)
into the basis selection procedure. The results of the method are
demonstrated using widely published sampled functions. The results of the method are contrasted with other basis function based
methods.
1 INTRODUCTION
Wavelet regression is a technique which attempts to reduce noise in a sampled
function corrupted with noise. This is done by thresholding the small wavelet decomposition coefficients which represent mostly noise. Most of the papers published
on wavelet regression have concentrated on the threshold selection process. This
paper focuses on the effect that different wavelet bases have on cross-validation
based threshold selection, and the error in the final result. This paper also suggests
how prior information may be incorporated into the basis selection process, and the
effects of choosing a wrong prior. Both orthogonal and biorthogonal wavelet bases
were explored.
Wavelet regression is performed in three steps. The first step is to apply a discrete
wavelet transform to the sampled data to produce decomposition coefficients. Next
a threshold is applied to the coefficients. Then an inverse discrete wavelet transform
is applied to these modified coefficients.
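A minimal sketch of these three steps, assuming the PyWavelets package; the basis name and threshold value are placeholders for the quantities this paper selects by cross-validation.

```python
import pywt

def wavelet_regression(samples, basis="sym4", threshold=1.0):
    coeffs = pywt.wavedec(samples, basis)                 # step 1: forward DWT
    coeffs[1:] = [pywt.threshold(c, threshold, mode="soft")
                  for c in coeffs[1:]]                    # step 2: shrink detail coeffs
    return pywt.waverec(coeffs, basis)                    # step 3: inverse DWT
```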
The basis selection procedure is demonstrated to perform better than other wavelet
regression methods even when the wrong prior on the space of the basis selections
is specified.
This paper is broken into the following sections. The background section gives a
brief summary of the mathematical requirements of the discrete wavelet transform.
This section is followed by a methodology section which outlines the basis selection
algorithms, and the process for obtaining the presented results. This is followed by
a results section and then a conclusion.
2 BACKGROUND
2.1 DISCRETE WAVELET TRANSFORM
The Discrete Wavelet Transform (DWT) [Daubechies, 92] is implemented as a series
of projections onto scaling functions in L²(R). The initial assumption is that the
original data samples lie in the finest space V_0, which is spanned by the scaling
function φ ∈ V_0 such that the collection {φ(x − t) | t ∈ Z} is a Riesz basis of V_0. The
first level of the dyadic decomposition then consists of projecting the data samples
onto scaling functions which have been dilated to be twice as wide as the original
φ. These span the coarser space V_{−1}: {φ(2^{−1}x − t) | t ∈ Z}. The information that
is lost going from the finer to the coarser scale is retained in what is known as wavelet
coefficients. Instead of taking the difference, the wavelet coefficients can be obtained
via a projection operation onto the wavelet basis functions ψ which span a space
known as W_0. The projections are typically implemented using Quadrature Mirror
Filters (QMF), which are implemented as Finite Impulse Response (FIR) filters.
The next level of decomposition is obtained by again doubling the scaling functions
and projecting the first scaling decomposition coefficients onto these functions. The
difference in information between this level and the last one is contained in the
wavelet coefficients for this level. In general, the scaling functions for level j and
translation m may be represented by: φ_j^m(t) = 2^{−j/2} φ(2^{−j}t − m), where t ∈ [0, 2^k − 1],
k ≥ 1, 1 ≤ j ≤ k, 0 ≤ m ≤ 2^{k−j} − 1.
2.1.1 Orthogonal
An orthogonal wavelet decomposition is defined such that the difference space W_j
is the orthogonal complement of V_j in V_{j+1}, W_0 ⊥ V_0, which means that the
projection of the wavelet functions onto the scaling functions on a level is zero:

⟨ψ, φ(· − t)⟩ = 0,  t ∈ Z.

This results in the wavelet spaces W_j with j ∈ Z being all mutually orthogonal. The refinement relations for an orthogonal decomposition may be written as:
φ(x) = 2 Σ_k h_k φ(2x − k)  and  ψ(x) = 2 Σ_k g_k φ(2x − k).
2.1.2 Biorthogonal
Symmetry is an important property when the scaling functions are used as interpolatory functions. Most commonly used interpolatory functions are symmetric.
It is well known in the subband filtering community that symmetry and exact reconstruction are incompatible if the same FIR filters are used for reconstruction
and decomposition (except for the Haar filter) [Daubechies, 92]. If we are willing to
use different filters for the analysis and synthesis banks, then symmetry and exact
reconstruction are possible using biorthogonal wavelets. Biorthogonal wavelets have
dual scaling φ̃ and dual wavelet ψ̃ functions. These generate a dual multiresolution analysis with subspaces Ṽ_j and W̃_j so that Ṽ_j ⊥ W_j and V_j ⊥ W̃_j, and the
orthogonality conditions can now be written as:

⟨φ̃, ψ(· − l)⟩ = ⟨ψ̃, φ(· − l)⟩ = 0
⟨φ̃_{j,l}, φ_{k,m}⟩ = δ_{j−k} δ_{l−m}  for l, m, j, k ∈ Z
⟨ψ̃_{j,l}, ψ_{k,m}⟩ = δ_{j−k} δ_{l−m}  for l, m, j, k ∈ Z

where δ_{j−k} = 1 when j = k, and zero otherwise.
The refinement relations for biorthogonal wavelets can be written:
φ(x) = 2 Σ_k h_k φ(2x − k)  and  ψ(x) = 2 Σ_k g_k φ(2x − k)
φ̃(x) = 2 Σ_k h̃_k φ̃(2x − k)  and  ψ̃(x) = 2 Σ_k g̃_k φ̃(2x − k)
Basically, this means that the scaling functions at one level are composed of linear
combinations of scaling functions at the next finer level. The wavelet functions at
one level are also composed of linear combinations of the scaling functions at the
next finer level.
2.2 LIFTING AND SECOND GENERATION WAVELETS
Sweldens' lifting scheme [Sweldens, 95a] is a way to transform a biorthogonal wavelet
decomposition obtained from low order filters to one that could be obtained from
higher order filters (more FIR filter coefficients), without applying the longer filters
and thus saving computations. This method can be used to increase the number
of vanishing moments of the wavelet, or change the shape of the wavelet. This
means that several different filters (i.e. sets of basis functions) may be applied with
properties relevant to the problem domain in a manner more efficient than directly
applying the filters individually. This is beneficial to performing a search over the
space of admissible basis functions meeting the problem domain requirements.
Sweldens' Second Generation Wavelets [Sweldens, 95b] are a result of applying
lifting to simple interpolating biorthogonal wavelets, and redefining the refinement
relation of the dual wavelet to be:

ψ̃(x) = φ̃(2x − 1) − Σ_k a_k φ̃(x − k)
where the a_k are the lifting parameters. The lifting parameters may be selected to
achieve desired properties in the basis functions relevant to the problem domain.
Prior information for a particular application domain may now be incorporated into
the basis selection for wavelet regression. For example, if a particular application
requires that there be a certain degree of smoothness (or a certain number of vanishing moments in the basis), then only those lifting parameters which result in a
number of vanishing moments within this range are used. Another way to think
about this is to form a probability distribution over the space of lifting parameters.
The most likely lifting parameters will be those which most closely match one's
intuition for the given problem domain.
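As an illustration, here is a sketch of one analysis stage of the linear interpolating (2,2) wavelet realized through lifting, assuming NumPy and periodic boundaries; the fixed predict/update weights here play the role that the tunable lifting parameters play in the text.

```python
import numpy as np

def lifting_stage(x):
    """One lifting stage: split, predict odds from evens, update evens."""
    s, d = x[0::2].astype(float), x[1::2].astype(float)
    d -= 0.5 * (s + np.roll(s, -1))    # predict: d_i = x_{2i+1} - (x_{2i}+x_{2i+2})/2
    s += 0.25 * (d + np.roll(d, 1))    # update: preserves the running average
    return s, d                        # coarse samples and wavelet coefficients
```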
2.3 THRESHOLD SELECTION
Since the wavelet transform is a linear operator, the decomposition coefficients will
have the same form of noise as the sampled data. The idea behind wavelet regression
is that the decomposition coefficients that have a small magnitude are substantially
representative of the noise component of the sampled data. A threshold is selected
and then all coefficients which are below the threshold in magnitude are either set
to zero (a hard threshold) or moved towards zero (a soft threshold). The soft
threshold η_t(y) = sgn(y)(|y| − t), applied for |y| > t (and zero otherwise), is used in this study.

There are two basic methods of threshold selection: 1. Donoho's [Donoho, 95]
analytic method, which relies on knowledge of the noise distribution (such as a
Gaussian noise source with a certain variance); 2. a cross-validation approach (many
of which are reviewed in [Nason, 96]). It is beyond the scope of this paper to review
these methods. Leave-one-out cross-validation with padding was used in this study.
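A sketch of the soft threshold and a simple cross-validated threshold search, assuming NumPy and the `wavelet_regression` helper sketched earlier; an even/odd split stands in here for the paper's leave-one-out scheme with padding.

```python
import numpy as np

def soft_threshold(y, t):
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

def cv_threshold(samples, basis, grid):
    even, odd = samples[0::2], samples[1::2]
    errs = [np.mean((wavelet_regression(even, basis, t)[:len(odd)] - odd) ** 2)
            for t in grid]
    return grid[int(np.argmin(errs))]
```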
3 METHODOLOGY
The test functions used in this study are the four functions published by Donoho
and Johnstone [Donoho and Johnstone, 94]. These functions have been adopted
by the wavelet regression community to aid in comparison of algorithms across
publications .
Each function was uniformly sampled to contain 2048 points. Gaussian white noise
was added so that the signal to noise ratio (SNR) was 7.0. Fifty replicates of each
noisy function were created, of which four instantiations are depicted in Figure 1.
The noise removal process involved three steps. The first step was to perform a
discrete wavelet transform using a particular basis. A threshold was selected for
the resulting decomposition coefficients using leave-one-out cross-validation with
padding.
The soft threshold was then applied to the decomposition. Next, the inverse wavelet
transform was applied to obtain a cleaner version of the original signal. These steps
were repeated for each basis set or for each set of lifting parameters.
3.1 WAVELET BASIS SELECTION
To demonstrate the effect of basis selection on the threshold found and the error
in the resulting recovered signal, the following experiments were conducted. In the
first trial, two well-studied orthogonal wavelet families were used: Daubechies most
compactly supported (DMCS) and Symlets (S) [Daubechies, 92]. For the DMCS
family, filters of order 1 (which corresponds to the Haar wavelet) through 7 were
used. For the Symlets, filters of order 2 through 8 were used. For each filter, leave-one-out cross-validation was used to find a threshold which minimized the mean
square error for each of the 50 replicates for the four test functions. The median
threshold found was then applied to the decomposition of each of the replicates
for each test function. The resulting reconstructed signals are compared to the
ideal function (the original before noise was added) and the Normalized Root Mean
Square Error (NRMSE) is presented.
3.2 INCORPORATING PRIOR INFORMATION: LIFTING PARAMETERS
If the function that we are sampling is known to have certain smoothness properties, then a distribution of the admissible lifting coefficients representing a similar
smoothness characteristic can be formed. However, it is not necessary to cautiously
pick a prior. The performance of this method with a piecewise linear prior (the
(2,2) biorthogonal wavelet of Cohen-Daubechies-Feauveau [Cohen, 92]) has been
applied to the non-linear smooth test functions Bumps, Doppler, and Heavysin.
This method has been compared with several standard techniques [Wheeler, 96]:
the Smoothing Spline method (SS) [Wahba, 90], Donoho's SureShrink method
[Donoho, 95], and an optimized Radial Basis Function Neural Network (RBFNN).
4 RESULTS
In the first experiment, the procedure was only allowed to select between two
well-known bases (Daubechies most compactly supported and Symmlet wavelets) with
the desired filter order. Table 1 shows the filter order resulting in the lowest
cross-validation error for each filter and function. The NRMSE is presented with respect
to the original noise-free functions for comparison. As expected, the best basis
for the noisy Blocks function was the piecewise linear basis (Daubechies, order 1).
The Doppler, which has very high-frequency components, required the highest filter
order. Figure 2 represents typical denoised versions for the functions recovered by
the filters listed in bold in the table.
The method selected the basis having similar properties to the underlying function
without knowing the original function. When higher order filters were applied to
the noisy Blocks data, the resulting NRMSE was higher.
The basis selection procedure (labelled CV-Wavelets in Table 2) was compared with
Donoho's SureShrink, Wahba's Smoothing Splines (SS), and an optimized RBFNN
[Wheeler, 96]. The prior information was incorrectly specified, directing the procedure to prefer
bases near piecewise linear. The remarkable observation is that the method did
better than the others as measured by Mean Square Error.
5 CONCLUSION
A basis selection procedure for wavelet regression was presented. The method was
shown to select bases appropriate to the characteristics of the underlying functions.
The shape of the basis was determined with cross-validation selecting from either a
pre-set library of filters or from previously calculated lifting coefficients. The lifting
coefficients were calculated to be appropriate for the particular problem domain.
The method was compared for various bases and against other popular methods.
Even with the wrong lifting parameters, the method was able to reduce error better
than other standard algorithms.
Figure 1: Noisy Test Functions (noisy Blocks, Bumps, Heavysin, and Doppler).
Figure 2: Recovered Functions (recovered Blocks, Bumps, Heavysin, and Doppler).
Table 1: Effects of Basis Selection

Function  | Filter Order | Family     | Median Thr. (MT) | NRMSE Using MT | Median True Thr. | NRMSE Using MTT
----------|--------------|------------|------------------|----------------|------------------|----------------
Blocks    | 1            | Daubechies | 1.33             | 0.038          | 1.61             | 0.036
Blocks    | 2            | Symmlets   | 1.245            | 0.045          | 1.40             | 0.045
Bumps     | 4            | Daubechies | 1.11             | 0.059          | 1.47             | 0.056
Bumps     | 5            | Symmlets   | 1.13             | 0.058          | 1.48             | 0.055
Doppler   | 8            | Daubechies | 1.27             | 0.058          | 1.65             | 0.054
Doppler   | 8            | Symmlets   | 1.36             | 0.054          | 1.74             | 0.050
Heavysin  | 2            | Daubechies | 1.97             | 0.039          | 2.17             | 0.038
Heavysin  | 5            | Symmlets   | 1.985            | 0.039          | 2.16             | 0.038
Table 2: Methods Comparison Table of MSE

Function  | SS    | SureShrink | RBFNN | CV-Wavelets
----------|-------|------------|-------|------------
Blocks    | 0.546 | 0.398      | 1.281 | 0.362
Heavysin  | 0.075 | 0.062      | 0.113 | 0.051
Doppler   | 0.205 | 0.145      | 0.287 | 0.116
References
A. Cohen, I. Daubechies, and J. C. Feauveau (1992), "Biorthogonal bases of compactly supported wavelets," Communications on Pure and Applied Mathematics, vol. 45, no. 5, pp. 485-560, June.
I. Daubechies (1992), Ten Lectures on Wavelets, CBMS-NSF Regional Conference Series in Applied Mathematics, vol. 61, SIAM, Philadelphia, PA.
D. L. Donoho (1995), "De-noising by soft-thresholding," IEEE Transactions on Information Theory, vol. 41, no. 3, pp. 613-627, May.
D. L. Donoho and I. M. Johnstone (1994), "Ideal spatial adaptation by wavelet shrinkage," Biometrika, vol. 81, no. 3, pp. 425-455, September.
G. P. Nason (1996), "Wavelet shrinkage using cross-validation," Journal of the Royal Statistical Society, Series B, vol. 58, pp. 463-479.
W. Sweldens (1995), "The lifting scheme: a custom-design construction of biorthogonal wavelets," Technical Report, no. IMI 1994:7, Dept. of Mathematics, University of South Carolina.
W. Sweldens (1995), "The lifting scheme: a construction of second generation wavelets," Technical Report, no. IMI 1995:6, Dept. of Mathematics, University of South Carolina.
G. Wahba (1990), Spline Models for Observational Data, SIAM, Philadelphia, PA.
K. Wheeler (1996), Smoothing Non-uniform Data Samples With Wavelets, Ph.D. Thesis, University of Cincinnati, Dept. of Electrical and Computer Engineering, Cincinnati, OH.
Neural Networks for Density Estimation
Amir Atiya
Malik Magdon-Ismail*
magdon@cco.caltech.edu
amir@deep.caltech.edu
Caltech Learning Systems Group
Department of Electrical Engineering
California Institute of Technology
136-93 Pasadena, CA, 91125
Caltech Learning Systems Group
Department of Electrical Engineering
California Institute of Technology
136-93 Pasadena, CA, 91125
Abstract
We introduce two new techniques for density estimation. Our approach poses the problem as a supervised learning task which can
be performed using Neural Networks. We introduce a stochastic method for learning the cumulative distribution and an analogous deterministic technique. We demonstrate convergence of our
methods both theoretically and experimentally, and provide comparisons with the Parzen estimate. Our theoretical results demonstrate better convergence properties than the Parzen estimate.
1 Introduction and Background
A majority of problems in science and engineering have to be modeled in a probabilistic manner. Even if the underlying phenomena are inherently deterministic,
the complexity of these phenomena often makes a probabilistic formulation the only
feasible approach from the computational point of view. Although quantities such
as the mean, the variance, and possibly higher order moments of a random variable
have often been sufficient to characterize a particular problem, the quest for higher
modeling accuracy, and for more realistic assumptions drives us towards modeling
the available random variables using their probability density. This of course leads
us to the problem of density estimation (see [6]).
The most common approach for density estimation is the nonparametric approach,
where the density is determined according to a formula involving the data points
available. The most common nonparametric methods are the kernel density estimator, also known as the Parzen window estimator [4], and the k-nearest neighbor
technique [1]. Nonparametric density estimation belongs to the class of ill-posed
problems in the sense that small changes in the data can lead to large changes in
"To whom correspondence should be addressed.
the estimated density. Therefore it is important to have methods that are robust to
slight changes in the data. For this reason some amount of regularization is needed
[7]. This regularization is embedded in the choice of the smoothing parameter (kernel width or k). The problem with these non-parametric techniques is their extreme
sensitivity to the choice of the smoothing parameter. A wrong choice can lead to
either undersmoothing or oversmoothing.
In spite of the importance of the density estimation problem, proposed methods
using neural networks have been very sporadic. We propose two new methods
for density estimation which can be implemented using multilayer networks. In
addition to being able to approximate any function to any given precision, multilayer
networks give us the flexibility to choose an error function to suit our application.
The methods developed here are based on approximating the distribution function,
in contrast to most previous works which focus on approximating the density itself.
Straightforward differentiation gives us the estimate of the density function. The
distribution function is often useful in its own right - one can directly evaluate
quantiles or the probability that the random variable occurs in a particular interval.
One of the techniques is a stochastic algorithm (SLC), and the second is a deterministic technique based on learning the cumulative (SIC). The stochastic technique
will generally be smoother on smaller numbers of data points, however, the deterministic technique is faster and applies to more that one dimension. We will
present a result on the consistency and the convergence rate of the estimation error
for our methods in the univariate case. When the unknown density is bounded
and has bounded derivatives up to order K, we find that the estimation error is
O((loglog(N)/N)-(l-t?), where N is the number of data points. As a comparison,
for the kernel density estimator (with non-negative kernels), the estimation error is
O(N- 4 / 5 }, under the assumptions that the unknown density has a square integrable
second derivative (see [6]), and that the optimal kernel width is used, which is not
possible in practice because computing the optimal kernel width requires knowledge
of the true density. One can see that for smooth density functions with bounded
derivatives, our methods achieve an error rate that approaches O(N- 1 ).
2 New Density Estimation Techniques
To illustrate our methods, we will use neural networks, but stress that any sufficiently general learning model will do just as well. The network's output will
represent an estimate of the distribution function, and its derivative will be an
estimate of the density. We will now proceed to a description of the two methods.
2.1 SLC (Stochastic Learning of the Cumulative)
Let $x_n \in \mathbb{R}$, $n = 1, \ldots, N$ be the data points. Let the underlying density be g(x)
and its distribution function $G(x) = \int_{-\infty}^{x} g(t)\, dt$. Let the neural network output be
H(x, w), where w represents the set of weights of the network. Ideally, after training
the neural network, we would like to have H(x, w) = G(x). It can easily be shown
that the density of the random variable G(x) (x being generated according to g(x))
is uniform in [0,1]. Thus, if H(x,w) is to be as close as possible to G(x), then
the network output should have a density that is close to uniform in [0,1]. This is
what our goal will be. We will attempt to train the network such that its output
density is uniform, then the network mapping should represent the distribution
function G(x). The basic idea behind the proposed algorithm is to use the N data
points as inputs to the network. For every training cycle, we generate a different
set of N network targets randomly from a uniform distribution in [0, 1], and adjust
the weights to map the data points (sorted in ascending order) to these generated
targets (also sorted in ascending order). Thus we are training the network to map
the data to a uniform distribution.
Before describing the steps of the algorithm, we note that the resulting network has
to represent a monotonically non decreasing mapping, otherwise it will not represent
a legitimate distribution function. In our simulations, we used a hint penalty to
enforce monotonicity [5]. The algorithm is as follows.
1. Let $x_1 \leq x_2 \leq \ldots \leq x_N$ be the data points. Set t = 1, where t is the
training cycle number. Initialize the weights (usually randomly) to w(1).
2. Generate randomly from a uniform distribution in [0,1] N points (and sort
them): $u_1 \leq u_2 \leq \ldots \leq u_N$. The point $u_n$ is the target output for $x_n$.
3. Adjust the network weights according to the backpropagation scheme:
$$w(t+1) = w(t) - \eta(t)\, \frac{\partial \mathcal{E}}{\partial w} \qquad (1)$$
where $\mathcal{E}$ is the objective function that includes the error term and the
monotonicity hint penalty term [5]:
$$\mathcal{E} = \sum_{n=1}^{N} \left[ H(x_n) - u_n \right]^2 + \lambda \sum_{k=1}^{N_h} \Theta\big(H(y_k) - H(y_k + \Delta)\big) \left[ H(y_k) - H(y_k + \Delta) \right]^2 \qquad (2)$$
where we have suppressed the w dependence. The second term is the monotonicity penalty term, $\lambda$ is a positive weighting constant, $\Delta$ is a small positive number, $\Theta(x)$ is the familiar unit step function, and the $y_k$'s are any
set of points where we wish to enforce the monotonicity.
4. Set t = t + 1, and go to step 2. Repeat until the error is small enough.
Upon convergence, the density estimate is the derivative of H.
Note that as presented, the randomly generated targets are different for every cycle,
which will have a smoothing effect that will allow convergence to a truly uniform
distribution. One other version, that we have implemented in our simulation studies, is to generate new targets after every fixed number L of cycles, rather than
every cycle. This will generally improve the speed of convergence as there is more
"continuity" in the learning process. Also note that it is preferable to choose the
activation function for the output node to be in the range of 0 to 1, to ensure that
the estimate of the distribution function is in this range.
SLC is only applicable to estimating univariate densities, because, for the multivariate case, the nonlinear mapping Y = G (x) will not necessarily result in a uniformly
distributed output y. Fortunately, many, if not the majority of problems encountered in practice are univariate. This is because multivariate problems, with even
a modest number of dimensions, need a huge amount of data to obtain statistically
accurate results. The next method is applicable to the multivariate case as well.
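For concreteness, here is a compact sketch of SLC for a one-hidden-layer network H(x) = sigmoid(v . tanh(w x + b) + c). To keep it short the gradient of equation (1) is estimated numerically rather than by backpropagation, and all hyperparameter values are illustrative assumptions:

    import numpy as np

    def slc_train(x, n_hidden=3, n_cycles=2000, L=10, lr=0.05, lam=1.0, delta=1e-3):
        rng = np.random.default_rng(0)
        x = np.sort(np.asarray(x, dtype=float))
        params = rng.normal(scale=0.5, size=3 * n_hidden + 1)  # w, b, v, c

        def H(p, xs):
            w, b = p[:n_hidden], p[n_hidden:2 * n_hidden]
            v, c = p[2 * n_hidden:3 * n_hidden], p[-1]
            hidden = np.tanh(np.outer(xs, w) + b)           # (n_points, n_hidden)
            return 1.0 / (1.0 + np.exp(-(hidden @ v + c)))  # logistic output in (0, 1)

        y_mono = np.linspace(x.min(), x.max(), 50)          # monotonicity check points

        def loss(p, u):
            err = np.sum((H(p, x) - u) ** 2)                # error term of Eq. (2)
            diff = H(p, y_mono) - H(p, y_mono + delta)      # > 0 where H decreases
            return err + lam * np.sum(np.where(diff > 0.0, diff ** 2, 0.0))

        u = np.sort(rng.uniform(size=x.size))
        eps = 1e-5
        for t in range(n_cycles):
            if t % L == 0:                                  # fresh uniform targets every L cycles
                u = np.sort(rng.uniform(size=x.size))
            base = loss(params, u)
            grad = np.empty_like(params)
            for i in range(params.size):                    # numerical gradient, for brevity
                perturbed = params.copy()
                perturbed[i] += eps
                grad[i] = (loss(perturbed, u) - base) / eps
            params -= lr * grad
        return lambda xs: H(params, np.atleast_1d(np.asarray(xs, dtype=float)))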
2.2 SIC (Smooth Interpolation of the Cumulative)
Again, we have a multilayer network, to which we input the point x, and the
network outputs the estimate of the distribution function. Let g(x) be the true
density function, and let G(x) be the corresponding distribution function. Let
$x = (x^1, \ldots, x^d)^T$. The distribution function is given by
$$G(x) = \int_{-\infty}^{x^1} \cdots \int_{-\infty}^{x^d} g(x)\, dx^1 \cdots dx^d \qquad (3)$$
a straightforward estimate of G(x) could be the fraction of data points falling in
the area of integration:
$$\hat{G}(x) = \frac{1}{N} \sum_{n=1}^{N} \Theta(x - x_n), \qquad (4)$$
where $\Theta$ is defined as
$$\Theta(x) = \begin{cases} 1 & \text{if } x^i \geq 0 \text{ for all } i = 1, \ldots, d, \\ 0 & \text{otherwise.} \end{cases}$$
The method we propose uses such an estimate for the target outputs of the neural
network. The estimate given by (4) is discontinuous. The neural network method
developed here provides a smooth, and hence more realistic estimate of the distribution function. The density can be obtained by differentiating the output of the
network with respect to its inputs.
For the low-dimensional case, we can uniformly sample (4) using a grid, to obtain
the examples for the network. Beyond two or three dimensions, this becomes computationally intensive. Alternatively, one could sample the input space randomly
(using say a uniform distribution over the approximate range of Xn 's) , and for every
point determine the network target according to (4) . Another option is to use the
data points themselves as examples. The target for a point Xm would then be
$$\hat{G}(x_m) = \frac{1}{N-1} \sum_{n=1,\, n \neq m}^{N} \Theta(x_m - x_n) \qquad (5)$$
We also use monotonicity as a hint to guide the training. Once training is performed,
and H(x, w) approximates G(x), the density estimate can be obtained as
$$\hat{g}(x) = \frac{\partial^d H(x, w)}{\partial x^1 \cdots \partial x^d} \qquad (6)$$
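A minimal sketch (our code) of the leave-one-out target construction of equation (5); training then proceeds with any regression network on the pairs (x_m, target), together with the monotonicity hint:

    import numpy as np

    def sic_targets(data):
        # data has shape (N, d); the target for x_m is the fraction of the
        # other points x_n with x_n <= x_m in every coordinate, as in Eq. (5).
        data = np.atleast_2d(data)
        N = data.shape[0]
        dominated = np.all(data[:, None, :] >= data[None, :, :], axis=2)
        np.fill_diagonal(dominated, False)   # exclude n == m
        return dominated.sum(axis=1) / (N - 1)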
3 Simulation Results
[Figure 1 plots, panels (a) and (b): true density, SLC, SIC, and optimal Parzen window estimates.]
Figure 1: Comparison of optimal Parzen windows, with neural network estimators.
Plotted are the true density and the estimates (SLC, SIC, Parzen window with
optimal kernel width [6, pg 40]). Notice that even the optimal Parzen window is
bumpy as compared to the neural network.
We tested our techniques for density estimation on data drawn from a mixture of
two Gaussians:
(7)
Data points were randomly generated and the density estimates using SLC or SIC
(for 100 and 200 data points) were compared to the Parzen technique. Learning
was performed with a standard 1 hidden layer neural network with 3 hidden units.
The hidden unit activation function used was tanh and the output unit was an erf
function. A set of typical density estimates is shown in figure 1.
4 Convergence of the Density Estimation Techniques
Figure 2: Convergence of the density estimation error for SIC. A five hidden unit
two layer neural network was used to perform the mapping $x_i \to i/(N+1)$, trained
according to SIC. For various N, the resulting density estimation error was computed for over 100 runs. Plotted are the results on a log-log scale. For comparison,
also shown is the best 1/N fit.
Using techniques from stochastic approximation theory, it can be shown that SLC
converges to a similar solution to SIC [3], so we focus our attention on the convergence of SIC. Figure 2 shows an empirical study of the convergence behavior. The
optimal linear fit between log(E) and log(N) has a slope of -0.97. This indicates
that the convergence rate is about 1/N. The theoretically derived convergence rate
is loglog(N)/N as we will shortly discuss.
To analyze SIC, we introduce so called approximate generalized distribution functions. We will assume that the true distribution function has bounded derivatives.
Therefore the cumulative will be "approximately" implementable by generalized
distributions with bounded derivatives (in the asymptotic limit, with probability
1). We will then obtain the convergence to the true density.
Let $\mathcal{G}$ be the space of distribution functions on the real line that possess continuous
densities, i.e., $X \in \mathcal{G}$ if $X : \mathbb{R} \to [0,1]$; $X'(t)$ exists everywhere, is continuous and
$X'(t) \geq 0$; $X(-\infty) = 0$ and $X(\infty) = 1$. This is the class of functions that we will
be interested in. We define a metric with respect to $\mathcal{G}$ as follows:
$$\| f \|_X^2 = \int_{-\infty}^{\infty} f(t)^2\, X'(t)\, dt \qquad (8)$$
$\| f \|_X^2$ is the expectation of the squared value of $f$ with respect to the distribution
$X \in \mathcal{G}$. Let us name this the $L^2$ $X$-norm of $f$. Let the data set $D$ be $\{x_1 \leq x_2 \leq
\ldots \leq x_N\}$, and corresponding to each $x_i$, let the target be $y_i = i/(N+1)$. We will
assume that the true distribution function has bounded derivatives up to order $K$.
We define the set of approximate sample distribution functions $\mathcal{H}_D^{\nu}$ as follows.
Definition 4.1 Fix $\nu > 0$. A $\nu$-approximate sample distribution function, H, satisfies the following two conditions.
We will denote the set of all $\nu$-approximate sample distribution functions for a data
set, D, and a given $\nu$ by $\mathcal{H}_D^{\nu}$.
Let $A_i = \sup_x |G^{(i)}|$, $i = 1 \ldots K$, where we use the notation $f^{(i)}$ to denote the $i$th
derivative. Define $B_i^{\nu}(D)$ by
$$B_i^{\nu}(D) = \inf_{Q \in \mathcal{H}_D^{\nu}} \sup_x |Q^{(i)}| \qquad (9)$$
for fixed $\nu > 0$. Note that by definition, for all $\epsilon > 0$, $\exists H \in \mathcal{H}_D^{\nu}$ such that
$\sup_x |H^{(i)}(x)| \leq B_i + \epsilon$. $B_i(D)$ is the lowest possible bound on the $i$th derivative
for the $\nu$-approximate sample distribution functions given a particular data set. In
a sense, the "smoothest" approximating sample distribution function with respect
to the $i$th derivative has an $i$th derivative bounded by $B_i^{\nu}(D)$. One expects that
$B_i \leq A_i$, at least in the limit $N \to \infty$.
In the next theorem, we present the main theoretical result of the paper, namely
a bound on the estimation error for the density estimator obtained by using the
approximate sample distribution functions. It is embedded in a large amount of
technical machinery, but its essential content is that if the true distribution function
has bounded derivatives to order K, then, picking the approximate distribution
function obeying certain bounds, we obtain a convergence rate for the estimation
error of $O((\log\log(N)/N)^{1-1/K})$.
Theore"m 4.2 (L2 convergence to the true density) Let N data points, Xi be
drawn i.i.d. from the distribution G E g. Let sUPx IG(i) 1 = Ai for i = 0 ... K , where
K ~ 2. Fix v > 2 and E > O. Let B'K(D) = infQE1I.'h suPx IQ(K) I. Let H E
be a v-approximate distribution function with BK = sUPx IHKI ::; B'K + E (by the
definition of B/o such a v-approximate sample distribution function must exist).
Then, for any F E g, as N -+ 00, the inequality
1-lo
I H'
where
:F(N) =
- G' II~
::; 22 (K - l)(2A K + E)k F(N)
[(1 + v) C10g~g(N)) l + N ~ 1
holds with probability 1, as N -+
(10)
rr..
(11)
00.
We present the proof elsewhere [3].
Note 1: The theorem applies uniformly to any interpolator $H \in \mathcal{H}_D^{\nu}$. In particular,
a large enough neural network will be one such monotonic interpolator,
provided that the network can be trained to small enough error. This is
possible by the universal approximation results for multilayer networks [2].
Note 2: This theorem holds for any $\epsilon > 0$ and $\nu > 1$. For smooth density functions with bounded higher derivatives, the convergence rate approaches
$O(\log\log(N)/N)$, which is faster convergence than the kernel density estimator (for which the optimal rate is $O(N^{-4/5})$).
Note 3: No smoothing parameter needs to be determined.
Note 4: One should try to find an approximate distribution function with the
smallest possible derivatives. Specifically, of all the sample distribution
functions, pick the one that "minimizes" $B_K$, the bound on the Kth derivative. This could be done by introducing penalty terms, penalizing the magnitudes of the derivatives (for example Tikhonov type regularizers [7]).
5 Comments
We developed two techniques for density estimation based on the idea of learning the
cumulative by mapping the data points to a uniform density. Two techniques were
presented, a stochastic technique (SLC), which is expected to inherit the characteristics of most stochastic iterative algorithms, and a deterministic technique (SIC).
SLC tends to be slow in practice; however, because each set of targets is drawn
from the uniform distribution, this is anticipated to have a smoothing/regularizing
effect, which can be seen by comparing SLC and SIC in figure 1(a). We presented
experimental comparison of our techniques with the Parzen technique.
We presented a theoretical result that demonstrated the consistency of our techniques as well as giving a convergence rate of O(loglog(N)/N), which is better
than the optimal Parzen technique. No smoothing parameter needs to be chosen smoothing occurs naturally by picking the interpolator with the lowest bound for
a certain derivative. For our methods, the majority of time is spent in the learning
phase, but once learning is done, evaluating the density is fast.
6 Acknowledgments
We would like to acknowledge Yaser Abu-Mostafa and the Caltech Learning Systems
Group for their useful input.
References
[1] K. Fukunaga and L. D. Hostetler. Optimization of k-nearest neighbor density
estimates. IEEE Transactions on Information Theory, 19(3):320-326, 1973.
[2] K. Hornik, M. Stinchcombe, and H. White. Universal approximation of an
unknown mapping and its derivatives using multilayer feedforward networks.
Neural Networks, 3:551-560, 1990.
[3] M. Magdon-Ismail and A. Atiya. Consistent density estimation from the sample
distribution function. Manuscript in preparation for submission, 1998.
[4] E. Parzen. On the estimation of a probability density function and mode. Annals
of Mathematical Statistics, 33:1065-1076, 1962.
[5] J. Sill and Y. S. Abu-Mostafa. Monotonicity hints. In M. C. Mozer, M. I.
Jordan, and T. Petsche, editors, Advances in Neural Information Processing
Systems (NIPS), volume 9, pages 634-640. Morgan Kaufmann, 1997.
[6] B. Silverman. Density Estimation for Statistics and Data Analysis. Chapman
and Hall, London, UK, 1993.
[7] A. N. Tikhonov and V. I. Arsenin. Solutions of Ill-Posed Problems. Scripta Series
in Mathematics. Distributed solely by Halsted Press, Winston; New York, 1977.
Translation Editor: Fritz, John.
Unsupervised and supervised clustering:
the mutual information between
parameters and observations
Didier Herschkowitz
Jean-Pierre Nadal
Laboratoire de Physique Statistique de l'E.N.S.*
Ecole Normale Superieure
24 , rue Lhomond - 75231 Paris cedex 05, France
herschko@lps.ens.fr
nadal@lps.ens.fr
http://www.lps .ens.frrrisc/rescomp
Abstract
Recent works in parameter estimation and neural coding have
demonstrated that optimal performances are related to the mutual
information between parameters and data. We consider the mutual
information in the case where the dependency in the parameter (a
vector $\theta$) of the conditional p.d.f. of each observation (a vector
$\xi$) is through the scalar product $\theta \cdot \xi$ only. We derive bounds and
asymptotic behaviour for the mutual information and compare with
results obtained on the same model with the "replica technique".
1 INTRODUCTION
In this contribution we consider an unsupervised clustering task. Recent results
on neural coding and parameter estimation (supervised and unsupervised learning
tasks) show that the mutual information between data and parameters (equivalently between neural activities and stimulus) is a relevant tool for deriving optimal
performances (Clarke and Barron, 1990; Nadal and Parga, 1994; Opper and Kinzel,
1995; Haussler and Opper, 1995; Opper and Haussler, 1995; Rissanen, 1996; BruneI
and Nadal 1998).
Laboratory associated with C.N.R.S. (U.R.A. 1306), ENS, and Universities Paris VI
and Paris VII.
With this tool we analyze a particular case which has been studied extensively
with the "replica t echnique" in the framework of statistical mechanics (Watkin and
Nadal , 1994; Reimann and Van den Broeck, 1996; Buhot and Gordon, 1998). After
introducing the model in the next section, we consider the mutual information
between the patterns and the parameter. We derive a bound on it which is of
interest for not too large p. We show how the "free energy" associated to Gibbs
learning is related to the mutual information. We then compare the exact results
with replica calculations. We show that the asymptotic behaviour (p > > N) of
the mutual information is in agreement with the exact result which is known to be
related to the Fisher information (Clarke and Barron, 1990; Rissanen, 1996; Brunel
and Nadal 1998). However for moderate values of $\alpha = p/N$, we can eliminate false
solutions of the replica calculation. Finally, we give bounds related to the mutual
information between the parameter and its estimators , and discuss common features
of parameter estimation and neural coding.
2 THE MODEL
We consider the problem where a direction $\theta$ (a unit vector) of dimension N has
to be found based on the observation of p patterns. The probability distribution
of the patterns is uniform except in the unknown symmetry-breaking direction $\theta$.
Various instances of this problem have been studied recently within the statistical
mechanics framework, making use of the replica technique (Watkin and Nadal, 1994;
Reimann and Van den Broeck, 1996; Buhot and Gordon, 1998). More specifically,
it is assumed that a set of patterns $D = \{\xi^\mu\}_{\mu=1}^{p}$ is generated by p independent
samplings from a non-uniform probability distribution $P(\xi|\theta)$ where $\theta = \{\theta_1, \ldots, \theta_N\}$
represents the symmetry-breaking orientation. The probability is written in the
form:
$$P(\xi|\theta) = \frac{1}{(2\pi)^{N/2}} \exp\Big( -\frac{\xi^2}{2} - V(\lambda) \Big) \qquad (1)$$
where N is the dimension of the space, $\lambda = \theta \cdot \xi$ is the overlap and $V(\lambda)$ characterizes
the structure of the data in the breaking direction . As justified within the Bayesian
and Statistical Physics frameworks, one has to consider a prior distribution on the
parameter space, p(O), e.g. the uniform distribution on the sphere.
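Because the density of equation (1) factorizes along $\theta$, patterns can be drawn by sampling the overlap $\lambda$ from $P(\lambda) \propto \exp(-\lambda^2/2 - V(\lambda))$ and filling the orthogonal directions with standard Gaussian noise. Here is a minimal sketch of this, discretizing $\lambda$ on a grid (an implementation choice of ours):

    import numpy as np

    def sample_patterns(theta, p, V, lam_grid, seed=0):
        rng = np.random.default_rng(seed)
        theta = theta / np.linalg.norm(theta)
        weights = np.exp(-0.5 * lam_grid ** 2 - V(lam_grid))
        weights /= weights.sum()
        lam = rng.choice(lam_grid, size=p, p=weights)  # overlaps along theta
        xi = rng.normal(size=(p, theta.size))          # standard Gaussian patterns
        xi -= np.outer(xi @ theta, theta)              # strip the component along theta
        return xi + np.outer(lam, theta)               # re-insert the sampled overlaps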
The mutual information $I(D,\theta)$ between the data and $\theta$ is defined by
$$I(D,\theta) = \int dD \int d\theta\, p(\theta)\, P(D|\theta) \ln \frac{P(D|\theta)}{P(D)} \qquad (2)$$
It can be rewritten:
$$\frac{1}{N} I(D,\theta) = -\alpha \langle V(\lambda) \rangle - \frac{1}{N} \langle\langle \ln(Z) \rangle\rangle, \qquad (3)$$
where
$$Z = \int_{-\infty}^{\infty} d\theta\, p(\theta) \exp\Big( - \sum_{\mu=1}^{p} V(\lambda^\mu) \Big) \qquad (4)$$
In the statistical physics literature $-\ln Z$ is a "free energy". The brackets $\langle\langle \cdot \rangle\rangle$
stand for the average over the pattern distribution, and $\langle \cdot \rangle$ is the average over
the resulting overlap distribution. We will consider properties valid for any N and
any p, others for $p \gg N$, and the replica calculations are valid for N and p large
at any given value of $\alpha = p/N$.
3 LINEAR BOUND
The mutual information, a positive quantity, cannot grow faster than linearly in the
amount of data, p. We derive the simple linear bound:
$$I(D,\theta) \leq -p \langle V(\lambda) \rangle \qquad (5)$$
We prove the inequality for the case $\langle \lambda \rangle = 0$. The extension to the case $\langle \lambda \rangle \neq 0$
is straightforward. The mutual information can be written as $I = H(D) - H(D|\theta)$.
The calculation of $H(D|\theta)$ is straightforward:
$$H(D|\theta) = \frac{pN}{2} \ln(2\pi e) + \frac{p}{2} (\langle \lambda^2 \rangle - 1) + p \langle V \rangle \qquad (6)$$
Now, the entropy of the data, $H(D) = -\int dD\, P(D) \ln P(D)$, is lower or equal to the
entropy of a Gaussian distribution with the same variance. We thus calculate the
covariance matrix of the data:
$$\langle\langle \xi_i^\mu \xi_j^\nu \rangle\rangle = \delta^{\mu\nu} \Big( \delta_{ij} + (\langle \lambda^2 \rangle - 1)\, \overline{\theta_i \theta_j} \Big) \qquad (7)$$
where the overline denotes the average over the parameter distribution.
We then have
$$H(D) \leq \frac{pN}{2} \ln(2\pi e) + \frac{p}{2} \sum_{i=1}^{N} \ln\big( 1 + (\langle \lambda^2 \rangle - 1)\, \gamma_i \big) \qquad (8)$$
where the $\gamma_i$ are the eigenvalues of the matrix $\overline{\theta_i \theta_j}$. Using $\sum_{i=1}^{N} \overline{\theta_i^2} = 1$ and the
property $\ln(1 + x) \leq x$ we obtain
$$H(D) \leq \frac{pN}{2} \ln(2\pi e) + \frac{p}{2} (\langle \lambda^2 \rangle - 1) \qquad (9)$$
Putting (9) and (6) together, we find the inequality (5). From this and (3) it
follows also
$$p \langle V \rangle \leq -\langle\langle \ln(Z) \rangle\rangle \leq 0 \qquad (10)$$
4 REPLICA CALCULATIONS
In the limit $N \to \infty$ with $\alpha$ finite, the free energy becomes self-averaging, that
is, equal to its average, and its calculation can be performed by standard replica
technique. This calculation is the same as calculations related to Gibbs learning,
done in (Reimann and van den Broeck, 1996; Buhot and Gordon, 1998), but the
interpretation of the order parameters is different. Assuming replica symmetry, we
reproduce in fig.2 results from (Buhot and Gordon, 1998) for the behaviour with a
of Q which is the typical overlap between two directions compatible with the data.
The overlap distribution $P(\lambda)$ was chosen to get patterns distributed according to
two clusters along the symmetry-breaking direction:
$$P(\lambda) = \frac{1}{2\sigma\sqrt{2\pi}} \sum_{\epsilon = \pm 1} \exp\Big( -\frac{(\lambda - \epsilon\rho)^2}{2\sigma^2} \Big) \qquad (11)$$
In fig.2 and fig.1 we show the corresponding behaviour of the average free energy
and of the mutual information.
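The two limiting behaviours discussed below can be evaluated numerically for the two-cluster distribution (11). In this sketch V is recovered from $P(\lambda)$ via $V(\lambda) = -\ln P(\lambda) - \lambda^2/2 - \frac{1}{2}\ln(2\pi)$, which assumes the normalization convention of equation (1); the function returns the linear-regime value $-\alpha \langle V(\lambda) \rangle$ of I/N and the large-$\alpha$ form $\frac{1}{2}\ln(\alpha \langle (dV/d\lambda)^2 \rangle)$ of equation (12).

    import numpy as np

    def mi_curves(alphas, rho=1.2, sigma=0.5):
        lam = np.linspace(-8.0, 8.0, 4001)
        P = (np.exp(-(lam - rho) ** 2 / (2 * sigma ** 2)) +
             np.exp(-(lam + rho) ** 2 / (2 * sigma ** 2))) / (2 * sigma * np.sqrt(2 * np.pi))
        V = -np.log(P) - lam ** 2 / 2 - 0.5 * np.log(2 * np.pi)
        dV = np.gradient(V, lam)
        mean_V = np.trapz(P * V, lam)                  # <V(lambda)>
        mean_dV2 = np.trapz(P * dV ** 2, lam)          # <(dV/dlambda)^2>
        alphas = np.asarray(alphas, dtype=float)
        linear = -alphas * mean_V                      # I/N in the linear phase
        asymptotic = 0.5 * np.log(alphas * mean_dV2)   # Eq. (12), large alpha
        return linear, asymptotic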
4.1 Discussion
Up to $\alpha_1$, $Q = 0$ and the mutual information is in a purely linear phase: $\frac{1}{N} I(D,\theta) =
-\alpha \langle V(\lambda) \rangle$. This corresponds to a regime where the data have no correlations.
For $\alpha \geq \alpha_1$, the replica calculation admits up to three different solutions. In view of
the fact that the mutual information can never decrease with $\alpha$ and that the average
free energy cannot be positive, it follows that only two behaviours are acceptable.
In the first, $Q$ leaves the solution $Q = 0$ at $\alpha_1$, and follows the lower branch until $\alpha_3$
where it jumps to the upper branch. This is the stable way. The second possibility
is that $Q = 0$ until $\alpha_2$ where it directly jumps to the upper branch. In (Buhot and
Gordon, 1998), it has been suggested that one can reach the upper branch well
before $\alpha_3$. Here we have thus shown that it is only possible from $\alpha_2$. It remains
also the possibility of a replica symmetry breaking phase in this range of $\alpha$.
In the limit $\alpha \to \infty$ the replica calculus gives for the behaviour of the mutual
information
$$\frac{I(D,\theta)}{N} \simeq \frac{1}{2} \ln\Big( \alpha \Big\langle \Big( \frac{dV(\lambda)}{d\lambda} \Big)^2 \Big\rangle \Big) \qquad (12)$$
The r.h.s. can be shown to be equal to half the logarithm of the determinant of the
Fisher information matrix, which is the exact asymptotic behaviour (Clarke and
Barron, 1990; Brunel and Nadal, 1998). It can be shown that this behaviour for
$p \gg N$ implies that the best possible estimator based on the data will saturate
the Cramer-Rao bound (see e.g. Blahut, 1988). It has already been noted that the
asymptotic performance in estimating the direction, as computed by the replica
technique, saturates this bound (Van den Broeck, 1997). What we have checked here
is that this manifests itself in the behaviour of the mutual information for large $\alpha$.
4.2 Bounds for specific estimators
Given the data D, one wants to find an estimate J of the parameter. The amount
of information $I(D,\theta)$ limits the performance of the estimator. Indeed, one has
$I(J,\theta) \leq I(D,\theta)$. This basic relationship allows one to derive interesting bounds based
on the choice of particular estimators. We consider first Gibbs learning, which
consists in sampling a direction J from the 'a posteriori' probability $P(J|D) =
P(D|J)\, p(J) / P(D)$. In this particular case, the differential entropies of the estimator
J and of the parameter $\theta$ are equal, $H(J) = H(\theta)$. If $1 - Q_g^2$ is the variance of the
Gibbs estimator one gets, for a Gaussian prior on $\theta$, the relations
$$-\frac{N}{2} \ln(1 - Q_g^2) \leq I(J,\theta) \leq I(D,\theta) \qquad (13)$$
These relations together with the linear bound (5) allow one to bound the order parameter $Q_g$ for small $\alpha$ where this bound is of interest.
The Bayes estimator consists in taking for J the center of mass of the 'a posteriori'
probability. In the limit $\alpha \to \infty$, this distribution becomes Gaussian centered at its
most probable value. We can thus assume $P_{Bayes}(J|\theta)$ to be Gaussian with mean
$Q_b \theta$ and variance $1 - Q_b^2$; then the first inequality in (13) (with $Q_g$ replaced by
$Q_b$ and Gibbs by Bayes) is an equality. Then using the Cramer-Rao bound on the
variance of the estimator, that is $(1 - Q_b^2)/Q_b^2 \geq \big( \alpha \langle (dV/d\lambda)^2 \rangle \big)^{-1}$, one can
bound the mutual information for the Bayes estimator:
$$I_{Bayes}(J,\theta) \leq \frac{N}{2} \ln\Big( 1 + \alpha \Big\langle \Big( \frac{dV(\lambda)}{d\lambda} \Big)^2 \Big\rangle \Big) \qquad (14)$$
These different quantities are shown on fig.1.
5 CONCLUSION
We have studied the mutual information between data and parameter in a problem
of unsupervised clustering: we derived bounds, asymptotic behaviour, and compared these results with replica calculations. Most of the results concerning the
behaviour of the mutual information, observed for this particular clustering task,
are "universal", in that they will be qualitatively the same for any problem which
can be formulated as either a parameter estimation task or a neural coding/signal
processing task. In particular, there is a linear regime for small enough amount of
data (number of coding cells), up to a maximal value related to the VC dimension
of the system. For large data size, the behaviour is logarithmic, that is $I \sim \ln p$
(Nadal and Parga, 1994; Opper and Haussler, 1995) or $\frac{N}{2} \ln p$ (Clarke and Barron, 1990; Opper and Haussler, 1995; Brunel and Nadal, 1998) depending on the
smoothness of the model. A more detailed review with more such universal features,
exact bounds and relations between unsupervised and supervised learning will be
presented elsewhere (Nadal, Herschkowitz, to appear in Phys. Rev. E).
Acknowledgements
We thank Arnaud Buhot and Mirta Gordon for stimulating discussions. This work
has been partly supported by the French contract DGA 96 2557 A/DSP.
References
[B88] R. E. Blahut, Addison-Wesley, Cambridge MA, 1988.
[BG98] A. Buhot and M. Gordon. Phys. Rev. E, 57(3):3326-3333, 1998.
[BN98] N. Brunel and J.-P. Nadal. Neural Computation, to appear, 1998.
[CB90] B. S. Clarke and A. R. Barron. IEEE Trans. on Information Theory, 36(3):453-471, 1990.
[HO95] D. Haussler and M. Opper. Conditionally independent observations. In VIIIth Ann. Workshop on Computational Learning Theory (COLT95), pages 402-411, Santa Cruz, 1995 (ACM, New York).
[OH95] M. Opper and D. Haussler. Supervised learning. Phys. Rev. Lett., 75:3772-3775, 1995.
[NP94a] J.-P. Nadal and N. Parga. Unsupervised learning. Neural Computation, 6:489-506, 1994.
[OK95] M. Opper and W. Kinzel. In E. Domany, J. L. van Hemmen, and K. Schulten, editors, Physics of Neural Networks, pages 151-. Springer, 1995.
[Ris] J. Rissanen. IEEE Trans. on Information Theory, 42(1):40-47, 1996.
[RVdB96] P. Reimann and C. Van den Broeck. Phys. Rev. E, 53(4):3989-3998, 1996.
[VdB98] C. Van den Broeck. In proceedings of the TANG workshop (Hong Kong, May 26-28, 1997).
[WN94] T. Watkin and J.-P. Nadal. J. Phys. A: Math. and Gen., 27:1899-1915, 1994.
[Figure 1 plot: curves of $-p/N \langle V \rangle$, the replica information, $\frac{1}{2}\ln(1 + (p/N) \langle (V')^2 \rangle)$, $-\frac{1}{2}\ln(1 - Q_b^2)$, and $-\frac{1}{2}\ln(1 - Q_g^2)$, as functions of $\alpha$.]
Figure 1: Dashed line is the linear bound on the mutual information $I(D,\theta)/N$. The
latter, calculated with the replica technique, saturates the bound for $\alpha \leq \alpha_1$, and
is the (lower) solid line for $\alpha > \alpha_1$. The special structure of fig.2 is not visible here
due to the graph scale. The curve $-\frac{1}{2}\ln(1 - Q_g^2)$ is a lower bound on the mutual
information between the Gibbs estimator and $\theta$ (which would be equal to this bound
if the conditional probability distribution of the estimator were Gaussian with mean
$Q_g \theta$ and variance $1 - Q_g^2$). Shown also is the analogous curve $-\frac{1}{2}\ln(1 - Q_b^2)$ for
the Bayes estimator. In the limit $\alpha \to \infty$ these two latter Gaussian curves and
the replica information $I(D,\theta)$ all converge toward the exact asymptotic behaviour,
which can be expressed as $\frac{1}{2}\ln(1 + \alpha \langle (dV(\lambda)/d\lambda)^2 \rangle)$ (upper solid line). This latter
expression is, for any p, an upper bound for the two Gaussian curves.
[Figure 2 plots: upper panel, $-\langle\langle \ln Z \rangle\rangle / N$ versus $\alpha$ near the transition; lower panel, $Q_b$ versus $\alpha$ together with the Cramer-Rao bound (dashed), with $\alpha_2$ and $\alpha_3$ marked.]
Figure 2: In the lower figure, the optimal learning curve $Q_b(\alpha)$ for $\rho = 1.2$ and
$\sigma = 0.5$, as computed in (Buhot and Gordon, 1998) under the replica symmetric
ansatz. We have put the Cramer-Rao bound for this quantity. In the upper figure,
the average free energy $-\langle\langle \ln Z \rangle\rangle / N$. All the part above zero has to be
rejected. $\alpha_1 = 2.10$, $\alpha_2 = 2.515$ and $\alpha_3 = 2.527$.
A Phase Space Approach to Minimax
Entropy Learning and the Minutemax
Approximations
A. L. Yuille
James M. Coughlan
Smith-Kettlewell Inst.
San Francisco, CA 94115
Smith-Kettlewell Inst.
San Francisco, CA 94115
Abstract
There has been much recent work on measuring image statistics
and on learning probability distributions on images. We observe
that the mapping from images to statistics is many-to-one and
show it can be quantified by a phase space factor. This phase
space approach throws light on the Minimax Entropy technique for
learning Gibbs distributions on images with potentials derived from
image statistics and elucidates the ambiguities that are inherent to
determining the potentials. In addition, it shows that if the phase
factor can be approximated by an analytic distribution then this
approximation yields a swift "Minutemax" algorithm that vastly
reduces the computation time for Minimax entropy learning. An
illustration of this concept, using a Gaussian to approximate the
phase factor, gives a good approximation to the results of Zhu
and Mumford (1997) in just seconds of CPU time. The phase
space approach also gives insight into the multi-scale potentials
found by Zhu and Mumford (1997) and suggests that the forms of
the potentials are influenced greatly by phase space considerations.
Finally, we prove that probability distributions learned in feature
space alone are equivalent to Minimax Entropy learning with a
multinomial approximation of the phase factor.
1 Introduction
Bayesian probability theory gives a powerful framework for visual perception (Knill
and Richards 1996). This approach, however, requires specifying prior probabilities
and likelihood functions. Learning these probabilities is difficult because it requires
estimating distributions on random variables of very high dimensions (for example,
images with 200 x 200 pixels, or shape curves of length 400 pixels). An important
recent advance is the Minimax Entropy Learning theory. This theory was developed
by Zhu, Wu and Mumford (1997 and 1998) and enables them to learn probability
distributions for the intensity properties and shapes of natural stimuli and clutter.
In addition, when applied to real world images it has an interesting link to the work
on natural image statistics (Field 1987), (Ruderman and Bialek 1994), (Olshausen
and Field 1996). We wish to simplify Minimax and make the learning easier, faster
and more transparent.
In this paper we present a phase space approach to Minimax Entropy learning. This
approach is based on the observation that the mapping from images to statistics
is many-to-one and can be quantified by a phase space factor. If this phase space
factor can be approximated by an analytic function then we obtain approximate
"Minutemax" algorithms which greatly speed up the learning process. In one version
of this approximation, the unknown parameters of the distribution to be learned
are related linearly to the empirical statistics of the image data set, and may be
solved for in seconds or less. Independent of this approximation, the Minutemax
framework also illuminates an important combinatoric aspect of Minimax, namely
the fact that many different images can give rise to the same image statistics. This
"phase space" factor explains the ambiguities inherent in learning the parameters
of the unknown distribution, and motivates the approximation that reduces the
problem to linear algebra. Finally, we prove that probability distributions learned in
feature space alone are equivalent to Minimax Entropy learning with a multinomial
approximation of the phase factor.
2 A Phase Space Perspective on Minimax
We wish to learn a distribution P(I) on images, where I denotes the set of pixel
values I(x, y) on a finite image lattice, and each value I(x, y) is quantized to a finite
set of intensity values. (In fact, this approach is general and applies to any patterns,
not just images.) We define a set of image statistics $\phi_1(I), \phi_2(I), \ldots, \phi_s(I)$, which
we concatenate as a single vector function $\vec\phi(I)$. If these statistics have empirical
mean $\vec d = \langle \vec\phi(I) \rangle$ on a dataset of images (we assume a large enough dataset for
the law of large numbers to apply; see Zhu and Mumford (1997) for an analysis
of the errors inherent in this assumption) then the maximum entropy distribution
PM(I) with these empirical statistics is an exponential (Gibbs) distribution of the
form
$$P_M(I) = \frac{e^{\vec\lambda \cdot \vec\phi(I)}}{Z(\vec\lambda)} \qquad (1)$$
where the potential $\vec\lambda$ is set so that $\langle \vec\phi(I) \rangle_M = \vec d$.
In summary, the goal of Minimax Learning is to find an appropriate set of
image filters for the domain of interest (i.e. maximally informative filters) and to
estimate $\vec\lambda$ given $\vec d$. Extensive computation is required to determine $\vec\lambda$; the phase
space approach to Minimax Learning motivates approximations that make $\vec\lambda$ easy
to estimate.
2.1 Image Histogram Statistics
The statistics we consider (following Zhu, Wu and Mumford (1997, 1998)) are defined as histograms of the responses of one or more filters applied across an entire
image. Consider a single filter f (linear or non-linear) with response $f_x(I)$ centered
at position x in the image. Without loss of generality, we will assume the filter has
quantized integer responses from 1 through $f_{max}$.
For notational convenience we transform the filter response $f_x(I)$ to a binary representation $b_x(I)$, defined as a column vector with $f_{max}$ components: $b_{x,z}(I) = \delta_{z, f_x(I)}$,
where the index z ranges from 1 through $f_{max}$. This vector is composed of all zeros
except for the entry corresponding to the filter response, which is set to one. The
image statistics vector is then a histogram vector defined as the average of the
$b_x(I)$'s over all N pixels: $\vec\phi(I) = \frac{1}{N} \sum_x b_x(I)$. The entries in $\vec\phi(I)$ then sum to 1.
(We can generalize to the case of multiple filters $f^{(1)}, f^{(2)}, \ldots, f^{(m)}$, as detailed in
Coughlan and Yuille (1999).)
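A short sketch of this construction for a single filter (our code, not the authors'): given the quantized integer responses over the image, $\vec\phi(I)$ is just the normalized histogram, i.e. the average of the one-hot vectors $b_x(I)$.

    import numpy as np

    def histogram_statistics(responses, f_max):
        # responses: integer filter outputs in 1..f_max, one per pixel.
        counts = np.bincount(np.ravel(responses) - 1, minlength=f_max)
        return counts / np.size(responses)   # entries sum to 1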
2.2 The Phase Factor
The original Minimax distribution $P_M(I)$ induces a distribution $P_M(\vec\phi)$ on the statistics themselves, without reference to a particular image:
$$P_M(\vec\phi) = \frac{g(\vec\phi)\, e^{\vec\lambda \cdot \vec\phi}}{Z(\vec\lambda)} \qquad (2)$$
where $g(\vec\phi)$ is a combinatoric phase space factor, with a corresponding normalized
combinatoric distribution $\hat g(\vec\phi)$, defined by:
$$g(\vec\phi_0) = \sum_I \delta_{\vec\phi_0, \vec\phi(I)}, \quad \text{and} \quad \hat g(\vec\phi) = g(\vec\phi)/Q^N, \qquad (3)$$
where the phase space factor $g(\vec\phi)$ counts the number of images I having statistics
$\vec\phi$. N is the number of pixels and Q is the number of pixel intensity levels, i.e.
$Q^N$ is the total number of possible images I. It should be emphasized that the
phase factor depends only on the set of filters chosen and is independent of the true
distribution P(I). Thus the phase factor can be computed offline, independent of
the image data set.
In this paper we will discuss two useful approximations to $g(\vec\phi)$: a Gaussian approximation, which yields the swift approximation for learning, and a multinomial
approximation, which establishes a connection between Minimax and standard feature learning.
2.3 The Non-Uniqueness of the Potential $\vec\lambda$
Given a set of filters and their empirical mean statistics $\vec d$, is the potential $\vec\lambda$ uniquely
specified? Clearly, any solution for $\vec\lambda$ may be shifted by an additive constant
($\lambda_i \to \lambda_i' = \lambda_i + k$ for all i), yielding a different normalization constant $Z(\vec\lambda)$
but preserving $P_M(I)$. In this section we show that other, non-trivial ambiguities
in $\vec\lambda$ which preserve $P_M(I)$ can exist, stemming from the fact that some values of
$\vec\phi$ are inconsistent with every possible image I and hence never arise (in any possible image dataset). These "intrinsic" ambiguities are inherent to Minimax and are
independent of the true distribution P(I). We will also discuss a second type of
possible ambiguity which depends on the characteristics of the image dataset used
for learning.
We can uncover the intrinsic ambiguities in X by examining the covariance C of
g(?). (See Coughlan and Yuille (1999) for details on calculating the mean c and
covariance C for any set of linear filters or non-linear filters that are scalar functions
of linear filters.) Defining the set of all possible statistics values Φ = {φ : g(φ) ≠ 0},
the null space of C reflects degeneracy (i.e. flatness) in Φ. The following theorem,
proved in Coughlan and Yuille (1999), shows that λ is determined only up to a
hyperplane whose dimension is the nullity of C.

Theorem 1 (Intrinsic Ambiguity in λ). Cμ = 0 if and only if e^{(λ+μ)·φ(I)} / Z(λ+μ)
and e^{λ·φ(I)} / Z(λ) are identical distributions on I.
In addition to this intrinsic ambiguity in λ, it is also possible that different values of
λ may yield distinct distributions which nevertheless have the same mean statistics
<φ> on the image dataset. (As shown in Coughlan and Yuille (1999), there is a
convex set of distributions, of which the true distribution P(I) is a member, which
share the same mean statistics <φ>.) This second kind of ambiguity stems from
the fact that the mean statistics convey only a fraction of the information that
is contained in the true distribution P(I). To resolve this second ambiguity it is
necessary to extract more information from the image data set. The simplest way
to achieve this is to use a larger (or more informative) set of filters to lower the
entropy of P_M(I) (this topic is discussed in more detail in Zhu, Wu and Mumford
(1997, 1998) and Coughlan and Yuille (1999)). Alternatively, one can extend Minimax
to include second-order statistics, i.e. the covariance of φ in addition to its mean d.
This is an important topic for future research.
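Theorem 1 and the intrinsic ambiguity can be checked numerically on a toy problem.
The sketch below (Python/NumPy; the 2x2 binary image space and the row-difference
filter are our illustrative choices, not the paper's setup) enumerates every image,
builds the combinatoric mean c and covariance C, and verifies that shifting λ along
the null space of C leaves the Gibbs distribution over images unchanged:

```python
import itertools
import numpy as np

def phi_of(img):
    """Statistics of one filter: the row difference img[:,1]-img[:,0],
    quantized to {-1, 0, 1} and histogrammed into three bins."""
    diffs = img[:, 1] - img[:, 0]
    return np.array([(diffs == v).mean() for v in (-1, 0, 1)])

# Enumerate all Q^N = 2^4 binary 2x2 images.
images = [np.array(bits).reshape(2, 2)
          for bits in itertools.product([0, 1], repeat=4)]
phis = np.array([phi_of(im) for im in images])

# Uniform weight over images induces exactly g(phi) on distinct phi values.
g = np.ones(len(images)) / len(images)
c = phis.T @ g                                # combinatoric mean
C = (phis - c).T @ ((phis - c) * g[:, None])  # combinatoric covariance

# Null space of C via SVD.
U, s, Vt = np.linalg.svd(C)
null = Vt[s < 1e-10]

def gibbs(lmbda):
    w = np.exp(phis @ lmbda)
    return w / w.sum()

lmbda = np.random.randn(3)
for mu in null:
    # Shifting lambda along the null space leaves P_M(I) unchanged.
    assert np.allclose(gibbs(lmbda), gibbs(lmbda + mu))
```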
3 The Minutemax Approximations
We now illustrate the phase space approach by showing that suitable approximations
of the phase space factor g(φ) make it easy to estimate the potential λ given the
empirical mean d. The resulting fast approximations to Minimax Learning are
called "Minutemax" algorithms.
3.1 The Gaussian Approximation of g(φ)
If the phase space factor g(φ) may be approximated as a multi-variate Gaussian
(see Coughlan and Yuille (1999) for a justification of this approximation), then the
probability distribution P_M(φ) = g(φ) e^{λ·φ} / Z(λ) reduces to another multi-variate
Gaussian. (Note that we are making the Gaussian approximation in φ space, the
space of all possible image statistics histograms, and not in filter response (feature)
space.) As we will see, this result greatly simplifies the problem of estimating the
potential λ.
Recall that the mean and covariance of g(φ) are denoted by c and C, respectively.
The null space of C has dimension n and is spanned by vectors μ^(1), μ^(2), ..., μ^(n).
As discussed in Theorem 1, for all feasible values of φ (i.e. all φ ∈ Φ) and each μ^(i)
in the null space, μ^(i)·φ is a constant k_i. Thus we have that

    g_gauss(φ) ∝ {∏_{i=1}^{n} δ_{μ^(i)·φ, k_i}} e^{-(1/2)(φ_r - c_r)^T C_r^{-1}(φ_r - c_r)},    (4)

where the subscript r denotes projection onto the rank of C. Thus

    P_gauss(φ) ∝ g_gauss(φ) e^{λ·φ} ∝ {∏_{i=1}^{n} δ_{μ^(i)·φ, k_i}} e^{-(1/2)(φ_r - c_r)^T C_r^{-1}(φ_r - c_r) + λ·φ}.

Completing the square in the exponent yields

    P_gauss(φ) ∝ {∏_{i=1}^{n} δ_{μ^(i)·φ, k_i}} e^{-(1/2)(φ_r - ψ_r)^T C_r^{-1}(φ_r - ψ_r)},

where ψ_r is the projection of any ψ that satisfies ψ = c + Cλ. Since P_gauss(φ) is a
Gaussian, we have <φ>_gauss = ψ = d, and so we can write a linear equation relating
λ and d:

    d = c + Cλ.

Figure 1: From left to right: d, c, and -λ (as computed by the Gaussian Minutemax
approximation) for the first filter alone.

It can be shown (Zhu, private communication) that solving this equation is equivalent
to one step of Newton-Raphson for minimization of an appropriate cost function. This
will fail to be a good approximation if the cost function is highly non-quadratic. As
explained in Coughlan and Yuille (1999), the Gaussian approximation is also equivalent
to a second-order perturbation expansion of the partition function Z(λ); higher-order
corrections can be made by computing higher-order moments of g(φ).
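A minimal sketch of the resulting Gaussian Minutemax estimator (Python/NumPy;
ours, not the authors' code): given the offline c and C and the empirical mean d,
solve d = c + Cλ on the rank of C with a pseudo-inverse. The null-space components
of λ are left at zero, which is harmless by Theorem 1.

```python
import numpy as np

def gaussian_minutemax(d, c, C, rcond=1e-10):
    """Solve d = c + C @ lam in the least-squares sense. Components of
    lam along the null space of C are set to zero; by Theorem 1 they do
    not affect the learned distribution P_M(I)."""
    return np.linalg.pinv(C, rcond=rcond) @ (np.asarray(d) - np.asarray(c))
```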
3.2 Experimental Results
We tested the Gaussian Minutemax procedure on two sets of filters: a single (fine
scale) image gradient filter ∂I/∂x, and a set of multi-scale image gradient filters
defined at three scales, similar to those used by Zhu and Mumford (1997). In both
sets, the fine scale gradient filter is linear with kernel (1, -1), representing a
discretization of ∂/∂x. In the second set, the medium scale filter kernel is (U_2, -U_2)/4
and the coarse scale kernel is (U_4, -U_4)/16, where U_n denotes the n x n matrix of all
ones. The responses of the medium and coarse filters were rounded (i.e. quantized)
to the nearest integer, thus adding a non-linearity to these filters. Finally, d was
measured on a data set of over 100 natural images; the fine scale components of d
are shown in the first panel of Figure 1 and were empirically very similar to the
medium and coarse scale components.

A λ that solves d = c + Cλ is shown in the third panel of Figure 1 for the first
filter (along with c in the second panel) and in the three panels of Figure 2 for
the multi-scale filter set. The form of λ is qualitatively similar to that obtained by
Zhu and Mumford (1997) (bearing in mind that Zhu disregarded any filter responses
with magnitude above Q/2, i.e. his filter response range is half of ours). In addition,
the eigenvectors of C with small eigenvalues are large away from the origin, so one
should not trust the values of the potentials there (obtained by any algorithm).
Zhu and Mumford (1997) report interactions between filters applied at different
scales. This is because the resulting potentials appear different from the potential
at the fine scale even though the histograms appear similar at all scales. We argue,
however, that some of this "interaction" is due to the different phase factors at
different scales. In other words, the potentials would look different at different scales
even if the empirical histograms were identical, because of differing phase factors.
3.3 The Multinomial Approximation of g(φ)
Many learning theories simply make probability distributions on feature space. How
do they differ from Minimax Entropy Learning which works on image space? By
Figure 2: From left to right: the fine, medium and coarse components of λ as computed
by the Gaussian Minutemax approximation.

Figure 3: Left to right: d, c, and -λ as given by the multinomial approximation for
the ∂I/∂x filter at fine scale.
examining the phase factor we will show that the two approaches are not identical
in general. Feature space learning ignores the coupling between the filters
which arises due to how the statistics are obtained. More precisely, the probability
distribution obtained on feature space, P_F, is equivalent to the Minimax distribution
P_M if, and only if, the phase factor is multinomial.
We begin the analysis by considering a single filter. As before we define the
combinatoric mean c = Σ_φ g(φ) φ. The multinomial approximation of g(φ) is equivalent
to assuming that the combinatoric frequencies of filter responses are independent from
pixel to pixel. Since the combinatoric frequency of filter response
j ∈ {1, 2, ..., f_max} is c_j and there are Nφ_j pixels with response j, we have:

    g_mult(φ) ∝ N! ∏_{j=1}^{f_max} c_j^{Nφ_j} / (Nφ_j)!,  and
    P_mult(φ) ∝ N! ∏_{j=1}^{f_max} (c_j e^{λ_j/N})^{Nφ_j} / (Nφ_j)!,    (5)

using P_mult(φ) ∝ g_mult(φ) e^{λ·φ}. Therefore P_mult(φ) is also a multinomial. Shifting
the λ_j's by an appropriate additive constant, we can make the constant of
proportionality in the above equation equal to 1. In this case we have
<φ_j>_mult = c_j e^{λ_j/N}, and λ_j = N log(d_j / c_j) by setting <φ_j>_mult to the
empirical mean d_j.

Note that if any component d_j of the empirical mean is close to 0, then by the
previous equation any small perturbations in d_j (e.g. from sampling error) will
yield large changes in λ_j, making the estimate of that component unstable.
We can generalize the multinomial approximation of g(φ) to the multiple filter
case merely by factoring g_mult(φ) into separate multinomials, one for each filter.
Of course, this approximation neglects all interactions among filters (and among
pixels).
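The multinomial Minutemax estimate is a closed-form expression per histogram bin.
A sketch (ours, not the paper's code); the `eps` floor is our guard against the
instability noted above when a component of d is near zero:

```python
import numpy as np

def multinomial_minutemax(d, c, n_pixels, eps=1e-12):
    """lambda_j = N * log(d_j / c_j), elementwise over histogram bins.
    Bins where d_j is near zero give unreliable, highly variable estimates."""
    d = np.maximum(np.asarray(d, dtype=float), eps)
    c = np.maximum(np.asarray(c, dtype=float), eps)
    return n_pixels * np.log(d / c)
```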
3.4 The Multinomial Approximation and Feature Learning
The connection between the multinomial approximation and feature learning is
straightforward once we consider a distribution on the feature vector f. This
distribution (denoted P_F for "feature") is constructed assuming independent filter
responses from pixel to pixel and with statistics matching the empirical mean d:
P_F(f) = ∏_{i=1}^{N} d_{f_i}, where f_i denotes the filter response at pixel i. Then
it follows that P_F(φ) is a multinomial:

    P_F(φ) = N! ∏_{j=1}^{f_max} d_j^{Nφ_j} / (Nφ_j)!.

Since d_j = c_j e^{λ_j/N}, we have our main result that P_F(φ) = P_mult(φ).
4 Conclusion
The main point of this paper is to introduce the phase space factor to quantify the
mapping between images and their feature statistics. This phase space approach
can: (i) provide fast approximate "Minutemax" algorithms, (ii) clarify the relationship
between probability distributions learned in feature and image space, and (iii)
determine intrinsic ambiguities in the λ potentials.
Acknowledgements
We acknowledge stimulating discussions with Song Chun Zhu. Funding was provided by the Smith-Kettlewell Institute Core Grant and the Center for Imaging
Sciences ARO grant DAAN04-95-1-0494.
References
Coughlan, J.M. and Yuille, A.L. "The Phase Space of Minimax Entropy Learning".
In preparation. 1999.
Field, D. J. "Relations between the statistics of natural images and the response
properties of cortical cells". Journal of the Optical Society of America A, 4(12), 2379-2394. 1987.
D.C. Knill and W. Richards. (Eds). Perception as Bayesian Inference. Cambridge University Press. 1996.
Olshausen, B. A. and Field, D. J. "Emergence of simple-cell receptive field properties
by learning a sparse code for natural images". Nature. 381, 607-609. 1996.
B.D. Ripley. Pattern Recognition and Neural Networks. Cambridge University Press. 1996.
Ruderman, D. and Bialek, W. "Statistics of Natural Images: Scaling in the Woods".
Physical Review Letters. 73, Number 6,(8 August 1994), 814-817. 1994.
S.C. Zhu, Y. Wu, and D. Mumford. "Minimax Entropy Principle and Its Application to Texture Modeling". Neural Computation. Vol. 9. no. 8. Nov. 1997.
S.C. Zhu and D. Mumford. "Prior Learning and Gibbs Reaction-Diffusion". IEEE
Trans. on PAMI vol. 19, no. 11. Nov. 1997.
S.-C. Zhu, Y.-N. Wu and D. Mumford. "FRAME: Filters, Random fields And Maximum
Entropy: Towards a Unified Theory for Texture Modeling". Int'l Journal of
Computer Vision 27(2), 1-20, March/April 1998.
685 | 1,627 | A Polygonal Line Algorithm for Constructing
Principal Curves
Balazs Kegl, Adam Krzyzak
Dept. of Computer Science
Concordia University
1450 de Maisonneuve Blvd. W.
Montreal, Canada H3G IM8
kegl@cs.concordia.ca
krzyzak@cs.concordia.ca
Tamas Linder
Dept. of Mathematics
and Statistics
Queen's University
Kingston, Ontario
Canada K7L 3N6
linder@mast.queensu.ca
Kenneth Zeger
Dept. of Electrical and
Computer Engineering
University of California
San Diego, La Jolla
CA 92093-0407
zeger@ucsd.edu
Abstract
Principal curves have been defined as "self consistent" smooth curves
which pass through the "middle" of a d-dimensional probability distribution or data cloud. Recently, we [1] have offered a new approach by
defining principal curves as continuous curves of a given length which
minimize the expected squared distance between the curve and points of
the space randomly chosen according to a given distribution. The new
definition made it possible to carry out a theoretical analysis of learning
principal curves from training data. In this paper we propose a practical
construction based on the new definition. Simulation results demonstrate
that the new algorithm compares favorably with previous methods both
in terms of performance and computational complexity.
1 Introduction
Hastie [2] and Hastie and Stuetzle [3] (hereafter HS) generalized the self-consistency
property of principal components and introduced the notion of principal curves. Consider a
d-dimensional random vector X = (X^(1), ..., X^(d)) with finite second moments, and let
f(t) = (f_1(t), ..., f_d(t)) be a smooth curve in R^d parameterized by t ∈ R. For any x ∈ R^d
let t_f(x) denote the parameter value t for which the distance between x and f(t) is minimized.
By the HS definition, f(t) is a principal curve if it does not intersect itself and is
self-consistent, that is, f(t) = E(X | t_f(X) = t). Intuitively speaking, self-consistency means
that each point of f is the average (under the distribution of X) of points that project there.
Based on their defining property HS developed an algorithm for constructing principal
curves for distributions or data sets, and described an application in the Stanford Linear
Collider Project [3].
Principal curves have been applied by Banfield and Raftery [4] to identify the outlines of
ice floes in satellite images. Their method of clustering about principal curves led to a fully
automatic method for identifying ice floes and their outlines. On the theoretical side, Tibshirani [5] introduced a semiparametric model for principal curves and proposed a method
for estimating principal curves using the EM algorithm. Recently, Delicado [6] proposed
yet another definition based on a property of the first principal components of multivariate normal distributions. Close connections between principal curves and Kohonen's self-organizing maps were pointed out by Mulier and Cherkassky [7]. Self-organizing maps
were also used by Der et al. [8] for constructing principal curves.
There is an unsatisfactory aspect of the definition of principal curves in the original HS
paper as well as in subsequent works. Although principal curves have been defined to be
nonparametric, their existence for a given distribution or probability density is an open
question, except for very special cases such as elliptical distributions. This also makes it
difficult to theoretically analyze any learning schemes for principal curves.
Recently, we [1] have proposed a new definition of principal curves which resolves this
problem. In the new definition, a curve f* is called a principal curve of length L for X if f*
minimizes Δ(f) = E[inf_t ||X - f(t)||²] = E||X - f(t_f(X))||², the expected squared distance
between X and the curve, over all curves of length less than or equal to L. It was proved in
[1] that for any X with finite second moments there always exists a principal curve in the
new sense.
A theoretical algorithm has also been developed to estimate principal curves based on a
common model in statistical learning theory (e.g. see [9]). Suppose that the distribution of
X is concentrated on a closed and bounded convex set K ⊂ R^d, and we are given n training
points X_1, ..., X_n drawn independently from the distribution of X. Let S denote the family
of curves taking values in K and having length not greater than L. For k ≥ 1 let S_k be the
set of polygonal (piecewise linear) curves in K which have k segments and whose lengths
do not exceed L. Let

    Δ(x, f) = min_t ||x - f(t)||²    (1)

denote the squared distance between x and f. For any f ∈ S the empirical squared error
of f on the training data is the sample average Δ_n(f) = (1/n) Σ_{i=1}^{n} Δ(X_i, f). Let the
theoretical algorithm choose an f_{k,n} ∈ S_k which minimizes the empirical error, i.e. let
f_{k,n} = argmin_{f ∈ S_k} Δ_n(f). It was shown in [1] that if k is chosen to be proportional
to n^{1/3}, then the expected squared loss of the empirically optimal polygonal curve with k
segments and length at most L converges, as n → ∞, to the squared loss of the principal
curve of length L at a rate Δ(f_{k,n}) - Δ(f*) = O(n^{-1/3}).
Although amenable to theoretical analysis, the algorithm in [1] is computationally burdensome for implementation. In this paper we develop a suboptimal algorithm for learning
principal curves. This practical algorithm produces polygonal curve approximations to the
principal curve just as the theoretical method does, but global optimization is replaced by
a less complex iterative descent method. We give simulation results and compare our algorithm with previous work. In general, on examples considered by HS the performance of
the new algorithm is comparable with the HS algorithm, while it proves to be more robust
to changes in the data generating model.
2 A Polygonal Line Algorithm

Given a set of data points X_n = {x_1, ..., x_n} ⊂ R^d, the task of finding the polygonal curve
with k segments and length L which minimizes (1/n) Σ_{i=1}^{n} Δ(x_i, f) is computationally
difficult. We propose a suboptimal method with reasonable complexity. The basic idea is to start
with a straight line segment f_{1,n} (k = 1) and in each iteration of the algorithm to increase
the number of segments by one by adding a new vertex to the polygonal curve fk,n produced
by the previous iteration. After adding a new vertex, the positions of all vertices are updated
in an inner loop.
Figure 1: The curves f_{k,n} produced by the polygonal line algorithm for n = 100 data points.
The data was generated by adding independent Gaussian errors to both coordinates of a
point chosen randomly on a half circle. (a) f_{1,n}, (b) f_{2,n}, (c) f_{4,n}, (d) the final curve
(the output of the algorithm).
Figure 2: The flow chart of the polygonal line algorithm.
The inner loop consists of a projection step and an optimization step. In the projection
step the data points are partitioned into "Voronoi regions" according to which segment or
vertex they project. In the optimization step the new position of each vertex is determined
by minimizing an average squared distance criterion penalized by a measure of the local
curvature. These two steps are iterated until convergence is achieved and f_{k,n} is produced.
Then a new vertex is added.

The algorithm stops when k exceeds a threshold c(n, Δ). This stopping criterion is based
on a heuristic complexity measure, determined by the number of segments k, the number of
data points n, and the average squared distance Δ_n(f_{k,n}).
THE INITIALIZATION STEP. To obtain f_{1,n}, take the shortest segment of the first principal
component line which contains all of the projected data points.
THE PROJECTION STEP. Let f denote a polygonal curve with vertices v_1, ..., v_{k+1} and
closed line segments s_1, ..., s_k, such that s_i connects vertices v_i and v_{i+1}. In this step
the data set X_n is partitioned into (at most) 2k + 1 disjoint sets V_1, ..., V_{k+1} and
S_1, ..., S_k, the Voronoi regions of the vertices and segments of f, in the following manner.
For any x ∈ R^d let Δ(x, s_i) be the squared distance from x to s_i (see definition (1)), and
let Δ(x, v_i) = ||x - v_i||². Then let

    V_i = {x ∈ X_n : Δ(x, v_i) = Δ(x, f),  Δ(x, v_i) < Δ(x, v_m), m = 1, ..., i-1}.

Upon setting V = ∪_{i=1}^{k+1} V_i, the S_i sets are defined by

    S_i = {x ∈ X_n : x ∉ V,  Δ(x, s_i) = Δ(x, f),  Δ(x, s_i) < Δ(x, s_m), m = 1, ..., i-1}.

The resulting partition is illustrated in Figure 3.
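The projection step translates directly into code. Below is an illustrative sketch
(Python/NumPy; ours, not the authors' implementation), with the squared point-to-segment
distance implementing definition (1) restricted to a single closed segment, and ties
broken toward lower indices as in the definitions above:

```python
import numpy as np

def dist2_to_segment(x, a, b):
    """Squared distance from point x to the closed segment [a, b]."""
    ab = b - a
    t = np.clip(np.dot(x - a, ab) / max(np.dot(ab, ab), 1e-12), 0.0, 1.0)
    p = a + t * ab
    return float(np.dot(x - p, x - p))

def project(points, vertices):
    """Partition the data into Voronoi sets V_1..V_{k+1} and S_1..S_k."""
    k = len(vertices) - 1
    V = [[] for _ in range(k + 1)]
    S = [[] for _ in range(k)]
    for x in points:
        dv = [float(np.dot(x - v, x - v)) for v in vertices]
        ds = [dist2_to_segment(x, vertices[i], vertices[i + 1])
              for i in range(k)]
        iv, js = int(np.argmin(dv)), int(np.argmin(ds))
        # The nearest point of the curve is a vertex exactly when the
        # vertex distance does not exceed every segment distance.
        if dv[iv] <= ds[js]:
            V[iv].append(x)
        else:
            S[js].append(x)
    return V, S
```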
Figure 3: The Voronoi partition induced by the vertices and segments of f
THE VERTEX OPTIMIZATION STEP. In this step we iterate over the vertices, and relocate
each vertex while all the others are kept fixed. For each vertex, we minimize Δ_n(v_i) +
λ_p P(v_i), a local average squared distance criterion penalized by a measure of the local
curvature, by using a gradient (steepest descent) method.
The local measure of the average squared distance is calculated from the data points which
project to v_i or to the line segment(s) starting at v_i (see Projection Step). Accordingly,
let σ⁺(v_i) = Σ_{x∈S_i} Δ(x, s_i), σ⁻(v_i) = Σ_{x∈S_{i-1}} Δ(x, s_{i-1}), and
ν(v_i) = Σ_{x∈V_i} Δ(x, v_i). Now define the local average squared distance as a function
of v_i by

    Δ_n(v_i) = (ν(v_i) + σ⁺(v_i)) / (|V_i| + |S_i|)                        if i = 1,
    Δ_n(v_i) = (σ⁻(v_i) + ν(v_i) + σ⁺(v_i)) / (|S_{i-1}| + |V_i| + |S_i|)  if 1 < i < k+1,    (2)
    Δ_n(v_i) = (σ⁻(v_i) + ν(v_i)) / (|S_{i-1}| + |V_i|)                    if i = k+1.
In the theoretical algorithm the average squared distance Δ_n(f) is minimized subject to
the constraint that f is a polygonal curve with k segments and length not exceeding L. One
could use a Lagrangian formulation and attempt to find a new position for v_i (while all
other vertices are fixed) such that the penalized squared error Δ_n(f) + λ l(f)² is minimum.
However, we have observed that this approach is very sensitive to the choice of λ, and
reproduces the estimation bias of the HS algorithm, which flattens the curve at areas of high
curvature. So, instead of directly penalizing the lengths of the line segments, we chose
to penalize sharp angles to obtain a smooth curve solution. Nonetheless, note that if only
one vertex is moved at a time, penalizing sharp angles will indirectly penalize long line
segments. At inner vertices v_i, 3 ≤ i ≤ k-1, we penalize the sum of the cosines of the
three angles at vertices v_{i-1}, v_i, and v_{i+1}. The cosine function was picked because of
its regular behavior around π, which makes it especially suitable for the steepest descent
algorithm. To make the algorithm invariant under scaling, we multiply the cosines by the
squared radius of the data, that is, r = (1/2) max_{x,y∈X_n} ||x - y||. At the endpoints and at
their immediate neighbors (v_i, i = 1, 2, k, k+1), where penalizing sharp angles does not
translate to penalizing long line segments, the penalty on a nonexistent angle is replaced
by a direct penalty on the squared length of the first (or last) segment. Formally, let γ_i
denote the angle at vertex v_i, let π(v_i) = r²(1 + cos γ_i), let μ⁺(v_i) = ||v_i - v_{i+1}||²,
and let μ⁻(v_i) = ||v_i - v_{i-1}||². Then the penalty at vertex v_i is
    P(v_i) = 2μ⁺(v_i) + π(v_{i+1})              if i = 1,
    P(v_i) = μ⁻(v_i) + π(v_i) + π(v_{i+1})      if i = 2,
    P(v_i) = π(v_{i-1}) + π(v_i) + π(v_{i+1})   if 3 ≤ i ≤ k-1,
    P(v_i) = π(v_{i-1}) + π(v_i) + μ⁺(v_i)      if i = k,
    P(v_i) = π(v_{i-1}) + 2μ⁻(v_i)              if i = k+1.
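For concreteness, here is a sketch of the penalty (Python/NumPy; our illustrative
transcription of the formulas above, not the authors' code). Indices are 0-based, so
index i here corresponds to v_{i+1} in the text, and the piecewise cases assume k ≥ 3;
the full objective moved by steepest descent is Δ_n(v_i) + λ_p P(v_i).

```python
import numpy as np

def angle_term(vm, v, vp, r2):
    """pi(v) = r^2 (1 + cos(gamma)) for the angle gamma at v."""
    a, b = vm - v, vp - v
    denom = max(np.linalg.norm(a) * np.linalg.norm(b), 1e-12)
    return r2 * (1.0 + np.dot(a, b) / denom)

def penalty(v, i, r2):
    """P(v_i) from the piecewise definition; v is the (k+1) x d vertex
    array and i is 0-based (0..k)."""
    k = len(v) - 1
    mu_p = lambda j: float(np.dot(v[j] - v[j + 1], v[j] - v[j + 1]))
    mu_m = lambda j: float(np.dot(v[j] - v[j - 1], v[j] - v[j - 1]))
    pi = lambda j: angle_term(v[j - 1], v[j], v[j + 1], r2)
    if i == 0:                       # i = 1 in the text
        return 2 * mu_p(i) + pi(i + 1)
    if i == 1:                       # i = 2
        return mu_m(i) + pi(i) + pi(i + 1)
    if i <= k - 2:                   # 3 <= i <= k-1
        return pi(i - 1) + pi(i) + pi(i + 1)
    if i == k - 1:                   # i = k
        return pi(i - 1) + pi(i) + mu_p(i)
    return pi(i - 1) + 2 * mu_m(i)   # i = k+1
```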
One important issue is the amount of smoothing required for a given data set. In the HS
algorithm one needs to set the penalty coefficient of the spline smoother, or the span of
the scatterplot smoother. In our algorithm, the corresponding parameter is the curvature
penalty factor λ_p. If some a priori knowledge about the distribution is available, one can
use it to determine the smoothing parameter. However, in the absence of such knowledge,
the coefficient should be data-dependent. Intuitively, λ_p should increase with the number
of segments and the size of the average squared error, and it should decrease with the data
size. Based on heuristic considerations and after carrying out practical experiments, we
set λ_p = λ'_p n^{-1/3} Δ_n(f_{k,n})^{1/2} r^{-1}, where λ'_p is a parameter of the algorithm,
and can be kept fixed for substantially different data sets.
ADDING A NEW VERTEX. We start with the optimized f_{k,n} and choose the segment that
has the largest number of data points projecting to it. If more than one such segment exists,
we choose the longest one. The midpoint of this segment is selected as the new vertex.
Formally, let I = {i : |S_i| ≥ |S_j|, j = 1, ..., k} and ℓ = argmax_{i∈I} ||v_i - v_{i+1}||.
Then the new vertex is v_new = (v_ℓ + v_{ℓ+1})/2.
STOPPING CONDITION. According to the theoretical results of [1], the number of segments k
should be proportional to n^{1/3} to achieve the O(n^{-1/3}) convergence rate for the expected
squared distance. Although the theoretical bounds are not tight enough to determine
the optimal number of segments for a given data size, we found that k ~ n^{1/3} also works
in practice. To achieve robustness we need to make k sensitive to the average squared
distance. The stopping condition blends these two considerations. The algorithm stops when
k exceeds c(n, Δ_n(f_{k,n})) = λ_k n^{1/3} Δ_n(f_{k,n})^{-1/2} r.
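Putting the pieces together, the outer loop can be sketched as follows (ours, not the
authors' code). `optimize_vertices` stands in for the inner projection/optimization
loop and is assumed to return the optimized vertices, the average squared distance
Δ_n(f_{k,n}), and the segment Voronoi sets; `initial_segment` is the assumed
first-principal-component initialization.

```python
import numpy as np

def add_vertex(vertices, S):
    """Split the segment with the most projected points (among ties,
    the longest one) at its midpoint."""
    counts = np.array([len(s) for s in S])
    ties = np.flatnonzero(counts == counts.max())
    lengths = [np.linalg.norm(vertices[i + 1] - vertices[i]) for i in ties]
    ell = int(ties[int(np.argmax(lengths))])
    v_new = 0.5 * (vertices[ell] + vertices[ell + 1])
    return np.insert(vertices, ell + 1, v_new, axis=0)

def polygonal_line(points, optimize_vertices, initial_segment, lambda_k=0.3):
    n = len(points)
    r = 0.5 * max(np.linalg.norm(x - y) for x in points for y in points)
    vertices = initial_segment(points)   # shortest first-PC segment (assumed)
    while True:
        vertices, delta_n, S = optimize_vertices(points, vertices)
        k = len(vertices) - 1
        # Stop when k exceeds c(n, Delta_n) = lambda_k n^(1/3) Delta_n^(-1/2) r.
        if k > lambda_k * n ** (1.0 / 3.0) * delta_n ** (-0.5) * r:
            return vertices
        vertices = add_vertex(vertices, S)
```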
COMPUTATIONAL COMPLEXITY. The complexity of the inner loop is dominated by the
complexity of the projection step, which is O(nk). Increasing the number of segments by
one at a time (as described in Section 2), and using the stopping condition of Section 2, the
computational complexity of the algorithm becomes O(n^{5/3}). This is slightly better than
the O(n²) complexity of the HS algorithm. The complexity can be dramatically decreased if,
instead of adding only one vertex, a new vertex is placed at the midpoint of every segment,
giving O(n^{4/3} log n), or if k is set to be a constant, giving O(n). These simplifications
work well in certain situations, but the original algorithm is more robust.
3 Experimental Results
We have extensively tested our algorithm on two-dimensional data sets. In most experiments the data was generated by a commonly used (see, e.g., [3], [5], [7]) additive model
X = Y + e, where Y is uniformly distributed on a smooth planar curve (hereafter called the
generating curve) and e is bivariate additive noise which is independent of Y.
Since the "true" principal curve is not known (note that the generating curve in the model
X = Y + e is in general not a principal curve either in the HS sense or in our definition), it
is hard to give an objective measure of performance. For this reason, in what follows, the
performance is judged subjectively, mainly on the basis of how closely the resulting curve
follows the shape of the generating curve.
In general, in simulation examples considered by HS the performance of the new algorithm
is comparable with the HS algorithm. Due to the data-dependence of the curvature penalty
factor and the stopping condition, our algorithm turns out to be more robust to alterations in
the data generating model, as well as to changes in the parameters of the particular model.
We use varying generating shapes, noise parameters, and data sizes to demonstrate the robustness of the polygonal line algorithm. All of the plots in Figure 4 show the generating
curve (Generator Curve), the curve produced by our polygonal line algorithm (Principal
Curve), and the curve produced by the HS algorithm with spline smoothing (HS Principal
Curve), which we have found to perform better than the HS algorithm using scatterplot
smoothing. For closed generating curves we also include the curve produced by the Banfield and Raftery (BR) algorithm [4], which extends the HS algorithm to closed curves (BR
Principal Curve). The two coefficients of the polygonal line algorithm are set in all experiments to the constant values λ_k = 0.3 and λ'_p = 0.1. All plots have been normalized to fit
in a 2 x 2 square. The parameters given below refer to values before this normalization.
Figure 4: (a) The Circle Example: the BR and the polygonal line algorithm show less
bias than the HS algorithm. (b) The Half Circle Example: the HS and the polygonal line
algorithms produce similar curves. (c) and (d) Transformed Data Sets: the polygonal line
algorithm still follows fairly closely the "distorted" shapes. (e) Small Noise Variance and
(f) Large Sample Size: the curves produced by the polygonal line algorithm are nearly
indistinguishable from the generating curves.
In Figure 4(a) the generating curve is a circle of radius r = 1, and e = (e_1, e_2) is a zero mean
bivariate uncorrelated Gaussian with variance E(e_i²) = 0.04, i = 1, 2. The performance of
the three algorithms (HS, BR, and the polygonal line algorithm) is comparable, although
the HS algorithm exhibits more bias than the other two. Note that the BR algorithm [4] has
been tailored to fit closed curves and to reduce the estimation bias. In Figure 4(b), only half
of the circle is used as a generating curve and the other parameters remain the same. Here,
too, both the HS and our algorithm behave similarly.

When we depart from these usual settings the polygonal line algorithm exhibits better behavior than the HS algorithm. In Figure 4(c) the data set of Figure 4(b) was linearly transformed using a fixed 2 x 2 matrix, and in Figure 4(d) a second linear transformation was used.
The original data set was generated by an S-shaped generating curve, consisting of two
half circles of unit radii, to which the same Gaussian noise was added as in Figure 4(b). In
both cases the polygonal line algorithm produces curves that fit the generator curve more
closely. This is especially noticeable in Figure 4(c) where the HS principal curve fails to
follow the shape of the distorted half circle.
There are two situations when we expect our algorithm to perform particularly well. If the
distribution is concentrated on a curve, then according to both the HS and our definitions
the principal curve is the generating curve itself. Thus, if the noise variance is small,
we expect both algorithms to very closely approximate the generating curve. The data in
Figure 4(e) was generated using the same additive Gaussian model as in Figure 4(a), but
the noise variance was reduced to E(e_i²) = 0.001 for i = 1, 2. In this case we found that the
polygonal line algorithm outperformed both the HS and the BR algorithms.
The second case is when the sample size is large. Although the generating curve is not
necessarily the principal curve of the distribution, it is natural to expect the algorithm to
well approximate the generating curve as the sample size grows. Such a case is shown in
Figure 4(f), where n = 10000 data points were generated (but only a small subset of these
was actually plotted). Here the polygonal line algorithm approximates the generating curve
with much better accuracy than the HS algorithm.
The Java implementation of the algorithm is available at the WWW site
http://www.cs.concordia.ca/-grad/kegl/pcurvedemo.html
4 Conclusion
We offered a new definition of principal curves and presented a practical algorithm for
constructing principal curves for data sets. One significant difference between our method
and previous principal curve algorithms ([3],[4], and [8]) is that, motivated by the new
definition, our algorithm minimizes a distance criterion (2) between the data points and the
polygonal curve rather than minimizing a distance criterion between the data points and the
vertices of the polygonal curve. This and the introduction of the data-dependent smoothing
factor Ap made our algorithm more robust to variations in the data distribution, while we
could keep computational complexity low.
Acknowledgments
This work was supported in part by NSERC grant OGPOOO270, Canadian National Networks of
Centers of Excellence grant 293, and the National Science Foundation.
References
[1] B. Kegl, A. Krzyzak, T. Linder, and K. Zeger, ''Principal curves: Learning and convergence," in
Proceedings of IEEE Int. Symp. on Information Theory, p. 387, 1998.
[2] T. Hastie, Principal curves and surfaces. PhD thesis, Stanford University, 1984.
[3] T. Hastie and W. Stuetzle, ''Principal curves," Journal of the American Statistical Association,
vol. 84,no. 406, pp. 502-516, 1989.
[4] J. D. Banfield and A. E. Raftery, "Ice floe identification in satellite images using mathematical
morphology and clustering about principal curves," Journal of the American Statistical Association, vol. 87, no. 417, pp. 7-16, 1992.
[5] R. Tibshirani, "Principal curves revisited," Statistics and Computation, vol. 2, pp. 183-190, 1992.
[6] P. Delicado, ''Principal curves and principal oriented points," Tech. Rep. 309, Department
d'Economia i Empresa, Universitat Pompeu Fabra, 1998.
http://www.econ.upf.es/deehome/what/wpapers/postscripts/309.pdf.
[7] F. Mulier and V. Cherkassky, "Self-organization as an iterative kernel smoothing process," Neural
Computation, vol. 7, pp. 1165-1177, 1995.
[8] R. Der, U. Steinmetz, and G. Balzuweit, "Nonlinear principal component analysis," tech. rep.,
Institut für Informatik, Universität Leipzig, 1998.
http://www.informatik.uni-leipzig.de/~der/Veroeff/npcafin.ps.
[9] V. N. Vapnik, The Nature of Statistical Learning Theory. New York: Springer-Verlag, 1995.
686 | 1,628 | Utilizing Time: Asynchronous Binding
Bradley C. Love
Department of Psychology
Northwestern University
Evanston, IL 60208
Abstract
Historically, connectionist systems have not excelled at representing and manipulating complex structures. How can a system composed of simple neuron-like computing elements encode complex
relations? Recently, researchers have begun to appreciate that representations can extend in both time and space. Many researchers
have proposed that the synchronous firing of units can encode complex representations. I identify the limitations of this approach
and present an asynchronous model of binding that effectively represents complex structures. The asynchronous model extends the
synchronous approach. I argue that our cognitive architecture utilizes a similar mechanism.
1 Introduction
Simple connectionist models can fall prey to the "binding problem" . A binding
problem occurs when two different events (or objects) are represented identically.
For example, representing "John hit Ted" by activating the units JOHN, HIT,
and TED would lead to a binding problem because the same pattern of activation
would also be used to represent "Ted hit John". The binding problem is ubiquitous
and is a concern whenever internal representations are postulated. In addition
to guarding against the binding problem, an effective binding mechanism must
construct representations that assist processing. For instance, different states of the
world must be represented in a manner that assists in discovering commonalities
between disparate states, allowing for category formation and analogical processing.
Interestingly, new connectionist binding mechanisms [5, 9, 12] utilize time in their
operation. Pollack's Recursive Auto-Associative Memory (RAAM) model combines
a standard fixed-width multi-layer network architecture with a stack and a simple
controller, enabling RAAM to encode hierarchical representations over multiple processing steps. RAAM requires more time to encode representations as they become
more complex, but its space requirements remain constant. The clearest example
of utilizing time are models that perform dynamic binding through synchronous
firings of units [17, 5, 12]. Synchrony models explicitly use time to mark relations
between units, distributing complex representations across multiple time steps.
Most other models neglect the time aspect of representation. Even synchrony models fail to fully utilize time (I will clarify this point in a later section). In this
paper, a model is introduced (the asynchronous binding mechanism) that attempts
to rectify this situation. The asynchronous approach is similar to the synchronous
approach but is more effective in binding complex representations and exploiting
time.
2 Utilizing time and the brain
Representational power can be greatly increased by taking advantage of the time
dimension of representation. For instance, a telephone would need thousands of
buttons to make a call if sequences of digits were not used. From the standpoint of
a neuron, taking advantage of timing information increases processing capacity by
more than a 100 fold [13] . While this suggests that the neural code might utilize
both time and space resources, the neuroscience community has not yet arrived at a
consensus. While it is known that the behavior of a postsynaptic neuron is affected
by the location and arrival times of dendritic input [10], it is generally believed that
only the rate of firing (a neuron's firing rate is akin to the activation level of a unit in
a connectionist network) can code information, as opposed to the timing of spikes,
since neurons are noisy devices [14]. However, findings that are taken as evidence
for rate coding, like elevated firing rates in memory retention tasks [8], can often
be reinterpreted as part of complex cortical events that extend through time [1]. In
accord with this view, recent empirical findings suggests that the timing of spikes
(e.g., firing patterns, intervals) are also part of the neural code [4, 16]. Contrary to
the rate based view (which holds only that only the firing rate of a neuron encodes
information), these studies suggest that the timing of spikes encodes information
(e.g., when two neurons repeatedly spike together it signifies something different
than when they fire out of phase, even if their firing rates are identical in both
cases).
Behavioral findings also appear consistent with the idea that time is used to construct complex representations. Behavioral research in illusory conjunction phenomena [15], and sentence processing performance [11] all suggest that bindings
or relations are established through time, with bindings becoming more certain as
processing proceeds. In summary, early in processing humans can gauge which
representational elements are relevant while remaining uncertain about how these
elements are interrelated.
3 Dynamic binding through synchrony
Given the demands placed on a representational system, a system that utilizes
dynamic binding through synchrony would seem to be a good candidate mental
architecture (though, as we will see, limitations arise when representing complex
structures) . A synchronous binding account of our mental architecture is consistent
(at a general level) with behavioral findings, the intuition that complex representations are distributed across time, and that neural temporal dynamics code information . Synchrony seems to offer the power to recombine a finite set of elements
in a virtually unlimited number of ways (the defining characteristic of a discrete
combinatorial system).
While synchrony models seem appropriate for modeling certain behaviors, dynamic
binding through synchrony does not seem to be an appropriate mechanism for
establishing complex recursive bindings [2]. In a synchronous dynamic binding
system, the distinction between a slot and a filler is lost, since bindings are not
directional (i.e., which unit is a predicate and which unit is an argument is not clear).
The slot and the filler simply share the same phase. In this sense, the mechanism is
more akin to a grouping mechanism than to a binding mechanism. Grouping units
together indicates that the units are a part of the same representation, but does
not sort out the relations among the units as binding does.
Synchrony runs into trouble when a unit has to act simultaneously as a slot and
a filler. For instance, to represent embedded propositions with synchronous binding, a controller needs to be added. Consider a structure with embedding, like
A→B→C, which could be represented with synchronous firings if A and B fired synchronously and then B and C fired synchronously. Still, synchronous binding blurs
the distinction between a slot and a filler, necessitating that A, B, and C be marked
as slots or fillers to unambiguously represent the simple A→B→C structure. Notice
that B must be marked as a slot when it fires synchronously with A, but must be
marked as a filler when it synchronously fires with C. When representing embedded
structures, the synchronous approach becomes complicated (i.e., simple connections
are not sufficient to modulate firing patterns) and rigid (i.e., parallelism and flexibility are lost when a unit has to be either a slot or a filler). Ideally, units would be
able to act simultaneously as slots and fillers, instead of alternating between these
two structural roles.
4 The asynchronous approach
While synchrony models utilize some timing information, other valuable timing information is discarded as noise, making it difficult to represent multiple levels of
structure. If A fired slightly before B, which fired slightly before C, asynchronous
timing information (ordering information) would be available. This ordering information allows for directional binding relations and alleviates the need to label
units as slots or fillers. Notice that B can act simultaneously as a slot and a filler.
Directional bindings can unambiguously represent complex structures.
Phase locking and wave-like patterns of firing need not occur during asynchronous
binding. For instance, the firing pattern that encodes a structure like A→B→C
does not need to be orderly (i.e., starting with A and ending with C). To encode
A→B→C, unit B's firing schedule must observably speed up (on average) after unit
A fires, while C's must speed up after B fires. For example, if we only considered the
time window immediately after a unit fires, a firing sequence of B, C, no unit fires,
A, and then B would provide evidence for the structure A→B→C. Of course, if
A, B, and C fire periodically with stochastic schedules that are influenced by other
units' firings, spurious binding evidence will accrue (e.g., occasionally, C will fire
and A will fire in the next time step). Luckily, these accidents will be less frequent
than events that support the intended bindings. As binding evidence is accumulated
over time, binding errors will become less likely.
Interestingly, the asynchronous mechanism can also represent structures through an
inhibitory process that mirrors the excitatory process described above. A-+B-+C
could be represented asynchronously if A was less likely to fire after B fired and B
was less likely to fire after C fired. An inhibitory (negative) connection from B to
A is in some ways equivalent to an excitatory (positive) connection form A to B.
41
Utilizing Time: Asynchronous Binding
4.1
The mathematical expression of the model
The previous discussion of the asynchronous approach can be formalized. Below is
a description of an asynchronous model that I have implemented.
4.1.1
The anatomy of a unit
Individual units, when unaffected by other units, will fire periodically when active:
(1)
if Rti ~ 1, then Oti+l = 1, otherwise Oti + l = O.
where Oti+l is the unit 's output (at time i + 1) , Rti is the unit's output refractory
period which is randomly set (after the unit fires) to a value drawn from the uniform
distribution between 0 and 1 and is incremented at each time step by some constant
(which was set to .1 in all simulations). Notice that a unit produces an output one
time step after its output refractory period reaches threshold.
4.1.2 A unit's behavior in the presence of other units
A unit alters its output refractory if it receives a signal (via a connection) from a
unit that has just fired (i.e., a unit with a positive output) . For example, if unit A
fires (its output is 1) and there is a connection to unit B of strength +.3, then B's
output refractory will be incremented by +.3, enabling unit B to fire during the
next time step or at least decreasing the time until B fires. Alternatively, negative
(inhibitory) connections lower refractory.
Two unconnected units will tend to fire independently of each other, providing little
evidence for a binding relation. Again, over a small time window, two units may
fire contiguously by chance, but over many firings the evidence for a binding will
approach zero.
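These dynamics are easy to simulate. A minimal sketch (Python; ours, not the paper's
code) in which connections map ordered unit pairs to weights; the .1 refractory increment
follows the text, while the exact ordering of the reset, connection, and drift updates
within a time step is our assumption:

```python
import random

def simulate(n_units, connections, n_steps, increment=0.1):
    """connections[(i, j)] = w: unit i's firing adds w to unit j's
    output refractory R_j. Returns the set of units firing at each step."""
    R = [random.random() for _ in range(n_units)]
    history = []
    for _ in range(n_steps):
        fired = [i for i in range(n_units) if R[i] >= 1.0]
        history.append(fired)
        for i in fired:
            R[i] = random.random()          # reset after firing
        for (i, j), w in connections.items():
            if i in fired:
                R[j] += w                   # excite (w > 0) or inhibit (w < 0)
        for i in range(n_units):
            R[i] += increment               # steady drift toward threshold
    return history

# Example: encode A -> B -> C with excitatory forward and
# inhibitory backward connections.
spikes = simulate(3, {(0, 1): 0.1, (1, 2): 0.1,
                      (1, 0): -0.1, (2, 1): -0.1}, n_steps=200)
```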
4.1.3 Interpreting firing patterns
Every time a unit fires, it creates evidence for binding hypotheses. The critical
issue is how to collect and evaluate evidence for bindings. There are many possible
evidence functions that interpret firing patterns in a sensible fashion. One simple
function is to have evidence for two units binding decrease linearly as the time
between their firings increases. Evidence is updated every time step according to
the following equation:

    if p ≥ (t_{u_j} - t_{u_i}) ≥ 1, then ΔE_{ij} = -(1/p)(t_{u_j} - t_{u_i}) + (1/p) + 1,    (2)

where p is the size of the window for considering binding evidence (i.e., if p is 5,
then units firing 5 time steps apart still generate binding evidence), t_{u_i} is the most
recent time step unit u_i fired, and ΔE_{ij} is the change in the amount of evidence for
u_i binding to u_j. Of course, some evidence will be spurious. The following decision
rule can be used to determine if two units share a binding relation:

    if (E_{ij} - E_{ji}) > k, then u_i binds to u_j,    (3)

where k is some threshold greater than 0. This decision rule is formally equivalent
to the diffusion model, which is a type of random walk model [6]. Equations 2 and 3
are very simple. Other more sophisticated methods can be used for collecting and
evaluating binding evidence.
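Equations 2 and 3 translate into a few lines (Python; our sketch). `E` is an n x n
evidence table and `last_fire[i]` holds t_{u_i}; evidence accrues whenever a unit fires
within p steps after another unit's most recent firing:

```python
def update_evidence(E, last_fire, fired, t, p=5):
    """Equation 2: when unit j fires at time t, add evidence that every
    unit i firing 1..p steps earlier binds to j."""
    for j in fired:
        for i, t_i in enumerate(last_fire):
            if i != j and t_i is not None and 1 <= t - t_i <= p:
                E[i][j] += -(1.0 / p) * (t - t_i) + (1.0 / p) + 1.0
    for j in fired:
        last_fire[j] = t

def binds(E, i, j, k=2.0):
    """Equation 3: u_i binds to u_j when net evidence exceeds threshold k."""
    return (E[i][j] - E[j][i]) > k

# Typical use: E = [[0.0] * n for _ in range(n)], last_fire = [None] * n,
# then call update_evidence(E, last_fire, fired, t) after each time step.
```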
4.2 Performance of the Asynchronous Mechanism
In this section, the asynchronous binding mechanism's performance characteristics
are examined. In particular, the model's ability to represent tree structures of
varying complexity was explored.

Figure 1: Performance curves for the 9 different structures are shown.

Tree structures can be used to represent complex
relational information, like the parse of a sentence. An advantage of using tree
structures to measure performance is that the complexity of a tree can be easily
described by two factors. Trees can vary in their depth and branching. In the
simulations reported here, trees had a branching factor and depth of either 1, 2,
or 3. These two factors were crossed, yielding 9 different tree structures. This
design makes it possible to assess how the model processes structures of varying
complexity. One sensible prediction (given our intuitions about how we process
structured representations) is that trees with greater depth and branching will take
longer to represent.
In the simulations reported here, both positive and negative connections were used
simultaneously. For instance, in a tree structure, if A was intended to bind to B, A's connection to B was set to +.1 and B's connection to A was set to -.1. The
combination of both connection types yields the best performance.
In these simulations both excitatory and inhibitory binding connection values were
set relatively low (all binding connections were set to size .1), providing a strict test
of the model's sensitivity. The low connection values prevented bound units from establishing tight couplings (characteristic of bound units in synchrony models). For example, with an excitatory connection from A to B of .1, A's firing does not ensure that B will fire in the next time step (or the next few time steps, for that matter). The lack of a tight coupling requires the model to be more sensitive to how one unit affects another unit's firing schedule. With all connections of size .1, firing patterns representing complex structures will appear chaotic and disorderly.
In all simulations, the time window for considering binding evidence was 5 time
steps (i.e., Equation 2 was used with p set to 5).
Performance was measured by calculating the percent bindings correct. The bindings the model settled upon were determined by first counting the number of bindings in the intended structure. The model then created a structure with this number of bindings (equivalent to treating k as a free parameter), choosing the bindings it believed to be most likely based on accrued evidence. The model was correct when the bindings it believed to be present corresponded to the intended bindings.
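A minimal sketch of this scoring procedure (an assumed reading of the text, not the author's code) follows: rank directed pairs by their directional evidence difference, keep as many as the intended structure has bindings, and score the overlap.

def percent_correct(E, intended):
    n = len(intended)
    # rank pairs by the directional evidence difference E_ij - E_ji
    scored = sorted(E, key=lambda ij: E[ij] - E.get((ij[1], ij[0]), 0.0),
                    reverse=True)
    chosen = set(scored[:n])
    return 100.0 * len(chosen & set(intended)) / n

E = {("A", "B"): 3.0, ("B", "A"): 0.5, ("B", "C"): 2.0, ("C", "B"): 1.9}
print(percent_correct(E, [("A", "B"), ("B", "C")]))   # 100.0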
For each of the 9 structures (3 levels of depth by 3 levels of branching), hundreds
of trials were run (the mechanism is stochastic) until performance curves became
smooth. The model's performance was measured every 25th time step up to the
2500th time step. Performance (averaged across trials) for all structures is shown
in Figure 1. Any visible difference between performance curves is statistically
significant. As predicted, there was a main effect for both branching and depth. The
left panels of Figure 1 organize the data by branching factor, revealing a systematic
effect of depth. The right panel is organized by depth and reveals a systematic effect
of branching. As structures become more complex, they appear to take longer to
represent.
5
Conclusions
The ability to effectively represent and manipulate complex knowledge structures
is central to human cognition [3]. Connectionist models generally lack this ability, making it difficult to give a connectionist account of our mental architecture.
The asynchronous mechanism provides a connectionist framework for representing
structures in a way that is biologically, computationally, and behaviorally feasible.
The mechanism establishes bindings over time using simple neuron-like computing
elements. The asynchronous approach treats bindings as directional and does not
blur the distinction between a slot and a filler as the synchronous approach does.
The asynchronous mechanism builds representations that can be differentiated from
each other, capturing important differences between representational states. The
representations that the asynchronous mechanism builds also can be easily compared and commonalities between disparate states can be extracted by analogical
processes, allowing for generalization and feature discovery. In fact, an analogical
(i.e., graph) matcher has been built using the asynchronous mechanism [7]. Variants
of the model need to be explored. This paper only outlines the essentials of the architecture. Synchronous dynamic binding models were partly inspired by work in
neuroscience. Hopefully the asynchronous dynamic binding model will now inspire
neuroscience researchers. Some evidence for rate-based firing (spatially based) neural codes has been revisited and viewed as consistent with more complex temporal
codes [1]; perhaps evidence for synchrony can be subjected to more sophisticated
analyses and be better construed as evidence for the asynchronous mechanism.
Acknowledgments
This work was supported by the Office of Naval Research under the National Defense
Science and Engineering Graduate Fellowship Program. I would like to thank John
Hummel for his helpful comments.
44
B. C. Love
References
[1] M. Abeles, H. Bergman, E. Margalit, and E. Vaadia. Spatiotemporal firing
patterns in the frontal cortex of behaving monkeys. Journal of Neurophysiology,
70:1629-1638, 1993.
[2] E. Bienenstock. Composition. In A. Aertsen and V. Braitenberg, editors, Brain
Theory: Biological Basis and Computational Principles. Elsevier, New York,
1996.
[3] D. Gentner and A. B. Markman. Analogy-watershed or waterloo? structural
alignment and the development of connectionist models of analogy. In S. J.
Hanson, J. D. Cowan, and C. L. Giles, editors, Advances in Neural Information
Processing Systems 5, pages 855-862. Morgan Kaufmann Publishers, San Mateo, CA, 1993.
[4] C. M. Gray and W. Singer. Stimulus specific neuronal oscillations in orientation
columns of cat visual cortex. Proceedings of the National Academy of Sciences,
USA, 86:1698-1702, 1989.
[5] J. E. Hummel and I. Biederman. Dynamic binding in a neural network for
shape recognition. Psychological Review, 99:480-517, 1992.
[6] D.R.J. Laming. Information theory of choice reaction time. Oxford University
Press, New York, 1968.
[7] B. C. Love. Asynchronous connectionist binding. (Under Review), 1998.
[8] Y. Miyashita and H. S. Chang. Neuronal correlate of pictorial short-term
memory in primate temporal cortex. Nature, 331:68-70, 1988.
[9] J. Pollack. Recursive distributed representations. Artificial Intelligence, 46:77-105, 1990.
[10] W. Rall. Dendritic locations of synapses and possible mechanisms for the monosynaptic EPSP in motoneurons. Journal of Neurophysiology, 30:1169-1193, 1967.
[11] R. Ratcliff and G. McKoon. Speed and accuracy in the processing of false
statements about semantic information. Journal of Experimental Psychology:
Learning, Memory, & Cognition, 8:16-36, 1989.
[12] L. Shastri and V. Ajjanagadde. From simple associations to systematic reasoning: A connectionist representation of rules, variables, and dynamic binding
using temporal synchrony. Behavioral and Brain Sciences, 16:417-494, 1993.
[13] W. Softky. Fine analog coding minimizes information transmission. Neural
Networks, 9:15-24, 1996.
[14] A. C. Tang and T. J. Sejnowski. An ecological approach to the neural code. In Proceedings of the Nineteenth Annual Conference of the Cognitive Science Society, page 852, Mahwah, NJ, 1996. Erlbaum.
[15] A. Treisman and H. Schmidt. Illusory conjunctions in the perception of objects.
Cognitive Psychology, 14:107-141, 1982.
[16] E. Vaadia, I. Haalman, M. Abeles, and H. Bergman. Dynamics of neuronal interactions in monkey cortex in relation to behavioral events. Nature, 373:515-518, 1995.
[17] C. von der Malsburg. The correlation theory of brain function. Technical Report 81-2, Max-Planck-Institut for Biophysical Chemistry, Göttingen, Germany, 1981.
687 | 1,629 | Learning to estimate scenes from images
William T. Freeman and Egon C. Pasztor
MERL , Mitsubishi Electric Research Laboratory
201 Broadway; Cambridge, MA 02139
freeman@merl.com, pasztor@merl.com
Abstract
We seek the scene interpretation that best explains image data.
For example, we may want to infer the projected velocities (scene)
which best explain two consecutive image frames (image). From
synthetic data , we model the relationship between image and scene
patches , and between a scene patch and neighboring scene patches.
Given a new image, we propagate likelihoods in a Markov network
(ignoring the effect of loops) to infer the underlying scene. This
yields an efficient method to form low-level scene interpretations.
We demonstrate the technique for motion analysis and estimating
high resolution images from low-resolution ones.
1
Introduction
There has been recent interest in studying the statistical properties of the visual
world. Olshausen and Field [23] and Bell and Sejnowski [2] have derived V1-like receptive fields from ensembles of images; Simoncelli and Schwartz [30] account for contrast normalization effects by redundancy reduction. Li and Atick [1] explain retinal color coding by information processing arguments. Various research groups have developed realistic texture synthesis methods by studying the response statistics of V1-like multi-scale, oriented receptive fields [12, 7, 33, 29]. These methods
help us understand the early stages of image representation and processing in the
brain.
Unfortunately, they don't address how a visual system might interpret images, i.e.,
estimate the underlying scene. In this work, we study the statistical properties of
a labelled visual world , images together with scenes, in order to infer scenes from
images. The image data might be single or multiple frames; the scene quantities
to be estimated could be projected object velocities, surface shapes, reflectance
patterns, or colors.
We ask: can a visual system correctly interpret a visual scene if it models (1)
the probability that any local scene patch generated the local image, and (2) the
probability that any local scene is the neighbor to any other? The first probabilities
allow making scene estimates from local image data, and the second allow these
local estimates to propagate. This leads to a Bayesian method for low level vision
problems, constrained by Markov assumptions. We describe this method, and show
it working for two low-level vision problems.
2
Markov networks for scene estimation
First, we synthetically generate images and their underlying scene representations,
using computer graphics. The synthetic world should typify the visual world in
which the algorithm will operate.
For example, for the motion estimation problem of Sect. 3, our training images were
irregularly shaped blobs, which could occlude each other, moving in randomized
directions at speeds up to 2 pixels per frame. The contrast values of the blobs and
the background were randomized. The image data were the concatenated image
intensities from two successive frames of an image sequence. The scene data were
the velocities of the visible objects at each pixel in the two frames.
Second, we place the image and scene data in a Markov network [24]. We break
the images and scenes into localized patches where image patches connect with underlying scene patches; scene patches also connect with neighboring scene patches.
The neighbor relationship can be with regard to position, scale, orientation, etc.
For the motion problem, we represented both the images and the velocities in 4-level Gaussian pyramids [6], to efficiently communicate across space. Each scene
patch then additionally connects with the patches at neighboring resolution levels.
Figure 2 shows the multiresolution representation (at one time frame) for images
and scenes. 1
Third, we propagate probabilities. Weiss showed the advantage of belief propagation
over regularization methods for several 1-d problems [31]; we apply related methods
to our 2-d problems. Let the ith and jth image and scene patches be y_i and x_j, respectively. For the MAP estimate [3] of the scene data,² we want to find argmax_{x_1,...,x_N} P(x_1, x_2, ..., x_N | y_1, y_2, ..., y_M), where N and M are the number of scene and image patches. Because the joint probability is simpler to compute, we find, equivalently, argmax_{x_1,...,x_N} P(x_1, x_2, ..., x_N, y_1, y_2, ..., y_M).
The conditional independence assumptions of the Markov network let us factorize
the desired joint probability into quantities involving only local measurements and
calculations [24, 32]. Consider the two-patch system of Fig. 1. We can factorize
P(x_1, x_2, y_1, y_2) in three steps: (1) P(x_1, x_2, y_1, y_2) = P(x_2, y_1, y_2 | x_1) P(x_1) (by elementary probability); (2) P(x_2, y_1, y_2 | x_1) = P(y_1 | x_1) P(x_2, y_2 | x_1) (by conditional
¹To maintain the desired conditional independence relationships, we appended the image data to the scenes. This provided the scene elements with image contrast information, which they would otherwise lack.
²Related arguments follow for the MMSE or other estimators.
independence); (3) P(x_2, y_2 | x_1) = P(x_2 | x_1) P(y_2 | x_2) (by elementary probability and the Markov assumption). To estimate just x_1 at node 1, the argmax over x_2 becomes a max and then slides over constants, giving terms involving only local computations at each node:

argmax_{x_1} max_{x_2} P(x_1, x_2, y_1, y_2) = argmax_{x_1} [ P(x_1) P(y_1|x_1) max_{x_2} [ P(x_2|x_1) P(y_2|x_2) ] ].    (1)
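A minimal numerical sketch of Eq. 1 on the two-patch network of Fig. 1 (the probability tables here are made-up assumptions) shows that folding the max over x_2 into a local message gives the same MAP x_1 as brute force:

import numpy as np

np.random.seed(0)
S = 4                                              # scene states per node
Px1 = np.random.dirichlet(np.ones(S))              # P(x1)
Py1_x1 = np.random.dirichlet(np.ones(3), size=S)   # P(y1|x1), 3 image codes
Px2_x1 = np.random.dirichlet(np.ones(S), size=S)   # P(x2|x1)
Py2_x2 = np.random.dirichlet(np.ones(3), size=S)   # P(y2|x2)
y1, y2 = 1, 2                                      # observed image codes

# message from node 2 to node 1: max_{x2} P(x2|x1) P(y2|x2)
msg21 = (Px2_x1 * Py2_x2[:, y2][None, :]).max(axis=1)
x1_map = int(np.argmax(Px1 * Py1_x1[:, y1] * msg21))

# brute force over (x1, x2) agrees
joint = (Px1[:, None] * Py1_x1[:, y1][:, None]
         * Px2_x1 * Py2_x2[:, y2][None, :])
assert x1_map == int(np.unravel_index(joint.argmax(), joint.shape)[0])
print("MAP x1 =", x1_map)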
This factorization generalizes to any network structure without loops. We use a
different factorization at each scene node: we turn the initial joint probability into
a conditional by factoring out that node's prior, P(Xj) , then proceeding analogously
to the example above. The resulting factorized computations give local propagation
rules, similar to those of [24, 32]: Each node, j, receives a message from each neighbor, k, which is an accumulated likelihood function, L_kj = P(y_k ... y_z | x_j), where y_k ... y_z are all image nodes that lie at or beyond scene node k, relative to scene node j. At each iteration, more image nodes y enter that likelihood function. After each iteration, the MAP estimate at node j is argmax_{x_j} P(x_j) P(y_j | x_j) Π_k L_kj, where k runs over all scene-node neighbors of node j. We calculate L_kj from:

L_kj = max_{x_k} P(x_k | x_j) P(y_k | x_k) Π_{l≠j} L'_lk,    (2)

where L'_lk is L_lk from the previous iteration. The initial L'_lk's are 1. Using the
Figure 1: Markov network nodes used in example.
factorization rules described above, one can verify that the local computations will
compute argmax_{x_1,...,x_N} P(x_1, x_2, ..., x_N | y_1, y_2, ..., y_M), as desired. To learn the network parameters, we measure P(x_j), P(y_j | x_j), and P(x_k | x_j) directly from the
synthetic training data.
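A minimal sketch of the propagation rule in Eq. 2 on a simple chain of scene nodes follows (the data layout and random tables are assumptions, not the paper's code): every node repeatedly sends each neighbor an accumulated-likelihood message, and the MAP read-out combines the node's prior, local likelihood, and incoming messages.

import numpy as np

def propagate(prior, lik, compat, n_iters=5):
    """prior[j], lik[j]: length-S arrays; compat[(k, j)][x_k, x_j] = P(x_k|x_j)."""
    N, S = len(prior), len(prior[0])
    L = {(k, j): np.ones(S) for k in range(N) for j in range(N) if abs(k - j) == 1}
    for _ in range(n_iters):
        newL = {}
        for (k, j) in L:
            prod = np.ones(S)                 # messages into k, excluding j
            for l in (k - 1, k + 1):
                if 0 <= l < N and l != j:
                    prod *= L[(l, k)]
            # L_kj(x_j) = max_{x_k} P(x_k|x_j) P(y_k|x_k) prod(x_k)
            newL[(k, j)] = (compat[(k, j)] * (lik[k] * prod)[:, None]).max(axis=0)
        L = newL
    return [int(np.argmax(prior[j] * lik[j] *
                          np.prod([L[(l, j)] for l in (j - 1, j + 1)
                                   if 0 <= l < N], axis=0)))
            for j in range(N)]

np.random.seed(1)
N, S = 3, 4
prior = [np.random.dirichlet(np.ones(S)) for _ in range(N)]
lik = [np.random.rand(S) for _ in range(N)]    # P(y_j|x_j) at the observed y_j
compat = {(k, j): np.random.dirichlet(np.ones(S), size=S).T
          for k in range(N) for j in range(N) if abs(k - j) == 1}
print(propagate(prior, lik, compat))           # MAP scene state at each node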
If the network contains loops, the above factorization does not hold. Both learning
and inference then require more computationally intensive methods [15]. Alternatively, one can use multi-resolution quad-tree networks [20], for which the factorization rules apply, to propagate information spatially. However, this gives results
with artifacts along quad-tree boundaries , statistical boundaries in the model not
present in the real problem. We found good results by including the loop-causing
connections between adjacent nodes at the same tree level but applying the factorized propagation rules, anyway. Others have obtained good results using the same
approach for inference [8, 21, 32]; Weiss provides theoretical arguments why this
works for certain cases [32].
3
Discrete Probability Representation (motion example)
We applied the training method and propagation rules to motion estimation, using
a vector code representation [11] for both images and scenes. We wrote a tree-structured vector quantizer to code 4-by-4-pixel-by-2-frame blocks of image data
for each pyramid level into one of 300 codes for each level. We also coded scene
patches into one of 300 codes.
During training, we presented approximately 200,000 examples of irregularly shaped
moving blobs, some overlapping, of a contrast with the background randomized
to one of 4 values. Using co-occurrence histograms, we measured the statistical relationships that embody our algorithm: P(x), P(y|x), and P(x_n|x), for scene x_n neighboring scene x.
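A minimal sketch of this counting step (the data layout is an assumption, not the paper's code): with vector-quantized training pairs, P(x), P(y|x), and P(x_n|x) are just normalized co-occurrence histograms over code indices.

from collections import Counter

def learn_tables(samples, neighbor_pairs):
    """samples: (scene_code, image_code) pairs; neighbor_pairs:
    (neighbor_scene_code, scene_code) pairs from the same training images."""
    px, pyx, pnn, denom = Counter(), Counter(), Counter(), Counter()
    for x, y in samples:
        px[x] += 1
        pyx[(y, x)] += 1
    for xn, x in neighbor_pairs:
        pnn[(xn, x)] += 1
        denom[x] += 1
    total = sum(px.values())
    P_x = {x: c / total for x, c in px.items()}
    P_y_given_x = {(y, x): c / px[x] for (y, x), c in pyx.items()}
    P_xn_given_x = {(xn, x): c / denom[x] for (xn, x), c in pnn.items()}
    return P_x, P_y_given_x, P_xn_given_x

# toy usage with three scene codes and two image codes
P_x, P_y_x, P_xn_x = learn_tables([(0, 1), (0, 1), (1, 0), (2, 1)],
                                  [(0, 1), (1, 0), (2, 0)])
print(P_x)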
Figure 2 shows an input test image, (a) before and (b) after vector quantization. The
true underlying scene, the desired output, is shown (c) before and (d) after vector
quantization. Figure 3 shows six iterations of the algorithm (Eq. 2) as it converges
to a good estimate for the underlying scene velocities. The local probabilities we
learned (P(x), P(y|x), and P(x_n|x)) lead to figure/ground segmentation, aperture
problem constraint propagation, and filling-in (see caption).
Figure 2: (a) First of two frames of image data (in Gaussian pyramid), and (b) vector quantized. (c) The optical flow scene information, and (d) vector quantized. Large arrow added to show small vectors' orientation.
4
Density Representation (super-resolution example)
For super-resolution, the input "image" is the high-frequency components (sharpest
details) of a sub-sampled image. The "scene" to be estimated is the high-frequency
components of the full-resolution image, Fig. 4.
We improved our method for this second problem. A faithful image representation
requires so many vector codes that it becomes infeasible to measure the prior and
co-occurrence statistics (note the unfaithful fit of Fig. 2). On the other hand, a discrete
representation allows fast propagation. We developed a hybrid method that allows
both good fitting and fast propagation.
We describe the image and scene patches as vectors in a continuous space, and
first modelled the probability densities, P(x), P(y, x), and P(x_n, x), as Gaussian mixtures [4]. (We reduced the dimensionality somewhat by principal components analysis [4].) We then evaluated the prior and conditional distributions of Eq. 2 only at a
discrete set of scene values, different for each node. (This sample-based approach
relates to [14, 7]). The scenes were a sampling of those scenes which render to the
image at that node. This focuses the computation on the locally feasible scene interpretations. P(x_k|x_j) in Eq. 2 becomes the ratio of the Gaussian mixtures P(x_k, x_j) and P(x_j), evaluated at the scene samples at nodes k and j, respectively. P(y_k|x_k) is P(y_k, x_k)/P(x_k) evaluated at the scene samples of node k.
To select the scene samples, we could condition the mixture P(y, x) on the y observed at each node, and sample x's from the resulting mixture of Gaussians. We
obtained somewhat better results by using the scenes from the training set whose
Figure 3: The most probable scene code for Fig. 2b at first 6 iterations of Bayesian
belief propagation. (a) Note initial motion estimates occur only at edges. Due to
the "aperture problem", initial estimates do not agree. (b) Filling-in of motion
estimate occurs. Cues for figure/ground determination may include edge curvature,
and information from lower resolution levels. Both are included implicitly in the
learned probabilities. (c) Figure/ground still undetermined in this region of low
edge curvature. (d) Velocities have filled-in, but do not yet all agree. (e) Velocities
have filled-in, and agree with each other and with the correct velocity direction,
shown in Fig. 2.
images most closely matched the image observed at that node (thus avoiding one
Gaussian mixture modeling step).
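A minimal sketch of the density-ratio step follows, using scikit-learn as one plausible tool (the paper fits its own Gaussian mixtures; the data and component count here are assumptions): fit mixtures to the joint and marginal vectors, then evaluate P(x_k|x_j) at a discrete set of scene samples as the ratio P(x_k, x_j)/P(x_j).

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
xj = rng.normal(size=(500, 2))
xk = xj + 0.1 * rng.normal(size=(500, 2))     # correlated neighbor scenes

gm_joint = GaussianMixture(4, random_state=0).fit(np.hstack([xk, xj]))
gm_marg = GaussianMixture(4, random_state=0).fit(xj)

def cond_table(xk_samples, xj_samples):
    """Entry [a, b] ~ P(x_k = sample a | x_j = sample b)."""
    A, B = len(xk_samples), len(xj_samples)
    pairs = np.array([np.hstack([xk_samples[a], xj_samples[b]])
                      for a in range(A) for b in range(B)])
    log_joint = gm_joint.score_samples(pairs).reshape(A, B)
    log_marg = gm_marg.score_samples(np.array(xj_samples))
    return np.exp(log_joint - log_marg[None, :])

tbl = cond_table(list(xk[:5]), list(xj[:5]))
print(tbl.shape)    # (5, 5)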
Using 40 scene samples per node, setting up the P(x_k|x_j) matrix for each link took
several minutes for 96x96 pixel images. The scene (high resolution) patch size was
3x3; the image (low resolution) patch size was 7x7. We didn't feel long-range scene
propagation was critical here, so we used a flat, not a pyramid, node structure.
Once the matrices were computed, the iterations of Eq. 2 were completed within
seconds.
Figure 4 shows the results. The training images were random shaded and painted
blobs such as the test image shown. After 5 iterations, the synthesized maximum
likelihood estimate of the high resolution image is visually close to the actual high
frequency image (top row). (Including P(x) gave too flat results, we suspect due
to errors modeling that highly peaked distribution). The dominant structures are
all in approximately the correct position. This may enable high quality zooming of
low-resolution images, attempted with limited success by others [28, 25].
5
Discussion
In related applications of Markov random fields to vision, researchers typically use
relatively simple, heuristically derived expressions (rather than learned) for the likelihood function P(y|x) or for the spatial relationships in the prior term on scenes
[Figure 4 panels. Top row: sub-sampled image; zoomed high freqs. of sub-sampled image (algorithm input); high freqs. of full-detail image (desired output); full-detail image. Bottom row: iterations 0, 1, and 5 (output); image without and with the computed output.]
Figure 4: Superresolution example. Top row: Input and desired output (contrast
normalized, only those orientations around vertical). Bottom row: algorithm output and comparison of image with and without estimated high vertical frequencies.
[10, 26, 9, 17, 5, 20, 19, 27]. Some researchers have applied related learning approaches to low-level vision problems, but restricted themselves to linear models
[18, 13]. For other learning or constraint propagation approaches in motion analysis, see [20, 22, 16].
In summary, we have developed a principled and practical learning based method
for low-level vision problems. Markov assumptions lead to factorizing the posterior
probability. The parameters of our Markov random field are probabilities specified
by the training data. For our two examples (programmed in C and Matlab, respectively), the training can take several hours but the running takes only several minutes. Scene estimation by Markov networks may be useful for other low-level vision
problems, such as extracting intrinsic images from line drawings or photographs.
Acknowledgements We thank E. Adelson, J. Tenenbaum, P. Viola, and Y. Weiss for
helpful discussions.
References
[1] J. J. Atick, Z. Li, and A. N. Redlich. Understanding retinal color coding from first principles. Neural Computation, 4:559-572, 1992.
[2] A. J. Bell and T. J. Sejnowski. The independent components of natural scenes are edge filters. Vision Research, 37(23):3327-3338, 1997.
[3] J. O. Berger. Statistical decision theory and Bayesian analysis. Springer, 1985.
[4] C. M. Bishop. Neural networks for pattern recognition. Oxford, 1995.
[5] M. J. Black and P. Anandan. A framework for the robust estimation of optical flow. In Proc. 4th Intl. Conf. Computer Vision, pages 231-236. IEEE, 1993.
[6] P. J. Burt and E. H. Adelson. The Laplacian pyramid as a compact image code. IEEE Trans. Comm., 31(4):532-540, 1983.
[7] J. S. DeBonet and P. Viola. Texture recognition using a non-parametric multi-scale
statistical model. In Proc. IEEE Computer Vision and Pattern Recognition, 1998.
[8] B. J. Frey. Bayesian networks for pattern classification. MIT Press, 1997.
[9] D. Geiger and F. Girosi. Parallel and deterministic algorithms from MRF's: surface reconstruction. IEEE Pattern Analysis and Machine Intelligence, 13(5):401-412, May 1991.
[10] S. Geman and D. Geman. Stochastic relaxation, Gibbs distribution, and the Bayesian restoration of images. IEEE Pattern Analysis and Machine Intelligence, 6:721-741, 1984.
[11] R. M. Gray, P. C. Cosman, and K. L. Oehler. Incorporating visual factors into vector
quantizers for image compression. In A. B. Watson, editor, Digital images and human
vision. MIT Press, 1993.
[12] D. J. Heeger and J. R. Bergen. Pyramid-based texture analysis/synthesis. In ACM SIGGRAPH, pages 229-236, 1995. In Computer Graphics Proceedings, Annual Conference Series.
[13] A. C. Hurlbert and T. A. Poggio. Synthesizing a color algorithm from examples. Science, 239:482-485, 1988.
[14] M. Isard and A. Blake. Contour tracking by stochastic propagation of conditional
density. In Proc. European Conf. on Computer Vision, pages 343-356, 1996.
[15] M. I. Jordan, editor. Learning in graphical models. MIT Press, 1998.
[16] S. Ju, M. J. Black, and A. D. Jepson. Skin and bones: Multi-layer, locally affine, optical flow and regularization with transparency. In Proc. IEEE Computer Vision and Pattern Recognition, pages 307-314, 1996.
[17] D. Kersten. Transparency and the cooperative computation of scene attributes. In
M. S. Landy and J. A. Movshon , editors, Computational Models of Visual Processing,
chapter 15. MIT Press, Cambridge, MA , 1991.
[18] D. Kersten, A. J. O'Toole, M. E. Sereno, D. C. Knill, and J. A. Anderson. Associative learning of scene parameters from images. Applied Optics, 26(23):4999-5006, 1987.
[19] D. Knill and W. Richards, editors. Perception as Bayesian inference. Cambridge
Univ. Press, 1996.
[20] M. R. Luettgen, W. C. Karl, and A. S. Willsky. Efficient multiscale regularization with applications to the computation of optical flow. IEEE Trans. Image Processing, 3(1):41-64, 1994.
[21] D. J. C. MacKay and R. M. Neal. Good error-correcting codes based on very sparse
matrices. In Cryptography and coding - LNCS 1025, 1995.
[22] S. Nowlan and T. J. Sejnowski. A selection model for motion processing in area MT of primates. J. Neuroscience, 15:1195-1214, 1995.
[23] B. A. Olshausen and D. J. Field. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381:607-609, 1996.
[24] J. Pearl. Probabilistic reasoning in intelligent systems: networks of plausible inference.
Morgan Kaufmann, 1988.
[25] A. Pentland and B. Horowitz. A practical approach to fractal-based image compression. In A. B. Watson, editor, Digital images and human vision. MIT Press, 1993.
[26] T. Poggio, V. Torre, and C. Koch. Computational vision and regularization theory. Nature, 317:314-319, 1985.
[27] E. Saund. Perceptual organization of occluding contours of opaque surfaces. In CVPR
'98 Workshop on Perceptual Organization, Santa Barbara, CA, 1998.
[28] R. R. Schultz and R. L. Stevenson. A Bayesian approach to image expansion for improved definition. IEEE Trans. Image Processing, 3(3):233-242, 1994.
[29] E. P. Simoncelli. Statistical models for images: Compression, restoration and synthesis. In 31st Asilomar Conf. on Sig., Sys. and Computers, Pacific Grove, CA, 1997.
[30] E. P. Simoncelli and O. Schwartz. Modeling surround suppression in V1 neurons with a statistically-derived normalization model. In Adv. in Neural Information Processing Systems, volume 11, 1999.
[31] Y. Weiss. Interpreting images by propagating Bayesian beliefs. In Adv. in Neural Information Processing Systems, volume 9, pages 908-915, 1997.
[32] Y. Weiss. Belief propagation and revision in networks with loops. Technical Report 1616, AI Lab Memo, MIT, Cambridge, MA 02139, 1998.
[33] S. C. Zhu and D. Mumford. Prior learning and Gibbs reaction-diffusion. IEEE Pattern Analysis and Machine Intelligence, 19(11), 1997.
688 | 163 | 502
LINKS BETWEEN MARKOV MODELS AND
MULTILAYER PERCEPTRONS
H. Bourlard(†,‡) & C.J. Wellekens(†)
(†) Philips Research Laboratory, Brussels, B-1170 Belgium.
(‡) Intl. Computer Science Institute, Berkeley, CA 94704 USA.
ABSTRACT
Hidden Markov models are widely used for automatic speech recognition. They inherently incorporate the sequential character of the
speech signal and are statistically trained. However, the a-priori
choice of the model topology limits their flexibility. Another drawback of these models is their weak discriminating power. Multilayer
perceptrons are now promising tools in the connectionist approach
for classification problems and have already been successfully tested
on speech recognition problems. However, the sequential nature of
the speech signal remains difficult to handle in that kind of machine. In this paper, a discriminant hidden Markov model is defined and it is shown how a particular multilayer perceptron with
contextual and extra feedback input units can be considered as a
general form of such Markov models.
INTRODUCTION
Hidden Markov models (HMM) [Jelinek, 1976; Bourlard et al., 1985] are widely used
for automatic isolated and connected speech recognition. Their main advantages
lie in the ability to take account of the time sequential order and variability of
speech signals. However, the a-priori choice of a model topology (number of states,
probability distributions and transition rules) limits the flexibility of the HMM's,
in particular speech contextual information is difficult to incorporate. Another
drawback of these models is their weak discriminating power. This fact is clearly
illustrated in [Bourlard & Wellekens, 1989; Waibel et al., 1988] and several solutions
have recently been proposed in [Bahl et al., 1986; Bourlard & Wellekens, 1989;
Brown, 1987].
The multilayer perceptron (MLP) is now a familiar and promising tool in connectionist approach for classification problems [Rumelhart et al., 1986; Lippmann,
1987] and has already been widely tested on speech recognition problems [Waibel et al., 1988; Watrous & Shastri, 1987; Bourlard & Wellekens, 1989]. However, the sequential nature of the speech signal remains difficult to handle with MLP. It is
shown here how an MLP with contextual and extra feedback input units can be
considered as a form of discriminant HMM.
STOCHASTIC MODELS
TRAINING CRITERIA
Stochastic speech recognition is based on the comparison of an utterance to be
recognized with a set of probabilistic finite state machines known as HMM. These are trained such that the probability P(W_i | X) that model W_i has produced the
associated utterance X must be maximized, but the parameter space which this
optimization is performed over makes the difference between independently trained
models and discriminant ones.
Indeed, the probability P(W_i | X) can be written as

P(W_i | X) = P(X | W_i) P(W_i) / P(X).    (1)
In a recognition phase, P(X) may be considered as a constant since the model
parameters are fixed but, in a training phase, this probability depends on the parameters of all possible models. Taking account of the fact that the models are
mutually exclusive and if Λ represents the parameter set (for all possible models), (1) may then be rewritten as:

P(W_i | X, Λ) = P(X | W_i, Λ) P(W_i) / Σ_k P(X | W_k, Λ) P(W_k).    (2)
Maximization of P(W_i | X, Λ) as given by (2) is usually simplified by restricting it to the subspace of the W_i parameters. This restriction leads to the Maximum Likelihood Estimators (MLE). The summation term in the denominator is constant over the parameter space of W_i and thus maximization of P(X | W_i, Λ) implies that of (2). A language model provides the value of P(W_i) independently
of the acoustic decoding [Jelinek, 1976].
On the other hand, maximization of P(W_i | X, Λ) with respect to the whole parameter space (i.e. the parameters of all models W_1, W_2, ...) leads to discriminant models since it implies that the contribution of P(X | W_i, Λ) P(W_i) should be enhanced while that of the rival models, represented by

Σ_{k≠i} P(X | W_k, Λ) P(W_k),

should be reduced. This maximization with respect to the whole parameter space has been shown equivalent to the maximization of Mutual Information (MMI) between a model and a vector sequence [Bahl et al., 1986; Brown, 1987].
STANDARD HIDDEN MARKOV MODELS
In the regular discrete HMM, the acoustic vectors (e.g. corresponding to 10 ms
speech frames) are generally quantized in a front-end processor where each one is
replaced by the closest (e.g. according to an Euclidean norm) prototype vector
y_i selected in a predetermined finite set Y of cardinality I. Let Q be a set of K different states q(k), with k = 1, ..., K. Markov models are then constituted by the association (according to a predefined topology) of some of these states. If HMMs are trained along the MLE criterion, the parameters of the models (defined hereunder) must be optimized for maximizing P(X|W), where X is a training sequence of quantized acoustic vectors x_n ∈ Y, with n = 1, ..., N, and W is its associated Markov model made up of L states q_l ∈ Q with l = 1, ..., L. Of course, the same state may occur several times with different indices l, so that L ≠ K. Let us denote by q_l^n the presence on state q_l at a given time n ∈ [1, N]. Since events q_l^n are mutually exclusive, probability P(X|W) can be written for any arbitrary n:
P(X|W) = Σ_{l=1}^{L} P(q_l^n, X | W),    (3)
where P(q_l^n, X | W) thus denotes the probability that X is produced by W while associating x_n with state q_l. Maximization of (3) can be worked out by the classical forward-backward recurrences of the Baum-Welch algorithm [Jelinek, 1976; Bourlard et al., 1985].
Maximization of P(X|W) is also usually approximated by the Viterbi criterion. It can be viewed as a simplified version of the MLE criterion where, instead of taking account of all possible state sequences in W capable of producing X, one merely considers the most probable one. To make all possible paths apparent, (3) can also be rewritten as

P(X|W) = Σ_{l_1=1}^{L} ... Σ_{l_N=1}^{L} P(q_{l_1}^1, ..., q_{l_N}^N, X | W),
and the explicit formulation of the Viterbi criterion is obtained by replacing all summations by a "max" operator. Probability (3) is then approximated by:

P̄(X|W) = max_{l_1,...,l_N} P(q_{l_1}^1, ..., q_{l_N}^N, X | W),    (4)
and can be calculated by the classical dynamic time warping (DTW) algorithm
[Bourlard et al., 1985]. In that case, each training vector is then uniquely associated
with only one particular transition.
In both cases (MLE and Viterbi), it can be shown that, according to classical hypotheses, P(X|W) and P̄(X|W) are estimated from the set of local parameters p[q(l), y_i | q⁽⁻⁾(k), W], for i = 1, ..., I and k, l = 1, ..., K. Notations q⁽⁻⁾(k) and q(l) denote states ∈ Q observed at two consecutive instants. In the particular case of the Viterbi criterion, these parameters are estimated by:

p[q(l), y_i | q⁽⁻⁾(k), W] = n_ikl / Σ_{i'=1}^{I} Σ_{l'=1}^{K} n_i'kl',   ∀i ∈ [1, I], ∀k, l ∈ [1, K],    (5)
where n_ikl denotes the number of times each prototype vector y_i has been associated with a particular transition from q(k) to q(l) during the training. However, if
the models are trained along this formulation of the Viterbi algorithm, no discrimination is taken into account. For instance, it is interesting to observe that the local
probability (5) is not the suitable measure for the labeling of a prototype vector
y_i, i.e. to find the most probable state given a current input vector and a specified
previous state. Indeed, the decision should ideally be based on the Bayes rule. In
that case, the most probable state q(l_opt) is defined by

l_opt = argmax_l p[q(l) | y_i, q⁽⁻⁾(k)],    (6)
and not on the basis of (5).
It can easily be proved that the estimates of the Bayes probabilities in (6) are:

p̂[q(l) | y_i, q⁽⁻⁾(k)] = n_ikl / Σ_{l'=1}^{K} n_ikl'.    (7)
In the last section, it is shown that these values can be generated at the output of
a particular MLP.
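A minimal numerical sketch (not from the paper; the counts are random stand-ins) contrasts the two estimators computed from the same counts n[i, k, l]: Eq. 5 normalizes over everything leaving state k, whereas the Bayes estimate of Eq. 7 normalizes over destination states given the observation, so it sums to one per (y_i, q(k)) pair, which is the quantity needed for labeling.

import numpy as np

rng = np.random.default_rng(0)
I, K = 6, 3
n = rng.integers(1, 10, size=(I, K, K)).astype(float)   # n_ikl

p_joint = n / n.sum(axis=(0, 2), keepdims=True)  # Eq. 5: p[q(l), y_i | q(k)]
p_bayes = n / n.sum(axis=2, keepdims=True)       # Eq. 7: p[q(l) | y_i, q(k)]

print(p_joint[:, 1, :].sum())   # 1.0: sums to one over (i, l) for fixed k
print(p_bayes[2, 1, :].sum())   # 1.0: sums to one over l for fixed (i, k)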
DISCRIMINANT HMM
For quantized acoustic vectors and Viterbi criterion, an alternative HMM using
discriminant local probabilities can also be described. Indeed, as the correct criterion should be based on (1), comparing with (4), the "Viterbi formulation" of this probability is

P̄(W | X) = max_{l_1,...,l_N} P(q_{l_1}^1, ..., q_{l_N}^N, W | X).    (8)
Expression (8) clearly puts the best path into evidence. The right hand side factorizes into

P(q_{l_1}^1, ..., q_{l_N}^N | X) · P(W | q_{l_1}^1, ..., q_{l_N}^N, X)
and suggests two separate steps for the recognition. The first factor represents the
acoustic decoding in which the acoustic vector sequence is converted into a sequence
of states. Then, the second factor represents a phonological and lexical step: once
the sequence of states is known, the model W associated with X can be found from
the state sequence without an explicit dependence on X, so that

P(W | q_{l_1}^1, ..., q_{l_N}^N, X) = P(W | q_{l_1}^1, ..., q_{l_N}^N).
For example, if the states represent phonemes, this probability must be estimated
from phonological knowledge of the vocabulary once for all in a separate process
without any reference to the input vector sequence.
On the contrary, P(q_{l_1}^1, ..., q_{l_N}^N | X) is immediately related to the discriminant local probabilities and may be factorized in

P(q_{l_1}^1, ..., q_{l_N}^N | X) = Π_{n=1}^{N} P(q_{l_n}^n | q_{l_1}^1, ..., q_{l_{n-1}}^{n-1}, X).    (9)
Now, each factor of (9) may be simplified by relaxing the conditional constraints.
More specifically, the factors of (9) are assumed dependent on the previous state
only and on a signal window of length 2p + 1 centered around the current acoustic
vector. The current expression of these local contributions becomes

P(q_{l_n}^n | q_{l_{n-1}}^{n-1}, X_{n-p}^{n+p}),    (10)

where input contextual information is now taken into account, X_m^n denoting the vector sequence x_m, x_{m+1}, ..., x_n. If input contextual information is neglected (p =
0), equation (10) represents nothing else but the discriminant local probability (7)
and is at the root of a discriminant discrete HMM. Of course, as for (7), these
local probabilities could also be simply estimated by counting on the training set,
but the exponential increase of the number of parameters with the width 2p + 1 of
the contextual window would require an exceedingly large storage capacity as an
excessive size of training data to obtain statistically significant parameters. It is
shown in the following section how this drawback is circumvented by using an MLP.
It is indeed proved that, for the training vectors, the optimal outputs of a recurrent
and context-sensitive MLP are the estimates of the local probabilities (10). Given
its so-called "generalization property", the MLP can then be used for interpolating
on the test set.
Of course, from the local contributions (10), P(W|X) can still be obtained by the classical one-stage dynamic programming [Ney, 1984; Bourlard et al., 1985]. Indeed, inside HMM, the following dynamic programming recurrence holds:

P(q_l | X_1^n) = max_k [ P(q_k | X_1^{n-1}) · P(q_l^n | q_k^{n-1}, X_{n-p}^{n+p}) ],    (11)

where parameter k runs over all possible states preceding q_l and P(q_l | X_1^n) denotes the cumulated best-path probability of reaching state q_l and having emitted the partial sequence X_1^n.
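A minimal sketch of the recurrence in Eq. 11 (the interface and toy probabilities are assumptions): a Viterbi-style pass where the local term is the discriminant probability of the next state given the previous state and the context window, here supplied as a lookup function.

import numpy as np

def viterbi_decode(local_prob, L, N):
    """local_prob(n, k, l) -> P(q_l^n | q_k^{n-1}, X window); returns the
    best state sequence of length N over L states."""
    score = np.zeros(L)             # log-domain cumulative best-path scores
    back = np.zeros((N, L), dtype=int)
    for n in range(N):
        cand = np.array([[score[k] + np.log(local_prob(n, k, l) + 1e-12)
                          for l in range(L)] for k in range(L)])
        back[n] = cand.argmax(axis=0)   # best predecessor k for each state l
        score = cand.max(axis=0)
    path = [int(score.argmax())]
    for n in range(N - 1, 0, -1):
        path.append(int(back[n][path[-1]]))
    return path[::-1]

# toy local probabilities favouring a left-to-right sweep through 3 states
toy = lambda n, k, l: 0.8 if l == min(2, n // 2) else 0.1
print(viterbi_decode(toy, L=3, N=6))    # [0, 0, 1, 1, 2, 2]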
RECURRENT MLP AND DISCRIMINANT HMM
Let q(k), with k = 1, ..., K, be the output units of an MLP associated with different classes (each of them corresponding to a particular state of Q) and I the number of prototype vectors y_i. Let v_i denote a particular binary input of the MLP. If no
contextual information is used, v_i is the binary representation of the index i of prototype vector y_i and, more precisely, a vector with all zero components but the i-th one equal to 1. In the case of contextual input, vector v_i is obtained by concatenating several representations of prototype vectors belonging to a given contextual window centered on a current y_i. The architecture of the resulting MLP
is then similar to NETtalk, initially described in [Sejnowski & Rosenberg, 1987] for
mapping written texts to phoneme strings. The same kind of architecture has also
been proved successful in performing the classification of acoustic vector strings
into phoneme strings, where each current vector was classified by taking account
of its surrounding vectors [Bourlard & Wellekens, 1989]. The input field is then
constituted by several groups of units, each group representing a prototype vector.
Thus, if 2p + 1 is the width of the contextual window, there are 2p + 1 groups of I
units in the input layer.
However, since each acoustic vector is classified independently of the preceding classifications in such feedforward architectures, the sequential character of the speech
signal is not modeled. The system has no short-term memory from one classification to the next one and successive classifications may be contradictory. This
phenomenon does not appear in HMM since only some state sequences are permitted by the particular topology of the model.
Let us assume that the training is performed on a sequence of N binary inputs {v_{i_1}, ..., v_{i_N}}, where each i_n represents the index of the prototype vector at time n (if no context) or the "index" of one of the I^(2p+1) possible inputs (in the case of a 2p+1 contextual window). Sequential classification must rely on the previous decisions but the final goal remains the association of the current input vectors with their own classes. An MLP achieving this task will generate, for each current input vector v_{i_n} and each class q(l), l = 1, ..., K, an output value g(i_n, k_n, l) depending on the class q(k_n) in which the preceding input vector v_{i_{n-1}} was classified. Supervision comes from the a-priori knowledge of the classification of each v_{i_n}. The training of the
MLP parameters is usually based on the minimization of a mean square criterion (LMSE) [Rumelhart et al., 1986] which, with our requirements, takes the form:

E = (1/2) Σ_{n=1}^{N} Σ_{l=1}^{K} [g(i_n, k_n, l) - d(i_n, l)]²,    (12)
where d(i_n, l) represents the target value of the l-th output associated with the input vector v_{i_n}. Since the purpose is to associate each input vector with a single class, the target outputs, for a vector v_i ∈ q(l), are:

d(i, l) = 1,
d(i, m) = 0,   ∀m ≠ l,

which can also be expressed, for each particular v_i ∈ q(l), as d(i, m) = δ_{ml}. The target outputs d(i, l) only depend on the current input vector v_i and the considered output unit, and not on the classification of the previous one. The difference between criterion (12) and that of a memoryless machine is the additional index k_n which takes account of the previous decision. Collecting all terms depending on the same indexes, (12) can thus be rewritten as:
E = (1/2) Σ_{i=1}^{J} Σ_{k=1}^{K} Σ_{l=1}^{K} Σ_{m=1}^{K} n_ikl [g(i, k, m) - d(i, m)]²,    (13)
where J = I if the MLP input is context independent and J = I^(2p+1) if a 2p+1 contextual window is used; n_ikl represents the number of times v_i has been classified
Figure 1: Recurrent and Context-Sensitive MLP (boxes denote delays). [The diagram shows input groups for the left context, current vector, and right context, plus feedback units carrying the previous output decisions.]
in q(l) while the previous vector was known to belong to class q(k). Thus, whatever
the MLP topology is, i.e. the number of its hidden layers and of units per
layer, the optimal output values gopt(i, k, m) are obtained by canceling the partial
derivative of E versus g( i, k, m). It can easily be proved that, doing so, the optimal
values for the outputs are then
g_opt(i, k, m) = n_ikm / Σ_{l=1}^{K} n_ikl.    (14)
The optimal g(i, k, m)'s obtained from the minimization of the MLP criterion are
thus the estimates of the Bayes probabilities, i.e. the discriminant local probabilities
defined by (7) if no context is used and by (10) in the contextual case.
It is important to keep in mind that these optimal values can be reached only
provided the MLP contains enough parameters and does not get stuck into a local
minimum during the training.
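A minimal numerical check (not from the paper; the counts and step size are assumptions) that minimizing criterion (13) drives unconstrained outputs to the relative frequencies of Eq. 14:

import numpy as np

rng = np.random.default_rng(0)
J = K = 3
n = rng.integers(1, 10, size=(J, K, K)).astype(float)   # n_ikl counts

g = rng.random((J, K, K))       # g[i, k, m]: one free output value per cell
for _ in range(200):
    # dE/dg(i,k,m) = (sum_l n_ikl) g(i,k,m) - n_ikm
    grad = n.sum(2, keepdims=True) * g - n
    g -= 0.1 * grad / n.sum(2, keepdims=True)   # normalized gradient step

print(np.allclose(g, n / n.sum(2, keepdims=True)))   # True: matches Eq. 14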
A convenient way to generate the g(i, k, l) is to modify the input as follows. For each v_{i_n}, an extended vector w_{i_n} = (v'_{i_n}, v_{i_n}) is formed, where v'_{i_n} is an extra input vector containing the information on the decision taken at time n-1. Since output information is fed back in the input field, such an MLP has a recurrent topology.
The final architecture of the corresponding MLP (with contextual information and
output feedback) is represented in Figure 1 and is similar in design to the net
developed in [Jordan, 1986] to produce output pattern sequences.
The main advantage of this topology, when compared with other recurrent models
proposed for sequential processing [Elman 1988, Watrous, 1987], over and above the
possible interpretation in terms of HMM, is the control of the information fed back
during the training. Indeed, since the training data consists of consecutive labeled
speech frames, the correct sequence of output states is known and the training is
supervised by providing the correct information.
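A minimal sketch of assembling the extended input of Figure 1 (the one-hot encoding and names are assumptions, not the paper's code): the feedback group carries the previous output decision, followed by 2p+1 context groups of prototype codes.

import numpy as np

def one_hot(index, size):
    v = np.zeros(size)
    v[index] = 1.0
    return v

def extended_input(codes, n, prev_class, I, K, p=1):
    """codes: prototype indices over time; returns w_n = (v'_n, context)."""
    groups = [one_hot(prev_class, K)]                 # feedback units
    for m in range(n - p, n + p + 1):                 # 2p+1 context groups
        idx = codes[min(max(m, 0), len(codes) - 1)]   # clamp at sequence ends
        groups.append(one_hot(idx, I))
    return np.concatenate(groups)

codes = [2, 0, 1, 1, 3]
w = extended_input(codes, n=2, prev_class=1, I=4, K=3, p=1)
print(w.shape)    # (K + (2p+1)*I,) = (15,)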
Replacing in (13) d(i, m) by the optimal values (14) provides a new criterion where
the target outputs depend now on the current vector, the considered output and
Links Between Markov Models and Multilayer Perceptrons
the classification of the previous vector:
E* = (1/2) Σ_{i=1}^{J} Σ_{k=1}^{K} Σ_{l=1}^{K} Σ_{m=1}^{K} n_ikl [ g(i, k, m) - n_ikm / Σ_{l'=1}^{K} n_ikl' ]²,    (15)
and it is clear (by canceling the partial derivative of E^* versus g(i, k, m)) that the lower bound for E^* is reached for the same optimal outputs as (14) but is now equal to zero, which provides a very useful control parameter during the training phase.
It is evident that these results directly follow from the minimized criterion and not
from the topology of the model. In that way, it is interesting to note that the same
optimal values (14) may also result from other criteria as, for instance, the entropy
[Hinton, 1987] or relative entropy [Solla et al., 1988] of the targets with respect to
outputs. Indeed, in the case of relative entropy, e.g., criterion (12) is changed into:

E_e = \sum_{n=1}^{N} \sum_{\ell=1}^{K} \left[ d(i_n, \ell) \ln \frac{d(i_n, \ell)}{g(i_n, k_n, \ell)} + (1 - d(i_n, \ell)) \ln \frac{1 - d(i_n, \ell)}{1 - g(i_n, k_n, \ell)} \right], \qquad (16)
and canceling its partial derivative versus g(i, k, m) yields the optimal values (14).
In that case, the optimal outputs effectively correspond now to E_{e,\min} = 0.
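A quick numerical check of this equivalence, for one output unit and toy counts (values arbitrary): both criteria are minimized by the same output value.

import numpy as np

n = np.array([3.0, 1.0, 6.0])                        # toy counts n_ikl for one fixed (i, k)
p = (n / n.sum())[0]                                 # optimal output (14) for unit l = 1
g = np.linspace(0.01, 0.99, 99)
mse = p * (g - 1.0) ** 2 + (1.0 - p) * g ** 2        # LMSE criterion
ce = -(p * np.log(g) + (1.0 - p) * np.log(1.0 - g))  # relative entropy criterion
assert abs(g[mse.argmin()] - g[ce.argmin()]) < 0.02  # same minimizer g = p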
Of course, since these results are independent of the topology of the models, they
also remain valid for linear discriminant functions but, in that case, it is not guaranteed that the optimal values (14) can be reached. However, it has to be noted
that in some particular cases, even for not linearly separable classes, these optimal
values are already obtained with linear discriminant functions (and thus with a one
layered perceptron trained according to an LMS criterion).
It is also important to point out that the same kind of recurrent MLP could also be used to estimate local probabilities of higher order Markov models where the local contributions in (9) are no longer assumed to depend on the previous state only but
also on several preceding ones. This is easily implemented by extending the input
field to the information related to these preceding classifications. Another solution
is to represent, in the same extra input vector, a weighted sum (e.g. exponentially
decreasing with time) of the preceding outputs [Jordan, 1986].
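For instance, the decaying-memory variant just described can be realized with a running average of the preceding output vectors (the decay constant below is an arbitrary illustrative choice):

import numpy as np

def update_feedback(feedback, output, decay=0.5):
    # after this update, the output from j steps back contributes with weight decay**j
    return decay * feedback + output

# usage: feedback = np.zeros(K); then, after each frame: feedback = update_feedback(feedback, g)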
CONCLUSION
Discrimination is an essential requirement in speech recognition and is not incorporated in the standard HMM. A discriminant HMM has been described and links between this new model and a recurrent MLP have been shown. Recurrence permits taking account of the sequential information in the output sequence. Moreover, input contextual information is also easily captured by extending the input field. It has finally been proved that the local probabilities of the discriminant HMM may be computed (or interpolated) by the particular MLP so defined.
References
[1] Bahl L.R., Brown P.F., de Souza P.V. & Mercer R.L. (1986). Maximum Mutual Information Estimation of Hidden Markov Model Parameters for Speech
Recognition, Proc. ICASSP-86, Tokyo, pp. 49-52,
[2] Bourlard H., Kamp Y., Ney H. & Wellekens C.J. (1985). Speaker-Dependent
Connected Speech Recognition via Dynamic Programming and Statistical
Methods, Speech and Speaker Recognition, Ed. M.R. Schroeder, KARGER,
[3] Bourlard H. & Wellekens C.J. (1989). Speech Pattern Discrimination and Multilayer Perceptrons, Computer, Speech and Language, 3, (to appear),
[4] Brown P. (1987). The Acoustic-Modeling Problem in Automatic Speech Recognition, Ph.D. thesis, Comp.Sc.Dep., Carnegie-Mellon University,
[5] Elman J.L. (1988). Finding Structure in Time, CRL Technical Report 8801, UCSD,
[6] Hinton G.E. (1987). Connectionist Learning Procedures, Technical Report
CMU-CS-87-115,
[7] Jelinek F. (1976). Continuous Recognition by Statistical Methods, Proceedings
IEEE, vol. 64, no. 4, pp. 532-555,
[8] Jordan M.L. (1986). Serial Order: A Parallel Distributed Processing Approach,
UCSD, Tech. Report 8604,
[9] Lippmann R.P. (1987). An Introduction to Computing with Neural Nets, IEEE
ASSP Magazine, vol. 4, pp. 4-22,
[10] Ney H. (1984). The use of a one-stage dynamic programming algorithm for
connected word recognition. IEEE Trans. ASSP, vol. 32, pp. 263-271,
[11] Rumelhart D.E., Hinton G.E. & Williams R.J. (1986). Learning Internal Representations by Error Propagation, Parallel Distributed Processing. Exploration
of the Microstructure of Cognition. vol. 1: Foundations, Ed. D.E. Rumelhart & J.L. McClelland, MIT Press,
[12] Sejnowski T.J. & Rosenberg C.R. (1987). Parallel Networks that Learn to Pronounce English Text. Complex Systems, vol. 1, pp. 145-168,
[13] Solla S.A., Levin E. & Fleisher M. (1988). Accelerated Learning in Layered
Neural Networks, AT&T Bell Labs. Manuscript,
[14] Waibel A., Hanazawa T., Hinton G., Shikano K. & Lang, K. (1988). Phoneme
Recognition Using Time-Delay Neural Networks, Proc. ICASSP-88, New York,
[15] Watrous R.L. & Shastri L. (1987). Learning phonetic features using connectionist networks: an experiment in speech recognition, Proceedings of the First
International Conference on Neural Networks, IV, 381-388, San Diego, CA,
689 | 1,630 | Mechanisms of generalization in perceptual learning
Zili Liu
Rutgers University, Newark
Daphna Weinshall
Hebrew University, Israel
Abstract
The learning of many visual perceptual tasks has been shown to be
specific to practiced stimuli, while new stimuli require re-learning
from scratch. Here we demonstrate generalization using a novel
paradigm in motion discrimination where learning has been previously shown to be specific. We trained subjects to discriminate
the directions of moving dots, and verified the previous results
that learning does not transfer from the trained direction to a new
one. However, by tracking the subjects' performance across time
in the new direction, we found that their rate of learning doubled.
Therefore, learning generalized in a task previously considered too
difficult for generalization. We also replicated, in the second experiment, transfer following training with "easy" stimuli.
The specificity of perceptual learning and the dichotomy between
learning of "easy" vs. "difficult" tasks were hypothesized to involve
different learning processes, operating at different visual cortical
areas. Here we show how to interpret these results in terms of signal
detection theory. With the assumption of limited computational
resources, we obtain the observed phenomena - direct transfer
and change of learning rate - for increasing levels of task difficulty.
It appears that human generalization concurs with the expected
behavior of a generic discrimination system.
1 Introduction
Learning in biological systems is of great importance. But while cognitive learning
(or "problem solving") is typically abrupt and generalizes to analogous problems,
perceptual skills appear to be acquired gradually and specifically: Human subjects
cannot generalize a perceptual discrimination skill to solve similar problems with
different attributes. For example, in a visual discrimination task (Fig. 1), a subject
who is trained to discriminate motion directions between 43° and 47° cannot use this skill to discriminate 133° from 137°. Generalization has been found only when stimuli of different attributes are interleaved [7, 10], or when the task is easier [6, 1]. For example, a subject who is trained to discriminate 41° from 49° can later readily discriminate 131° from 139° [6]. The specificity of learning has been so far used to
support the hypothesis that perceptual learning embodies neuronal modifications
in the brain's stimulus-specific cortical areas (e.g., visual area MT) [9, 3, 2, 5, 8, 4].
In contrast to previous results of learning specificity, we show in two experiments in Section 2 that learning in motion discrimination generalizes in all cases where specificity was thought to exist, although the mode of generalization varies. (1) When the task is difficult, it is direction specific in the traditional sense; but learning in a new direction accelerates. (2) When the task is easy, it generalizes to all directions after training in only one direction. While (2) is consistent with the findings reported in [6, 1], (1) demonstrates that generalization is the rule, not an exception
limited only to "easy" stimuli.
2 Perceptual learning experiments
Figure 1: Schematic of one trial. Left: the stimulus was a random dot pattern viewed
in a circular aperture, spanning 8° of visual angle, moving in a given primary direction (denoted dir). The primary direction was chosen from 12 directions, separated by 30°. Right: the direction of each of the two stimuli was randomly chosen from two candidate directions (dir ± Δ/2). The subject judged whether the two stimuli moved in the same or
different directions. Feedback was provided.
The motion discrimination task is described in Fig. 1. In each trial, the subject
was presented with two consecutive stimuli, each moving in one of two possible
directions (randomly chosen from the two directions dir + Δ/2 and dir - Δ/2). The directional difference |Δ| between the two stimuli was 8° in the easy condition, and 4° in the difficult condition. The experiment was otherwise identical to that in [2] that used |Δ| = 3°, except that our stimuli were displayed on an SGI computer monitor. |Δ| = 8° was chosen as the easy condition because most subjects found it
relatively easy to learn, yet still needed substantial training.
2.1 A difficult task
We trained subjects extensively in one primary direction with a difficult motion discrimination task (Δ = 4°), followed by extensive training in a second primary direction. The two primary directions were sufficiently different so direct transfer between them was not expected [2] (Fig. 2). Subjects' initial performance in
both directions was comparable, replicating the classical result of stimulus specific
learning (no direct transfer). However, all subjects took only half as many training sessions to make the same improvement in the second direction. All subjects
had extensive practice with the task prior to this experiment, thus the acceleration
cannot be simply explained by familiarity.
47
Mechanisms of Generalization in Perceptual Learning
Our results show that although perceptual learning did not directly transfer in this
difficult task, it did nevertheless generalize to the new direction. The generalization
was manifested as 100% increase in the rate of learning in the second direction. It
demonstrates that the generalization of learning, as manifested via direct transfer
and via increase in learning rate, may be thought of as two extremes of a continuum
of possibilities.
Figure 2: Subjects DJ and ZL needed 20 training sessions in the first direction, and nine in
the second; subject ZJX needed seven training sessions in the first, and four in the second. The rate of learning (the amount of improvement per session) in the second direction is significantly greater than in the first (t(2) = 13.41, p < 0.003).
2.2 An easy task
We first measured the subjects' baseline performance in an easy task - the discrimination of motion directions 8° apart - in 12 primary directions (64 trials
each, randomly interleaved). We then trained four subjects in one oblique primary
direction (chosen randomly and counter-balanced among subjects) for four sessions,
each with 700 trials. Finally, we measured again the subjects' performance in all
directions. Every subject improved in all directions (Fig. 3). The performance of
seven control subjects was measured without intermediate training; two more control subjects were added who were "trained" with similar motion stimuli but were
asked to discriminate a brightness change instead. The control subjects improved
as well, but significantly less (Δd' = 0.09 vs. 0.78, Fig. 3).
Our results clearly show that training with an easy task in one direction leads to
immediate improvement in other directions. Hence the learned skill generalized
across motion directions.
3 A computational model
We will now adopt a general framework for the analysis of perceptual learning
results, using the language of signal detection theory. Our model accounts for the
results in this paper by employing the constraint of limited computational resources.
The model's assumptions are as follows.
1. In each trial, each of the two stimuli is represented by a population of measurements that encode all aspects of the stimulus, in particular, the output of localized
direction detectors. The measurements are encoded as a vector. The decision as to
whether the two stimuli are the same or not is determined by the difference of the
two vectors.
2. Each component of the input measurements is characterized by its sensitivity
for the discrimination task, e.g., how well the two motion directions can be discriminated apart based on this component. The entire population itself is generally
divided into two sets: informative - measurements with significant sensitivity, and
Figure 3: Left: Discrimination sensitivity d' of subject JY who was trained in the primary direction 300°. Middle: d' of control subject YHL who had no training in between the two measurements. Right: Average d' (and standard error) for all subjects before and after training. Trained: results for the four trained subjects. Note the substantial improvement between the two measurements. For these subjects, the d' measured after training is shown separately for the trained direction (middle column) and the remaining directions (right column). Control: results for the nine control subjects. The control subjects improved their performance significantly less than the trained subjects (Δd' = 0.09 vs. 0.78; F(1, 11) = 14.79, p < 0.003).
uninformative - measurements with null sensitivity. In addition, informative measurements may vary greatly in their individual sensitivity. When many have high
sensitivity, the task is easy. When most have low sensitivity, the task is difficult.
We assume that sensitivity changes from one primary direction to the next, but
the population of informative measurements remains constant. For example, in our
psychophysical task localized directional signals are likely to be in the informative
set for any motion direction, though their individual sensitivity will vary based
on specific motion directions. On the other hand, local speed signals are never
informative and therefore always belong to the uninformative set.
3. Due to limited computational capacity, the system can, at a time, only process
a small number of components of the input vector. The decision in a single trial
is therefore made based on the magnitude of this sub-vector, which may vary from
trial to trial.
In each trial the system rates the processed components of the sub-vector according
to their sensitivity for the discrimination task. After a sufficient number of trials
(enough to estimate all the component sensitivities of the sub-vector), the system
identifies the least sensitive component and replaces it in the next trial with a new
random component from the input vector. In effect, the system is searching from the
input vector a sub-vector that gives rise to the maximal discrimination sensitivity.
Therefore the performance of the system is gradually improving, causing learning
from session to session in the training direction.
4. After learning in one training direction, the system identifies the sets of informative and uninformative measurements and includes in the informative set any
measurement with significant (though possibly low) sensitivity. In the next training
direction, only the set of informative measurements is searched. The search becomes
more efficient, and hence the acceleration of the learning rate. This accounts for
the learning between training directions.
We further assume that each stimulus generates a signal that is a vector of N measurements: \{I_i\}_{i=1}^{N}. We also assume that the signal for the discrimination task is the difference between two stimulus measurements: x = \{x_i\}_{i=1}^{N}, x_i = \Delta I_i. The
same/different discrimination task is to decide whether x is generated by noise - the null vector 0 - or by some distinct signal - the vector S.
At time t a measurement vector x^t is obtained, which we denote x^{st} if it is the signal S, and x^{nt} otherwise. Assume that each measurement in x^t is a normal random variable:

x^{nt} = \{x_i^{nt}\}_{i=1}^{N}, \quad x_i^{nt} \sim N(0, \sigma_i); \qquad x^{st} = \{x_i^{st}\}_{i=1}^{N}, \quad x_i^{st} \sim N(\mu_i, \sigma_i).
We measure the sensitivity d' of each component. Since both the signal and noise
are assumed to be normal random variables, the sensitivity of the i-th measurement
in the discrimination task is d_i' = |\mu_i| / \sigma_i. Assuming further that the measurements
are independent of each other and of time, then the combined sensitivity of M
measurements is

d' = \sqrt{ \sum_{i=1}^{M} (\mu_i / \sigma_i)^2 }.

3.1 Limited resources: an assumption
We assume that the system can simultaneously process at most M \ll N of the original N measurements. Since the sensitivity d_i' of the different measurements
varies, the discrimination depends on the combined sensitivity of the particular set
of M measurements that are being used. Learning in the first training direction,
therefore, leads to the selection of a "good" subset of the measurements, obtained
by searching in the measurement space.
After searching for the best M measurements for the current training direction, the
system divides the measurements into two sets: those with non-negligible sensitivity,
and those with practically null sensitivity. This rating is kept for the next training
direction, when only the first set is searched.
One prediction of this model is that learning rate should not increase with exposure
only. In other words, it is necessary for subjects to be exposed to the stimulus
and do the same discrimination task for effective inter-directional learning to take
place. For example, assume that the system is given N measurements: N /2 motion
direction signals and N /2 speed signals. It learns during the first training direction
that the N /2 speed signals have null sensitivity for the direction discrimination
task, whereas the directional signals have varying (but significant) sensitivity. In the
second training direction, the system is given the N measurements whose sensitivity
profile is different from that in the first training direction, but still with the property
that only the directional signals have any significant sensitivity (Fig. 4b). Based on
learning in the first training direction, the system only searches the measurements
whose sensitivity in the first training direction was significant, namely, the N /2
directional signals. It ignores the speed signals. Now the asymptotic performance
in the second direction remains unchanged because the most sensitive measurements
are within the searched population - they are directional signals. The learning rate,
however, doubles since the system searches a space half as large.
3.2 Simulation results
To account for the different modes of learning, we make the following assumptions.
When the task is easy, many components have high sensitivity d'. When the task is
difficult, only a small number of measurements have high d'. Therefore, when the
task is easy, a subset of M measurements that give rise to the best performance is
found relatively fast. In the extreme, when the task is very easy (e.g., all the measurements have very high sensitivity), the rate of learning is almost instantaneous
and the observed outcome appears to be transfer. On the other hand, when the
task is difficult, it takes a long time to find the M measurements that give rise to
the best performance, and learning is slow.
Z. Liu and D. Weinshall
50
Figure 4: Hypothetical sensitivity profile for a population of measurements of speed and motion direction. Left: First training direction - only the motion direction measurements have significant sensitivity (d' above 0.1), with measurements around 45° having the highest d'. Right: Second direction - only the motion direction measurements have significant sensitivity, with measurements around 135° having the highest d'.
The detailed operations of the model are as follows. In the first training direction,
the system starts with a random set of M measurements. In each trial and using feedback, the mean and standard deviation of each measurement are computed: \mu_i^{st}, \sigma_i^{st} for the signal and \mu_i^{nt}, \sigma_i^{nt} for the noise. In the next trial, given M measurements \{x_i^{t+1}\}_{i=1}^{M}, the system evaluates

\delta = \sum_{i=1}^{M} \left[ \left( \frac{x_i^{t+1} - \mu_i^{st}}{\sigma_i^{st}} \right)^2 - \left( \frac{x_i^{t+1} - \mu_i^{nt}}{\sigma_i^{nt}} \right)^2 \right]

and classifies x as the signal if \delta < 0, and noise otherwise.

At time T, the worst measurement is identified as the argument of \min_i d_i', with d_i' = 2|\mu_i^{st} - \mu_i^{nt}| / (\sigma_i^{st} + \sigma_i^{nt}). It is then replaced randomly from one of the remaining N - M measurements. The learning and decision making then proceed as above for another T iterations. This is repeated until the set of chosen measurements stabilizes. At the end, the decision is made based on the set of M measurements that have the highest sensitivities.
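The following compact sketch mirrors these operations (a paraphrase, not the authors' code; the population parameters, trial counts, and the signal prior of 1/2 are illustrative assumptions):

import numpy as np

rng = np.random.default_rng(0)
N, M, T = 150, 4, 200                    # population, working subset, trials per block
mu = np.zeros(N); mu[:4] = 2.0           # only a few informative components: difficult task
sigma = np.ones(N)

subset = list(rng.choice(N, size=M, replace=False))
for block in range(50):
    samples = {i: ([], []) for i in subset}          # (noise, signal) samples per component
    for _ in range(T):
        is_signal = rng.random() < 0.5
        x = rng.normal(mu * is_signal, sigma)        # one difference vector
        for i in subset:
            samples[i][int(is_signal)].append(x[i])
    def dprime(i):                                    # d'_i = 2|mu_s - mu_n| / (sig_s + sig_n)
        nse, sig = (np.asarray(v) for v in samples[i])
        return 2 * abs(sig.mean() - nse.mean()) / (sig.std() + nse.std() + 1e-9)
    worst = min(subset, key=dprime)                   # least sensitive component ...
    subset.remove(worst)                              # ... is replaced by a random new one
    subset.append(int(rng.choice([j for j in range(N) if j not in subset])))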
Figure 5: Simulated performance (percent correct) as function of time. Left: Difficult condition - the number of measurements with high d_i' is small (4 out of 150); there is no transfer from the first to the second training direction, but the learning rate is increased two-fold. This graph is qualitatively similar to the results shown in the top row of Fig. 2. Right: Easy condition - the number of measurements with high d_i' is large (72 out of 150); there is almost complete transfer from the first to the second training direction.
At the very beginning of training in the second direction, based on the d_i' measured in the first direction, the measurement population is labeled as informative - those with d_i' larger than the median value - and uninformative - the remaining measurements. The learning and decision making proceeds as above, while only informative measurements are considered during the search.
In the simulation we used N = 150 measurements, with M = 4. Half of the N measurements (the informative measurements) had significant d_i'. In the second training direction, the sensitivities of the measurements were randomly changed, but only the informative measurements had significant d_i'. By varying the number of measurements with high d_i' in the population of informative measurements, we get the different modes of generalization (Fig. 5).
4 Discussions
In contrast to previous results on the specificity of learning, we broadened the
search for generalization beyond traditional transfer. We found that generalization
is the rule, rather than an exception. Perceptual learning of motion discrimination
generalizes in various forms: as acceleration of learning rate (Exp. 1), as immediate
improvement in performance (Exp. 2). Thus we show that perceptual learning
is more similar to cognitive learning than previously thought, with both stimulus
specificity and generalization as important ingredients.
In our scheme, the assumption of limited computational resources forced the discrimination system to search in the measurement space. The generalization phenomena - transfer and increased learning rate - occur due to improvement in search sensitivity from one training direction to the next, as the size of the search space decreases
with learning. Our scheme also predicts that learning rate should only improve if
the subject both sees the stimulus and does the relevant discrimination task, in
agreement with the results in Exp. 1. Importantly, our scheme does not predict
transfer per se, but instead a dramatic increase in learning rate that is equivalent
to transfer.
Our model is qualitative and does not make any concrete quantitative predictions.
We would like to emphasize that this is not a handicap of the model. Our goal is to
show, qualitatively, that the various generalization phenomena should not surprise
us, as they should naturally occur in a generic discrimination system with limited
computational resources. Thus we argue that it may be too early to use existing
perceptual learning results for the identification of the cortical location of perceptual
learning, and the levels at which modifications are taking place.
References
[1] Ahissar M and Hochstein S. Task difficulty and the specificity of perceptual learning. Nature, 387:401-406, 1997.
[2] Ball K and Sekuler R. A specific and enduring improvement in visual motion discrimination. Science, 218:697-698, 1982.
[3] Fiorentini A and Berardi N. Perceptual learning specific for orientation and spatial frequency. Nature, 287:43-44, 1980.
[4] Gilbert C D. Early perceptual learning. PNAS, 91:1195-1197, 1994.
[5] Karni A and Sagi D. Where practice makes perfect in texture discrimination: Evidence for primary visual cortex plasticity. PNAS, 88:4966-4970, 1991.
[6] Liu Z. Learning a visual skill that generalizes. Tech. Report, NECI, 1995.
[7] Liu Z and Vaina L M. Stimulus specific learning: a consequence of stimulus-specific experiments? Perception, 24(supplement):21, 1995.
[8] Poggio T, Fahle M, and Edelman S. Fast perceptual learning in visual hyperacuity. Science, 256:1018-1021, May 1992.
[9] Ramachandran V S. Learning-like phenomena in stereopsis. Nature, 262:382-384, 1976.
[10] Rubin N, Nakayama K, and Shapley R. Abrupt learning and retinal size specificity in illusory-contour perception. Current Biology, 7:461-467, 1997.
690 | 1,631 | Learning a Hierarchical Belief Network of
Independent Factor Analyzers
H. Attias*
hagai@gatsby.ucl.ac.uk
Sloan Center for Theoretical Neurobiology, Box 0444
University of California at San Francisco
San Francisco, CA 94143-0444
Abstract
Many belief networks have been proposed that are composed of
binary units. However, for tasks such as object and speech recognition which produce real-valued data, binary network models are
usually inadequate. Independent component analysis (ICA) learns
a model from real data, but the descriptive power of this model
is severely limited. We begin by describing the independent factor
analysis (IFA) technique, which overcomes some of the limitations
of ICA. We then create a multilayer network by cascading singlelayer IFA models. At each level, the IFA network extracts realvalued latent variables that are non-linear functions of the input
data with a highly adaptive functional form, resulting in a hierarchical distributed representation of these data. Whereas exact
maximum-likelihood learning of the network is intractable, we derive an algorithm that maximizes a lower bound on the likelihood,
based on a variational approach.
1 Introduction
An intriguing hypothesis for how the brain represents incoming sensory information holds that it constructs a hierarchical probabilistic model of the observed data.
The model parameters are learned in an unsupervised manner by maximizing the
likelihood that these data are generated by the model. A multilayer belief network is a realization of such a model. Many belief networks have been proposed
that are composed of binary units. The hidden units in such networks represent
latent variables that explain different features of the data, and whose relation to the
*Current address: Gatsby Computational Neuroscience Unit, University College London, 17 Queen Square, London WC1N 3AR, U.K.
data is highly non-linear. However, for tasks such as object and speech recognition
which produce real-valued data, the models provided by binary networks are often
inadequate. Independent component analysis (ICA) learns a generative model from
real data, and extracts real-valued latent variables that are mutually statistically
independent. Unfortunately, this model is restricted to a single layer and the latent
variables are simple linear functions of the data; hence, underlying degrees of freedom that are non-linear cannot be extracted by ICA. In addition , the requirement
of equal numbers of hidden and observed variables and the assumption of noiseless
data render the ICA model inappropriate.
This paper begins by introducing the independent factor analysis (IFA) technique.
IFA is an extension of ICA, that allows different numbers of latent and observed
variables and can handle noisy data. The paper proceeds to create a multilayer
network by cascading single-layer IFA models. The resulting generative model produces a hierarchical distributed representation of the input data, where the latent
variables extracted at each level are non-linear functions of the data with a highly
adaptive functional form. Whereas exact maximum-likelihood (ML) learning in
this network is intractable due to the difficulty in computing the posterior density
over the hidden layers, we present an algorithm that maximizes a lower bound on
the likelihood. This algorithm is based on a general variational approach that we
develop for the IFA network.
2 Independent Component and Independent Factor Analysis
Although the concept of ICA originated in the field of signal processing, it is actually
a density estimation problem. Given an L' \times 1 observed data vector y, the task is to explain it in terms of an L \times 1 vector x of unobserved 'sources' that are mutually statistically independent. The relation between the two is assumed linear,

y = Hx + u, \qquad (1)
where H is the 'mixing' matrix; the noise vector u is usually assumed zero-mean Gaussian with a covariance matrix \Lambda. In the context of blind source separation [1]-[4], the source signals x should be recovered from the mixed noisy signals y with no knowledge of H, \Lambda, or the source densities p(x_i), hence the term 'blind'. In the
density estimation approach, one regards (1) as a probabilistic generative model for
the observed p(y), with the mixing matrix, noise covariance, and source densities
serving as model parameters. In principle, these parameters should be learned by
ML, followed by inferring the sources via a MAP estimator.
For Gaussian sources, (1) is the factor analysis model, for which an EM algorithm
exists and the MAP estimator is linear. The problem becomes interesting and more
difficult for non-Gaussian sources. Most ICA algorithms focus on square (L' = L),
noiseless (y = Hx) mixing, and fix p(x_i) using prior knowledge (but see [5] for the
case of noisy mixing with a fixed Laplacian source prior). Learning H occurs via
gradient-ascent maximization of the likelihood [1]-[4]. Source density parameters
can also be adapted in this way [3],[4], but the resulting gradient-ascent learning is
rather slow. This state of affairs presented a problem to ICA algorithms, since the
ability to learn arbitrary source densities that are not known in advance is crucial: using an inaccurate p(x_i) often leads to a bad H estimate and failed separation.
This problem was recently solved by introducing the IFA technique [6]. IFA
employs a semi-parametric model of the source densities, which allows learning
them (as well as the mixing matrix) using expectation-maximization (EM). Specifically, p(x_i) is described as a mixture of Gaussians (MOG), where the mixture components are labeled by s = 1, \ldots, n_i and have means \mu_{i,s} and variances \nu_{i,s}: p(x_i) = \sum_s p(s_i = s) \, \mathcal{G}(x_i - \mu_{i,s}, \nu_{i,s}).^1 The mixing proportions are parametrized using the softmax form: p(s_i = s) = \exp(a_{i,s}) / \sum_{s'} \exp(a_{i,s'}). Beyond noiseless ICA, an EM algorithm for the noisy case (1) with any L, L' was also derived in [6] using the MOG description.^2 This algorithm learns a probabilistic model p(y | W) for the observed data, parametrized by W = (H, \Lambda, \{a_{i,s}, \mu_{i,s}, \nu_{i,s}\}). A graphical representation of this model is provided by Fig. 1, if we set n = 1 and y^0 = b_{i,s}^n = \nu_{i,s}^n = 0.
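To make the generative model concrete, the sketch below draws samples from a single-layer IFA model with MOG source priors; all dimensions and parameter values are arbitrary placeholders, not values from [6]:

import numpy as np

rng = np.random.default_rng(1)
L, L_out, S = 2, 3, 3                      # sources, observations, MOG states per source
a = rng.normal(size=(L, S))                # softmax parameters for p(s_i = s)
mu = rng.normal(size=(L, S))               # component means mu_{i,s}
nu = np.full((L, S), 0.5)                  # component variances nu_{i,s}
H = rng.normal(size=(L_out, L))            # mixing matrix
Lam = 0.1 * np.eye(L_out)                  # noise covariance Lambda

def sample_ifa(T):
    w = np.exp(a) / np.exp(a).sum(axis=1, keepdims=True)   # mixing proportions
    s = np.stack([rng.choice(S, size=T, p=w[i]) for i in range(L)])
    rows = np.arange(L)[:, None]
    x = rng.normal(mu[rows, s], np.sqrt(nu[rows, s]))      # MOG sources
    u = rng.multivariate_normal(np.zeros(L_out), Lam, size=T).T
    return x, H @ x + u                                    # y = Hx + u, as in (1)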
3 Hierarchical Independent Factor Analysis
In the following we develop a multilayer generalization of IFA, by cascading duplicates of the generative model introduced in [6]. Each layer n = 1, \ldots, N is composed of two sublayers: a source sublayer which consists of the units x_i^n, i = 1, \ldots, L_n, and an output sublayer which consists of y_j^n, j = 1, \ldots, L'_n. The two are linearly related via y^n = H^n x^n + u^n as in (1); u^n is a Gaussian noise vector with covariance \Lambda^n. The nth-layer source x_i^n is described by a MOG density model with parameters a_{i,s}^n, \mu_{i,s}^n, and \gamma_{i,s}^n, in analogy to the IFA sources above.
The important step is to determine how layer n depends on the previous layers. We choose to introduce a dependence of the ith source of layer n only on the ith output of layer n - 1. Notice that matching L_n = L'_{n-1} is now required. This dependence is implemented by making the means and mixture proportions of the Gaussians which compose p(x_i^n) dependent on y_i^{n-1}. Specifically, we make the replacements \mu_{i,s}^n \to \mu_{i,s}^n + \nu_{i,s}^n y_i^{n-1} and a_{i,s}^n \to a_{i,s}^n + b_{i,s}^n y_i^{n-1}. The resulting joint density for layer n, conditioned on layer n - 1, is
p(s^n, x^n, y^n \mid y^{n-1}, W^n) = \prod_{i=1}^{L_n} p(s_i^n \mid y_i^{n-1}) \, p(x_i^n \mid s_i^n, y_i^{n-1}) \, p(y^n \mid x^n), \qquad (2)

where W^n are the parameters of layer n and

p(s_i^n = s \mid y_i^{n-1}) = \frac{\exp(a_{i,s}^n + b_{i,s}^n y_i^{n-1})}{\sum_{s'} \exp(a_{i,s'}^n + b_{i,s'}^n y_i^{n-1})}, \qquad p(x_i^n \mid s_i^n = s, y_i^{n-1}) = \mathcal{G}(x_i^n - \mu_{i,s}^n - \nu_{i,s}^n y_i^{n-1}, \gamma_{i,s}^n).
The full model joint density is given by the product of (2) over n = 1, \ldots, N (setting y^0 = 0). A graphical representation of layer n of the hierarchical IFA network is given in Fig. 1. All units are hidden except y^N.
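Ancestral sampling from the hierarchy follows directly from (2): starting with y^0 = 0, each layer draws its states, sources, and outputs in turn. A schematic per-layer sampler (the parameter packing is an assumption of this sketch) could look like:

import numpy as np

def sample_layer(y_prev, a, b, mu, nu, gamma, H, Lam, rng):
    # draws (s^n, x^n, y^n) given y^{n-1}, following eq. (2); all arrays are per layer
    L = a.shape[0]
    x = np.empty(L)
    for i in range(L):
        logits = a[i] + b[i] * y_prev[i]          # softmax modulated by y_i^{n-1}
        p = np.exp(logits - logits.max()); p /= p.sum()
        s = rng.choice(len(p), p=p)
        x[i] = rng.normal(mu[i, s] + nu[i, s] * y_prev[i], np.sqrt(gamma[i, s]))
    return x, H @ x + rng.multivariate_normal(np.zeros(H.shape[0]), Lam)

# top-down pass:  y = np.zeros(L_1);  for layer in layers:  x, y = sample_layer(y, *layer, rng)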
To gain some insight into our network, we examine the relation between the nth-layer source x_i^n and the (n-1)th-layer output y_i^{n-1}. This relation is probabilistic and is determined by the conditional density p(x_i^n \mid y_i^{n-1}) = \sum_{s_i^n} p(s_i^n \mid y_i^{n-1}) p(x_i^n \mid s_i^n, y_i^{n-1}). Notice from (2) that this is a MOG density. Its y_i^{n-1}-dependent mean is given by

\bar{x}_i^n = f_i^n(y_i^{n-1}) = \sum_s p(s_i^n = s \mid y_i^{n-1}) \, (\mu_{i,s}^n + \nu_{i,s}^n y_i^{n-1}), \qquad (3)
^1 Throughout this paper, \mathcal{G}(x, \Sigma) = |2\pi\Sigma|^{-1/2} \exp(-x^T \Sigma^{-1} x / 2).
^2 However, for many sources the E-step becomes intractable, since the number \prod_i n_i of source state configurations s = (s_1, \ldots, s_L) depends exponentially on L. Such cases are treated in [6] using a variational approximation.
Figure 1: Layer n of the hierarchical IFA generative model.
and is a non-linear function of y_i^{n-1} due to the softmax form of p(s_i^n \mid y_i^{n-1}). By adjusting the parameters, the function f_i^n can assume a very wide range of forms: suppose that for state s_i^n, a_{i,s}^n and b_{i,s}^n are set so that p(s_i^n = s \mid y_i^{n-1}) is significant only in a small, continuous range of y_i^{n-1} values, with different ranges associated with different s's. In this range, f_i^n will be dominated by the linear term \mu_{i,s}^n + \nu_{i,s}^n y_i^{n-1}. Hence, a desired f_i^n can be produced by placing oriented line segments at appropriate points above the y_i^{n-1}-axis, then smoothly joining them together by the p(s_i^n \mid y_i^{n-1}). Using the algorithm below, the optimal form of f_i^n will be learned from the data. Therefore, our model describes the data y_j^N as a potentially highly complex function of the top layer sources, produced by repeated application of linear mixing followed by a non-linearity, with noise allowed at each stage.
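The adaptive non-linearity (3) is easy to probe numerically. The sketch below evaluates f(y) for hand-picked parameters (all values illustrative): three line segments with different slopes, smoothly joined by the softmax gates:

import numpy as np

def f(y, a, b, mu, nu):
    # conditional mean of eq. (3) for a scalar input y
    logits = a + b * y
    p = np.exp(logits - logits.max()); p /= p.sum()
    return float(np.sum(p * (mu + nu * y)))

a, b = np.zeros(3), np.array([-4.0, 0.0, 4.0])       # gates active at low/mid/high y
mu, nu = np.array([-1.0, 0.0, 1.0]), np.array([0.2, 1.5, 0.2])
curve = [f(y, a, b, mu, nu) for y in np.linspace(-2.0, 2.0, 9)]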
4 Learning and Inference by Variational EM
The need for summing over an exponentially large number of source state configurations (s_1^n, \ldots, s_{L_n}^n), and integrating over the softmax functions p(s_i^n \mid y_i^{n-1}), makes exact learning intractable in our network. Thus, approximations must be made. In the following we develop a variational approach, in the spirit of [8], to hierarchical IFA. We begin, following the approach of [7] to EM, by bounding the log-likelihood from below:

\mathcal{L} = \log p(y^N) \geq \sum_n \left\{ E \log p(y^n \mid x^n) + \sum_i \left[ E \log p(x_i^n \mid s_i^n, y_i^{n-1}) + E \log p(s_i^n \mid y_i^{n-1}) \right] \right\} - E \log q,

where E denotes averaging over the hidden layers using an arbitrary posterior q = q(s^{1 \ldots N}, x^{1 \ldots N}, y^{1 \ldots N-1} \mid y^N). In exact EM, q at each iteration is the true posterior, parametrized by W^{1 \ldots N} from the previous iteration. In variational EM, q is chosen to have a form which makes learning tractable, and is parametrized by a separate set of parameters V^{1 \ldots N}. These are optimized to bring q as close to the true posterior as possible.
E-step. We use a variational posterior that is factorized across layers. Within layer n it has the form

q(s^n, x^n, y^n \mid V^n) = \prod_{i=1}^{L_n} v_{i,s_i}^n \; \mathcal{G}(z^n - \rho^n, \Sigma^n), \qquad (4)

for n < N, where z^n \equiv (x^n, y^n), and q(s^N, x^N \mid V^N) = \prod_i v_{i,s_i}^N \mathcal{G}(x^N - \rho^N, \Sigma^N). The variational parameters V^n = (\rho^n, \Sigma^n, \{v_{i,s}^n\}) depend on the data y^N. The full N-layer posterior is simply a product of (4) over n. Hence, given the data, the nth-layer sources and outputs are jointly Gaussian whereas the states s_i^n are independent.^3
Even with the variational posterior (4), the term E \log p(s_i^n \mid y_i^{n-1}) in the lower bound cannot be calculated analytically, since it involves integration over the softmax function. Instead, we calculate yet a lower bound on this term. Let c_{i,s} = a_{i,s} + b_{i,s} y_i^{n-1} and drop the unit and layer indices i, n; then \log p(s \mid y) = -\log(1 + e^{-c_s} \sum_{s' \neq s} e^{c_{s'}}). Borrowing an idea from [8], we multiply and divide by e^{\eta_s c_s} under the logarithm sign and use Jensen's inequality to get E \log p(s \mid y) \geq -\eta_s E c_s - \log E [ e^{-\eta_s c_s} + e^{-(1+\eta_s) c_s} \sum_{s' \neq s} e^{c_{s'}} ]. This results in a bound that can be calculated in closed form:

E \log p(s_i^n = s \mid y_i^{n-1}) \geq -v_s^n \eta_s^n \bar{c}_s^n - v_s^n \log \left( e^{f_s^n} + \sum_{s' \neq s} e^{f_{s,s'}^n} \right) =: \mathcal{F}_s^n, \qquad (5)

where \bar{c}_s^n = a_s^n + b_s^n \rho_y^{n-1}, f_s^n = -\eta_s^n \bar{c}_s^n + (\eta_s^n b_s^n)^2 \Sigma_{yy}^{n-1}/2, f_{s,s'}^n = -(1 + \eta_s^n) \bar{c}_s^n + \bar{c}_{s'}^n + [(1 + \eta_s^n) b_s^n - b_{s'}^n]^2 \Sigma_{yy}^{n-1}/2, and the subscript i is omitted. We also defined \rho^n = (\rho_x^n, \rho_y^n)^T, and similarly \Sigma_{xx}, \Sigma_{yy}, \Sigma_{xy} = \Sigma_{yx}^T are the subblocks of \Sigma. Since (5) holds for arbitrary \eta_s^n, the latter are treated as additional variational parameters which are optimized to tighten this bound.^4
To optimize the variational parameters V^{1 \ldots N}, we equate the gradient of the lower bound on \mathcal{L} to zero and obtain

(\Sigma^n)^{-1} = \begin{pmatrix} (H^T \Lambda^{-1} H)^n + A^n & -(H^T \Lambda^{-1})^n \\ -(\Lambda^{-1} H)^n & (\Lambda^{-1})^n + B^{n+1} \end{pmatrix}, \qquad (6)

\rho^n = \Sigma^n \begin{pmatrix} \alpha^n + E^n \rho_y^{n-1} \\ E^{n+1} \rho_x^{n+1} - \beta^{n+1} + F_{\rho}^{n+1} \end{pmatrix}, \qquad (7)

where A_{ij}^n = \sum_s (v_{i,s}/\gamma_{i,s})^n \delta_{ij}, E_{ij}^n = \sum_s (v_{i,s} \nu_{i,s}/\gamma_{i,s})^n \delta_{ij}, B_{ij}^n = \sum_s (v_{i,s} \nu_{i,s}^2/\gamma_{i,s})^n \delta_{ij}, \alpha_i^n = \sum_s (v_{i,s} \mu_{i,s}/\gamma_{i,s})^n, and \beta_i^n = \sum_s (v_{i,s} \mu_{i,s} \nu_{i,s}/\gamma_{i,s})^n (all parameters within (\cdots)^n belong to layer n). F_{\rho}^{n+1} contains the corresponding derivatives of \mathcal{F}_s^{n+1} (5), summed over s. For the state posteriors we have

v_s^n = \frac{1}{Z^n} \exp \left( -\frac{1}{2\gamma_s^n} \left[ (\rho_x^n - \mu_s^n - \nu_s^n \rho_y^{n-1})^2 + \Sigma_{xx}^n + (\nu_s^n)^2 \Sigma_{yy}^{n-1} \right] + \frac{\partial \mathcal{F}_s^n}{\partial v_s^n} \right), \qquad (8)
on 8~, and the covariances ~0 to depend on 87, 8;, thus making the approximation more
accurate (but more complex) while maintaining tractability.
4 An alternative approach to handle E log p( 87 I y~ - l) is to approximate the required
integral by, e.g., the maximum value of the integrand, possibly including Gaussian corrections. The resulting approximation is simpler than (5); however, it is no longer guaranteed
to bound the log-likelihood from below.
H. Attias
366
zn
where the unit subscript i is omitted (i.e., ~~x = ~~x , ii) ;
= Zi is set such that
2: s v~s = 1. A simple modification of these equations is required for layer n = N.
The optimal V^{1 \ldots N} are obtained by solving the fixed-point equations (6)-(8) iteratively for each data vector y^N, keeping the generative parameters W^{1 \ldots N} fixed. Notice that these equations couple layer n to layers n \pm 1. The additional parameters \eta_{i,s}^n are adjusted using gradient ascent on \mathcal{F}_s^n. Once learning is complete, the inference problem is solved since the MAP estimate of the hidden unit values given the data is readily available from \rho_i^n and v_{i,s}^n.
M-Step. In terms of the variational parameters obtained in the E-step, the new generative parameters are given by

H^n = (\rho_y^n \rho_x^{nT} + \Sigma_{yx}^n)(\rho_x^n \rho_x^{nT} + \Sigma_{xx}^n)^{-1}, \qquad \Lambda^n = \rho_y^n \rho_y^{nT} + \Sigma_{yy}^n - H^n (\rho_x^n \rho_y^{nT} + \Sigma_{xy}^n), \qquad (9)

\gamma_s^n = \frac{1}{v_s^n} \, v_s^n \left[ (\rho_x^n - \mu_s^n - \nu_s^n \rho_y^{n-1})^2 + \Sigma_{xx}^n + (\nu_s^n)^2 \Sigma_{yy}^{n-1} \right], \qquad (10)

with similar weighted-average updates for \mu_s^n and \nu_s^n,
omitting the subscript i as in (8); these updates are slightly modified for layer N. In batch mode, averaging over the data is implied and the v_s^n do not cancel out. Finally, the softmax parameters a_{i,s}^n, b_{i,s}^n are adapted by gradient ascent on the bound (5).
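Putting the two steps together, the overall training loop has the shape sketched below. The update functions are empty placeholders standing in for eqs. (6)-(8) and (9)-(10), which are not spelled out here; only the control flow is meant to be informative.

def update_rho_sigma(V_n, W, y_N):   pass   # placeholder for eqs. (6)-(7)
def update_state_posteriors(V_n, W): pass   # placeholder for eq. (8)
def ascend_eta(V_n, W):              pass   # tighten the bound (5) in eta
def update_generative(W, data, V):   pass   # placeholder for eqs. (9)-(10) + softmax ascent

def train(data, W, V, n_sweeps=100, n_fixed_point=10):
    for _ in range(n_sweeps):
        for y_N, V_n in zip(data, V):          # E-step, per data vector
            for _ in range(n_fixed_point):     # iterate the coupled fixed points
                update_rho_sigma(V_n, W, y_N)
                update_state_posteriors(V_n, W)
                ascend_eta(V_n, W)
        update_generative(W, data, V)          # M-step over the whole batch
    return W, V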
5 Discussion
The hierarchical IFA network presented here constitutes a quite general framework
for learning and inference using real-valued probabilistic models that are strongly
non-linear but highly adaptive. Notice that this network includes both continuous x_i^n, y_i^n and binary s_i^n units, and can thus extract both types of latent variables. In particular, the uppermost units s_i^N may represent class labels in classification tasks. The models proposed in [9]-[11] can be viewed as special cases where x_i^n is a prescribed deterministic function (e.g., rectifier) of the previous outputs y_j^{n-1}: in the IFA network, a deterministic (but still adaptive) dependence can be obtained by setting the variances \gamma_{i,s}^n = 0. Note that the source x_i^n in such a case assumes only the values \mu_{i,s}^n, and thus corresponds to a discrete latent variable.
The learning and inference algorithm presented here is based on the variational
approach. Unlike variational approximations in other belief networks [8],[10] which
use a completely factorized approximation, the structure of the hierarchical IFA
network facilitates using a variational posterior that allows correlations among hidden units occupying the same layer, thus providing a more accurate description of
the true posterior. It would be interesting to compare the performance of our variational algorithm with the belief propagation algorithm [12] which, when adapted to
the densely connected IFA network, would also be an approximation. Markov chain
Monte Carlo methods, including the more recent slice sampling procedure used in
[11], would become very slow as the network size increases.
It is possible to consider a more general non-linear network along the lines of hierarchical IFA. Notice from (2) that given the previous layer output y^{n-1}, the mean output of the next layer is \bar{y}_i^n = \sum_j H_{ij}^n f_j^n(y_j^{n-1}) (see (3)), i.e. a linear mixing preceded by a non-linear function operating on each output component separately. However, if we eliminate the sources x_j^n, replace the individual source states s_j^n by collective states s^n, and allow the linear transformation to depend on s^n, we arrive at the following model: p(s^n = s \mid y^{n-1}) \propto \exp(a_s^n + b_s^{nT} y^{n-1}), p(y^n \mid s^n = s, y^{n-1}) = \mathcal{G}(y^n - h_s^n - H_s^n y^{n-1}, \Lambda^n). Now we have \bar{y}^n = \sum_s p(s^n = s \mid y^{n-1})(h_s^n + H_s^n y^{n-1}) \equiv F(y^{n-1}), which is a more general non-linearity.
Finally, the blocks \{y^n, x^n, s^n \mid y^{n-1}\} (Fig. 1), or alternatively the blocks \{y^n, s^n \mid y^{n-1}\} described above, can be connected not only vertically (as in this paper) and
horizontally (creating layers with multiple blocks), but in any directed acyclic graph
architecture, with the variational EM algorithm extended accordingly.
Acknowledgements
I thank V. de Sa for helpful discussions. Supported by The Office of Naval Research
(N00014-94-1-0547), NIDCD (R01-02260), and the Sloan Foundation.
References
[1] Bell, A.J. and Sejnowski, T.J. (1995). An information-maximization approach
to blind separation and blind deconvolution. Neural Computation 7, 1129-1159.
[2] Cardoso, J.-F. (1997). Infomax and maximum likelihood for source separation.
IEEE Signal Processing Letters 4, 112-114.
[3] Pearlmutter, B.A. and Parra, L.C. (1997). Maximum likelihood blind source separation: A context-sensitive generalization of ICA. Advances in Neural Information
Processing Systems 9 (Ed. Mozer, M.C. et al), 613-619. MIT Press.
[4] Attias, H. and Schreiner, C.E. (1998). Blind source separation and deconvolution: the dynamic component analysis algorithm. Neural Computation 10, 1373-1424.
[5] Lewicki, M.S. and Sejnowski, T.J. (1998). Learning nonlinear overcomplete
representations for efficient coding. Advances in Neural Information Processing
Systems 10 (Ed. Jordan, M.I. et al), MIT Press.
[6] Attias, H. (1999). Independent factor analysis. Neural Computation, in press.
[7] Neal, R.M. and Hinton, G.E. (1998). A view of the EM algorithm that justifies incremental, sparse, and other variants. Learning in Graphical Models (Ed. Jordan, M.I.), Kluwer Academic Press.
[8] Saul, L.K., Jaakkola, T., and Jordan, M.I. (1996). Mean field theory of sigmoid
belief networks. Journal of Artificial Intelligence Research 4, 61-76.
[9] Frey, B.J. (1997). Continuous sigmoidal belief networks trained using slice sampling. Advances in Neural Information Processing Systems 9 (Ed. Mozer, M.C. et al). MIT Press.
[10] Frey, B.J. and Hinton, G.E. (1999). Variational learning in non-linear Gaussian
belief networks. Neural Computation, in press.
[11] Ghahramani, Z. and Hinton, G.E. (1998). Hierarchical non-linear factor analysis and topographic maps. Advances in Neural Information Processing Systems 10
(Ed. Jordan, M.I. et al), MIT Press.
[12] Pearl, J. (1988). Probabilistic Reasoning in Intelligent Systems. Morgan Kaufmann, San Mateo, CA.
691 | 1,632 | Convergence of The Wake-Sleep Algorithm
Shiro Ikeda
PRESTO,JST
Wako, Saitama, 351-0198, Japan
shiro@brain.riken.go.jp
Shun-ichi Amari
RIKEN Brain Science Institute
Wako, Saitama, 351-0198,Japan
amari@brain.riken.go.jp
Hiroyuki Nakahara
RIKEN Brain Science Institute
hiro@brain.riken.go.jp
Abstract
The W-S (Wake-Sleep) algorithm is a simple learning rule for the models
with hidden variables. It is shown that this algorithm can be applied to
a factor analysis model which is a linear version of the Helmholtz machine. But even for a factor analysis model, the general convergence is
not proved theoretically. In this article, we describe the geometrical understanding of the W-S algorithm in contrast with the EM (ExpectationMaximization) algorithm and the em algorithm. As the result, we prove
the convergence of the W-S algorithm for the factor analysis model. We
also show the condition for the convergence in general models.
1 INTRODUCTION
The W-S algorithm[5] is a simple Hebbian learning algorithm. Neal and Dayan applied the
W-S algorithm to a factor analysis mode1[7]. This model can be seen as a linear version of
the Helmholtz machine[3]. As it is mentioned in[7], the convergence of the W-S algorithm
has not been proved theoretically even for this simple model.
From the similarity of the W-S and the EM algorithms and also from empirical results, the
W-S algorithm seems to work for a factor analysis model. But there is an essential difference between the W-S and the EM algorithms. In this article, we show the em algorithm[2],
which is the information geometrical version of the EM algorithm, and describe the essential difference. From the result, we show that we cannot rely on the similarity for the reason
of the W-S algorithm to work. However, even with this difference, the W-S algorithm works
on the factor analysis model and we can prove it theoretically. We show the proof and also
show the condition of the W-S algorithm to work in general models.
S. Ikeda, S. Amari and H. Nakahara
240
2 FACTOR ANALYSIS MODEL AND THE W-S ALGORITHM
A factor analysis model with a single factor is defined as the following generative model,
Generative model
x = J.t + yg + ?,
where x = (Xl,'" ,xn)T is a n dimensional real-valued visible inputs, y ,. . . ,
N(O, 1) is the single invisible factor, g is a vector of "factor loadings", J.t is the
overall means vector which is set to be zero in this article, and ? ,......, N(O, E) is the
noise with a diagonal covariance matrix, E = diag( a;). In a Helmholtz machine,
this generative model is accompanied by a recognition model which is defined as,
y = rT x + 15,
Recognition model
where r is the vector of recognition weights and 15 ,. . . , N (0, 8 2 ) is the noise.
When data Xl, ... ,XN is given, we want to estimate the MLE(Maximum Likelihood Estimator) of g and E. The W-S algorithm can be applied[7] for learning of this model.
Wake-phase: From the training set {x s} choose a number of x randomly and for each
data, generate y according to the recognition model y = rT x + 15,15 ,. . . , N(O, 8F).
Update g and E as follows using these x's and y's, where a is a small positive
number and (3 is slightly less than 1.
+ a(x - gtY)Y
(3a;,t + (1 - (3) (Xi -
gt+l
al,t+l
gt
=
(1)
9i,ty)2,
(2)
where denotes the averaging over the chosen data.
Sleep-phase: According to the updated generative model x = ygt+l + ?, y ,. . . ,
N(O, 1) , ? ,. . . , N(O,diag(a[+1))' generate a number of x and y. And update r
and 8 2 as,
+ a(y (38; + (1 rt
(3)
rTx)x
---=-(3)(y - rT x)2.
(4)
By iterating these phases, they try to find the MLE as the converged point.
For the following discussion, let us define two probability densities p and q, where p is the
density of the generative model, and q is that of the recognition model.
Let 0 = (g, E), and the generative model gives the density function of x and y as,
p(y,x; 0) = exp
(-~(y xT)A ( ~
) -1/J(O))
l+gTE-lgl_gTE-l)
A= (
-E 19
E 1
,1/J(O)
(5)
2
)
= 21(Llog a i+(n+l)log211"
,
while the recognition model gives the distribution of y conditional to x as the following,
q(ylx; "1) ,. . . , N(rT x, 8 2),
where, "1 = (r , 8 2 ). From the data xl,'" ,XN, we define,
1 N
C = N
XsXs T,
q(x),......, N(O, C).
L
s=l
With this q( x), we define q(y, x; "1) as,
q(y, x; "1)
1 (
B = -2
8
= q(x)q(ylx; "1) = exp
1
-r
I 2C _+rrrT )
8
1
(-
~(y xT)B ( ~
1 ' , 1/J("1)
) -1/J("1))
= -21 (log 8 2 + log ICI + (n + 1) log 211") .
(6)
241
Convergence of the Wake-Sleep Algorithm
3
THE EM AND THE em ALGORITHMS FOR A FACTOR
ANALYSIS MODEL
It is mentioned that the W-S algorithm is similar to the EM algorithm[ 4]([5][7]). But there
is an essential difference between them. In this section, first, we show the EM algorithm.
We also describe the em algorithm[2] which gives us the information geometrical understanding of the EM algorithm. With these results, we will show the difference between
W-S and the EM algorithms in the next section.
The EM algorithm consists of the following two steps.
E-step: Define Q(O, Ot) as,
1 N
Q(O,Ot)
=
N
2:
Ep(Yiz. ;8.)
[logp(y, xs; O)J
s=1
M-step: Update 0 as,
Ot+l
= argmaxQ(O, Ot),
8
gt+l =
T t"'-le t"'-1
gt L.Jt
L.Jt gt
T t"'-l
+ 1 + gt
L.Jt gt
'
Et+l
= diag ( C -
gt+l
gT E - 1e
t / -1
1 + gt E t gt
)
.
(7)
Ep [.J denotes taking the average with the probability distribution p. The iteration of these
two steps converges to give the MLE.
The EM algorithm only uses the generative model, but the em algorithm[2] also uses the
recognition model. The em algorithm consists of the e and m steps which are defined as the
e and m projections[l] between the two manifolds M and D. The manifolds are defined
as follows.
Model manifold M: M ~ {p(y, x; 0)10 = (g, diag(aD), 9 ERn , 0
< (Ii < oo}.
DatamanifoldD: D ~ {q(y,x;1J)I1J = (r,s2),r E Rn,O < S < oo},q(x) include the
matrix C which is defined by the data, and this is called the "data manifold".
D
M
Figure 1: Information geometrical understanding of the em algorithm
Figure 1 schematically shows the em algorithm. It consists of two steps, e and m steps. On
each step, parameters of recognition and generative models are updated respectively.
S. Ikeda. S. Amari and H. Nakahara
242
e-step: Update 7J as the e projection of p(y, x; 8d on D.
7Jt+1
= argminKL(q(7J)'p(8t ))
(8)
'1
rt+l =
hi 1 gt
T -1 '
1 + gt E t gt
2
St+l =
1
T -1 .
1 + gt h t gt
(9)
where K L(q(7J),p(8)) is the Kullback-Leiblerdivergence defined as,
KL(q(7J),p(8))
=
q~y,x:~~]
y,x,
E q (y ,:ll;'1) [log
p
m-step: Update 8 as the m projection of q(y, x; 7Jd on M.
8t+1 = argminKL(q(7Jt+1),P(8))
(10)
9
gt+1
=
Crt+l
T C
'
St+1 + r t+1 rt+1
= diag (C -
Et+1
2
T
gt+1rt+1C),
(11)
By substituting (9) for rt+1 and s;+1 in (11), it is easily proved that (1) is equivalent to
(7), and the em and EM algorithms are equivalent.
4 THE DIFFERENCE BETWEEN THE W-S AND THE EM
ALGORITHMS
The wake-phase corresponds to a gradient flow of the M-step[7] in the stochastic sense.
But the sleep-phase is not a gradient flow of the E-step. In order to see these clear, we show
the detail of the W-S phases in this section.
First, we show the averages of (1), (2), (3) and (4),
gt+1 = gt - 0.(8;
+ rTCrd (gt -
2 cr;c )
+ r t rt
Et+1 = ht - (1- [3) (ht - diag (C - 2(Crt)gT + (s;
rt+1 = rt - o.(Et+1
T
+ gt+1gt+1)
(12)
St
( rt -
+ rTCrdgtgT))
hi=i.\9t+1)
T
-1
1 + gt+1 ht+1gt+1
S;+1 = s; - (1-[3) (s; - ((1-g~1rd2 +rTht+1rt)).
(13)
(14)
(15)
As the K-L divergence is rewritten as K L(q(7J) ,p(8)),
K L(q(7J),p(8)) =
1
"2 tr (B - 1A)
n+l
- -2-
+ 'ljJ (8) - 'ljJ (7J),
the derivatives of this K -L divergence with respect to 8 = (g , h) are,
:gKL(q(7J)'P(8))
2 ((S2 +rTCr)h-l) (g -
8
8EK L(q(7J),p(8))
E- 2 (E - diag (C - 2Crg T
S2
+C;Tcr)
(16)
+ (S2 + rTCr)ggT)) ~17)
With these results, we can rewrite the wake-phase as,
a
8
gt+1 = gt - 2,Et Bg t KL(q(7Jt) , p(8t ))
2
8
Et+1 = ht - (1 - [3 )ht BEt K L(q(7Jd ,p(8r))
(18)
(19)
243
Convergence of the Wake-Sleep Algorithm
Since E is a positive definite matrix, the wake-phase is a gradient flow of m-step which is
defined as (0).
On the other hand, K L(p( 0), q( 1])) is,
KL(p(O),q(1]))
21 tr (A - 1 B) -"2n +1/1(1]) -1/1(0).
=
The derivatives of this K-L divergence respect to rand
82
are,
8
8r K L(p(O) , q(1]))
(20)
8
8(S2) K L(p(O) , q(1]))
(21)
Therefore, the sleep-phase can be rewritten as,
rt+l = rt -
0: 2
8
"2 8t 8rt K L(p(Ot+t} , q(1]t})
8;+1 = 8; - (1- {3)(SF)2 8(~F)KL(P(Ot+1),q(1]d).
(22)
(23)
These are also a gradient flow, but because of the asymmetricity of K-L divergence, (22),
(23) are different from the on-line version of the m-step. This is the essential difference
between the EM and W-S algorithms. Therefore, we cannot prove the convergence of the
W-S algorithm based on the similarity of these two algorithms[7].
11
I
D
KL(p(a),q (11?
KL(q(l1l.P
(a?
Figure 2: The Wake-Sleep algorithm
5
CONVERGENCE PROPERTY
We want to prove the convergence property of the W-S algorithm. If we can find a Lyapnov
function for the W-S algorithm, the convergence is guaranteed[7]. But we couldn't find it.
Instead of finding a Lyapnov function, we take the continuous time, and see the behavior
ofthe parameters and K-L divergence, K L(q(1]t),p(Ot)).
KL(q(1]),p(O)) is a function of g, r, E and 8 2 ? The derivatives with respect to 9 and E
are given in (16) and (17). The derivatives with respect to rand 8 2 are,
8
8r K L(q(1]),p(O))
(24)
8
8(8 2) K L(q(1]),p(O))
(25)
S. Ikeda, S. Amari and H. Nakahara
244
On the other hand, we set the flows of g, T, E and S2 to follow the updating due to the W-S
algorithm, that is,
d
(26)
dtg
d
-T
(27)
~E
(28)
~(S2)
(29)
dt
dt
dt
With theses results, dK L( q( 7]t), p( Ot)) / dt is,
dKL(q(7]t),p(Ot))
8KLdg
dt
= 7i9 dt
8KLdT
+ fiT dt
+
8KLdE
8E dt
+
8KL d(S2)
8(S2)--;{t?
(30)
First 3 terms in the right side of (30) are apparently non-positive. Only the 4th one is not
clear.
8K L d(S2)
I (2
8(S2) --;{t = -(3 St -
(
T
(1 - gt Tt)
+ TtT EtTt )) ( 1 + gtT Et-1 gt - sF1 )
2
= - 1+g[E;lgt(2
2
St - ( (1 - gtT Tt) 2 + TtT EtTt ))(2
St -
St
T1 -1 )
1 + gt E t gt
s;
The K L(q(7]t), p(Ot)) does not decrease when stays between ((1 - g[ Tt)2 + T[ EtTt)
and 1/(1 + g[ E;lgd. but if the following equation holds, these two are equivalent,
Tt
=
E;lgt
T
1
.
1 + gt E ; gt
(31)
s;
From the above results, the flows of g, T and E decrease K L(q(7]d, p( Ot)) at any time.
converge to ((1- g[ Tt)2 +T[ EtTt) but it does not always decrease K L(q( 7]t), p( Od). But
finally
converges to 1/ (1 +
since T does converge to satisfy (31) independently of
s;,
s;
g[ E;lgt).
6
DISCUSSION
This factor analysis model has a special property that p(ylx ; 0) and q(ylx; 7]) are equivalent
when following conditions are satisfied[7],
T
=
E - 1g
1 + gT E - 1g '
2
S
=
1
1 + gT E-1 g
---,=-----:--
(32)
From this property, minimizing K L(p( 0) , q( 7])) and K L( q( 7]) , p( 0)) with respect to 7]
leads to the same point.
KL(p(O) , q(7]))
=Ep(:Jl ;9)
K L(q(7]) ,p(O))
=Eq(:Jl)
[log p(x;O)]
q(x)
q(X)]
[log p(x
; 0)
;O)]
+ E p (y ,:Jl ;9) [log P(Y1x
q(ylx;7])
(33)
7])]
+ E q(y ,:Jl ;'1) [q(Y1X;
log p(ylx; 0) ,
(34)
both of (33) and (34) include 7] only in the second term of the right side. If (32) holds,
those two terms are O. Therefore K L(p( 0) , q( 7])) and K L(q(7]) , p(O)) are minimized at
the same point.
.
245
Convergence o/the Wake-Sleep Algorithm
We can use this result to modify the W-S algorithm. If the factor analysis model does not
try wake- and sleep- phase alternately but "sleeps we11" untill convergence, it will find the
TJ which is equivalent to the e-step in the em algorithm. Since the wake-phase is a gradient
flow of the m-step, this procedure will converge to the MLE. This algorithm is equivalent
to what is called the GEM(Generalized EM) algorithm[6].
The reason ofthe GEM and the W-S algorithms work is thatp(ylx; 6) is realizable with the
recognition model q(ylx; TJ). If the recognition model is not realizable, the W-S algorithm
won't converge to the MLE. We are going to show an example and conclude this article.
Suppose the case that the average of y in the recognition model is not a linear function of
r and x but comes through a nonlinear function f (.) as,
Recognition model
y = f (r T x) + <5,
where f(?) is a function of single input and output and 6 ,...., N(O,8 2 ) is the noise. In
this case, the generative model is not realizable by the recognition model in general.
And minimizing (33) with respect to TJ leads to a different point from minimizing (34).
K L(p( 6), q( TJ)) is minimized when rand 8 2 satisfies,
Ep(~;9) [J(rT x)f'(rT x)x] = Ep(y ,~;9) [Y1'(r T x)x]
(35)
= 1 - Ep(y,~;9) [-2yf(r T x) + f2(r T x)],
while KL(q(TJ),p(6)) is minimized when rand 8 2 satisfies,
(1 + gT E-1g)Eq(~;'I1) [f(r T x)1'(rT x)x] = Eq(~;'I1) [1'(r T x)xxT] E- 1g
82
(36)
(37)
1
(38)
1 + gT E-1 g
Here, l' (.) is the derivative of f (.). If f (.) is a linear function, l' (.) is a constant value and
(35), (36) and (37), (38) give the same TJ as (32), but these are different in general.
8
2
=
We studied a factor analysis model, and showed that the W-S algorithm works on this
model. From further analysis, we could show that the reason why the algorithm works
on the model is that the generative model is realizable by the recognition model. We also
showed that the W-S algorithm doesn't converge to the MLE if the generative model is not
realizable with a simple example.
Acknowledgment
We thank Dr. Noboru Murata for very useful discussions on this work.
References
[1] Shun-ichi Amari. Differential-Geometrical Methods in Statistics, volume 28 of Lecture
Notes in Statistics. Springer-Verlag, Berlin, 1985.
[2] Shun-ichi Amari. Information geometry of the EM and em algorithms for neural networks. Neural Networks, 8(9):1379-1408, 1995.
[3] Peter Dayan, Geoffrey E. Hinton, and Radford M. Neal. The Helmholtz machine.
Neural Computation, 7(5):889-904,1995.
[4] A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete
data via the EM algorithm. J. R. Statistical Society, Series B, 39:1-38, 1977.
[5] G. E. Hinton, P. Dayan, B. J. Frey, and R. M. Neal. The "wake-sleep" algorithm for
unsupervised neural networks. Science, 268:1158-1160,1995.
[6] Geoffrey J. McLachlan and Thriyambakam Krishnan. The EM Algorithm and Extensions. Wiley series in probability and statistics. John Wiley & Sons, Inc., 1997.
[7] Radford M. Neal and Peter Dayan. Factor analysis using delta-rule wake-sleep learning. Neural Computation, 9(8):1781-1803,1997.
| 1632 |@word version:4 come:1 society:1 seems:1 loading:1 stochastic:1 neal:4 rt:20 covariance:1 crt:2 ll:1 diagonal:1 jst:1 gradient:5 tr:2 shun:3 thank:1 berlin:1 won:1 generalized:1 series:2 manifold:4 tt:5 reason:3 invisible:1 wako:2 crg:1 extension:1 hold:2 geometrical:5 od:1 exp:2 minimizing:3 ikeda:4 john:1 visible:1 substituting:1 jp:3 volume:1 update:5 yiz:1 rd2:1 jl:4 generative:11 mclachlan:1 hinton:2 always:1 shiro:2 y1:1 rn:1 cr:1 similarity:3 bet:1 gtt:2 differential:1 gt:38 prove:4 consists:3 showed:2 kl:10 lgt:3 likelihood:2 verlag:1 theoretically:3 contrast:1 behavior:1 sense:1 realizable:5 thriyambakam:1 alternately:1 brain:5 dayan:4 seen:1 hidden:1 converge:5 going:1 i1:2 ii:1 overall:1 what:1 hebbian:1 rely:1 special:1 rtx:1 ggt:1 finding:1 mle:6 dkl:1 unsupervised:1 minimized:3 iteration:1 ljj:2 understanding:3 randomly:1 l1l:1 schematically:1 positive:3 t1:1 divergence:5 frey:1 modify:1 want:2 wake:14 phase:11 geometry:1 lecture:1 ot:11 argmaxq:1 geoffrey:2 studied:1 flow:7 article:4 rubin:1 tj:6 ygt:1 krishnan:1 acknowledgment:1 fit:1 definite:1 side:2 institute:2 procedure:1 incomplete:1 taking:1 empirical:1 projection:3 xn:3 doesn:1 peter:2 ici:1 cannot:2 logp:1 y1x:2 useful:1 iterating:1 clear:2 equivalent:6 saitama:2 ylx:8 kullback:1 go:3 independently:1 dtg:1 gem:2 generate:2 conclude:1 xi:1 sf1:1 rule:2 estimator:1 st:7 density:3 delta:1 continuous:1 why:1 stay:1 mode1:1 updated:2 yg:1 ichi:3 suppose:1 thesis:1 satisfied:1 us:2 choose:1 diag:7 tcr:1 dr:1 helmholtz:4 recognition:14 ht:5 updating:1 ek:1 derivative:5 s2:11 noise:3 japan:2 ep:6 accompanied:1 inc:1 satisfy:1 wiley:2 ad:1 bg:1 decrease:3 i9:1 try:2 thatp:1 mentioned:2 apparently:1 dempster:1 sf:1 xl:3 hi:2 guaranteed:1 ttt:2 sleep:13 rewrite:1 xt:2 jt:6 f2:1 murata:1 ofthe:2 x:1 dk:1 easily:1 essential:4 klde:1 xxt:1 riken:5 hiro:1 describe:3 ern:1 converged:1 according:2 gkl:1 couldn:1 slightly:1 em:29 son:1 valued:1 ty:1 amari:7 i1j:1 statistic:3 proof:1 springer:1 radford:2 laird:1 proved:3 corresponds:1 satisfies:2 equation:1 conditional:1 hiroyuki:1 nakahara:4 presto:1 dt:8 follow:1 rewritten:2 averaging:1 rand:4 llog:1 gte:1 called:2 convergence:13 hand:2 jd:2 denotes:2 nonlinear:1 converges:2 include:2 oo:2 noboru:1 untill:1 yf:1 expectationmaximization:1 eq:3 |
692 | 1,633 | Signal Detection in Noisy Weakly-Active
Dendrites
Amit Manwani and Christof Koch
{quixote,koch}@klab.caltech.edu
Computation and Neural Systems Program
California Institute of Technology
Pasadena, CA 91125
Abstract
Here we derive measures quantifying the information loss of a synaptic
signal due to the presence of neuronal noise sources, as it electrotonically
propagates along a weakly-active dendrite. We model the dendrite as an
infinite linear cable, with noise sources distributed along its length. The
noise sources we consider are thermal noise, channel noise arising from
the stochastic nature of voltage-dependent ionic channels (K+ and Na+)
and synaptic noise due to spontaneous background activity. We assess the
efficacy of information transfer using a signal detection paradigm where
the objective is to detect the presence/absence of a presynaptic spike from
the post-synaptic membrane voltage. This allows us to analytically assess
the role of each of these noise sources in information transfer. For our
choice of parameters, we find that the synaptic noise is the dominant
noise source which limits the maximum length over which information
be reliably transmitted.
1 Introduction
This is a continuation of our efforts (Manwani and Koch, 1998) to understand the information capacity ofa neuronal link (in terms of the specific nature of neural "hardware") by a
systematic study of information processing at different biophysical stages in a model of a
single neuron. Here we investigate how the presence of neuronal noise sources influences
the information transmission capabilities of a simplified model of a weakly-active dendrite.
The noise sources we include are, thermal noise, channel noise arising from the stochastic
nature of voltage-dependent channels (K+ and Na+) and synaptic noise due to spontaneous
background activity. We characterize the noise sources using analytical expressions of their
current power spectral densities and compare their magnitudes for dendritic parameters reported in literature (Mainen and Sejnowski, 1998). To assess the role of these noise sources
on dendritic integration, we consider a simplified scenario and model the dendrite as a lin-
133
Signal Detection in Noisy Weakly-Active Dendrites
,(y'L
lsynapse
_ _....~I
Cable
Optimal
Detector
Spike
No spike
Pe
\ / Measurement
v
tttttttttttt ttttttt t t t f
y
Noise Sources
x
Figure 1: Schematic diagram of a simplified dendritic channel. The dendrite is modeled a weaklyactive I-D cable with noise sources distributed along its length. Loss of signal fidelity as it propagates
from a synaptic location (input) y to a measurement (output) location x is studied using a signal
detection task. The objective is to optimally detect the presence of the synaptic input I (y, t) (in the
fonn ofa unitary synaptic event) on the basis of the noisy voltage wavefonn Vm(x, t), filtered by the
cable's Green's function and corrupted by the noise sources along the cable. The probability of error,
Pe is used to quantify task perfonnance.
ear, infinite, one-dimensional cable with distributed current noises. When the noise sources
are weak so that the corresponding voltage fluctuations are small, the membrane voltage
satisfies a linear stochastic differential equation satisfied. Using linear cable theory, we express the power spectral density of the voltage noise in terms of the Green's function of an
infinite cable and the current noise spectra. We use these results to quantify the efficacy of
information transfer under a "signal detection" paradigm 1 where the objective is to detect
the presence/absence of a presynaptic spike (in the form of an epsc) from the post-synaptic
membrane voltage along the dendrite. The formalism used in this paper is summarized in
Figure 1.
2 Neuronal Noise Sources
In this section we consider some current noise sources present in nerve membranes which
distort a synaptic signal as it propagates along a dendrite. An excellent treatment of membrane noise is given in DeFelice (1981) and we refer the reader to it for details. For a linear
one-dimensional cable, it is convenient to express quantities in specific length units. Thus,
we express all conductances in units of S/j.Lm and current power spectra in units of A 2 /Hz
j.Lm.
A. Thermal Noise
Thermal noise arises due to the random thermal agitation of electrical charges in a conductor and represents a fundamental lower limit of noise in a system. A conductor of
resistance R is equivalent to a noiseless resistor R in series with a voltage noise source
vth (t) of spectral density SVth (I) = 2kT R (V2 IHz), or a noiseless resistor R in parallel
with a cu.rrentnoise source, Ith(t) of spectral density SIth(l)
2kT / R (A2/ Hz), where k
is the Boltzmann constant and T is the absolute temperature of the conductor2. The transverse resistance Tm (units of 0 j.Lm) ofa nerve membrane is due to the combined resistance
of the lipid bilayer and the resting conductances of various voltage-gated, ligand-gated and
leak channels embedded in the lipid matrix. Thus, the current noise due to T m , has power
=
I For sake of brevity, we do not discuss the corresponding signal estimation paradigm as in Manwani and Koch (1998).
2Since the power spectra of real signals are even functions of frequency, we choose the doublesided convention for all power spectral densities.
A. Manwani and C. Koch
134
spectral density,
(1)
B. Channel Noise
Neuronal membranes contain microscopic voltage-gated and ligand-gated channels which
open and close randomly. These random fluctuations in the number of channels is another
source of membrane noise. We restrict ourselves to voltage-gated K+ and Na+ channels,
although the following can be used to characterize noise due to other types of ionic channels
as well. In the classical Hodgkin-Huxley formalism (Koch, 1998), a K+ channel consists
of four identical two-state sub-units (denoted by n) which can either be open or closed.
The K+ channel conducts only when all the sub-units are in their open states. Since the
sub-units are identical, the channel can be in one of five states; from the state in which
all the sub-units are closed to the open state in which all sub-units are open. Fluctuations
in the number of open channels cause a random K+ current IK of power spectral density
(DeFelice, 1981)
SIK(f)
2
m= 1]K'YK(V
2 4
EK) noo
~ (4)
(
f=t
i 1-
)i 4-i
28n /i
noo noo 1 + 411'2 j2(8 n /i)2'
(2)
where 1]K, 'YK and EK denote the K+ channel density (per unit length), the K+ single
channel conductance and the K+ reversal potential respectively. Here we assume that the
membrane voltage has been clamped to a value Vm . noo and 8n are the steady-state open
probability and relaxation time constant of a single K+ sub-unit respectively and are in
general non-linear functions of Vm (Koch, 1998). When Vm is close to the resting potential
Vrest (usually between -70 to -65 mV), noo ? 1 and one can simplify S I K (f) as
2
SI K(f) ~ 1]K'YK(Vrest
-
2 4
4
28 n /4
EK) noo (1- noo) 1 + 411'2 j2(8 n /4)2
(3)
Similarly, the Hodgkin-Huxley Na+ channel is characterized by three identical activation
sub-units (denoted by m) and an inactivation sub-unit (denoted by h). The Na+ channel
conducts only when all the m sub-units are open and the h sub-unit is not inactivated. Thus,
the Na+ channel can be in one of eight states from the state corresponding to all m subunits closed and the h sub-unit inactivated to the open state with all m sub-units open and
the h sub-unit not inactivated. moo (resp. h oo ) and 8m (resp. 8h ) are the corresponding
steady-state open probability and relaxation time constant of a single Na+ m (resp. h)
sub-unit respectively. For Vm ~ Vrest , moo ? 1, hoo ~ 1 and
2
(
SINa(f) ~ 1]Na'YNa Vrest
-
)2 3 (
)3 2
28m /3
ENa moo 1 - moo hoo 1 + 411'2 j2(8m /3)2
(4)
where 1]Na, 'YNa and ENa denote the Na+ channel density, the Na+ single channel conductance and the sodium reversal potential respectively.
C. Synaptic Noise
In addition to voltage-gated ionic channels, dendrites are also awash in ligand-gated synaptic receptors. We restrict our attention to fast voltage-independent (AMPA-like) synapses.
A commonly used function to represent the postsynaptic conductance change in response
to a presynaptic spike is the alpha function (Koch, 1998)
go:(t)
= gpeak e t e-t/tpeak,
0 :::; t
< 00
(5)
tpeak
where gpeak denotes the peak conductance change and tpeak the time-to-peak of the conductance change. We shall assume that for a spike train s(t) = ~j &(t - tj), the postsynaptic conductance is given gSyn(t) = ~j go:(t - tj). This ignores inter-spike interaction
/35
Signal Detection in Noisy Weakly-Active Dendrites
r?I
Figure 2: Schematic diagram of the equivalent electrical circuit of a linear dendritic cable. The
dendrite is modeled as an infinite ladder network. Ti (units ofOlJ-L m) denotes the longitudinal cytoplasmic resistance; em (units of FIJ-L m) and gL (units of SIJ-L m) denote the transverse membrane capacitance and conductance (due to leak channels with reversal potential E L) respectively. The membrane also contains active channels (K+, Na+) with conductances and reversal potentials denoted by
(gK, gNa) and (EK, ENa) respectively, and fast voltage-independent (AMPA-like) synapses with
conductance gSlI n and reversal potential Es lIn ?
and synaptic saturation. The synaptic current is given by iSyn(t) = 9Syn(t)(Vm - ESyn)
where ESyn is the synaptic reversal potential. If the spike train can be modeled as a homogeneous Poisson process with mean firing rate An, the power spectrum ofisyn(t) can be
computed using Campbell's theorem (Papoulis, 1991)
SISyn(f) = 7]Syn An(Vm - ESyn)2 I GnU)
where 7]Syn denotes the synaptic density and Gn(f) =
Fourier transform of 9n(t). Substituting for 9o(t) gives
S
ISyn
10
00
12
(6)
,
9n(t) exp(-j21r/t) dt is the
(/) A (e 9peak t peak(Vm - ESyn))2
- T/Syn n
(1 + 41r2 J2t;)2
(7)
3 Noise in Linear Cables
The linear infinite cable corresponding to a dendrite is modeled by the ladder network
shown in Figure 2. The membrane voltage Vm(x, t) satisfies the differential equation
(Tuckwell, 1988),
ri
aVm
[Cm 8t
+ 9K(Vm -
EK)
+ gSyn(Vm -
+ 9Na(Vm -
ESyn)
ENa)
+ gdVm -
Ed ]
(8)
Since the ionic conductances are random and nonlinearly related to Vm , eq. 8 is a nonlinear stochastic differential equation. If the voltage fluctuations (denoted by V) around
the resting potential Vrest are small, one can express the conductances as small deviations
(denoted by g) from their corresponding resting values and transform eq. 8 to
_ \2
/\
=
a 2V(x, t)
ax2
=
+T
aV(x, t)
at
+
(1
+
~)V( t) = In
U
x,
G
(9)
where A2
1/ (riG) and T
cm/G denote the length and time constant of the membrane respectively. G is the passive membrane conductance and is given by the sum of
the resting values of all the conductances. ~ = gK + gNa + gSyn/G represents the random changes in the membrane conductance due to synaptic and channel stochasticity; ~
A. Manwani and C. Koch
136
-27
-6
-28
<>--1>
-
_-29
f"'-30
J
i-31
Thermal
K'
Na'
Synaptic
-8
i
-9
?-10
~
.
r
!-32
'ii
.
-7
..
ll
~-12
~-33
':-13
-34
-14
-35
-15
-360
-16
0.5
1.5
2
25
f\'equancy (Hz) (Log Unitt)
3
3.5
0
0.5
1.5
2
2.5
3.5
f'l'equoncy (Hz) (Log 1.WIo)
Figure 3: (a) Comparison of current spectra 8r(f) of the four noise sources we consider. Synaptic
noise is the most dominant source source of noise and thermal noise, the smallest. (b) Voltage
noise spectrum of a I-D infinite cable due to the current noise sources. 8Vth (f) is also shown for
comparison. Summary of the ~arameters used (adopted from Mainen and Sejnowski, 1998) : Rm =
40 kOcm 2 , em =0.75 f..tF/cm , ri =200 Ocm, d (dend. dia.) =0.75 f..tm, 'TIK =2.3 f..tm-1, 'TINa =3
f..tm-1, 'TISyn =0.1 f..tm-1, EK =-95 mY, ENa =50 mY, ESyn =0 mY, EL =Vrest =70 mV, "IK
"INa =20 pS.
=
Vrest ) + lth denotes the total effective current noise due to the different noise sources. In
order to derive analytical closed-fonn solutions to eq. 9, we further assume that 8 < < 13 ,
which reduces it to the familiar one-dimensional cable equation with noisy current input
(Tuckwell, 1988). For resting initial conditions (no charge stored on the membrane at
t = 0) , V is linearly related to In and can be obtained by convolving In with the Green's
function 9 (x, y, t) of the cable for the appropriate boundary conditions. It has been shown
that V (x , t) is an asymptotically wide-sense stationary process (Tuckwell and Walsh, 1991)
and its power spectrum Sv(x, f) can be expressed in tenns of the power spectrum of In,
Sn(f) as
Sv(x, f) =
Sn(f)
----cJ2
1
00
'2'
-00
IQ(x, x ,f)1 dx
(10)
where Q(x, x' , f) is the Fouriertransfonn of g(x, x', t). For an infinite cable
,
e- T
- ( X-X' ) 2
,
g(X,X ,T) = J47rT e 4T
,-00 < X,X < 00,0::; T < 00
(11)
where X = X/A, X' = x' / A and T = t / T are the corresponding dimensionless variables.
Substituting for g(x, x' , t) we obtain
S (f) = Sn(f)
V
sin
(tan-l~211"f7"))
2 AG2 27r /T (1
+ (27r /T)2// 4
(12)
Since the noise sources are independent, Sn(f) = SIth(f) + SIK(f) + SINa(f) +
SISyn(f). Thus, eq. 12 allows us to compute the relative contribution of each of the noise
sources to the voltage noise. The current and voltage noise spectra for biophysically relevant parameter values (Mainen and Sejnowski, 1998) are shown in Figure 3.
3Using self-consistency, we find the assumption to be satisfied in our case. In general, it needs
verified on a case-by-case basis.
Signal Detection in Noisy Weakly-Active Dendrites
137
4 Signal Detection
The framework and notation used here are identical to that in Manwani and Koch (1998)
and so we refer the reader to it for details. The goal in the signal detection task is to
optimally decide between the two hypotheses
Ho
HI
: y(t) = n(t),
: y(t) = g(t) * s(t) + n(t),
0~t ~T
0~t ~T
Noise
Signal + Noise
(13)
where n(t), g(t) and s(t) denote the dendritic voltage noise, the Green's function of the
cable (function of the distance between the input and measurement locations) and the epsc
waveform (due to a presynaptic spike) respectively. The decision strategy which minimizes
the probability of error Pe = PoP, + PIPm, where Po and PI = (1 - Po) are the prior
probabilities of Ho and HI respectively, is
(14)
where A(y) = P[yIHIl/ P[yIHo] and ?0 = Po/(l - Po). P, and Pm denote the
false alarm and miss probability respectively. Since n(t) arises due to the effect of several independent noise sources, by invoking the Central Limit theorem, we can assume
Hl
that n(t) is Gaussian, for which eq. 14 reduces to r ~
'T}.
r = Jooo y(t) hd( -t) dt
Ho
is a correlation between y(t) and the matched filter hd(t), given in the Fourier domain as Hd(f) = e- j21r ,Tg*(f)S*(f)/ Sn(f). g(f) and S(f) are Fourier transforms
of g(t) and s(t) respectively and Sn(f) is the noise power spectrum. The conditional
means and variances of the Gaussian variable r under Ho and HI are 110 = 0,111 =
J~oo IG(f)S(f)12 / Sn(f) df and (76 = (7~ = (72 = 111 respectively. The error probabilities are given by P, = J1/oo P[rIHo] dr and Pm = J~oo P[rlHtJ dr. The optimal value of the threshold 'T} depends on (7 and the prior probability Po. For equiprobable hypotheses (Po = 1 - Po = 0.5), the optimal 'T} = (110 + 111)/2 = (72/2 and
Pe = 0.5 Erfc[(7/2V2]. One can also regard the overall decision system as an effective
binary channel. Let M and D be binary variables which take values in the set {Ho, Hd
and denote the input and output of the dendritic channel respectively. Thus, the system
performance can equivalently be assessed by computing the mutual information between
M and D, I(M; D) = 1i(po (1- Pm) + (1- Po) PI) -Po1i(Pm) - (1- Po),H.(P, (Cover
and Thomas, 1991) where 1i (x) is the binary entropy function. For equi-probable hypotheses, I(M; D) = 1 -1i(Pe ) bits. It is clear from the plots for Pe and I(M; D) (Figure 4)
as a function of the distance between the synaptic (input) and the measurement (output)
location that an epsc. can be detected with almost certainty at short distances, after which,
there is a rapid decrease in detectability with distance. Thus, we find that membrane noise
may limit the maximum length of a dendrite over which information can be transmitted
reliably.
5 Conclusions
In this study we have investigated how neuronal noise sources might influence and limit the
ability of one-dimensional cable structures to propagate information. When extended to realistic dendritic geometries, this approach can help address questions as, is the length of the
apical dendrite in a neocortical pyramidal cell limited by considerations of signal-to-noise,
which synaptic locations on a dendritic tree (if any) are better at transmitting information,
what is the functional significance of active dendrites (Yuste and Tank, 1996) and so on.
Given the recent interest in dendritic properties, it seems timely to apply an informationtheoretic approach to study dendritic integration. In an attempt to experimentally verify
138
A. Manwani and C. Koch
0. 51-----.-----=::::==:::::::::::::;;~
,,'" ,,'"... ,,-
-.p.lS
!!:.
~ 0.3
I
"
I
w
15 0.25
I
\ \
\ '
\ \
\
....
I
/ /
I ,.
0.1 5
I
0.1
I
,
,-
\
\
"
\,
\
/
.
\
\ \
\
\ '
\ \
;
?
I
/
0.2
0.05
",
/' " " , .-
0.4
f
\\
,;':: "::.;--~=:---~-:."!'-
0.45
,/
\
\
I
0.3
?
0.2
I
//
0.1
"I ,I
\
\
\
'
'.
\
'
\
\ \ \.,
" "' .
........
1500
-:.. .:-.:.- .. -
~~---~~~--~~-1~~~---~1~
l(jlm)
l(jlm)
Figure 4: Infonnation loss in signal detection. (a) Probability of Error (Pe ) and (b) Mutual infonnation (I(M;D? for an infinite cable as a function of distance from the synaptic input location. Almost
perfect detection occurs for small distances but perfonnance degrades steeply over larger distances
as the signal-to-noise ratio drops below some threshold. This suggests that dendritic lengths may be
ultimately limited by signal-to-noise considerations. Epsc. parameters: gpeak= 0.1 nS, tpeak = 1.5
msec and ESyn = 0 mY. N syn is the number of synchronous synapses which activate in response to
a pre-synaptic action potential.
the validity of our results, we are currently engaged in a quantitative comparison using
neocortical pyramidal cells (Manwani et ai, 1998).
Acknowledgements
This research was supported by NSF, NIMH and the Sloan Center for Theoretical Neuroscience. We thank Idan Segev, Elad Schneidman, Moo London, YosefYarom and Fabrizio
Gabbiani for illuminating discussions.
References
DeFelice, LJ. (1981) Membrane Noise. New York: Plenum Press.
Cover, T.M., and Thomas, J.A. (1991) Elements of Information Theory. New York: Wiley.
Koch, C. (1998) Biophysics of Computation: Information Processing in Single Neurons. Oxford
University Press.
Mainen, Z.F. and Sejnowski, TJ. (1998) "Modeling active dendritic processes in pyramidal neurons,"
In: Methods in Neuronal Modeling: From Ions to Networks, Koch, C. and Segev, I., eds., Cambridge:
MIT Press.
Manwani, A. and Koch, C. (1998) "Synaptic transmission: An infonnation-theoretic perspective,"
In: Kearns, M., Jordan, M. and Solla, S., eds., Advances in Neural Information Processing Systems,"
Cambridge: MIT Press.
Manwani, A., Segev, I., Yarom, Y and Koch, C. (1998) "Neuronal noise sources in membrane patches
and linear cables," In: Soc. Neurosci. Abstr.
Papoulis, A. (1991) Probability, Random Variables and Stochastic Processes. New York: McGrawHill.
TuckweII, H.C. (1988) Introduction to Theoretical Neurobiology: I. New York: Cambridge University Press.
Tuckwell, H.C. and Walsh, J.B. (1983) "Random currents through nerve membranes I. Unifonn
poisson or white noise current in one-dimensional cables," Bioi. Cybem. 49:99-110.
Yuste, R. and Tank, D. W. (1996) "Dendritic integration in mammalian neurons, a century after Cajal,"
| 1633 |@word cu:1 seems:1 open:11 propagate:1 invoking:1 fonn:2 papoulis:2 initial:1 series:1 efficacy:2 contains:1 mainen:4 longitudinal:1 current:15 si:1 activation:1 dx:1 moo:5 realistic:1 j1:1 plot:1 drop:1 stationary:1 ith:1 short:1 filtered:1 equi:1 location:6 five:1 along:6 differential:3 ik:2 consists:1 inter:1 rapid:1 notation:1 matched:1 circuit:1 what:1 cm:3 minimizes:1 jlm:2 certainty:1 quantitative:1 kocm:1 ti:1 ofa:3 charge:2 rm:1 unit:22 christof:1 agitation:1 limit:5 receptor:1 oxford:1 fluctuation:4 firing:1 might:1 studied:1 suggests:1 limited:2 walsh:2 gpeak:3 convenient:1 pre:1 close:2 influence:2 dimensionless:1 equivalent:2 center:1 go:2 attention:1 l:1 hd:4 century:1 plenum:1 resp:3 spontaneous:2 tan:1 homogeneous:1 hypothesis:3 element:1 mammalian:1 role:2 electrical:2 epsc:4 rig:1 solla:1 decrease:1 yk:3 leak:2 nimh:1 ultimately:1 weakly:6 basis:2 po:10 various:1 train:2 fast:2 effective:2 activate:1 sejnowski:4 london:1 detected:1 cytoplasmic:1 larger:1 elad:1 ability:1 transform:2 noisy:6 biophysical:1 analytical:2 interaction:1 j2:3 relevant:1 abstr:1 transmission:2 p:1 perfect:1 help:1 derive:2 oo:4 iq:1 eq:5 soc:1 quantify:2 convention:1 waveform:1 vrest:7 fij:1 filter:1 stochastic:5 dendritic:13 probable:1 koch:15 klab:1 around:1 exp:1 lm:3 substituting:2 a2:2 smallest:1 estimation:1 f7:1 tik:1 currently:1 infonnation:3 gabbiani:1 tf:1 mit:2 gaussian:2 inactivation:1 voltage:22 steeply:1 detect:3 sense:1 dependent:2 el:1 lj:1 pasadena:1 tank:2 overall:1 fidelity:1 denoted:6 integration:3 mutual:2 identical:4 represents:2 gsyn:3 simplify:1 equiprobable:1 randomly:1 cajal:1 familiar:1 geometry:1 ourselves:1 attempt:1 detection:11 conductance:16 interest:1 investigate:1 tj:3 kt:2 perfonnance:2 conduct:2 tree:1 theoretical:2 formalism:2 modeling:2 gn:1 cover:2 tg:1 deviation:1 apical:1 characterize:2 reported:1 optimally:2 stored:1 corrupted:1 sv:2 my:4 combined:1 density:10 fundamental:1 peak:4 jooo:1 systematic:1 vm:13 transmitting:1 na:14 central:1 ear:1 satisfied:2 choose:1 dr:2 ek:6 convolving:1 potential:9 summarized:1 sloan:1 mv:2 depends:1 ax2:1 closed:4 mcgrawhill:1 capability:1 parallel:1 timely:1 contribution:1 ass:3 variance:1 weak:1 biophysically:1 ionic:4 detector:1 synapsis:3 synaptic:24 ed:3 distort:1 quixote:1 frequency:1 treatment:1 syn:5 campbell:1 nerve:3 dt:2 response:2 stage:1 correlation:1 nonlinear:1 effect:1 validity:1 contain:1 verify:1 manwani:10 analytically:1 tuckwell:4 white:1 ll:1 sin:1 self:1 steady:2 neocortical:2 theoretic:1 temperature:1 passive:1 consideration:2 functional:1 resting:6 measurement:4 refer:2 cambridge:3 ai:1 ena:5 consistency:1 pm:4 similarly:1 ocm:1 stochasticity:1 dominant:2 recent:1 perspective:1 scenario:1 isyn:2 binary:3 tenns:1 caltech:1 transmitted:2 paradigm:3 schneidman:1 signal:19 ii:1 reduces:2 characterized:1 lin:2 pipm:1 post:2 biophysics:1 schematic:2 noiseless:2 poisson:2 df:1 represent:1 cell:2 ion:1 background:2 addition:1 diagram:2 avm:1 source:28 pyramidal:3 hz:4 jordan:1 unitary:1 presence:5 j2t:1 restrict:2 tm:5 synchronous:1 expression:1 effort:1 resistance:4 york:4 cause:1 action:1 clear:1 transforms:1 hardware:1 continuation:1 cj2:1 nsf:1 unifonn:1 neuroscience:1 arising:2 per:1 fabrizio:1 detectability:1 shall:1 express:4 four:2 threshold:2 verified:1 asymptotically:1 relaxation:2 sum:1 hodgkin:2 almost:2 reader:2 decide:1 patch:1 decision:2 bit:1 gnu:1 hi:3 activity:2 huxley:2 segev:3 ri:2 sake:1 fourier:3 membrane:20 hoo:2 em:2 postsynaptic:2 wavefonn:1 cable:21 hl:1 electrotonically:1 
sij:1 equation:4 discus:1 reversal:6 dia:1 adopted:1 eight:1 apply:1 v2:2 spectral:7 appropriate:1 ho:5 thomas:2 denotes:4 include:1 tina:1 amit:1 erfc:1 yarom:1 classical:1 objective:3 capacitance:1 question:1 quantity:1 spike:9 occurs:1 strategy:1 degrades:1 microscopic:1 distance:7 link:1 thank:1 capacity:1 presynaptic:4 gna:2 length:9 modeled:4 ratio:1 equivalently:1 esyn:7 gk:2 reliably:2 boltzmann:1 gated:7 av:1 neuron:4 thermal:7 subunit:1 extended:1 neurobiology:1 transverse:2 nonlinearly:1 california:1 pop:1 address:1 usually:1 below:1 program:1 saturation:1 green:4 power:11 event:1 sodium:1 technology:1 ladder:2 sn:7 prior:2 literature:1 acknowledgement:1 relative:1 embedded:1 loss:3 ina:1 yuste:2 illuminating:1 propagates:3 sik:2 pi:2 summary:1 gl:1 supported:1 understand:1 institute:1 wide:1 absolute:1 distributed:3 regard:1 boundary:1 ignores:1 commonly:1 simplified:3 ig:1 alpha:1 informationtheoretic:1 active:9 cybem:1 spectrum:10 channel:28 nature:3 transfer:3 ca:1 inactivated:3 dendrite:17 excellent:1 investigated:1 ampa:2 domain:1 significance:1 linearly:1 neurosci:1 noise:61 alarm:1 neuronal:8 wiley:1 n:1 sub:14 msec:1 resistor:2 clamped:1 pe:7 theorem:2 specific:2 r2:1 false:1 arameters:1 magnitude:1 entropy:1 idan:1 vth:2 expressed:1 ligand:3 satisfies:2 conditional:1 lth:1 goal:1 bioi:1 quantifying:1 absence:2 change:4 experimentally:1 infinite:8 conductor:2 miss:1 kearns:1 total:1 engaged:1 defelice:3 e:1 wio:1 noo:7 arises:2 sina:2 lipid:2 brevity:1 assessed:1 tpeak:4 |
693 | 1,634 | Maximum-Likelihood Continuity Mapping
(MALCOM): An Alternative to HMMs
David A. Nix
dnix@lanl.gov
Computer Research & Applications
CIC-3, MS B265
Los Alamos National Laboratory
Los Alamos, NM 87545
John E. Hogden
hogden@lanl.gov
Computer Research & Applications
CIC-3, MS B265
Los Alamos National Laboratory
Los Alamos, NM 87545
Abstract
We describe Maximum-Likelihood Continuity Mapping (MALCOM), an
alternative to hidden Markov models (HMMs) for processing sequence
data such as speech. While HMMs have a discrete "hidden" space constrained by a fixed finite-automaton architecture, MALCOM has a continuous hidden space-a continuity map-that is constrained only by a
smoothness requirement on paths through the space. MALCOM fits into
the same probabilistic framework for speech recognition as HMMs, but
it represents a more realistic model of the speech production process.
To evaluate the extent to which MALCOM captures speech production
information, we generated continuous speech continuity maps for three
speakers and used the paths through them to predict measured speech
articulator data. The median correlation between the MALCOM paths
obtained from only the speech acoustics and articulator measurements
was 0.77 on an independent test set not used to train MALCOM or the
predictor. This unsupervised model achieved correlations over speakers and articulators only 0.02 to 0.15 lower than those obtained using an
analogous supervised method which used articulatory measurements as
well as acoustics..
1 INTRODUCTION
Hidden Markov models (HMMs) are generally considered to be the state of the art in speech
recognition (e.g., Young, 1996). The strengths of the HMM framework include a rich mathematical foundation, powerful training and recognition algorithms for large speech corpora,
and a probabilistic framework that can incorporate statistical phonology and syntax (Morgan & Bourlard, 1995). However, HMMs are known to be a poor model of the speech
production process. While speech production is a continuous, temporally evolving process, HMMs treat speech production as a discrete, finite-state system where the current
state depends only on the immediately preceding state. Furthermore, while HMMs are
designed to capture temporal information as state transition probabilities, Bourlard et al.,
Maximum-Likelihood Continuity Mapping (MALCOM) : An Alternative to HMMs
745
(1995) suggest that when the transition probabilities are replaced by constant values, recognition results do not significantly deteriorate. That is, while transitions are often considered
the most perceptually relevent component of speech, the conventional HMM framework is
poor at capturing transition information.
Given these deficiencies, we are considering alternatives to the HMM approach that maintain its strengths while improving upon its weaknesses. This paper describes one such
model called Maximum-Likelihood Continuity Mapping (MALCOM). We first review a
general statistical framework for speech recognition so that we can compare the HMM and
MALCOM formulations. Then we consider what the abstract hidden state represents in
MALCOM, demonstrating empirically that the paths through MALCOM's hidden space
are closely related to the movements of the speech production articulators.
2 A GENERAL FRAMEWORK FOR SPEECH RECOGNITION
Consider an unknown speech waveform that is converted by a front-end signal-processing
module into a sequence of acoustic vectors X. Given a space of possible utterances, W,
the task of speech recognition is to return the most likely utterance W * given the observed
acoustic sequence X . Using Bayes' rule this corresponds to
(1)
In recognition, P(X) is typically ignored because it is constant over all W, and the posterior P(WIX) is estimated as the product of the prior probability of the word sequence,
P(W), and the probability that the observed acoustics were generated by the word sequence, P(XI W) . The prior P(W) is estimated by a language model, while the production
probability P(X IW) is estimated by an acoustic model. In continuous speech recognition,
the product of these terms must be maximized over W; however, in this paper, we will restrict our attention to the form of the acoustical model only. Every candidate utterance W
corresponds to a sequence of word/phone models M w such that P(XIW) = P(XIM w ),
and each M w considers all possible paths through some "hidden" space. Thus, for each
candidate utterance, we must calculate
P(X IM w) =
i
P(XIY , Mw )P(YIM w )dY ,
(2)
where Y is some path through the hidden space .
2.1
HIDDEN MARKOV MODELS
Because HMMs are finite-state machines with a given fixed architecture, the path Y
through the hidden space corresponds to series of discrete states, simplifying the integral
of Eq. (2) to a sum. However, to avoid computing the contribution of all possible paths, the
Viterbi approximation-considering only the single path that maximizes Eq. (2)-is frequently used without much loss in recognition performance (Morgan & Bourlard, 1995).
Thus,
(3)
P(XIM w ) ~ ar!?"ymaxP(XIY, Mw)P(YIM w).
The first term corresponds to the product of the emission probabilities of the acoustics given
the state sequence and is typically estimated by mixtures of high-dimensional Gaussian
densities. The second term corresponds to the product of the state transition probabilities.
However, because Bourlard et al. (1995) found that this second term contributes little to
recognition performance, the modeling power of the conventional HMM must reside in
the first term. Training the HMM system involves estimating both the emission and the
D. A. Nix and J. E. Hogden
746
transition probabilities from real speech data. The Baum-Welchlforward-backward algorithm (e.g., Morgan & Scofield, 1994) is the standard computationally efficient algorithm
for iteratively estimating these distributions.
2.2 MAXIMUM-LIKELIHOOD CONTINUITY MAPPING (MALCOM)
In contrast to HMMs, the multi-dimensional MALCOM hidden space is continuous-there
are an infinite number states and paths through them. While the HMM is constrained by
a fixed architecture, MALCOM is constrained by the notion of continuity of the hidden
path. That is, the path must be smooth and continuous: it may not carry any energy above
a given cutoff frequency. Unlike the discrete path in an HMM, the smooth hidden path
in MALCOM attempts to emulate the motion of the speech articulators in what we call a
continuity map (CM).
Unless we know how to evaluate the integral of Eq. (2) (which we currently do not), we
must also make the Viterbi approximation and approximate P(XIM w ) by considering
only the single path that maximizes the likelihood of the acoustics X given the utterance
model M w , resulting in Eq. (3) once again. Analogously, the first term, P(XIY, M w ),
corresponds to the acoustic generation probability given the hidden path, and the second
term corresponds to the probability of the hidden path given the utterance model. This
paper focuses on the first term because this is the term that produces conventional HMM
performance. 1
Common to all Mw is a set of N probability density functions (pdfs) <I> that define the CM
hidden space, modeling the likelihood of Y given X for an N -code vector quantization
(VQ) of the acoustic space. Because these pdfs are defined over the low-dimensional CM
space instead of the high-dimensional acoustic space (e.g., 6 vs. 40+), MALCOM requires
many fewer parameters to be estimated than the corresponding HMM.
3
THE MALCOM ALGORITHM
We now turn to developing an algorithm to estimate both the CM pdfs <I> and the corresponding paths Y that together maximize the likelihood of a given time series of acoustics,
C = P(X IY , <1? . This is an extension of the method first proposed by Hogden (1995), in
which he instead maximized P(YIX , <1? using vowel data from a single speaker. Starting
with random but smooth Y , the MALCOM training algorithm generates a CM by iterating
between the following two steps: (1) Given Y, reestimate <I> to maximize C; and (2) Given
<1>, reestimate smooth paths Y to maximize ?.
3.1
LOG LIKELIHOOD FUNCTION
To specify the log likelihood function C, we make two dependence claims and one independence assumption . First we claim that Yt depends (to at least some small extent) on all
other Y in the utterance, an expression of the continuity constraint described above. We
make another natural claim that Xt depends on Yt, that the path configuration at time t influences the corresponding acoustics. However, we do make the conditional independence
assumption that
n
C
= P(XIY, <1? =
IT P(XtIYt , <1?.
(4)
t=l
Note that Eq. (4) does not assume that each Xt is independent OfXt-l (as is often assumed
in data modeling); it only assumes that the conditioning of Xt on Yt is independent from
IHowever, we are currently developing a model of P(YIM w ) to replace the corresponding (and
useless) term in the conventional HMM formulation as well (Hogden et al. , 1998).
Maximum-Likelihood Continuity Mapping (MALCOM): An Alternative to HMMs
747
t-l to t. For example, because Xt depends on Yt, Yt depends on all othery (the smoothness
constraint), and Xt-1 depends on Yt-1, Xt is not assumed to be independent of all other xs
in the utterance.
With a log transformation and an invocation of Bayes' rule, we obtain the MALCOM log
likelihood function:
n
InL:
=
L [InP(Ytlxt,
<1?
+ InP(xt)
-InP(Ytl<l?]?
(5)
t=1
We model each P(Yt IXt, <1? by a probability density function (pdf) p[Ytlxt, <l>j (Xt)], where
the particular model <l>j depends on which of the N VQ codes Xt is assigned to. Here we
use a simple multi-dimensional Gaussian for each pdf, but we are currently exploring the
use of multi-modal mixtures of Gaussians to represent the pdfs for sounds such as stop
consonants for which the inverse map from acoustics to articulation may not be unique
(Nix, 1998). Next, we need an estimate of P(Ytl<l?, which can be obtained by summing
over all VQ partitions: P(Ytl<l? :::::: L~=l p(YtIXj, <l>j)P(Xj). We estimate P(Xj) by
calculating the relative frequency of each acoustic code in the VQ codebook.
3.2
PDF ESTIMATION
For step (1) of training, we use gradient-based optimization to reestimate the means of the
Gaussian pdfs for each acoustic partition, where the gradient of Eq.(5) with respect to the
mean of pdf i is
V' JL.In L: =
L
E;l(Yt -JLi) _
tEx(t)=x.
t {L~=1 p[Y~Xj,
t=1
<I>(Xj)]P(Xj)E j 1(Yt - JLj)}
Lj=l p[Ytlxj, <I>(Xj)]P(Xj)
(6)
where E is the covariance matrix for each pdf. For the results in this paper, we use a common radially symmetric covariance matrix for all pdfs and reestimate the covariance matrix
after each path optimization step.2 In doing the optimization, we employ the following algorithm:
1. Make an initial guess of each JLi as the means of the path configurations corresponding to the observed acoustics X E Xt .
2. Construct V' JL In L: by considering Eq. (6) over all N acoustic partitions.
3. Determine a search direction for the optimization using, for example, conjugate
gradients and perform a line search along this direction (Press et al., 1988).
4. Repeat steps [2]-[3] until convergence.
To avoid potential degenerate solutions, after each pdf optimization step, the dimensions
of the CM are orthogonalized. Furthermore, because the scale of the continuity map is
meaningless (only its topological arrangement matters), the N pdfmeans are scaled to zero
mean, unit variance before each path optimization step.
3.3
PATH ESTIMATION
For step (2) of training, we use gradient-based optimization to reestimate Y, where the
gradient of the log likelihood function with respect to a specific Yt is given by
V'y InL: = V'y,p[Ytlxt,<I>(xt)] _ V'y, F~=IP[YtIXj,<I>(Xj)]P(Xj)
,
p[YtIXt, <I>(Xt)]
Lj=1 p[Ytlxj, <I>(Xj)]P(Xj)
(7)
2However, we are currently exploring the effects of individual and diagonal covariance matrices.
748
D. A. Nix and J. E. Hogden
In doing the optimization, we employ the following gradient-based algorithm:
1. Make an initial guess of the path yO as the means of the pdfs corresponding to
the observed acoustic sequence X.
2. Low pass filter yo.
3. Construct \1y InC by considering Eq. (7) over all t.
4. Determine a search direction for the optimization using, for example, conjugate
gradients (Press et at., 1988).
5. Low-pass filter this search direction using the same filter as in step [2].
6. Perform a line search along the filtered direction (Press et at., 1988).
7. Repeat steps [3]-[6] until convergence.
Because neither the line search direction nor the initial estimate yO contains energy above
the cutoff frequency of the low-pass filter, their linear addition-the next estimate of Y will not contain energy above the cutoff frequency either. Thus, steps [2] and [5] implement
the desired smoothness constraint.
4
COMPARNG MALCOM PATHS TO SPEECH ARTICULATION
To evaluate our claim that MALCOM paths are topologically related to articulator motions,
we construct a regression predictor from Y to measured articulator data using the training
data and test the quality of this predictor on an independent test set.
Our speech corpus consists of data from two male and one female native speakers of German. This data was obtained from Dr. Igor Zlokarnik and recorded at the Technical University of Munich, Germany using electro-magnetic articulography (EMA) (Perkell et al.,
1992). Each speaker's articulatory measurements and acoustics were recorded for the same
108 sentences, where each sentence was about 4 seconds long.
The acoustics were recorded using a room-placed microphone and sampled using 16-bit
resolution at 16 kHz. Prior to receiving the data from Munich, the data were resampled at
11025 Hz. To represent the acoustic signal in compact vector time-series, we used 256-sample (23.2 msec) Hamming-windowed frames, with a new frame starting every 5.8 msec
(75% overlap). We transform each frame into a 13th-order LPC-cepstral coefficient vector
a_t (12 cepstral features plus log gain; see Morgan & Scofield, 1994). A full acoustical
feature vector x_t consists of a window of seven frames such that x_t is made up of the frames
{a_{t−6}, a_{t−4}, a_{t−2}, a_t, a_{t+2}, a_{t+4}, a_{t+6}}. To VQ the acoustic space we used the classical k-means algorithm (e.g., Bishop, 1995), but we used 512 codes to model the vowel data,
and 256 codes each to model the stop consonants, the fricatives, the nasals, and the liquids
(1536 codes combined).³
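In code, the feature construction and VQ step amount to stacking shifted cepstral frames and clustering them. The following is an illustrative reimplementation under our own naming, not the original pipeline:

import numpy as np
from scipy.cluster.vq import kmeans2

def stack_frames(A, offsets=(-6, -4, -2, 0, 2, 4, 6)):
    """Build x_t = {a_{t-6}, a_{t-4}, ..., a_{t+6}} from cepstra A of shape (T, 13)."""
    T = A.shape[0]
    pad = max(abs(o) for o in offsets)
    Apad = np.pad(A, ((pad, pad), (0, 0)), mode='edge')   # edge-pad the sequence ends
    return np.hstack([Apad[pad + o: pad + o + T] for o in offsets])  # (T, 7*13)

# One codebook per broad phonetic class, e.g. 512 codes for vowel frames:
# X = stack_frames(A)[vowel_mask]
# centroids, labels = kmeans2(X, 512, minit='++')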
The articulatory data consist of the (x, y) coordinates of 4 coils along the tongue and the y-coordinates of coils on the jaw and lower lip. Figure 1 illustrates the approximate location
of each coil. The data were originally sampled at 250 Hz but were resampled to 172.26 Hz
to match one articulatory sample for each 75%-overlapping acoustic frame of 256 samples.
The articulatory data were subsequently low-pass filtered at 15 Hz to remove measurement
noise.
Sentences 1-90 were used as a training set, and sentences 91-108 were withheld for evaluation. A separate CM was generated for each speaker using the training data. We used
an 8 Hz cutoff frequency because the measured articulatory data had very little energy
above 8 Hz, and a 6-dimensional continuity map was used because the first six principal
components capture 99% of the variance of the corresponding articulator data (Nix, 1998).
³This acoustic representation and VQ scheme were determined to work well for modeling real
articulator data (Nix, 1998), so they were used here as well.
[Figure: midsagittal outline of the head showing the approximate EMA coil positions: tongue tip (T1), tongue middle (T2), tongue dorsum (T3), tongue back (T4), lower lip (LL), and lower jaw (LJ).]
Figure 1: Approximate positions of EMA coils for speech articulation measurements.
Because the third term in Eq. (5) is computationally complex, we approximated Eq. (5)
by only its first term (the second term is constant during training) until ln L, calculated
at the end of each iteration using all terms, started to decrease. At this point we started
using both the first and third terms of Eq. (5). In each pdf and path optimization step,
our convergence criterion was when the maximum movement of a mean or a path was
< 10⁻⁴. Our convergence criterion for the entire algorithm was when the correlation of
the paths from one full iteration of pdf and path optimization to another was > 0.99 in all
dimensions. This usually took about 30 iterations.
To evaluate the extent to which MALCOM hidden paths capture information related to
articulation, we used the same training set to estimate a non-linear regression function
from the output generated by MALCOM to the corresponding measured articulator data.
We used an ensemble of 10 single-hidden-layer, 32-hidden-unit, multi-layer perceptrons
trained on different 2/3-training, 1/3-early-stopping partitions of the training set, where
the results of the ensemble on the test set were averaged (e.g., Bishop, 1995). A linear
regression produced results approximately 10% worse than those we report here.
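The regression stage can be reproduced with standard tools. The following sketch is our own (using scikit-learn, which the paper does not specify); it trains ten 32-unit networks, each with its own early-stopping third of the training data, and averages their test predictions.

import numpy as np
from sklearn.neural_network import MLPRegressor

def ensemble_predict(X_train, Y_train, X_test, n_nets=10):
    preds = []
    for k in range(n_nets):
        net = MLPRegressor(hidden_layer_sizes=(32,),
                           early_stopping=True,       # holds out a fraction for stopping
                           validation_fraction=1/3,
                           max_iter=2000,
                           random_state=k)            # different split per network
        net.fit(X_train, Y_train)                     # Y: measured articulator coordinates
        preds.append(net.predict(X_test))
    return np.mean(preds, axis=0)                     # average the ensemble outputs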
To contrast with the unsupervised MALCOM method, we also tested a supervised method
in which the articulatory data were available for training as well as evaluation. This involved
only the pdf optimization step of MALCOM because the paths were fixed as the articulator
measurements. The resulting pdfs were then used in the path optimization step to determine
paths for the test data acoustics. We could then measure what fraction of this supervised
performance the unsupervised MALCOM attained.
5
RESULTS AND CONCLUSIONS
The results of this regression on the test set are plotted in Figure 2. The MALCOM paths
had a median correlation of 0.77 with the actual articulator data, compared to 0.84 for
the comparable supervised method. Thus, using only the speech acoustics, MALCOM
generated continuity maps with correlations to real articulator measurements only 0.02 to
0.15 lower than the corresponding supervised model which used articulatory measurements
as well as acoustics.
Given that (1) MALCOM fits into the same probabilistic framework for speech recognition
as HMMs and (2) MALCOM's hidden paths capture considerable information about the
speech production process, we believe that MALCOM will prove to be a viable alternative
to the HMM for speech processing tasks. Our current work emphasizes developing a word
model to complete the MALCOM formulation and test a full speech recognition system.
Furthermore, MALCOM is applicable to any other task to which HMMs can be applied,
[Figure 2 is a bar chart: the y-axis shows correlation (ticks at 0.2 and 0.8) and the x-axis shows the articulator dimensions T1x, T1y, T2x, T2y, T3x, T3y, T4x, T4y, LLy, and LJy.]
Figure 2: Correlation between estimated and actual articulator trajectories on the independent test set, averaged across speakers. Each full bar is the performance of the supervised
analogue of MALCOM, and the horizontal line on each bar is the performance of MALCOM
itself.
including fraud detection (Hogden, 1997) and text processing.
Acknowledgments
We would like to thank James Howse and Mike Mozer for their helpful comments on this manuscript
and Igor Zlokarnik for sharing his data with us. This work was performed under the auspices of the
U.S. Department of Energy.
References
Bishop, C.M. (1995). Neural Networks for Pattern Recognition, NY: Oxford University Press, Inc.
Bourlard, H., Konig, Y., & Morgan, N. (1995). "REMAP: Recursive estimation and maximization of
a posteriori probabilities, application to transition-based connectionist speech recognition,"
International Computer Science Institute Technical Report TR-94-064.
Hogden, J. (1995). "Improving on hidden Markov models: an articulatorily constrained, maximum-likelihood approach to speech recognition and speech coding," Los Alamos National Laboratory Technical Report, LA-UR-96-3945.
Hogden, J. (1997). "Maximum likelihood continuity mapping for fraud detection," Los Alamos
National Laboratory Technical Report, LA-UR-97-992.
Hogden, J., Nix, D.A., Gracco, V., & Rubin, P. (1998). "Stochastic word models for articulatorily
constrained speech recognition and synthesis," submitted to Acoustical Society of America
Conference, 1998.
Morgan, N. & Bourlard, H.A. (1995). "Neural Networks for Statistical Recognition of Continuous
Speech," Proceedings of the IEEE, 83(5), 742-770.
Morgan, D.P., & Scofield, C.L. (1992). Neural Networks and Speech Processing, Boston, MA:
Kluwer Academic Publishers.
Nix, D.A. (1998). Probabilistic methods for inferring vocal-tract articulation from speech acoustics,
Ph.D. Dissertation, U. of CO at Boulder, Dept. of Computer Science, in preparation.
Perkell, J.S., Cohen, M.H., Svirsky, M.A., Matthies, M.L., Garabieta, I., & Jackson, M.T.T. (1992).
"Electromagnetic midsagittal articulometer systems for transducing speech articulatory
movements," Journal of the Acoustical Society of America, 92(6), 3078-3096.
Press, W.H., Teukolsky, S.A., Vetterling, W.T., & Flannery, B.P. (1988). Numerical Recipes in C,
Cambridge University Press.
Young, SJ. (1996). "A review of large-vocabulary continuous speech recognition," IEEE Signal
Processing Magazine, September, 45-57.
694 | 1,635 | Spike-Based Compared to Rate-Based
Hebbian Learning
Richard Kempter*
Institut für Theoretische Physik
Technische Universität München
D-85747 Garching, Germany
Wulfram Gerstner
Swiss Federal Institute of Technology
Center of Neuromimetic Systems, EPFL-DI
CH-1015 Lausanne, Switzerland
J. Leo van Hemmen
Institut für Theoretische Physik
Technische Universität München
D-85747 Garching, Germany
Abstract
A correlation-based learning rule at the spike level is formulated,
mathematically analyzed, and compared to learning in a firing-rate
description. A differential equation for the learning dynamics is
derived under the assumption that the time scales of learning and
spiking can be separated. For a linear Poissonian neuron model
which receives time-dependent stochastic input we show that spike
correlations on a millisecond time scale play indeed a role. Correlations between input and output spikes tend to stabilize structure
formation, provided that the form of the learning window is in
accordance with Hebb's principle. Conditions for an intrinsic normalization of the average synaptic weight are discussed.
1
Introduction
Most learning rules are formulated in terms of mean firing rates, viz., a continuous
variable reflecting the mean activity of a neuron. For example, a 'Hebbian' (Hebb
1949) learning rule which is driven by the correlations between presynaptic and
postsynaptic rates may be used to generate neuronal receptive fields (e.g., Linsker
1986, MacKay and Miller 1990, Wimbauer et al. 1997) with properties similar to
those of real neurons. A rate-based description, however, neglects effects which are
due to the pulse structure of neuronal signals. During recent years experimental and
* email: kempter@physik.tu-muenchen.de (corresponding author)
theoretical evidence has accumulated which suggests that temporal coincidences
between spikes on a millisecond or even sub-millisecond scale play an important
role in neuronal information processing (e.g., Bialek et al. 1991, Carr 1993, Abeles
1994, Gerstner et al. 1996). Moreover, changes of synaptic efficacy depend on
the precise timing of postsynaptic action potentials and presynaptic input spikes
(Markram et al. 1997, Zhang et al. 1998). A synaptic weight is found to increase if
presynaptic firing precedes a postsynaptic spike, and to decrease otherwise. In contrast
to the standard rate models of Hebbian learning, the spike-based learning rule
discussed in this paper takes these effects into account. For mathematical details
and numerical simulations the reader is referred to Kempter et al. (1999) .
2
Derivation of the Learning Equation
2.1
Specification of the Hebb Rule
We consider a neuron that receives input from N ≫ 1 synapses with efficacies J_i,
1 ≤ i ≤ N. We assume that changes of J_i are induced by pre- and postsynaptic
spikes. The learning rule consists of three parts. (i) Let t_i^m be the time of the m-th
input spike arriving at synapse i. The arrival of the spike induces the weight J_i to
change by an amount w^in which can be positive or negative. (ii) Let t^n be the n-th
output spike of the neuron under consideration. This event triggers the change of all
N efficacies by an amount w^out which can also be positive or negative. (iii) Finally,
time differences between input and output spikes influence the change of the efficacies. Given
a time difference s = t_i^m − t^n between input and output spikes, J_i is changed by an
amount W(s) where the learning window W is a real-valued function (Fig. 1). The
learning window can be motivated by local chemical processes at the level of the
synapse (Gerstner et al. 1998, Senn et al. 1999). Here we simply assume that such
a learning window exists and take some (arbitrary) functional dependence W(s).
Figure 1: An example of a learning window W as a function of the delay s =
t_i^m − t^n between a postsynaptic firing time
t^n and presynaptic spike arrival t_i^m at
synapse i. Note that for s < 0 the presynaptic spike precedes postsynaptic firing.
Starting at time t with an efficacy J_i(t), the total change ΔJ_i(t) = J_i(t + T) − J_i(t)
in a time interval T is calculated by summing the contributions of all input and
output spikes in the time interval [t, t + T]. Describing the input spike train at
synapse i by a series of δ functions, S_i^in(t) = Σ_m δ(t − t_i^m), and, similarly, output
spikes by S^out(t) = Σ_n δ(t − t^n), we can formulate the rules (i)-(iii):

ΔJ_i(t) = ∫_t^{t+T} dt' [ w^in S_i^in(t') + w^out S^out(t') + ∫_t^{t+T} dt'' W(t'' − t') S_i^in(t'') S^out(t') ]    (1)
2.2
Separation of Time Scales
The total change ΔJ_i(t) is subject to noise due to stochastic spike arrival and,
possibly, stochastic generation of output spikes. We therefore study the expected
development of the weights J_i, denoted by angular brackets. We make the substitution s = t'' − t' on the right-hand side of (1), divide both sides by T, and take
the expectation value:
⟨ΔJ_i⟩(t)/T = (1/T) ∫_t^{t+T} dt' [ w^in ⟨S_i^in⟩(t') + w^out ⟨S^out⟩(t') ]
            + (1/T) ∫_t^{t+T} dt' ∫_{t−t'}^{t+T−t'} ds W(s) ⟨S_i^in(t' + s) S^out(t')⟩    (2)
We may interpret ⟨S_i^in⟩(t) for 1 ≤ i ≤ N and ⟨S^out⟩(t) as instantaneous firing
rates.¹ They may vary on very short time scales, shorter, e.g., than average
interspike intervals. Such a model is consistent with the idea of temporal coding,
since it does not rely on temporally averaged mean firing rates.

We note, however, that due to the integral over time on the right-hand side of (2)
temporal averaging is indeed important. If T is much larger than typical interspike
intervals, we may define mean firing rates ν_i^in(t) = \overline{⟨S_i^in⟩}(t) and ν^out(t) = \overline{⟨S^out⟩}(t),
where we have used the notation \overline{f}(t) = T^{-1} ∫_t^{t+T} dt' f(t'). The mean firing rates
must be distinguished from the previously defined instantaneous rates ⟨S_i^in⟩ and
⟨S^out⟩ which are defined as an expectation value and have a high temporal resolution. In contrast, the mean firing rates ν_i^in and ν^out vary slowly (time scale of the
order of T) as a function of time.
If the learning time T is much larger than the width of the learning window, the
integration over s in (2) can be extended to run from -00 to 00 without introducing
a noticeable error. With the definition of a temporally averaged correlation,
C_i(s; t) := (1/T) ∫_t^{t+T} dt' ⟨S_i^in(t' + s) S^out(t')⟩ = \overline{⟨S_i^in(t + s) S^out(t)⟩},    (3)
the last term on the right of (2) reduces to ∫_{−∞}^{∞} ds W(s) C_i(s; t). Thus, correlations
between pre- and postsynaptic spikes enter spike-based Hebbian learning through
C_i convolved with the learning window W. We remark that the correlation C_i(s; t)
may change as a function of s on a fast time scale. Note that, by definition, s < 0
implies that a presynaptic spike precedes the output spike, and this is when we
expect (for excitatory synapses) a positive correlation between input and output.
As usual in the theory of Hebbian learning, we require learning to be a slow process.
The correlation C_i can then be evaluated for a constant J_i and the left-hand side
of (2) can be rewritten as a differential on the slow time scale of learning,

(d/dt) J_i(t) ≡ J̇_i = w^in ν_i^in(t) + w^out ν^out(t) + ∫_{−∞}^{∞} ds W(s) C_i(s; t)    (4)
2.3
Relation to Rate-Based Hebbian Learning
In neural network theory, the hypothesis of Hebb (Hebb 1949) is usually formulated
as a learning rule where the change of a synaptic efficacy J_i depends on the correlation between the mean firing rate ν_i^in of the i-th presynaptic and the mean firing
rate ν^out of a postsynaptic neuron, viz.,

J̇_i = a_0 + a_1 ν_i^in + a_2 ν^out + a_3 ν_i^in ν^out + a_4 (ν_i^in)² + a_5 (ν^out)²,    (5)

where a_0, a_1, a_2, a_3, a_4, and a_5 are proportionality constants. Apart from the decay
term a_0 and the 'Hebbian' term ν_i^in ν^out proportional to the product of input and
1 An example of rapidly changing instantaneous rates can be found in the auditory
system . The auditory nerve carries noisy spike trains with a stochastic intensity modulated
at the frequency of the applied acoustic tone. In the barn owl, a significant modulation of
the rates is seen up to a frequency of 8 kHz (e.g., Carr 1993).
output rates, there are also synaptic changes which are driven separately by the pre- and postsynaptic rates. The parameters a_0, ..., a_5 may depend on J_i. Equation (5)
is a general formulation up to second order in the rates; see, e.g., (Linsker 1986).

To get (5) from (4) two approximations are necessary. First, if there are no correlations between input and output spikes apart from the correlations contained in the
rates, we can approximate ⟨S_i^in(t + s) S^out(t)⟩ ≈ ⟨S_i^in⟩(t + s) ⟨S^out⟩(t). Second, if
these rates change slowly as compared to T, then we have C_i(s; t) ≈ ν_i^in(t + s) ν^out(t).
Since we have assumed that the learning time T is long compared to the width of
the learning window, we may simplify further and set ν_i^in(t + s) ≈ ν_i^in(t), hence
∫_{−∞}^{∞} ds W(s) C_i(s; t) ≈ W̃(0) ν_i^in(t) ν^out(t), where W̃(0) = ∫_{−∞}^{∞} ds W(s). We may
now identify W̃(0) with a_3. By further comparison of (5) with (4) we identify
w^in with a_1 and w^out with a_2, and we are able to reduce (4) to (5) by setting
a_0 = a_4 = a_5 = 0.
The above set of assumptions which is necessary to derive (5) from (4) does,
however, not hold in general. According to the results of Markram et al. (1997) the
width of the learning window in cortical pyramidal cells is in the range of ≈ 100 ms.
A mean rate formulation thus requires that all changes of the activity are slow on
a time scale of 100 ms. This is not necessarily the case. The existence of oscillatory
activity in the cortex in the range of 50 Hz implies activity changes every 20 ms.
Much faster activity changes on a time scale of 1 ms and below are found in the
auditory system (e.g., Carr 1993). Furthermore, beyond the correlations between
mean activities additional correlations between spikes may exist; see below. Because
of all these reasons, the learning rule (5) in the simple rate formulation is insufficient.
In the following we will study the full spike-based learning equation (4).
3
Stochastically Spiking Neurons
3.1
Poisson Input and Stochastic Neuron Model
To proceed with the analysis of (4) we need to determine the correlations C_i between
input spikes at synapse i and output spikes. The correlations depend strongly on
the neuron model under consideration. To highlight the main points of learning
we study a linear inhomogeneous Poisson neuron as a toy model. Input spike
trains arriving at the N synapses are statistically independent and generated by an
inhomogeneous Poisson process with time-dependent intensities ⟨S_i^in⟩(t) = λ_i^in(t),
with 1 ≤ i ≤ N. A spike arriving at t_i^m at synapse i evokes a postsynaptic potential
(PSP) with time course ε(t − t_i^m) which we assume to be excitatory (EPSP). The
amplitude is given by the synaptic efficacy J_i(t) > 0. The membrane potential u of
the neuron is the linear superposition of all contributions

u(t) = u_0 + Σ_{i=1}^{N} Σ_m J_i(t) ε(t − t_i^m)    (6)
where u_0 is the resting potential. Output spikes are assumed to be generated
stochastically with a time dependent rate λ^out(t) which depends linearly upon the
membrane potential

λ^out(t) = β[u(t)]_+ = ν_0 + Σ_{i=1}^{N} Σ_m J_i(t) ε(t − t_i^m)    (7)

with a linear function β[u]_+ = β_0 + β_1 u for u > 0 and zero otherwise. After the
second equality sign, we have formally set ν_0 = u_0 + β_0 and β_1 = 1. ν_0 > 0 can
be interpreted as the spontaneous firing rate. For excitatory synapses a negative
u is impossible and that's what we have used after the second equality sign. The
sums run over all spike arrival times at all synapses. Note that the spike generation
process is independent of previous output spikes. In particular, the Poisson model
does not include refractoriness.
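Equations (6) and (7) are easy to simulate directly. The following sketch is our own illustration, not code from the paper; time is discretized with step dt, and the EPSP kernel and parameter values are arbitrary choices.

import numpy as np

def simulate_neuron(lam_in, J, eps, nu0, dt=1e-3, seed=0):
    """lam_in: (N, T) input intensities lambda_i^in(t) in Hz; J: (N,) efficacies;
    eps: (K,) EPSP kernel sampled on the same time grid; nu0: spontaneous rate in Hz."""
    rng = np.random.default_rng(seed)
    N, T = lam_in.shape
    s_in = rng.random((N, T)) < lam_in * dt            # Poisson input spike trains
    drive = np.zeros(T)                                # superposition of EPSPs, Eq. (6)
    for i in range(N):
        drive += J[i] * np.convolve(s_in[i].astype(float), eps)[:T]
    lam_out = np.maximum(nu0 + drive, 0.0)             # linear output rate, Eq. (7)
    s_out = rng.random(T) < lam_out * dt               # inhomogeneous Poisson output
    return s_in, s_out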
In the context of (4), we are interested in the expectation values for input and
output. The expected input is ⟨S_i^in⟩(t) = λ_i^in(t). The expected output is

⟨S^out⟩(t) = ν_0 + Σ_i J_i(t) ∫_0^∞ ds ε(s) λ_i^in(t − s).    (8)

The expected output rate in (8) depends on the convolution of ε with the input rates.
In the following we will denote the convolved rates by Λ_i^in(t) = ∫_0^∞ ds ε(s) λ_i^in(t − s).
Next we consider the expected correlations between input and output, ⟨S_i^in(t + s) S^out(t)⟩, which we need in (3):

⟨S_i^in(t + s) S^out(t)⟩ = λ_i^in(t + s) [ ν_0 + J_i(t) ε(−s) + Σ_j J_j(t) Λ_j^in(t) ]    (9)
The first term inside the square brackets is the spontaneous output rate. The second
term is the specific contribution of an input spike at time t + s to the output rate
at t. It vanishes for s > 0 (Fig. 2). The sum in (9) contains the mean contributions
of all synapses to an output spike at time t. Inserting (9) in (3) and assuming the
weights J_j to be constant in the time interval [t, t + T] we obtain

C_i(s; t) = Σ_j J_j(t) \overline{λ_i^in(t + s) Λ_j^in(t)} + \overline{λ_i^in(t + s)} [ ν_0 + J_i(t) ε(−s) ].    (10)
For excitatory synapses, the second term gives for s < 0 a positive contribution
to the correlation function, as it should. (Recall that s < 0 means that a
presynaptic spike precedes postsynaptic firing.)
Figure 2: Interpretation of the term in square
brackets in (9). The dotted line is the contribution of an input spike at time t + s to the
output rate as a function of t', viz., J_i(t) ε(t' − t − s). Adding this to the mean rate contribution, ν_0 + Σ_j J_j(t') Λ_j^in(t') (dashed line), we
obtain the rate inside the square brackets of
(9) (full line). At time t' = t the contribution
of an input spike at time t + s is J_i(t) ε(−s).
3.2
Learning Equation
The assumption of identical and constant mean input rates, \overline{λ_i^in}(t) = ν_i^in(t) = ν^in
for all i, reduces the number of free parameters in (4) and eliminates all effects of
rate coding. We introduce r_i^in(t) := [W̃(0)]^{-1} ∫_{−∞}^{∞} ds W(s) λ_i^in(t + s) and define

Q_ij(t) := W̃(0) [ \overline{r_i^in(t) Λ_j^in(t)} − (ν^in)² ].    (11)

Using (8), (10), (11) in (4) we find for the evolution on the slow time scale of learning

J̇_i(t) = k_1 + Σ_j J_j(t) [ Q_ij(t) + k_2 + k_3 δ_ij ],    (12)

where
k_1 = [ w^out + W̃(0) ν^in ] ν_0 + w^in ν^in    (13)
k_2 = [ w^out + W̃(0) ν^in ] ν^in    (14)
k_3 = ν^in ∫ ds ε(−s) W(s).    (15)
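For a given correlation matrix Q_ij and constants k_1, k_2, k_3, the expected weight dynamics (12) can be integrated numerically. The minimal forward-Euler sketch below is ours, with illustrative parameter handling:

import numpy as np

def evolve_weights(J0, Q, k1, k2, k3, dt=1e-3, steps=5000):
    """Integrate dJ_i/dt = k1 + sum_j J_j (Q_ij + k2 + k3 delta_ij), Eq. (12)."""
    N = len(J0)
    A = Q + k2 + k3 * np.eye(N)     # combined linear operator acting on J
    J = np.array(J0, dtype=float)
    for _ in range(steps):
        J = J + dt * (k1 + A @ J)   # forward Euler on the slow time scale
    return J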
4
Discussion
Equation (12), which is the central result of our analysis, describes the expected
dynamics of synaptic weights for a spike-based Hebbian learning rule (1) under the
assumption of a linear inhomogeneous Poisson neuron. Linsker (1986) has derived a
mathematically equivalent equation starting from (5) and a linear graded response
neuron, a rate-based model. An equation of this type has been analyzed by MacKay
and Miller (1990). The difference between Linsker's equation and (12) is, apart from
a slightly different notation, the term k_3 δ_ij and the interpretation of Q_ij.
4.1
Interpretation of Q_ij

In (12) correlations between spikes on time scales down to milliseconds or below
can enter the driving term Q_ij for structure formation; cf. (11). In contrast to that,
Linsker's ansatz is based on a firing rate description, where the term Q_ij contains
correlations between mean firing rates only. In his Q_ij term, mean firing rates take
the place of r_i^in and Λ_j^in. If we use a standard interpretation of rate coding, a mean
firing rate corresponds to a temporally averaged quantity with an averaging window
of a hundred milliseconds or more.
Formally, we could define mean rates by temporal averaging with either ε(s) or
W(s) as the averaging window. In this sense, Linsker's 'rates' have been made
more precise by (11). Note, however, that (11) is asymmetric: one of the rates
should be convolved with ε, the other one with W.
4.2
Relevance of the k_3 term

The most important difference between Linsker's rate-based learning rule and our
Eq. (12) is the existence of a term k_3 ≠ 0. We now argue that for a causal chain of
events k_3 ∝ ∫ dx ε(x) W(−x) must be positive. [We have set x = −s in (15).] First,
without loss of generality, the integral can be restricted to x > 0 since ε(x) is a
response kernel and vanishes for x < 0. For excitatory synapses, ε(x) is positive for
x > 0. Second, experiments on excitatory synapses show that W(s) is positive for
s < 0 (Markram et al. 1997, Zhang et al. 1998). Thus the integral ∫ dx ε(x) W(−x)
is positive, and so is k_3.
There is also a more general argument for k_3 > 0 based on a literal interpretation of
Hebb's statement (Hebb 1949). Let us recall that s < 0 in (15) means that a presynaptic spike precedes postsynaptic spiking. For excitatory synapses, a presynaptic
spike which precedes postsynaptic firing may be the cause of the postsynaptic activity. [As Hebb puts it, it has 'contributed in firing the postsynaptic cell'.] Thus,
the Hebb rule 'predicts' that for excitatory synapses W(s) is positive for s < 0.
Hence, k_3 = ν^in ∫ ds ε(−s) W(s) > 0 as claimed above.

A positive k_3 term in (12) gives rise to an exponential growth of weights. Thus any
existing structure in the distribution of weights is enhanced. This contributes to the
stability of weight distributions, especially when there are few and strong synapses
(Gerstner et al. 1996).
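This sign argument is easy to verify numerically. The kernel shapes below are our own illustrative choices (an alpha-function EPSP and an exponential learning window with W(s) > 0 for s < 0); only the qualitative features named in the text matter.

import numpy as np

s = np.linspace(-0.2, 0.2, 4001)                   # time axis in seconds
ds = s[1] - s[0]
tau = 0.01
eps = np.where(s > 0, (s / tau) * np.exp(1 - s / tau), 0.0)      # eps(x) = 0 for x < 0
W = np.where(s < 0, np.exp(s / 0.02), -0.5 * np.exp(-s / 0.02))  # Hebbian window
nu_in = 20.0                                       # mean input rate in Hz
k3 = nu_in * np.sum(np.interp(-s, s, eps) * W) * ds   # Eq. (15)
print(k3 > 0)   # True: eps(-s) is nonzero only for s < 0, where W(s) > 0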
4.3
Intrinsic Normalization
Let us suppose that no input synapse is special and impose the (weak) condition
that N^{−1} Σ_i Q_ij = Q_0 > 0 independent of the synapse index j. We find then from
(12) that the average weight J_0 := N^{−1} Σ_i J_i has a fixed point J_0 = −k_1/[Q_0 +
k_2 + N^{−1} k_3]. The fixed point is stable if Q_0 + k_2 + N^{−1} k_3 < 0. We have shown
above that k_3 > 0. Furthermore, Q_0 > 0 according to our assumption. The only
way to enforce stability is therefore a term k_2 which is sufficiently negative. Let
us now turn to the definition of k_2 in (14). To achieve k_2 < 0, either W̃(0) (the
integral over W) must be sufficiently negative; this corresponds to a learning rule
which is, on the average, anti-Hebbian. Or, for W̃(0) > 0, the linear term w^out in
(1) must be sufficiently negative. In addition, for excitatory synapses a reasonable
fixed point J_0 has to be positive. For a stable fixed point this is only possible for
k_1 > 0, which, in turn, implies w^in to be sufficiently positive; cf. (13).
Intrinsic normalization of synaptic weights is an interesting property, since it allows
neurons to stay at an optimal operating point even while synapses are changing.
Auditory neurons may use such a mechanism to stay during learning in the regime
where coincidence detection is possible (Gerstner et al. 1996, Kempter et al. 1998).
Cortical neurons might use the same principles to operate in the regime of high
variability (Abbott, invited NIPS talk, this volume).
4.4
Conclusions
Spike-based learning is different from simple rate-based learning rules. A spike-based learning rule can pick up correlations in the input on a millisecond time
scale. Mathematically, the main difference to rate-based Hebbian learning is the
existence of a k_3 term which accounts for the causal relation between input and
output spikes. Correlations between input and output spikes on a millisecond time
scale play a role and tend to stabilize existing strong synapses.
References
Abeles M., 1994, In Domany E. et al., editors, Models of Neural Networks II, pp.
121-140, New York. Springer.
Bialek W. et al., 1991, Science, 252:1855-1857.
Carr C. E., 1993, Annu. Rev. Neurosci., 16:223-243.
Gerstner W. et al., 1996, Nature, 383:76-78.
Gerstner W. et al., 1998, In W. Maass and C. M. Bishop, editors, Pulsed Neural
Networks, pp. 353-377, Cambridge. MIT Press.
Hebb D.O., 1949, The Organization of Behavior. Wiley, New York.
Kempter R. et al., 1998, Neural Comput., 10:1987-2017.
Kempter R. et al., 1999, Phys. Rev. E, in press.
Linsker R., 1986, Proc. Natl. Acad. Sci. USA, 83:7508-7512.
MacKay D. J. C., Miller K. D., 1990, Network, 1:257-297.
Markram H. et al., 1997, Science, 275:213-215.
Senn W. et al., 1999, preprint, Univ. Bern.
Wimbauer S. et al., 1997, Biol. Cybern., 77:453-461.
Zhang L.I. et al., 1998, Nature, 395:37-44.
695 | 1,636 | Neural Computation with Winner-Take-All as
the only Nonlinear Operation
Wolfgang Maass
Institute for Theoretical Computer Science
Technische Universität Graz
A-8010 Graz, Austria
email: maass@igi.tu-graz.ac.at
http://www.cis.tu-graz.ac.at/igi/maass
Abstract
Everybody "knows" that neural networks need more than a single layer
of nonlinear units to compute interesting functions. We show that this is
false if one employs winner-take-all as nonlinear unit:
?
Any boolean function can be computed by a single k-winner-take-all unit applied to weighted sums of the input variables.
?
Any continuous function can be approximated arbitrarily well by
a single soft winner-take-all unit applied to weighted sums of the
input variables.
?
Only positive weights are needed in these (linear) weighted sums.
This may be of interest from the point of view of neurophysiology,
since only 15% of the synapses in the cortex are inhibitory. In addition it is widely believed that there are special microcircuits in the
cortex that compute winner-take-all.
?
Our results support the view that winner-take-all is a very useful
basic computational unit in Neural VLSI:
o
o
o
it is well-known that winner-take-all of n input variables can
be computed very efficiently with 2n transistors (and a total wire length and area that is linear in n) in analog VLSI
[Lazzaro et al., 1989]
we show that winner-take-all is not just useful for special purpose computations, but may serve as the only nonlinear unit for
neural circuits with universal computational power
we show that any multi-layer perceptron needs quadratically in
n many gates to compute winner-take-all for n input variables,
hence winner-take-all provides a substantially more powerful
computational unit than a perceptron (at about the same cost
of implementation in analog VLSI).
Complete proofs and further details to these results can be found in
[Maass, 2000].
1 Introduction
Computational models that involve competitive stages have so far been neglected in computational complexity theory, although they are widely used in computational brain models,
artificial neural networks, and analog VLSI. The circuit of [Lazzaro et al., 1989] computes
an approximate version of winner-take-all on n inputs with just 2n transistors and wires
of length O(n), with lateral inhibition implemented by adding currents on a single wire of
length O(n). Numerous other efficient implementations of winner-take-all in analog VLSI
have subsequently been produced. Among them are circuits based on silicon spiking neurons ([Meador and Hylander, 1994], [Indiveri, 1999]) and circuits that emulate attention in
artificial sensory processing ([Horiuchi et al., 1997], [Indiveri, 1999]). Preceding analytical
results on winner-take-all circuits can be found in [Grossberg, 1973] and [Brown, 1991].

We will analyze in section 4 the computational power of the most basic competitive computational operation: winner-take-all (= 1-WTA_n). In section 2 we will discuss the somewhat
more complex operation k-winner-take-all (k-WTA_n), which has also been implemented
in analog VLSI [Urahama and Nagao, 1995]. Section 3 is devoted to soft winner-take-all,
which has been implemented by [Indiveri, 1999] in analog VLSI via temporal coding of
the output.
Our results show that winner-take-all is a surprisingly powerful computational module
in comparison with threshold gates (= McCulloch-Pitts neurons) and sigmoidal gates.
Our theoretical analysis also provides answers to two basic questions that have been
raised by neurophysiologists in view of the well-known asymmetry between excitatory
and inhibitory connections in cortical circuits: how much computational power of neural
networks is lost if only positive weights are employed in weighted linear sums, and how
much learning capability is lost if only the positive weights are subject to plasticity.
2
Restructuring Neural Circuits with Digital Output
We investigate in this section the computational power of a k-winner-take-all gate computing the function
k-WTA_n : ℝ^n → {0, 1}^n, mapping inputs x_1, ..., x_n ∈ ℝ to outputs b_1, ..., b_n ∈ {0, 1}

with

b_i = 1 ⟺ x_i is among the k largest of the inputs x_1, ..., x_n

[precisely: b_i = 1 ⟺ x_j > x_i holds for at most k − 1 indices j].
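In software this gate is a one-liner. The sketch below is our own Python transcription of the bracketed definition (so ties count toward the winners); the names are illustrative.

import numpy as np

def k_wta(x, k):
    """b_i = 1 iff x_j > x_i holds for at most k - 1 indices j."""
    x = np.asarray(x, dtype=float)
    exceed = (x[None, :] > x[:, None]).sum(axis=1)   # |{j : x_j > x_i}|
    return (exceed <= k - 1).astype(int)

# k_wta([0.3, 1.2, 0.7, 1.2], k=2) -> array([0, 1, 0, 1])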
Theorem 1. Any two-layer feedforward circuit C (with m analog or binary input
variables and one binary output variable) consisting of threshold gates (= perceptrons) can be simulated by a circuit W consisting of a single k-winner-take-all gate
k-WTA_n¹ applied to weighted sums of the input variables with positive weights. This holds
for all digital inputs, and for analog inputs except for some set S ⊆ ℝ^m of inputs that has
measure 0.

In particular, any boolean function f : {0, 1}^m → {0, 1}
can be computed by a single k-winner-take-all gate applied to positive weighted sums of
the input bits.
Remarks
1. If C has polynomial size and integer weights, whose size is bounded by a polynomial in m, then the number of linear gates S_i in W can be bounded by a polynomial
in m, and all weights in the simulating circuit W are natural numbers whose size
is bounded by a polynomial in m.
2. The exception set of measure 0 in this result is a union of finitely many hyperplanes in ℝ^m. One can easily show that this exception set S of measure 0 in
Theorem 1 is necessary.
3. Any circuit that has the structure of W can be converted back into a 2-layer threshold circuit, with a number of gates that is quadratic in the number of weighted
sums (= linear gates) in W. This relies on the construction in section 4.
Proof of Theorem 1: Since the outputs of the gates on the hidden layer of C are from
{0, 1}, we can assume without loss of generality that the weights a_1, ..., a_n of the output gate G of C are from {−1, 1} (see for example [Siu et al., 1995] for details; one first
observes that it suffices to use integer weights for threshold gates with binary inputs, one
can then normalize these weights to values in {−1, 1} by duplicating gates on the hidden
layer of C). Thus for any circuit input z ∈ ℝ^m we have C(z) = 1 ⟺ Σ_{j=1}^{n} a_j G_j(z) ≥ Θ,
where G_1, ..., G_n are the threshold gates on the hidden layer of C, a_1, ..., a_n are from
{−1, 1}, and Θ is the threshold of the output gate G. In order to eliminate the negative
weights in G we replace each gate G_j for which a_j = −1 by another threshold gate Ĝ_j so
that Ĝ_j(z) = 1 − G_j(z) for all z ∈ ℝ^m except on some hyperplane.² We set Ĝ_j := G_j
for all j ∈ {1, ..., n} with a_j = 1. Then we have for all z ∈ ℝ^m, except for z from some
exception set S consisting of up to n hyperplanes,

Σ_{j=1}^{n} a_j G_j(z) = Σ_{j=1}^{n} Ĝ_j(z) − |{j ∈ {1, ..., n} : a_j = −1}|.

Hence C(z) = 1 ⟺ Σ_{j=1}^{n} Ĝ_j(z) ≥ k̂ for all z ∈ ℝ^m − S, for some suitable k̂ ∈ ℕ.
Let w_1^j, ..., w_m^j ∈ ℝ be the weights and Θ_j ∈ ℝ be the threshold of gate Ĝ_j, j = 1, ..., n.

¹of which we only use its last output bit
²We exploit here that ¬(Σ_{i=1}^{m} w_i z_i ≥ Θ) ⟺ Σ_{i=1}^{m} (−w_i) z_i > −Θ for arbitrary w_i, z_i, Θ ∈ ℝ.
[Figure: the circuit C, with inputs z_1, ..., z_m feeding arbitrary threshold gates G_1, ..., G_n, whose outputs feed a threshold gate G with weights from {−1, 1} and output b; and the simulating circuit W, in which S_1, ..., S_{n+1} are linear gates (with positive weights only, which are sums of absolute values of weights from the gates G_1, ..., G_n) feeding a single k-winner-take-all gate with output b.]

Setting, for j = 1, ..., n,

S_j := Θ_j + Σ_{i: w_i^j < 0} |w_i^j| z_i + Σ_{l ≠ j} Σ_{i: w_i^l > 0} |w_i^l| z_i

and

S_{n+1} := Σ_{j=1}^{n} Σ_{i: w_i^j > 0} |w_i^j| z_i,
we have for every j ∈ {1, ..., n} and every z ∈ ℝ^m:

S_{n+1} ≥ S_j ⟺ Σ_{i: w_i^j > 0} |w_i^j| z_i − Σ_{i: w_i^j < 0} |w_i^j| z_i ≥ Θ_j ⟺ Ĝ_j(z) = 1.

This implies that the (n + 1)-st output b_{n+1}
of the k-winner-take-all gate k-WTA_{n+1} for
k := n − k̂ + 1 applied to S_1, ..., S_{n+1} satisfies

b_{n+1} = 1 ⟺ |{j ∈ {1, ..., n+1} : S_j > S_{n+1}}| ≤ n − k̂
        ⟺ |{j ∈ {1, ..., n+1} : S_{n+1} ≥ S_j}| ≥ k̂ + 1
        ⟺ |{j ∈ {1, ..., n} : S_{n+1} ≥ S_j}| ≥ k̂
        ⟺ Σ_{j=1}^{n} Ĝ_j(z) ≥ k̂
        ⟺ C(z) = 1.  ∎
Note that all the coefficients in the sums Sl, ... , Sn+1 are positive.
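The construction can be checked numerically. The following sketch is our own; it complements the gates with a_j = −1, forms the sums S_1, ..., S_{n+1}, and verifies the equivalence on random inputs (which avoid the measure-zero exception set S with probability one).

import numpy as np

def threshold_circuit(Wts, Th, a, Th_out, z):
    """The original 2-layer threshold circuit C."""
    G = (Wts @ z >= Th).astype(int)           # hidden gates G_j
    return int(a @ G >= Th_out)               # output gate with weights a_j in {-1, 1}

def circuit_via_kwta(Wts, Th, a, Th_out, z):
    """C evaluated as a single k-WTA over positive weighted sums."""
    V = a[:, None] * Wts                      # gates with a_j = -1 are complemented
    T = a * Th
    pos, neg = np.maximum(V, 0.0), np.maximum(-V, 0.0)
    S_all = pos.sum(axis=0) @ z               # S_{n+1}: all coefficients positive
    S = T + neg @ z + (S_all - pos @ z)       # S_j, j = 1, ..., n
    k_hat = int(Th_out) + int(np.sum(a == -1))
    # b_{n+1} of k-WTA_{n+1} with k = n - k_hat + 1:
    return int(np.sum(S > S_all) <= len(S) - k_hat)

rng = np.random.default_rng(0)
Wts = rng.integers(-3, 4, size=(5, 4)).astype(float)
Th = rng.integers(-2, 3, size=5).astype(float)
a = rng.choice([-1.0, 1.0], size=5)
assert all(threshold_circuit(Wts, Th, a, 2, z) == circuit_via_kwta(Wts, Th, a, 2, z)
           for z in rng.standard_normal((500, 4)))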
3
Restructuring Neural Circuits with Analog Output
In order to approximate arbitrary continuous functions with values in [0, 1] by circuits that
have a similar structure as those in the preceding section, we consider here a variation of a
winner-take-all gate that outputs analog numbers between 0 and 1, whose values depend on
the rank of the corresponding input in the linear order of all the n input numbers. One may
argue that such a gate is no longer a "winner-take-all" gate, but in agreement with common
terminology we refer to it as a soft winner-take-all gate. Such a gate computes a function
from ℝ^n into [0, 1]^n, mapping x_1, ..., x_n ∈ ℝ to r_1, ..., r_n ∈ [0, 1],
whose i-th output r_i ∈ [0, 1] is roughly proportional to the rank of x_i among the numbers
x_1, ..., x_n. More precisely: for some parameter T ∈ ℕ we set

r_i = ( |{j ∈ {1, ..., n} : x_i ≥ x_j}| − n/2 ) / T,

rounded to 0 or 1 if this value is outside [0, 1]. Hence this gate focuses on those
inputs x_i whose rank among the n input numbers x_1, ..., x_n belongs to the set
{n/2, n/2 + 1, ..., min{n, T + n/2}}. These ranks are linearly scaled into [0, 1].³
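A direct transcription of this definition (our own sketch; the names are illustrative):

import numpy as np

def soft_wta(x, T):
    """r_i = (|{j : x_i >= x_j}| - n/2) / T, rounded into [0, 1]."""
    x = np.asarray(x, dtype=float)
    rank = (x[:, None] >= x[None, :]).sum(axis=1)    # |{j : x_i >= x_j}|
    return np.clip((rank - len(x) / 2) / T, 0.0, 1.0)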
Theorem 2. Circuits consisting of a single soft winner-take-all gate (of which we only use
its first output r_1) applied to positive weighted sums of the input variables are universal
approximators for arbitrary continuous functions from ℝ^m into [0, 1].  ∎
3It is shown in [Maass, 2000] that actually any continuous monotone scaling into [0,1] can be
used instead.
A circuit of the type considered in Theorem 2 (with a soft winner-take-all gate applied to
n positive weighted sums S_1, ..., S_n) has a very simple geometrical interpretation: Over
each point z of the input "plane" ℝ^m we consider the relative heights of the n hyperplanes
H_1, ..., H_n defined by the n positive weighted sums S_1, ..., S_n. The circuit output depends only on how many of the other hyperplanes H_2, ..., H_n are above H_1 at this point z.
4
A Lower Bound Result for Winner-Take-All
One can easily see that any k-WTA_n gate with n inputs can be computed by a 2-layer threshold circuit consisting of (n choose 2) + n threshold gates:
[Figure: a 2-layer threshold circuit for k-WTA_n. Inputs x_1, ..., x_n feed a first layer of (n choose 2) threshold gates computing the comparisons x_i > x_j; a second layer of n threshold gates produces the outputs b_1, ..., b_n, where b_i = 1 if the number of comparisons won by x_i satisfies Σ ≥ n − k.]
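A sketch of this two-layer construction (ours; for inputs in general position it agrees with the k-WTA gate defined in section 2):

import numpy as np

def k_wta_two_layer(x, k):
    x = np.asarray(x, dtype=float)
    n = len(x)
    c = (x[:, None] > x[None, :]).astype(int)     # layer 1: pairwise comparisons x_i > x_j
    np.fill_diagonal(c, 0)                        # no self-comparison
    return (c.sum(axis=1) >= n - k).astype(int)   # layer 2: b_i = [#wins >= n - k]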
Hence the following result provides an optimal lower bound.
Theorem 3. Any feedforward threshold circuit (= multi-layer perceptron) that computes
1-WTA_n for n inputs needs to have at least (n choose 2) + n gates.  ∎
5
Conclusions
The lower bound result of Theorem 3 shows that the computational power of winner-take-all is quite large, even if compared with the arguably most powerful gate commonly studied
in circuit complexity theory: the threshold gate (also referred to as a McCulloch-Pitts neuron
or perceptron).
It is well known ([Minsky and Papert, 1969]) that a single threshold gate is not able to
compute certain important functions, whereas circuits of moderate (i.e., polynomial) size
consisting of two layers of threshold gates with polynomial size integer weights have remarkable computational power (see [Siu et al., 1995]). We have shown in Theorem 1 that
any such 2-layer (i.e., 1 hidden layer) circuit can be simulated by a single k-winner-take-all
gate, applied to polynomially many weighted sums with positive integer weights of polynomial size.

We have also analyzed the computational power of soft winner-take-all gates in the context
of analog computation. It is shown in Theorem 2 that a single soft winner-take-all gate
may serve as the only nonlinearity in a class of circuits that have universal computational
power in the sense that they can approximate any continuous function.
Furthermore our novel universal approximators require only positive linear operations besides soft winner-take-all, thereby showing that in principle no computational power is lost
if in a biological neural system inhibition is used exclusively for unspecific lateral inhibition, and no adaptive flexibility is lost if synaptic plasticity (i.e., "learning") is restricted to
excitatory synapses.
Our somewhat surprising results regarding the computational power and universality of
winner-take-all point to further opportunities for low-power analog VLSI chips, since
winner-take-all can be implemented very efficiently in this technology.
References
[Brown, 1991] Brown, T. X. (1991). Neural Network Design for Switching Network Control. Ph.D. Thesis, CALTECH.
[Grossberg, 1973] Grossberg, S. (1973). Contour enhancement, short term memory, and
constancies in reverberating neural networks. Studies in Applied Mathematics, vol. 52,
217-257.
[Horiuchi et aI., 1997] Horiuchi, T. K., Morris, T. G., Koch, C., DeWeerth, S. P. (1997).
Analog VLSI circuits for attention-based visual tracking. Advances in Neural Information Processing Systems, vol. 9, 706-712.
[Indiveri, 1999] Indiveri, G. (1999). Modeling selective attention using a neuromorphic
analog VLSI device, submitted for publication.
[Lazzaro et al., 1989] Lazzaro, J., Ryckebusch, S., Mahowald, M. A., Mead, C. A. (1989).
Winner-take-all networks of O( n) complexity. Advances in Neural Information Processing Systems, vol. I, Morgan Kaufmann (San Mateo), 703-711.
[Maass, 2000] Maass, W. (2000). On the computational power of winner-take-all, Neural
Computation, in press.
[Meador and Hylander, 1994] Meador, J. L., and Hylander, P. D. (1994). Pulse coded
winner-take-all networks. In: Silicon Implementation of Pulse Coded Neural Networks,
Zaghloul, M. E., Meador, J., and Newcomb, R. W., eds., Kluwer Academic Publishers
(Boston),79-99.
[Minsky and Papert, 1969] Minsky, M. C., Papert, S. A. (1969). Perceptrons, MIT Press
(Cambridge).
[Siu et al., 1995] Siu, K.-Y., Roychowdhury, V., Kailath, T. (1995). Discrete Neural Computation: A Theoretical Foundation. Prentice Hall (Englewood Cliffs, NJ, USA).
[Urahama and Nagao, 1995] Urahama, K., and Nagao, T. (1995). k-winner-take-all circuit
with O(N) complexity. IEEE Trans. on Neural Networks, vol. 6, 776-778.
696 | 1,637 | Bayesian modelling of fMRI time series
Pedro A. d. F. R. Højen-Sørensen, Lars K. Hansen and Carl Edward Rasmussen
Department of Mathematical Modelling, Building 321
Technical University of Denmark
DK-2800 Lyngby, Denmark
phs,lkhansen,carl@imm.dtu.dk
Abstract
We present a Hidden Markov Model (HMM) for inferring the hidden
psychological state (or neural activity) during single trial fMRI activation experiments with blocked task paradigms. Inference is based on
Bayesian methodology, using a combination of analytical and a variety
of Markov Chain Monte Carlo (MCMC) sampling techniques. The advantage of this method is that detection of short time learning effects between repeated trials is possible since inference is based only on single
trial experiments.
1 Introduction
Functional magnetic resonance imaging (tMRI) is a non-invasive technique that enables
indirect measures of neuronal activity in the working human brain. The most common
tMRI technique is based on an image contrast induced by temporal shifts in the relative
concentration of oxyhemoglobin and deoxyhemoglobin (BOLD contrast). Since neuronal
activation leads to an increased blood flow, the so-called hemodynamic response, the measured tMRI signal reflects neuronal activity. Hence, when analyzing the BOLD signal
there are two unknown factors to consider; the task dependent neuronal activation and the
hemodynamic response. Bandettini et al. [1993] analyzed the correlation between a binary reference function (representing the stimulus/task sequence) and the BOLD signal.
In the following we will also make reference to the binary representation of the task as
the paradigm. Lange and Zeger [1997] discuss a parameterized hemodynamic response
adapted by a least squares procedure. Multivariate strategies have been pursued in [Worsley et al. 1997, Hansen et al. 1999]. Several explorative strategies have been proposed
for finding spatio-temporal activation patterns without explicit reference to the activation
paradigm. McKeown et al. [1998] used independent component analysis and found several types of activations including components with "transient task related" response, i.e.,
responses that could not simply be accounted for by the paradigm. The model presented
in this paper draws on the experimental observation that the basic coupling between the
net neural activity and hemodynamic response is roughly linear while the relation between
neuronal response and stimulus/task parameters is often nonlinear [Dale 1997]. We will
represent the neuronal activity (integrated over the voxel and sampling time interval) by a
binary signal while we will represent the hemodynamic response as a linear filter of unknown form and temporal extent.
2 A Bayesian model of fMRI time series
Let s = {s_t : t = 0, ..., T-1} be a hidden sequence of binary state variables s_t ∈
{0, 1}, representing the state of a single voxel over time; the time variable, t, indexes the
sequence of fMRI scans. Hence, s_t is a binary representation of the neural state. The
hidden sequence is governed by a symmetric first order Hidden Markov Model (HMM)
with transition probability a = P(s_{t+1} = j | s_t = j). We expect the activation to mimic
the blocked structure of the experimental paradigm, so for this reason we restrict a to be
larger than one half. The predicted signal (noiseless signal) is given by y_t = h * s + θ_0 + θ_1 t,
where * denotes the linear convolution and h is the impulse response of a linear system of
order M_f. The dc off-set and linear trend which are typically seen in fMRI time series
are given by θ_0 and θ_1, respectively. Finally, it is assumed that the observable is given by
z_t = y_t + ε_t, where ε_t is iid. Gaussian noise with variance σ_n². The generative model considered is therefore given by:

p(s_t | s_{t-1}, a) = a δ_{s_t, s_{t-1}} + (1 - a)(1 - δ_{s_t, s_{t-1}}),
p(z | s, σ_n, θ, M_f) ~ N(y, σ_n² I),   (*)

where y = {y_t} = H_s θ and z = {z_t}. Furthermore, δ_{s_t, s_{t-1}} is the usual Kronecker delta and

H_s = [1, t, γ_0 s, γ_1 s, ..., γ_{M_f - 1} s],

where 1 = (1, ..., 1)', t = (1, ..., T)'/T and γ_i is an i-step shift operator, that is, γ_i s = (0, ..., 0, s_0, s_1, ..., s_{T-1-i})'. The linear parameters are collected in θ = (θ_0, θ_1, h')'.
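For concreteness, the model above is easy to exercise in software. The following is a minimal sketch (not the authors' code) that builds the design matrix H_s and simulates the generative model; the exponential kernel h, all parameter values and the random seed are assumed for illustration only.

import numpy as np

rng = np.random.default_rng(0)
T, M_f, a, sigma_n = 100, 10, 0.95, 0.5
theta0, theta1 = 2.0, -1.0
h = np.exp(-np.arange(M_f) / 3.0)            # assumed hemodynamic-like kernel

# Hidden binary state sequence from the symmetric first-order HMM:
# stay in the same state with probability a.
s = np.zeros(T, dtype=int)
for t in range(1, T):
    s[t] = s[t - 1] if rng.random() < a else 1 - s[t - 1]

def design_matrix(s, M_f):
    """H_s = [1, t, gamma_0 s, ..., gamma_{M_f-1} s], columns as in the text."""
    T = len(s)
    cols = [np.ones(T), np.arange(1, T + 1) / T]
    for i in range(M_f):                     # gamma_i s: i-step shifted state
        cols.append(np.concatenate([np.zeros(i), s[:T - i]]))
    return np.column_stack(cols)

H_s = design_matrix(s, M_f)
theta = np.concatenate([[theta0, theta1], h])
z = H_s @ theta + sigma_n * rng.standard_normal(T)   # observed time series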
Figure: The graphical model. The hidden states X_t = (s_{t-1}, s_{t-2}, ..., s_{t-(M_f-1)}) have been introduced to make the model first order.

3 Analytic integration and Monte Carlo sampling
In this section we introduce priors over the model parameters and show how inference may
be performed. The filter coefficients and noise parameters may be handled analytically,
whereas the remaining parameters are treated using sampling procedures (a combination
of Gibbs and Metropolis sampling). As in the previous section, explicit reference to the filter order M_f may be omitted to ease the notation.
The dc off-set θ_0 and the linear trend θ_1 are given (improper) uniform priors. The filter coefficients are given priors that are uniform on an interval of length β independently for each coefficient:

p(h_i) = 1/β for |h_i| < β/2, and p(h_i) = 0 otherwise.
Assuming that all the values of θ for which the associated likelihood has non-vanishing contributions lie inside the box where the prior for θ has support, we may integrate out the filter coefficients via a Gaussian integral:

p(z | σ_n, s, M_f) = ∫ dθ p(z | s, σ_n, θ, M_f) p(θ) = β^{-M_f} |H_s'H_s|^{-1/2} (2πσ_n²)^{(M_f + 2 - T)/2} exp[-(z'z - ŷ_s'ŷ_s)/(2σ_n²)].
We have here defined the mean filter, θ̂_s = (H_s'H_s)^{-1} H_s' z, and mean predicted signal, ŷ_s = H_s θ̂_s, for given state and filter length. We set the interval length β to be 4 times the standard deviation of the observed signal z. This is done since the response from the filter should be able to model the signal, for which it is thought to need an interval of plus/minus two standard deviations.
We now proceed to integrate over the noise parameter; using the (improper) noninformative Jeffreys prior, p(σ_n) ∝ σ_n^{-1}, we get a Gamma integral:

p(z | s, M_f) = ∫ p(z | σ_n, s, M_f) p(σ_n) dσ_n = (1/2) Γ((T - M_f)/2 - 1) (π(z'z - ŷ_s'ŷ_s))^{(M_f - T)/2 + 1} / (β^{M_f} √|H_s'H_s|).
The remaining variables cannot be handled analytically, and will be treated using various
forms of sampling as described in the following sections.
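The collapsed likelihood is cheap to evaluate and is the only quantity the samplers below need. A sketch in the same spirit, assuming the design_matrix helper from the previous snippet, with the factor 1/2 kept so that values are comparable across filter orders:

import numpy as np
from numpy.linalg import slogdet
from scipy.special import gammaln

def log_marginal(z, s, M_f, beta):
    """log p(z | s, M_f) from the Gamma-integral expression above."""
    T = len(z)
    H = design_matrix(s, M_f)
    G = H.T @ H
    theta_hat = np.linalg.solve(G, H.T @ z)   # mean filter
    y_hat = H @ theta_hat                     # mean predicted signal
    resid = z @ z - y_hat @ y_hat             # z'z - y_hat' y_hat
    return (-np.log(2.0)
            + gammaln((T - M_f) / 2.0 - 1.0)
            + ((M_f - T) / 2.0 + 1.0) * np.log(np.pi * resid)
            - M_f * np.log(beta)
            - 0.5 * slogdet(G)[1])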
3.1 Gibbs and Metropolis updates of the state sequence
We use a flat prior on the states, p(s_t = 0) = p(s_t = 1), together with the first order Markov property for the hidden states and Bayes' rule to get the conditional posterior for the individual states:

p(s_t = j | s\s_t, a, M_f) ∝ p(s_t = j | s_{t-1}, a) p(s_{t+1} | s_t = j, a) p(z | s, M_f).
These probabilities may (in normalized form) be used to implement Gibbs updates for the
hidden state variables, updating one variable at a time and sweeping through all variables.
However, it turns out that there are significant correlations between states, which make it difficult for the Markov chain to move around in the hidden state-space using only Gibbs
sampling (where a single state is updated at a time). To improve the situation we also
perform global state updates, consisting of proposing to move the entire state sequence
one step forward or backward (the direction being chosen at random) and accepting the
proposed state using the Metropolis acceptance procedure. The proposed movements are
made using periodic boundary conditions. The Gibbs sweep is computationally involved,
since it requires computation of several matrix expressions for every state-variable.
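One possible software rendering of a sweep plus the global move is sketched below; the single-site conditional reuses log_marginal from the previous snippet, and s is assumed to be a mutable integer array.

import numpy as np

def log_prior_states(s, a):
    E = np.sum(s[1:] == s[:-1])               # E(s): equal neighbouring states
    return E * np.log(a) + (len(s) - 1 - E) * np.log(1 - a)

def gibbs_sweep(z, s, a, M_f, beta, rng):
    for t in range(len(s)):
        logp = np.empty(2)
        for j in (0, 1):                      # score both values of s_t
            s[t] = j
            logp[j] = log_prior_states(s, a) + log_marginal(z, s, M_f, beta)
        s[t] = int(rng.random() < 1.0 / (1.0 + np.exp(logp[0] - logp[1])))
    # Global Metropolis move: shift the whole sequence one step forward or
    # backward, with periodic boundary conditions.
    prop = np.roll(s, rng.choice([-1, 1]))
    dlog = (log_prior_states(prop, a) + log_marginal(z, prop, M_f, beta)
            - log_prior_states(s, a) - log_marginal(z, s, M_f, beta))
    return prop if np.log(rng.random()) < dlog else s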
3.2 Adaptive Rejection Sampling for the transition probability
The likelihood for the transition probability a is derived from the Hidden Markov Model:

p(s | a) = p(s_0) ∏_{t=1}^{T-1} p(s_t | s_{t-1}, a) = (1/2) a^{E(s)} (1 - a)^{T-1-E(s)},

where E(s) = Σ_{t=1}^{T-1} δ_{s_t, s_{t-1}} is the number of neighboring states in s with identical values.
The prior on the transition probabilities is uniform, but restricted to be larger than one half,
since we expect the activation to mimic the blocked structure of the experimental paradigm.
It is readily seen that p(a|s) ∝ p(s|a), a ∈ [1/2, 1], is log-concave. Hence, we may use the Adaptive Rejection Sampling algorithm [Gilks and Wild, 1992] to sample from the
distribution for the transition probability.
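Since the posterior for a is just a Beta density truncated to [1/2, 1], a sketch can also sample it exactly by inverting the truncated CDF; this is a simple stand-in for the adaptive rejection sampler, not a reimplementation of it.

import numpy as np
from scipy.stats import beta as beta_dist

def sample_transition_prob(s, rng):
    E = int(np.sum(s[1:] == s[:-1]))          # E(s) as defined above
    dist = beta_dist(E + 1, len(s) - E)       # a^E (1-a)^(T-1-E), flat prior
    lo = dist.cdf(0.5)                        # mass below the a = 1/2 cut-off
    return dist.ppf(lo + rng.random() * (1.0 - lo))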
3.3 Metropolis updates for the filter length
In practical applications using real fMRI data, we typically do not know the necessary
length of the filter. The problem of finding the "right" model order is difficult and has received a lot of attention. Here, we let the Markov Chain sample over different filter lengths,
effectively integrating out the filter-length rather than trying to optimize it. Although the
value of M_f determines the dimensionality of the parameter space, we do not need to use
specialized sampling methodology (such as Reversible Jump MCMC [Green, 1995]), since
those parameters are handled analytically in our model. We put a flat (improper) prior on
M f and propose new filter lengths using a Gaussian proposal centered on the current value,
with a standard deviation of 3 (non-positive proposed orders are rejected). This choice of
the standard deviation only effects the mixing rate of the Markov chain and does not have
any influence on the stationary distribution. The proposed values are accepted using the
Metropolis algorithm, using p(M_f | s, z) ∝ p(z | s, M_f).
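A sketch of this trans-order move, using the collapsed likelihood from above; the upper bound on the proposal is an added safeguard that keeps the Gamma-function argument positive and is not part of the original scheme.

import numpy as np

def update_filter_order(z, s, M_f, beta, rng):
    prop = int(round(M_f + 3.0 * rng.standard_normal()))   # Gaussian, std 3
    if prop < 1 or prop > len(z) - 5:         # reject non-positive / too long
        return M_f
    dlog = log_marginal(z, s, prop, beta) - log_marginal(z, s, M_f, beta)
    return prop if np.log(rng.random()) < dlog else M_f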
3.4 The posterior mean and uncertainty of the predicted signal
Since θ has a flat prior, the conditional probability for the filter coefficients is proportional to the likelihood p(z | θ, ·) and by (*) we get:

p(θ | z, s, σ_n, M_f) ~ N(D_s z, σ_n² D_s D_s'),   D_s = (H_s'H_s)^{-1} H_s'.

The posterior mean of the predicted signal, ŷ, is then readily computed as:

ŷ = ⟨y⟩_{θ, σ_n, s, M_f} = ⟨ŷ_s⟩_{s, M_f} = ⟨H_s θ̂_s⟩_{s, M_f} = ⟨F_s⟩_{s, M_f} z,
where F_s = H_s D_s. Here, the average over θ and σ_n is done analytically, and the average over the state and filter length is done using Monte Carlo. The uncertainty in the posterior can also be estimated partly by analytical averaging, and partly by Monte Carlo:

Σ_y = ⟨(y - ŷ)(y - ŷ)'⟩_{θ, σ_n, s, M_f} = ⟨(z'z - ŷ_s'ŷ_s)/(T - M_f - 2) F_s F_s'⟩_{s, M_f} + ⟨F_s z z' F_s'⟩_{s, M_f} - ŷ ŷ'.
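The Monte Carlo part of these averages is a plain average over the chain's draws of (s, M_f); the θ and σ_n averages are already contained in the closed-form ŷ_s terms. A sketch:

import numpy as np

def posterior_signal_mean(z, samples):
    """samples: list of (s, M_f) draws from the Markov chain."""
    y = np.zeros(len(z))
    for s, M_f in samples:
        H = design_matrix(s, M_f)
        theta_hat = np.linalg.solve(H.T @ H, H.T @ z)
        y += H @ theta_hat                    # y_hat_s for this draw
    return y / len(samples)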
4 Example: synthetic data
In order to test the model, we first present some results on a synthetic data set. A signal
z of length 100 is generated using an M_f = 10 order filter, and a hidden state sequence s consisting of two activation bursts (indicated by dotted bars in figure 1, top left). In this
example, the hidden sequence is actually not generated from the generative model (*);
however, it still exhibits the kind of block structure that we wish to be able to recover.
The model is run for 10000 iterations, which is sufficient to generate 500 approximately
independent samples from the posterior; figure 2 (right) shows the auto-covariance for M_f as a function of the iteration lag. It is thought that changes in M_f are indicative of the correlation time of the overall system.
The correlation plot for the hidden states (figure 2, left) shows that the state activation onset
correlates strongly with the second onset and negatively with the end of the activation (and
vice versa). This indicates that the Metropolis updates described in section 3.1 may indeed
be effective. Notice also that the very strong correlation among state variables does not
strongly carry over to the predicted signal (figure 1, bottom right).
To verify that the model can reasonably recover the parameters used to generate the data,
posterior samples from some of the model variables are shown in figure 3. For all these
parameters the posterior density is large around the correct values. Notice that in the original model (*) there is an indeterminacy in the simultaneous inference of the state sequence
and the filter parameters (but no indeterminacy in the predicted signal); for example, the
same signal is predicted by shifting the state sequence backward in time and introducing
leading zero filter coefficients. However, the Bayesian methodology breaks this symmetry
by penalizing complex models.
Figure 1: Experiments with synthetic data. Top left, the measured response from a voxel is plotted for 100 consecutive scans. In the bottom left, the underlying signal is seen in thin, together with the posterior mean, ŷ (thick), and two std. dev. error-bars in dotted. Top right, the posterior probabilities are shown as a grey level for each scan. The true activated instances are indicated by the dotted bars and the pseudo MAP estimate of the activation sequence is given by the crossed bars. Bottom right shows the posterior uncertainty Σ_y.
The posterior mean and the two standard deviations are plotted in figure 1, bottom left. Notice, however, that the distribution of y is not Gaussian, but rather a mixture of Gaussians, and is not necessarily well characterized by mean and variance alone. In figure 1 (top left), the distribution of y_t is visualized using grey-scale to represent density.
5 Simulations on real fMRI data and discussion
In figure 4 the model has been applied to two measurements in the same voxel in visual
cortex. The fMRI scans were acquired every 330 ms. The experimental paradigm consisted
of 30 scans of rest followed by 30 scans of activation and 40 rest. Visual activation consisted of a flashing (8 Hz) annular checkerboard pattern. The model readily identifies the
activation burst of somewhat longer duration than the visual stimulus and delayed around
2 seconds. The delay is in part caused by the delay in the hemodynamic response.
These results show that the integration procedure works in spite of the very limited data
at hand. In figure 4 (top) the posterior model size suggests that (at least) two competing
models can explain the signal from this trial. One of these models explains the measured
signal as a simple square wave function which seems reasonable by considering the signal.
Conversely, figure 4 (bottom) suggests that the signal from the second trial cannot be explained by a simple model. This too seems reasonable because of the long signal rise interval suggested in the signal.
Figure 2: The covariance of the hidden states based on a long run of the model is shown to the left. Notice that the states around the front (back) of the activity "bumps" are highly (anti-) correlated. Right: The auto-covariance for the filter length M_f as a function of the lag time in iterations. The correlation length is about 20, computed as the sum of autocovariance coefficients from lag -400 to 400.
Since the posterior distribution of the filter length is very broad it is questionable whether
an optimization based procedure such as maximum likelihood estimation would be able
to make useful inference in this case where data is very limited. Also, it is not obvious
how one may use cross-validation in this setting. One might expect such optimization
based strategies to get trapped in suboptimal solutions. This, of course, remains to be
investigated.
6 Conclusion
We have presented a model for voxel based explorative data analysis of single trial fMRI
signals during blocked task activation studies. The model is founded on the experimental
observation that the basic coupling between the net neural activity and hemodynamic response is roughly linear. The preliminary investigations reported here are encouraging in that the model reliably detects reasonable hidden states from the very noisy fMRI data.
One drawback of this method is that the Gibbs sampling step is computationally expensive.
To improve on this step one could make use of the large class of variational/mean field
methods known from the graphical models literature. Finally, current work is in progress
for generalizing the model to multiple voxels, including spatial correlation due to e.g. spillover effects.
Figure 3: Posterior distributions of various model parameters. The parameters used to
generate the data are: a = 1.0, DC off-set = 2, trend = -1 and filter order M_f = 10.
Figure 4: Analysis of two experimental trials of the same voxel in visual cortex. The left
hand plot shows the posterior inferred signal distribution superimposed by the measured
signal. The dotted bar indicates the experimental paradigm and the crossed bar indicates
the pseudo MAP estimate of the neural activity. To the right the posterior noise level and
inferred filter length are displayed.
Acknowledgments
Thanks to Egill Rostrup for providing the fMRI data. This work is funded by the Danish
Research Councils through the Computational Neural Network Center (CONNECT) and
the THOR Center for Neuroinformatics.
References
Bandettini, P. A. (1993). Processing strategies for time-course data sets in functional MRI of the human brain. Magnetic Resonance in Medicine 30, 161-173.
Dale, A. M. and R. L. Buckner (1997). Selective Averaging of Individual Trials Using fMRI. NeuroImage 5, Abstract S47.
Green, P. J. (1995). Reversible jump Markov chain Monte Carlo computation and Bayesian model
determination. Biometrika 82, 711-732.
Gilks, W. R. and P. Wild (1992). Adaptive rejection sampling for Gibbs sampling. Applied Statistics 41, 337-348.
Hansen, L. K. et al. (1999). Generalizable Patterns in Neuroimaging: How Many Principal Components? NeuroImage, to appear.
Lange, N. and S. L. Zeger (1997). Non-linear Fourier time series analysis for human brain mapping
by functional magnetic resonance imaging. Journal of the Royal Statistical Society - Series C Applied
Statistics 46, 1-30.
McKeown, M. J. et al. (1998). Spatially independent activity patterns in functional magnetic resonance imaging data during the Stroop color-naming task. Proc. Natl. Acad. Sci. USA 95, 803-810.
Worsley, K. J. et al. (1997). Characterizing the Response of PET and fMRI Data Using Multivariate
Linear Models (MLM). NeuroImage 6, 305-319.
697 | 1,638 | An Oculo-Motor System with Multi-Chip
Neuromorphic Analog VLSI Control
Oliver Landolt*
CSEM SA
2007 Neuchâtel / Switzerland
E-mail: landolt@caltech.edu
Steve Gyger
CSEM SA
2007 Neuchâtel / Switzerland
E-mail: steve.gyger@csem.ch
Abstract
A system emulating the functionality of a moving eye (hence the name
oculo-motor system) has been built and successfully tested. It is made
of an optical device for shifting the field of view of an image sensor by up
to 45° in any direction, four neuromorphic analog VLSI circuits implementing an oculo-motor control loop, and some off-the-shelf electronics.
The custom integrated circuits communicate with each other primarily by
non-arbitrated address-event buses. The system implements the behaviors of saliency-based saccadic exploration, and smooth pursuit of light
spots. The duration of saccades ranges from 45 ms to 100 ms, which is
comparable to human eye performance. Smooth pursuit operates on light
sources moving at up to 50°/s in the visual field.
1 INTRODUCTION
Inspiration from biology has been recognized as a seminal approach to address some engineering challenges, particularly in the computational domain [1]. Researchers have borrowed architectures, operating principles and even micro-circuits from various biological
neural structures and turned them into analog VLSI circuits [2] . Neuromorphic approaches
are often considered to be particularly suited for machine vision, because even simple
animals are fitted with neural systems that can easily outperform most sequential digital
computers in visual processing tasks. It has long been recognized that the level of visual
processing capability needed for practical applications would require more circuit area than
can be fitted on a single chip. This observation has triggered the development of inter-chip
communication schemes suitable for neuromorphic analog VLSI circuits [3]-[4], enabling
the combination of several chips into a system capable of addressing tasks of higher complexity. Despite the availability of these communication protocols, only few successful
implementations of multi-chip neuromorphic systems have been reported so far (see [5] for
a review). The present contribution reports the completion of a fully functional multi-chip
system emulating the functionality of a moving eye, hence the denomination oculo-motor
system. It is made of two 2D VLSI retina chips, two custom analog VLSI control chips,
dedicated optical and mechanical devices and off-the-shelf electronic components. The
four neuromorphic chips communicate mostly by pulse streams mediated by non-arbitrated
address-event buses [4]. In its current version, the system can generate saccades (quick eye
* Now with Koch Lab, Division of Biology 139-74, Caltech, Pasadena, CA 91125, USA
movements) toward salient points of the visual scene, and track moving light spots. The
purpose of the saccadic operating mode is to explore the visual scene efficiently by allocating processing time proportionally to significance. The purpose of tracking (also called
smooth pursuit) is to slow down or suppress the retina image slip of moving objects in order
to leave visual circuitry more time for processing. The two modes, saccadic exploration and smooth pursuit, operate concurrently and interact with each other. The development
of this oculo-motor system was meant as a framework in which some general issues pertinent to neuromorphic engineering could be addressed. In this respect, it complements
Horiuchi's pioneering work [6]-[7], which consisted of developing a 1D model of the primate oculo-motor system with a focus on automatic on-chip learning of the correct control function. The new system addresses different issues, notably 2D operation and the problem of strongly non-linear mapping between 2D visual and motor spaces.
2 SYSTEM DESCRIPTION
The oculo-motor system is made of three modules (Fig. 1). The moving eye module contains a 35 by 35 pixel electronic retina [8] fitted with a light deflection device driven by two motors. This device can shift the field of view of the retina by up to 45° in any direction. The optics are designed to cover only a narrow field of view of about 12°. Thereby, the
retina serves as a high-resolution "spotlight" gathering details of interesting areas of the
visual scene, similarly to the fovea of animals. Two position control loops implemented
by off-the-shelf components keep the optical elements in the position specified by input
signals applied to this module. The other modules control the moving eye in two types
of behavior, namely saccadic exploration and smooth pursuit. They are implemented as
physically distinct printed circuit boards which can be enabled or disabled independently.
[Figure 1 block diagram: the wide-angle retina sends a saliency distribution to the saccadic control chip; the narrow-angle retina feeds a spot location (EPROM) stage and the incremental position control chip, which also receives the current prism orientations; both control paths drive the motors and prisms imaging the scene.]
Figure 1: Oculo-motor system architecture
The light deflection device is made of two transparent and flat disks with a micro-prism
grating on one side, mounted perpendicularly to the optical axis of a lens. Each disk can
rotate without restriction around this axis, independently from the other. As a whole, each
micro-prism grating acts on light essentially like a single large prism, except that it takes
much less space (Fig. 2). Although a single fixed prism cannot have an adjustable deflection angle, with two mobile prisms, any magnitude and direction of deflection within
some boundary can be selected, because the two contributions may combine either constructively or destructively depending on the relative prism orientations. The relationship
between prism orientations and deflection angle has been derived in [9]. The advantage of
this system over many other designs is that only two small passive optical elements have
to move whereas most of the components are fixed, which enables fast movements and
avoids electrical connections to moving parts. The drawback of this principle is that optical
aberrations introduced by the prisms degrade image quality. However, when the device is
used in conjunction with a typical electronic retina, this degradation is not limiting because
these image sensors are characterized by a modest resolution due to focal-plane electronic
processing.
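A small-angle caricature of this geometry may help fix intuition; it is not the exact relationship derived in [9], only the vector-sum picture, with an assumed per-grating deflection of 22.5° so that the extremes are 0° and 45°.

import numpy as np

DELTA = np.radians(22.5)                      # assumed deflection per grating

def deflection(phi1, phi2, delta=DELTA):
    """Net (x, y) deflection for prism orientations phi1, phi2 (radians)."""
    return delta * (np.array([np.cos(phi1), np.sin(phi1)])
                    + np.array([np.cos(phi2), np.sin(phi2)]))

def orientations_for(target, delta=DELTA):
    """Invert the small-angle model: prism angles that reach 'target'."""
    r = np.linalg.norm(target) / (2.0 * delta)       # in [0, 1]
    psi = np.arctan2(target[1], target[0])           # desired direction
    half = np.arccos(np.clip(r, 0.0, 1.0))           # opening half-angle
    return psi - half, psi + half

Aligned gratings (half-angle 0) give the maximum 45° deflection, while opposed gratings cancel, which is the constructive/destructive combination described above.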
Figure 2: A. Light deflection device principle. B. Replacement of conventional prisms by
micro-prism gratings. C. Photograph of the prototype with motors and orientation sensors.
The saccadic exploration module (Fig. 1) consists of an additional retina fitted with a
fixed wide-angle lens, and a neuromorphic saccadic control chip. The retina gathers low-resolution information from the whole visual scene accessible to the moving eye, determines the degree of interest, or saliency [10], of every region and transmits the resulting
saliency distribution to the saccadic control chip. In the current version of the system, the
distribution of saliency is just the raw output image of the retina, whereby saliency is determined by the brightness of visual scene locations. By inserting additional visual processing
hardware between the retina and the saccadic control chip, it would be possible to generate
interest for more sophisticated cues like edges, motion or specific shapes or patterns. The
saccadic control chip (Fig. 3) determines the sequence and timing of an endless succession of quick jumps, or saccades, to be executed by the moving eye, in such a way that
salient locations are attended longer and more frequently than less significant locations.
The chip contains a 2D array of about 900 cells, which is called visual map because its
organization matches the topology of the visual field accessible by the moving eye. The
chip also contains two 1D arrays of 64 cells called motor maps, which encode micro-prism
orientations in the light deflection device. Each cell of the visual map is externally stimulated by a stream of brief pulses, the frequency of which encodes saliency. The cells
integrate incoming pulses over time on a capacitor, thereby building up an internal voltage
at a rate proportional to pulse frequency. A global comparison circuit, called winner-take-all, selects the cell with the highest internal voltage. In the winning cell, a leakage
mechanism slowly decreases the internal voltage over time, thereby eventually leading another cell to win. With this principle, any cell stimulated to some degree wins from time to time. The frequency of winning and the time elapsed until another cell wins increase
with saliency. The visual map and the two motor maps are interconnected by a so-called
network of links [9], which embodies the mapping between visual and motor spaces. This
network consists of a pair of wires running from each visual cell to one cell in each of the
two motor maps. Thereby, the winning cell in the visual map stimulates exactly one cell in
each motor map. The location of the active cell in a motor map encodes the orientation of
a micro-prism grating, therefore this representation convention is called place coding [9].
The addresses of the active cells on the motor maps are transmitted to the moving eye,
which triggers micro-prism displacements toward the specified orientations.
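The dwell-time behaviour of the visual map can be mimicked in a few lines. This is a behavioural sketch (a software stand-in with an assumed leak constant), not a transistor-level model of the chip.

import numpy as np

def saccade_scheduler(saliency, steps, leak):
    v = np.zeros_like(saliency, dtype=float)  # internal cell voltages
    winners = []
    for _ in range(steps):
        v += saliency                         # charging at saliency rate
        w = int(np.argmax(v))                 # winner-take-all comparison
        v[w] -= leak                          # leak acts on the winner only
        winners.append(w)                     # the winner drives the saccade
    return winners

With leak chosen larger than the largest saliency, the winner's voltage falls relative to the others, so every stimulated cell wins from time to time and brighter cells win longer and more often, as in the measurements reported below.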
Figure 3: Schematic of the saccadic control chip
The smooth pursuit module consists of an EPROM chip and a neuromorphic incremental
control chip (Fig. 1). The address-event stream delivered by the narrow-field retina is
applied to the EPROM. The field of view of this retina has been divided up into eight
angular sectors and a center region (Fig. 4A). The EPROM maps the addresses of pixels
located in the same sector onto a common output address, thereby summing their spiking
frequencies. The resulting address-event stream is applied to a topological map of eight
cells constituting one of the inputs of the neuromorphic incremental control chip. If a
single bright spot is focused on the retina away from the center, a large sum is produced in
one or two neighboring cells of this map, whereas the other cells receive only background
stimulation levels close to zero. Thereby, the angular position of the light spot is encoded by
the location of the spot of activity on the map, in other words place coding. Other objects
than light spots could be processed similarly after insertion of relevant detection hardware
between the retina and the EPROM. The incremental control chip has two additional input
maps representing the current orientations of the two prisms (Fig. 4B). These maps are
connected to position sensors incorporated into the moving eye module (Fig. 1). These
additional inputs are necessary because the control actions depend not only on the location
of the target on the retina, but also on the current prism orientations [9]. The control actions
are computed by three networks of links relating the primary input maps to the final output
map via an intermediate layer. The purpose of this intermediate stage is to break down the
control function of three variables into three functions of only two variables, which can
be implemented by a lower number of links [11]. As in the saccadic control chip, the
mapping between the input and output spaces has been calculated numerically prior to chip
fabrication, then hardwired as electrical connections. The final outputs of the chip are pulse
streams encoding the direction and rate at which each micro-prism grating must rotate in
order to shift the target toward the center of the retina. These pulses incrementally update prism orientation settings at the input of the moving eye module (Fig. 1).
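Putting the two stages together in software gives a rough picture of this data path. The sketch below reuses the small-angle pointing model from the earlier snippet; the sector geometry, gain and sign conventions are assumed, whereas in the chip the visuo-motor mapping is the hardwired network of links.

import numpy as np

def sector_of(x, y, r_centre=2.0):
    """Sector id (0-7) for a pixel offset from the retina centre,
    or None inside the central region (the EPROM's role)."""
    if np.hypot(x, y) < r_centre:
        return None
    return int(np.floor((np.arctan2(y, x) % (2 * np.pi)) / (np.pi / 4)))

def pursuit_step(events, phi1, phi2, gain=1e-3):
    counts = np.zeros(8)
    for x, y in events:                       # address-events from the retina
        k = sector_of(x, y)
        if k is not None:
            counts[k] += 1.0
    if counts.sum() == 0.0:
        return phi1, phi2                     # nothing to track
    ang = (np.arange(8) + 0.5) * np.pi / 4    # sector centre directions
    err = counts @ np.column_stack([np.cos(ang), np.sin(ang)])
    target = deflection(phi1, phi2) + gain * err   # nudge the pointing
    return orientations_for(target)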
Since two different modules control the same moving eye, it is necessary to coordinate
them in order to avoid conflicts. Saccadic module interventions occur whenever a saccade
is generated, namely every 200-500 ms in typical operating conditions. At the instant a
saccade is requested, the smooth pursuit module is shut off in order to prevent it from
reacting against the saccade. A similar mechanism called saccadic suppression exists in
biology. When the eye reaches the target location, control is left entirely to the smooth
pursuit module until the next saccade is generated. Reciprocally, if an object tracked by
Figure 4: A. Place-coded spot location obtained by summing the outputs of pixels belonging to the same sector. B. Architecture of the incremental control chip.
the smooth pursuit module reaches the boundary of the global visual field, the incremental
control chip sends a signal triggering a saccade back toward the center of the visual fieldwhich is called nystagmus in biology. The reason for splitting control into two modules is
that vi suo-motor coordinate mappings are very different for saccadic exploration and for
smooth pursuit [9]. In the former case, visual input is related to the global field of view
covered by the fixed wide-angle retina, and outputs are absolute micro-prism orientations.
Saccade targets need not be initially visible to the moving eye. Since saccades are executed
without permanent visual feedback, their accuracy is limited by the mapping hardwired in
the control chip. Inversely, smooth pursuit is based on information extracted directly from
the retina image of the moving eye. The output of the incremental control chip are small
changes in micro-prism orientations instead of absolute positions. Thereby, the smooth
pursuit module operates under closed-loop visual feedback, which confers it high accuracy.
However, operation under visual feedback is slower than open-loop saccadic movements,
and smooth pursuit inherently applies only to a single target. Thus, the two control modules
are very complementary in purpose and performance.
3 EXPERIMENTAL RESULTS
The present section reports both qualitative observations and quantitative measurements
made on the oculo-motor system, because the complexity of its behavior is difficult to
convey by just a few numbers. The measurement setup consisted of a black board on
which high efficiency white light emitting diodes were mounted, the intensity of which
could be set individually. The visual scene was placed about 70 cm away from the moving
eye. The axes of the two retinas were parallel at a distance of 6.5 cm. It was necessary
to take this spacing into account for the visuo-motor coordinate mapping. The saliency
distribution produced by the visual scene was measured by analyzing the output image of
the wide-angle retina chip (Fig. 1).
When a single torchlight was waved in front of the moving eye, it was found that the
smooth pursuit system indeed keeps the center of gravity of the light source image at the
center of the narrow field of view. The maximum tracking velocity depends on the intensity
ratio (contrast) between the light spot and the background. This behavior was expected
because by construction, the incremental control chip generates correction pulses at a rate
proportional to the magnitude of its input signals. At the highest contrast, we were able to
achieve a maximum tracking speed of 50°/s. For comparison, smooth pursuit in humans can in principle reach up to 180°/s, but tracking is accurate only up to about 30°/s [7].
When shown two fixed light spots, the moving eye jumps from one to the other periodically.
The relative time spent on each light source depends on their intensity ratio. The duty cycle has been measured for ratios ranging from 0.1 to 10 (Fig. 5A). It is close to 50% for equal saliency, and tends toward a ratio of 10 to 1 in favor of the brightest spot at the extremities of the range. The delay between onset of a saccade and stabilization on the target ranges from 45 ms to 100 ms. The delay is not constant because it depends to some extent on saccade magnitude, and because of occasional mechanical slipping at the onset. In humans, the duration of saccades tends to be proportional to their amplitude, and ranges between 25 ms and 200 ms.
Figure 5: Measured data plots. A. Gaze time sharing between two salient spots versus saliency ratio. B. Gaze time on background versus spot-to-background intensity ratio.
When more than two spots are turned on, the saccadic exploration is not obviously periodic anymore, but the eye keeps spending most time on the light spots, with a noticeable
preference for larger intensities. This behavior is consistent with measurements previously
made on the saccadic control chip alone under electrical stimulation [9]. Saccades towards
locations in the background are rare and brief if the intensity ratio between the light sources
and the background is high enough. This phenomenon has been studied quantitatively by measuring the fraction of time spent on background locations for different light source intensities (Fig. 5B).
total intensity in light spots and the total background intensity. These two quantities are
measured by summing the outputs of wide-angle retina pixels belonging to the light spot
images and to the background respectively. It can be seen that if this ratio is above 1, less
than 10% of the time is spent scanning the background.
Open-loop saccade accuracy has been evaluated by switching off the smooth pursuit module, and measuring the error vector between the center of gravity of the light spot and the
center of the narrow-field retina after each saccade, for six different light spots spread over
the field of view. The error vectors were found to be always less than 2° in magnitude, with
different orientations in each case. Whenever the moving eye returned to the same light spot,
the error vector was the same. This shows that the residual error is not due to random noise,
but to the limited accuracy of visuo-motor mapping within the saccadic control chip. The
magnitude of the error is always low enough that the target light spot is completely visible
by the moving eye, thereby ensuring that the smooth pursuit module can indeed correct the
error when enabled.
4 CONCLUSION
The oculo-motor system described herein performs as intended, thereby demonstrating the
value of a neuromorphic engineering approach in the case of a relatively complex task
involving mechanical and optical components. This system provides an experimental platform for studying active vision, whereby a visual system acts on itself in order to facilitate
perception of its surroundings. Besides saccadic exploration and smooth pursuit, a moving eye can be exploited to improve vision in many other ways. For instance, resolution
shortcomings in retinas incorporating only a modest number of pixels can be overcome
by continuously sweeping the field of view back and forth, thereby providing continuous
information in space, although not simultaneously in time. In binocular vision, 3D information perception by stereopsis is also made easier if the fields of view can be aligned by
vergence control [12]. Besides active vision, the oculo-motor system also lends itself as
a framework for testing and demonstrating other analog VLSI vision circuits. As already
mentioned, due to its modular architecture, it is possible to insert additional visual processing chips either in the saccadic exploration module, or in the smooth pursuit module,
in order to make the current light-source oriented system suitable for operation in natural
visual environments.
Acknowledgments
The authors wish to express their gratitude to all their colleagues at CSEM who contributed
to this work. Special thanks are due to Patrick Debergh for the micro-prism light deflection concept, to Friedrich Heitger for designing and building the mechanical device, and
to Edoardo Franzi for designing and building the related electronic interface. Thanks are
also due to Arnaud Tisserand, Friedrich Heitger, Eric Vittoz, Reid Harrison, Theron Stanford, and Edoardo Franzi for helpful comments on the manuscript. Mr. Roland Lagger,
from Portescap, La Chaux-de-Fonds, Switzerland, provided friendly assistance in a critical
mechanical assembly step.
References
[1] C. Mead. Analog VLSI and Neural Systems. Addison Wesley, 1989.
[2] T. S. Lande, editor. Neuromorphic Systems Engineering. Kluwer Academic Publishers, Dordrecht, 1998.
[3] K. Boahen. Retinomorphic vision systems II: Communication channel design. In IEEE Int.
Symp. Circuits and Systems (ISCAS'96), Atlanta, May 1996.
[4] A. Mortara, E. Vittoz, and P. Venier. A communication scheme for analog VLSI perceptive
systems. IEEE Journal of Solid-State Circuits, 30, June 1995.
[5] C.M. Higgins. Multi-chip neuromorphic motion processing. In Conference on Advanced Research in VLSI, Atlanta, March 1999.
[6] T. K. Horiuchi, B. Bishofberger, and C. Koch. An analog VLSI saccadic eye movement system. In Advances in Neural Information Processing Systems 6, 1994.
[7] T. K. Horiuchi. Analog VLSI-Based, Neuromorphic Sensorimotor Systems: Modeling the Primate Oculomotor System. PhD thesis, Caltech, Pasadena, 1997.
[8] P. Venier. A contrast-sensitive silicon retina based on conductance modulation in a diffusion network. In 6th Int. Conf. Microelectronics for Neural Networks and Fuzzy Systems (MicroNeuro'97), Dresden, Sept 1997.
[9] O. Landolt. Place Coding in Analog VLSI - A Neuromorphic Approach to Computation. Kluwer
Academic Publishers, Dordrecht, 1998.
[10] T. G. Morris and S. P. DeWeerth. Analog VLSI excitatory feedback circuits for attentional shifts
and tracking. Analog Integrated Circuits and Signal Processing, 13, May-June 1997.
[11] O. Landolt. Place coding in analog VLSI and its application to the control of a light deflection
system. In MicroNeuro'97, Dresden, Sept 1997.
[12] M. Mahowald. An Analog VLSI System for Stereoscopic Vision. Kluwer Academic Publishers,
Boston, 1994.
698 | 1,639 | Algorithms for Independent Components
Analysis and Higher Order Statistics
Daniel D. Lee
Bell Laboratories
Lucent Technologies
Murray Hill, NJ 07974
Uri Rokni and Haim Sompolinsky
Racah Institute of Physics and
Center for Neural Computation
Hebrew University
Jerusalem, 91904, Israel
Abstract
A latent variable generative model with finite noise is used to describe several different algorithms for Independent Components Analysis (ICA). In particular, the Fixed Point ICA algorithm is shown to
be equivalent to the Expectation-Maximization algorithm for maximum
likelihood under certain constraints, allowing the conditions for global
convergence to be elucidated. The algorithms can also be explained by
their generic behavior near a singular point where the size of the optimal generative bases vanishes. An expansion of the likelihood about this
singular point indicates the role of higher order correlations in determining the features discovered by ICA. The application and convergence of
these algorithms are demonstrated on a simple illustrative example.
Introduction
Independent Components Analysis (ICA) has generated much recent theoretical and practical
ICA attempts to decompose the observed data into components that are as statistically independent from each other as possible, and can be viewed as a nonlinear generalization of
Principal Components Analysis (PCA). Some applications of ICA include blind separation
of audio signals, beamforming of radio sources, and discovery of features in biomedical
traces [1].
There have also been a number of approaches to deriving algorithms for ICA [2, 3, 4].
Fundamentally, they all consider the problem of recovering independent source signals {s}
from observations {x} such that:
x_i = Σ_{j=1}^{M} W_ij s_j,   i = 1..N   (1)
Here, W_ij is an N × M mixing matrix where the number of sources M is not greater than
the dimensionality N of the observations. Thus, the columns of W represent the different
independent features present in the observed data.
Bell and Sejnowski formulated their Infomax algorithm for ICA as maximizing the mutual
information between the data and a nonlinearly transformed version of the data [5]. The
covariant version of this algorithm uses the natural gradient of the mutual information to iteratively update the estimate for the demixing matrix W^{-1} in terms of the estimated components ŝ = W^{-1} x [6]:

ΔW^{-1} ∝ [I - ⟨g(ŝ) ŝ^T⟩] W^{-1},   (2)
The nonlinearity g(ŝ) differentiates the features learned by the Infomax ICA algorithm
from those found by conventional PCA. Fortunately, the exact form of the nonlinearity
used in Eq. 2 is not crucial for the success of the algorithm, as long as it preserves the
sub-Gaussian or super-Gaussian nature of the sources [7].
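As a concrete reading of Eq. 2, one learning step might look as follows; a sketch assuming whitened data arranged as an N x P matrix and g = tanh, one admissible choice for super-Gaussian sources.

import numpy as np

def infomax_step(W_inv, X, lr=0.01):
    S = W_inv @ X                             # estimated sources (M x P)
    P = X.shape[1]
    natural_grad = (np.eye(W_inv.shape[0]) - (np.tanh(S) @ S.T) / P) @ W_inv
    return W_inv + lr * natural_grad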
Another approach to ICA due to Hyvarinen and Oja was derived from maximizing objective
functions motivated by projection pursuit [8]. Their Fixed Point ICA algorithm attempts
to self-consistently solve for the extremum of a nonlinear objective function. The simplest
formulation considers a single source M = 1, so that the mixing matrix is a single vector w, constrained to be unit length |w| = 1. Assuming that the data is first preprocessed and whitened, the Fixed Point ICA algorithm iteratively updates the estimate of w as follows:

w ← ⟨x g(w^T x)⟩ - λ_G w,   w ← w / |w|,   (3)

where g(w^T x) is a nonlinear function and λ_G is a constant given by the integral over the Gaussian:

λ_G = ∫ dv (2π)^{-1/2} e^{-v²/2} g'(v).   (4)
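A sketch of the single-unit iteration of Eqs. 3-4, again with g = tanh assumed; the Gaussian integral defining λ_G is evaluated by Gauss-Hermite quadrature rather than in closed form.

import numpy as np

def lambda_G(g_prime, n=50):
    # Probabilists' Hermite rule integrates against exp(-v^2 / 2).
    v, w = np.polynomial.hermite_e.hermegauss(n)
    return (w @ g_prime(v)) / np.sqrt(2.0 * np.pi)

def fixed_point_unit(X, iters=100, rng=None):
    rng = rng or np.random.default_rng()
    N, P = X.shape                            # whitened data, N x P
    lam = lambda_G(lambda v: 1.0 - np.tanh(v) ** 2)   # g'(v) for g = tanh
    w = rng.standard_normal(N)
    w /= np.linalg.norm(w)
    for _ in range(iters):
        w = (X @ np.tanh(w @ X)) / P - lam * w        # Eq. 3, first line
        w /= np.linalg.norm(w)                        # Eq. 3, renormalize
    return w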
The Fixed Point algorithm can be extended to an arbitrary number M ≤ N of sources by using Eq. 3 in a serial deflation scheme. Alternatively, the M columns of the mixing matrix W can be updated simultaneously by orthogonalizing the N × M matrix:

W ← ⟨x g(W^T x)^T⟩ - λ_G W.   (5)
Under the assumption that the observed data match the underlying ICA model, x = W s, it
has been shown that the Fixed Point algorithm converges locally to the correct solution with
at least quadratic convergence. However, the global convergence of the generic Fixed Point
ICA algorithm is uncertain. This is in contrast to the gradient-based Infomax algorithm
whose convergence is guaranteed as long as a sufficiently small step size is chosen.
In this paper, we first review the latent variable generative model framework for Independent Components Analysis. We then consider the generative model in the presence of finite
noise, and show how the Fixed Point ICA algorithm can be related to an Expectation-Maximization algorithm for maximum likelihood. This allows us to elucidate the conditions under which the Fixed Point algorithm is guaranteed to globally converge. Assuming
that the data are indeed generated from independent components, we derive the optimal
parameters for convergence. We also investigate how the optimal size of the ICA mixing
matrix varies as a function of the added noise, and demonstrate the presence of a singular
point. By expanding the likelihood about this singular point, the behavior of the ICA algorithms can be related to the higher order statistics present in the data. Finally, we illustrate
the application and convergence of these ICA algorithms on some artificial data.
Generative model
A convenient method for interpreting the different ICA algorithms is in terms of the hidden,
or latent, variable generative model shown in Fig. 1 [9, 10].

Figure 1: Generative model for ICA algorithms. s are the M hidden variables, η_j are additive Gaussian noise terms, and x = Ws + η are the N visible variables.

The hidden variables {s_j} correspond to the different independent components and are assumed to have the factorized non-Gaussian prior probability distribution:

    P(s) = \prod_{j=1}^{M} e^{-F(s_j)}        (6)
Once the hidden variables are instantiated, the visible variables {x_i} are generated via a linear mapping through the generative weights W:

    P(x|s) = \prod_{i=1}^{N} \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left[-\frac{1}{2\sigma^2}\Big(x_i - \sum_j W_{ij} s_j\Big)^2\right]        (7)

where \sigma^2 is the variance of the Gaussian noise added to the visible variables.
The probability of the data given this model is then calculated by integrating over all possible values of the hidden variables:

    P(x) = \int ds\, P(s)\, P(x|s) = (2\pi\sigma^2)^{-N/2} \int ds\, \exp\left[-F(s) - \frac{1}{2\sigma^2}(x - Ws)^2\right]        (8)
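Since Eq. 8 expresses P(x) as an average of P(x|s) over the prior, it can be estimated by simple Monte Carlo when the integral has no closed form. The sketch below assumes the binary prior introduced later in the text; it illustrates the marginalization only and is not a procedure from the paper.

```python
import numpy as np

def log_marginal_mc(x, W, sigma2, n_samples=10_000, seed=0):
    """Monte Carlo estimate of ln P(x) in Eq. 8 for one observation x of length N."""
    rng = np.random.default_rng(seed)
    N, M = W.shape
    s = rng.choice([-1.0, 1.0], size=(n_samples, M))  # draws from the binary prior
    resid = x[None, :] - s @ W.T                      # x - W s for every sample
    log_p = (-0.5 * np.sum(resid**2, axis=1) / sigma2
             - 0.5 * N * np.log(2 * np.pi * sigma2))  # ln P(x|s)
    m = log_p.max()                                   # log-sum-exp for stability
    return m + np.log(np.mean(np.exp(log_p - m)))     # ln <P(x|s)>_{P(s)}
```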
In the limit that the added noise vanishes, σ² → 0, it has previously been shown that maximizing the likelihood of Eq. 8 is equivalent to the Infomax algorithm in Eq. 2 [11]. In the following analysis, we will consider the situation when the variance of the noise is nonzero, σ² ≠ 0.
Expectation-Maximization
We assume that the data has initially been preprocessed and spherized: ⟨x_i x_j⟩ = δ_{ij}. Unfortunately, for finite noise σ² and an arbitrary prior F(s_j), deriving a learning rule for W in closed form is analytically intractable. However, it becomes possible to derive a simple Expectation-Maximization (EM) learning rule under the constraint:

    W = \xi W_0, \qquad W_0^T W_0 = I        (9)

which implies that W is orthogonal, and ξ is the length of the individual columns of W.
Indeed, for data that obeys the ICA model, x = W s, it can be shown that the optimal W
must satisfy this orthogonality condition. By assuming the constraint in Eq. 9 for arbitrary
data, the posterior distribution P(s|x) becomes conveniently factorized:

    P(s|x) \propto \prod_{j=1}^{M} \exp\left[-F(s_j) + \frac{1}{\sigma^2}(W^T x)_j\, s_j - \frac{\xi^2}{2\sigma^2} s_j^2\right]        (10)
For the E-step, this factorized form allows the expectation function \int ds\, P(s|x)\, s = g(W^T x) to be analytically evaluated. This expectation is then used in the M-step to find the new estimate W':

    \langle x\, g(W^T x)^T \rangle - W' \Lambda_s = 0        (11)
where Λ_s is a symmetric matrix of Lagrange multipliers that constrain the new W' to be orthogonal. Eq. 11 is easily solved by taking the reduced singular value decomposition of the rectangular matrix:

    \langle x\, g(W^T x)^T \rangle = U D V^T        (12)

where U^T U = V V^T = I and D is a diagonal M × M matrix. Then the solution for the EM estimate of the mixing matrix is given by:

    W' = U V^T        (13)
    \Lambda_s = V D V^T        (14)
As a specific example, consider the following prior for binary hidden variables: P(s) = \frac{1}{2}[\delta(s-1) + \delta(s+1)]. In this case, the expectation \int ds\, P(s|x)\, s = \tanh(W^T x / \sigma^2), and so the EM update rule is given by orthogonalizing the matrix:

    W \leftarrow \left\langle x \tanh\!\left(\frac{1}{\sigma^2} W^T x\right)^T \right\rangle        (15)
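In code, the whole M-step of Eqs. 11-15 therefore reduces to a single SVD. A minimal sketch for the binary prior, assuming whitened data stored as an N × T array:

```python
import numpy as np

def em_step(W, x, sigma2):
    """One EM update (Eqs. 12-15) for the binary prior.

    W is the current N x M estimate with orthogonal columns; the new estimate
    is W' = U V^T (Eq. 13), with Lambda_s = V diag(D) V^T (Eq. 14).
    """
    T = x.shape[1]
    C = (x @ np.tanh(W.T @ x / sigma2).T) / T   # <x g(W^T x)^T>, the matrix of Eq. 12
    U, D, Vt = np.linalg.svd(C, full_matrices=False)
    return U @ Vt
```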
Fixed Point ICA
Besides the presence of the linear term λ_G W in Eq. 5, the EM update rule looks very much like that of the Fixed Point ICA algorithm. It turns out that without this linear term, the convergence of the naive EM algorithm is much slower than that of Eq. 5. Here we show that it is possible to interpret the role of this linear term in the Fixed Point ICA algorithm within the framework of this generative model.
Suppose that the distribution of the observed data P_D(x) is actually a mixture between an isotropic distribution P_0(x) and a non-isotropic distribution P_1(x):

    P_D(x) = a P_0(x) + (1 - a) P_1(x)        (16)
Because the isotropic part does not break rotational symmetry, it does not affect the choice of the directions of the learned basis W. Thus, it is more efficient to apply the learning algorithm to only the non-isotropic portion of the distribution, P_1(x) ∝ P_D(x) − a P_0(x), rather than to the whole observed distribution P_D(x). Applying EM to P_1(x) results in a correction term arising from the subtracted isotropic distribution. With this correction, the EM update becomes:

    W \leftarrow \langle x\, g(W^T x) \rangle - a \lambda_G W        (17)

which is equivalent to the Fixed Point ICA algorithm when a = 1.
Unfortunately, it is not clear how to compute an appropriate value for a to use in fitting data. Taking a very small value, a ≪ 1, will result in a learning rule that is very similar to the naive EM update rule. This implies that the algorithm will be guaranteed to monotonically converge, albeit very slowly, to a local maximum of the likelihood. On the other hand, choosing a large value, a ≫ 1, will result in a subtracted probability density P_1(x) that is negative everywhere. In this case, the algorithm will converge slowly to a local minimum of the likelihood. For the Fixed Point algorithm, which operates in the intermediate regime a ≈ 1, the algorithm is likely to converge most rapidly. However, it is also in this situation that the subtracted density P_1(x) could have both positive and negative regions, and the algorithm is no longer guaranteed to converge.
Figure 2: Size of the optimal generative bases as a function of the added noise σ², showing the singular point behavior around σ_c² ≈ 1.
Optimal value of a
In order to determine the optimal value of a, we make the assumption that the observed data obeys the ICA model, x = As. Note that the statistics of the sources in the data need not match the assumed prior distribution of the sources in the generative model Eq. 6. With this assumption, which is not related to the mixture assumption in Eq. 16, it is easy to show that W = A is a fixed point of the algorithm. By analyzing the behavior of the algorithm in the vicinity of this fixed point, a simple expression emerges for the change in deviations from this fixed point, δW, after a single iteration of Eq. 17:
    \delta W_{ij} \leftarrow \frac{\langle g'(s) \rangle - a\lambda_G}{\langle s\, g(s) \rangle - a\lambda_G}\, \delta W_{ij} + O(\delta W^3)        (18)
where the averaging here is over the true source distribution, assumed for simplicity to be
identical for all sources. Thus, the algorithm converges most rapidly if one chooses:
    a_{\rm opt} = \frac{\langle g'(s) \rangle}{\lambda_G}        (19)
so that the local convergence is cubic. From Eq. 18 one can show that the condition for the stability of the fixed point is given by a < a_c, where:

    a_c = \frac{\langle s\, g(s) + g'(s) \rangle}{2\lambda_G}        (20)
Thus, for a = 0, the stability criterion in Eq. 18 is equivalent to ⟨s g(s)⟩ > ⟨g'(s)⟩. For the cubic nonlinearity g(s) = s³, this implies that the algorithm will find the true independent features only if the source distribution has positive kurtosis.
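Because the constants in Eqs. 19-20 are one-dimensional averages, a_opt and a_c are easy to evaluate numerically. A sketch for g(s) = tanh(s), taking binary sources s = ±1 as an illustrative source distribution (not the one used in the paper's experiment):

```python
import numpy as np

g = np.tanh
gp = lambda s: 1.0 - np.tanh(s)**2                  # g'(s)

# lambda_G: the Gaussian integral <s g(s)> of Eq. 4, by Monte Carlo.
z = np.random.default_rng(0).standard_normal(1_000_000)
lam_G = np.mean(z * g(z))                           # roughly 0.61 for tanh

s = np.array([-1.0, 1.0])                           # assumed binary sources
a_opt = np.mean(gp(s)) / lam_G                      # Eq. 19: cubic convergence
a_c = np.mean(s * g(s) + gp(s)) / (2 * lam_G)       # Eq. 20: stability bound
print(a_opt, a_c)
```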
Singular point expansion
Let us now consider how the optimal size ξ of the weights W varies as a function of the noise parameter σ². For very small σ² ≪ 1, the weights W are approximately described by the Infomax algorithm of Eq. 2, and the lengths of the columns should be unity in order to match the covariance of the data. For large σ² ≫ 1, however, the optimal size of the weights should be very small because the covariance of the noise is already larger than that of the data. In fact, for Factor Analysis, which is a special case of the generative model with F(s) = ½ s² in Eq. 6, it can be shown that the weights are exactly zero, W = 0, for σ² > 1.
Thus, the size of the optimal generative weights W varies with σ² as shown qualitatively in Fig. 2. Above a certain critical noise value σ_c² ≈ 1, the weights are exactly equal to zero, W = 0. Only below this critical value do the weights become nonzero.

Figure 3: Convergence of the modified EM algorithm as a function of a. With g(s) = tanh(s) as the nonlinearity, the likelihood ⟨ln cosh(W^T x)⟩ is plotted as a function of the iteration number for a = 0.9 and a = 1.5. The optimal basis W is plotted on the two-dimensional data distribution when the likelihood is maximized (top) and minimized (bottom).

We expand
the likelihood of the generative model in the vicinity of this singular point. This expansion
is well-behaved because the size of the generative weights W acts as a small perturbative
parameter in this expansion. The log likelihood of the model around this singular value is
then given by:
    L = -\frac{1}{4} \mathrm{Tr}\left[W W^T - (1 - \sigma^2) I\right]^2 + \frac{1}{4!} \sum_{ijklm} \mathrm{kurt}(s_m)\, \langle x_i x_j x_k x_l \rangle_c\, W_{im} W_{jm} W_{km} W_{lm} + O(1 - \sigma^2)^3        (21)
where kurt(s_m) represents the kurtosis of the prior distribution over the hidden variables.
Note that this expansion is valid for any symmetric prior, and differs from other expansions
that assume small deviations from a Gaussian prior [12, 13]. Eq. 21 shows the importance
of the fourth-order cumulant of the observed data in breaking the rotational degeneracy of
the weights W. The generic behavior of ICA is manifest in optimizing the cumulant term
in Eq. 21, and again depends crucially on the sign of the kurtosis that is used for the prior.
Example with artificial data
As an illustration of the convergence of the algorithm in Eq. 17, we consider the simple
two-dimensional uniform distribution:
    P(x_1, x_2) = \begin{cases} 1/12, & -\sqrt{3} \le x_1, x_2 \le \sqrt{3} \\ 0, & \text{otherwise} \end{cases}        (22)
With g(s) = tanh(s) as the nonlinearity, Fig. 3 shows how the overall likelihood converges for different values of the parameter a as the algorithm is iterated. For a ≤ 1.0, the algorithm converges to a maximum of the likelihood, with the fastest convergence at a_opt = 0.9. However, for a > 1.2, the algorithm converges to a minimum of the likelihood. At an intermediate value, a = 1.1, the likelihood does not converge at all, fluctuating wildly between the maximum and minimum likelihood solutions. The maximum
likelihood solution shows the basis vectors in W aligned with the sides of the square distribution, whereas the minimum likelihood solution has the basis aligned with the diagonals. These solutions can also be understood as maximizing and minimizing the kurtosis terms in Eq. 21.
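The experiment is simple enough to reproduce in outline. The sketch below draws data from Eq. 22, iterates the modified update of Eq. 17 with SVD orthogonalization, and tracks the likelihood proxy ⟨ln cosh(W^T x)⟩; the sample sizes, seed, and iteration count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-np.sqrt(3), np.sqrt(3), size=(2, 20_000))  # Eq. 22, unit variance

z = rng.standard_normal(500_000)
lam_G = np.mean(z * np.tanh(z))               # Eq. 4 estimate for g = tanh

def run(a, n_iter=15):
    W = np.linalg.qr(rng.standard_normal((2, 2)))[0]
    history = []
    for _ in range(n_iter):
        W = (x @ np.tanh(W.T @ x).T) / x.shape[1] - a * lam_G * W   # Eq. 17
        U, _, Vt = np.linalg.svd(W)
        W = U @ Vt                                                  # orthogonalize
        history.append(np.mean(np.log(np.cosh(W.T @ x))))  # <ln cosh(W^T x)>
    return history

for a in (0.9, 1.1, 1.5):
    print(a, run(a)[-1])
```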
Discussion
The utility of the latent variable generative model is demonstrated on deriving algorithms for ICA. By constraining the generative weights to be orthogonal, an EM algorithm is analytically obtained. By interpreting the data to be fitted as a mixture of isotropic and non-isotropic parts, a simple correction to the EM algorithm is derived. Under certain conditions, this modified algorithm is equivalent to the Fixed Point ICA algorithm, and converges much more rapidly than the naive EM algorithm. The optimal parameter for convergence is derived assuming the data is consistent with the ICA generative model. There also exists a critical value for the noise parameter in the generative model, about which a controlled expansion of the likelihood is possible. This expansion makes clear the role of higher order statistics in determining the generic behavior of different ICA algorithms.
We acknowledge the support of Bell Laboratories, Lucent Technologies, the US-Israel Binational Science Foundation, and the Israel Science Foundation. We also thank Hagai Attias, Simon Haykin, Juha Karhunen, Te-Won Lee, Erkki Oja, Sebastian Seung, Boris Shraiman, and Oren Shriki for helpful discussions.
References
[1] Haykin, S (1999). Neural networks: a comprehensive foundation. 2nd ed., Prentice-Hall, Upper Saddle River, NJ.
[2] Jutten, C & Herault, J (1991). Blind separation of sources, part I: An adaptive algorithm based on neuromimetic architecture. Signal Processing 24, 1-10.
[3] Comon, P (1994). Independent component analysis: a new concept? Signal Processing 36, 287-314.
[4] Roth, Z & Baram, Y (1996). Multidimensional density shaping by sigmoids. IEEE Trans. Neural Networks 7, 1291-1298.
[5] Bell, AJ & Sejnowski, TJ (1995). An information maximization approach to blind separation and blind deconvolution. Neural Computation 7, 1129-1159.
[6] Amari, S, Cichocki, A & Yang, H (1996). A new learning algorithm for blind signal separation. Advances in Neural Information Processing Systems 8, 757-763.
[7] Lee, TW, Girolami, M, & Sejnowski, TJ (1999). Independent component analysis using an extended infomax algorithm for mixed sub-Gaussian and super-Gaussian sources. Neural Computation 11, 609-633.
[8] Hyvarinen, A & Oja, E (1997). A fast fixed-point algorithm for independent component analysis. Neural Computation 9, 1483-1492.
[9] Hinton, G & Ghahramani, Z (1997). Generative models for discovering sparse distributed representations. Philosophical Transactions Royal Society B 352, 1177-1190.
[10] Attias, H (1998). Independent factor analysis. Neural Computation 11, 803-851.
[11] Pearlmutter, B & Parra, L (1996). A context-sensitive generalization of ICA. In ICONIP '96, 151-157.
[12] Nadal, JP & Parga, N (1997). Redundancy reduction and independent component analysis: conditions on cumulants and adaptive approaches. Neural Computation 9, 1421-1456.
[13] Cardoso, JF (1999). High-order contrasts for independent component analysis. Neural Computation 11, 157-192.
DYNAMIC, NON-LOCAL ROLE BINDINGS AND
INFERENCING IN A LOCALIST NETWORK FOR
NATURAL LANGUAGE UNDERSTANDING*
Trent E. Lange
Michael G. Dyer
Artificial Intelligence Laboratory
Computer Science Department
University of California, Los Angeles
Los Angeles, CA 90024
ABSTRACT
This paper introduces a means to handle the critical problem of non-local role-bindings in localist spreading-activation networks. Every
conceptual node in the network broadcasts a stable, uniquely-identifying
activation pattern, called its signature. A dynamic role-binding is created when a role's binding node has an activation that matches the
bound concept's signature. Most importantly, signatures are propagated
across long paths of nodes to handle the non-local role-bindings necessary for inferencing. Our localist network model, ROBIN (ROle
Binding and Inferencing Network), uses signature activations to robustly represent schemata role-bindings and thus perform the inferencing, plan/goal analysis, schema instantiation, word-sense disambiguation, and dynamic re-interpretation portions of the natural language understanding process.
MOTIVATION
Understanding natural language is a difficult task, often requiring a reader to make multiple inferences to understand the motives of actors and to connect actions that are unrelated
on the basis of surface semantics alone. An example of this is the sentence:
S1: "John put the pot inside the dishwasher because the police were coming."

A complex plan/goal analysis of S1 must be made to understand the actors' actions and disambiguate "pot" to MARIJUANA by overriding the local context of "dishwasher".
*This research is supported in part by a contract with the JTF program of the DOD
and grants from the ITA Foundation and the Hughes Artificial Intelligence Center.
DISTRIBUTED SPREADING-ACTIVATION NETWORKS
Distributed connectionist models, such as [McClelland and Kawamoto, 1986] and
[Touretzky and Hinton, 1985], are receiving much interest because their models are closer
to the neural level than symbolic systems, such as [Dyer, 1983]. Despite this attention,
no distributed network has yet exhibited the ability to handle natural language input having complexity even near to that of S1. The primary reason for this current lack of
success is the inability to perform dynamic role-bindings and to propagate these binding
constraints during inferencing. Distributed networks, furthermore, are sequential at the
knowledge level and lack the representation of structure needed to handle complex
conceptual relationships [Feldman, 1986].
LOCALIST SPREADING-ACTIVATION NETWORKS
Localist spreading-activation networks, such as [Cottrell and Small, 1983] and [Waltz and
Pollack, 1985], also seem more neurally plausible than symbolic logic/Lisp-based systems. Knowledge is represented in localist networks by simple computational nodes and
their interconnections, with each node standing for a distinct concept. Activation on a
conceptual node represents the amount of evidence available for that concept in the current
context.
Unlike distributed networks, localist networks are parallel at the knowledge level and are
able to represent structural relationships between concepts. Because of this, many different inference paths can be pursued simultaneously; a necessity if the quick responses that
people are able to generate are to be modelled.
Unfortunately, however, the evidential activation on the conceptual nodes of previous localist networks gives no clue as to where that evidence came from. Because of this, previous localist models have been similar to distributed connectionist models in their inability to handle dynamic, non-local bindings -- and thus remain unsuited to higher-level
knowledge tasks where inferencing is required.
ROBIN
Our research has resulted in ROBIN (ROle Binding and Inferencing Network), a localist
spreading-activation model with additional structure to handle the dynamic role-bindings
and inferencing needed for building in-depth representations of complex and ambiguous
sentences, such as S1. ROBIN's networks are built entirely with simple computational
elements that clearly have the possibility of realization at the neural level.
Figure 1 shows an overview of a semantic network embedded in ROBIN after input for
sentence S1 has been presented. The network has made the inferences necessary to form a
plan/goal analysis of the actors' actions, with the role-bindings being instantiated
dynamically with activation. The final interpretation selected is the most highly-activated
path of frames inside the darkly shaded area.
As in previous localist models, ROBIN's networks have a node for every known concept
Figure 1. Semantic network embedded in ROBIN, showing inferences dynamically made after S1 is presented. Thickness of frame boundaries shows their amount of evidential activation. Darkly shaded area indicates the most highly-activated path of frames representing the most probable plan/goal analysis of the input. Dashed area shows the discarded dishwasher-cleaning interpretation. Frames outside of both areas show a small portion of the network that received no evidential or signature activation. Each frame is actually represented by the connectivity of a set of nodes.
in the network. Relations between concepts are represented by weighted connections between their respective nodes. The activation of a conceptual node is evidential,
corresponding to the amount of evidence available for the concept and the likelihood that
it is selected in the current context.
Simply representing the amount of evidence available for a concept, however, is not sufficient for complex language understanding tasks. Role-binding requires that some means exist for identifying a concept that is being dynamically bound to a role in distant areas of the network. A network may have never heard about JOHN having the goal of AVOID-DETECTION of his MARIJUANA, but it must be able to infer just such a possibility to understand S1.
SIGNATURE ACTIVATION IN ROBIN
Every conceptual node in ROBIN's localist network has associated with it an identification node broadcasting a stable, uniquely-identifying activation pattern, called its signature. A dynamic binding is created when a role's binding node has an activation that matches the activation of the bound concept's signature node.
Figure 2. Several concepts and their uniquely-identifying signature nodes are shown, along with the Actor role of the TRANSFER-INSIDE frame. The dotted arrow from the binding node (black circle) to the signature node of JOHN represents the virtual binding indicated by the shared signature activation, and does not exist as an actual connection.
In Figure 2, the virtual binding of the Actor role node of action TRANSFER-INSIDE to JOHN is represented by the fact that its binding node, the solid black circle, has the same activation (3.1) as JOHN's signature node.
PROPAGATION OF SIGNATURES FOR ROLE-BINDING
The most important feature of ROBIN's signature activations is that the model passes them, as activation, across long paths of nodes to handle the non-local role-bindings necessary for inferencing. Figure 3 illustrates how the structure of the network automatically
accomplishes this in a ROBIN network segment that implements a portion of the
semantic network of Figure 1.
Figure 3. Simplified ROBIN network segment showing parallel paths over which evidential activation (bottom plane) and signature activation (top plane) are spread for inferencing. Signature nodes (rectangles) and binding nodes (solid black circles) are in the top plane. Thickness of conceptual node boundaries (ovals) represents their level of evidential activation after quiescence has been reached for sentence S1. (The names on the nodes are not used by ROBIN in any way, being used simply to set up the network's structure initially and to aid in analysis.)
Evidential activation is spread through the paths between conceptual nodes on the bottom
plane (i.e. TRANSFER-INSIDE and its Object role), while signature activation for dynamic role-bindings is spread across the parallel paths of corresponding binding nodes on the top
plane. Nodes and connections for the Actor, Planner, and Location roles are not shown.
Initially there is no activation on any of the conceptual or binding nodes in the network. When input for S1 is presented, the concept TRANSFER-INSIDE receives evidential activation from the phrase "John put the pot inside the dishwasher", while the binding nodes of its Object role get the activations (6.8 and 9.2) of the signatures for MARIJUANA and COOKING-POT, representing the candidate bindings from the word "pot".
As activation starts to spread, INSIDE-OF receives evidential activation from
TRANSFER-INSIDE, representing the strong evidence that something is now inside of
something else. Concurrently, the signature activations on the binding nodes of
TRANSFER-INSIDE's Object propagate to the corresponding binding nodes of INSIDE-OF's
Object. The network has thus made the crucial inference of exactly which thing is inside
of the other. Similarly, as time goes on, INSIDE-OF-DISHWASHER and INSIDE-OF-OPAQUE receive evidential activation, with inferencing continuing by the propagation of
signature activation to their corresponding binding nodes.
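The propagation just described can be caricatured in a few lines of code. In the toy sketch below (the names, weights, and update scheme are illustrative simplifications, not ROBIN's actual network), evidential activation attenuates along frame-to-frame connections while binding nodes, acting as max units, pass signature activations along unchanged:

```python
# Signature activations broadcast by each concept's identification node.
signatures = {"JOHN": 3.1, "MARIJUANA": 6.8, "COOKING-POT": 9.2}

# Chain of frames whose Object roles are linked by role-binding paths.
frames = ["TRANSFER-INSIDE", "INSIDE-OF", "INSIDE-OF-DISHWASHER"]
EV_WEIGHT = 0.9                      # assumed evidential connection weight

evidential = dict.fromkeys(frames, 0.0)
object_bindings = {f: set() for f in frames}

# The input phrase provides evidence and two candidate Object bindings.
evidential["TRANSFER-INSIDE"] = 1.0
object_bindings["TRANSFER-INSIDE"] = {signatures["MARIJUANA"],
                                      signatures["COOKING-POT"]}

for prev, nxt in zip(frames, frames[1:]):
    # Evidential activation spreads through weighted connections...
    evidential[nxt] = EV_WEIGHT * evidential[prev]
    # ...while binding nodes act as max units, so each signature activation
    # arrives at the next frame's Object binding node unchanged.
    object_bindings[nxt] = set(object_bindings[prev])

by_signature = {v: k for k, v in signatures.items()}
for f in frames:
    names = sorted(by_signature[s] for s in object_bindings[f])
    print(f"{f}: evidence={evidential[f]:.2f}, Object candidates={names}")
```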
SPREAD OF ACTIVATION IN SENTENCE S1
The rest of the semantic network needed to understand S1 (Figure 1) is also built utilizing the structure of Figure 3. Both evidential and signature activation continue to spread from the phrase "John put the pot inside the dishwasher", propagating along the chain of related concepts down to the CLEAN goal, with some reaching goal AVOID-DETECTION. The phrase "because the police were coming" then causes evidential and signature activation to spread along a path from TRANSFER-SELF to both goals POLICE-CAPTURE and AVOID-DETECTION, until the activation of the network finally settles.
SELECTING AMONG CANDIDATE BINDINGS
In Figure 3, signature activations for both of the ambiguous meanings of the word "pot"
were propagated along the Object roles, with MARIJUANA and COOKING-POT being the
candidate bindings for the role. The network's interpretation of which concept is selected
at any given time is the binding whose concept has greater evidential activation. Because
all candidate bindings are spread along the network, with none being discarded until processing is completed, ROBIN is easily able to handle meaning re-interpretations without
resorting to backtracking. For example, a re-interpretation of the word "pot" back to
COOKING-POT occurs when S1 is followed by "They were coming over for dinner."
During the interpretation of S1, COOKING-POT initially receives more evidential activation than MARIJUANA by connections from the highly stereotypical usage of the
dishwasher for the CLEAN goal. The network's decision between the two candidate bindings at that point would be that it was a COOKING-POT that was INSIDE-OF the DISHWASHER. However, reinforcement and feedback from the inference paths generated by the
POLICE's TRANSFER-SELF eventually cause MARIJUANA to win out. The final selection
of MARIJUANA over the COOKING-POT bindings is represented simply by the fact that
MARIJUANA has greater evidential activation. The resulting most highly-activated path
of nodes and non-local bindings represents the plan/goal analysis in Figure 1. A more
detailed description of ROBIN's network structure can be found in [Lange, 1989].
EVIDENTIAL VS SIGNATURE ACTIVATION
It is important to emphasize the differences between ROBIN's evidential and signature activation. Both are simply activation from a computational point of view, but they
propagate across separate pathways and fulfil different functions.
Evidential Activation:
1) Previous work -- Similar to the activation of previous localist models.
2) Function -- Activation on a node represents the amount of evidence available for a
node and the likelihood that its concept is selected in the current context.
3) Node pathways -- Spreads along weighted evidential pathways between related frames.
4) Dynamic structure -- Decides among candidate structures; i.e. in Figure 1, MARIJUANA is more highly-activated than COOKING-POT, so is selected as the currently
most plausible role-binding throughout the inference path.
Signature Activation:
1) Previous work -- First introduced in ROBIN.
2) Function -- Activation on a node is part of a unique pattern of signature activation
representing a dynamic, virtual binding of the signature's concept.
3) Node pathways -- Spreads along role-binding paths between corresponding roles of
related frames.
4) Dynamic structure -- Represents a potential (candidate) dynamic structure; i.e., that
either MARIJUANA or COOKING-POT is INSIDE-OF a DISHWASHER.
NETWORK BUILDING BLOCKS AND NEURAL
PLAUSIBILITY
ROBIN builds its networks with elements that each perform a simple computation on
their inputs: summation, summation with thresholding and decay, multiplication, or
maximization. The connections between units are either weighted excitatory or
inhibitory. Max units, i.e. those outputting the maximum of their inputs, are used
because of their ability to pass on signature activations without alteration.
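As a rough sketch of these building blocks (the threshold and decay values are assumptions for illustration, not taken from the paper), the four unit types can be written as simple update functions:

```python
import numpy as np

def summation_unit(inputs, weights):
    return float(np.dot(weights, inputs))           # weighted sum of inputs

def threshold_decay_unit(prev_activation, inputs, weights,
                         threshold=0.2, decay=0.9):
    net = float(np.dot(weights, inputs))
    drive = max(net - threshold, 0.0)               # thresholded net input
    return decay * prev_activation + drive          # activation decays over time

def multiplication_unit(inputs):
    return float(np.prod(inputs))                   # multiplicative gating

def max_unit(inputs):
    # Outputs the maximum input, so a signature activation arriving on any
    # incoming role-binding path is passed on without alteration.
    return float(np.max(inputs))
```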
ROBIN's most controversial element will likely be the signature-producing nodes that
generate the uniquely-identifying activations upon which dynamic role-binding is based.
These identifier nodes need to broadcast their unique signature activation throughout the
time the concept they represent is active, and be able to broadcast the same signature
whenever needed. Reference to neuroscience literature [Segundo et al., 1981, 1964] reveals that self-feedbacking groups of "pacemaker" neurons have roughly this ability:
"The mechanism described determines stable patterns in which, over a
clearly defined frequency range, the output discharge is locked in phase
and frequency ... " [Segundo et al., 1964]
Similar to pacemakers are central pattern generators (CPGs) [Ryckebusch et al., 1988],
which produce different stable patterns of neuronal oscillations. Groups of pacemakers or
CPGs could conceivably be used to build ROBIN's signature-producing nodes, with
oscillator phase-locking implementing virtual bindings of signatures. In any case, the
simple computational elements ROBIN is built upon appear to be as neurally plausible as
those of current distributed models.
FUTURE WORK
There are several directions for future research: (1) Self-organization of network structure
-- non-local bindings allow ROBIN to create novel network instances over its pre-existing
structure. Over time, repeated instantiations should cause modification of weights and recruitment of underutilized nodes to alter the network structure. (2) Signature dynamics -- currently, the identifying signatures are single arbitrary activations; instead, signatures
should be distributed patterns of activation that are learned adaptively over time, with
similar concepts possessing similar signature patterns.
CONCLUSION
This paper describes ROBIN, a domain-independent localist spreading-activation network model that approaches many of the problems of natural language understanding, including those of inferencing and frame selection. To allow this, the activation on the network's simple computational nodes is of one of two types: (a) evidential activation, to indicate the likelihood that a concept is selected, and (b) signature activation, to uniquely identify concepts and allow the representation and propagation of dynamic virtual role-bindings not possible in previous localist or distributed models.

ROBIN's localist networks use the spread of evidential and signature activation along their built-in structure of simple computational nodes to form a single most highly-activated path representing a plan/goal analysis of the input. It thus performs the inferencing, plan/goal analysis, schema instantiation, word-sense disambiguation, and dynamic re-interpretation tasks required for natural language understanding.
References
Cottrell, G. & Small, S. (1983): A Connectionist Scheme for Modeling Word-Sense Disambiguation. Cognition and Brain Theory, 6, p. 89-120.
Dyer, M. G. (1983): In-Depth Understanding: A Computer Model of Integrated Processing for Narrative Comprehension. MIT Press, Cambridge, MA.
Feldman, J. A. (1986): Neural Representation of Conceptual Knowledge. (Technical Report TR 189), Department of Computer Science, University of Rochester.
Lange, T. (1989): (forthcoming) High-Level Inferencing in a Localist Network. Master's Thesis, Department of Computer Science, University of California, Los Angeles.
McClelland, J. L. & Kawamoto, A. H. (1986): Mechanisms of Sentence Processing: Assigning Roles to Constituents of Sentences. In McClelland & Rumelhart (eds.) Parallel Distributed Processing: Vol 2. Cambridge, MA: The MIT Press.
Ryckebusch, S., Mead, C., & Bower, J. M. (1988): Modeling a Central Pattern Generator in Software and Hardware: Tritonia in Sea Moss. Proceedings of IEEE Conference on Neural Information Processing Systems -- Natural and Synthetic (NIPS-88), Denver, CO.
Segundo, J. P., Perkel, D. H., Schulman, J. H., Bullock, T. H., & Moore, G. P. (1964): Pacemaker Neurons: Effects of Regularly Spaced Synaptic Input. Science, Volume 145, Number 3627, p. 61-63.
Segundo, J. P. & Kohn, A. F. (1981): A Model of Excitatory Synaptic Interactions Between Pacemakers. Its Reality, its Generality, and the Principles Involved. Biological Cybernetics, Volume 40, p. 113-126.
Touretzky, D. S. & Hinton, G. E. (1985): Symbols among the Neurons: Details of a Connectionist Inference Architecture. Proceedings of the International Joint Conference on Artificial Intelligence, Los Angeles, CA.
Waltz, D. & Pollack, J. (1985): Massively Parallel Parsing: A Strongly Interactive Model of Natural Language Interpretation. Cognitive Science, Volume 9, Number 1, p. 51-74.
constraint:1 software:1 department:3 underutilized:1 across:4 remain:1 describes:1 bullock:1 modification:1 conceivably:1 eventually:1 mechanism:2 needed:4 dyer:7 kawamoto:2 available:4 robustly:1 top:3 completed:1 build:2 occurs:1 primary:1 ryckebusch:2 win:1 separate:1 reason:1 relationship:2 difficult:1 unfortunately:1 perform:2 neuron:3 discarded:2 arc:1 hinton:2 frame:9 arbitrary:1 police:3 introduced:1 required:2 sentence:7 connection:5 california:2 learned:1 darkly:2 able:5 pattern:9 program:1 built:4 max:1 including:1 critical:1 natural:7 representing:6 scheme:1 created:2 moss:1 understanding:7 literature:1 schulman:1 multiplication:1 embedded:2 par:2 ita:1 generator:2 foundation:1 controversial:1 sufficient:1 thresholding:1 principle:1 excitatory:2 supported:1 cpgs:2 allow:3 understand:4 distributed:10 boundary:2 depth:2 feedback:1 made:4 clue:1 reinforcement:1 simplified:1 nonlocal:1 emphasize:1 logic:1 decides:1 instantiation:3 active:1 reveals:1 conceptual:10 robin:24 disambiguate:1 reality:1 transfer:9 ca:2 complex:4 domain:1 spread:11 arrow:1 motivation:1 identifier:1 repeated:1 neuronal:1 aid:1 candidate:7 bower:1 down:1 showing:2 symbol:1 decay:1 evidence:6 sequential:1 illustrates:1 broadcasting:1 backtracking:1 simply:4 likely:1 binding:48 determines:1 ma:2 goal:12 oscillator:1 shared:1 called:2 oval:1 pas:1 people:1 inability:2 |