A tighter bound for graphical models
M.A.R. Leisink* and H.J. Kappen†
Department of Biophysics
University of Nijmegen, Geert Grooteplein 21
NL 6525 EZ Nijmegen, The Netherlands
{martijn,bert}@mbfys.kun.nl
Abstract
We present a method to bound the partition function of a Boltzmann machine neural network with any odd order polynomial. This
is a direct extension of the mean field bound, which is first order.
We show that the third order bound is strictly better than mean
field. Additionally we show in rough outline how this bound is
applicable to sigmoid belief networks. Numerical experiments indicate that an error reduction of a factor two is easily reached in
the region where expansion based approximations are useful.
1 Introduction
Graphical models have the capability to model a large class of probability distributions. The neurons in these networks are the random variables, whereas the
connections between them model the causal dependencies. Usually, some of the
nodes have a direct relation with the random variables in the problem and are
called 'visibles'. The other nodes, known as 'hiddens', are used to model more
complex probability distributions.
Learning in graphical models can be done as long as the likelihood that the visibles
correspond to a pattern in the data set can be computed. In general the time this
takes scales exponentially with the number of hidden neurons. For such architectures one has no other choice than to use an approximation for the likelihood.
A well known approximation technique from statistical mechanics, called Gibbs
sampling, was applied to graphical models in [1]. More recently, the mean field
approximation known from physics was derived for sigmoid belief networks [2]. For
this type of graphical models the parental dependency of a neuron is modelled by a
non-linear (sigmoidal) function of the weighted parent states [3]. It turns out that
the mean field approximation has the nice feature that it bounds the likelihood
from below. This is useful for learning, since a maximisation of the bound either
increases its accuracy or increases the likelihood for a pattern in the data set, which
is the actual learning process.
In this article we show that it is possible to improve the mean field approximation
*http://www.mbfys.kun.nl/~martijn
†http://www.mbfys.kun.nl/~bert
without losing the bounding properties. In section 2 we show the general theory
to create a new bound using an existing one, which is applied to a Boltzmann
machine in section 3. Boltzmann machines are another type of graphical models.
In contrast with belief networks the connections are symmetric and not directed [4].
A mean field approximation for this type of neural network was already described
in [5]. An improvement of this approximation was found by Thouless, Anderson
and Palmer in [6], which was applied to Boltzmann machines in [7]. Unfortunately,
this so called TAP approximation is not a bound. We apply our method to the mean
field approximation, which results in a third order bound. We prove that the latter
is always tighter.
Due to the limited space it is not possible to discuss the third order bound for
sigmoid belief networks in much detail. Instead, we show the general outline and
focus more on the experimental results in section 5. Finally, in section 6, we present
our conclusions.
2 Higher order bounds
Suppose we have a function $f_0(x)$ and a bound $b_0(x)$ such that $\forall x \; f_0(x) \geq b_0(x)$. Let $f_1(x)$ and $b_1(x)$ be two primitive functions of $f_0(x)$ and $b_0(x)$,

$$f_1(x) = \int dx\, f_0(x) \quad \text{and} \quad b_1(x) = \int dx\, b_0(x) \qquad (1)$$

such that $f_1(\nu) = b_1(\nu)$ for some $\nu$. Note that we can always add an appropriate constant to the primitive functions such that they are indeed equal at $x = \nu$.
Since the surface under $f_0(x)$ at the left as well as at the right of $x = \nu$ is obviously greater than the surface under $b_0(x)$, and the primitive functions are equal at $x = \nu$ (by construction), we know

$$\begin{cases} f_1(x) \leq b_1(x) & \text{for } x \leq \nu \\ f_1(x) \geq b_1(x) & \text{for } x \geq \nu \end{cases} \qquad (2)$$

or, in shorthand notation, $f_1(x) \lessgtr b_1(x)$. It is important to understand that the above result holds even if $f_0(\nu) > b_0(\nu)$. Therefore we are completely free to choose $\nu$.
If we repeat this and let $f_2(x)$ and $b_2(x)$ be two primitive functions of $f_1(x)$ and $b_1(x)$, again such that $f_2(\nu) = b_2(\nu)$, one can easily verify that $\forall x \; f_2(x) \geq b_2(x)$.
Thus given a lower bound of $f_0(x)$ we can create another lower bound. In case the
given bound is a polynomial of degree k, the new bound is a polynomial of degree
k + 2 with one additional variational parameter.
To illustrate this procedure, we derive a third order bound on the exponential
function starting with the well known linear bound: the tangent of the exponential
function at $x = \nu$. Using the procedure of the previous section we derive

$$\forall_{x,\nu} \quad f_0(x) = e^x \geq e^\nu (1 + x - \nu) = b_0(x) \qquad (3)$$

$$f_1(x) = e^x \lessgtr e^\mu + e^\nu \left( (1 + \mu - \nu)(x - \mu) + \tfrac{1}{2}(x - \mu)^2 \right) = b_1(x) \qquad (4)$$

$$\forall_{x,\mu,\lambda} \quad f_2(x) = e^x \geq e^\mu \left\{ 1 + x - \mu + e^\lambda \left( \tfrac{1-\lambda}{2}(x - \mu)^2 + \tfrac{1}{6}(x - \mu)^3 \right) \right\} = b_2(x) \qquad (5)$$

with $\lambda = \nu - \mu$.
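As a quick numerical sanity check (ours, not part of the paper), the third order bound reconstructed in equation 5 can be verified by sampling; `b2` below implements $b_2(x)$ as given above:

```python
import numpy as np

# Sample random (x, mu, lam) and check exp(x) >= b2(x) from equation (5).
rng = np.random.default_rng(0)
for _ in range(10_000):
    x, mu, lam = rng.uniform(-5, 5, size=3)
    d = x - mu
    b2 = np.exp(mu) * (1 + d + np.exp(lam) * ((1 - lam) / 2 * d**2 + d**3 / 6))
    assert np.exp(x) - b2 >= -1e-9 * max(1.0, abs(b2))
print("third order bound held on all sampled points")
```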
3 Boltzmann machines
In this section we derive a third order lower bound on the partition function of
a Boltzmann machine neural network using the results from the previous section.
The probability to find a Boltzmann machine in a state $\vec{s} \in \{-1, +1\}^N$ is given by

$$P(\vec{s}) = \frac{1}{Z} \exp(-E(\vec{s})) = \frac{1}{Z} \exp\left( \tfrac{1}{2} \theta_{ij} s_i s_j + \theta_i s_i \right) \qquad (6)$$
There is an implicit summation over all repeated indices (Einstein's convention).
$Z = \sum_{\text{all } \vec{s}} \exp(-E(\vec{s}))$ is the normalisation constant known as the partition function, which requires a sum over all, exponentially many, states. Therefore this sum
is intractable to compute even for rather small networks.
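For intuition (an illustration of ours, not from the paper), the partition function of a small network can be computed by brute force; the weights and thresholds here are hypothetical:

```python
import itertools
import numpy as np

def partition_function(theta, theta_i):
    """Brute-force Z: sum over all 2^N states s in {-1,+1}^N of
    exp(0.5 * s^T theta s + theta_i^T s), as in equation (6).
    Feasible only for small N."""
    n = len(theta_i)
    z = 0.0
    for s in itertools.product([-1, 1], repeat=n):
        s = np.array(s, dtype=float)
        z += np.exp(0.5 * s @ theta @ s + theta_i @ s)
    return z

rng = np.random.default_rng(0)
n = 10
theta = rng.normal(0, 1 / np.sqrt(n), (n, n))
theta = (theta + theta.T) / 2     # symmetric weights
np.fill_diagonal(theta, 0.0)      # no self-connections
theta_i = rng.normal(0, 0.1, n)   # thresholds
print(partition_function(theta, theta_i))
```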
To compute the partition function approximately, we use the third order bound¹ from equation 5. We obtain

$$Z = \sum_{\vec{s}} \exp(-E(\vec{s})) \geq \sum_{\vec{s}} e^{\mu(\vec{s})} \left\{ 1 - \Delta E + e^{\lambda(\vec{s})} \left( \tfrac{1 - \lambda(\vec{s})}{2} \Delta E^2 - \tfrac{1}{6} \Delta E^3 \right) \right\} = B(E, \mu, \lambda) \qquad (7)$$

where $\Delta E = \mu(\vec{s}) + E(\vec{s})$. Note that the former constants $\mu$ and $\lambda$ are now functions
of $\vec{s}$, since we may take different values for $\mu$ and $\lambda$ for each term in the sum. In
principle these functions can take any form. If we take, for instance, $\mu(\vec{s}) = -E(\vec{s})$,
the approximation is exact. This would lead, however, to the same intractability
as before and therefore we must restrict our choice to those that make equation 7
tractable to compute. We choose $\mu(\vec{s})$ and $\lambda(\vec{s})$ to be linear with respect to the
neuron states $s_i$:

$$\mu(\vec{s}) = \mu_i s_i + \mu_0 \quad \text{and} \quad \lambda(\vec{s}) = \lambda_i s_i + \lambda_0 \qquad (8)$$

One may view $\mu(\vec{s})$ and $\lambda(\vec{s})$ as (the negative of) the energy functions for the
Boltzmann distributions $P \propto \exp(\mu(\vec{s}))$ and $P \propto \exp(\lambda(\vec{s}))$. Therefore we will
sometimes speak of 'the distribution $\mu(\vec{s})$'. Since these linear energy functions correspond to factorised distributions, we can compute the right hand side of equation 7
in a reasonable time, $O(N^3)$.
To obtain the tightest bound, we may maximise equation 7 with respect to its
variational parameters $\mu_0$, $\mu_i$, $\lambda_0$ and $\lambda_i$.
A special case of the third order bound
Although it is possible to choose $\lambda_i \neq 0$, we will set them to the suboptimal value
$\lambda_i = 0$, since this simplifies equation 7 enormously. The reader should keep in mind,
however, that all calculations could be done with non-zero $\lambda_i$. Given this choice we
can compute the optimal values for $\mu_0$ and $\lambda_0$, given by (9), where $\langle \cdot \rangle$ denotes an
average over the (factorised) distribution $\mu(\vec{s})$. Using this
solution the bound reduces to the simple form
$$\log Z \geq \log Z_\mu + \log\left\{ 1 + \tfrac{1}{2} e^{\lambda_0} \langle \Delta E^2 \rangle \right\} \qquad (10)$$

¹Using the first order bound from equation 3 results in the standard mean field bound.
where $Z_\mu$ is the partition function of the distribution $\mu(\vec{s})$. The term $\langle \Delta E^2 \rangle$
corresponds to the variance of $E + \mu_i s_i$ with respect to the distribution $\mu(\vec{s})$, since
$\mu_0 = -\langle E + \mu_i s_i \rangle$. $\lambda_0$ is proportional to the third order moment according to (9).
Explicit expressions for these moments can be derived with patience.
There is no explicit expression for the optimal $\mu_i$ as is the case with the standard mean field equations. An implicit expression, however, follows from setting
the derivative with respect to $\mu_i$ to zero. We solved for $\mu_i$ numerically by iteration.
Wherever we speak of 'fully optimised', we refer to this solution for $\mu_i$.
Connection with standard mean field and TAP
We like to focus for a moment on the suboptimal case where the $\mu_i$ correspond to the
mean field solution, given by

$$\forall i \quad m_i \stackrel{\mathrm{def}}{=} \tanh \mu_i = \tanh(\theta_i + \theta_{ij} m_j) \qquad (11)$$
For this choice of $\mu_i$ the $\log Z_\mu$ term in equation 10 is equal to the optimal mean
field bound². Since the last term in equation 10 is always positive, we conclude that
the third order bound is always tighter than the mean field bound.
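For concreteness, a minimal sketch (ours) of solving the mean field equations (11) by damped fixed-point iteration; the damping factor is an implementation choice, not part of the paper:

```python
import numpy as np

def mean_field(theta, theta_i, n_iter=500, damping=0.5):
    """Iterate m_i = tanh(theta_i + sum_j theta_ij m_j) to a fixed point."""
    m = np.zeros_like(theta_i)
    for _ in range(n_iter):
        m_new = np.tanh(theta_i + theta @ m)
        m = damping * m_new + (1 - damping) * m
    return m
```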
The relation between TAP and the third order bound is clear in the region of small
weights. If we assume that $O(\theta_{ij}^3)$ is negligible, a small weight expansion of equation 10 yields

$$\log Z \geq \log Z_\mu + \log\left\{ 1 + \tfrac{1}{2} e^{\lambda_0} \langle \Delta E^2 \rangle \right\} \approx \log Z_\mu + \tfrac{1}{4} \theta_{ij}^2 (1 - m_i^2)(1 - m_j^2) \qquad (12)$$
where the last term is equal to the TAP correction term [7]. Thus the third order
bound tends to the TAP approximation for small weights. For larger weights, however, the TAP approximation overestimates the partition function, whereas the third
order approximation is still a bound.
4 Sigmoid belief networks
In the previous section we saw how to derive a third order bound on the partition
function. For sigmoid belief networks³ we can use the same strategy to obtain a
third order bound on the likelihood of the visible neurons of the network to be in
a particular state. In this article, we present the rough outline of our method. The
full derivation will be presented elsewhere.
It turns out that these graphical models are comparable to Boltzmann machines to
a large extent. The energy function E(s) (as in equation 6), however, differs for
sigmoid belief networks:
$$-E(\vec{s}) = \theta_{ij} s_i s_j + \theta_i s_i - \sum_p \log 2\cosh(\theta_{pi} s_i + \theta_p) \qquad (13)$$
The last term, known as the local normalisation, does not appear in the Boltzmann machine energy function. We have similar difficulties as with the Boltzmann
machine, if we want to compute the log-likelihood given by
$$\log \mathcal{L} = \log \sum_{\vec{s} \in \text{Hidden}} P(\vec{s}) = \log \sum_{\vec{s} \in \text{Hidden}} \exp(-E(\vec{s})) \qquad (14)$$

²Be aware of the fact that $\mu(\vec{s})$ contains the parameter $\mu_0 = -\langle E + \mu_i s_i \rangle$. This gives an important contribution to the expression for $\log Z_\mu$.
³A detailed description of these networks can be found in [3].
[Figure 1: the exact partition function and the approximations plotted against the standard deviation of the weights, $\sigma_2$; legend: exact, mean field, TAP, third order; an inner plot shows the small-weight region.]
Figure 1: The exact partition function and three approximations: (1) mean field, (2) TAP
and (3) fully optimised third order. The standard deviation of the thresholds
is 0.1. Each point was averaged over a hundred randomly generated networks of
20 neurons. The inner plot shows the behaviour of the approximating functions for
small weights.
In contrast with the Boltzmann machine, we are not finished by using equation 7 to
bound $\mathcal{L}$. Due to the non-linear $\log 2\cosh$ term in the sigmoid belief energy, the so
obtained bound is still intractable to compute. Therefore it is necessary to derive an
additional bound such that the approximated likelihood is tractable to compute (this
is comparable to the additional bound used in [2]). We make use of the concavity
of the log function to find a straight line upper bound⁴: $\forall_\xi \; \log x \leq e^\xi x - \xi - 1$. We
use this inequality to bound the $\log 2\cosh$ term in equation 13 for each p separately,
where we choose $\xi_p$ to be $\xi_p(\vec{s}) = \xi_{pi} s_i + \xi_p$. In this way we obtain a new energy
function $\tilde{E}(\vec{s})$ which is an upper bound on the original energy. It is obvious that
the following inequalities hold

$$\mathcal{L} = \sum_{\vec{s} \in \text{Hidden}} \exp(-E(\vec{s})) \geq \sum_{\vec{s} \in \text{Hidden}} \exp(-\tilde{E}(\vec{s})) \geq B(\tilde{E}, \mu, \lambda) \qquad (15)$$

where the last inequality is equal, apart from the tilde, to equation 7. It turns out
that this bound has a worst case computational complexity of $O(N^4)$, which makes
it tractable for a large class of networks.
5 Results
5.1 Boltzmann machines
In this section we compare the third order bound for Boltzmann machines with (1)
the exact partition function, (2) the standard mean field bound and (3) the TAP
approximation. Therefore we created networks of N = 20 neurons with thresholds
drawn from a Gaussian with zero mean and $\sigma_1 = 0.1$ and weights drawn from a
Gaussian with zero mean and standard deviation $\sigma_2/\sqrt{N}$, a so called SK-model [8].
⁴This bound is also derivable using the method from section 2 with $f_0(x) = \frac{1}{x^2} \geq 0$.
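A sketch of how such networks can be generated (ours; the sampling details beyond the stated means and standard deviations are our assumptions):

```python
import numpy as np

def sk_network(n=20, sigma1=0.1, sigma2=1.0, seed=None):
    """SK-model: thresholds ~ N(0, sigma1^2), pairwise weights
    ~ N(0, (sigma2/sqrt(n))^2), symmetrised, zero self-coupling."""
    rng = np.random.default_rng(seed)
    theta_i = rng.normal(0.0, sigma1, n)
    theta = rng.normal(0.0, sigma2 / np.sqrt(n), (n, n))
    theta = np.triu(theta, k=1)   # keep one weight per pair
    theta = theta + theta.T       # symmetric, zero diagonal
    return theta, theta_i
```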
In figure 1 the exact partition function versus $\sigma_2$ is shown. In the same figure the
mean field and fully optimised third order bound are shown together with the TAP
approximation. For large $\sigma_2$ the exact partition function is linear in $\sigma_2$, whereas
this is not necessarily the case for small $\sigma_2$ (see figure 1). In fact, in the absence
of thresholds, the partition function is quadratic for small $\sigma_2$. Since TAP is based
on a Taylor expansion in the weights up to second order, it is very accurate in
the small weight region. However, as soon as the size of the weights exceeds the
radius of convergence of this expansion (this occurs approximately at $\sigma_2 = 1$), the
approximation diverges rapidly from the true value [9].
The mean field and third order approximations are both linear for large $\sigma_2$, which
prevents them from crossing the true partition function, which would violate the bound. In
fact, both approximations are quite close to the true partition function. For small
weights ($\sigma_2 < 1$), however, we see that the third order bound is much closer to the
exact curved form than mean field is.
5.2 Sigmoid belief networks
[Figure 2: histograms of the relative error under the mean field bound and under the third order bound, with a diagram of the toy network between them; x-axis: relative error (%).]
Figure 2: Histograms of the relative error for the toy network in the middle. The
error of the third order bound is roughly ten times smaller than the error of the
mean field bound.
Although a full optimisation of the variational parameters gives the tightest bound,
it turns out that the computational complexity of this optimisation is quite large
for sigmoid belief networks. Therefore, we use the mean field solution for $\mu_i$ (equation 11) instead. This can be justified since the most important error reduction is
due to the use of the third order bound. From experimental results not shown here
it is clear that a full optimisation has a share of only a few percent in the total gain.
To assess the error made by the various approaches, we use the same toy problem
as in [2] and [10]. The network has a top layer of two neurons, a middle layer of four
neurons and a lower layer of six visibles (figure 2). All neurons of two successive
layers are connected with weights pointing downwards. Weights and thresholds are
drawn from a uniform distribution over [-1,1].5 We compute the likelihood when
all visibles are clamped to -1. Since the network is rather small, we can compute
the exact likelihood to compare the lower bound with.
In figure 2 a histogram of the relative error, $1 - \log B / \log \mathcal{L}$, is plotted for a thousand
randomly generated networks. It is clear from the picture that for this toy problem
the error is reduced by a factor ten. For larger weights, however, the effect is less,
but still large enough to be valuable. For instance, if the weights are drawn from a
uniform distribution over [−2, 2], the error reduces by about a factor four on average
and is always less than the mean field error.
⁵The original toy problem in [2] used a 0/1-coding for the neuron activity. To be able
to compare the results, we transform the weights and thresholds to the −1/+1-coding used
in this article.
6 Conclusions
We showed a procedure to find any odd order polynomial bound for the exponential
function. A 2k − 1 order polynomial bound has k variational parameters. For the
third order bound these are $\mu$ and $\lambda$. We can use this result to derive a bound on the
partition function, where the variational parameters can be seen as energy functions
for probability distributions. If we choose those distributions to be factorised, we
have (N + 1)k new variational parameters. Since the approximating function is a
bound, we may maximise it with respect to all these parameters.
In this article we restricted ourselves to the third order bound, although an extension
to any odd order bound is possible. Third order is the next higher order bound to
naive mean field. We showed that this bound is strictly better than the mean field
bound and tends to the TAP approximation for small weights. For larger weights,
however, the TAP approximation crosses the partition function and violates the
bounding properties.
We saw that the third order bound gives an enormous improvement compared to
mean field. Our results are comparable to those obtained by the structured approach
in [10]. The choice between third order and variational structures, however, is not
exclusive. We expect that a combination of both methods is a promising research
direction to obtain the tightest tractable bound.
Acknowledgements
This research is supported by the Technology Foundation STW, applied science
division of NWO, and the technology programme of the Ministry of Economic Affairs.
References
[1] J. Pearl. Probabilistic Reasoning in Intelligent Systems, chapter 8.2.1, pages 387-390. Morgan Kaufmann, San Francisco, 1988.
[2] L.K. Saul, T.S. Jaakkola, and M.I. Jordan. Mean field theory for sigmoid belief networks. Technical Report 1, Computational Cognitive Science, 1995.
[3] R. Neal. Connectionist learning of belief networks. Artificial Intelligence, 56:71-113, 1992.
[4] D. Ackley, G. Hinton, and T. Sejnowski. A learning algorithm for Boltzmann machines. Cognitive Science, 9:147-169, 1985.
[5] C. Peterson and J. Anderson. A mean field theory learning algorithm for neural networks. Complex Systems, 1:995-1019, 1987.
[6] D.J. Thouless, P.W. Anderson, and R.G. Palmer. Solution of 'solvable model of a spin glass'. Philosophical Magazine, 35(3):593-601, 1977.
[7] H.J. Kappen and F.B. Rodriguez. Boltzmann machine learning using mean field theory and linear response correction. In M.S. Kearns, S.A. Solla, and D.A. Cohn, editors, Advances in Neural Information Processing Systems, volume 11, pages 280-286. MIT Press, 1999.
[8] D. Sherrington and S. Kirkpatrick. Solvable model of a spin-glass. Physical Review Letters, 35(26):1793-1796, 1975.
[9] M.A.R. Leisink and H.J. Kappen. Validity of TAP equations in neural networks. In ICANN 99, volume 1, pages 425-430, ISBN 0852967217, 1999. Institution of Electrical Engineers, London.
[10] D. Barber and W. Wiegerinck. Tractable variational structures for approximating graphical models. In M.S. Kearns, S.A. Solla, and D.A. Cohn, editors, Advances in Neural Information Processing Systems, volume 11, pages 183-189. MIT Press, 1999.
Decomposition of Reinforcement Learning
for Admission Control of Self-Similar
Call Arrival Processes
Jakob Carlstrom
Department of Electrical Engineering, Technion, Haifa 32000, Israel
jakob@ee.technion.ac.il
Abstract
This paper presents predictive gain scheduling, a technique for simplifying reinforcement learning problems by decomposition. Link admission
control of self-similar call traffic is used to demonstrate the technique.
The control problem is decomposed into on-line prediction of near-future call arrival rates, and precomputation of policies for Poisson call arrival processes. At decision time, the predictions are used to select
among the policies. Simulations show that this technique results in significantly faster learning without any performance loss, compared to a
reinforcement learning controller that does not decompose the problem.
1 Introduction
In multi-service communications networks, such as Asynchronous Transfer Mode (ATM)
networks, resource control is of crucial importance for the network operator as well as for
the users. The objective is to maintain the service quality while maximizing the operator's
revenue. At the call level , service quality (Grade of Service) is measured in terms of call
blocking probabilities, and the key resource to be controlled is bandwidth. Network routing
and call admission control (CAC) are two such resource control problems.
Markov decision processes offer a framework for optimal CAC and routing [1]. By modelling the dynamics of the network with traffic and computing control policies using dynamic
programming [2], resource control is optimized. A standard assumption in such models is
that calls arrive according to Poisson processes. This makes the models of the dynamics
relatively simple. Although the Poisson assumption is valid for most user-initiated requests
in communications networks, a number of studies [3, 4, 5] indicate that many types of arrival processes in wide-area networks as well as in local area networks are statistically selfsimilar. This makes it difficult to find models of the dynamics, and the models become large
and complex. If the number of system states is large, straightforward application of dynamic programming is unfeasible. Nevertheless, the "fractal" burst structure of self-similar
traffic should be possible to exploit in the design of efficient resource control methods.
We have previously presented a method based on temporal-difference (TD) learning for
CAC of self-similar call traffic, which yields higher revenue than a TD-based controller
assuming Poisson call arrival processes [7]. However, a drawback of this method is the slow
convergence of the control policy. This paper presents an alternative solution to the above
problem, called predictive gain scheduling. It decomposes the control problem into two
parts: time-series prediction of near-future call arrival rates and precomputation of a set of
control policies for Poisson call arrival processes. At decision time, a policy is selected
based on these predictions. Thus, the self-similar arrival process is approximated by a quasi-stationary Poisson process. The rate predictions are made by (artificial) neural networks
(NNs), trained on-line. The policies can be computed using dynamic programming or other
reinforcement learning techniques [6].
This paper concentrates on the link admission control problem. However, the controllers
we describe can be used as building blocks in optimal routing, as shown in [8] and [9]. Other
recent work on reinforcement learning for CAC and routing includes [10], where Marbach
et al. show how to extend the use of TD learning to network routing, and [11] where Tong
et al. apply reinforcement learning to routing subject to Quality of Service constraints.
2 Self-Similar Call Arrival Processes
The limitations of the traditional Poisson model for network arrival processes have been
demonstrated in a number of studies, e.g. [3, 4, 5], which indicate the existence of heavy-tailed inter-arrival time distributions and long-term correlations in the arrival processes.
Self-similar (fractal-like) models have been shown to correspond better with this traffic.
A self-similar arrival process has no "natural" burst length. On the contrary, its arrival intensity varies considerably over many time scales. This makes the variance of its sample
mean decay slowly with the sample size, and its auto-correlation function decay slowly
with time, compared to Poisson traffic [4].
The complexity of control and prediction of Poisson traffic is reduced by the memory-less
property of the Poisson process: its expected future depends on the arrival intensity, but not
on the process history. On the other hand, the long-range dependence of self-similar traffic
makes it possible to improve predictions of the process future by observing the history.
A compact statistical measure of the degree of self-similarity of a stochastic process is the
Hurst parameter [4]. For self-similar traffic this parameter takes values in the interval
(0.5, 1], whereas Poisson processes have a Hurst parameter of 0.5.
3 The Link Admission Control Problem
In the link admission control (LAC) problem, a link with capacity C [units/s] is offered calls
from K different service classes. Calls belonging to such a class $j \in J = \{1, \ldots, K\}$ have
the same bandwidth requirement $b_j$ [units/s]. The per-class call holding times are assumed
to be exponentially distributed with mean $1/\mu_j$ [s].
Access to the link is controlled by a policy $\pi$ that maps states $x \in X$ to actions $a \in A$, $\pi: X \rightarrow A$. The set X contains all feasible link states, and the action set is

$$A = \{(a_1, \ldots, a_K) : a_j \in \{0, 1\},\ j \in J\},$$

where $a_j$ is 0 for rejecting a presumptive class-j call and 1 for accepting it. The set of link
states is given by $X = N \times H$, where N is the set of feasible call number tuples, and H is
the Cartesian product of some representations, $\bar{h}_j$, of the history of the per-class call arrival
processes (needed because of the memory of self-similar arrival processes). N is given by

$$N = \left\{ n : n_j \geq 0,\ j \in J;\ \sum_{j \in J} n_j b_j \leq C \right\},$$

where $n_j$ is the number of type-j calls accepted on the link.
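As an illustration (ours), the set N can be enumerated directly for a small link:

```python
from itertools import product

def feasible_tuples(bandwidths, capacity):
    """Enumerate N = {n : n_j >= 0, sum_j n_j * b_j <= capacity}."""
    ranges = [range(int(capacity // b) + 1) for b in bandwidths]
    return [n for n in product(*ranges)
            if sum(nj * bj for nj, bj in zip(n, bandwidths)) <= capacity]

# The paper's link: C = 24, b1 = 1 (narrow-band), b2 = 6 (wide-band)
N = feasible_tuples([1, 6], 24)
print(len(N))  # 65 feasible call-number tuples for this link
```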
We assume uniform call charging, which means that the reward rate $\rho(t)$ at time t is equal
to the carried bandwidth:

$$\rho(t) = \rho(x(t)) = \sum_{j \in J} n_j(t)\, b_j \qquad (1)$$
Time evolves continuously, with discrete call arrival and departure events, enumerated by
$k = 0, 1, 2, \ldots$ Denote by $r_{k+1}$ the immediate reward obtained from entering a state $x_k$ at
time $t_k$ until entering the next state $x_{k+1}$ at time $t_{k+1}$. The expectation of this reward is

$$E_\pi\{r_{k+1}\} = E_\pi\{\rho(x_k)[t_{k+1} - t_k]\} = \rho(x_k)\,\bar{\tau}(x_k, \pi(x_k)) \qquad (2)$$

where $\bar{\tau}(x_k, \pi)$ is the expected sojourn time in state $x_k$ under policy $\pi$.
By taking optimal actions, the policy controls the probabilities of state transitions so as to
increase the probability of reaching states that yield high long-term rewards. The objective
of link admission control is to find a policy $\pi$ that maximizes the average reward per stage:

$$R(\pi) \triangleq \lim_{N \to \infty} E_\pi\left\{ \frac{1}{N} \sum_{k=1}^{N} r_k \;\Big|\; x_0 = x \right\}, \quad x \in X \qquad (3)$$
Note that the average reward does not depend on the initial state x, as the contribution from
this state to the average reward tends to zero as N -+ 00 (assuming, for example, that the
probability of reaching any other state y E X from every state x E X is positive).
Certain states are of special interest for the optimal policy. These are the states that are candidates for intelligent blocking. The set of such states $X_{ib} \subset X$ is given by $X_{ib} = N_{ib} \times H$,
where $N_{ib}$ is the set of call number tuples for which the available bandwidth is a multiple
of the bandwidth of a wideband call. In the states of $X_{ib}$, the long-term reward may be increased by rejecting narrowband calls to reserve bandwidth for future, expected wideband
calls.
4 Solution by Predictive Gain Scheduling
Gain scheduling is a control theory technique, where the parameters of a controller are
changed as a function of operating conditions [12]. The approach taken here is to look up
policies in a table from predictions of the near-future per-class call arrival rates.
For Poisson call arrival processes, the optimal policy for the link admission control problem does not depend on the history, H, of the arrival processes. Due to the memory-less
property, only the (constant) per-class arrival rates $\lambda_j$, $j \in J$, matter. In our gain scheduled
control of self-similar call arrival processes, near-future $\lambda_j$ are predicted from $\bar{h}_j$. The self-similar call arrival processes are approximated by quasi-stationary Poisson processes, by
selecting precomputed policies (for Poisson arrival processes) based on the predicted $\lambda_j$'s. One
radial-basis function (RBF) NN per class is trained to predict its near-future arrival rate.
4.1 Solving the Link Admission Control problem for Poisson Traffic
For Poisson call arrival processes, dynamic programming offers well-established techniques for solving the LAC problem [1]. In this paper, policy iteration is used. It involves
two steps: value determination and policy improvement.
The value determination step makes use of the objective function (3), and the concept of
relative values [1]. The difference v(x,:rr) - v(y,:rr) between two relative values under a
policy :rr is the expected difference in accumulated reward over an infinite time interval,
starting in state X instead of state y. In this paper, the relative values are computed by solving
a system of linear equations, a method chosen for its fast convergence. The dynamics of
the system are characterized by state transition probabilities, given by the policy, the per-class call arrival intensities, $\{\lambda_j\}$, and mean holding times, $\{1/\mu_j\}$.
The policy improvement step consists of finding the action that maximizes the relative value at each state. After improving the policy, the value determination and policy improvement steps are iterated until the policy does not change [9].
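A generic sketch of this procedure (our simplified discrete-time rendering with unit sojourn times; the paper's continuous-time version works with the sojourn times and transition intensities above):

```python
import numpy as np

def policy_iteration(P, r, n_iter=100):
    """P[a][s, s'] : transition probabilities under action a;
    r[a][s]      : expected one-step reward.
    Returns a policy maximising the average reward, via relative values."""
    n_actions, n_states = len(P), P[0].shape[0]
    policy = np.zeros(n_states, dtype=int)
    for _ in range(n_iter):
        Ppi = np.array([P[policy[s]][s] for s in range(n_states)])
        rpi = np.array([r[policy[s]][s] for s in range(n_states)])
        # Value determination: v + rho = r + Ppi v, with v[0] pinned to 0,
        # so column 0 of (I - Ppi) is replaced by ones to solve for rho.
        A = np.eye(n_states) - Ppi
        A[:, 0] = 1.0
        sol = np.linalg.solve(A, rpi)
        rho, v = sol[0], sol.copy()
        v[0] = 0.0
        # Policy improvement: greedy action w.r.t. the relative values.
        new_policy = np.array([
            int(np.argmax([r[a][s] + P[a][s] @ v for a in range(n_actions)]))
            for s in range(n_states)])
        if np.array_equal(new_policy, policy):
            break
        policy = new_policy
    return policy, rho, v
```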
4.2 Determining The Prediction Horizon
Over what future time horizon should we predict the rates used to select policies? In this
work, the prediction horizon is set to an average of estimated mean first passage times from
states back to themselves, in the following referred to as the mean return time. The arrival
process is approximated by a quasi-stationary Poisson process within this time interval.
The motivation for this choice of prediction horizon is that the effects of a decision (action)
in a state $x_d$ influence the future probabilities of reaching other states and receiving the associated rewards, until the state $x_d$ is reached the next time. When this happens, a new decision can be made, and the previous decision no longer influences the future expected
reward. In accordance with the assumption of quasi-stationarity, the mean return time can
be estimated for call tuples n instead of the full state descriptor, x.
In case of Poisson call arrival processes, the mean first passage times $E_\pi\{T_{ln}\}$ from other
states to a state n are the unique solution of the linear system of equations

$$E_\pi\{T_{mn}\} = \bar{\tau}(m, a) + \sum_{l \in N \setminus \{n\}} p_{ml}(a)\, E_\pi\{T_{ln}\}, \quad m \in N \setminus \{n\},\ a = \pi(m) \qquad (4)$$

where $p_{ml}(a)$ is the probability of jumping from state m to state l under action a.
The limiting probability $q_n$ of occupying state n is determined for all states that are candidates for intelligent blocking, by solving a linear system of equations $qB = 0$. B is a matrix
containing the state transition intensities, given by $\{\lambda_j\}$ and $\{1/\mu_j\}$.
The mean return time for the link, $T_I$, is defined as the average of the individual mean return
times of the states of $N_{ib}$, weighted by their limiting probabilities and normalized:

$$T_I = \frac{\sum_{n \in N_{ib}} q_n E_\pi\{T_{nn}\}}{\sum_{n \in N_{ib}} q_n} \qquad (5)$$
For ease of implementation, this time window is expressed as a number of call arrivals. The
window length L j for class j is computed by multiplying the mean return time by the arrival
rate, $L_j = \lambda_j T_I$, and rounding off to an integer. Although the window size varies with $\lambda_j$,
this variation is partly compensated by $T_I$ decreasing with increasing $\lambda_j$.
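A sketch of solving equation 4 (ours; written for a chain with transition matrix P and sojourn times tau, both of which would be derived from the arrival and departure intensities):

```python
import numpy as np

def mean_first_passage_times(P, target, tau=None):
    """Solve E{T_mn} = tau(m) + sum_{l != n} P[m, l] * E{T_ln} for all m != n."""
    n_states = P.shape[0]
    others = [m for m in range(n_states) if m != target]
    if tau is None:
        tau = np.ones(n_states)           # unit sojourn times by default
    Q = P[np.ix_(others, others)]         # transitions that avoid the target
    t = np.linalg.solve(np.eye(len(others)) - Q, tau[others])
    return dict(zip(others, t))
```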
4.3 Prediction of Future Call Arrival Rates
The prediction of future arrival call rates is naturally based on measures of recent arrival
rates. In this work, the following representation of the history of the arrival process is used:
for all classes $j \in J$, exponentially weighted running averages $\bar{h}_j = (h_{j1}, \ldots, h_{jM})$ of the inter-arrival times are computed on different time scales. These history vectors are computed
using forgetting factors $\{\alpha_1, \ldots, \alpha_M\}$ taking values in the interval (0, 1):

$$h_{ji}(k) = \alpha_i \left[ t_j(k) - t_j(k-1) \right] + (1 - \alpha_i)\, h_{ji}(k-1), \qquad (6)$$

where $t_j(k)$ is the arrival time of the k-th call from class j.
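A minimal sketch (ours) of maintaining these history vectors; the forgetting factors shown are placeholders, not the tuned values:

```python
class ArrivalHistory:
    """Running averages h_i of inter-arrival times on M time scales (eq. 6)."""
    def __init__(self, alphas=(0.5, 0.1, 0.02)):  # example forgetting factors
        self.alphas = alphas
        self.h = [0.0] * len(alphas)
        self.last_arrival = None

    def update(self, t):
        if self.last_arrival is not None:
            dt = t - self.last_arrival
            self.h = [a * dt + (1 - a) * h for a, h in zip(self.alphas, self.h)]
        self.last_arrival = t
        return self.h
```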
In studies of time-series prediction, non-linear feed-forward NNs outperform linear predictors on time series with long memory [13]. We employ RBF NNs with symmetric Gaussian
basis functions. The activations of the RBF units are normalized by division by the sum of
activations, to produce a smooth output function. The locations and widths of the RBF units
can be determined by inspection of the data sets, to cover the region of history vectors.
The NN is trained with the average inter-arrival time as target. After every new call arrival,
the prediction error $\epsilon_j(k)$ is computed:

$$\epsilon_j(k) = \frac{1}{L_j} \sum_{i=1}^{L_j} \left[ t_j(k+i) - t_j(k+i-1) \right] - y_j(k). \qquad (7)$$

Learning is performed on-line using the least mean squares rule, which means that the updating must be delayed by $L_j$ call arrivals. The predicted per-class arrival rates
$\hat{\lambda}_j(k) = y_j(k)^{-1}$ are used to select a control policy on the arrival of a call request.
Given the prediction horizon and the arrival rate predictor, $\alpha_1, \ldots, \alpha_M$ can be tuned by linear
search to minimize the prediction error on sample traffic traces.
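A sketch of the delayed training loop implied by equation 7, reusing the ArrivalHistory sketch above (ours; net.predict and net.update stand for the RBF network's forward pass and LMS weight update, and the learning rate is a placeholder):

```python
from collections import deque

def train_predictor(net, arrivals, window=6, lr=0.01):
    """arrivals[k] is the arrival time of call k. The target for the
    prediction made at call k is the average of the next `window`
    inter-arrival times, so LMS updates lag by `window` calls."""
    pending = deque()  # (history vector, prediction, call index)
    history = ArrivalHistory()
    history.update(arrivals[0])
    for k in range(1, len(arrivals)):
        h = history.update(arrivals[k])
        pending.append((list(h), net.predict(h), k))
        while pending and pending[0][2] + window <= k:
            h_old, y_old, k_old = pending.popleft()
            target = (arrivals[k_old + window] - arrivals[k_old]) / window
            net.update(h_old, target - y_old, lr)  # eq. (7) error signal
```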
5 Numerical study
The performance of the gain scheduled admission controller was evaluated on a simulated
link with capacity C = 24 [units/s] that was offered calls from self-similar call arrival processes. For comparison, the simulations were repeated with three other link admission controllers: two TD-based controllers, one table-based and one NN based, and a controller using complete sharing, i.e. to accept a call if the free capacity on the link is sufficient.
The NN based TD controller [7] uses RBF NNs (one per $n \in N$), receiving $(\bar{h}_1, \bar{h}_2)$ as input.
Each NN has 65 hidden units, factorized to 8 units per call class, plus a default activation
unit. Its weights were initialized to favor acceptance of all feasible calls in all states.
The table-based TD controller assumes Poisson call arrival processes. From this, it follows
that the call number tuples n E N constitute Markovian states. Consequently, the value
function table stores only one value per n. This controller was used for evaluation of the
performance loss from incorrectly modelling self-similar call traffic by Poisson traffic.
5.1 Synthesis of Call Traffic
Synthetic traffic traces were generated from a Gaussian fractional auto-regressive integrated moving average model, FARIMA (0, d, 0). This results in a statistically self-similar
arrival process, where the Hurst parameter is easily tuned [7].
We generated traces containing arrival/departure pairs from two call classes, characterized
by bandwidth requirements $b_1 = 1$ (narrow-band) and $b_2 = 6$ (wide-band) [units/s] and call
holding times with mean $1/\mu_1 = 1/\mu_2 = 1$ [s]. A Hurst parameter of 0.85 was used, and the
call arrival rates were scaled to make the expected long-term arrival rates $\lambda_1$ and $\lambda_2$ for the
two classes fulfill $b_1\lambda_1/\mu_1 + b_2\lambda_2/\mu_2 = 1.25\,C$. The ratio $\lambda_1/\lambda_2$ was varied from 0.4 to
2.0.
5.2 Gain Scheduling
For simplicity, a constant prediction horizon was used throughout the simulations. This was
computed according to section 4.2. By averaging the resulting prediction windows for
$\lambda_1/\lambda_2 = 0.4$, 1.0 and 2.0, a window size $L_1 = L_2 = 6$ was obtained.
The table of policies to be used for gain scheduling was computed for predicted $\hat{\lambda}_1$ and $\hat{\lambda}_2$
ranging from 0.5 to 15 with step size 0.5; in total 900 policies. The two rate-prediction NNs
both had 9 hidden units. The NNs' weights were initialized to 0.
5.3 Numerical results
Both the TD learning controllers and the gain scheduling controller were allowed to adapt
to the first 400 000 simulated call arrivals of the traffic traces. The throughput obtained by
all four methods was measured on the subsequent 400 000 call arrivals.
[Figure 1, four panels: (a) initial weight evolution in the neural predictor; (b) long-term weight evolution in the neural predictor; (c) weight evolution in the NN based TD controller (x-axes in call arrivals); (d) throughput [units/s] versus the arrival rate ratio $\lambda_1/\lambda_2$, with curves for GS/RBF, TD/RBF, TD/TBL and CS.]
Figure 1: Weight evolution for NN predictor (a, b); NN based TD-controller (c). Performance (d).
Figure 1 (a, b) shows the evolution of the weights of the call arrival rate predictor for class
2, and figure 1 (c) displays nine weights of the RBF NN corresponding to the call number
tuple $(n_1, n_2) = (6, 2)$, which is a candidate for intelligent blocking. These weights correspond to eight different class-2 center vectors, plus the default activation.
The majority of the weights of the gain scheduling RBF NN seems to converge in a few
thousand call arrivals, whereas the TD learning controller needs about 100 000 call arrivals
to converge. This is not surprising, since the RBF NNs of the TD learning controllers split
up the set of training data, so that a single NN is updated much less frequently than a rate-predicting NN in the gain scheduling controller. Secondly, the TD learning NNs are trained
on moving targets, due to the temporal-difference learning rule, stochastic action selection
and a changing policy.
A few of the weights of the gain scheduling NN change considerably even after long training. These weights correspond to RBF units that are activated by rare, large inputs.
Figure 1 (d) evaluates performance in terms of throughput versus arrival rate ratio. Each
data point is the averaged throughput for 10 traffic traces. Gain scheduling (GS/RBF)
achieves the same throughput as TD learning with RBF NNs (TD/RBF), up to 1.3%
better than tabular TD learning (TD/TBL), and up to 5.7% better than complete sharing
(CS). The difference in throughput between TD learning and complete sharing is greatest
for low arrival rate ratios, since the throughput increase by reserving bandwidth for highrate wideband calls is considerably higher than the loss of throughput from the blocked lowrate narrowband traffic.
6 Conclusion
We have presented predictive gain scheduling, a technique for decomposing reinforcement
learning problems. Link admission control, a sub-problem of network routing, was used
to demonstrate the technique. By predicting near-future call arrival rates from one part of
the full state descriptor, precomputed policies for Poisson call arrival processes (computed
from the rest of the state descriptor) were selected. This increased the on-line convergence
rate approximately 50 times, compared to a TD-based admission controller getting the full
state descriptor as input. The decomposition did not result in any performance loss.
The computational complexity of the controller using predictive gain scheduling may
reach a computational bottleneck if the size of the state space is increased: the determination of optimal policies for Poisson traffic by policy iteration. This can be overcome by state
aggregation [2], or by parametrization the relative value function combined with temporaldifference learning [10]. It is also possible to significantly reduce the number of relative
value functions . In [14], we showed that linear interpolation of relative value functions distributed by an error-driven algorithm enables the use of less than 30 relative value functions
without performance loss. Further, we have successfully employed gain scheduled link admission control as a building block of network routing [9], where the performance improvement compared to conventional methods is larger than for the link admission control problem.
The use of gain scheduling to reduce the complexity of reinforcement learning problems
is not limited to link admission control. In general, the technique should be applicable to
problems where parts of the state descriptor can be used, directly or after preprocessing,
to select among policies for instances of a simplified version of the original problem.
References
[1] Z. Dziong, ATM Network Resource Management, McGraw-Hill, 1997.
[2] D.P. Bertsekas, Dynamic Programming and Optimal Control, Athena Scientific, Belmont, Mass.,
1995.
[3] V. Paxson and S. Floyd, "Wide-Area Traffic: The Failure of Poisson Modeling", IEEE/ACM Transactions on Networking, vol. 3, pp. 226-244, 1995.
[4] W.E. Leland, M.S. Taqqu, W. Willinger and D.V. Wilson, "On the Self-Similar Nature of Ethernet
Traffic (Extended Version)", IEEE/ACM Transactions on Networking, vol. 2, no. 1, pp. 1-15, Feb. 1994.
[5] A. Feldman, A.C. Gilbert, W. Willinger and T.G. Kurtz, "The Changing Nature of Network Traffic:
Scaling Phenomena", Computer Communication Review, vol. 28, no. 2, pp. 5-29, April 1998.
[6] R.S. Sutton and A.G. Barto, Reinforcement Learning: An Introduction, MIT Press, Cambridge,
Mass., 1998.
[7] J. Carlstrom and E. Nordstrom, "Reinforcement Learning for Control of Self-Similar Call Traffic
in Broadband Networks", Teletraffic Engineering in a Competitive World - Proceedings of The 16th International Teletraffic Congress (ITC 16), pp. 571-580, Elsevier Science B.V., 1999.
[8] Z. Dziong and L. Mason, "Call Admission Control and Routing in Multi-service Loss Networks",
IEEE Transactions on Communications, vol. 42, no. 2, pp. 2011-2022, Feb. 1994.
[9] J. Carlstrom and E. Nordstrom, "Gain Scheduled Routing in Multi-Service Networks", Technical
Report 2000-009, Dept. of Information Technology, Uppsala University, Uppsala, Sweden, April 2000.
[10] P. Marbach, O. Mihatsch and J.N. Tsitsiklis, "Call Admission Control and Routing in Integrated
Service Networks Using Neuro-Dynamic Programming", IEEE Journal on Selected Areas in Communications, Feb. 2000.
[11] H. Tong and T. Brown, "Adaptive Call Admission Control Under Quality of Service Constraints:
A Reinforcement Learning Solution", IEEE Journal on Selected Areas in Communications, Feb. 2000.
[12] K.J. Astrom and B. Wittenmark, Adaptive Control, 2nd ed., Addison-Wesley, 1995.
[13] S. Haykin, Neural Networks: A Comprehensive Foundation, 2nd ed., Macmillan College Publishing Co., Englewood Cliffs, NJ, 1999.
[14] J. Carlstrom, "Efficient Approximation of Values in Gain Scheduled Routing", Technical Report
2000-010, Dept. of Information Technology, Uppsala University, Uppsala, Sweden, April 2000.
prediction:20 neuro:1 controller:20 itc:1 expectation:1 poisson:23 iteration:2 ethemet:1 whereas:2 interval:4 crucial:1 rest:1 subject:1 contrary:1 call:69 integer:1 ee:1 near:6 hurst:4 split:1 bandwidth:8 reduce:2 bottleneck:1 nine:1 constitute:1 action:6 fractal:2 reserving:1 band:2 reduced:1 outperform:1 estimated:2 per:10 discrete:1 vol:4 key:1 four:1 nevertheless:1 changing:2 sum:1 arrive:1 throughout:1 decision:6 scaling:1 display:1 cac:4 g:1 constraint:2 nib:3 u1:1 qb:1 relatively:1 llj:1 department:1 according:2 request:2 belonging:1 evolves:1 happens:1 taken:1 resource:6 equation:3 previously:1 precomputed:2 needed:1 addison:1 available:1 decomposing:1 apply:1 eight:1 alternative:1 existence:1 original:1 assumes:1 running:1 publishing:1 farima:1 exploit:1 teletraffic:2 objective:3 dependence:1 traditional:1 link:20 simulated:2 capacity:3 majority:1 athena:1 pet:1 assuming:2 length:2 ratio:4 difficult:1 holding:3 trace:5 paxson:1 design:1 implementation:1 policy:31 markov:1 incorrectly:1 immediate:1 extended:1 communication:5 varied:1 jakob:2 police:1 intensity:4 pair:1 optimized:1 narrow:1 established:1 ftj:1 departure:2 memory:4 charging:1 greatest:1 event:1 natural:1 predicting:1 improve:1 technology:2 presumptive:1 carried:1 auto:2 review:1 l2:1 determining:1 relative:8 loss:6 limitation:1 versus:2 revenue:2 h2:1 foundation:1 degree:1 offered:2 sufficient:1 temporaldifference:1 changed:1 asynchronous:1 free:1 tsitsiklis:1 wide:3 taking:2 distributed:2 overcome:1 default:2 valid:1 transition:3 world:1 qn:1 forward:1 made:2 reinforcement:11 preprocessing:1 simplified:1 adaptive:2 transaction:3 compact:1 mcgraw:1 assumed:1 tuples:4 search:1 decomposes:1 table:5 nature:2 transfer:1 improving:1 jel:1 complex:1 did:1 motivation:1 arrival:63 repeated:1 ref:1 allowed:1 astrom:1 referred:1 broadband:1 en:1 slow:1 tong:2 sub:1 candidate:3 ib:3 tin:1 rk:2 mason:1 decay:2 highrate:1 importance:1 cartesian:1 horizon:6 jej:1 expressed:1 macmillan:1 u2:2 acm:2 consequently:1 rbf:11 feasible:3 change:2 infinite:1 determined:2 averaging:1 called:1 total:1 accepted:1 partly:1 wittenmark:1 select:4 college:1 dept:2 phenomenon:1 |
1,002 | 1,916 | Modelling spatial recall, mental imagery and
neglect
Suzanna Becker
Department of Psychology
McMaster University
1280 Main Street West
Hamilton,Ont. Canada L8S 4Kl
becker@mcmaster.ca
Neil Burgess
Department of Anatomy and
Institute of Cognitive Neuroscience, UCL
17 Queen Square
London, UK WCIN 3AR
n.burgess@ucl.ac.uk
Abstract
We present a computational model of the neural mechanisms in the parietal and temporal lobes that support spatial navigation, recall of scenes
and imagery of the products of recall. Long term representations are
stored in the hippocampus, and are associated with local spatial and
object-related features in the parahippocampal region. Viewer-centered
representations are dynamically generated from long term memory in the
parietal part of the model. The model thereby simulates recall and imagery of locations and objects in complex environments. After parietal
damage, the model exhibits hemispatial neglect in mental imagery that
rotates with the imagined perspective of the observer, as in the famous
Milan Square experiment [1]. Our model makes novel predictions for
the neural representations in the parahippocampal and parietal regions
and for behavior in healthy volunteers and neuropsychological patients.
1 Introduction
We perform spatial computations every day. Tasks such as reaching and navigating around
visible obstacles are predominantly sensory-driven rather than memory-based, and presumably rely upon egocentric, or viewer-centered representations of space. These representations, and the ability to translate between them, have been accounted for in several
computational models of the parietal cortex e.g. [2, 3]. In other situations such as route
planning, recall and imagery for scenes or events one must also rely upon representations
of spatial layouts from long-term memory. Neuropsychological and neuroimaging studies
implicate both the parietal and hippocampal regions in such tasks [4, 5], with the long-term
memory component associated with the hippocampus. The discovery of "place cells" in
the hippocampus [6] provides evidence that hippocampal representations are allocentric,
in that absolute locations in open spaces are encoded irrespective of viewing direction.
This paper addresses the nature and source of the spatial representations in the hippocampal
and parietal regions, and how they interact during recall and navigation. We assume that
in the hippocampus proper, long-term spatial memories are stored allocentrically, whereas
in the parietal cortex view-based images are created on-the-fly during perception or recall.
Intuitively it makes sense to use an allocentric representation for long-term storage as the
position of the body will have changed before recall. Alternatively, to act on a spatial
location (e.g. reach with the hand) or to imagine a scene, an egocentric representation (e.g.
relative to the hand or retina) is more useful [7, 8].
A study of hemispatial neglect patients throws some light on the interaction of long-term
memory with mental imagery. Bisiach and Luzatti [1] asked two patients to recall the
buildings from the familiar Cathedral Square in Milan, after being asked to imagine (i)
facing the cathedral, and (ii) facing in the opposite direction. Both patients, in both (i)
and (ii), predominantly recalled buildings that would have appeared on their right from
the specified viewpoint. Since the buildings recalled in (i) were located physically on the
opposite side of the square to those recalled in (ii), the patients' long-term memory for all
of the buildings in the square was apparently intact. Further, the area neglected rotated
according to the patient's imagined viewpoint, suggesting that their impairment relates to
the generation of egocentric mental images from a non-egocentric long-term store.
The model also addresses how information about object identity is bound to locations in
space in long-term memory, i.e. how the "what" and the "where" pathways interact. Object information from the ventral visual processing stream enters the hippocampal formation
(medial entorhinal cortex) via the perirhinal cortex, while visuospatial information from the
dorsal pathways enters lateral entorhinal cortex primarily via the parahippocampal cortex
[9]. We extend the O'Keefe & Burgess [10] hippocampal model to include object-place
associations by encoding object features in perirhinal cortex (we refer to these features as
texture, but they could also be attributes such as colour, shape or size). Reciprocal connections to the parahippocampus allow object features to cue the hippocampus to activate
a remembered location in an environment, and conversely, a remembered location can be
used to reactivate the feature information of objects at that location. The connections from
parietal to parahippocampal areas allow the remembered location to be specified in egocentric imagery.
[Figure 1 shows the model architecture: a medial parietal map of egocentric locations, connected via a posterior parietal ego-allo translation stage to a parahippocampal map of allocentric object locations, which is reciprocally connected to perirhinal object texture units and to an auto-associative place-cell representation in the hippocampal formation.]
Figure 1: The model architecture. Note the allocentric encoding of direction (NSEW) in
parahippocampus, and the egocentric encoding of directions (LR) in medial parietal cortex.
2 The model
The model may be thought of in simple terms as follows. An allocentric representation
of object location is extracted from the ventral visual stream in the parahippocampus, and
feeds into the hippocampus. The dorsal visual stream provides an egocentric representation of object location in medial parietal areas and makes bi-directional contact with the
parahippocampus via posterior parietal area 7a. Inputs carrying allocentric heading direction information [11] project to both parietal and parahippocampal regions, allowing
bidirectional translation from allocentric to egocentric directions. Recurrent connections
in the hippocampus allow recall from long-term memory via the parahippocampus, and
egocentric imagery in the medial parietal areas. We now describe the model in more detail.
2.1 Hippocampal system
The architecture of the model is shown in Figure 1. The hippocampal formation (HF)
consists of several regions - the entorhinal cortex, dentate gyrus, CA3, and CAl, each of
which appears to code for space with varying degrees of sparseness. To simplify, in our
model the HF is represented by a single layer of "place cells", each tuned to random, fixed
configurations of spatial features as in [10, 12]. Additionally, it learns to represent objects'
textural features associated with a particular location in the environment. It receives these
inputs from the parahippocampal cortex (PH) and perirhinal cortex (PR), respectively.
The parahippocampal representation of object locations is simulated as a layer of neurons,
each of which is tuned to respond whenever there is a landmark at a given distance and
allocentric direction from the subject. Projections from this representation into the hippocampus drive the firing of place cells. This representation has been shown to account
for the properties of place cells recorded across environments of varying shape and size
[10, 12]. Recurrent connections between place cells allow subsequent pattern completion
in the place cell layer. Return projections from the place cells to the parahippocampus allow
reactivation of all landmark location information consistent with the current location.
The perirhinal representation in our model consists of a layer of neurons, each tuned to
a particular textural feature. This region is reciprocally connected with the hippocampal
formation [13]. Thus, in our model, object features can be used to cue the hippocampal
system to activate a remembered location in an environment, and conversely, a remembered
location can activate all associated object textures. Further, each allocentric spatial feature
unit in the parahippocampus projects to the perirhinal object feature units so that attention
to one location can activate a particular object's features.
2.2 Parietal cortex
Neurons responding to specific egocentric stimulus locations (e.g. relative to the eye, head
or hand) have been recorded in several parietal areas. Tasks involving imagery of the
products of retrieval tend to activate medial parietal areas (precuneus, posterior cingulate,
retrosplenial cortex) in neuroimaging studies [14]. We hypothesize that there is a medial
parietal egocentric map of space, coding for the locations of objects organised by distance
and angle from the body midline. In this representation cells are tuned to respond to the
presence of an object at a specific distance in a specific egocentric direction. Cells have
also been reported in posterior parietal areas with egocentrically tuned responses that are
modulated by variables such as eye position [15] or body orientation (in area 7a [16]). Such
coding can allow translation of locations between reference frames [17, 2]. We hypothesize
that area 7a performs the translation between allocentric and egocentric representations so
that, as well as being driven directly by perception, the medial parietal egocentric map can
be driven by recalled allocentric parahippocampal representations. We consider simply
translation between allocentric and view-dependent representations, requiring a modulatory input from the head direction system. A more detailed model would include translations
between allocentric and body, head and eye centered representations, and possibly use of
retrosplenial areas to buffer these intermediate representations [18].
The translation between parahippocampal and parietal representations occurs via a hardwired mapping of each to an expanded set of egocentric representations, each modulated
by head direction so that one is fully activated for each (coarse coded) head direction (see
Figure 1). With activation from the appropriate head direction unit, activation from the
parahippocampal or parietal representation can activate the appropriate cell in the other
representation via this expanded representation.
2.3 Simulation details
The hippocampal component of the model was trained on the spatial environment shown in
the top-left panel of Figure 2, representing the buildings of the Milan square. We generated
a series of views of the square, as would be seen from the locations in the central filled
rectangular region of this figure panel. The weights were determined as follows, in order to
form a continuous attractor (after [19, 20]). From each training location, each visible edge
point contributed the following to the activation of each parahippocampal (PH) cell:
$$\sum_j \frac{1}{\sqrt{2\pi\sigma_{ang}^2}}\, e^{-\frac{(\theta_i - \theta_j)^2}{2\sigma_{ang}^2}} \times \frac{1}{\sqrt{2\pi\sigma_{dir}(r_j)^2}}\, e^{-\frac{(r_i - r_j)^2}{2\sigma_{dir}(r_j)^2}} \qquad (1)$$
where $\theta_i$ and $r_i$ are the preferred object direction and distance of the ith PH cell, $\theta_j$ and $r_j$
represent the location of the jth edge point relative to the observer, and $\sigma_{ang}$ and $\sigma_{dir}(r)$
are the corresponding standard deviations (as in [10]). Here, we used $\sigma_{ang} = \pi/48$ and
$\sigma_{dir}(r) = 2(r/10)^2$. The HF place cells were preassigned to cover a grid of locations
in the environment, with each cell's activation falling off as a Gaussian of the distance to
its preferred location. The PH-HF and HF-PH connection strengths were set equal to the
correlations between activations in the parahippocampal and hippocampal regions across
all training locations, and similarly, the HF-HF weights were set to values proportional to
a Gaussian of the distance between their preferred locations.
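As a purely illustrative sketch of this training step (not the authors' code), the fragment below evaluates one PH cell's tuning per equation (1) and builds correlation-based connection strengths; the array layout, the toy normalization and all names are our own assumptions.

```python
import numpy as np

def ph_activation(theta_i, r_i, edge_theta, edge_r):
    """Summed Gaussian tuning of one PH cell over visible edge points (eq. 1)."""
    sig_ang = np.pi / 48.0
    sig_dir = 2.0 * (edge_r / 10.0) ** 2            # sigma_dir(r_j) = 2 (r_j/10)^2
    g_ang = np.exp(-(theta_i - edge_theta) ** 2 / (2 * sig_ang ** 2)) \
        / np.sqrt(2 * np.pi * sig_ang ** 2)
    g_dist = np.exp(-(r_i - edge_r) ** 2 / (2 * sig_dir ** 2)) \
        / np.sqrt(2 * np.pi * sig_dir ** 2)
    return float(np.sum(g_ang * g_dist))

def hebbian_weights(A_pre, A_post):
    """Connection strengths proportional to activity correlations over all
    training locations (rows of A_pre / A_post are per-location activations)."""
    A_pre = A_pre - A_pre.mean(axis=0)
    A_post = A_post - A_post.mean(axis=0)
    return A_pre.T @ A_post / len(A_pre)
```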
The weights to the perirhinal (PR) object feature units - on the HF-to-PR and PH-to-PR
connections - were trained by simulating sequential attention to each visible object, from
each training location. Thus, a single object's textural features in the PR layer were associated with the corresponding PH location features and HF place cell activations via Hebbian
learning. The PR-to-HF weights were trained to associate each training location with the
single predominant texture - either that of a nearby object or that of the background.
The connections to and within the parietal component of the model were hard-wired to
implement the bidirectional allocentric-egocentric mappings (these are functionally equivalent to a rotation by adding or subtracting the heading angle). The 2-layer parietal circuit
in Figure 1 essentially encodes separate transformation matrices for each of a discrete set of
head directions in the first layer. A right parietal lesion causing left neglect was simulated
with graded, random knockout to units in the egocentric map of the left side of space. This
could have equally been made to the translation units projecting to them (i.e. those in the
top rows of the PP in Figure 1).
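A minimal sketch of the two operations just described, assuming the maps are stored as [distance x direction] arrays with allocentric direction bins; the left-field convention and the severity parameter are assumptions, not the paper's values.

```python
import numpy as np

def rotate_to_egocentric(allo_map, heading_bin):
    """Allo -> ego translation: subtracting the heading angle is a circular
    shift of the direction axis, so that 'ahead' is always column 0."""
    return np.roll(allo_map, -heading_bin, axis=1)

def lesion_left_field(ego_map, severity=0.8, seed=0):
    """Graded random knockout of units coding the left half of egocentric
    space, simulating the right parietal lesion (severity is assumed)."""
    rng = np.random.default_rng(seed)
    lesioned = ego_map.astype(float).copy()
    left = slice(ego_map.shape[1] // 2, None)   # assume bins 180-360 = left
    lesioned[:, left] *= rng.random(lesioned[:, left].shape) > severity
    return lesioned
```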
After pretraining the model, we performed two sets of simulations. In simulation 1, the
model was required to recall the allocentric representation of the Milan square after being
cued with the texture and direction ($\theta_j$) of each of the visible buildings in turn, at a short
distance $r_j$. The initial input to the HF, $I^{HF}(t=0)$, was the sum of an externally provided
texture cue from the PR cell layer, and a distance and direction cue from the PH cell layer
obtained by initializing the PH states using equation 1, with $r_j = 2$. A place was then
recalled by repeatedly updating the HF cells' states until convergence according to:
$$I^{HF}(t) = 0.25\, I^{HF}(t-1) + 0.75\,\big(W^{HF\text{-}HF} A^{HF}(t-1) + I^{HF}(0)\big) \qquad (2)$$
$$A_i^{HF}(t) = \exp\!\big(I_i^{HF}(t)\big) \Big/ \sum_k \exp\!\big(I_k^{HF}(t)\big) \qquad (3)$$
$$I^{PH}(t) = 0.9\, I^{PH}(t-1) + 0.1\, W^{HF\text{-}PH} A^{HF}(t) \qquad (4)$$
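Updates (2)-(4) amount to a short damped-settling loop; the following toy rendering assumes the weight matrices and initial inputs are given, and is not the authors' implementation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def settle(I_hf0, I_ph0, W_hf_hf, W_hf_ph, n_steps=100):
    """Iterate the leaky place-cell update with softmax activation until
    (approximate) convergence, also relaxing the PH layer."""
    I_hf, I_ph = I_hf0.copy(), I_ph0.copy()
    for _ in range(n_steps):
        A_prev = softmax(I_hf)
        I_hf = 0.25 * I_hf + 0.75 * (W_hf_hf @ A_prev + I_hf0)   # eq. (2)
        A_hf = softmax(I_hf)                                     # eq. (3)
        I_ph = 0.9 * I_ph + 0.1 * (W_hf_ph @ A_hf)               # eq. (4)
    return A_hf, I_ph
```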
Finally, the HF place cell activity was used to perform pattern completion in the PH layer
(using the $W^{HF\text{-}PH}$ weights), to recall the other visible building locations. In simulation
2 the model was then required to generate view-based mental images of the Milan square
from various viewpoints according to a specified heading direction. First, the PH cells and
HF place cells were initialized to the states of the retrieved spatial location (obtained after
settling in simulation 1). The model was then asked what it "saw" in various directions by
simulating focused attention on the egocentric map, and requiring the model to retrieve the
object texture at that location via activation of the PR region. The egocentric medial parietal
(MP) activation was calculated from the PH-to-MP mapping, as described above. Attention
to a queried egocentric direction was simulated by modulating the pattern of activation
across the MP layer with a Gaussian filter centered on that location. This activation was
then mapped back to the PH layer, and in turn projected to the PR layer via the PH-to-PR
connections:
$$I^{PR} = W^{HC\text{-}PR} A^{HF} + W^{PH\text{-}PR} A^{PH} \qquad (5)$$
$$A_i^{PR} = \exp\!\big(I_i^{PR}\big) \Big/ \sum_k \exp\!\big(I_k^{PR}\big) \qquad (6)$$
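Reading (5)-(6) together with the Gaussian attention window described above gives roughly the following sketch; the PH<->MP mapping functions, the attention width and all names are illustrative assumptions.

```python
import numpy as np

def query_texture(A_hf, A_ph, W_hc_pr, W_ph_pr, ph_to_mp, mp_to_ph,
                  query_dir, sigma=0.5):
    """Attend to one egocentric direction, map back to PH, read out PR."""
    mp = ph_to_mp(A_ph)                              # egocentric (MP) map
    dirs = np.linspace(0.0, 2 * np.pi, mp.shape[-1], endpoint=False)
    gate = np.exp(-(dirs - query_dir) ** 2 / (2 * sigma ** 2))
    ph_att = mp_to_ph(mp * gate)                     # attended map back to PH
    I_pr = W_hc_pr @ A_hf + W_ph_pr @ ph_att         # eq. (5)
    e = np.exp(I_pr - I_pr.max())
    return e / e.sum()                               # eq. (6): softmax over PR
```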
2.4 Results and discussion
In simulation 1, when cued with the textures of each of the 5 buildings around the training
region, the model settled on an appropriate place cell activation. One such example is
shown in Figure 2, upper panel. The model was cued with the texture of the cathedral front,
and settled to a place representation near to its southwest corner. The resulting PH layer
activations show correct recall of the locations of the other landmarks around the square.
In simulation 2, shown in the lower panel, the model rotated the PH map according to the
cued heading direction, and was able to retrieve correctly the texture of each building when
queried with its egocentric direction. In the lesioned model, buildings to the egocentric left
were usually not identified correctly. One such example is shown in Figure 2. The heading
direction is to the south, so building 6 is represented at the top (egocentric forward) of the
map. The building to the left has texture 5, and the building to the right has texture 7. After
a simulated parietal lesion, the model neglects building 5.
3 Predictions and future directions
We have demonstrated how egocentric spatial representations may be formed from allocentric ones and vice versa. How might these representations and the mapping between them
be learned? The entorhinal cortex (EC) is the major cortical input zone to the hippocampus,
and both the parahippocampal and perirhinal regions project to it [13]. Single cell recordings in EC indicate tuning curves that are broadly similar to those of place cells, but are
much more coarsely tuned and less specific to individual episodes [21, 9] . Additionally, EC
cells can hold state information, such as a spatial location or object identity, over long time
delays and even across intervening items [9]. An allocentric representation could emerge if
the EC is under pressure to use a more compressed, temporally stable code to reconstruct
the rapidly changing visuospatial input. An egocentric map is altered dramatically after
changes in viewpoint, whereas an allocentric map is not. Thus, the PH and hippocampal
representations could evolve via an unsupervised learning procedure that discovers a temporally stable, generative model of the parietal input. The inverse mapping from allocentric
PH features to egocentric parietal features could be learned by training the back-projections
similarly. But how could the egocentric map in the parietal region be learned in the first
place? In a manner analogous to that suggested by Abbott [22], a "hidden layer" trained by
Hebbian learning could develop egocentric features in learning a mapping from a sensory
layer representing retinally located targets and arbitrary heading directions to a motor layer
representing randomly explored (whole-body) movement directions.
We note that our parietal imagery system might also support the short-term visuospatial
working memory required in more perceptual tasks (e.g. line cancellation)[2]. Thus lesions here would produce the commonly observed pattern of combined perceptual and
representational neglect. However, the difference in the routes by which perceptual and reconstructed information would enter this system, and possibly in how they are manipulated,
allows for patients showing only one form of neglect [23].
So far our simulations have involved a single spatial environment. Place cells recorded from
the same rat placed in two similar novel environments show highly similar firing fields
[10, 24], whereas after further exposure, distinctive responses emerge (e.g., [25, 26, 24]
and unpublished data). In our model, sparse random connections from the object layer to
the place layer ensure a high degree of initial place-tuning that should generalize across
similar environments. Plasticity in the HF-PR connections will allow unique textures of
walls, buildings etc to be associated with particular places; thus after extensive exposure,
environment-specific place firing patterns should emerge.
A selective lesion to the parahippocampus should abolish the ability to make allocentric
object-place associations altogether, thereby severely disrupting both landmark-based and
memory-based navigation. In contrast, a pure hippocampal lesion would spare the ability to
represent a single object's distance and allocentric directions from a location, so navigation
based on a single landmark should be spared. If an arrangement of objects is viewed in a
3-D environment, the recall or recognition of the arrangement from a new viewpoint will
be facilitated by having formed an allocentric representation of their locations. Thus we
would predict that damage to the hippocampus would impair performance on this aspect
of the task, while memory for the individual objects would be unimpaired. Similarly, we
would expect a viewpoint-dependent effect in hemispatial neglect patients.
[Figure 2 panels omitted: Schematized Milan Square; HF act given texture=1; PH act + head dir; MP act + query dir; PR activations - Control (texture neuron 0-10); MP activations with neglect; PR activations - Lesioned (texture neuron 0-10).]
Figure 2: I. Top panel. Left: training locations in the Milan square are plotted in the
black rectangle. Middle: HF place cell activations, after being cued that building #1 is
nearby and to the north. Place cells are arranged in a polar coordinate grid according to the
distance and direction of their preferred locations relative to the centre of the environment
(bright white spot). The white blurry spot below and at the left end of building #1 is the
maximally activated location. Edge points of buildings used during training are also shown
here. Right: PH inputs to place cell layer are plotted in polar coordinates, representing the
recalled distances and directions of visible edges associated with the maximally activated
location. The externally cued heading direction is also shown here. II. Bottom panel. Left:
An imagined view in the egocentric map layer (MP), given that the heading direction is
south; the visible edges shown above have been rotated by 180 degrees. Mid-left: the
recalled texture features in the PR layer are plotted in two different conditions, simulating
attention to the right (circles) and left (stars). Mid-right and right: Similarly, the MP and
PR activations are shown after damage to the left side of the egocentric map.
One of the many curiosities of the hemispatial neglect syndrome is the temporary amelioration of spatial neglect after left-sided vestibular stimulation (placement of cold water into
the ear) and transcutaneous mechanical vibration (for a review, see [27]), which presumably affects the perceived head orientation. If the stimulus is evoking erroneous vestibular
or somatosensory inputs to shift the perceived head direction system leftward, then all objects will now be mapped further rightward in egocentric space and into the 'good side'
of the parietal map in a lesioned model. The model predicts that this effect will also be
observed in imagery, as is consistent with a recent result [28].
Acknowledgments
We thank Allen Cheung for extensive pilot simulations and John O'Keefe for useful discussions. NB is a Royal Society University Research Fellow. This work was supported by
research grants from NSERC, Canada to S.B. and from the MRC, GB to N.B.
References
[1] E. Bisiach and C. Luzzatti. Cortex, 14:129-133, 1978.
[2] A. Pouget and T.J. Sejnowski. J. Cog. Neuro., 9(2):222-237, 1997.
[3] E. Salinas and L.F. Abbott. J. Neurosci., 15:6461-6474, 1995.
[4] E.A. Maguire, N. Burgess, J.G. Donnett, R.S.J. Frackowiak, C.D. Frith, and J. O'Keefe. Science, 280:921-924, May 8 1998.
[5] N. Burgess, H. Spiers, E. Maguire, S. Baxendale, F. Vargha-Khadem, and J. O'Keefe. Subm.
[6] J. O'Keefe. Exp. Neurol., 51:78-109, 1976.
[7] N. Burgess, K. Jeffery, and J. O'Keefe. In K.J. Jeffery, N. Burgess and J. O'Keefe, editors, The hippocampal and parietal foundations of spatial cognition. Oxford U. Press, 1999.
[8] A.D. Milner, H.C. Dijkerman, and D.P. Carey. In K.J. Jeffery, N. Burgess and J. O'Keefe, editors, The hippocampal and parietal foundations of spatial cognition. Oxford U. Press, 1999.
[9] W.A. Suzuki, E.K. Miller, and R. Desimone. J. Neurosci., 78:1062-1081, 1997.
[10] J. O'Keefe and N. Burgess. Nature, 381:425-428, 1996.
[11] J.S. Taube. Prog. Neurobiol., 55:225-256, 1998.
[12] T. Hartley, N. Burgess, C. Lever, F. Cacucci, and J. O'Keefe. Hippocampus, 10:369-379, 2000.
[13] W.A. Suzuki and D.G. Amaral. J. Neurosci., 14:1856-1877, 1994.
[14] P.C. Fletcher, C.D. Frith, S.C. Baker, T. Shallice, R.S.J. Frackowiak, and R.J. Dolan. Neuroimage, 2(3):195-200, 1995.
[15] R.A. Andersen, G.K. Essick, and R.M. Siegel. Science, 230(4724):456-458, 1985.
[16] L.H. Snyder, A.P. Batista, and R.A. Andersen. Nature, 386:167-170, 1997.
[17] D. Zipser and R.A. Andersen. Nature, 331:679-684, 1988.
[18] N. Burgess, E. Maguire, H. Spiers, and J. O'Keefe. Submitted.
[19] A. Samsonovich and B.L. McNaughton. J. Neurosci., 17:5900-5920, 1997.
[20] S. Deneve, P.E. Latham, and A. Pouget. Nature Neuroscience, 2(8):740-745, 1999.
[21] G.J. Quirk, R.U. Muller, J.L. Kubie, and J.B. Ranck. J. Neurosci., 12:1945-1963, 1992.
[22] L.F. Abbott. Int. J. of Neur. Sys., 6:115-122, 1995.
[23] C. Guariglia, A. Padovani, P. Pantano, and L. Pizzamiglio. Nature, 364:235-237, 1993.
[24] C. Lever, F. Cacucci, N. Burgess, and J. O'Keefe. In Soc. Neurosci. Abs., vol. 24, 1999.
[25] E. Bostock, R.U. Muller, and J.L. Kubie. Hippocampus, 1:193-205, 1991.
[26] R.U. Muller and J.L. Kubie. J. Neurosci., 7:1951-1968, 1987.
[27] G. Vallar. In K.J. Jeffery, N. Burgess and J. O'Keefe, editors, The hippocampal and parietal foundations of spatial cognition. Oxford U. Press, 1999.
[28] C. Guariglia, G. Lippolis, and L. Pizzamiglio. Cortex, 34(2):233-241, 1998.
1,003 | 1,917 | New Approaches Towards Robust and
Adaptive Speech Recognition
Herve Bourlard, Samy Bengio and Katrin Weber
IDIAP
P.O. Box 592, rue du Simplon 4
1920 Martigny, Switzerland
{bourlard, bengio, weber}@idiap.ch
Abstract
In this paper, we discuss some new research directions in automatic
speech recognition (ASR) which somewhat deviate from the
usual approaches. More specifically, we will motivate and briefly
describe new approaches based on multi-stream and multi-band
ASR. These approaches extend the standard hidden Markov model
(HMM) based approach by assuming that the different (frequency)
channels representing the speech signal are processed by different
(independent) "experts", each expert focusing on a different characteristic of the signal, and that the different stream likelihoods (or
posteriors) are combined at some (temporal) stage to yield a global
recognition output. As a further extension to multi-stream ASR,
we will finally introduce a new approach, referred to as HMM2,
where the HMM emission probabilities are estimated via state specific feature based HMMs responsible for merging the stream information and modeling their possible correlation.
1 Multi-Channel Processing in ASR
Current automatic speech recognition systems are based on (context-dependent or
context-independent) phone models described in terms of a sequence of hidden
Markov model (HMM) states, where each HMM state is assumed to be characterized by a stationary probability density function. Furthermore, time correlation,
and consequently the dynamic of the signal, inside each HMM state is also usually disregarded (although the use of temporal delta and delta-delta features can
capture some of this correlation). Consequently, only medium-term dependencies
are captured via the topology of the HMM model, while short-term and long-term
dependencies are usually very poorly modeled. Ideally, we want to design a particular HMM able to accommodate multiple time-scale characteristics so that we can
capture phonetic properties, as well as syllable structures and (long-term) invariants
that are more robust to noise. It is, however, clear that those different time-scale
features will also exhibit different levels of stationarity and will require different
HMM topologies to capture their dynamics.
There are many potential advantages to such a multi-stream approach, including:
1. The definition of a principled way to merge different temporal knowledge
sources such as acoustic and visual inputs, even if the temporal sequences
are not synchronous and do not have the same data rate - see [13] for
further discussion about this.
2. Possibility to incorporate multiple time resolutions (as part of a structure
with multiple unit lengths, such as phone and syllable).
3. As a particular case of multi-stream processing, multi-band ASR [2, 5],
involving the independent processing and combination of partial frequency
bands, have many potential advantages briefly discussed below.
In the following, we will not discuss the underlying algorithms (more or less "complex" variants of Viterbi decoding), nor detailed experimental results (see, e.g., [4]
for recent results). Instead, we will mainly focus on the combination strategy and
discuss different variants around the same formalism.
2 Multiband-based ASR
2.1 General Formalism
As a particular case of the multi-stream paradigm, we have been investigating an
ASR approach based on independent processing and combination of frequency subbands. The general idea, as illustrated in Fig. 1, is to split the whole frequency band
(represented in terms of critical bands) into a few subbands on which different recognizers are independently applied. The resulting probabilities are then combined
for recognition later in the process at some segmental level. Starting from critical
bands, acoustic processing is now performed independently for each frequency band,
yielding K input streams, each being associated with a particular frequency band.
[Figure 1 diagram omitted: speech signal -> spectrogram -> frequency bands (Band 1 ... Band K) -> independent recognizers -> recombination -> recognized word.]
Figure 1: Typical multiband-based ASR architecture. In multi-band speech recognition, the frequency range is split into several bands, and information in the bands is
used for phonetic probability estimation by independent modules. These probabilities
are then combined for recognition later in the process at some segmental level.
In this case, each of the K sub-recognizers (channels) is now using the information
contained in a specific frequency band $X^k = \{x_1^k, x_2^k, \ldots, x_n^k, \ldots, x_N^k\}$, where each
$x_n^k$ represents the acoustic (spectral) vector at time n in the k-th stream.
In the case of hybrid HMM/ANN systems, HMM local emission (posterior) probabilities are estimated by an artificial neural network (ANN), estimating $P(q_j|x_n)$,
where $q_j$ is an HMM state and $x_n = (x_n^1, \ldots, x_n^k, \ldots, x_n^K)^t$ the feature vector at
time n.
In the case of multi-stream (or subband-based) HMM/ANN systems, different ANNs
will compute state-specific stream posteriors $P(q_j|x_n^k)$. Combination of these local
posteriors can then be performed at different temporal levels, and in many ways,
including [2]: untrained linear or trained linear (e.g., as a function of automatically
estimated local SNR) functions, as well as trained nonlinear functions (e.g., by using
a neural network). In the simplest case, this subband posterior recombination is
performed at the HMM state level, which then amounts to performing a standard
Viterbi decoding in which local (log) probabilities are obtained from a linear or
nonlinear combination of the local subband probabilities. For example, in the initial
subband-based ASR, local posteriors $P(q_j|x_n)$ were estimated according to:
$$P(q_j|x_n) = \sum_{k=1}^{K} w_k\, P(q_j|x_n^k, \Theta_k) \qquad (1)$$
where, in our case, each $P(q_j|x_n^k, \Theta_k)$ is computed with a band-specific ANN of
parameters $\Theta_k$ and with $x_n^k$ (possibly with temporal context) at its input. The
weighting factors can be assigned a uniform distribution (already performing very
well [2]) or be proportional to the estimated SNR. Over the last few years, several
results were reported showing that such a simple approach was usually more robust
to band-limited noise.
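Equation (1) is a one-line recombination once the band-specific ANNs have produced their posteriors; the sketch below assumes the posteriors are stacked in a [K x Q] array, with uniform or SNR-proportional weights, and is only an illustration.

```python
import numpy as np

def combine_streams(stream_posteriors, weights=None):
    """stream_posteriors: array [K bands x Q states] of P(q_j | x_n^k).
    Returns the recombined P(q_j | x_n) of eq. (1)."""
    K = stream_posteriors.shape[0]
    w = np.full(K, 1.0 / K) if weights is None else np.asarray(weights, float)
    w = w / w.sum()              # e.g. uniform, or proportional to estimated SNR
    return w @ stream_posteriors
```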
2.2 Motivations and Drawbacks
The multi-band approach briefly discussed above has several potential advantages, summarized
here.
Better robustness to band-limited noise- The signal may be impaired (e.g.,
by noise, channel characteristics, reverberation, ... ) only in some specific frequency
bands. When recognition is based on several independent decisions from different
frequency subbands, the decoding of a linguistic message need not be severely impaired, as long as the remaining clean subbands supply sufficiently reliable information. This was confirmed by several experiments (see, e.g., [2]). Surprisingly, even
when the combination is simply performed at the HMM state level, it is observed
that the multi-band approach is yielding better performance and noise robustness
than a regular full-band system.
Similar conclusions were also observed in the framework of the missing feature
theory [7, 9]. In this case, it was shown that, if one knows the position of the
noisy features, significantly better classification performance could be achieved by
disregarding the noisy data (using marginal distributions) or by integrating over
all possible values of the missing data conditionally on the clean features - See
Section 3 for further discussion about this.
Better modeling - Subband modeling will usually be more robust. Indeed, since
the dimension of each (subband) feature space is smaller, it is easier to estimate
reliable statistics (resulting in a more robust parametrization). Moreover, the all-pole modeling usually used in ASR will be more robust if performed on subbands,
i.e., in lower dimensional spaces, than on the full-band signal [12].
Channel asynchrony - Transitions between more stationary segments of speech
do not necessarily occur at the same time across the different frequency bands [8],
which makes the piecewise stationary assumption more fragile. The subband approach may have the potential of relaxing the synchrony constraint inherent in
current HMM systems.
Channel specific processing and modeling -
Different recognition strate-
gies might ultimately be applied in different subbands. For example, different
time/frequency resolution tradeoffs could be chosen (e.g., time resolution and width
of analysis window depending on the frequency subband). Finally, some subbands
may be inherently better for certain classes of speech sounds than others.
Major objections and drawbacks - One of the common objections [8] to this
separate modeling of each frequency band has been that important information in
the form of correlation between bands may be lost. Although this may be true,
several studies [8], as well as the good recognition rates achieved on small frequency
bands [3, 6], tend to show that most of the phonetic information is preserved in each
frequency band (possibly provided that we have enough temporal information). This
drawback will be fixed by the method presented next.
3 Full Combination Subband ASR
If we know where the noise is, and based on the results obtained with missing
data [7, 9], impressive noise robustness can be achieved by using the marginal
distribution, estimating the HMM emission probability based on the clean frequency
bands only. In our subband approach, we do not assume that we know, or detect
explicitly, where the noise is. Following the above developments and discussions,
it thus seems reasonable to integrate over all possible positions of the noisy bands,
and thus to simultaneously deal with all the $L = 2^K$ possible subband combinations
$S^\ell$ (with $\ell = 1, \ldots, L$, and also including the empty set) extracted from the feature
vector $x_n$. Introducing the hidden variable $E_\ell$, representing the statistical (exclusive
and mutually exhaustive) event that the feature subset $S^\ell$ is "clean" (reliable),
and integrating over all its possible values, we can then rewrite the local posterior
probability as:
$$P(q_j|x_n, \Theta) = \sum_{\ell=1}^{L} P(q_j, E_\ell|x_n, \Theta) = \sum_{\ell=1}^{L} P(q_j|E_\ell, x_n, \Theta_\ell)\, P(E_\ell|x_n) = \sum_{\ell=1}^{L} P(q_j|S_n^\ell, \Theta_\ell)\, P(E_\ell|x_n) \qquad (2)$$
where $P(E_\ell|x_n)$ represents the relative reliability of a specific feature set. $\Theta$ represents the whole parameter space, while $\Theta_\ell$ denotes the set of (ANN) parameters
used to compute the subband posteriors.
Typically, training of the L neural nets would be done once and for all on clean
data, and the recognizer would then be adapted online simply by adjusting the
weights $P(E_\ell|x_n)$ (still representing a limited set of L parameters) to increase the
global posteriors. This adaptation can be performed by online estimation of the
signal-to-noise ratio or by online, unsupervised, EM adaptation.
While it is pretty easy to quickly estimate any subband likelihood or marginal
distribution when working with Gaussian or multi-Gaussian densities [7], straightforward
implementation of (2) is not always tractable since it requires the use (and training)
of L neural networks to estimate all the posteriors $P(q_j|S_n^\ell, \Theta_\ell)$. However, it has
the advantage of not requiring the subband independence assumption [3].
An interesting approximation to this "optimal" solution though consists in simply
using the neural nets that are available (K of them in the case of baseline subband
ASR) and, re-introducing the independence assumption, to approximate all the
other subband combination probabilities in (2), as follows [3, 4]:
$$P(q_j|S_n^\ell, \Theta) = P(q_j) \prod_{k \in S^\ell} \frac{P(q_j|x_n^k, \Theta_k)}{P(q_j)} \qquad (3)$$
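A sketch of the full combination rule, enumerating all $2^K$ subsets and approximating each subset posterior with (3); the subset reliabilities $P(E_\ell|x_n)$ are assumed given (e.g. adapted online), and the per-subset renormalization is our own choice rather than the paper's.

```python
import numpy as np
from itertools import combinations

def full_combination(stream_posteriors, priors, subset_reliability):
    """Eq. (2) with each subset posterior approximated by eq. (3).

    stream_posteriors  : [K x Q] per-band posteriors P(q_j | x_n^k)
    priors             : [Q] class priors P(q_j)
    subset_reliability : one weight P(E_l | x_n) per subset, in the
                         enumeration order below (assumed given/adapted).
    """
    K, Q = stream_posteriors.shape
    subsets = [s for r in range(K + 1) for s in combinations(range(K), r)]
    post = np.zeros(Q)
    for subset, rel in zip(subsets, subset_reliability):
        p = priors.copy()                            # empty subset -> prior
        for k in subset:
            p = p * stream_posteriors[k] / priors    # eq. (3)
        post += rel * p / p.sum()                    # renormalized subset term
    return post
```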
Experimental results obtained from this Full Combination approach in different
noisy conditions are reported in [3, 4], where the performance of this above approximation was also compared to the "optimal" estimators (2). Interestingly, it was
shown that this independence assumption did not hurt much and that the resulting
recognition performance was similar to the performance obtained by training and
recombining all possible L nets (and significantly better than the original subband
approach). In both cases, the recognition rate and the robustness to noise were
greatly improved compared to the initial subband approach. This further confirms
that we do not seem to lose "critically" important information when neglecting the
correlation between bands.
In the next section, we briefly introduce a further extension of this approach where
the segmentation into subbands is no longer done explicitly, but is achieved dynamically over time, and where the integration over all possible frequency segmentations
is part of the same formalism.
4 HMM2: Mixture of HMMs
HMM emission probabilities are typically modeled through Gaussian mixtures or
neural networks. We propose here an alternative approach, referred to as HMM2, integrating standard HMMs (referred to as "temporal HMMs") with state-dependent
feature-based HMMs (referred to as "feature HMMs") responsible for the estimation of the emission probabilities. In this case, each feature vector $x_n$ at time n is
considered as a fixed length sequence, which has supposedly been generated by a
temporal HMM state specific HMM for which each state is emitting individual feature components that are modeled by, e.g., one dimensional Gaussian mixtures. The
feature HMM thus looks at all possible subband segmentations and automatically
performs the combination of the likelihoods to yield a single emission probability.
The resulting architecture is illustrated in Figure 2. In this example, the HMM2 is
composed of an HMM that handle sequences of features through time. This HMM
is composed of 3 left-to-right connected states (q1, q2 and q3) and each state emits
a vector of features at each time step. The particularity of an HMM2 is that each
state uses an HMM to emit the feature vector, as if it was an ordered sequence
(instead of a vector). In Figure 2, state q2 contains a feature HMM with 4 states
connected top-down. Of course, while the temporal HMM usually has a left-to-right
structure, the topology of the feature HMM can take many forms, which will then
reflect the correlation being captured by the model. The feature HMM could even
have more states than feature components, in which case "high-order" correlation
information could be extracted.
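To make the feature-HMM idea concrete, the sketch below scores one feature vector with a standard forward recursion over its components; single Gaussian emissions per feature state (rather than mixtures) and the parameter layout are simplifying assumptions of ours.

```python
import numpy as np

def hmm2_emission(x, init, trans, mu, sigma):
    """P(x | temporal state): forward recursion of that state's feature HMM,
    walking top-down over the components of the feature vector x.

    init  : [S] initial feature-state probabilities
    trans : [S x S] feature-state transition matrix
    mu, sigma : [S] Gaussian emission parameters per feature state
    """
    def gauss(v):
        return np.exp(-(v - mu) ** 2 / (2 * sigma ** 2)) \
            / np.sqrt(2 * np.pi * sigma ** 2)

    alpha = init * gauss(x[0])
    for v in x[1:]:
        alpha = (alpha @ trans) * gauss(v)
    return alpha.sum()
```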
In [1], an EM algorithm to jointly train all the parameters of such an HMM2 in order
to maximize the data likelihood has been derived. This derivation was based on the
fact that an HMM2 can be considered as a mixture of mixtures of distributions.
We believe that HMM2 (which includes the classical mixture of Gaussian HMMs as
a particular case) has several potential advantages, including:
1. Better feature correlation modeling through the feature-based (frequency)
HMM topology. Also, the complexity of this topology and the probability
density function associated with each state easily control the number of
parameters.
2. Automatic non-linear spectral warping. In the same way the conventional
HMM does time warping and time integration, the feature-based HMM
performs frequency warping and frequency integration.
3. Dynamic formant trajectory modelling. As further discussed below, the
HMM2 structure has the potential to extract some relevant formant structure information, which is often considered as important to robust speech
recognition.
To illustrate the last point and its relationship with dynamic multi-band ASR,
the HMM2 model was used in [14] to extract formant-like information. All the
parameters of HMM2 models were trained according to the above EM algorithm on
delta-frequency features (differences of two consecutive log Rasta PLP coefficients).
The feature HMM had a simple top-down topology with 4 states. After training,
Figure 3 shows (on unseen test data) the value of the features for the phoneme iy as
well as the segmentation found by a Viterbi decoding along the delta-frequency axis
(the thick black lines). At each time step, we kept the 3 positions where the deltafrequency HMM changed its state during decoding (for instance, at the first time
frame, the HMM goes from state 1 to state 2 after the third feature). We believe
they contain formant-like information. In [14], it has been shown that the use of
that information could significantly enhance standard speech recognition systems.
Figure 2: An HMM2: the emission distributions of the HMM
are estimated by another HMM.
Figure 3: Frequency deltas of log Rasta
PLP and segmentation for an example of
phoneme iy.
Acknowledgments
The content and themes discussed in this paper largely benefited from the collaboration with our colleagues Andrew Morris, Astrid Hagen and Herve Glotin. This work
was partly supported by the Swiss Federal Office for Education and Science (FOES)
through the European SPHEAR (TMR, Training and Mobility of Researchers) and
RESPITE (ESPRIT Long Term Research) projects. Additionally, Katrin Weber is
supported by the Swiss National Science Foundation project MULTICHAN.
References
[1] Bengio, S., Bourlard, H., and Weber, K., "An EM Algorithm for HMMs with Emission Distributions Represented by HMMs," IDIAP Research Report, IDIAP-RR-00-11, Martigny, Switzerland, 2000.
[2] Bourlard, H. and Dupont, S., "A new ASR approach based on independent processing and combination of partial frequency bands," Proc. of Intl. Conf. on Spoken Language Processing (Philadelphia), pp. 422-425, October 1996.
[3] Hagen, A., Morris, A., Bourlard, H., "Subband-based speech recognition in noisy conditions: The full combination approach," IDIAP Research Report no. IDIAP-RR-98-15, 1998.
[4] Hagen, A., Morris, A., Bourlard, H., "Different weighting schemes in the full combination subbands approach for noise robust ASR," Proceedings of the Workshop on Robust Methods for Speech Recognition in Adverse Conditions (Tampere, Finland), May 25-26, 1999.
[5] Hermansky, H., Pavel, M., and Tribewala, S., "Towards ASR using partially corrupted speech," Proc. of Intl. Conf. on Spoken Language Processing (Philadelphia), pp. 458-461, October 1996.
[6] Hermansky, H. and Sharma, S., "Temporal patterns (TRAPS) in ASR of noisy speech," Proc. of the IEEE Intl. Conf. on Acoustics, Speech, and Signal Processing (Phoenix, AZ), pp. 289-292, March 1999.
[7] Lippmann, R.P., Carlson, B.A., "Using missing feature theory to actively select features for robust speech recognition with interruptions, filtering and noise," Proc. Eurospeech '97 (Rhodes, Greece, September 1997), pp. KN37-40.
[8] Mirghafori, N. and Morgan, N., "Transmissions and transitions: A study of two common assumptions in multi-band ASR," Intl. IEEE Conf. on Acoustics, Speech, and Signal Processing (Seattle, WA, May 1997), pp. 713-716.
[9] Morris, A.C., Cooke, M.P., and Green, P.D., "Some solutions to the missing features problem in data classification, with application to noise robust ASR," Proc. Intl. Conf. on Acoustics, Speech, and Signal Processing, pp. 737-740, 1998.
[10] Morris, A.C., Hagen, A., Bourlard, H., "The full combination subbands approach to noise robust HMM/ANN-based ASR," Proc. of Eurospeech '99 (Budapest, Sep. 99).
[11] Okawa, S., Bocchieri, E., Potamianos, A., "Multi-band speech recognition in noisy environment," Proc. IEEE Intl. Conf. on Acoustics, Speech, and Signal Processing, 1998.
[12] Rao, S. and Pearlman, W.A., "Analysis of linear prediction, coding, and spectral estimation from subbands," IEEE Trans. on Information Theory, vol. 42, pp. 1160-1178, July 1996.
[13] Tomlinson, M.J., Russel, M.J., Moore, R.K., Bucklan, A.P., and Fawley, M.A., "Modelling asynchrony in speech using elementary single-signal decomposition," Proc. of IEEE Intl. Conf. on Acoustics, Speech, and Signal Processing (Munich), pp. 1247-1250, April 1997.
[14] Weber, K., Bengio, S., and Bourlard, H., "HMM2 - Extraction of Formant Features and their Use for Robust ASR," IDIAP Research Report, IDIAP-RR-00-42, Martigny, Switzerland, 2000.
1,004 | 1,918 | A variational mean-field theory for
sigmoidal belief networks
c. Bhattacharyya
Computer Science and Automation
Indian Institute of Science
Bangalore, India, 560012
cbchiru@csa.iisc.ernet.in
S. Sathiya Keerthi
Mechanical and Production Engineering
National University of Singapore
mpessk@guppy.mpe.nus.edu.sg
Abstract
A variational derivation of Plefka's mean-field theory is presented.
This theory is then applied to sigmoidal belief networks with the
aid of further approximations. Empirical evaluation on small scale
networks show that the proposed approximations are quite competitive.
1 Introduction
Application of mean-field theory to solve the problem of inference in Belief Networks (BNs) is well known [1]. In this paper we will discuss a variational mean-field
theory and its application to BNs, sigmoidal BNs in particular.
We present a variational derivation of the mean-field theory proposed by Plefka [2].
The theory will be developed for a stochastic system, consisting of N binary random
variables, $S_i \in \{0, 1\}$, described by the energy function $E(\bar S)$, and the following
Boltzmann Gibbs distribution at a temperature T:
$$P(\bar S) = \frac{e^{-\frac{E(\bar S)}{T}}}{Z}, \qquad Z = \sum_{\bar S} e^{-\frac{E(\bar S)}{T}}.$$
The application of this mean-field method to Boltzmann Machines (BMs) has already
been done [3]. A large class of BNs is described by the following energy function:
$$E(\bar S) = -\sum_{i=1}^{N} \big\{ S_i \ln f(M_i) + (1 - S_i) \ln(1 - f(M_i)) \big\}, \qquad M_i = \sum_{j=1}^{i-1} w_{ij} S_j + h_i$$
The application of the mean-field theory for such energy functions is not straightforward and further approximations are needed. We propose a new approximation
scheme and discuss its utility for sigmoid networks, which is obtained by substituting
$$f(x) = \frac{1}{1 + e^{x}}$$
in the above energy function. The paper is organized as follows. In section 2 we
present a variational derivation of Plefka's mean-field theory. In section 3 the theory
is extended to sigmoidal belief networks. In section 4 empirical evaluation is done.
Concluding remarks are given in section 5.
2
A Variational mean-field theory
Plefka [2] proposed a mean-field theory in the context of spin glasses. This theory
can, in principle, yield arbitrarily close approximation to log Z. In this section we
present an alternate derivation from a variational viewpoint, see also [4],[5].
Let 'Y be a real parameter that takes values from 0 to 1. Let us define a 'Y dependent
partition and distribution function,
$$Z_\gamma = \sum_{\bar S} e^{-\gamma \frac{E(\bar S)}{T}}, \qquad P_\gamma(\bar S) = \frac{e^{-\gamma \frac{E(\bar S)}{T}}}{Z_\gamma}. \qquad (1)$$
Note that $Z_1 = Z$ and $P_1 = P$. Introducing an external real vector $\bar\theta$, let us rewrite (1) as
$$Z_\gamma = \tilde Z \sum_{\bar S} \frac{e^{-\gamma \frac{E(\bar S)}{T} + \sum_i \theta_i S_i}}{\tilde Z}\, e^{-\sum_i \theta_i S_i} \qquad (2)$$
where $\tilde Z$ is the partition function associated with the distribution function $\tilde P_\gamma$ given by
$$\tilde P_\gamma(\bar S) = \frac{e^{-\gamma \frac{E(\bar S)}{T} + \sum_i \theta_i S_i}}{\tilde Z}, \qquad \tilde Z = \sum_{\bar S} e^{-\gamma \frac{E(\bar S)}{T} + \sum_i \theta_i S_i}. \qquad (3)$$
Using Jensen's inequality, $\langle e^{-x} \rangle \geq e^{-\langle x \rangle}$, we get
$$Z_\gamma = \tilde Z \sum_{\bar S} \tilde P_\gamma(\bar S)\, e^{-\sum_i \theta_i S_i} \geq \tilde Z\, e^{-\sum_i \theta_i u_i} \qquad (4)$$
where
$$u_i = \langle S_i \rangle_{\tilde P_\gamma}. \qquad (5)$$
Taking logarithms on both sides of (4) we obtain
$$\log Z_\gamma \geq \log \tilde Z - \sum_i \theta_i u_i. \qquad (6)$$
The right hand side is defined as a function of $\bar u$ and $\gamma$ via the following assumption.
Invertibility assumption: For each fixed $\bar u$ and $\gamma$, (5) can be solved for $\bar\theta$.
If the invertibility assumption holds then we can use $\bar u$ as the independent vector
(with $\bar\theta$ dependent on $\bar u$) and rewrite (6) as
$$\log Z_\gamma \geq -G(\bar u, \gamma) \qquad (7)$$
where G is as defined in
$$G(\bar u, \gamma) = -\ln \tilde Z + \sum_i \theta_i u_i.$$
This then gives a variational feel: treat $\bar u$ as an external variable vector and choose
it to minimize G for a fixed $\gamma$. The stationarity conditions of the above minimization
problem yield
$$\theta_i = \frac{\partial G}{\partial u_i} = 0.$$
At the minimum point we have the equality $G = -\log Z_\gamma$.
It is difficult to invert (5) for $\gamma \neq 0$, thus making it impossible to write an algebraic
expression for G for any nonzero $\gamma$. At $\gamma = 0$ the inversion is straightforward and
one obtains
$$G(\bar u, 0) = \sum_{i=1}^{N} \big( u_i \ln u_i + (1 - u_i) \ln(1 - u_i) \big), \qquad \tilde P_0 = \prod_i u_i^{S_i} (1 - u_i)^{1 - S_i}.$$
A Taylor series approach is then undertaken around $\gamma = 0$ to build an approximation
to G. Define
$$G_M = G(\bar u, 0) + \sum_{k=1}^{M} \frac{\gamma^k}{k!} \frac{\partial^k G}{\partial \gamma^k}\bigg|_{\gamma=0} \qquad (8)$$
Then $G_M$ can be considered as an approximation of G. The stationarity conditions
are enforced by setting
$$\theta_i = \frac{\partial G}{\partial u_i} \approx \frac{\partial G_M}{\partial u_i} = 0.$$
In this paper we will restrict ourselves to M = 2. To do this we need to evaluate
the following derivatives
$$\frac{\partial G}{\partial \gamma}\bigg|_{\gamma=0} \qquad (9)$$
$$\frac{\partial^2 G}{\partial \gamma^2}\bigg|_{\gamma=0} \qquad (10)$$
can be identified with the TAP correction. The term (10) yields the TAP term for
BM energy function.
3
Mean-field approximations for BNs
The method, as developed in the previous section, is not directly useful for BNs
because of the intractability of the partial derivatives at 'Y = O. To overcome this
problem, we suggest an approximation based on Taylor series expansion. Though
in this paper we will be restricting ourselves to sigmoid activation function, this
method is applicable to other activation functions also. This method enables calculation of all the necessary terms required for extending Plefka's method for BN s.
Since, for BN operation T is fixed to 1, T will be dropped from all equations in the
rest of the paper.
Let us define a new energy function
N
E((3,S,il,w) = - 2)Silnf(Mi((3)) + (1- Si)ln(I- f(Mi((3))}
(11)
i=l
where 0
~
(3
~
1,
i-l
i-l
Mi((3) = L Wij(3(Sj - Uj) + Mi , Mi = L WijUj + hi
j=l
j=l
where
e - 'VE+"
Di (J?S?
I
Uk = L SkP"(/3 Vk, P"(/3 =
t
1.
~ 2: ..
- ,,(E+ . (J.S.
"
(12)
6;e?
?
Since $\beta$ is the important parameter, $E(\beta, \bar S, \bar u, w)$ will be referred to as $E(\beta)$ so as
to avoid notational clumsiness. We use a Taylor series approximation of $E(\beta)$ with
respect to $\beta$. Let us define
$$E_C(\beta) = E(0) + \sum_{k=1}^{C} \frac{\beta^k}{k!} \frac{\partial^k E}{\partial \beta^k}\bigg|_{\beta=0} \qquad (13)$$
If $E_C$ approximates E, then we can write
$$E = E(1) \approx E_C(1). \qquad (14)$$
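As a quick numerical sanity check of (13)-(14), one can estimate the $\beta$-derivatives by central finite differences and compare $E_C(1)$ with $E(1)$; the step size and the difference scheme below are our choices, not the paper's analytic derivatives.

```python
from math import comb, factorial

def taylor_at_one(E, C, h=1e-2):
    """E_C(1) per eq. (13), with d^k E / d beta^k at 0 estimated by
    central finite differences applied to a scalar function E(beta)."""
    approx = E(0.0)
    for k in range(1, C + 1):
        dk = sum((-1) ** j * comb(k, j) * E((k / 2.0 - j) * h)
                 for j in range(k + 1)) / h ** k
        approx += dk / factorial(k)
    return approx

# For smooth E, taylor_at_one(E, C) approaches E(1) as C grows, cf. eq. (14).
```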
Let us now define the following function
$$A(\gamma, \beta, \bar u) = -\ln \sum_{\bar S} e^{-\gamma E + \sum_i \theta_i S_i} + \sum_i \theta_i u_i \qquad (15)$$
The $\theta_i$ are assumed to be functions of $\bar u$, $\beta$, $\gamma$, which are obtained by inverting
equations (12). By replacing E by $E_C$ in (15) we obtain $A_C$:
$$A_C(\gamma, \beta, \bar u) = -\ln \sum_{\bar S} e^{-\gamma E_C + \sum_i \theta_i S_i} + \sum_i \theta_i u_i \qquad (16)$$
where the definition of il is obtained by replacing E by Ee. In view of (14) one can
consider Ae as an approximation to A. This observation suggests an approximation
to G.
(17)
G(r, il) = A(r, 1, il) ~ Ae(r, 1, il)
The required terms needed in the Taylor expansion of G in 'Y can be approximated
by
G(O, il) = A(O, 1, il) = Ac(O, 1, il)
okG I
Ok A I
Ok Ae I
o'Y k ,,(=0 = o'Y k ,,(=0,/3=1 ~ o'Y k ,,(=0,/3=1
The biggest advantage in working with Ae rather than G is that the partial derivatives of Ae with respect to 'Y at 'Y = 0 and (3 = 1 can be expressed as functions of
il. We define
(18)
Figure 1: Three-layer BN (2 × 4 × 6) with top-down propagation of beliefs. The
activation function was chosen to be sigmoid.
In light of the above discussion one can consider $G_M \approx G_{MC}$; hence the mean-field
equations can be stated as
$$\theta_i = \frac{\partial G}{\partial u_i} \approx \frac{\partial G_{MC}}{\partial u_i} = 0 \qquad (19)$$
In this paper we will restrict ourselves to $M = 2$. The relevant objective functions
for a general $C$ are given by $G_{MC}$. All these objective functions can be expressed as a function of $u$.
4
Experimental results
To test the approximation schemes developed in the previous sections, numerical
experiments were conducted. Saul et al. [1] pioneered the application of mean-field
theory to BNs. We will refer to their method as the SJJ approach. We compare
our schemes with the SJJ approach.
Small networks were chosen so that $\ln Z$ can be computed by exact enumeration
for evaluation purposes. For all the experiments the network topology was fixed
to the one shown in Figure 1. This choice of the network enables us to compare
the results with those of [1]. To compare the performance of our methods with
their method we repeated the experiment conducted by them for sigmoid BNs. Ten
thousand networks were generated by randomly choosing weight values in $[-1, 1]$.
The bottom layer units, or the visible units of each network, were instantiated to
zero. The likelihood, $\ln Z$, was computed by exact enumeration of all the states in
the higher two layers. The approximate value of $-\ln Z$ was computed by $G_{MC}$;
$u$ was computed by solving the fixed point equations obtained from (19). The
goodness of the approximation scheme was tested by the following measure
$$\epsilon = \frac{-G_{MC}}{\ln Z} - 1 \qquad (22)$$
For a proper comparison we also implemented the SJJ method. The goodness
of approximation for the SJJ scheme is evaluated by substituting $G_{MC}$ in (22)
by $L_{\mathrm{approx}}$; for the specific formula see [1]. The results are presented in the form
of histograms in Figure 2. We also repeated the experiment with weights and
biases taking values between $-5$ and $5$; the results are again presented in the form of
histograms in Figure 3. The findings are summarized in the form of means tabulated
in Table 1.

Table 1: Mean of $\epsilon$ for randomly generated sigmoid networks, in different weight ranges.

                              G11       G12       G22       SJJ
    small weights $[-1, 1]$  -0.0404    0.0155    0.0029    0.0157
    large weights $[-5, 5]$  -0.0440    0.0231   -0.0456    0.0962
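The exact $\ln Z$ entering (22) can be obtained by brute-force enumeration for networks of this size. Below is a minimal Python sketch; the parents-first ordering, weight layout, and clamping of the visibles to zero are assumptions made for illustration, not details taken verbatim from the paper.

```python
import itertools
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def log_likelihood_visible_zero(W, h, n_hidden, n_visible):
    """ln Z for a sigmoid belief net by exact enumeration (small nets only).

    Units are ordered parents-first; unit i has parents j < i with weights
    W[i, j] and bias h[i].  The last n_visible units are clamped to 0, as in
    the experiment; we sum the joint probability over all hidden states.
    """
    n = n_hidden + n_visible
    total = 0.0
    for hidden in itertools.product([0, 1], repeat=n_hidden):
        s = np.array(hidden + (0,) * n_visible, dtype=float)
        p = 1.0
        for i in range(n):
            f = sigmoid(W[i, :i] @ s[:i] + h[i])
            p *= f if s[i] == 1 else (1.0 - f)
        total += p
    return np.log(total)

rng = np.random.default_rng(0)
n_hidden, n_visible = 6, 6      # e.g. the 2x4 upper layers flattened, 6 visibles
n = n_hidden + n_visible
W = np.tril(rng.uniform(-1, 1, size=(n, n)), k=-1)
h = rng.uniform(-1, 1, size=n)
print("exact ln Z =", log_likelihood_visible_zero(W, h, n_hidden, n_visible))
# epsilon = (-G_MC / ln Z) - 1 would then score an approximation G_MC.
```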
For small weights, G12 and the SJJ approach show close results, which was expected.
But the improvement achieved by the G22 scheme is remarkable; it gave a mean
value of 0.0029, which compares substantially well against the mean value of 0.01139
reported in [6]. The improvement in [6] was achieved by using mixture distributions,
which require the introduction of extra variational variables; more than 100 extra variational variables are needed for a 5-component mixture. This results in a substantial
increase in the computation costs. On the other hand, the extra computational
cost for G22 over G12 is marginal. This makes the G22 scheme computationally
attractive over the mixture distribution.
"
, \ 0 '"
Figure 2: Histograms for GlO and SJJ scheme for weights taking values in [-1,1],
for sigmoid networks. The plot on the left show histograms for ? for the schemes
Gu and G12 They did not have any overlaps; Gu , gives a mean of -0.040 while G12
gives a mean of 0.0155. The middle plot shows the histogram for the SJJ scheme,
mean is given by 0.0157.The plot at the extreme right is for the scheme G22 , having
a mean of 0.0029
Of the three schemes G12 is the most robust and also yields reasonably accurate
results. It is outperformed only by G22 in the case of sigmoid networks with low
weights. Empirical evidence thus suggests that the choice of a scheme is not straightforward and depends on the activation function and also parameter values.
Figure 3: Histograms of $\epsilon$ for the G11, G12, G22 and SJJ schemes for weights taking
values in $[-5, 5]$ for sigmoid networks. The leftmost histogram shows $\epsilon$ for the G11
scheme, having a mean of $-0.0440$; second from left is for the G12 scheme, having
a mean of 0.0231; and second from right is for the SJJ scheme, having a mean of
0.0962. The scheme G22 is at the extreme right, with mean $-0.0456$.
5
Discussion
Application of Plefka's theory to BNs is not straightforward. It requires computation of some averages which are not tractable. We presented a scheme in which
the BN energy function is approximated by a Taylor series, which gives a tractable
approximation to the terms required for Plefka's method. Various approximation
schemes depending on the degree of the Taylor series expansion are derived. Unlike
the approach in [1], the schemes discussed here are simpler as they do not introduce
extra variational variables. Empirical evaluation on small scale networks shows that
the quality of approximations is quite good. For a more detailed discussion of these
points see [7].
References
[1] Saul, L. K., Jaakkola, T., and Jordan, M. I. (1996), Mean field theory for sigmoid
belief networks, Journal of Artificial Intelligence Research, 4
[2] Plefka, T. (1982), Convergence condition of the TAP equation for the infinite-ranged
Ising spin glass model, J. Phys. A: Math. Gen., 15
[3] Kappen, H. J. and Rodriguez, F. B. (1998), Boltzmann machine learning using mean
field theory and linear response correction, Advances in Neural Information Processing Systems 10, (eds.) M. I. Jordan, M. J. Kearns and S. A. Solla, MIT Press
[4] Georges, A. and Yedidia, J. S. (1991), How to expand around mean-field theory using
high temperature expansions, J. Phys. A: Math. Gen., 24
[5] Bhattacharyya, C. and Keerthi, S. S. (2000), Information geometry and Plefka's mean-field theory, J. Phys. A: Math. Gen., 33
[6] Bishop, C. M., Lawrence, N., Jaakkola, T., and Jordan, M. I. (1997), Approximating Posterior Distributions in Belief Networks using Mixtures, Advances in Neural
Information Processing Systems 10, (eds.) Jordan, M. I., Kearns, M. J., and Solla,
S., MIT Press
[7] Bhattacharyya, C. and Keerthi, S. S. (1999), Mean field theory for a special class of
belief networks, accepted in Journal of Artificial Intelligence Research
| 1918 |@word build:1 implemented:1 ye:1 middle:1 inversion:1 ranged:1 uj:1 equality:1 hence:1 approximating:1 objective:2 already:1 g22:6 nonzero:1 stochastic:1 bn:5 attractive:1 kappen:1 leftmost:1 series:5 bhattacharyya:3 pl:1 correction:2 hold:1 temperature:2 around:2 considered:1 variational:11 si:5 activation:4 lawrence:1 sigmoid:9 difficult:1 visible:1 numerical:1 partition:2 substituting:2 ji:4 enables:2 stated:1 plot:3 purpose:1 discussed:1 outperformed:1 intelligence:2 applicable:1 approximates:1 proper:1 refer:1 boltzmann:3 observation:1 gibbs:1 minimization:1 mit:2 math:3 extended:1 sigmoidal:4 simpler:1 rather:1 avoid:1 wijsj:1 glo:1 jaakkola:2 posterior:1 inverting:1 derived:1 mechanical:1 required:3 vk:1 notational:1 improvement:2 likelihood:1 introduce:1 tap:3 inequality:1 expected:1 binary:1 arbitrarily:1 glass:2 nu:1 inference:1 guppy:1 dependent:1 minimum:1 george:1 enumeration:2 expand:1 wij:1 iisc:1 ii:1 belief:8 overlap:1 meanfield:1 kg:1 substantially:1 calculation:1 special:1 developed:3 ernet:1 marginal:1 field:18 ag:1 finding:1 amc:1 having:4 scheme:22 ae:7 histogram:7 uk:1 zl:2 unit:2 bangalore:1 invert:1 randomly:2 achieved:2 sg:1 kf:1 national:1 ve:1 engineering:1 dropped:1 treat:1 geometry:1 ourselves:3 keerthi:3 extra:4 rest:1 unlike:1 remarkable:1 stationarity:2 degree:1 evaluation:4 suggests:2 jordan:4 principle:1 mixture:4 extreme:2 ee:3 viewpoint:1 light:1 intractability:1 bi:1 range:1 production:1 kt:1 accurate:1 gave:1 restrict:2 partial:2 necessary:1 identified:1 topology:1 side:2 bias:1 india:1 institute:1 saul:2 taking:4 taylor:6 logarithm:1 empirical:4 expression:2 agm:1 utility:1 overcome:1 lnl:1 tabulated:1 suggest:1 algebraic:1 get:1 close:2 goodness:2 bm:2 remark:1 ec:3 context:1 impossible:1 cost:2 introducing:1 useful:1 plefka:9 detailed:1 sj:3 approximate:1 obtains:1 straightforward:4 conducted:2 ten:1 reported:1 assumed:1 sathiya:1 singapore:1 inz:1 table:2 reasonably:1 robust:1 write:2 lnz:1 feel:1 gm:4 csa:1 pioneered:1 exact:2 again:1 expansion:4 choose:1 did:1 ze:1 approximated:2 external:2 derivative:3 undertaken:1 ising:1 repeated:2 bottom:1 enforced:1 solved:1 summarized:1 automation:1 thousand:1 invertibility:2 referred:1 biggest:1 aid:1 depends:1 solla:2 view:1 yk:1 substantial:1 mpe:1 ui:10 competitive:1 layer:3 hi:2 down:1 biui:1 formula:1 rewrite:2 solving:1 spin:1 minimize:1 il:13 aui:3 specific:1 bishop:1 g11:1 yield:4 jensen:1 gu:3 evidence:1 po:1 restricting:1 various:1 bns:8 concluding:1 g10:1 mc:3 derivation:4 instantiated:1 mpessk:1 artificial:2 alternate:1 phys:3 choosing:1 ed:2 quite:2 definition:1 against:1 solve:1 energy:7 expressed:2 making:1 associated:1 mi:8 di:1 ln:1 advantage:1 equation:5 computationally:1 discus:2 propose:1 organized:1 needed:3 g12:6 tractable:2 relevant:1 ok:2 higher:1 gen:3 operation:1 infinite:1 response:1 yedidia:1 sjj:8 kearns:2 done:2 though:1 evaluated:1 accepted:1 experimental:1 convergence:1 hand:2 working:1 extending:1 replacing:2 top:1 propagation:1 rodriguez:1 depending:1 ac:1 indian:1 quality:1 evaluate:1 tested:1 ex:1 |
1,005 | 1,919 | Direct Classification with Indirect Data
Timothy X Brown
Interdisciplinary Telecommunications Program
Dept. of Electrical and Computer Engineering
University of Colorado, Boulder, 80309-0530
timxb@colorado.edu
Abstract
We classify an input space according to the outputs of a real-valued
function. The function is not given, but rather examples of the
function. We contribute a consistent classifier that avoids the unnecessary complexity of estimating the function.
1
Introduction
In this paper, we consider a learning problem that combines elements of regression
and classification. Suppose there exists an unknown real-valued property of the
feature space, $p(\phi)$, that maps from the feature space, $\phi \in \mathbb{R}^n$, to $\mathbb{R}$. The property
function and a positive set $A \subset \mathbb{R}$ define the desired classifier as follows:
$$C^*(\phi) = \begin{cases} +1 & \text{if } p(\phi) \in A \\ -1 & \text{otherwise} \end{cases} \qquad (1)$$
Though $p(\phi)$ is unknown, measurements, $\mu$, associated with $p(\phi)$ at different features, $\phi$, are available in a data set $X = \{(\phi_i, \mu_i)\}$ of size $|X| = N$. Each sample
is i.i.d. with unknown distribution $f(\phi, \mu)$. This data is indirect in that $\mu$ may be
an input to a sufficient statistic for estimating $p(\phi)$ but in itself does not directly
indicate $C^*(\phi)$ in (1). Figure 1 gives a schematic of the problem.
Let $C_X(\phi)$ be a decision function mapping from $\mathbb{R}^n$ to $\{-1, 1\}$ that is estimated
from the data $X$. The estimator $C_X(\phi)$ is consistent if
$$\lim_{|X| \to \infty} P\{C_X(\phi) \ne C^*(\phi)\} = 0 \qquad (2)$$
where the probabilities are taken over the distribution $f$.
This problem arises in controlling data networks that provide quality of service
guarantees such as a maximum packet loss rate [1]-[8]. A data network occasionally
drops packets due to congestion. The loss rate depends on the traffic carried by the
network (i.e. the network state). The network can not measure the loss rate directly,
but can collect data on the observed number of packets sent and lost at different
network states. Thus, the feature space, ?, is the network state; the property
function, p(?), is the underlying loss rate; the measurements, p" are the observed
p(if?
A
Figure 1: The classification problem. The
classifier indicates whether an unknown
if> function, p(?), is within a set of interest,
A. The learner is only given the data "x" .
packet losses; the positive set, A, is the set of loss rates less than the maximum lossrate; and the distribution, f, follows from the arrival and departures processes of the
traffic sources. In words, this application seeks a consistent estimator of when the
network can and can not meet the packet loss rate guarantee based on observations
of the network losses. Over time, the network can automatically collect a large set
of observations so that consistency guarantees the classifier will be accurate.
Previous authors have approached this problem. In [6, 7], the authors estimate the
property function from $X$ as $\hat{p}(\phi)$ and then classify via
$$C(\phi) = \begin{cases} +1 & \text{if } \hat{p}(\phi) \in A \\ -1 & \text{otherwise.} \end{cases} \qquad (3)$$
The approach suffers two related disadvantages. First, an accurate estimate of
the property function may require many more parameters than the corresponding
classifier in which only the decision boundary is important. Second, the regression
requires many samples over the entire range of ? to be accurate, while the fewer
parameters in the classifier may require fewer samples for the same accuracy.
A second approach, used in [4, 5, 8], makes a single-sample estimate, $\hat{p}(\phi_i)$, from $\mu_i$
and estimates the desired output class as
$$\hat{o}_i = \begin{cases} +1 & \text{if } \hat{p}(\phi_i) \in A \\ -1 & \text{otherwise.} \end{cases} \qquad (4)$$
This forms a training set $Y = \{\phi_i, \hat{o}_i\}$ for standard classification. This was shown
to lead to an inconsistent estimator in the data network application in [1].
This paper builds on earlier results by the author specific to the packet network
problem [1, 2, 3] and defines a general framework for mapping the indirect data
into a standard supervised learning task. It defines conditions on the training set,
classifier, and learning objective to yield consistency. The paper defines specific
methods based on these results and provides examples of their application.
2
Estimator at a Single Feature
In this section, we consider a single feature vector $\phi$ and imagine that we can collect
as much monitoring data as we like at $\phi$. We show that a consistent estimator of the
property function, $p(\phi)$, yields a consistent estimator of the optimal classification,
$C^*(\phi)$, without directly estimating the property function. These results are a basis
for the next section where we develop a consistent classifier over the entire feature
space even if every $\phi_i$ in the data set is distinct.
Given the data set $X = \{\phi_i, \mu_i\}$, we hypothesize that there is a mapping from data
set to training set $Y = \{\phi_i, w_i, o_i\}$ such that $|X| = |Y|$ and
$$C_X(\phi) = \mathrm{sign}\Big( \sum_{i=1}^{|X|} w_i o_i \Big) \qquad (5)$$
is consistent in the sense of (2). The $w_i$ and $o_i$ are both functions of $\mu_i$, but for
simplicity we will not explicitly denote this.
Do any mappings from $X$ to $Y$ yield consistent estimators of the form (5)? We
consider only thresholds on $p(\phi)$. That is, sets $A$ in the form $A = [-\infty, \tau)$ (or
similarly $A = (\tau, \infty]$) for some threshold $\tau$. Since most practical sets can be formed
from finite unions, intersections, and complements of sets in this form, this is sufficient.
Consider an estimator $\hat{p}_X$ that has the form
$$\hat{p}_X = \frac{\sum_i \beta(\mu_i)}{\sum_i \alpha(\mu_i)} \qquad (6)$$
for some functions $\alpha > 0$ and $\beta$. Suppose that $\hat{p}_X$ is a consistent estimator
of $p(\phi)$, i.e. for every $\epsilon > 0$:
$$\lim_{|X| \to \infty} P\{ |\hat{p}_X - p(\phi)| > \epsilon \} = 0. \qquad (7)$$
For threshold sets such as $A = [-\infty, \tau)$, we can use (6) to construct the classifier:
$$C_X(\phi) = \mathrm{sign}(\tau - \hat{p}_X(\phi)) = \mathrm{sign}\Big( \sum_{i=1}^{|X|} (\alpha(\mu_i)\tau - \beta(\mu_i)) \Big) = \mathrm{sign}\Big( \sum_{i=1}^{|X|} w_i o_i \Big) \qquad (8)$$
where
$$w_i = |\alpha(\mu_i)\tau - \beta(\mu_i)| \qquad (9)$$
$$o_i = \mathrm{sign}(\alpha(\mu_i)\tau - \beta(\mu_i)) \qquad (10)$$
If $|\tau - p(\phi)| = \epsilon$ then the above estimator can be incorrect only if $|\hat{p}_X - p(\phi)| > \epsilon$.
The consistency in (7) guarantees that (8)-(10) is consistent if $\epsilon > 0$.
The simplest example of (6) is when $\mu_i$ is a noisy unbiased sample of $p(\phi_i)$. The
natural estimator is just the average of all the $\mu_i$, i.e. $\alpha(\mu_i) = 1$ and $\beta(\mu_i) = \mu_i$. In
this case, $w_i = |\tau - \mu_i|$ and $o_i = \mathrm{sign}(\tau - \mu_i)$. A less trivial example will be given
later in the application section of the paper.
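As a rough illustration of the mapping (9)-(10), the sketch below builds the weighted outputs for the simplest case just described ($\alpha = 1$, $\beta = \mu_i$) and evaluates the sign test of (5) at a single feature. The NumPy-based helper names are hypothetical.

```python
import numpy as np

def weights_outputs(alpha, beta, tau):
    """Map indirect measurements to (w_i, o_i) as in Eqs. (9)-(10)."""
    z = alpha * tau - beta
    return np.abs(z), np.sign(z)

def classify_single_feature(alpha, beta, tau):
    """Consistent classifier (5)/(8) at one feature vector."""
    w, o = weights_outputs(alpha, beta, tau)
    return 1 if np.sum(w * o) > 0 else -1

# Simplest case: mu_i are noisy unbiased samples of p(phi); alpha = 1, beta = mu.
rng = np.random.default_rng(1)
mu = 0.3 + 0.05 * rng.standard_normal(1000)    # p(phi) = 0.3
print(classify_single_feature(np.ones_like(mu), mu, tau=0.4))  # +1 since p < tau
```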
We now describe a range of objective functions for evaluating a classifier $C(\phi; \theta)$
parameterized by $\theta$ and show a correspondence between the objective minimum and
(5). Consider the class of weighted L-norm objective functions ($L > 0$):
$$J(X, \theta) = \Big( \sum_{i=1}^{|X|} w_i \, |C(\phi_i; \theta) - o_i|^L \Big)^{1/L} \qquad (11)$$
Let the $\theta$ that minimizes this be denoted $\theta(X)$. Let
$$C_X(\phi) = C(\phi; \theta(X)) \qquad (12)$$
For a single $\phi$, $C(\phi; \theta)$ is a constant $+1$ or $-1$. We can simply try each value and
see which is the minimum to find $C_X(\phi)$. This is carried out in [3] where we show:
Theorem 1 When $C(\phi; \theta)$ is a constant over $X$ then the $C_X(\phi)$ defined by (11)
and (12) is equal to the $C_X(\phi)$ defined by (5).
The definition in (5) is independent of L. So, we can choose any L-norm as convenient without changing the solution. This follows since (11) is essentially a weighted
count of the errors. The L-norm has no significant effect.
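A quick brute-force check of Theorem 1 and of this L-independence remark: at a single feature, evaluating the objective (11) for the two constant classifiers and taking the smaller one recovers $\mathrm{sign}(\sum_i w_i o_i)$ for any $L > 0$. A minimal sketch:

```python
import numpy as np

def argmin_constant_classifier(w, o, L=1.0):
    """Evaluate (11) for the constant classifiers C = +1 and C = -1 at a
    single feature and return the minimizer; Theorem 1 says this equals
    sign(sum_i w_i o_i), independently of L."""
    J = {c: np.sum(w * np.abs(c - o) ** L) ** (1.0 / L) for c in (+1, -1)}
    return min(J, key=J.get)

rng = np.random.default_rng(2)
o = rng.choice([-1, 1], size=50)
w = rng.random(50)
for L in (0.5, 1.0, 2.0):
    assert argmin_constant_classifier(w, o, L) == (1 if np.sum(w * o) > 0 else -1)
```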
This section has shown how regression estimators such as (6) can be mapped via
(9) and (10) and the objective (11) to a consistent classifier at a single feature. The
next section considers general classifiers.
3
Classification over All Features
This section addresses the question of whether there exists any general approach to
supervised learning that leads to a consistent estimator across the feature space.
Several considerations are important. First, not all feature vectors, $\phi$, are relevant. Some $\phi$ may have zero probability associated with them from the distribution
$f(\phi, \mu)$. Such $\phi$ we denote as unsupported. The optimal and learned classifier can
differ on unsupported feature vectors without affecting consistency. Second, the
classifier function $C(\phi, \theta)$ may not be able to represent the consistent estimator.
For instance, a linear classifier may never yield a consistent estimator if the optimal classifier, $C^*(\phi)$, decision boundary is non-linear. Classifier functions that
can represent the optimal classifier for all supported feature vectors we denote as
representative. Third, the optimal classifier is discontinuous at the decision boundary. A classifier that considers any small region around a feature on the decision
boundary will have both positive and negative samples. In general, the resulting
classifier could be $+1$ or $-1$ without regard to the underlying optimal classifier at
these points and consistency can not be guaranteed. These considerations are made
more precise in Appendix A. Taking these considerations into account and defining
$w_i$ and $o_i$ as in (9) and (10) we get the following theorem:
Theorem 2 If the classifier (5) is a consistent estimator for every supported non-boundary $\phi$, and $C(\phi; \theta)$ is representative, then the $\theta(X)$ that minimizes (11) yields
a consistent classifier over all supported $\phi$ not on the decision boundary.
Theorem 2 tells us that we can get consistency across the feature space. This result
is proved in Appendix A.
4
Application
This section provides an application of the results to better illustrate the methodology. For brevity, we include only a simple stylized example (see [3] for a more
realistic application). We describe first how the data is created, then the form of
the consistent estimator, and then the actual application of the learning method.
The feature space is one dimensional with $\phi$ uniformly distributed in $(3, 9)$. The
underlying property function is $p(\phi) = 10^{-\phi}$. The measurement data is generated
as follows. For a given $\phi_i$, $s_i$ is the number of successes in $T_i = 10^5$ Bernoulli trials
with success probability $p(\phi_i)$. The monitoring data is thus $\mu_i = (s_i, T_i)$. The
positive set is $A = (0, \tau)$ with $\tau = 10^{-6}$, and $|X| = 1000$ samples.
As described in Section 1, this kind of data appears in packet networks where the
underlying packet loss rate is unknown and the only monitoring data is the number
of packets dropped out of Ti trials. The Bernoulli trial successes correspond to
dropped packets. The feature vector represents data collected concurrently that
indicates the network state. Thus the classifier can decide when the network will
and will not meet a packet loss rate guarantee.
Figure 2: Monitoring data, true property function, and learned classifiers in the
loss-rate classification application. The monitoring data is shown as the sample loss
rate as a function of the feature vector. Sample loss rates of zero are arbitrarily set to
$10^{-9}$ for display purposes. The true loss rate is the underlying property function.
The consistent and sample-based classifier results are shown as a range of thresholds on the feature. An x and y error range is plotted as a box. The x error range
is the 10th and 90th percentile of 1000 experiments. This is mapped via the underlying property function to a y error range. The consistent classifier finds thresholds
around the true value. The sample-based is off by a factor of 7.
Figure 2 shows a sample of data. A consistent estimator in the form of (6) is:
$$\hat{p}_X = \frac{\sum_i s_i}{\sum_i T_i}. \qquad (13)$$
Defining $w_i$ and $o_i$ as in (9) and (10), the classifier for our data set is the threshold
on the feature space that minimizes (11). This classifier is representative since $p(\phi)$
is monotonic.
The results are shown in Figure 2 and labeled "consistent". This paper's methods
find a threshold on the feature that closely corresponds to the $\tau = 10^{-6}$ threshold.
As a comparison we also include a classifier that uses $w_i = 1$ for all $i$ and sets
$o_i$ to the single-sample estimate, $\hat{p}(\phi_i) = s_i / T_i$, as in (4). The results are labeled
"sample-based". This method misses the desired threshold by a factor of 7.
This application shows the features of the paper's methods. The classifier is a simple
threshold with one parameter. Estimating $p(\phi)$ to derive a classifier required 10's
of parameters in [6, 7]. The results are consistent, unlike the approaches in [4, 5, 8].
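The stylized experiment above is small enough to reproduce directly. The sketch below generates the data as described, applies the consistent mapping with $\alpha(\mu_i) = T_i$ and $\beta(\mu_i) = s_i$ from (13), and scans thresholds on $\phi$ to minimize (11) with $L = 1$; the grid search is an illustrative shortcut, not the paper's optimizer.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, tau = 1000, 10**5, 1e-6

phi = rng.uniform(3.0, 9.0, size=N)
p = 10.0 ** (-phi)                      # true property function
s = rng.binomial(T, p)                  # observed losses out of T trials

# Consistent mapping (9)-(10) with alpha = T_i, beta = s_i, as in Eq. (13).
z = T * tau - s
w, o = np.abs(z), np.sign(z)

def objective(theta):
    """Weighted L = 1 objective (11) for the threshold classifier.

    C = +1 (p in A, i.e. small loss) for phi above the threshold, since
    p decreases in phi."""
    C = np.where(phi > theta, 1, -1)
    return np.sum(w * np.abs(C - o))

thetas = np.linspace(3.0, 9.0, 601)
best = thetas[np.argmin([objective(t) for t in thetas])]
print("learned threshold:", best, "(true: phi = 6, where p = 1e-6)")
```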
5
Conclusion
This paper has shown that using indirect data we can define a classifier that directly
uses the data without any intermediate estimate of the underlying property function.
The classifier is consistent and yields a simpler learning problem. The approach
was demonstrated on a problem from telecommunications. Practical details such
as choosing the form of the parametric classifier, $C(\phi; \theta)$, or how to find the global
minimum of the objective function (11) are outside the scope of this paper.
Figure 3: A classifier $C(\phi; \theta)$ and the optimal classifier $C^*(\phi)$ create four different
sets in a two-dimensional feature space: where they agree and are positive; where
they agree and are negative; where they disagree and $C^*(\phi) = +1$ (false negatives);
and where they disagree and $C^*(\phi) = -1$ (false positives).

Appendix A: Consistency of Supervised Learning
This appendix proves that certain natural conditions on a supervised learner lead to a
consistent classifier (Theorem 2). First we need to formally define several concepts.
Since the feature space is real, it is a metric space with measure $m$.
A feature vector $\phi$ is supported by the distribution $f$ if every neighborhood around
$\phi$ has positive probability.
A feature vector $\phi$ is on the decision boundary if in every neighborhood around $\phi$
there exist supported $\phi'$, $\phi''$ such that $C^*(\phi') \ne C^*(\phi'')$.
A classifier function, $C(\phi; \theta)$, is representative if there exists a $\theta^*$ such that $C(\phi; \theta^*) =
C^*(\phi)$ for all supported, non-boundary $\phi$.
Parameters $\theta$ and $\theta'$ are equivalent if for all supported, non-boundary $\phi$, $C(\phi; \theta) =
C(\phi; \theta')$.
Given a $\theta$, it is either equivalent to $\theta^*$ or there are supported, non-boundary $\phi$ where
$C(\phi; \theta)$ is not equal to the optimal classifier, as in Figure 3. We will show that for
any $\theta$ not equivalent to $\theta^*$,
$$\lim_{|X| \to \infty} P\{ J(X, \theta) \le J(X, \theta^*) \} = 0 \qquad (14)$$
In other words, such a $\theta$ can not be the minimum of the objective in (11) and so
only a $\theta$ equivalent to $\theta^*$ is a possible minimum.
To prove Theorem 2, we need to introduce a further condition. An estimator of the
form (5) has uniformly bounded variance if $\mathrm{Var}(w_i) < B$ for some fixed $B < \infty$ for
all $\phi$.
Let $E[w(\phi) o(\phi)] = e(\phi)$ be the expected weighted desired output for independent
samples at $\phi$, where the expectation is from $f(\mu | \phi)$. To start, we note that if (5) is
consistent, then:
$$\mathrm{sign}(e(\phi)) = C^*(\phi) \qquad (15)$$
for all non-boundary states. Looking at Figure 3, let us focus on the false negative
set minus the optimal decision boundary; call this $\Phi$. From (15), $e(\phi)$ is positive for
every $\phi \in \Phi$. Let $x$ be the probability measure of $\Phi$. Define the set
$$\Phi_\epsilon = \{\phi \mid \phi \in \Phi \text{ and } e(\phi) \ge \epsilon\}.$$
Let $x_\epsilon$ be the probability measure of $\Phi_\epsilon$. Choose $\epsilon > 0$ so that $x_\epsilon > 0$.
The proof is straightforward from here and we omit some details. With $\theta$, $C(\phi; \theta) =
-1$ for all $\phi \in \Phi$. With $\theta^*$, $C(\phi; \theta^*) = +1$ for all $\phi \in \Phi$. Since the minimum of a
constant objective function satisfies (5), we would incorrectly choose $\theta$ if
$$\lim_{|X| \to \infty} \sum_{i=1}^{|X|} w_i o_i < 0$$
For the false negatives, the expected number of examples in $\Phi$ and $\Phi_\epsilon$ is $x|X|$ and
$x_\epsilon |X|$. By the definition of $\Phi_\epsilon$ and the bounded variance of the weight, we get that
$$E\Big[ \sum_{i=1}^{|X|} w_i o_i \Big] \ge \epsilon x_\epsilon |X| \qquad (16)$$
$$\mathrm{Var}\Big[ \sum_{i=1}^{|X|} w_i o_i \Big] < B x |X| \qquad (17)$$
Since the expected value grows linearly with the sample size and the standard
deviation with the square root of the sample size, as $|X| \to \infty$ the weighted sum will
with probability one be positive. Thus, as the sample size grows, $+1$ will minimize
the objective function for the set of false negative samples and the decision boundary
from $\theta^*$ will minimize the objective.
The same argument applied to the false positives shows that $\theta^*$ will minimize the
false positives with probability one. Thus $\theta^*$ will be chosen with probability one
and the theorem is shown.
Acknowledgments
This work was supported by NSF CAREER Award NCR-9624791.
References
[1] Brown, T.X (1995) Classifying loss rates with small samples, Proc. Inter. Workshop on Appl. of NN to Telecom (pp. 153- 161). Hillsdale, NJ: Erlbaum.
[2] Brown, T.X (1997) Adaptive access control applied to ethernet data, Advances
in Neural Information Processing Systems, 9 (pp. 932- 938). MIT Press.
[3] Brown, T. X (1999) Classifying loss rates in broadband networks, INFOCOMM
'99 (v. 1, pp. 361- 370). Piscataway, NJ: IEEE.
[4] Estrella, A.D., et al. (1994). New training pattern selection method for ATM
call admission neural control, Elec. Let., v. 30, n. 7, pp. 577- 579.
[5] Hiramatsu, A. (1990). ATM communications network control by neural networks, IEEE T. on Neural Networks, v. 1, n. 1, pp. 122- 130.
[6] Hiramatsu, A. (1995). Training techniques for neural network applications in
ATM, IEEE Comm. Mag., October, pp. 58-67.
[7] Tong, H., Brown, T. X (1998). Estimating Loss Rates in an Integrated Services
Network by Neural Networks, Proc. of Global Telecommunications Conference
(GLOBECOM 98) (v. 1, pp. 19- 24) Piscataway, NJ: IEEE.
[8] Tran-Gia, P., Gropp, O. (1992). Performance of a neural net used as admission
controller in ATM systems, Proc. GLOBECOM 92 (pp. 1303- 1309). Piscataway, NJ: IEEE.
| 1919 |@word trial:3 norm:3 seek:1 minus:1 mag:1 od:1 si:1 realistic:1 hypothesize:1 drop:1 congestion:1 fewer:2 provides:2 contribute:1 simpler:1 admission:2 direct:1 incorrect:1 prove:1 combine:1 introduce:1 inter:1 expected:3 automatically:1 actual:1 estimating:5 underlying:7 bounded:2 kind:1 fal:1 minimizes:3 nj:4 guarantee:5 every:6 ti:17 classifier:42 control:3 omit:1 positive:12 service:2 engineering:1 dropped:2 meet:2 collect:3 appl:1 range:6 practical:2 acknowledgment:1 lost:1 union:1 convenient:1 word:2 get:3 selection:1 equivalent:4 map:1 demonstrated:1 simplicity:1 estimator:20 controlling:1 suppose:2 colorado:2 imagine:1 ixi:14 us:2 element:1 labeled:2 observed:2 electrical:1 region:1 comm:1 complexity:1 learner:2 basis:1 stylized:1 indirect:4 elec:1 distinct:1 describe:2 approached:1 tell:1 choosing:1 outside:1 neighborhood:2 valued:2 unsupported:2 otherwise:3 statistic:1 itself:1 noisy:1 net:1 tran:1 relevant:1 oo:4 develop:1 illustrate:1 derive:1 indicate:1 ethernet:1 differ:1 closely:1 discontinuous:1 packet:11 hillsdale:1 require:2 fix:3 around:4 mapping:4 scope:1 gia:1 purpose:1 proc:3 sei:1 create:1 weighted:4 mit:1 concurrently:1 rather:1 ej:1 focus:1 bernoulli:2 indicates:2 sense:1 nn:1 entire:2 integrated:1 classification:7 denoted:1 equal:2 construct:1 never:1 represents:1 interest:1 accurate:3 desired:4 plotted:1 instance:1 classify:2 earlier:1 disadvantage:1 deviation:1 erlbaum:1 interdisciplinary:1 off:1 choose:3 hiramatsu:2 li:3 account:1 explicitly:1 depends:1 later:1 try:1 root:1 traffic:2 start:1 minimize:3 atm:4 formed:1 accuracy:1 square:1 variance:2 yield:6 correspond:1 titi:1 monitoring:5 straight:1 ary:1 suffers:1 definition:2 pp:8 associated:2 proof:1 proved:1 lim:4 cj:6 appears:1 supervised:4 methodology:1 though:1 box:1 just:1 defines:3 quality:1 grows:2 oil:1 effect:1 brown:5 unbiased:1 true:4 concept:1 percentile:1 fj:1 consideration:3 measurement:3 significant:1 consistency:7 similarly:1 access:1 iyi:1 occasionally:1 certain:1 success:3 arbitrarily:1 gropp:1 minimum:6 ii:2 award:1 schematic:1 regression:3 controller:1 essentially:1 metric:1 expectation:1 represent:2 affecting:1 source:1 unlike:1 sent:1 inconsistent:1 call:2 oill:1 intermediate:1 whether:2 ltd:2 simplest:1 exist:1 nsf:1 sign:7 estimated:1 ist:1 four:1 threshold:11 changing:1 sum:1 parameterized:1 telecommunication:3 decide:1 decision:10 appendix:4 guaranteed:1 display:1 correspondence:1 argument:1 px:1 ern:1 according:1 piscataway:3 across:2 wi:7 globecom:2 boulder:1 taken:1 agree:2 count:1 available:1 include:2 build:1 prof:1 objective:10 question:1 parametric:1 mapped:2 considers:2 collected:1 trivial:1 october:1 negative:6 unknown:5 disagree:2 observation:2 finite:1 incorrectly:1 defining:2 looking:1 precise:1 communication:1 rn:1 complement:1 required:1 learned:2 address:1 able:1 pattern:1 departure:1 program:1 ia:1 natural:2 created:1 carried:2 loss:19 var:2 sufficient:2 consistent:25 classifying:2 lo:1 supported:9 taking:1 distributed:1 regard:1 boundary:12 evaluating:1 avoids:1 author:3 made:1 forward:1 adaptive:1 global:2 unnecessary:1 xi:1 career:1 linearly:1 arrival:1 representative:4 telecom:1 en:2 broadband:1 tong:1 rjj:1 third:1 oun:1 theorem:7 specific:2 exists:3 workshop:1 false:7 ci:5 cx:8 intersection:1 timothy:1 simply:1 monotonic:1 corresponds:1 satisfies:1 timxb:1 uniformly:2 infocomm:1 miss:1 la:1 formally:1 ncr:1 arises:1 brevity:1 dept:1 |
1,006 | 192 | HMM Speech Recognition
with Neural Net Discrimination*
William Y. Huang and Richard P. Lippmann
Lincoln Laboratory, MIT
Room B-349
Lexington, MA 02173-9108
ABSTRACT
Two approaches were explored which integrate neural net classifiers
with Hidden Markov Model (HMM) speech recognizers. Both attempt to improve speech pattern discrimination while retaining the
temporal processing advantages of HMMs. One approach used neural nets to provide second-stage discrimination following an HMM
recognizer. On a small vocabulary task, Radial Basis Function
(RBF) and back-propagation neural nets reduced the error rate
substantially (from 7.9% to 4.2% for the RBF classifier). In a larger
vocabulary task, neural net classifiers did not reduce the error rate.
They, however, outperformed Gaussian, Gaussian mixture, and knearest neighbor (KNN) classifiers. In another approach, neural
nets functioned as low-level acoustic-phonetic feature extractors.
When classifying phonemes based on single 10 msec. frames, discriminant RBF neural net classifiers outperformed Gaussian mixture classifiers. Performance, however, differed little when classifying phones by accumulating scores across all frames in phonetic
segments using a single node HMM recognizer.
-This work was sponsored by the Department of the Air Force and the Air Force Office of
Scientific Research.
Figure 1: Second stage discrimination system (cepstral sequence → Viterbi
segmentation against the word HMMs B, D, ... → node averages → second stage
classifier). HMM recognition is based on the accumulated scores from each node.
A second stage classifier can adjust the weights from each node to provide improved
discrimination.
1
Introduction
This paper describes some of our current efforts to integrate discriminant neural net
classifiers into HMM speech recognizers. The goal of this work is to combine the
temporal processing capabilities of the HMM approach with the superior recognition rates provided by discriminant classifiers. Although neural nets are well developed for static pattern classification, neural nets for dynamic pattern recognition
require further research. Current conventional HMM recognizers rely on likelihood
scores provided by non-discriminant classifiers, such as Gaussian mixture [11] and
histogram [5] classifiers. Non-discriminant classifiers are sensitive to assumptions
concerning the shape of the probability density function and the robustness of the
Maximum Likelihood (ML) estimators. Discriminant classifiers have a number of
potential advantages over non-discriminant classifiers on real world problems. They
make fewer assumptions concerning underlying class distributions, can be robust to
outliers, and can lead to efficient parallel analog VLSI implementation [4, 6, 7, 8].
Recent efforts in applying discriminant training to HMM recognizers have led to
promising techniques, including Maximum Mutual Information (MMI) training [2]
and corrective training [5]. These techniques maintain the same structure as in a
conventional HMM recognizer but use a different overall error criteria to estimate
parameters. We believe that a significant improvement in recognition rate will result
if discriminant classifiers are included directly in the HMM structure.
This paper examines two integration strategies: second stage classification and
discriminant pre-processing. In second stage classification, discussed in Sec. 2,
classifiers are used to provide post-processing for an HMM isolated word recognizer.
In discriminant pre-processing, discussed in Sec. 3, discriminant classifiers replace
the maximum likelihood classifiers used in conventional HMM recognizers.
2
Second Stage Classification
HMM isolated-word recognition requires one Markov model per word. Recognition
involves accumulating scores for an unknown input across the nodes in each word
model, and selecting that word model which provides the maximum accumulated
score. In the case of discriminating between minimal pairs, such as those in the
E-set vocabulary (the letters {BCDEGPTVZ}), it is desired that recognition be
focused on the nodes that correspond to the small portion of the utterance that are
different between words. In the second stage classification approach, illustrated in
Fig. 1, the HMMs at the first layer are the components of a fully-trained isolatedword HMM recognizer. The second stage classifier is provided with matching scores
and duration from each HMM node. A simple second stage classifier which sums
the matching scores of the nodes for each word would be equivalent to an HMM
recognizer. It is hoped that discriminant classifiers can utilize the additional information provided by the node dependent scores and duration to deliver improved
recognition rates.
The second stage system of Fig. 1 was evaluated using the 9 letter E-set vocabulary
and the {BDG} vocabulary. Words were taken from the TI-46 Word database,
which contains 10 training and 16 testing tokens per word per talker and 16 talkers.
Evaluation was performed in the speaker dependent mode; thus, there were a total
of 30 training and 48 testing tokens per talker for the {BDG }-set task and 90
training and 144 testing tokens per talker for the E-set task. Spectral pre-processing
consisted of extracting the first 12 mel-scaled cepstral coefficients [10], ignoring the
0th cepstral coefficient (energy), for each 10 ms frame. An HMM isolated word
recognizer was first trained using the forward-backward algorithm. Each word was
modeled using 8 HMM nodes with 2 additional noise nodes at each end. During
classification, each test word was segmented using the Viterbi decoding algorithm
on all word models. The average matching score and duration of all non-noise nodes
were used as a static pattern for the second stage classifier.
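For reference, the Viterbi decoding step used to segment each test word can be sketched as follows; this is the generic log-domain algorithm, not the authors' exact implementation (whose node topology and noise-node handling are described only briefly above).

```python
import numpy as np

def viterbi(log_A, log_B, log_pi):
    """Most likely state path given log transition matrix log_A (S, S),
    per-frame log emission scores log_B (T, S), and log initial
    probabilities log_pi (S,)."""
    T, S = log_B.shape
    delta = log_pi + log_B[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        trans = delta[:, None] + log_A          # (S, S): prev state -> next state
        back[t] = np.argmax(trans, axis=0)      # best predecessor for each state
        delta = trans[back[t], np.arange(S)] + log_B[t]
    path = [int(np.argmax(delta))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Toy 2-state left-to-right example.
log_A = np.log([[0.7, 0.3], [1e-9, 1.0]])
log_B = np.log([[0.9, 0.1], [0.4, 0.6], [0.2, 0.8]])
log_pi = np.log([1.0 - 1e-9, 1e-9])
print(viterbi(log_A, log_B, log_pi))            # [0, 1, 1]
```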
2.1
Classifiers
Four second stage classifiers were used: (1) Multi-layer perceptron (MLP) classifiers
trained with back-propagation, (2) Gaussian mixture classifiers trained with the
Expectation Maximization (EM) algorithm [9], (3) RBF classifiers [8] with weights
trained using the pseudoinverse method computed via Singular Value Decomposition (SVD), and (4) KNN classifiers. Covariance matrices in the Gaussian mixture
classifiers were constrained to be diagonal and tied to be the same between mixture
components in all classes. The RBF classifiers were of the form
$$\text{Decide Class } i = \operatorname*{argmax}_i \sum_{j=1}^{J} w_{ij} \exp\!\left( \frac{-\|\bar{X} - \bar{\mu}_j\|^2}{2 h \sigma_j^2} \right) \qquad (1)$$
where
$\bar{X}$ = acoustic vector input,
$i$ = class label,
$J$ = number of centers,
$w_{ij}$ = weight from $j$th center to $i$th class output,
$\bar{\mu}_j, \sigma_j^2$ = $j$th center and variance, and
$h$ = spread factor.
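The decision rule (1) is straightforward to state in code. The sketch below also fits the output weights by linear least squares (NumPy's `lstsq`, which is SVD-based, standing in for the pseudoinverse method mentioned above); the array shapes and names are illustrative assumptions.

```python
import numpy as np

def rbf_features(X, centers, variances, h):
    """Basis activations exp(-||x - mu_j||^2 / (2 h sigma_j^2)) for each row of X."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)  # (N, J)
    return np.exp(-d2 / (2.0 * h * variances))

def train_output_weights(X, labels, centers, variances, h, n_classes):
    """Least-squares (SVD-based) fit of w_ij to one-hot class targets."""
    Phi = rbf_features(X, centers, variances, h)                   # (N, J)
    T = np.eye(n_classes)[labels]                                  # (N, C)
    W, *_ = np.linalg.lstsq(Phi, T, rcond=None)                    # (J, C)
    return W.T                                                     # (C, J)

def rbf_classify(x, centers, variances, W, h):
    """Decision rule of Eq. (1) for a single acoustic vector x."""
    phi = rbf_features(x[None, :], centers, variances, h)[0]
    return int(np.argmax(W @ phi))

# Toy usage: 3 classes, 8 centers in a 12-dimensional cepstral space.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 12))
labels = rng.integers(0, 3, size=40)
centers, variances = X[:8], np.ones(8)
W = train_output_weights(X, labels, centers, variances, h=50.0, n_classes=3)
print(rbf_classify(X[0], centers, variances, W, h=50.0))
```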
The center locations ($\bar{\mu}_j$'s) were obtained from either k-means or Gaussian mixture
clustering. The variances ($\sigma_j$'s) were either the variances of the individual k-means
clusters or those of the individual Gaussian mixture components, depending on
which clustering algorithm was used. Results for $k = 1$ are reported for the KNN
classifier because this provided best performance.
The Gaussian mixture classifier was selected as a reference conventional non-discriminant classifier. A Gaussian mixture classifier can provide good models for multimodal and non-Gaussian distributions by using many mixture components. It can
also generalize to the more common, well-known unimodal Gaussian classifier which
provides poor performance when the input distribution is not Gaussian. Very few
benchmarking studies have been performed to evaluate the relative performance of
Gaussian mixture and neural net classifiers, although mixture models have been
used successfully in HMM recognizers [11]. RBF classifiers were used because they
train rapidly, and recent benchmarking studies show that they perform as well as
MLP classifiers on speech problems [8].
Table 1: Percentage errors from the second stage classifier, averaged over all 16
talkers. (Classifiers compared include Gaussian mixtures with varying mixtures per
class and RBF classifiers with centers from Gaussian mixture clustering, h = 150,
or from k-means clustering, h = 150.)
2.2
Results of Second Stage Classification
Table 1 shows the error rates for the second stage system of Fig. 1, averaged
over all talkers. The second stage system improved performance over the baseline
HMM system when the vocabulary was small (B, D and G). Error rates decreased
from 7.9% for the baseline HMM recognizer to 4.2% for the RBF second stage
classifier. There was no improvement for the E-set vocabulary task. The best RBF
second stage classifier degraded the error rate from 11.3% with the baseline HMM
to 12.8%. In the E-set results, MLP and RBF classifiers, with error rates of 13.4%
and 12.8%, performed considerably better than the Gaussian (21.2%), Gaussian
mixture (20.6%) and KNN classifiers (36.0%).
The second stage approach is effective for a very small vocabulary but not for a larger
vocabulary task. This may be due to a combination of limited training data and the
increased complexity of decision regions as vocabulary size and dimensionality gets
large. When the vocabulary size increased from 3 to 9, the input dimensionality
of the classifiers scaled up by a factor of 3 (from 48 to 144) but the number of
training tokens increased only by the same factor (from 30 to 90). It is, in general,
possible for the amount of training tokens required for good performance to scale
up exponentially with the input dimensionality. MLP and RBF classifiers appear
to be affected by this problem but not as strongly as Gaussian, Gaussian mixture,
and KNN classifiers.
3
Discriminant Pre-Processing
Second stage classifiers will not work well if the nodal matching scores do not lead to
good discrimination. Current conventional HMM recognizers use non-discriminant
classifiers based on ML estimators to generate these scores. In the discriminant
pre-processing approach, the ML classifiers in an HMM recognizer are replaced by
discriminant classifiers.
All the experiments in this section are based on the phonemes /b/, /d/ and /dʒ/ from the
speaker dependent TI-46 Word database. Spectral pre-processing consisted of extracting the first 12 mel-scaled cepstral coefficients and ignoring the 0th cepstral
coefficient (energy), for each 10 ms frame. For multi-frame inputs, adjacent frames
were 20 msec apart (skipping every other frame). The database was segmented
with a conventional high-performance continuous-observation HMM recognizer using forced Viterbi decoding on the correct word. The phonemes /b/, /d/ and /dʒ/
from the letters "B", "D" and "G" (/#_i/ context) were then extracted. This
resulted in an average of 95 training and 158 testing frames per talker per word
using the 10 training and 16 testing words per talker in the 16 talker database.
Talker dependent results, averaged over all 16 talkers, are reported here.
Preliminary experiments using MLP, RBF, KNN, Gaussian, and Gaussian mixture
classifiers indicated that RBF classifiers with Gaussian basis functions and a spread
factor of 50 consistently yielded close to best performance. RBF classifiers also
provided much shorter training times than MLP classifiers. RBF classifiers (as in
Eq. 1) with h = 50 were thus used in all experiments presented in this section. The
parameters of the RBF classifiers were determined as described in Sec. 2.1 above.
Gaussian mixture classifiers were used as reference conventional non-discriminant
classifiers. In the preliminary experiments, they also provided close to best performance, and outperformed KNN and unimodal Gaussian classifiers. Covariance
matrices were constrained, as described in Sec. 2.1. Although full and independent covariance matrices were advantageous for the unimodal Gaussian classifier
and Gaussian mixture classifiers with few mixture components, best performance
was provided using many mixture components and constrained covariance matrices.

Figure 2: Frame-level error rates for Gaussian tied-mixture and RBF classifiers as
a function of the total number of unique centers. Multi-frame results had context
frames adjoined together at the input. Centers for both classifiers were determined
using k-means clustering.

A Gaussian "tied-mixture" classifier was also used. This is a Gaussian mixture
classifier where all classes share the same mixture components but have different
mixture weights. It is trained in two stages. In the first stage, class independent
mixture centers are computed by k-means clustering, and mixture variances are
the variances of the individual k-means clusters. In the second stage, the ML estimates of the class dependent mixture weights are computed while holding mixture
components fixed.
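A compact sketch of this two-stage procedure follows, assuming SciPy's k-means and diagonal covariances; the weight update shown is a single responsibility-based re-estimation, which approximates the ML fit the text describes rather than reproducing the authors' exact update.

```python
import numpy as np
from scipy.cluster.vq import kmeans2
from scipy.stats import multivariate_normal

def train_tied_mixture(X, y, n_classes, n_centers, seed=0):
    # Stage 1: class-independent components from k-means.
    centers, labels = kmeans2(X, n_centers, minit="++", seed=seed)
    variances = np.array([
        X[labels == j].var(axis=0) + 1e-6 if np.any(labels == j) else X.var(axis=0)
        for j in range(n_centers)
    ])
    # Component likelihoods for every frame (diagonal covariances).
    L = np.stack([multivariate_normal.pdf(X, mean=centers[j], cov=np.diag(variances[j]))
                  for j in range(n_centers)], axis=1)              # (N, J)
    # Stage 2: class-dependent mixture weights, components held fixed.
    weights = np.zeros((n_classes, n_centers))
    for c in range(n_classes):
        post = L[y == c] / L[y == c].sum(axis=1, keepdims=True)    # responsibilities
        weights[c] = post.mean(axis=0)
    return centers, variances, weights
```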
3.1
Frame Level Results
Error rates for classifying phonemes based on single frames are shown in Fig. 2 for
the Gaussian tied-mixture classifier (left) and RBF classifier (right). These results
were obtained using k-means centers. Superior frame-level error rates were consistently provided by the RBF classifier in all experimental variations of this study.
This is expected since RBF classifiers use an objective function which is directly
related to classification error, whereas the objective of non-discriminant classifiers,
modeling the class dependent probability density functions, is only indirectly related
to classification error.
3.2
Phone Level Results
In a single node HMM, classifier scores for the frames in a phone segment are accumulated to obtain phone-level results. For conventional HMM recognizers that use
non-discriminant classifiers, this score accumulation is done by assuming independent frames, which allows the frame-level scores to be multiplied together:
$$\mathrm{Prob(phone)} = \mathrm{Prob}(\bar{x}_1, \bar{x}_2, \ldots, \bar{x}_N) = \mathrm{Prob}(\bar{x}_1)\,\mathrm{Prob}(\bar{x}_2) \cdots \mathrm{Prob}(\bar{x}_N) \qquad (2)$$
where $\bar{x}_1 \ldots \bar{x}_N$ are the input frames in an N-frame phone. Eq. 2 does not apply to
discriminant classifiers. RBF classifier outputs are not constrained to lie between
0 and 1. They do not necessarily behave like probabilities and do not perform
well when their frame scores are multiplied together. The RBF classifier's frame-level scores were thus accumulated, instead, by addition.

Figure 3: Phone-level error rates using (a) Gaussian tied-mixture, (b) RBF and
(c) 5% widened RBF classifiers, as a function of the total number of unique centers.
Gaussian classifier phone-level results were obtained by accumulating frame-level
scores via multiplication. RBF classifier frame-level scores were accumulated via
addition. Symbols are as in Fig. 2.

Phone-level error rates
obtained by accumulating frame-level scores from the Gaussian tied-mixture and
RBF classifiers are shown in Fig. 's 3(a) and (b). Best performance was provided by
the Gaussian tied-mixture classifier with 50 k-means centers and no context frames
(2.6% error rate, versus 3.9% for the RBF classifier with 75 centers and 1 context
frame).
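In code, the two accumulation rules differ only in the domain of the scores: log-probabilities are added for the ML classifiers (the log form of Eq. 2), while raw RBF outputs are added directly. A small sketch, with made-up frame scores:

```python
import numpy as np

def phone_score_ml(frame_log_probs):
    """Eq. (2): independent frames multiply, so log-probabilities add."""
    return frame_log_probs.sum(axis=0)      # (T, C) -> (C,)

def phone_score_rbf(frame_outputs):
    """RBF outputs are not probabilities; accumulate frame scores by addition."""
    return frame_outputs.sum(axis=0)

frame_log_probs = np.log([[0.7, 0.2, 0.1],
                          [0.5, 0.3, 0.2],
                          [0.6, 0.3, 0.1]])
print("phone decision:", int(np.argmax(phone_score_ml(frame_log_probs))))
```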
The good phone-level performance provided by the Gaussian tied-mixture classifier
in Fig. 3(a) is partly due to the near correctness of the Gaussian mixture distribution assumption and the independent frames assumption (Eq. 2). To address
the poor phone-level performance of the RBF classifier, we examine solutions that
use smoothing to directly extend good frame-level results to acceptable phone-level performance. Smoothing was performed both by passing the classifier outputs
through a sigmoid function¹ and by increasing the spread ($h$ in Eq. 1) after RBF
weights were trained. Increasing $h$ was more effective.
Increasing h has the effect of "widening" the basis functions. This smoothes the
discriminant functions produced by the RBF classifier to compensate for limited
training data. If basis function widening occurs before weights are trained, then
weights training will effectively compensate for the increase. This was verified in
preliminary experiments, which showed that if h was increased before weights were
trained, little difference in performance was observed as h varies from 50 to 200.
Increasing h by 5% after weights were trained resulted in a slightly different framelevel performance (sometimes better, sometimes worse), but a significant improvement in phone-level results for all experimental variations of this study. In Fig.
3(c), a 5% widening of the basis function improved the performance of the baseline
RBF classifier. It did not, however, improve performance over that provided by the
Gaussian tied-mixture classifier without context frames at the input. The lowest
error rate provided by the smoothed RBF is now 3.4% using 75 k-means centers
and 2 context frames (compared with 2.6% for the Gaussian tied-mixture classifier
with 50 centers and no context).

¹The sigmoid function is of the form $y = 1/(1 + e^{-2(x - .5)})$ where $x$ is the input (an output
from the RBF classifier) and $y$ is the output used for classification.

Figure 4: Phone-level error rates, as a function of the number of frames, for
Gaussian mixture classifiers with 9 mixtures per class, and RBF classifiers with
centers from the Gaussian mixture classifier (27 total centers for this 3 class task).
Plotted schemes: Gauss, RBF, and smoothed RBF.
Error rates for the Gaussian mixture classifier with 9 mixtures per class are plotted
versus the number of frames in Fig. 4, along with the results for RBF classifiers with
centers taken from the Gaussian mixture classifier. Similar behavior was observed
in all experimental variations of this study. There are three main observations: (1)
The Gaussian mixture classifier without context frames provided best performance
but degraded as the number of input frames increased, (2) RBF classifiers can outperform Gaussian mixture classifiers with many input frames, and (3) widening
the basis functions after weights were trained improved the RBF classifier's performance.
4
Summary
Two techniques were explored that integrated discriminant classifiers into HMM
speech recognizers. In second-stage discrimination, an RBF second-stage classifier
halved the error rates in a {BDG} vocabulary task but provided no performance
improvement in an E-set vocabulary task. For integrating at the pre-processing
level, RBF classifiers provided superior frame-level performance over conventional
Gaussian mixture classifiers. At the phone-level, best performance was provided by
a Gaussian mixture classifier with a single frame input; however, the RBF classifier
outperformed the Gaussian mixture classifier when the input contained multiple
context frames. Both sets of experiments indicated an ability for the RBF classifier to integrate the large amount of information provided by inputs with high
dimensionality. They suggest that an HMM recognizer integrated with RBF and
other discriminant classifiers may provide improved recognition by providing better frame-level discrimination and by utilizing features that are ignored by current
"state-of-the-art" HMM speech recognizers. This is consistent with the results of
Franzini [3] and Bourlard [1], who used many context frames in their implementation of discriminant pre-processing which embedded MLPs into HMM recognizers.
Current efforts focus on studying techniques to improve the performance of discriminant classifier for phones, words, and continuous speech. Approaches include
accumulating scores from lower level speech units and using objective functions that
depend on higher level speech units, such as phones and words. Work is also being
performed to integrate discriminant classification algorithms into HMM recognizers
using Viterbi training.
References
[1] H. Bourlard and N. Morgan. Merging multilayer perceptrons in hidden Markov models: Some experiments in continuous speech recognition. Technical Report TR-89033, International Computer Science Institute, Berkeley, CA., July 1989.
[2] Peter F. Brown. The Acoustic-Modeling Problem in Automatic Speech Recognition
PhD thesis, Carnegie Mellon University, May 1987.
[3] Michael A. Franzini, Michael J. Witbrock, and Kai-Fu Lee. A connectionist approach
to continuous speech recognition. In Proceedings of the IEEE ICASSP, May 1989.
[4] William Y. Huang and Richard P. Lippmann. Comparisons between conventional
and neural net classifiers. In 1st International Conference on Neural Network, pages
IV-485. IEEE, June 1987.
[5] Kai-Fu Lee and Sanjoy Mahajan. Corrective and reinforcement learning for speaker-independent continuous speech recognition. Technical Report CMU-CS-89-100, Computer Science Department, Carnegie-Mellon University, January 1989.
[6] Yuchun Lee and Richard Lippmann. Practical characteristics of neural network and
conventional pattern classifiers on artificial and speech problems. In Advances in Neural Information Processing Systems 2, Denver, CO., 1989. IEEE, Morgan Kaufmann.
In Press.
[7] R. P. Lippmann. Review of neural networks for speech recognition. Neural Computation, 1(1):1-38, 1989.
[8] Richard P. Lippmann. Pattern classification using neural networks. IEEE Communications Magazine, 27(11):47-63, Nov. 1989.
[9] G. J. McLachlan. Mixture Models. Marcel Dekker, New York, N. Y., 1988.
[10] D. B. Paul. A speaker-stress resistant HMM isolated word recognizer. In Proceedings
of the IEEE ICASSP, pages 713-716, April 1987.
[11] L. R. Rabiner, B.-H. Juang, S. E. Levinson, and M. M. Sondhi. Recognition of
isolated digits using hidden Markov models with continuous mixture densities. AT&T
Technical Journal, 64(6):1211-1233, 1985.
| 192 |@word advantageous:1 dekker:1 hu:1 covariance:4 decomposition:1 fonn:1 tr:1 contains:1 score:20 selecting:1 current:5 z2:2 skipping:1 speakerindependent:1 shape:1 sponsored:1 discrimination:12 fewer:1 selected:1 ith:1 provides:2 node:13 location:1 nodal:1 along:1 combine:1 expected:1 behavior:1 examine:1 multi:3 little:2 increasing:4 provided:18 underlying:1 lowest:1 substantially:1 developed:1 lexington:1 temporal:2 berkeley:1 every:1 ti:2 classifier:111 scaled:3 zl:2 unit:2 appear:1 before:2 co:1 hmms:2 limited:2 averaged:3 unique:2 practical:1 testing:5 digit:1 matching:4 pre:8 radial:1 word:21 integrating:1 suggest:1 get:1 close:2 context:10 applying:1 accumulating:5 accumulation:1 conventional:11 equivalent:1 center:20 duration:3 focused:1 estimator:2 examines:1 utilizing:1 variation:3 magazine:1 recognition:20 database:4 observed:2 region:1 complexity:1 dynamic:1 trained:11 depend:1 segment:2 deliver:1 basis:6 multimodal:1 icassp:2 sondhi:1 corrective:2 train:1 forced:1 effective:2 artificial:1 larger:2 kai:2 ability:1 knn:7 knearest:1 advantage:2 sequence:1 net:16 rapidly:1 lincoln:1 juang:1 cluster:2 depending:1 eq:4 c:1 involves:1 marcel:1 correct:1 require:1 preliminary:3 exp:1 viterbi:4 talker:11 recognizer:12 outperformed:4 label:1 sensitive:1 correctness:1 successfully:1 adjoined:1 mclachlan:1 mit:1 gaussian:46 office:1 focus:1 june:1 improvement:4 consistently:2 likelihood:3 baseline:4 dependent:6 accumulated:5 integrated:2 hidden:3 vlsi:1 overall:1 classification:12 retaining:1 constrained:4 integration:1 smoothing:2 mutual:1 art:1 x4:1 report:2 connectionist:1 richard:4 few:2 resulted:2 individual:3 replaced:1 argmax:1 bdg:3 william:2 maintain:1 attempt:1 mlp:6 evaluation:1 adjust:1 mixture:51 oth:2 clii:2 fu:2 shorter:1 iv:1 desired:1 plotted:1 isolated:5 minimal:1 increased:5 modeling:2 zn:3 maximization:1 witbrock:1 reported:2 varies:1 considerably:1 st:1 density:3 international:2 discriminating:1 lee:3 decoding:2 michael:2 together:3 thesis:1 huang:7 worse:1 potential:1 sec:4 coefficient:4 performed:5 portion:1 capability:1 parallel:1 mlps:1 air:2 degraded:2 phoneme:4 variance:5 who:1 characteristic:1 correspond:1 kaufmann:1 rabiner:1 generalize:1 mmi:1 produced:1 energy:2 static:2 dimensionality:4 segmentation:1 yuchun:1 back:2 higher:1 improved:6 april:1 evaluated:1 done:1 strongly:1 stage:26 o:1 propagation:2 mode:1 indicated:2 scientific:1 believe:1 effect:1 consisted:2 brown:1 laboratory:1 mahajan:1 illustrated:1 adjacent:1 ll:2 during:1 speaker:3 mel:2 criterion:1 m:2 stress:1 fi:1 superior:3 common:1 sigmoid:2 ji:1 denver:1 exponentially:1 analog:1 discussed:2 extend:1 significant:2 mellon:2 automatic:1 dj:1 had:1 resistant:1 recognizers:12 halved:1 recent:2 showed:1 apart:1 phone:16 phonetic:2 morgan:2 additional:2 july:1 levinson:1 full:1 unimodal:3 mix:1 multiple:1 segmented:2 technical:3 compensate:2 concerning:2 post:1 multilayer:1 expectation:1 cmu:1 histogram:1 sometimes:2 whereas:1 addition:2 decreased:1 singular:1 extracting:2 near:1 reduce:1 effort:3 peter:1 speech:20 york:1 passing:1 ignored:1 amount:2 reduced:1 generate:1 outperform:1 percentage:1 per:11 carnegie:2 affected:1 four:1 ce:1 verified:1 utilize:1 backward:1 sum:1 prob:5 letter:3 decide:1 smoothes:1 decision:1 acceptable:1 layer:2 matri:1 yielded:1 department:2 combination:1 poor:2 across:2 describes:1 em:1 ls0:1 slightly:1 outlier:1 taken:2 end:1 studying:1 multiplied:2 apply:1 spectral:2 fbi:1 indirectly:1 robustness:1 clustering:6 include:1 iix:1 uj:1 franzini:2 objective:3 
occurs:1 strategy:1 diagonal:1 hmm:38 discriminant:28 rom:2 assuming:1 modeled:1 providing:1 holding:1 implementation:2 unknown:1 perform:2 observation:2 markov:4 behave:1 january:1 communication:1 frame:45 smoothed:2 pair:1 required:1 widened:2 functioned:1 acoustic:3 address:1 pattern:6 including:1 widening:4 force:2 rely:1 bourlard:2 improve:3 utterance:1 review:1 multiplication:1 relative:1 embedded:1 fully:1 versus:2 integrate:4 consistent:1 leaning:1 classifying:3 pi:1 share:1 summary:1 token:5 jth:2 perceptron:1 institute:1 neighbor:1 cepstral:5 vocabulary:13 world:1 forward:1 reinforcement:1 nov:1 lippmann:10 ml:4 pseudoinverse:1 continuous:6 table:2 promising:1 robust:1 ca:1 ignoring:2 necessarily:1 did:2 spread:3 main:1 noise:2 paul:1 fig:9 benchmarking:2 differed:1 msec:2 lie:1 tied:11 extractor:1 symbol:1 explored:2 merging:1 effectively:1 phd:1 hoped:1 led:1 contained:1 extracted:1 ma:1 goal:1 kmeans:2 rbf:45 room:1 replace:1 included:1 determined:2 total:6 sanjoy:1 partly:1 svd:1 experimental:3 gauss:2 perceptrons:1 evaluate:1 |
Incorporating Second-Order Functional
Knowledge for Better Option Pricing
Charles Dugas, Yoshua Bengio, François Bélisle, Claude Nadeau,* René Garcia
CIRANO, Montreal, Qc, Canada H3A 2A5
{dugas,bengioy,belislfr,nadeauc}@iro.umontreal.ca
garciar@cirano.qc.ca
Abstract
Incorporating prior knowledge of a particular task into the architecture
of a learning algorithm can greatly improve generalization performance.
We study here a case where we know that the function to be learned is
non-decreasing in two of its arguments and convex in one of them. For
this purpose we propose a class of functions similar to multi-layer neural
networks but (1) that has those properties, (2) is a universal approximator
of continuous functions with these and other properties. We apply this
new class of functions to the task of modeling the price of call options.
Experiments show improvements on regressing the price of call options
using the new types of function classes that incorporate the a priori constraints.
1 Introduction
Incorporating a priori knowledge of a particular task into a learning algorithm helps reducing the necessary complexity of the learner and generally improves performance, if the
incorporated knowledge is relevant to the task and really corresponds to the generating process of the data. In this paper we consider prior knowledge on the positivity of some first
and second derivatives of the function to be learned. In particular such constraints have
applications to modeling the price of European stock options. Based on the Black-Scholes
formula, the price of a call stock option is monotonically increasing in both the "moneyness" and time to maturity of the option, and it is convex in the "moneyness". Section 3
better explains these terms and stock options. For a function f(Xl, X2) of two real-valued
arguments, this corresponds to the following properties:
$$\frac{\partial f}{\partial x_1} \ge 0, \qquad \frac{\partial f}{\partial x_2} \ge 0, \qquad \frac{\partial^2 f}{\partial x_1^2} \ge 0. \qquad (1)$$
The mathematical results of this paper (section 2) are the following: first we introduce a class of one-argument functions (similar to neural networks) that is positive, nondecreasing and convex in its argument, and we show that this class of functions is a universal approximator for positive functions with positive first and second derivatives. Second,
in the main theorem, we extend this result to functions of two or more arguments, with
some having the convexity property and all having positive first derivative. This result rests
on additional properties on cross-derivatives, which we illustrate below for the case of two
?C.N. is now with Health Canada at Cl a ude-.Na d eau@hc-sc . gc . c a
arguments:
$$\frac{\partial^2 f}{\partial x_1 \partial x_2} \ge 0, \qquad \frac{\partial^3 f}{\partial x_1^2\, \partial x_2} \ge 0. \qquad (2)$$
Comparative experiments on these new classes of functions were performed on stock option
prices, showing some improvements when using these new classes rather than ordinary
feedforward neural networks. The improvements appear to be non-stationary but the new
class of functions shows the most stable behavior in predicting future prices. The detailed
results are presented in section 5.
2 Theory
Definition
A class of functions $\hat{\mathcal{F}}$ from $\mathbb{R}^n$ to $\mathbb{R}$ is a universal approximator for a class of functions $\mathcal{F}$ from $\mathbb{R}^n$ to $\mathbb{R}$ if for any $f \in \mathcal{F}$, any compact domain $D \subset \mathbb{R}^n$, and any positive $\epsilon$, one can find an $\hat{f} \in \hat{\mathcal{F}}$ with $\sup_{x \in D} |f(x) - \hat{f}(x)| \le \epsilon$.
It has already been shown that the class of artificial neural networks with one hidden layer
$$\mathcal{N} = \left\{ f(x) = b_0 + \sum_{i=1}^{H} w_i\, h\Big(b_i + \sum_j v_{ij} x_j\Big) \right\} \qquad (3)$$
e.g. with a sigmoid activation function $h(s) = \frac{1}{1+e^{-s}}$, are universal approximators of
continuous functions [1, 2, 5]. The number of hidden units H of the neural network is a
hyper-parameter that controls the accuracy of the approximation and it should be chosen to
balance the trade-off between accuracy (bias of the class of functions) and variance (due to
the finite sample used to estimate the parameters of the model), see also [6].
Since h is monotonically increasing, it is easy to force the first derivatives with respect to
x to be positive by forcing the weights to be positive, for example with the exponential
function:
$$\mathcal{N}_+ = \left\{ f(x) = b_0 + \sum_{i=1}^{H} e^{w_i}\, h\Big(b_i + \sum_j e^{v_{ij}} x_j\Big) \right\} \qquad (4)$$
because $h'(s) = h(s)(1 - h(s)) > 0$.
Since the sigmoid h has a positive first derivative, its primitive, which we call softplus, is
convex:
$$\zeta(s) = \log(1 + e^s) \qquad (5)$$
i.e., $d\zeta(s)/ds = h(s) = 1/(1 + e^{-s})$. The basic idea of the proposed class of functions $\hat{\mathcal{N}}_c^{++}$ is to replace the sigmoid of a sum by a product of softplus or sigmoid functions over
each of the dimensions (using the softplus over the convex dimensions and the sigmoid
over the others):
$$\hat{\mathcal{N}}_c^{++} = \left\{ f(x) = e^{b_0} + \sum_{i=1}^{H} e^{w_i} \left( \prod_{j=1}^{c} \zeta\big(b_{ij} + e^{v_{ij}} x_j\big) \right) \left( \prod_{j=c+1}^{n} h\big(b_{ij} + e^{v_{ij}} x_j\big) \right) \right\} \qquad (6)$$
One can readily check that the first derivatives w.r.t. $x_j$ are positive, and that the second derivatives w.r.t. $x_j$ for $j \le c$ are positive. However, this class of functions has other properties. Let $(j_1, \ldots, j_m)$ be a set of indices with $1 \le j_i \le c$ (convex dimensions), and let $(j'_1, \ldots, j'_p)$ be a set of indices with $c + 1 \le j'_i \le n$ (the other dimensions); then
$$\frac{\partial^{m+p} f}{\partial x_{j_1} \cdots \partial x_{j_m}\, \partial x_{j'_1} \cdots \partial x_{j'_p}} \ge 0, \qquad \frac{\partial^{2m+p} f}{\partial x_{j_1}^2 \cdots \partial x_{j_m}^2\, \partial x_{j'_1} \cdots \partial x_{j'_p}} \ge 0. \qquad (7)$$
Note that m or p can be 0, so as special cases we find that f is positive, and that it is
monotonically increasing w.r.t. all its inputs, and convex w.r.t. the first c inputs.
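To make the definition concrete, here is a minimal NumPy sketch of a forward pass through this class of functions; the parameter shapes and names are our own choices for illustration, not taken from the authors' implementation.

```python
import numpy as np

def softplus(s):
    # zeta(s) = log(1 + e^s); convex, since its derivative is the sigmoid
    return np.log1p(np.exp(s))

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def f_constrained(x, b0, w, b, v, c):
    """Evaluate f(x) from equation 6 for one input vector x of length n.

    b0 is a scalar; w has shape (H,); b and v have shape (H, n).
    The first c dimensions of x are the convex ones. Exponentiating
    b0, w and v enforces positivity of the output and input weights.
    """
    z = b + np.exp(v) * x                      # (H, n) pre-activations
    convex = softplus(z[:, :c]).prod(axis=1)   # product over j = 1..c
    monotone = sigmoid(z[:, c:]).prod(axis=1)  # product over j = c+1..n
    return np.exp(b0) + np.sum(np.exp(w) * convex * monotone)

# Example: two inputs (moneyness convex, maturity monotone only), H = 3
rng = np.random.default_rng(0)
H, n, c = 3, 2, 1
print(f_constrained(np.array([1.05, 0.25]), 0.1,
                    rng.normal(size=H), rng.normal(size=(H, n)),
                    rng.normal(size=(H, n)), c))
```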
2.1 Universality of $\hat{\mathcal{N}}_c^{++}$ over $\mathbb{R}^n$
Theorem Within the set $\mathcal{F}_{++}$ of continuous functions from $\mathbb{R}^n$ to $\mathbb{R}$ whose first and second derivatives are non-negative (as specified by equation 7), the class $\hat{\mathcal{N}}_c^{++}$ is a universal approximator.
Proof
For lack of space we only show here a sketch of the proof, and only for the case n = 2
and c = 1 (one convex dimension and one other dimension), but the same principle allows
to prove the more general case. Let $f(x) \in \mathcal{F}_{++}$ be the function to approximate with a function $g \in \hat{\mathcal{N}}_c^{++}$. To perform our approximation we restrict $g$ to the subset of $\hat{\mathcal{N}}_c^{++}$ where the sigmoid becomes a step function $\theta(x) = \mathbb{1}_{x > 0}$ and where the softplus becomes the positive part function $x^+ = \max(0, x)$. Let $D$ be the compact domain of interest and $\epsilon$ the desired approximation precision. We focus our attention on an axis-aligned rectangle $T$ with lower-left corner $(a_1, b_1)$ and upper-right corner $(a_2, b_2)$ such that it is the smallest such rectangle enclosing $D$ and it can be partitioned into squares of length $L$ forming a grid such that the value of $f$ at neighboring grid points does not differ by more than $\epsilon$. The number of square grids on the $x_1$ axis is $N_1$ and the number on the $x_2$ axis is $N_2$. The number of hidden units is $H = (N_1 + 1)(N_2 + 1)$. Let $x_{ij} = (x_i, x_j) = (a_1 + iL,\; b_1 + jL)$ be the grid points, with $i = 0, 1, \ldots, N_1$, $j = 0, 1, \ldots, N_2$. Also,
$x = (x_1, x_2)$. With $k = i(N_2 + 1) + j$, we recursively build a series of functions $g_k(x)$, for $k = 1$ to $H$, each adding a nonnegative increment to its predecessor, with initial approximation $g_0 = f(a_1, b_1)$. The final approximation is $g(x) = g_H(x)$. It is exact at every single point on the grid and within $\epsilon$ of the true function value anywhere within $D$. To prove this, we need to show that at every step of the recursive procedure, the necessary increment is nonnegative (since it must be equated with $e^{w_k}$). First note that the value of $g_H(x_{ij})$ is strictly affected by the set of increments $\Delta_{st}$ for which $s \le i$ and $t \le j$, so that
$$f(x_{ij}) = g_H(x_{ij}) = \sum_{s=0}^{i} \sum_{t=0}^{j} \Delta_{st}\, (i - s + 1) L.$$
Isolating $\Delta_{ij}$ and doing some algebra, we get
$$\Delta_{ij} = \Delta^3_{x_1, x_1, x_2}\, g_H(x_{ij})\, L^2,$$
where $\Delta^3_{x_i, x_j, x_k}$ is the third-degree finite difference with respect to arguments $x_i, x_j, x_k$, i.e. $\Delta^3_{x_1, x_1, x_2} f(x_1, x_2) = \big(\Delta^2_{x_1, x_2} f(x_1, x_2) - \Delta^2_{x_1, x_2} f(x_1 - L, x_2)\big)/L$, where similarly $\Delta^2_{x_1, x_2} f(x_1, x_2) = \big(\Delta_{x_1} f(x_1, x_2) - \Delta_{x_1} f(x_1, x_2 - L)\big)/L$, and $\Delta_{x_1} f(x_1, x_2) = \big(f(x_1, x_2) - f(x_1 - L, x_2)\big)/L$. By the mean value theorem, the third-degree finite difference is nonnegative if the corresponding third derivative is nonnegative everywhere over the finite interval, which is guaranteed by constraint 7. Finally, the third-degree finite difference being nonnegative, the corresponding increment is also nonnegative and this completes the proof.
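As a quick numerical illustration of this argument, the sketch below (our own, with a test function we chose ourselves) evaluates the third-degree finite difference of a function satisfying constraint 7 and checks that it is nonnegative:

```python
import numpy as np

def d1(f, x1, x2, L):
    # first-degree finite difference in x1
    return (f(x1, x2) - f(x1 - L, x2)) / L

def d2(f, x1, x2, L):
    # second degree: once in x1, once in x2
    return (d1(f, x1, x2, L) - d1(f, x1, x2 - L, L)) / L

def d3(f, x1, x2, L):
    # third degree: twice in x1, once in x2
    return (d2(f, x1, x2, L) - d2(f, x1 - L, x2, L)) / L

# f(x1, x2) = softplus(x1) * sigmoid(x2) lies in F++ for n = 2, c = 1
f = lambda x1, x2: np.log1p(np.exp(x1)) / (1.0 + np.exp(-x2))
assert d3(f, 0.7, 0.3, 0.1) >= 0.0   # so the increment e^{w_k} is well defined
```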
Corollary Within the set of positive continuous functions from $\mathbb{R}$ to $\mathbb{R}$ whose first and second derivatives are non-negative, the class $\hat{\mathcal{N}}_{++}$ is a universal approximator.
3 Estimating Call Option Prices
An option is a contract between two parties that entitles the buyer to a claim at a future
date T that depends on the future price, ST of an underlying asset whose price at time t is
St. In this paper we consider the very common European call options, in which the value
of the claim at maturity (time $T$) is $\max(0, S_T - K)$, i.e. if the price is above the strike price $K$, then the seller of the option owes $S_T - K$ dollars to the buyer. In the no-arbitrage framework, the call function is believed to be a function of the actual market price of the security ($S_t$), the strike price ($K$), the remaining time to maturity ($\tau = T - t$), the risk-free interest rate ($r$), and the volatility of the return ($\sigma$). The challenge is to evaluate the value of the option prior to the expiration date before entering a transaction. The risk-free interest rate ($r$) needs to be somehow extracted from the term structure and the volatility ($\sigma$) needs to be forecasted, this latter task being a field of research in itself. We have [3]
previously tried to feed in neural networks with estimates of the volatility using historical
averages but so far, the gains remained insignificant. We therefore drop these two features
and rely on the ones that can be observed: $S_t$, $K$, $\tau$. One more important result is that
under mild conditions, the call option function is homogeneous of degree one with respect
to the strike price and so our final approximation depends on two variables: the moneyness
($M = S_t/K$) and the time to maturity ($\tau$):
$$c_t / K = f(M, \tau). \qquad (8)$$
An economic theory yielding to the Black-Scholes formula suggest that f has the properties
of (1), so we will evaluate the advantages brought by the function classes of the previous
section. However, it is not clear whether the constraint on the cross derivatives that are
incorporated in IN++ should or not be present in the true price function. It is known that
the Black-Scholes formula does not adequately represent the market pricing of options, but
it might still be a useful guide in designing a learning algorithm for option prices.
4 Experimental Setup
As a reference model, we use a simple multi-layered perceptron with one hidden layer
(eq. 3). We also compare our results with a recently proposed model [4] that closely resembles the Black-Scholes formula for option pricing (i.e. another way to incorporate possibly
useful prior knowledge):
$$y = \alpha + M \cdot \sum_{i=1}^{n_h} \beta_{1,i}\; h(\gamma_{i,0} + \gamma_{i,1} M + \gamma_{i,2}\, \tau) \;+\; e^{-r\tau} \cdot \sum_{i=1}^{n_h} \beta_{2,i}\; h(\gamma_{i,3} + \gamma_{i,4} M + \gamma_{i,5}\, \tau). \qquad (9)$$
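A minimal sketch of this Black-Scholes-similar model follows, under our own parameterisation (the sigmoid choice for $h$ and the shape of the $\gamma$ matrix are assumptions for illustration):

```python
import numpy as np

def bs_similar(M, tau, r, alpha, beta1, beta2, gamma):
    """Equation 9. beta1, beta2: (n_h,); gamma: (n_h, 6), where columns
    0-2 feed the moneyness branch and columns 3-5 the discounting branch."""
    h = lambda s: 1.0 / (1.0 + np.exp(-s))
    branch1 = beta1 @ h(gamma[:, 0] + gamma[:, 1] * M + gamma[:, 2] * tau)
    branch2 = beta2 @ h(gamma[:, 3] + gamma[:, 4] * M + gamma[:, 5] * tau)
    return alpha + M * branch1 + np.exp(-r * tau) * branch2
```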
We evaluate two new architectures incorporating some or all of the constraints defined in
equation 7.
We used European call option data from 1988 to 1993. A total of 43518 transaction prices on European call options on the S&P500 index were used. In section 5, we report results
on 1988 data. In each case, we used the first two quarters of 1988 as a training set (3434
examples), the third quarter as a validation set (1642 examples) for model selection and 4
to 20 quarters as a test sets (each with around 1500 examples) for final generalization error
estimation. In tables 1 and 2, we present results for networks with unconstrained weights
on the left-hand side, and weights constrained to positive and monotone functions through
exponentiation of parameters on the right-hand side. For each model, the number of hidden
units varies from one to nine. The mean squared error results reported were obtained as
follows : first, we randomly sampled the parameter space 1000 times. We picked the best
(lowest training error) model and trained it up to 1000 more times. Repeating this procedure
10 times, we selected and averaged the performance of the best of these 10 models (those
with training error no more than 10% worse than the best out of 10). In figure 1, we present
tests of the same models on each quarter up to and including 1993 (20 additional test sets)
in order to assess the persistence (conversely, the degradation through time) of the trained
models.
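A pseudocode-like Python summary of this selection protocol may help; `sample_params`, `train`, and `mse` are hypothetical stand-ins for routines the paper does not spell out.

```python
def select_models(train_set, n_init=1000, n_repeats=10, slack=1.10):
    """Hypothetical sketch of the selection protocol behind tables 1 and 2."""
    trained = []
    for _ in range(n_repeats):
        candidates = [sample_params() for _ in range(n_init)]    # random restarts
        best = min(candidates, key=lambda p: mse(p, train_set))  # lowest training error
        trained.append(train(best, train_set, n_steps=1000))     # refine the winner
    floor = min(mse(p, train_set) for p in trained)
    # keep models within 10% of the best training error; performance is
    # averaged over these near-best models
    return [p for p in trained if mse(p, train_set) <= slack * floor]
```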
5 Forecasting Results
Simple Multi-Layered Perceptrons
Mean Squared Error Results on Call Option Pricing (×10⁻⁴)
[Table: for 1 to 9 hidden units, Train / Valid / Test1 / Test2 MSE, unconstrained vs. constrained weights.]
Black-Scholes Similar Networks
Mean Squared Error Results on Call Option Pricing (×10⁻⁴)
[Table: for 1 to 9 hidden units, Train / Valid / Test1 / Test2 MSE, unconstrained vs. constrained weights.]
Table 1: Left: the parameters are free to take on negative values. Right: parameters are constrained through exponentiation so that the resulting function is both positive and monotone increasing everywhere w.r.t. both inputs. Top: regular feedforward artificial neural networks. Bottom: neural networks with an architecture resembling the Black-Scholes formula as defined in equation 9. The number of units varies from 1 to 9 for each network architecture. The first two quarters of 1988 were used for training, the third of 1988 for validation and the fourth of 1988 for testing. The first quarter of 1989 was used as a second test set to assess the persistence of the models through time (figure 1). In bold: test results for models with best validation results.
As can be seen in tables 1 and 2, the positivity constraints through exponentiation of the
weights allow the networks to avoid overfitting. The training errors are generally slightly
lower for the networks with unconstrained weights, the validation errors are similar but final test errors are disastrous for unconstrained networks, compared to the constrained ones.
This "liftoff' pattern when looking at training, validation and testing errors has triggered
our attention towards the analysis of the evolution of the test error through time. The unconstrained networks obtain better training, validation and testing (test 1) results but fail in
Products of SoftPlus and Sigmoid Functions
Mean Squared Error Results on Call Option Pricing (×10⁻⁴)
[Table: for 1 to 9 hidden units, Train / Valid / Test1 / Test2 MSE, unconstrained vs. constrained weights.]
Sums of SoftPlus and Sigmoid Functions
Mean Squared Error Results on Call Option Pricing (×10⁻⁴)
[Table: for 1 to 9 hidden units, Train / Valid / Test1 / Test2 MSE, unconstrained vs. constrained weights.]
Table 2: Similar results as in table 1 but for two new architectures. Top: products of softplus
along the convex axis with sigmoid along the monotone axis. Bottom: the softplus and
sigmoid functions are summed instead of being multiplied. Top right: the fully constrained
proposed architecture.
the extra testing set (test 2). Constrained architectures seem more robust to changes in underlying econometric conditions. The constrained Black-Scholes similar model performs
slightly better than other models on the second test set but then fails on later quarters (figure 1). All in all, at the expense of slightly higher initial errors our proposed architecture
allows us to forecast with increased stability much farther in the future. This is a very
welcome property as new derivative products have a tendency to lock in values for much
longer durations (up to 10 years) than traditional ones.
6 Conclusions
Motivated by prior knowledge on the derivatives of the function that gives the price of
European options, we have introduced new classes of functions similar to multi-layer neural
networks that have those properties. We have shown one of these classes to be a universal
approximator for functions having those properties, and we have shown that using this a
priori knowledge can help in improving generalization performance. In particular, we have
found that the models that incorporate this a priori knowledge generalize in a more stable
way over time.
Figure 1: Out-of-sample results from the third quarter of 1988 to the fourth of 1993 (incl.) for models with best validation results; x-axis: quarter used as test set. Left: unconstrained models: results for the Black-Scholes similar network. Other unconstrained models exhibit similar swinging result patterns and levels of errors. Right: constrained models: the fully constrained proposed architecture (solid). The model with sums over dimensions obtains similar results. The regular neural network (dotted). The constrained Black-Scholes model obtains very poor results (dashed).
References
[1] G. Cybenko. Continuous valued neural networks with two hidden layers are sufficient. Technical report, Department of Computer Science, Tufts University, Medford, MA, 1988.
[2] G. Cybenko. Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals, and Systems, 2:303-314, 1989.
[3] C. Dugas, O. Bardou, and Y. Bengio. Analyses empiriques sur des transactions d'options. Technical Report 1176, Département d'informatique et de recherche opérationnelle, Université de Montréal, Montréal, Québec, Canada, 2000.
[4] R. Garcia and R. Gençay. Pricing and hedging derivative securities with neural networks and a homogeneity hint. Technical Report 98s-35, CIRANO, Montréal, Québec, Canada, 1998.
[5] K. Hornik, M. Stinchcombe, and H. White. Multilayer feedforward networks are universal approximators. Neural Networks, 2:359-366, 1989.
[6] J. Moody. Prediction risk and architecture selection for neural networks. In From Statistics to Neural Networks: Theory and Pattern Recognition Applications. Springer, 1994.
FaceSync: A linear operator for measuring
synchronization of video facial images and
audio tracks
Malcolm Slaney¹
Interval Research
malcolm@ieee.org
Michele Covell²
Interval Research
covell@ieee.org
Abstract
FaceSync is an optimal linear algorithm that finds the degree of synchronization between the audio and image recordings of a human
speaker. Using canonical correlation, it finds the best direction to combine all the audio and image data, projecting them onto a single axis.
FaceSync uses Pearson's correlation to measure the degree of synchronization between the audio and image data. We derive the optimal linear
transform to combine the audio and visual information and describe an
implementation that avoids the numerical problems caused by computing the correlation matrices.
1 Motivation
In many applications, we want to know about the synchronization between an audio signal
and the corresponding image data. In a teleconferencing system, we might want to know
which of the several people imaged by a camera is heard by the microphones; then, we can
direct the camera to the speaker. In post-production for a film, clean audio dialog is often
dubbed over the video; we want to adjust the audio signal so that the lip-sync is perfect.
When analyzing a film, we want to know when the person talking is in the shot, instead of
off camera. When evaluating the quality of dubbed films, we can measure of how well the
translated words and audio fit the actor's face.
This paper describes an algorithm, FaceSync, that measures the degree of synchronization
between the video image of a face and the associated audio signal. We can do this task by
synthesizing the talking face, using techniques such as Video Rewrite [1], and then comparing the synthesized video with the test video. That process, however, is expensive. Our
solution finds a linear operator that, when applied to the audio and video signals, generates
an audio-video-synchronization-error signal. The linear operator gathers information
from throughout the image and thus allows us to do the computation inexpensively.
Hershey and Movellan [2] describe an approach based on measuring the mutual information between the audio signal and individual pixels in the video. The correlation between
the audio signal, x, and one pixel in the image y, is given by Pearson's correlation, r. The
mutual information between these two variables is given by $I(x, y) = -\frac{1}{2} \log(1 - r^2)$. They
create movies that show the regions of the video that have high correlation with the audio;
1. Currently at IBM Almaden Research, 650 Harry Road, San Jose, CA 95120.
2. Currently at Yes Video. com, 2192 Fortune Drive, San Jose, CA 95131.
Figure 1: Connections between linear models relating audio, video and fiduciary points.
Figure 2: Standard deviation of the aligned facial images used to create the canonical model.
from the correlation data, they estimate the centroid of the activity pattern and find the talking face. They make no claim of their algorithm's ability to measure synchronization.
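For reference, here is a hedged sketch of that per-pixel measure (the variable names are our own): Pearson's $r$ between a per-frame audio energy track and each pixel's brightness over time, mapped to mutual information.

```python
import numpy as np

def per_pixel_mutual_info(audio_energy, frames):
    """audio_energy: (T,) per-frame track; frames: (T, H, W) grayscale video."""
    a = audio_energy - audio_energy.mean()
    v = frames - frames.mean(axis=0)
    cov = np.tensordot(a, v, axes=(0, 0))                        # (H, W) covariances
    r = cov / (np.linalg.norm(a) * np.sqrt((v ** 2).sum(axis=0)) + 1e-12)
    return -0.5 * np.log(1.0 - r ** 2 + 1e-12)                   # I(x, y) per pixel
```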
FaceSync is an optimal linear detector, equivalent to a Wiener filter [3], which combines
the information from all the pixels to measure audio-video synchronization. We developed
our approach based on two surprisingly simple algorithms in computer-vision and audiovisual speech synthesis: EigenPoints [4] and ATR's multilinear facial synthesizer [5]. The
relationship of these two algorithms to each other and to our problem is shown in Figure 1.
EigenPoints [4] is an algorithm that finds a linear mapping between the brightness of a
video signal and the location of fiduciary points on the face. At first, the validity of this
mapping is not obvious; we might not expect the brightness of pixels on a face to covary
linearly with x and y coordinates. It turns out, however, that the brightness of the image
pixels, i(x,y), and the location of fiduciary points such as the comer of the mouth, Pi =(Xi'
y), describe a function in a high-dimensional space. In the absence of occlusion, the combined brightness-fiduciary function is smoothly varying. Thus the derivatives are defined
and a Taylor-series approximation is valid. The real surprise is that EigenPoints can find a
linear approximation that describes the brightness-fiduciary space, and this linear approximation is valid over a useful range of brightness and control-point changes.
Similarly, Yehia, Rubin, and Vatikiotis-Bateson at ATR [5] have shown that it is possible to
connect a specific model of speech, the line-spectral pairs or LSP, with the position of
fiduciary points on the face. Their multilinear approximation yielded an average correlation of 0.91 between the true facial locations and those estimated from the audio data.
We derive a linear approximation to connect brightness to audio without the intermediate
fiduciary points. Neither linear mapping is exact, so we had to determine whether the
direct path between brightness and audio could be well approximated by a linear transform. We describe FaceSync in the next section.
Fisher and his colleagues [6] describe a more general approach that finds a non-linear
mapping onto subspaces which maximize the mutual information. They report results
using a single-layer perceptron for the non-linear mapping.
2 FaceSync Algorithm
FaceSync uses a face-recognition algorithm and canonical correlation to measure audiovisual synchrony. There are two steps: training or building the canonical correlation
model, and evaluating the fit of the model to the data. In both steps we use face-recognition software to find faces and align them with a sample face image. In the training stage,
canonical correlation finds a linear mapping that maximizes the cross-correlation between
two signals: the aligned face image and the audio signal. Finally, given new audio and
video data, we use the linear mapping to rotate a new aligned face and the audio signal
into a common space where we can evaluate their correlation as a function of time.
In both training and testing, we use a neural-network face-detection algorithm [7] to find
portions of the image that contain a face. This approach uses a pyramid of images to
search efficiently for pixels that look like faces. The software also allows the face to be
tracked through a sequence of image and thus reduce the computational overhead, but we
did not use this capability in our experiments. The output of Rowley's face-detection algorithm is a rectangle that encloses the position of a face. We use this information to align
the image data prior to correlational analysis.
We investigated a number of ways to describe the audio signal. We looked at mel-frequency cepstral coefficients (MFCC) [8], linear-predictive coding (LPC) [8], line spectral
frequencies (LSF) [9], spectrograms, and raw signal energy. For most calculations, we
used MFCC analysis, because it is a favorite front-end for speech-recognition systems
and, as do several of the other possibilities, it throws away the pitch information. This is
useful because the pitch information affects the spectrogram in a non-linear manner and
does not show up in the image data. For each form of audio analysis, we used a window
size that was twice the frame interval (2/29.97 seconds).
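A hedged sketch of such a front end, assuming the librosa library (the exact analysis parameters are our guesses, keyed only to the stated window and frame rate):

```python
import librosa

def mfcc_per_video_frame(wav_path, fps=29.97, n_mfcc=13):
    """One MFCC vector per video frame; window = two frame intervals."""
    y, sr = librosa.load(wav_path, sr=None)
    hop = int(round(sr / fps))        # one analysis step per video frame
    win = 2 * hop                     # 2/29.97 seconds of audio per window
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc,
                                n_fft=win, hop_length=hop).T
```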
Canonical correlation analysis (CCA) uses jointly varying data from an input subspace $\mathcal{X}_i$ and an output subspace $\mathcal{Y}_i$ to find canonic correlation matrices, $A_x$ and $A_y$. These matrices whiten the input and output data, as well as making the cross correlation diagonal and
"maximally compact." Specifically, the whitened data matrices are
11 = A: (x and have the following properties:
x) and
cp
= A~ (y -
y),
(1)
E{l1l1 T } = I, E{cpcpT} = I, E{<P11T} = LK = diag{cr 1,
cr 2 ,
... , cr L },
(2)
where 1 ::": cr 1 ::": cr 2 ::": ... > 0 and cr M + 1 = ... = cr L = O. In addition, for i starting from
1 and then repeating up to L, cr i is the largest possible correlation between 11i and
<Pi (where 11i and <Pi are the ilh elements of 11 and <P respectively), given the norm and
orthogonality constraints on 11 and <p, expressed in equation 2. We refer to this property
as maximal compaction, since the correlation is (recursively) maximally compacted into
the leading elements of 11 and <p.
We find the matrices $A_x$ and $A_y$ by whitening the input and output data:
$$\mathbf{x}' = R_{xx}^{-1/2} (\mathbf{x} - \bar{\mathbf{x}}) \quad \text{and} \quad \mathbf{y}' = R_{yy}^{-1/2} (\mathbf{y} - \bar{\mathbf{y}}), \qquad (3)$$
and then finding the left (U) and right (V) singular vectors of the cross-correlation matrix
between the whitened data
$$K = R_{y'x'} = R_{yy}^{-1/2} R_{yx} R_{xx}^{-1/2} = U_K \Lambda_K V_K^T. \qquad (4)$$
The SVD gives the same type of maximal compaction that we need for the cross-correlation matrices, $A_x$ and $A_y$. Since the SVD is unique up to sign changes (and a couple of other degeneracies associated with repeated singular values), $A_x$ and $A_y$ must be:
$$A_x = R_{xx}^{-1/2} V_K \quad \text{and} \quad A_y = R_{yy}^{-1/2} U_K. \qquad (5)$$
We can verify this by calculating $E\{\varphi \eta^T\}$ using the definitions of $\varphi$ and $\eta$:
$$\varphi = A_y^T (\mathbf{y} - \bar{\mathbf{y}}) = (R_{yy}^{-1/2} U_K)^T (\mathbf{y} - \bar{\mathbf{y}}) = U_K^T R_{yy}^{-1/2} (\mathbf{y} - \bar{\mathbf{y}}), \qquad (6)$$
$$\eta = A_x^T (\mathbf{x} - \bar{\mathbf{x}}) = (R_{xx}^{-1/2} V_K)^T (\mathbf{x} - \bar{\mathbf{x}}) = V_K^T R_{xx}^{-1/2} (\mathbf{x} - \bar{\mathbf{x}}), \qquad (7)$$
then note
$$E\{\varphi \eta^T\} = U_K^T R_{yy}^{-1/2}\, E\{\mathbf{y} \mathbf{x}^T\}\, R_{xx}^{-1/2} V_K = U_K^T R_{yy}^{-1/2} R_{yx} R_{xx}^{-1/2} V_K, \qquad (8)$$
and then, by using equation 4 (twice),
$$E\{\varphi \eta^T\} = U_K^T K V_K = U_K^T (U_K \Lambda_K V_K^T) V_K = (U_K^T U_K) \Lambda_K (V_K^T V_K) = \Lambda_K. \qquad (9)$$
This derivation of canonical correlation uses correlation matrices. This introduces a wellknown problem due to doubling the dynamic range of the analysis data. Instead, we formulate the estimation equations in terms of the components of the SVDs of the training
data matrices. Specifically, we take the SVDs of the zero-mean input and output matrices:
$$[\mathbf{x}_1 - \bar{\mathbf{x}} \;\cdots\; \mathbf{x}_N - \bar{\mathbf{x}}] = \sqrt{N-1}\; U_x \Lambda_x V_x^T, \qquad [\mathbf{y}_1 - \bar{\mathbf{y}} \;\cdots\; \mathbf{y}_N - \bar{\mathbf{y}}] = \sqrt{N-1}\; U_y \Lambda_y V_y^T. \qquad (10)$$
From these two decompositions, we can write the two correlation matrices as
$$R_{xx} = U_x \Lambda_x^2 U_x^T, \qquad R_{yy} = U_y \Lambda_y^2 U_y^T, \qquad (11)$$
$$R_{xx}^{-1/2} = U_x \Lambda_x^{-1} U_x^T, \qquad R_{yy}^{-1/2} = U_y \Lambda_y^{-1} U_y^T, \qquad (12)$$
and then write the cross-correlation matrix as
$$R_{yx} = U_y \Lambda_y V_y^T V_x \Lambda_x U_x^T. \qquad (13)$$
Using these expressions for the correlation matrices, the $K$ matrix becomes
$$K = (U_y \Lambda_y^{-1} U_y^T)(U_y \Lambda_y V_y^T V_x \Lambda_x U_x^T)(U_x \Lambda_x^{-1} U_x^T) = U_y V_y^T V_x U_x^T. \qquad (14)$$
Now let's look at the quantity $U_y^T K U_x$ in terms of its SVD:
$$U_y^T K U_x = V_y^T V_x = (U_y^T U_K)\, \Lambda_K\, (U_x^T V_K)^T = U_{UKU} \Lambda_K V_{UKU}^T, \qquad (15)$$
and, due to the uniqueness of the SVD, note
$$U_y^T U_K = U_{UKU} \quad \text{and} \quad U_x^T V_K = V_{UKU}. \qquad (16)$$
Now we can rewrite the equation for $A_x$ to remove the need for the squaring operation:
$$A_x = R_{xx}^{-1/2} V_K = U_x \Lambda_x^{-1} (U_x^T V_K) = U_x \Lambda_x^{-1} V_{UKU}, \qquad (17)$$
and similarly for $A_y$:
$$A_y = R_{yy}^{-1/2} U_K = U_y \Lambda_y^{-1} (U_y^T U_K) = U_y \Lambda_y^{-1} U_{UKU}. \qquad (18)$$
Using these identities, we compute $A_x$ and $A_y$ via the following steps:
1) Find the SVDs of the data matrices using the expressions in equation 10.
2) Form a rotated version of the cross-correlation matrix $K$ and compute its SVD using equation 14.
3) Compute the $A_x$ and $A_y$ matrices using equations 17 and 18.
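These three steps translate directly into NumPy; the sketch below is our own and assumes the whitened data are full rank (small singular values would need truncation in practice):

```python
import numpy as np

def cca_via_svd(X, Y):
    """X: (dx, N) audio data, Y: (dy, N) image data, one column per frame."""
    n = X.shape[1]
    Xc = (X - X.mean(axis=1, keepdims=True)) / np.sqrt(n - 1)
    Yc = (Y - Y.mean(axis=1, keepdims=True)) / np.sqrt(n - 1)
    # Step 1: SVDs of the zero-mean data matrices (equation 10)
    Ux, Lx, Vxt = np.linalg.svd(Xc, full_matrices=False)
    Uy, Ly, Vyt = np.linalg.svd(Yc, full_matrices=False)
    # Step 2: SVD of the rotated cross-correlation V_y^T V_x (equations 14-15)
    Uu, sk, Vut = np.linalg.svd(Vyt @ Vxt.T)
    # Step 3: equations 17 and 18, without squaring any matrices
    Ax = Ux @ np.diag(1.0 / Lx) @ Vut.T
    Ay = Uy @ np.diag(1.0 / Ly) @ Uu
    return Ax, Ay, sk   # sk holds the canonical correlations
```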
Given the linear mapping between audio data and the video images, as described by the $A_x$ and $A_y$ matrices, we measure the correlation between these two sets of data. For each candidate face in the image, we rotate the audio data by the first column of $A_x$, rotate the face image by the first column of $A_y$, and then compute Pearson's correlation of the rotated audio and video data. We use the absolute value of this correlation coefficient as a measure of audio-video synchronization.
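A minimal sketch of this scoring step (our own code; in practice the training-set means would be subtracted before projecting):

```python
import numpy as np

def sync_score(audio, video, Ax, Ay):
    """audio: (T, dx) feature rows; video: (T, dy) flattened aligned face rows."""
    a = audio @ Ax[:, 0]               # rotate by the first column of Ax
    v = video @ Ay[:, 0]               # rotate by the first column of Ay
    a = a - a.mean()
    v = v - v.mean()
    r = (a @ v) / (np.linalg.norm(a) * np.linalg.norm(v) + 1e-12)
    return abs(r)                      # |r| is the synchronization measure
```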
3 Results
We evaluated the performance of the FaceSync algorithm using a number of tests. In the
simplest tests we measured FaceSync's sensitivity to small temporal shifts between the
audio and the video signals, evaluated our performance as a function of testing-window
size and looked at different input representations. We also measured the effect of coarticulation.
To train the FaceSync system, we used 19 seconds of video. We used Rowley's face-detection software to find a rectangle bounding the face but we noticed a large amount (several
pixels) of jitter in the estimated positions.
Figure 3: Optimum projections of the audio and video signals that maximize their cross-correlation (x-axis: rotated audio data).
Figure 4: Correlation of audio and video data as the audio data is shifted in time past the video (29.97 frames/sec; A/V sync with MFCC analysis, testing data; x-axis: frame offset).
Figure 2 shows the standard deviation of our
aligned facial data. The standard deviation is high along the edges of the face, where small
amounts of motion have a dramatic effect on the brightness, and around the mouth, where
the image brightness changes with the spoken sounds.
Figure 3 shows the results of the canonical-correlation analysis for the 7 (distinct) seconds
of audio and video that we used for testing. Canonical correlation has rotated the two multidimensional signals (audio and image) into the directions that are maximally correlated
with each other. Note that the transformed audio and image signals are correlated.
We can evaluate the quality of these results by looking at the correlation of the two sets of
data as the audio and image data are shifted relative to each other (such shifts are the kinds
of errors that you would expect to see with bad lip sync.) An example of such a test is
shown in Figure 4. Note that, after only a few frames of shift (about 100 ms), the correlation between the audio and image data declined to close to zero.
We used the approach described by Hershey and Movellan to analyze which parts of the
facial image correlate best with the audio data. In their work, they computed correlations
over 16 frame intervals. Since we used aligned data, we could measure accurately the correlations over our entire 9 second test sequence. Our results are shown in Figure 5: Each
pixel shows the correlation that we found using our data. This approach looks at each pixel
individually and produces a maximum correlation near 0.45. Canonical correlation, which
accumulates all the pixel information from all over the image, also produces a maximum
correlation near 0.45, but by accumulating information from all over the image it allows us
to measure synchronization without integrating over the full 9 seconds.
Figure 6 shows FaceSync's ability to measure audio-visual synchronization as we varied
the testing-window size. For short windows (less than 1.3 seconds), we had insufficient
data to measure the correlation accurately. For long windows (greater than 2.6 seconds),
we had sufficient data to average and minimize the effect of errors, but as a result did not
have high time resolution. As shown in Figure 5, there is a peak in the correlation near 0
frame offset; there are often, however, large noise peaks at other shifts. Between 1.3 and
2.6 seconds of video produces reliable results.
Different audio-analysis techniques provide different information to the FaceSync algorithm. Figure 7 shows the audio-video synchronization correlation, similar to Figure 3, for
several different kinds of analysis. LPC and LSF produced identical narrow peaks; MFCC
produced a slightly lower peak. Hershey used the power from the spectrogram in his algorithm to detect the visual motion. However, our result for spectrogram data is in the noise,
indicating that a linear model can not use spectrogram data for fine-grain temporal measurements.
Figure 5: Correlation between audio energy and each video pixel ($r$) over the entire 9 second test sequence [2].
Figure 6: Performance of the FaceSync algorithm as a function of test window length (20, 40, 80 and 160 frame windows; x-axis: frame number). We would like to see a large peak (dark line) for all frames at zero shift.
We also looked at FaceSync's performance when we enhanced the video model with temporal context. Normally, we use one image frame and 67 ms of audio data as our input and output data. For this experiment, we stacked 13 images to form the input to the canonical-correlation algorithm. Our performance did not vary as we added more visual context, probably indicating that a single image frame contained all of the information that the linear model was able to capture.
As the preceding experiment shows, we did not improve the performance by adding more
image context. We can, however, use the FaceSync framework with extended visual context to learn something about co-articulation. Coarticulation is a well-known effect in
speech; the audio and physical state of the articulators depend not only on the current
phoneme, but also on the past history of the phonemic sequence and on the future sounds.
We let canonical correlation choose the most valuable data, across the range of shifted
video images. Summing the squared weighting terms gives us an estimate of how much
weight canonical correlation assigned to each shifted frame of data. Figure 8 shows that
one video frame (30ms) before the current audio frame, and four video frames (120ms)
after the current audio are affected by coarticulation. Interestingly, the zero-shift frame is
not the one that shows the maximum importance. Instead, the frames just before and after
are more heavily weighted.
4 Conclusions
We have described an algorithm, FaceSync, that builds an optimal linear model connecting the audio and video recordings of a person's speech. The model allows us to measure
the degree of synchronization between the audio and video, so that we can, for example,
determine who is speaking or to what degree the audio and video are sychronized.
While the goal of Hershey's process is not a temporal synchronization measurement, it is
still interesting to compare the two approaches. Hershey's process does not take into
account the mutual information between adjacent pixels; rather, it compares mutual information for individual pixels, then combines the results by calculating the centroid. In contrast, FaceSync asks what combination of audio and image data produces the best possible
correlation, thus deriving a single optimal answer. Although the two algorithms both use
Pearson's correlation to measure sychronization, FaceSync combines the pixels of the face
and the audio information in an optimal detector.
The performance of the FaceSync algorithm is dependent on both training and testing data
sizes. We did not test the quality of our models as we varied the training data. We do the
training calculation only once using all the data we have.
Figure 7: Performance of the FaceSync algorithm for different kinds of input representations (MFCC, LPC, LSF, spectrogram, power; x-axis: frame offset).
Figure 8: Contributions of different frames to the optimum correlation with the audio frame: summed weight for each frame position, from video in the past to video in the future (x-axis: delta frame).
Most interesting applications of
FaceSync depend on the testing data, and we would like to know how much data is necessary to make a decision.
In our FaceSync application, we have more dimensions (pixels in the image) than examples (video frames). Thus, our covariance matrices are singular, making their inversion, which we do as part of canonical correlation, problematic. We address the need for a pseudo-inverse, while avoiding the increased dynamic range of the covariance matrices, by using an SVD on the (unsquared) data matrices themselves (in place of an eigendecomposition of the covariance matrices).
We demonstrated high linear correlations between the audio and video signals, after we
first found the optimal projection direction by using canonical correlation. We evaluated
the FaceSync algorithm by measuring the correlation between the audio and video signals
as we shift the audio data relative to the image data. MFCC, LPC, and LSF all produce
sharp correlations as we shift the audio and images, whereas speech power and spectrograms produce no correlation peak at all.
References
[1] C. Bregler, M. Covell, M. Slaney. "Video Rewrite: Driving visual speech with audio." Proc. SIGGRAPH 97, Los Angeles, CA, pp. 353-360, August 1997.
[2] J. Hershey, J. R. Movellan. "Audio-Vision: Locating sounds via audio-visual synchrony." Advances in Neural Information Processing Systems 12, edited by S. A. Solla, T. K. Leen, K.-R. Müller. MIT Press, Cambridge, MA (in press).
[3] L. L. Scharf, J. K. Thomas. "Wiener filters in canonical coordinates for transform coding, filtering and quantizing." IEEE Transactions on Signal Processing, 46(3), pp. 647-654, March 1998.
[4] M. Covell, C. Bregler. "Eigenpoints." Proc. Int. Conf. Image Processing, Lausanne, Switzerland, Vol. 3, pp. 471-474, 1996.
[5] H. C. Yehia, P. E. Rubin, E. Vatikiotis-Bateson. "Quantitative association of vocal-tract and facial behavior." Speech Communication, 26, pp. 23-44, 1998.
[6] J. W. Fisher III, T. Darrell, W. T. Freeman, P. Viola. "Learning joint statistical models for audio-visual fusion and segregation." This volume, 2001.
[7] H. A. Rowley, S. Baluja, T. Kanade. "Neural network-based face detection." IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(1), pp. 23-38, January 1998.
[8] L. Rabiner, B. Juang. Fundamentals of Speech Recognition. Prentice Hall, Englewood Cliffs, New Jersey, 1993.
[9] N. Sugamura, F. Itakura. "Speech analysis and synthesis methods developed at ECL in NTT - from LPC to LSP." Speech Communications, 4(2), June 1986.
Large Scale Bayes Point Machines
Ralf Herbrich
Statistics Research Group
Computer Science Department
Technical University of Berlin
ralfh@cs.tu-berlin.de
Thore Graepel
Statistics Research Group
Computer Science Department
Technical University of Berlin
guru@cs.tu-berlin.de
Abstract
The concept of averaging over classifiers is fundamental to the
Bayesian analysis of learning. Based on this viewpoint, it has recently been demonstrated for linear classifiers that the centre of
mass of version space (the set of all classifiers consistent with the
training set) - also known as the Bayes point - exhibits excellent generalisation abilities. However, the billiard algorithm as presented in [4] is restricted to small sample size because it requires
$O(m^2)$ of memory and $O(N \cdot m^2)$ computational steps where $m$
is the number of training patterns and N is the number of random
draws from the posterior distribution. In this paper we present a
method based on the simple perceptron learning algorithm which
allows to overcome this algorithmic drawback. The method is algorithmically simple and is easily extended to the multi-class case.
We present experimental results on the MNIST data set of handwritten digits which show that Bayes point machines (BPMs) are
competitive with the current world champion, the support vector
machine. In addition, the computational complexity of BPMs can
be tuned by varying the number of samples from the posterior.
Finally, rejecting test points on the basis of their (approximative)
posterior probability leads to a rapid decrease in generalisation error, e.g. 0.1% generalisation error for a given rejection rate of 10%.
1 Introduction
Kernel machines have recently gained a lot of attention due to the popularisation
of the support vector machine (SVM) [13] with a focus on classification and the
revival of Gaussian Processes (GP) for regression [15]. Subsequently, SVMs have
been modified to handle regression [12] and GPs have been adapted to the problem
of classification [8]. Both schemes essentially work in the same function space that is
characterised by kernels (SVM) and covariance functions (GP), respectively. While
the formal similarity of the two methods is striking the underlying paradigms of
inference are very different. The SVM was inspired by results from statistical/PAC
learning theory while GPs are usually considered in a Bayesian framework. This
ideological clash can be viewed as a continuation in machine learning of the by
now classical disagreement between Bayesian and frequentistic statistics. With
regard to algorithmics the two schools of thought appear to favour two different
methods of learning and predicting: the SVM community - as a consequence of the
formulation of the SVM as a quadratic programming problem - focuses on learning
as optimisation while the Bayesian community favours sampling schemes based on
the Bayesian posterior. Of course there exists a strong relationship between the two
ideas, in particular with the Bayesian maximum a posteriori (MAP) estimator being
the solution of an optimisation problem. Interestingly, the two viewpoints have
recently been reconciled theoretically in the so-called PAC-Bayesian framework [5]
that combines the idea of a Bayesian prior with PAC-style performance guarantees
and has been the basis of the so far tightest margin bound for SVMs [3]. In practice,
optimisation based algorithms have the advantage of a unique, deterministic solution
and the availability of the cost function as an indicator for the quality of the solution.
In contrast, Bayesian algorithms based on sampling and voting are more flexible and
have the so-called "anytime" property, providing a relatively good solution at any
point in time. Often, however, they suffer from the computational costs of sampling
the Bayesian posterior.
In this contribution we review the idea of the Bayes point machine (BPM) as an
approximation to Bayesian inference for linear classifiers in kernel space in Section
2. In contrast to the GP viewpoint we do not define a Gaussian prior on the length
Ilwllx: of the weight vector. Instead, we only consider weight vectors of length
Ilwllx: = 1 because it is only the spatial direction of the weight vector that matters
for classification. It is then natural to define a uniform prior on the resulting ballshaped hypothesis space. Hence, we determine the centre of mass ("Bayes point") of
the resulting posterior that is uniform in version space, i.e. in the zero training error
region. While the version space could be sampled using some form of Gibbs sampling
(see, e.g. [6] for an overview) or an ergodic dynamic system such as a billiard [4]
we suggest to use the perceptron algorithm trained on permutations of the training
set for sampling in Section 3. This extremely simple sampling scheme proves to be
efficient enough to make the BPM applicable to large data sets. We demonstrate
this fact in Section 4 on the well-known MNIST data set containing 60 000 samples
of handwritten digits and show how an approximation to the posterior probability of
classification provided by the BPM can even be used for test-point rejection leading
to a great reduction in generalisation error on the remaining samples.
We denote $n$-tuples by italic bold letters (e.g. $\boldsymbol{x} = (x_1, \ldots, x_n)$), vectors by roman bold letters (e.g. $\mathbf{x}$), random variables by sans serif font (e.g. $\mathsf{X}$) and vector spaces by calligraphic capitalised letters (e.g. $\mathcal{X}$). The symbols $\mathbf{P}$, $\mathbf{E}$ and $I$ denote a probability measure, the expectation of a random variable and the indicator function, respectively.
2 Bayes Point Machines
Let us consider the task of classifying patterns $x \in \mathcal{X}$ into one of the two classes $y \in \mathcal{Y} = \{-1, +1\}$ using functions $h : \mathcal{X} \to \mathcal{Y}$ from a given set $\mathcal{H}$ known as the hypothesis space. In this paper we shall only be concerned with linear classifiers:
$$\mathcal{H} = \{ x \mapsto \mathrm{sign}(\langle \phi(x), \mathbf{w} \rangle_{\mathcal{K}}) \mid \mathbf{w} \in \mathcal{W} \}, \qquad \mathcal{W} = \{ \mathbf{w} \in \mathcal{K} \mid \|\mathbf{w}\|_{\mathcal{K}} = 1 \}, \qquad (1)$$
where ? : X ~ K ~ i~ is known I as the feature map and has to fixed beforehand.
If all that is needed for learning and classification are the inner products (., .)x: in
the feature space K, it is convenient to specify ? only by its inner product function
1 For notational convenience we shall abbreviate cf> (x) by x. This should not be confused
with the set x of training points.
k :X
X
X -t IR known as the kernel, i.e.
"Ix, x' EX:
k (x, x')
= (? (x) , ? (x')}JC
.
For simplicity, let us assume that there exists a classifier² $w^* \in \mathcal{W}$ that labels all
our data, i.e.

$$P_{Y|X=x,W=w^*}(y) = I_{h_{w^*}(x)=y}. \qquad (2)$$
This assumption can easily be relaxed by introducing slack variables, as done in the
soft margin variant of the SVM. Then, given a training set $z = (x, y)$ of m points
$x_i$ together with their classes $y_i$ assigned by $h_{w^*}$, drawn iid from an unknown data
distribution $P_Z = P_{Y|X}P_X$, we can assume the existence of a version space $V(z)$, i.e.
the set of all classifiers $w \in \mathcal{W}$ consistent with z:

$$V(z) = \{w \in \mathcal{W} \mid \forall i \in \{1, \ldots, m\}: y_i\langle x_i, w\rangle_\mathcal{K} > 0\}. \qquad (3)$$
In a Bayesian spirit we incorporate all of our prior knowledge about $w^*$ into a
prior distribution $P_W$ over $\mathcal{W}$. In the absence of any a priori knowledge we suggest
a uniform prior over the spatial direction of weight vectors w. Now, given the
training set z we update our prior belief by Bayes' formula, i.e.

$$P_{W|Z^m=z}(w) = \frac{P_{Z^m|W=w}(z)\,P_W(w)}{E_W\!\left[P_{Z^m|W=w}(z)\right]} = \frac{\prod_{i=1}^{m} P_{Y|X=x_i,W=w}(y_i)\,P_W(w)}{E_W\!\left[\prod_{i=1}^{m} P_{Y|X=x_i,W=w}(y_i)\right]} = \begin{cases} \dfrac{P_W(w)}{P_W(V(z))} & \text{if } w \in V(z) \\ 0 & \text{otherwise} \end{cases}$$
where the first line follows from the independence and the fact that x has no dependence on w, and the second line follows from (2) and (3). The Bayesian classification
of a novel test point x is then given by

$$\mathrm{Bayes}_z(x) = \operatorname{argmax}_{y\in\mathcal{Y}} P_{W|Z^m=z}(\{h_W(x) = y\}) = \operatorname{sign}\!\left(E_{W|Z^m=z}[h_W(x)]\right) = \operatorname{sign}\!\left(E_{W|Z^m=z}[\operatorname{sign}(\langle x, W\rangle_\mathcal{K})]\right).$$
Unfortunately, the strategy $\mathrm{Bayes}_z$ is in general not contained in the set $\mathcal{H}$ of
classifiers considered beforehand. Since $P_{W|Z^m=z}$ is only non-zero inside version
space, it has been suggested to use the centre of mass $w_{cm}$ as an approximation for
$\mathrm{Bayes}_z$, i.e.

$$w_{cm} = E_{W|Z^m=z}[W], \qquad \mathrm{Bayes}_z(x) \approx \operatorname{sign}(\langle x, w_{cm}\rangle_\mathcal{K}) = \operatorname{sign}\!\left(E_{W|Z^m=z}[\langle x, W\rangle_\mathcal{K}]\right). \qquad (4)$$
This classifier is called the Bayes point. In a previous work [4] we calculated $w_{cm}$
using a first order Markov chain based on a billiard-like algorithm (see also [10]).
We entered the version space $V(z)$ using a perceptron algorithm and started playing billiards in version space $V(z)$, thus creating a sequence of pseudo-random
samples $w_i$ due to the chaotic nature of the billiard dynamics. Playing billiards
in $V(z)$ is possible because each training point $(x_i, y_i) \in z$ defines a hyperplane
$\{w \in \mathcal{K} \mid y_i\langle x_i, w\rangle_\mathcal{K} = 0\}$. Hence, the version space is a convex polyhedron
on the surface of $\mathcal{W}$. After N bounces of the billiard ball the Bayes point was
estimated by

$$\hat{w}_{cm} = \frac{1}{N}\sum_{i=1}^{N} w_i.$$
²We synonymously call $h \in \mathcal{H}$ and $w \in \mathcal{W}$ a classifier because there is a one-to-one
correspondence between the two by virtue of (1).
Although this algorithm shows excellent generalisation performance when compared
to state-of-the-art learning algorithms like support vector machines (SVMs) [13], its
effort scales like $O(m^2)$ and $O(N \cdot m^2)$ in terms of memory and computational
requirements, respectively.
3 Sampling the Version Space
Clearly, all we need for estimating the Bayes point (4) is a set of classifiers w drawn
uniformly from $V(z)$. In order to save computational resources it might be advantageous to achieve a uniform sample only approximately. The classical perceptron
learning algorithm offers the possibility to obtain up to m! different classifiers in version space simply by learning on different permutations of the training set. Given
a permutation $\Pi : \{1, \ldots, m\} \to \{1, \ldots, m\}$ the perceptron algorithm works as
follows:
1. Start with $w_0 = 0$ and $t = 0$.

2. For all $i \in \{1, \ldots, m\}$: if $y_{\Pi(i)}\langle x_{\Pi(i)}, w_t\rangle_\mathcal{K} \le 0$ then $w_{t+1} = w_t + y_{\Pi(i)}\, x_{\Pi(i)}$
and $t \leftarrow t + 1$.

3. Stop if for all $i \in \{1, \ldots, m\}$: $y_{\Pi(i)}\langle x_{\Pi(i)}, w_t\rangle_\mathcal{K} > 0$.
A classical theorem due to Novikoff [7] guarantees the convergence of this procedure
and furthermore provides an upper bound on the number t of mistakes needed until
convergence. More precisely, if there exists a classifier $w_{SVM}$ with margin

$$\gamma_z(w_{SVM}) = \min_{(x_i, y_i)\in z} \frac{y_i\langle x_i, w_{SVM}\rangle_\mathcal{K}}{\|w_{SVM}\|_\mathcal{K}} > 0,$$

then the number of mistakes until convergence (which is an upper bound on
the sparsity of the solution) is not more than $R^2(x)\,\gamma_z^{-2}(w_{SVM})$, where $R(x)$
is the smallest real number such that $\forall x \in x: \|\phi(x)\|_\mathcal{K} \le R(x)$. The quantity
$\gamma_z(w_{SVM})$ is maximised for the solution $w_{SVM}$ found by the SVM, and whenever
the SVM is theoretically justified by results from learning theory (see [11, 13]) the
ratio $d = R^2(x)\,\gamma_z^{-2}(w_{SVM})$ is considerably less than m, say $d \ll m$.
Algorithmically, we can benefit from this sparsity by the following "trick": since

$$w = \sum_{i=1}^{m} \alpha_i x_i,$$

all we need to store is the m-dimensional vector $\alpha$. Furthermore, we keep track of
the m-dimensional vector o of real-valued outputs

$$o_i = y_i\langle x_i, w_t\rangle_\mathcal{K} = \sum_{j=1}^{m} \alpha_j\, k(x_i, x_j)$$

of the current solution at the i-th training point. By definition, in the beginning $\alpha =
o = 0$. Now, if $o_i \le 0$ we update $\alpha_i$ by $\alpha_i + y_i$ and update o by $o_j \leftarrow o_j + y_i\, k(x_i, x_j)$,
which requires only m kernel calculations. In summary, the memory requirement of
this algorithm is 2m and the number of kernel calculations is not more than $d \cdot m$. As
a consequence, the computational requirement of this algorithm is no more than the
computational requirement for the evaluation of the margin $\gamma_z(w_{SVM})$! We suggest
to use this efficient perceptron learning algorithm in order to obtain samples $w_i$ for
the computation of the Bayes point by (4).
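A minimal sketch (our own illustration, not the authors' code) of this sampling scheme: a dual kernel perceptron is run on random permutations of a toy training set, each resulting classifier is normalized to $\|w\|_\mathcal{K} = 1$, and the expansion coefficients are averaged to estimate the Bayes point as in (4). The toy data, kernel degree and all variable names are assumptions for illustration only.

```python
import numpy as np

def poly_kernel(X1, X2, degree=5):
    # k(x, x') = (<x, x'> + 1)^degree, cf. equation (5) below
    return (X1 @ X2.T + 1.0) ** degree

def kernel_perceptron(K, y, order):
    # Dual perceptron on one permutation; w = sum_j c_j phi(x_j).
    m = len(y)
    alpha = np.zeros(m)            # mistake counts per point
    margins = np.zeros(m)          # margins_i = y_i <phi(x_i), w>
    changed = True
    while changed:
        changed = False
        for i in order:
            if margins[i] <= 0:                  # mistake on point i
                alpha[i] += 1.0
                margins += y[i] * y * K[i]       # only m kernel values touched
                changed = True
    c = alpha * y                                # expansion coefficients of w
    return c / np.sqrt(c @ K @ c)                # normalize to ||w||_K = 1

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))
y = np.sign(X @ np.array([1.0, 1.0]) + 0.1)      # linearly separable toy labels
K = poly_kernel(X, X)

N = 10
cs = [kernel_perceptron(K, y, rng.permutation(len(y))) for _ in range(N)]
c_bp = np.mean(cs, axis=0)                       # Bayes point estimate, cf. (4)
print("training error:", np.mean(np.sign(K @ c_bp) != y))   # 0.0 on separable data
```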
Figure 1: (a) Histogram of generalisation errors (estimated on a test set) using
a kernel Gibbs sampler. (b) Histogram of generalisation errors (estimated on a
test set) using a kernel perceptron. (c) QQ plot of distributions (a) and (b). The
straight line indicates that both distributions are very similar.
In order to investigate the usefulness of this approach experimentally, we compared
the distribution of generalisation errors of samples obtained by perceptron learning
on permuted training sets (as suggested earlier by [14]) with samples obtained by
a full Gibbs sampling [2]. For computational reasons, we used only 188 training
patterns and 453 test patterns of the classes "1" and "2" from the MNIST data set³.
In Figure 1 (a) and (b) we plotted the distribution over 1000 random samples using
the kernel⁴

$$k(x, x') = (\langle x, x'\rangle_\mathcal{X} + 1)^5. \qquad (5)$$
Using a quantile-quantile (QQ) plot technique we can compare both distributions
in one graph (see Figure 1 (c)). These plots suggest that by simple permutation
of the training set we are able to obtain a sample of classifiers exhibiting the same
generalisation error distribution as with time-consuming Gibbs sampling.
4 Experimental Results
In our large scale experiment we used the full MNIST data set with 60 000 training
examples and 10 000 test examples of 28 x 28 grey value images of handwritten
digits. As input vector x we used the 784-dimensional vector of grey values. The
images were labelled by one of the ten classes "0" to "9". For each of the ten classes
$y = \{0, \ldots, 9\}$ we ran the perceptron algorithm N = 10 times, each time labelling
all training points of class y by +1 and the remaining training points by -1. On
an Ultra Sparc 10 each learning trial took approximately 20-30 minutes. For
the classification of a test image x we calculated the real-valued output of all 100
different classifiers⁵ by

$$f_i(x) = \sum_{j=1}^{m} (\alpha_i)_j\, k(x_j, x),$$

where we used the kernel k given by (5). Here $(\alpha_i)_j$ refers to the expansion coefficient
corresponding to the i-th classifier and the j-th data point. Now, for each of the

³Available at http://www.research.att.com/~yann/ocr/mnist/.
⁴We decided to use this kernel because it showed excellent generalisation performance
when using the support vector machine.
⁵For notational simplicity we assume that the first N classifiers are classifiers for the
class "0", the next N for class "1" and so on.
rejection rate    generalisation error
0%                1.46%
1%                1.10%
2%                0.87%
3%                0.67%
4%                0.49%
5%                0.37%
6%                0.32%
7%                0.26%
8%                0.21%
9%                0.14%
10%               0.11%

Figure 2: Generalisation error as a function of the rejection rate for the MNIST data
set. The SVM achieved 1.4% without rejection as compared to 1.46% for the BPM.
Note that by rejection based on the real-valued output the generalisation error
could be reduced to 0.1%, indicating that this measure is related to the probability
of misclassification of single test points.
ten classes we calculated the real-valued decision of the Bayes point $w_{cm,y}$ by

$$f_{bp,y}(x) = \frac{1}{N}\sum_{i=1}^{N} f_{i+yN}(x).$$

In a Bayesian spirit, the final decision was carried out by

$$h_{bp}(x) = \operatorname{argmax}_{y\in\{0,\ldots,9\}}\, f_{bp,y}(x).$$
Note that $f_{bp,y}(x)$ [9] can be interpreted as an (unnormalised) approximation of
the posterior probability that x is of class y when restricted to the function class
(1). In order to test the dependence of the generalisation error on the magnitude
$\max_y f_{bp,y}(x)$ we fixed a certain rejection rate $r \in [0, 1]$ and rejected the set of
$r \cdot 10\,000$ test points with the smallest value of $\max_y f_{bp,y}(x)$. The resulting plot
is depicted in Figure 2.
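As a hedged illustration of this rejection scheme (again our own sketch, not the authors' code), the following assumes a matrix `outputs` of real-valued decisions $f_{bp,y}(x)$ (rows are test points, columns the ten classes) and integer `labels`; it rejects the fraction r of points with the smallest value of $\max_y f_{bp,y}(x)$ and reports the error on the rest.

```python
import numpy as np

def error_after_rejection(outputs, labels, r):
    conf = outputs.max(axis=1)                 # max_y f_bp,y(x), the confidence
    keep = conf >= np.quantile(conf, r)        # drop the r least confident points
    pred = outputs.argmax(axis=1)
    return np.mean(pred[keep] != labels[keep])

# e.g. for r in (0.0, 0.05, 0.10): print(r, error_after_rejection(outputs, labels, r))
```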
As can be seen from this plot, even without rejection the Bayes point has excellent
generalisation performance⁶. Furthermore, rejection based on the real-valued output $f_{bp}(x)$ turns out to be excellent, reducing the generalisation error to 0.1%.
One should also bear in mind that the learning time for this simple algorithm was
comparable to that of SVMs.
A very advantageous feature of our approach as compared to SVMs is its adjustable
time and memory requirements and the "anytime" availability of a solution due to
sampling. If the training set grows further and we are not able to spend more time
with learning, we can adjust the number N of samples used at the price of slightly
worse generalisation error.
5 Conclusion
In this paper we have presented an algorithm for approximating the Bayes point by
rerunning the classical perceptron algorithm with a permuted training set. Here we
⁶Note that the best known result on this data set is 1.1%, achieved with a polynomial
kernel of degree four. Nonetheless, for reasons of fairness we compared the results of both
algorithms using the same kernel.
particularly exploited the sparseness of the solution which must exist whenever the
success of the SVM is theoretically justified. The restriction to the zero training
error case can be overcome by modifying the kernel as

$$k_\lambda(x, x') = k(x, x') + \lambda \cdot I_{x=x'}.$$
This technique is well known and was already suggested by Vapnik in 1995 (see [1]).
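For the training Gram matrix this modification only touches the diagonal, since $x = x'$ exactly there; a two-line sketch (λ is a tuning parameter, not a value from the paper):

```python
import numpy as np

def k_lambda(K, lam):
    # K is the training Gram matrix; I_{x=x'} is the identity on its diagonal
    return K + lam * np.eye(K.shape[0])
```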
Another interesting question raised by our experimental findings is the following:
By how much is the distribution of generalisation errors over random samples from
version space related to the distribution of generalisation errors of the up to m!
different classifiers found by the classical perceptron algorithm?
Acknowledgements We would like to thank Bob Williamson for helpful discussions and suggestions on earlier drafts. Parts of this work were done during a
research stay of both authors at the ANU Canberra.
References

[1] C. Cortes and V. Vapnik. Support Vector Networks. Machine Learning, 20:273-297, 1995.

[2] T. Graepel and R. Herbrich. The kernel Gibbs sampler. In Advances in Neural Information Processing Systems 13, 2001.

[3] R. Herbrich and T. Graepel. A PAC-Bayesian margin bound for linear classifiers: Why SVMs work. In Advances in Neural Information Processing Systems 13, 2001.

[4] R. Herbrich, T. Graepel, and C. Campbell. Robust Bayes Point Machines. In Proceedings of ESANN 2000, pages 49-54, 2000.

[5] D. A. McAllester. Some PAC-Bayesian theorems. In Proceedings of the Eleventh Annual Conference on Computational Learning Theory, pages 230-234, Madison, Wisconsin, 1998.

[6] R. M. Neal. Markov chain Monte Carlo method based on 'slicing' the density function. Technical report, Department of Statistics, University of Toronto, 1997. TR-9722.

[7] A. Novikoff. On convergence proofs for perceptrons. In Report at the Symposium on Mathematical Theory of Automata, pages 24-26, Polytechnic Institute of Brooklyn, 1962.

[8] M. Opper and O. Winther. Gaussian processes for classification: Mean field algorithms. Neural Computation, 12(11), 2000.

[9] J. Platt. Probabilities for SV machines. In Advances in Large Margin Classifiers, pages 61-74. MIT Press, 2000.

[10] P. Rujan and M. Marchand. Computing the Bayes kernel classifier. In Advances in Large Margin Classifiers, pages 329-348. MIT Press, 2000.

[11] J. Shawe-Taylor, P. L. Bartlett, R. C. Williamson, and M. Anthony. Structural risk minimization over data-dependent hierarchies. IEEE Transactions on Information Theory, 44(5):1926-1940, 1998.

[12] A. J. Smola. Learning with Kernels. PhD thesis, Technische Universität Berlin, 1998.

[13] V. Vapnik. The Nature of Statistical Learning Theory. Springer, 1995.

[14] T. Watkin. Optimal learning with a neural network. Europhysics Letters, 21:871-877, 1993.

[15] C. Williams. Prediction with Gaussian Processes: From linear regression to linear prediction and beyond. Technical report, Neural Computing Research Group, Aston University, 1997. NCRG/97/012.
Homeostasis in a Silicon Integrate and Fire Neuron

Shih-Chii Liu
Institute for Neuroinformatics, ETH/UNIZ
Winterthurstrasse 190, CH-8057 Zurich
Switzerland
shih@ini.phys.ethz.ch

Bradley A. Minch
School of Electrical and Computer Engineering
Cornell University
Ithaca, NY 14853-5401, U.S.A.
minch@ee.cornell.edu
Abstract
In this work, we explore homeostasis in a silicon integrate-and-fire neuron. The neuron adapts its firing rate over long time periods on the order
of seconds or minutes so that it returns to its spontaneous firing rate after
a lasting perturbation. Homeostasis is implemented via two schemes.
One scheme looks at the presynaptic activity and adapts the synaptic
weight depending on the presynaptic spiking rate. The second scheme
adapts the synaptic "threshold" depending on the neuron's activity. The
threshold is lowered if the neuron's activity decreases over a long time
and is increased for prolonged increase in postsynaptic activity. Both
these mechanisms for adaptation use floating-gate technology. The results
shown here are measured from a chip fabricated in a 2-μm CMOS
process.
1 Introduction
We explored long-time constant adaptation mechanisms in a simple integrate-and-fire silicon neuron. Many researchers have postulated constant adaptation mechanisms which, for
example, preserve the firing rate of the neuron over long time intervals (Liu et al. 1998)
or use the presynaptic spiking statistics to adapt the spiking rate of the neuron so that
the distribution of this spiking rate is uniformly distributed (Stemmler and Koch 1999).
Homeostasis is observed in in-vitro recordings (Desai et al. 1999) where if the K or Na
conductances are perturbed by adding antagonists, the cell returns to its original spiking
rate in a couple of days.
This work differs from previous work that explores the adaptation of the firing threshold
and the gain of the neuron through the regulation of Hodgkin-Huxley like conductances
(Shin and Koch 1999) and regulation of the neuron to perturbation in the conductances
(Simoni and DeWeerth 1999). Our neuron circuit is a simple integrate-and-fire neuron and
our adaptation mechanisms have time constants of seconds to minutes. We also describe
adaptation of the synaptic weight to presynaptic spiking rates. This presynaptic adaptation
models the contrast gain control curves of cortical simple cells (Ohzawa et al. 1985).

Figure 1: Schematic of neuron circuit with long time constant mechanisms for presynaptic
adaptation.

We fabricated two different circuits in a 2-μm CMOS process. One circuit implements
presynaptic adaptation and the other circuit implements postsynaptic adaptation. The long
time constant adaptation mechanisms use tunnelling and injection mechanisms to remove
charge from, and to add charge onto, a floating gate (Diorio et al. 1999). We added these
mechanisms to a simple integrate-and-fire neuron circuit (Mead 1989). This circuit (shown
in Figure 1) takes an input current, $I_{epsc}$, which charges up the membrane, $V_m$. When
the membrane exceeds a threshold, the output of the neuron, $V_o$, spikes. The spiking rate
of the neuron, $f_o$, is determined by the input current, $I_{epsc}$; that is, $f_o = m\, I_{epsc}$, where
$m = \frac{1}{(C_1 + C_2)\,V_{dd}}$ is a constant.
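A small numerical check of this rate relation (parameter values are assumed for illustration, not taken from the chip): the membrane charges at $I_{epsc}/(C_1+C_2)$ and is reset after each threshold crossing, so the simulated rate should approximately match $m\,I_{epsc}$, up to discretization of the reset.

```python
import numpy as np

C1, C2, Vdd = 1e-12, 1e-12, 3.3             # farads, volts (assumed values)

def firing_rate(I_epsc, T=10.0, dt=1e-5):
    Vm, spikes = 0.0, 0
    for _ in range(int(T / dt)):
        Vm += I_epsc / (C1 + C2) * dt       # membrane charges linearly
        if Vm >= Vdd:                        # spike and reset
            spikes += 1
            Vm = 0.0
    return spikes / T

for I in (50e-12, 100e-12):
    print(firing_rate(I), I / ((C1 + C2) * Vdd))   # simulated vs. m * I_epsc
```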
2 Adaptation mechanisms in silicon neuron circuit
In order to permit continuous operation with only positive polarity bias voltages, we use
two distinct mechanisms to modify the floating-gate charges in our neuron circuits. We use
Fowler-Nordheim tunneling through high-quality gate oxide to remove electrons from the
floating gates (Lenzlinger and Snow 1969). Here, we apply a large voltage across the oxide,
which reduces the width of the Si-SiO₂ energy barrier to such an extent that electrons are
likely to tunnel through the barrier. The tunneling current is given approximately by

$$I_{tun} = I_{ot}\, e^{-V_o/V_{ox}},$$

where $V_{ox} = V_{tun} - V_{fg}$ is the voltage across the tunneling oxide and $I_{ot}$ and $V_o$ are
measurable device parameters. For the 400-Å oxides that are typical of a 2-μm CMOS
process, a typical value of $V_o$ is 1000 V and an oxide voltage of about 30 V is required to
obtain an appreciable tunneling current.
We use subthreshold channel hot-electron injection in an nMOS transistor (Diorio, Minch,
and Hasler 1999) to add electrons to the floating gates. In this process, electrons in the
channel of the nMOS transistor accelerate in the high electric field that exists in the depletion region near the drain, gaining enough energy to surmount the Si-SiO₂ energy barrier
(about 3.2 eV). To facilitate the hot-electron injection process, we locally increase the substrate doping density of the nMOS transistor using the p-base layer that is normally used
to form the base of a vertical npn bipolar transistor. The p-base substrate implant simultaneously increases the electric field at the drain end of the channel and increases the nMOS
transistor's threshold voltage from 0.8 V to about 6 V, permitting subthreshold operation at
gate voltages that permit the collection of the injected electrons by the floating gate. The
hot-electron injection current is given approximately by

$$I_{inj} = \eta\, I_s\, e^{\Phi_{dc}/V_{inj}},$$

where $I_s$ is the source current, $\Phi_{dc}$ is the drain-to-channel voltage, and $\eta$ and $V_{inj}$ are
measurable device parameters. The value of $V_{inj}$ is a bias-dependent injection parameter
and typically ranges from 60 mV to 0.1 V.
3 Presynaptic adaptation
The first mechanism adapts the synaptic efficacy to the presynaptic firing rate over long
time constants. The circuit for this adaptation mechanism is shown in Figure 1. The synaptic current is generated by a series of two transistors; one is driven by the presynaptic input
and the other by the floating-gate voltage. The floating-gate voltage stores the synaptic efficacy of the synapse. A discrete amount of charge is integrated on a diode capacitor every
time there is a presynaptic spike. The charge that is dumped onto the capacitor depends
on the input frequency and the synaptic weight. The excitatory postsynaptic current to the
membrane of the neuron depends also on the gain of the current mirror. The tunneling
mechanism, which is controlled by $V_{tun}$, is continuously on, so the synaptic efficacy slowly
decreases over time. The injection mechanism is turned on only when there is a presynaptic
spike. This presynaptic adaptation can model the contrast gain control curves of cortical
simple cells.
3.1 Steady-state analysis
In steady state, the tunneling current $I_{tun}$ is equal to the average injection current $I_{inj}$,
and they are as follows:

$$I_{tun} = I_{ot}\, e^{-V_o/(V_{tun} - V_{fg0})} \qquad (1)$$

$$I_{inj} = \frac{I_{0pb}\, e^{\kappa V_{fg0}/U_T}\, T_\delta}{(e^{Q_T} - 1)\, A\, Q_T}\, f_i \qquad (2)$$

where A is the gain of the current-mirror integrator, $Q_T = C_d U_T/\kappa$, $V_{fg0}$ is the steady-state
floating-gate voltage, $f_i$ is the presynaptic rate and $T_\delta$ is the pulse width of the presynaptic
pulse. From Equations 1 and 2, we can solve for $V_{fg0}$ and thus determine the synaptic
current $I_{syn}$:

$$I_{syn} = I_{0pb}\, e^{\kappa V_{fg0}/U_T} = I_m/(f_i T_\delta)^\beta.$$

In this equation, $I_m$ is a prefactor and $\beta$ is approximately 1. The steady-state input
current is given by $I_{epsc} = I_{syn}\, T_\delta\, A\, f_i \approx I_m A$; thus it is independent of the presynaptic
input frequency.
3.2 Transient analysis
With a transient change in the presynaptic frequency $f_i$, the initial postsynaptic frequency
is given by:

$$f_o + df_o = m\, I_m\, A\, \frac{f_i + df_i}{f_i}. \qquad (3)$$
Figure 2: Adaptation curves of synaptic efficacy to presynaptic frequencies using long time
constant adaptation mechanisms (postsynaptic frequency in Hz versus presynaptic frequency in Hz; the steep segments mark the transient gain).
As derived from Equation 3, we see that the transient change in the neuron's spiking rate is
dependent on the contrast of the input spiking rate, $df_i/f_i$:

$$df_o = m\, I_m\, A\, \frac{df_i}{f_i} = f_o\, \frac{df_i}{f_i} \quad \Rightarrow \quad \frac{df_o}{df_i} = \frac{f_o}{f_i}. \qquad (4)$$
Hence, the transient gain of the neuron is equal to the ratio of the postsynaptic spiking rate
to the presynaptic input rate and it decreases with the input rate.
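The following behavioural simulation (an abstract rate model with made-up constants a, b and gain, not the transistor equations) captures the loop just analysed: the weight relaxes toward a value inversely proportional to the presynaptic rate, so a step in $f_i$ produces an immediate output change proportional to the input contrast, followed by a slow return to the original output rate, as measured in Figure 3 below.

```python
import numpy as np

a, b, gain = 1.0, 0.02, 1.0         # tunnelling/injection strengths (assumed)

def simulate(f_in, dt=0.01):
    w = a / (b * f_in[0])            # start at the steady-state weight
    f_out = np.empty(len(f_in))
    for t, fi in enumerate(f_in):
        f_out[t] = gain * w * fi     # output tracks weight * input rate
        w += dt * (a - b * fi * w)   # slow floating-gate weight dynamics
    return f_out

f_in = np.concatenate([np.full(500, 350.0), np.full(5000, 300.0)])
f_out = simulate(f_in)
print(f_out[499], f_out[500], f_out[-1])   # 50.0, ~42.9 (contrast dip), ~50.0
```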
3.3 Experimental results
We measured the transient and steady-state spiking rates of the neuron around four different steady-state presynaptic rates of 100 Hz, 150 Hz, 200 Hz, and 250 Hz. In these measurements, the drain of the pbase injection transistor was set at 4 V and the tunnelling voltage
was set at 35.3 V. For each steady-state presynaptic rate, we presented step increases and
decreases in the presynaptic rate of 15 Hz, 30 Hz, 45 Hz, and 60 Hz. The instantaneous postsynaptic rate is plotted along one of the four steep curves in Figure 2. After every change in
the presynaptic rate, we returned the presynaptic rate to its steady-state value before we
presented the next change in presynaptic rate. The transient gain of the curves decreases
for higher input spiking rates. This is predicted by Equation 4.
We also recorded the dynamics of the adaptation mechanisms by measuring the spiking
rate of the neuron when the presynaptic frequency was decreased at time t = 0 from 350
Hz to 300 Hz as shown in Figure 3. The system adapts over a time constant of minutes
back to the initial output frequency. These data show that the synaptic efficacy adapted to a
higher weight value over time. The time constant of adaptation can be increased by either
increasing the tunnelling voltage or the pbase injector's drain voltage, Vd.
Figure 3: Temporal adaptation of the spiking rate of the neuron to a decrease in the presynaptic
frequency from 350 Hz to 300 Hz (output frequency in Hz versus time in seconds). The
smooth line is an exponential fit to the data curve.
4 Postsynaptic adaptation
In the second mechanism, the neuron's spiking rate determines the synaptic "threshold".
The schematic of this adaptation circuitry is shown in Figure 4. The floating-gate pbase
transistor provides a quiescent input to the neuron so that the neuron fires at a quiescent rate.
The tunneling mechanism is always turned on so the neuron's spiking rate increases in time
if the neuron does not spike. However the injection mechanism turns on when the neuron
spikes. The time constant of these mechanisms is in terms of seconds to minutes. The
increase in the floating-gate voltage is equivalent to a decrease in the synaptic threshold. If
the neuron's activity is high, the injection mechanism turns on thus decreasing the floatinggate voltage and the input current to the neuron. These two opposing mechanisms ensure
that the cell will remain at a constant activity under steady-state conditions. In other words,
the threshold of the neuron is modulated by its output spiking rate. The threshold of the
neuron continuously decreases and each output spike increases the threshold.
4.1
Steady-state analysis
Similar equations as in Section 3.1 can be used to solve for V/ gD , thus leading us to the
following expression for the steady-state input current, linD:
kV/ gO
linD = lopbe----rJ'T = Im/(foT/j)"Y
where 1m is a preconstant and 'Y is close to 1.
4.2
Transient analysis
When a positive step voltage is applied to v;,,,,, the step change,
floating gate. The initial transient current is :
~V,
is coupled into the
!.~
SPike
.--_
----1
I :~
O"'~~~J It1.11.11
/--
Membrane
voltage, Vm
I
Adaptation I
circuitry ~
\
-l
Vtun -lI
...,...
I
~~_________-_?_____V~f9~______\\____~//
Figure 4: Schematic of neuron circuit with long time constant mechanisms for postsynaptic
adaptation.
and the initial increase in the postsynaptic firing rate is
k~V
fo
+ dfo = foe""fYT.
If we assume that the step input, Vin = 10g(li) (where fi is the firing rate of the presynaptic
neuron), then the change in the floating-gate voltage is described by ~ V = dfd Ii- We then
solve for dfo,
dfo
k~V
= e UT
fo
-
-
k dfi
1 ~ --.
Ur Ii
(5)
Equation 5 shows that the transient change in the neuron's spiking rate is proportional to
the input contrast in the firing rate. With time, the floating-gate voltage adapts back to the
steady-state condition, so the spiking rate returns to fo.
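A matching behavioural sketch (assumed constants, abstract rate model, not the circuit equations) of this postsynaptic loop: tunnelling steadily raises the quiescent input current while every output spike injects it back down, so after a step coupled into the gate the rate jumps by a factor $e^{\kappa\Delta V/U_T}$ and then decays back to the set point a/b, as measured in Figure 5 below. The step factor 0.4 is illustrative only.

```python
import numpy as np

a, b, gain, dt = 20.0, 1.0, 1.0, 0.05    # set-point rate a/b = 20 Hz (assumed)

I = a / (b * gain)                        # start at equilibrium input current
rates = []
for t in range(8000):
    if t == 2000:
        I *= np.exp(0.4)                  # step coupled into the floating gate
    f_o = gain * I
    I += dt * (a - b * f_o)               # tunnelling up, spike injection down
    rates.append(f_o)

print(rates[1999], rates[2000], rates[-1])   # 20.0, ~29.8, back to ~20.0
```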
4.3 Experimental results
In these experiments, we set the tunneling voltage $V_{tun}$ to 28 V, and the injection voltage
to 6.6 V. We coupled a step decrease of 0.2 V into the floating-gate voltage and then measured the output frequency of the neuron over a period of 10 minutes. The output of this
experiment is shown in Figure 5. The frequency dropped from about 19 Hz to 13 Hz, but the
circuit adapted after this initial perturbation and the spiking rate of the neuron returned to
about 19 Hz over 26 min. A similar experiment was performed, but this time a step increase
of 0.2 V was coupled into the floating gate (shown in Figure 5). Initially, the neuron's
rate increased from 20 Hz to 28 Hz, but over a long period of minutes the firing rate returned
to 20 Hz.
5 Conclusion
silicon integrate-and-fire neuron in a normal CMOS process. These homeostatic mechanisms can be combined with short time constant synaptic depressing synapses on the same
neuron to provide a range of adapting mechanisms. The presynaptic adaptation mechanism
can also account for the contrast gain curves of cortical simple cells.
can also account for the contrast gain control curves of cortical simple cells.

Figure 5: Response of silicon neuron to an increase and a decrease of a step input of 0.2 V
(output frequency in Hz versus time in seconds). The curve shows that the adaptation time
constant is on the order of about 10 min.
Acknowledgments
We thank Rodney Douglas for supporting this work, the MOSIS foundation for fabricating
this circuit, and Tobias Delbrück for proofreading this document. This work was supported
in part by the Swiss National Foundation Research SPP grant and the U.S. Office of Naval
Research.
References
Desai, N., L. Rutherford, and G. Turrigiano (1999, Jun). Plasticity in the intrinsic excitability of cortical pyramidal neurons. Nature Neuroscience 2(6), 515-520.

Diorio, C., B. A. Minch, and P. Hasler (1999). Floating-gate MOS learning systems. Proceedings of the International Symposium on the Future of Intellectual Integrated Electronics (ISFIE), 515-524.

Lenzlinger, M. and E. H. Snow (1969). Fowler-Nordheim tunneling into thermally grown SiO₂. Journal of Applied Physics 40, 278-283.

Liu, Z., J. Golowasch, E. Marder, and L. Abbott (1998). A model neuron with activity-dependent conductances regulated by multiple calcium sensors. Journal of Neuroscience 18(7), 2309-2320.

Mead, C. (1989). Analog VLSI and Neural Systems. Reading, MA: Addison-Wesley.

Ohzawa, I., G. Sclar, and R. Freeman (1985). Contrast gain control in the cat's visual system. Journal of Neurophysiology 54, 651-667.

Shin, J. and C. Koch (1999). Dynamic range and sensitivity adaptation in a silicon spiking neuron. IEEE Trans. on Neural Networks 10(5), 1232-1238.

Simoni, M. and S. DeWeerth (1999). Adaptation in an aVLSI model of a neuron. IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing 46(7), 967-970.

Stemmler, M. and C. Koch (1999). How voltage-dependent conductances can adapt to maximize the information encoded by neuronal firing rate. Nature Neuroscience 2(6), 521-527.
ny:1 lun:1 dfi:2 exponential:1 fyt:1 minute:6 explored:1 exists:1 intrinsic:1 adding:1 mirror:2 implant:1 explore:2 likely:1 visual:1 sclar:1 ch:2 determines:1 ma:1 appreciable:1 dfo:5 change:7 determined:1 typical:2 uniformly:1 inj:2 experimental:2 modulated:1 ethz:1 |
Active Support Vector Machine Classification

O. L. Mangasarian
Computer Sciences Dept.
University of Wisconsin
1210 West Dayton Street
Madison, WI 53706
olvi@cs.wisc.edu

David R. Musicant
Dept. of Mathematics and Computer Science
Carleton College
One North College Street
Northfield, MN 55057
dmusican@carleton.edu
Abstract
An active set strategy is applied to the dual of a simple reformulation of the standard quadratic program of a linear support vector
machine. This application generates a fast new dual algorithm
that consists of solving a finite number of linear equations, with a
typically large dimensionality equal to the number of points to be
classified. However, by making novel use of the Sherman-MorrisonWoodbury formula , a much smaller matrix of the order of the original input space is inverted at each step. Thus, a problem with a
32-dimensional input space and 7 million points required inverting
positive definite symmetric matrices of size 33 x 33 with a total running time of 96 minutes on a 400 MHz Pentium II. The algorithm
requires no specialized quadratic or linear programming code, but
merely a linear equation solver which is publicly available.
1 Introduction
Support vector machines (SVMs) [23, 5, 14, 12] are powerful tools for data classification. Classification is achieved by a linear or nonlinear separating surface in the
input space of the dataset. In this work we propose a very fast simple algorithm,
based on an active set strategy for solving quadratic programs with bounds [18].
The algorithm is capable of accurately solving problems with millions of points and
requires nothing more complicated than a commonly available linear equation solver
[17, 1, 6] for a typically small (100) dimensional input space of the problem.
Key to our approach are the following two changes to the standard linear SVM:
1. Maximize the margin (distance) between the parallel separating planes with
respect to both orientation (w) as well as location relative to the origin (γ).
See equation (7) below. Such an approach was also successfully utilized in
the successive overrelaxation (SOR) approach of [15] as well as the smooth
support vector machine (SSVM) approach of [12].
2. The error in the soft margin (y) is minimized using the 2-norm squared
instead of the conventional 1-norm. See equation (7) . Such an approach
has also been used successfully in generating virtual support vectors [4].
These simple, but fundamental changes, lead to a considerably simpler positive
definite dual problem with nonnegativity constraints only. See equation (8).
In Section 2 of the paper we begin with the standard SVM formulation and its
dual and then give our formulation and its simpler dual. We corroborate with solid
computational evidence that our simpler formulation does not compromise on generalization ability as evidenced by numerical tests in Section 4 on 6 public datasets.
See Table 1. Section 3 gives our active support vector machine (ASVM) Algorithm
3.1 which consists of solving a system of linear equations in m dual variables with
a positive definite matrix. By invoking the Sherman-Morrison-Woodbury (SMW)
formula (1) we need only invert an (n + 1) x (n + 1) matrix where n is the dimensionality of the input space. This is a key feature of our approach that allows us to
solve problems with millions of points by merely inverting much smaller matrices of
the order of n. In concurrent work [8] Ferris and Munson also use the SMW formula
but in conjunction with an interior point approach to solve massive problems based
on our formulation (8) as well as the conventional formulation (6). Burges [3] has
also used an active set method, but applied to the standard SVM formulation (2)
instead of (7) as we do here. Both this work and Burges' appeal, in different ways,
to the active set computational strategy of More and Toraldo [18]. We note that
an active set computational strategy bears no relation to active learning. Section
4 describes our numerical results which indicate that the ASVM formulation has a
tenfold testing correctness that is as good as the ordinary SVM, and has the capability of accurately solving massive problems with millions of points that cannot be
attacked by standard methods for ordinary SVMs.
We now describe our notation and give some background material. All vectors will
be column vectors unless transposed to a row vector by a prime $'$. For a vector
$x \in R^n$, $x_+$ denotes the vector in $R^n$ with all of its negative components set to
zero. The notation $A \in R^{m\times n}$ will signify a real $m \times n$ matrix. For such a matrix,
$A'$ will denote the transpose of A and $A_i$ will denote the i-th row of A. A vector
of ones or zeroes in a real space of arbitrary dimension will be denoted by e or
0, respectively. The identity matrix of arbitrary dimension will be denoted by I.
For two vectors x and y in $R^n$, $x \perp y$ denotes orthogonality, that is $x'y = 0$. For
$u \in R^m$, $Q \in R^{m\times m}$ and $B \subseteq \{1, 2, \ldots, m\}$, $u_B$ denotes $u_{i\in B}$, $Q_B$ denotes $Q_{i\in B}$
and $Q_{BB}$ denotes a principal submatrix of Q with rows $i \in B$ and columns $j \in B$.
The notation $\arg\min_{x\in S} f(x)$ denotes the set of minimizers in the set S of the
real-valued function f defined on S. We use := to denote definition. The 2-norm
of a matrix Q will be denoted by $\|Q\|_2$. A separating plane, with respect to two
given point sets A and B in $R^n$, is a plane that attempts to separate $R^n$ into two
halfspaces such that each open halfspace contains points mostly of A or B. A special
case of the Sherman-Morrison-Woodbury (SMW) formula [9] will be utilized:

$$(I/\nu + HH')^{-1} = \nu\big(I - H(I/\nu + H'H)^{-1}H'\big), \qquad (1)$$

where ν is a positive number and H is an arbitrary $m \times k$ matrix. This formula
enables us to invert a large $m \times m$ matrix by merely inverting a smaller $k \times k$ matrix.
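A quick numerical check of (1) with a synthetic H (the dimensions chosen here loosely mirror the paper's 32-dimensional setting, but are otherwise arbitrary): the direct m x m inverse agrees with the expression built from the much smaller k x k system.

```python
import numpy as np

rng = np.random.default_rng(1)
m, k, nu = 2000, 33, 0.1
H = rng.normal(size=(m, k))

direct = np.linalg.inv(np.eye(m) / nu + H @ H.T)    # m x m inversion
small = np.linalg.inv(np.eye(k) / nu + H.T @ H)     # k x k inversion only
smw = nu * (np.eye(m) - H @ small @ H.T)

print(np.max(np.abs(direct - smw)))                  # agreement to ~1e-10
```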
2 The Linear Support Vector Machine
We consider the problem of classifying m points in the n-dimensional real space
$R^n$, represented by the $m \times n$ matrix A, according to membership of each point $A_i$
in the class A+ or A- as specified by a given $m \times m$ diagonal matrix D with +1's
or -1's along its diagonal. For this problem the standard SVM with a linear kernel
[23, 5] is given by the following quadratic program with parameter $\nu > 0$:

$$\min_{(w,\gamma,y)\in R^{n+1+m}} \nu\, e'y + \tfrac{1}{2}w'w \quad \text{s.t.} \quad D(Aw - e\gamma) + y \ge e, \quad y \ge 0. \qquad (2)$$
Figure 1: The bounding planes (3), separated by a soft (i.e. with some errors) margin
of width $2/\|w\|_2$, and the plane (4) approximately separating A+ from A-.
Here w is the normal to the bounding planes:

$$x'w = \gamma \pm 1 \qquad (3)$$

and γ determines their location relative to the origin (Figure 1). The plane $x'w =
\gamma + 1$ bounds the A+ points, possibly with error, and the plane $x'w = \gamma - 1$ bounds
the A- points, also possibly with some error. The separating surface is the plane:

$$x'w = \gamma, \qquad (4)$$

midway between the bounding planes (3). The quadratic term in (2) is twice the
reciprocal of the square of the 2-norm distance $2/\|w\|_2$ between the two bounding
planes of (3) (see Figure 1). This term maximizes this distance, which is often called
the "margin". If the classes are linearly inseparable, as depicted in Figure 1, then
the two planes bound the two classes with a "soft margin". That is, they bound each
set approximately with some error determined by the nonnegative error variable y:
$$A_i w + y_i \ge \gamma + 1, \ \text{for } D_{ii} = 1, \qquad A_i w - y_i \le \gamma - 1, \ \text{for } D_{ii} = -1. \qquad (5)$$
Traditionally the 1-norm of the error variable y is minimized parametrically with
weight ν in (2), resulting in an approximate separation as depicted in Figure 1. The
dual to the standard quadratic linear SVM (2) [13, 22, 14, 7] is the following:

$$\min_{u\in R^m} \tfrac{1}{2}u'DAA'Du - e'u \quad \text{s.t.} \quad e'Du = 0, \quad 0 \le u \le \nu e. \qquad (6)$$
The variables (w, γ) of the primal problem which determine the separating surface
(4) can be obtained from the solution of the dual problem above [15, Eqns. 5 and
7]. We note immediately that the matrix DAA'D appearing in the dual objective
function (6) is not positive definite in general because typically $m \gg n$. Also,
there is an equality constraint present, in addition to bound constraints, which for
large problems necessitates special computational procedures such as SMO [21].
Furthermore, a one-dimensional optimization problem [15] must be solved in order
to determine the locator γ of the separating surface (4). In order to overcome all
these difficulties as well as that of dealing with the necessity of having to essentially
invert a very large matrix of the order of $m \times m$, we propose the following simple
but critical modification of the standard SVM formulation (2). We change $\|y\|_1$ to
$\|y\|_2^2$, which makes the constraint $y \ge 0$ redundant. We also append the term $\gamma^2$ to
$w'w$. This in effect maximizes the margin between the parallel separating planes
(3) with respect to both w and γ [15], that is with respect to both orientation and
location of the planes, rather than just with respect to w, which merely determines
the orientation of the plane. This leads to the following reformulation of the SVM:
$$\min_{(w,\gamma,y)\in R^{n+1+m}} \nu\,\frac{y'y}{2} + \frac{1}{2}(w'w + \gamma^2) \quad \text{s.t.} \quad D(Aw - e\gamma) + y \ge e. \qquad (7)$$
The dual of this problem is [13]:

$$\min_{0\le u\in R^m} \frac{1}{2}u'\Big(\frac{I}{\nu} + D(AA' + ee')D\Big)u - e'u. \qquad (8)$$
The variables (w, γ) of the primal problem which determine the separating surface
(4) are recovered directly from the solution of the dual (8) above by the relations:

$$w = A'Du, \qquad y = u/\nu, \qquad \gamma = -e'Du. \qquad (9)$$
We immediately note that the matrix appearing in the dual objective function is
positive definite and that there is no equality constraint and no upper bound on the
dual variable u. The only constraint present is a simple nonnegativity one. These
facts lead us to our simple finite active set algorithm which requires nothing more
sophisticated than inverting an (n + 1) x (n + 1) matrix at each iteration in order
to solve the dual problem (8).
3 ASVM (Active Support Vector Machine) Algorithm
The algorithm consists of determining a partition of the dual variable u into nonbasic
and basic variables. The nonbasic variables are those which are set to zero. The
values of the basic variables are determined by finding the gradient of the objective
function of (8) with respect to these variables, setting this gradient equal to zero, and
solving the resulting linear equations for the basic variables. If any basic variable
takes on a negative value after solving the linear equations, it is set to zero and
becomes nonbasic. This is the essence of the algorithm. In order to make the
algorithm converge and terminate, a few additional safeguards need to be put in
place in order to allow us to invoke the Moré-Toraldo finite termination result [18].
The other key feature of the algorithm is a computational one and makes use of the
SMW formula. This feature allows us to invert an (n + 1) x (n + 1) matrix at each
step instead of a much bigger matrix of order m x m.
Before stating our algorithm we define two matrices to simplify notation as follows:

$$H = D[A \ \ {-e}], \qquad Q = I/\nu + HH'. \qquad (10)$$

With these definitions the dual problem (8) becomes

$$\min_{0\le u\in R^m} f(u) := \tfrac{1}{2}u'Qu - e'u. \qquad (11)$$

It will be understood that within the ASVM Algorithm, $Q^{-1}$ will always be evaluated using the SMW formula, and hence only an $(n+1)\times(n+1)$ matrix is inverted.
We state our algorithm now. Note that commented (%) parts of the algorithm are
not needed in general and were rarely used in our numerical results presented in
Section 4. The essence of the algorithm is displayed in the two boxes below.
Algorithm 3.1 Active SVM (ASVM) Algorithm for (8).

(0) Start with $u^0 := (Q^{-1}e)_+$. For $i = 1, 2, \ldots$, having $u^i$ compute $u^{i+1}$ as
follows.

(1) Define $B_i := \{j \mid u^i_j > 0\}$, $N_i := \{j \mid u^i_j = 0\}$.

(2) Determine $u^{i+1}_{B_i} := (Q_{B_iB_i}^{-1}\, e_{B_i})_+$, $u^{i+1}_{N_i} := 0$.
Stop if $u^{i+1}$ is the global solution, that is if $0 \le u^{i+1} \perp Qu^{i+1} - e \ge 0$.

(2a) % If $f(u^{i+1}) \ge f(u^i)$, then go to (4a).

(2b) % If $0 \le u^{i+1}_{B_{i+1}} \perp Q_{B_{i+1}B_{i+1}}\, u^{i+1}_{B_{i+1}} - e_{B_{i+1}} \ge 0$, then $u^{i+1}$ is a global solution
on the face of active constraints $u_{N_i} = 0$. Set $u^i := u^{i+1}$ and go to (4b).

(3) Set $i := i + 1$ and go to (1).

(4a) % Move in the direction of the global minimum on the face of active
constraints $u_{N_i} = 0$. Set $\bar{u}^{i+1}_{B_i} := Q_{B_iB_i}^{-1}\, e_{B_i}$ and $\bar{u}^{i+1}_{N_i} := 0$, and set
$u^{i+1} := u^i + \bar\lambda(\bar{u}^{i+1} - u^i)$ with
$\bar\lambda = \arg\min_{0\le\lambda\le 1}\{f(u^i + \lambda(\bar{u}^{i+1} - u^i)) \mid u^i + \lambda(\bar{u}^{i+1} - u^i) \ge 0\}$. If
$u^{i+1}_j = 0$ for some $j \in B_i$, set $i := i + 1$ and go to (1). Otherwise $u^{i+1}$ is a
global minimum on the face $u_{N_i} = 0$; go to (4b).

(4b) % Iterate a gradient projection step. Set $k := 0$ and $u^k := u^i$. Iterate
$u^{k+1} := \arg\min_{0\le\lambda\le 1} f\big((u^k - \lambda(Qu^k - e))_+\big)$, $k := k + 1$, until $f(u^k) <
f(u^i)$. Set $u^{i+1} := u^k$, set $i := i + 1$ and go to (1).
Remark 3.2 All commented (%) parts of the algorithm are optional and are not
usually implemented unless the algorithm gets stuck, which it rarely did on our
examples. Hence our algorithm is particularly simple and consists of steps (0),
(1), (2) and (3). The commented parts were inserted in order to comply with the
active set strategy of the Moré-Toraldo result [18], for which they give finite termination.

Remark 3.3 The iteration in step (4b) is a gradient projection step which is guaranteed to converge to the global solution of (8) [2, pp. 223-225] and is placed here to
ensure that the strict inequality $f(u^k) < f(u^i)$ eventually holds as required in [18].
Similarly, the step in (4a) ensures that the function value does not increase when it
remains on the same face, in compliance with [18, Algorithm BCQP(b)].
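A self-contained sketch (ours, not the authors' C++ implementation) of the uncommented core, steps (0)-(3): on each iteration the active-set system $Q_{B_iB_i}u_{B_i} = e_{B_i}$ is solved through the SMW formula (1), so only an (n+1) x (n+1) system is ever formed. The optional safeguards (2a)-(4b) are omitted, and the toy data, parameter values and the fixed iteration cap are assumptions.

```python
import numpy as np

def asvm(A, d, nu, max_iter=100, tol=1e-6):
    m, n = A.shape
    H = d[:, None] * np.hstack([A, -np.ones((m, 1))])   # H = D[A  -e], cf. (10)

    def solve_QBB(B):
        # Q_BB^{-1} e_B via SMW (1): Q_BB = I/nu + H_B H_B'
        HB = H[B]
        small = np.eye(n + 1) / nu + HB.T @ HB          # only (n+1) x (n+1)
        eB = np.ones(len(B))
        return nu * (eB - HB @ np.linalg.solve(small, HB.T @ eB))

    u = np.maximum(solve_QBB(np.arange(m)), 0.0)        # step (0)
    for _ in range(max_iter):
        grad = u / nu + H @ (H.T @ u) - 1.0             # Qu - e without forming Q
        if np.linalg.norm(u - np.maximum(u - grad, 0.0)) < tol:
            break                                        # residual used in Section 4
        B = np.flatnonzero(u > 0)                        # step (1)
        u = np.zeros(m)
        u[B] = np.maximum(solve_QBB(B), 0.0)             # step (2)
    w = A.T @ (d * u)                                    # recover (w, gamma) by (9)
    gamma = -np.sum(d * u)
    return w, gamma

# toy usage: two Gaussian point sets A+ and A-
rng = np.random.default_rng(0)
A = np.vstack([rng.normal(1.0, 1.0, (100, 2)), rng.normal(-1.0, 1.0, (100, 2))])
d = np.concatenate([np.ones(100), -np.ones(100)])
w, gamma = asvm(A, d, nu=1.0)
print("training accuracy:", np.mean(np.sign(A @ w - gamma) == d))
```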
4 Numerical Implementation and Comparisons
We implemented ASVM in Visual C++ 6.0 under Windows NT 4.0. The experiments were run on the UW-Madison Data Mining Institute Locop2 machine, which
utilizes a 400 MHz Pentium II Xeon processor and a maximum of 2 Gigabytes of
memory available per process. We wrote all the code ourselves except for the linear
equation solver, for which we used CLAPACK [1, 6]. Our stopping criterion for
ASVM is triggered when the error bound residual [16] $\|u - (u - Qu + e)_+\|$, which
is zero at the solution of (11), goes below 0.1.
The first set of experiments is designed to show that our reformulation (8) of
the SVM (7) and its associated algorithm ASVM yield similar performance to the
standard SVM (2), referred to here as SVM-QP. For six datasets available from the
UCI Machine Learning Repository [19], we performed tenfold cross validation in
order to compare test set accuracies between ASVM and SVM-QP. We implemented
SVM-QP using the high-performing CPLEX barrier quadratic programming solver
[10], and utilized a tuning set for both algorithms to find the optimal value of the
parameter ν, using the default stopping criterion of CPLEX. Altering the CPLEX
default stopping criterion to match that of ASVM did not result in significant change
in timing relative to ASVM, but did reduce test set correctness.

In order to obtain additional timing comparison information, we also ran the well-known optimized SVM algorithm SVMlight [11]. Joachims, the author of SVMlight,
provided us with the newest version of the software (Version 3.10b) and advice on
setting the parameters. All features for these experiments were normalized to the
range [-1, +1] as recommended in the SVMlight documentation. We chose to use
Dataset (m x n)              Algorithm   Training     Testing      Time
                                         Correctness  Correctness  (CPU sec)
Liver Disorders (345 x 6)    CPLEX       70.76%       68.41%       7.87
                             SVMlight    70.37%       68.12%       0.26
                             ASVM        70.40%       67.25%       0.03
Cleveland Heart (297 x 13)   CPLEX       87.50%       85.56%       4.17
                             SVMlight    87.50%       64.20%       0.17
                             ASVM        87.24%       64.20%       0.05
Pima Diabetes (768 x 8)      CPLEX       77.36%       76.95%       128.90
                             SVMlight    77.36%       76.95%       0.19
                             ASVM        78.04%       78.12%       0.08
Ionosphere (351 x 34)        CPLEX       92.81%       88.60%       9.84
                             SVMlight    92.81%       88.60%       0.23
                             ASVM        93.29%       87.75%       0.26
Tic Tac Toe (958 x 9)        CPLEX       65.34%       65.34%       206.52
                             SVMlight    65.34%       65.34%       0.23
                             ASVM        70.27%       69.72%       0.05
Votes (435 x 16)             CPLEX       96.02%       95.85%       27.26
                             SVMlight    96.02%       95.85%       0.06
                             ASVM        96.73%       96.07%       0.09

Table 1: ASVM compared with conventional SVM-QP (CPLEX and SVMlight)
on UCI datasets. ASVM test correctness is comparable to SVM-QP, with
timing much faster than CPLEX and faster than or comparable to SVMlight.
# of Points   Iterations   Training      Testing       Time
                           Correctness   Correctness   (CPU min)
4 million     5            86.09%        86.06%        38.04
7 million     5            86.10%        86.28%        95.57

Table 2: Performance of ASVM on NDC generated datasets in $R^{32}$ ($\nu = 0.01$).
the default termination error criterion in SVMlight of 0.001, which is actually a less
stringent criterion than the one we used for ASVM. This is because the criterion we
used for ASVM (see above) is an aggregate over the errors for all points, whereas
the SVMlight criterion reflects a minimum error threshold for each point.

The second set of experiments shows that ASVM performs well on massive datasets.
We created synthetic data of Gaussian distribution by using our own NDC Data
Generator [20], as suggested by Usama Fayyad. The results of our experiments are
shown in Table 2. We did try to run SVMlight on these datasets as well, but we
ran into memory difficulties. Note that for these experiments, all the data was
brought into memory. As such, the running time reported consists of the time
used to actually solve the problem to termination, excluding I/O time. This is
consistent with the measurement techniques used by other popular approaches [11,
21]. Putting all the data in memory is simpler to code and results in faster running
times. However, it is not a fundamental requirement of our algorithm: block
matrix multiplications, incremental evaluations of $Q^{-1}$ using another application of
the SMW formula, and indices on the dataset can be used to create an efficient disk-based version of ASVM.
5 Conclusion
A very fast, finite and simple algorithm, ASVM, capable of classifying massive
datasets has been proposed and implemented. ASVM requires nothing more complex than a commonly available linear equation solver for solving small systems
with few variables even for massive datasets. Future work includes extensions to
parallel processing of the data, handling very large datasets directly from disk as
well as extending our approach to nonlinear kernels.
Acknowledgements

We are indebted to our colleagues Thorsten Joachims, for helping us to get SVMlight
running significantly faster on the UCI datasets, and Glenn Fung, for his efforts
in running the experiments for revisions of this work. Research described in this
Data Mining Institute Report 00-04, April 2000, was supported by National Science Foundation Grants CCR-9729842 and CDA-9623632, by Air Force Office of
Scientific Research Grant F49620-00-1-0085 and by Microsoft.
References
[1] E. Anderson, Z. Bai, C. Bischof, J. Demmel, J. Dongarra, J. Du Cros, A. Greenbaum, S. Hammarling, A. McKenney, S. Ostrouchov, and D. Sorensen. LAPACK User's Guide. SIAM, Philadelphia, Pennsylvania, second edition, 1995.
[2] D. P. Bertsekas. Nonlinear Programming. Athena Scientific, Belmont , MA,
second edition, 1999.
[3] C. J. C. Burges. A tutorial on support vector machines for pattern recognition.
Data Mining and Knowledge Discovery, 2(2):121-167, 1998.
[4] C. J. C. Burges and B. Sch6lkopf. Improving the accuracy and speed of support vector machines. In M. C. Mozer, M. I. Jordan, and T. Petsche, editors,
Advances in Neural Information Processing Systems -9-, pages 375-381, Cambridge, MA, 1997. MIT Press.
[5] V. Cherkassky and F. Mulier. Learning from Data - Concepts, Theory and
Methods. John Wiley & Sons, New York, 1998.
[6] CLAPACK. f2c'ed version of LAPACK. http://www.netlib.org/clapack.
[7] N. Cristianini and J. Shawe-Taylor. An Introduction to Support Vector Machines. Cambridge University Press, Cambridge, 2000.
[8] M. C. Ferris and T. S. Munson. Interior point methods for massive support
vector machines. Technical Report 00-05, Computer Sciences Department,
University of Wisconsin, Madison, Wisconsin, May 2000.
[9] G. H. Golub and C. F. Van Loan. Matrix Computations. The John Hopkins
University Press, Baltimore, Maryland, 3rd edition, 1996.
[10] ILOG, Incline Village, Nevada. CPLEX 6.5 Reference Manual, 1999.
[11] T. Joachims. SVMlight, 1998. http://www-ai.informatik.uni-dortmund.de/FORSCHUNG/VERFAHREN/SVM_LIGHT/svm_light.eng.html.
[12] Yuh-Jye Lee and O. L. Mangasarian. SSVM: A smooth support vector machine.
Computational Optimization and Applications, 2000.
[13] O. L. Mangasarian. Nonlinear Programming. SIAM, Philadelphia, PA, 1994.
[14] O. L. Mangasarian. Generalized support vector machines. In A. Smola,
P. Bartlett, B. Schölkopf, and D. Schuurmans, editors, Advances in Large Margin Classifiers, pages 135-146, Cambridge, MA, 2000. MIT Press.
[15] O. L. Mangasarian and D. R. Musicant. Successive overrelaxation for support
vector machines. IEEE Transactions on Neural Networks, 10:1032-1037, 1999.
[16] O. L. Mangasarian and J. Ren. New improved error bounds for the linear
complementarity problem. Mathematical Programming, 66:241-255, 1994.
[17] MATLAB. User's Guide. The MathWorks, Inc., Natick, MA 01760, 1992.
[18] J. J. Moré and G. Toraldo. Algorithms for bound constrained quadratic programs. Numerische Mathematik, 55:377-400, 1989.
[19] P. M. Murphy and D. W. Aha. UCI repository of machine learning databases,
1992. www.ics.uci.edu/~mlearn/MLRepository.html.
[20] D. R. Musicant. NDC: normally distributed clustered datasets, 1998.
www.cs.wisc.edu/~musicant/data/ndc/.
[21] J. Platt. Sequential minimal optimization: A fast algorithm for training support vector machines. In Schölkopf et al. [22], pages 185-208.
[22] B. Schölkopf, C. Burges, and A. Smola (editors). Advances in Kernel Methods:
Support Vector Machines. MIT Press, Cambridge, MA, 1998.
[23] V. N. Vapnik. The Nature of Statistical Learning Theory. Springer, NY, 1995.
Occam's Razor
Carl Edward Rasmussen
Department of Mathematical Modelling
Technical University of Denmark
Building 321, DK-2800 Kongens Lyngby, Denmark
carl@imm.dtu.dk
http://bayes.imm.dtu.dk
Zoubin Ghahramani
Gatsby Computational Neuroscience Unit
University College London
17 Queen Square, London WCIN 3AR, England
zoubin@gatsby.ucl.ac.uk
http://www.gatsby.ucl.ac.uk
Abstract
The Bayesian paradigm apparently only sometimes gives rise to Occam's
Razor; at other times very large models perform well. We give simple
examples of both kinds of behaviour. The two views are reconciled when
measuring complexity of functions, rather than of the machinery used to
implement them. We analyze the complexity of functions for some linear
in the parameter models that are equivalent to Gaussian Processes, and
always find Occam's Razor at work.
1 Introduction
Occam's Razor is a well known principle of "parsimony of explanations" which is influential in scientific thinking in general and in problems of statistical inference in particular. In
this paper we review its consequences for Bayesian statistical models, where its behaviour
can be easily demonstrated and quantified. One might think that one has to build a prior
over models which explicitly favours simpler models. But as we will see, Occam's Razor is
in fact embodied in the application of Bayesian theory. This idea is known as an "automatic
Occam's Razor" [Smith & Spiegelhalter, 1980; MacKay, 1992; Jefferys & Berger, 1992].
We focus on complex models with large numbers of parameters which are often referred to
as non-parametric. We will use the term to refer to models in which we do not necessarily
know the roles played by individual parameters, and inference is not primarily targeted at
the parameters themselves, but rather at the predictions made by the models. These types
of models are typical for applications in machine learning.
From a non-Bayesian perspective, arguments are put forward for adjusting model complexity in the light of limited training data, to avoid over-fitting. Model complexity is often
regulated by adjusting the number of free parameters in the model and sometimes complexity is further constrained by the use of regularizers (such as weight decay). If the model
complexity is either too low or too high performance on an independent test set will suffer,
giving rise to a characteristic Occam's Hill. Typically an estimator of the generalization
error or an independent validation set is used to control the model complexity.
From the Bayesian perspective, authors seem to take two conflicting stands on the question
of model complexity. One view is to infer the probability of the model for each of several
different model sizes and use these probabilities when making predictions. An alternate
view suggests that we simply choose a "large enough" model and sidestep the problem of
model size selection. Note that both views assume that parameters are averaged over. Example: Should we use Occam's Razor to determine the optimal number of hidden units in a
neural network or should we simply use as many hidden units as possible computationally?
We now describe these two views in more detail.
1.1 View 1: Model size selection
One of the central quantities in Bayesian learning is the evidence, the probability of the data
given the model, P(Y|M_i), computed as the integral over the parameters w of the likelihood
times the prior. The evidence is related to the probability of the model, P(M_i|Y), through
Bayes' rule:

P(M_i|Y) = P(Y|M_i) P(M_i) / P(Y),

where it is not uncommon that the prior on models P(M_i) is flat, such that P(M_i|Y) is
proportional to the evidence. Figure 1 explains why the evidence discourages overcomplex
models, and can be used to select¹ the most probable model.
It is also possible to understand how the evidence discourages overcomplex models and
therefore embodies Occam's Razor by using the following interpretation. The evidence is
the probability that if you randomly selected parameter values from your model class, you
would generate data set Y. Models that are too simple will be very unlikely to generate
that particular data set, whereas models that are too complex can generate many possible
data sets, so again, they are unlikely to generate that particular data set at random.
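This interpretation also suggests a direct, if inefficient, way to approximate the evidence: draw parameters from the prior and average the likelihood of the observed data set. The following is a hypothetical numpy sketch (our own, with made-up function names), not anything from the paper:

```python
import numpy as np

def mc_evidence(y, X, log_lik, sample_prior, n_samples=10000):
    """Monte Carlo estimate of P(Y|M) = E_{w ~ prior}[P(Y|w, M)].
    log_lik(w, X, y) and sample_prior() together define the model M."""
    logs = np.array([log_lik(sample_prior(), X, y)
                     for _ in range(n_samples)])
    # log-mean-exp for numerical stability
    m = logs.max()
    return m + np.log(np.mean(np.exp(logs - m)))
```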
1.2 View 2: Large models
In non-parametric Bayesian models there is no statistical reason to constrain models, as
long as our prior reflects our beliefs. In fact, since constraining the model order (i.e. number of parameters) to some small number would not usually fit in with our prior beliefs
about the true data generating process, it makes sense to use large models (no matter how
much data you have) and pursue the infinite limit if you can². For example, we ought not
to limit the number of basis functions in function approximation a priori since we don't
really believe that the data was actually generated from a small number of fixed basis functions. Therefore, we should consider models with as many parameters as we can handle
computationally.
Neal [1996] showed how multilayer perceptrons with large numbers of hidden units
achieved good performance on small data sets. He used sophisticated MCMC techniques
to implement averaging over parameters. Following this line of thought there is no model
complexity selection task: We don't need to evaluate evidence (which is often difficult)
and we don't need or want to use Occam's Razor to limit the number of parameters in our
model.
¹We really ought to average together predictions from all models weighted by their probabilities.
However if the evidence is strongly peaked, or for practical reasons, we may want to select one as an
approximation.
²For some models, the limit of an infinite number of parameters is a simple model which can be
treated tractably. Two examples are the Gaussian Process limit of Bayesian neural networks [Neal,
1996], and the infinite limit of Gaussian mixture models [Rasmussen, 2000].
Figure 1: Left panel: the evidence as a function of an abstract one dimensional representation of "all possible" datasets. Because the evidence must "normalize", very complex
models which can account for many data sets only achieve modest evidence; simple models
can reach high evidences, but only for a limited set of data. When a dataset Y is observed,
the evidence can be used to select between model complexities. Such selection cannot be
done using just the likelihood, P(Y|w, M_i). Right panel: neural networks with different
numbers of hidden unit form a family of models, posing the model selection problem.
2 Linear in the parameters models - Example: the Fourier model
For simplicity, consider function approximation using the class of models that are linear in
the parameters; this class includes many well known models such as polynomials, splines,
kernel methods, etc:
y(x) = Σ_i w_i φ_i(x)   ⟺   y = w^T Φ,

where y is the scalar output, w are the unknown weights (parameters) of the model, φ_i(x)
are fixed basis functions, Φ_in = φ_i(x^(n)), and x^(n) is the (scalar or vector) input for example number n. For example, a Fourier model for scalar inputs has the form:
y(x) = a_0 + Σ_{d=1}^{D} [a_d sin(dx) + b_d cos(dx)],

where w = {a_0, a_1, b_1, ..., a_D, b_D}. Assuming an independent Gaussian prior on the
weights:

p(w|S, c) ∝ exp(−(S/2) [c_0 a_0² + Σ_{d=1}^{D} c_d (a_d² + b_d²)]),

where S is an overall scale and c_d are precisions (inverse variances) for weights of order
(frequency) d. It is easy to show that Gaussian priors over weights imply Gaussian Process
priors over functions³. The covariance function for the corresponding Gaussian Process
prior is:

K(x, x') = [Σ_{d=0}^{D} cos(d(x − x')) / c_d] / S.

³Under the prior, the joint density of any (finite) set of outputs y is Gaussian.
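For concreteness, this covariance function can be evaluated directly. The snippet below is our own illustration, not code from the paper; the choice c_0 = 1 is an assumption made to keep the precision vector well defined:

```python
import numpy as np

def fourier_cov(x, xp, c, S):
    """K(x, x') = [sum_{d=0}^{D} cos(d (x - x')) / c_d] / S
    for the Fourier model; c is the length-(D+1) precision vector."""
    d = np.arange(len(c))
    return np.sum(np.cos(d * (x - xp)) / c) / S

# Example with D = 7 and c_d = d^3 for d >= 1 (c_0 = 1 assumed):
c = np.concatenate(([1.0], np.arange(1, 8) ** 3.0))
K = fourier_cov(0.3, -0.1, c, S=1.0)
```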
[Figure 2, top panels: predictions for model orders 0 through 11. Bottom panel: bar chart of posterior model probability against model order.]
Figure 2: Top: 12 different model orders for the "unscaled" model: c_d ∝ 1. The mean
predictions are shown with a full line, the dashed and dotted lines limit the 50% and 95%
central mass of the predictive distribution (which is student-t). Bottom: posterior probability of the models, normalised over the 12 models. The probabilities of the models exhibit
an Occam's Hill, discouraging models that are either "too small" or "too big".
2.1 Inference in the Fourier model
Given data D = {x^(n), y^(n) | n = 1, ..., N} with independent Gaussian noise with precision τ, the likelihood is:

p(y|x, w, τ) ∝ Π_{n=1}^{N} exp(−(τ/2) [y^(n) − w^T Φ_n]²).
For analytical convenience, let the scale of the prior be proportional to the noise precision,
S = Cτ, and put vague⁴ Gamma priors on τ and C:

p(τ) ∝ τ^(α₁−1) exp(−β₁τ),    p(C) ∝ C^(α₂−1) exp(−β₂C),

then we can integrate over weights and noise to get the evidence as a function of prior
hyperparameters, C (the overall scale) and c (the relative scales):

E(C, c) = ∫∫ p(y|x, w, τ) p(w|C, τ, c) p(τ) p(C) dτ dw
        = [β₁^α₁ β₂^α₂ Γ(α₁ + N/2) / ((2π)^(N/2) Γ(α₁) Γ(α₂))] × |A|^(−1/2)
          × [β₁ + ½ y^T (I − Φ A⁻¹ Φ^T) y]^(−α₁−N/2) × C^(D+α₂−1/2) exp(−β₂C) c₀^(1/2) Π_{d=1}^{D} c_d,

⁴We choose vague priors by setting α₁ = α₂ = β₁ = β₂ = 0.2 throughout.
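Given the reconstruction of E(C, c) above, the log evidence can be evaluated numerically. The following is our own sketch under those assumptions (scalar inputs, Fourier basis), not code from the paper:

```python
import numpy as np
from scipy.special import gammaln

def log_evidence(x, y, D, C, c, a1=0.2, a2=0.2, b1=0.2, b2=0.2):
    """log E(C, c) for the Fourier model; c = (c_0, ..., c_D)."""
    N = len(y)
    # Design matrix: [1, sin(x), cos(x), ..., sin(Dx), cos(Dx)]
    cols = [np.ones_like(x)]
    for d in range(1, D + 1):
        cols += [np.sin(d * x), np.cos(d * x)]
    Phi = np.stack(cols, axis=1)
    # c with every component except the first duplicated (the "tilde")
    c_t = np.concatenate([[c[0]], np.repeat(c[1:], 2)])
    A = Phi.T @ Phi + C * np.diag(c_t)
    Ainv = np.linalg.inv(A)
    quad = b1 + 0.5 * y @ (y - Phi @ (Ainv @ (Phi.T @ y)))
    return (a1 * np.log(b1) + a2 * np.log(b2)
            + gammaln(a1 + N / 2) - gammaln(a1) - gammaln(a2)
            - (N / 2) * np.log(2 * np.pi)
            - 0.5 * np.linalg.slogdet(A)[1]
            - (a1 + N / 2) * np.log(quad)
            + (D + a2 - 0.5) * np.log(C) - b2 * C
            + 0.5 * np.log(c[0]) + np.sum(np.log(c[1:])))
```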
[Figure 3 panels: prior samples for scaling exponents 0, 2, 3 and 4.]
Figure 3: Functions drawn at random from the Fourier model with order D = 6 (dark)
and D = 500 (light) for four different scalings; limiting behaviour from left to right:
discontinuous, Brownian, borderline smooth, smooth.
where A = Φ^T Φ + C diag(c̃), and the tilde indicates duplication of all components except
for the first. We can optimize⁵ the overall scale C of the weights (using e.g. Newton's
method). How do we choose the relative scales, c? The answer to this question turns out
to be intimately related to the two different views of Bayesian inference.
2.2 Example
To illustrate the behaviour of this model we use data generated from a step function that
changes from -1 to 1 corrupted by independent additive Gaussian noise with variance
0.25. Note that the true function cannot be implemented exactly with a model of finite
order, as would typically be the case in realistic modelling situations (the true function is
not "realizable" or the model is said to be "incomplete"). The input points are arranged in
two lumps of 16 and 8 points, the step occurring in the middle of the larger, see figure 2.
If we choose the scaling precisions to be independent of the frequency of the contributions,
c_d ∝ 1 (while normalizing the sum of the inverse precisions), we achieve predictions as
depicted in figure 2. We clearly see an Occam's Razor behaviour. A model order of around
D = 6 is preferred. One might say that the limited data does not support models more
complex than this. One way of understanding this is to note that as the model order grows,
the prior parameter volume grows, but the relative posterior volume decreases, because
parameters must be accurately specified in the complex model to ensure good agreement
with the data. The ratio of prior to posterior volumes is the Occam Factor, which may be
interpreted as a penalty to pay for fitting parameters.
In the present model, it is easy to draw functions at random from the prior by simply drawing values for the coefficients from their prior distributions. The left panel of figure 3 shows
samples from the prior for the previous example for D = 6 and D = 500. With increasing
order the functions get more and more dominated by high frequency components. In most
modelling applications however, we have some prior expectations about smoothness. By
scaling the precision factors c_d we can achieve that the prior over functions converges to
functions with particular characteristics as D grows towards infinity. Here we will focus
on scalings of the form c_d = d^γ for different values of γ, the scaling exponent. As an
example, if we choose the scaling c_d = d³ we do not get an Occam's Razor in terms of the
order of the model, figure 4. Note that the predictions and their errorbars become almost
independent of the model order as long as the order is large enough. Note also that the
errorbars for these large models seem more reasonable than for D = 6 in figure 2 (where a
spurious "dip" between the two lumps of data is predicted with high confidence). With this
choice of scaling, it seems that the "large models" view is appropriate.
⁵Of course, we ought to integrate over C, but unfortunately that is difficult.
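Sampling functions from the prior, as used for figure 3, only requires drawing the weights from their Gaussian prior. A minimal sketch of our own follows, where c_0 = 1 is an assumed convention for the offset precision:

```python
import numpy as np

def sample_prior_function(xs, D, gamma, S=1.0, rng=None):
    """One random function from the Fourier prior with c_d = d**gamma."""
    if rng is None:
        rng = np.random.default_rng()
    c = np.concatenate([[1.0], np.arange(1, D + 1) ** float(gamma)])
    # Prior variance of each weight is 1 / (S * c_d).
    a0 = rng.normal(0.0, 1.0 / np.sqrt(S * c[0]))
    a = rng.normal(0.0, 1.0 / np.sqrt(S * c[1:]))   # sin coefficients
    b = rng.normal(0.0, 1.0 / np.sqrt(S * c[1:]))   # cos coefficients
    d = np.arange(1, D + 1)
    return a0 + np.sin(np.outer(xs, d)) @ a + np.cos(np.outer(xs, d)) @ b

xs = np.linspace(-2, 2, 400)
f = sample_prior_function(xs, D=500, gamma=3.0)
```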
[Figure 4, top panels: predictions for model orders 0 through 11. Bottom panel: bar chart of posterior model probability against model order.]
Figure 4: The same as figure 2, except that the scaling c_d = d³ was used here, leading to a
prior which converges to smooth functions as D → ∞. There is no Occam's Razor; instead
we see that as long as the model is complex enough, the evidence is flat. We also notice
that the predictive density of the model is unchanged as long as D is sufficiently large.
3 Discussion
In the previous examples we saw that, depending on the scaling properties of the prior over
parameters, both the Occam's Razor view and the large models view can seem appropriate.
However, the example was unsatisfactory because it is not obvious how to choose the scaling exponent γ. We can gain more insight into the meaning of γ by analysing properties of
functions drawn from the prior in the limit of large D. It is useful to consider the expected
squared difference of outputs corresponding to nearby inputs, separated by Δ:

G(Δ) = E[(f(x) − f(x + Δ))²],

in the limit as Δ → 0. In the table in figure 5 we have computed these limits for various
values of γ, together with the characteristics of these functions. For example, a property
of smooth functions is that G(Δ) ∝ Δ². Using this kind of information may help to
choose good values for γ in practical applications. Indeed, we can attempt to infer the
"characteristics of the function" γ from the data. In figure 5 we show how the evidence
depends on γ and the overall scale C for a model of large order (D = 200). It is seen
that the evidence has a maximum around γ = 3. In fact we are seeing Occam's Razor
again! This time it is not in terms of the dimension of the model, but rather in terms of
the complexity of the functions under the priors implied by different values of γ. Large
values of γ correspond to priors with most probability mass on simple functions, whereas
small values of γ correspond to priors that allow more complex functions. Note, that the
"optimal" setting γ = 3 was exactly the model used in figure 4.
[Figure 5, left panel: contour plot of the log evidence over the scaling exponent γ and the overall scale C, for D = 200; maximum log evidence −27.48.]

γ        lim_{Δ→0} G(Δ)      properties
< 1      1                   discontinuous
2        Δ                   Brownian
3        Δ²(1 − ln Δ)        borderline smooth
> 3      Δ²                  smooth
Figure 5: Left panel: the evidence as a function of the scaling exponent γ and overall scale
C, has a maximum at γ = 3. The table shows the characteristics of functions for different
values of γ. Examples of these functions are shown in figure 3.
4 Conclusion
We have reviewed the automatic Occam's Razor for Bayesian models and seen how, while
not necessarily penalising the number of parameters, this process is active in terms of the
complexity of functions. Although we have only presented simplistic examples, the explanations of the behaviours rely on very basic principles that are generally applicable. Which
of the two differing Bayesian views is most attractive depends on the circumstances: sometimes the large model limit may be computationally demanding; also, it may be difficult
to analyse the scaling properties of priors for some models. On the other hand, in typical
applications of non-parametric models, the "large model" view may be the most convenient
way of expressing priors since typically, we don't seriously believe that the "true" generative process can be implemented exactly with a small model. Moreover, optimizing (or
integrating) over continuous hyperparameters may be easier than optimizing over the discrete space of model sizes. In the end, whichever view we take, Occam's Razor is always
at work discouraging overcomplex models.
Acknowledgements
This work was supported by the Danish Research Councils through the Computational
Neural Network Center (CONNECT) and the THOR Center for Neuroinformatics. Thanks
to Geoff Hinton for asking a puzzling question which stimulated the writing of this paper.
References
Jefferys, W. H. & Berger, J. O. (1992) Ockham's Razor and Bayesian Analysis. Amer. Sci., 80:64-72.
MacKay, D. J. C. (1992) Bayesian Interpolation. Neural Computation, 4(3):415-447.
Neal, R. M. (1996) Bayesian Learning for Neural Networks, Lecture Notes in Statistics No. 118,
New York: Springer-Verlag.
Rasmussen, C. E. (2000) The Infinite Gaussian Mixture Model, in S. A. Solla, T. K. Leen and
K.-R. Müller (editors), Adv. Neur. Inf. Proc. Sys. 12, MIT Press, pp. 554-560.
Smith, A. F. M. & Spiegelhalter, D. J. (1980) Bayes factors and choice criteria for linear models.
J. Roy. Stat. Soc., 42:213-220.
Higher-order Statistical Properties
Arising from the Non-stationarity of
Natural Signals
Lucas Parra, Clay Spence
Adaptive Signal and Image Processing, Sarnoff Corporation
{lparra, cspence}@sarnoff.com
Paul Sajda
Department of Biomedical Engineering, Columbia University
ps629@columbia.edu
Abstract
We present evidence that several higher-order statistical properties of natural images and signals can be explained by a stochastic
model which simply varies scale of an otherwise stationary Gaussian process. We discuss two interesting consequences. The first
is that a variety of natural signals can be related through a common model of spherically invariant random processes, which have
the attractive property that the joint densities can be constructed
from the one dimensional marginal. The second is that in some cases the non-stationarity assumption and only second order methods
can be explicitly exploited to find a linear basis that is equivalent
to independent components obtained with higher-order methods.
This is demonstrated on spectro-temporal components of speech.
1 Introduction
Recently, considerable attention has been paid to understanding and modeling the
non-Gaussian or "higher-order" properties of natural signals, particularly images.
Several non-Gaussian properties have been identified and studied. For example,
marginal densities of features have been shown to have high kurtosis or "heavy
tails", indicating a non-Gaussian, sparse representation. Another example is the
"bow-tie" shape of conditional distributions of neighboring features, indicating dependence of variances [11]. These non-Gaussian properties have motivated a number
of image and signal processing algorithms that attempt to exploit higher-order statistics of the signals, e.g., for blind source separation. In this paper we show
that these previously observed higher-order phenomena are ubiquitous and can be
accounted for by a model which simply varies the scale of an otherwise stationary Gaussian process. This enables us to relate a variety of natural signals to one
another and to spherically invariant random processes, which are well-known in
the signal processing literature [6, 3]. We present analyses of several kinds of data
from this perspective, including images, speech, magnetoencephalography (MEG)
activity, and socio-economic data (e.g., stock market data). Finally we present the
results of experiments with algorithms for finding a linear basis equivalent to independent components that exploit non-stationarity so as to require only 2nd-order
statistics. This simplification is possible whenever linearity and non-stationarity of
independent sources is guaranteed such as for the powers of acoustic signals.
2 Scale non-stationarity and high kurtosis
Natural signals can be non-stationary in various ways, e.g. varying powers, changing
correlation of neighboring samples, or even non-stationary higher moments. We will
concentrate on the simplest possible variation and show in the following sections
how it can give rise to many higher-order properties observed in natural signals.
We assume that at any given instance a signal is specified by a probability density
function with zero mean and unknown scale or power. The signal is assumed non-stationary in the sense that its power varies from one time instance to the next.¹
We can think of this as a stochastic process with samples z(t) drawn from a zero
mean distribution p_z(z) with samples possibly correlated in time. We observe a
scaled version of this process with time varying scales s(t) > 0 sampled from p_s(s),

x(t) = s(t) z(t),                                                    (1)
The observable process x(t) is distributed according to

p_x(x) = ∫₀^∞ ds p_s(s) p_x(x|s) = ∫₀^∞ ds p_s(s) s⁻¹ p_z(x/s).      (2)
We refer to p_x(x) as the long-term distribution and p_z(z) as the instantaneous
distribution. In essence p_x(x) is a mixture distribution with infinitely many kernels
s⁻¹ p_z(x/s). We would like to relate the sparseness of p_z(z), as measured by the
kurtosis, to the sparseness of the observable distribution p_x(x).
Kurtosis is defined as the ratio between the fourth and second cumulant of a distribution [7]. As such it measures the length of the distribution's tails, or the sharpness
of its mode. For a zero mean random variable x this reduces up to a constant to

K[x] = ⟨x⁴⟩_x / ⟨x²⟩_x²,   with   ⟨f(x)⟩_x = ∫ dx f(x) p_x(x).        (3)
In this case we find that the kurtosis of the long-term distribution is always larger
than the kurtosis of the instantaneous distribution unless the scale is stationary ([9]
and [1] for symmetric p_z(z)),

K[x] ≥ K[z].                                                          (4)

To see this note that the independence of s and z implies ⟨x^n⟩_x = ⟨s^n⟩_s ⟨z^n⟩_z, and
therefore, K[x] = K[z] ⟨s⁴⟩_s / ⟨s²⟩_s². From the inequality ⟨(s² − c)²⟩_s ≥ 0, which
holds for any arbitrary constant c > 0, it is easy to show that ⟨s⁴⟩_s ≥ ⟨s²⟩_s², where
the equality holds for p_s(s) = δ(s − c). Together this leads to inequality (4), which
states that for a fixed scale s(t), i.e. if the magnitude of the signal is stationary, the
kurtosis will be minimal. Conversely, non-stationary signals, defined as a variable
scaling of an otherwise stationary process, will have increased kurtosis.
¹Throughout this paper we will refer to signals that are sampled in time. Note that
all the arguments apply equally well to a spatial rather than temporal sampling, that is,
images rather than time series.
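Inequality (4) is easy to verify numerically: modulating a Gaussian with a random scale raises the kurtosis above the Gaussian value of 3. The snippet below is our own illustration, not code from the paper:

```python
import numpy as np

def kurtosis(x):
    """K[x] = <x^4> / <x^2>^2 (equals 3 for a Gaussian)."""
    return np.mean(x ** 4) / np.mean(x ** 2) ** 2

rng = np.random.default_rng(0)
z = rng.standard_normal(100000)          # stationary Gaussian, K close to 3
s = np.exp(rng.standard_normal(100000))  # log-normal random scale
x = s * z                                # scale non-stationary signal
print(kurtosis(z), kurtosis(x))          # the second value exceeds the first
```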
Figure 1: Marginal distributions within 3 standard deviations are shown on a logarithmic scale; left to right: natural image features, speech sound intensities, stock
market variation, MEG alpha activity. The measured kurtosis is 4.5, 16.0, 12.9, and
5.3 respectively. On top the empirical histograms are presented and on bottom the
model distributions. The speech data has been fit with a Meijer-G function [3].
For the MEG activity, the stock market data and the image features a mixture of
zero mean Gaussians was used.
Figure 1 shows empirical plots of the marginal distributions for four natural signals;
image, speech, stock market, and MEG data. As image feature we used a wavelet
component for a 162x162 natural texture image of sand (presented in [4]). Self-inverting wavelets with a down-sampling factor of three were used. The speech
signal is a 2.3 s recording of a female speaker sampled at 8 kHz with a noise level
less than -25 dB. The signal has been band limited between 300 Hz and 3.4 kHz corresponding to telephone speech. The market data are the daily closing values of the
NY Stock exchange composite index from 02/01/1990 to 04/28/2000. We analyzed
the variation from the one day linear prediction value to remove the upwards trend
of the last decade. The MEG data is band-passed (10-12 Hz) alpha activity of an independent component of 122 MEG signals. This independent component exhibits
alpha de-synchronization for a visio-motor integration task [10]. One can see that
in all four cases the kurtosis is high relative to a Gaussian (K = 3). Our claim is
that for natural signals, high kurtosis is a natural result of the scale non-stationarity
of the signal. Additional evidence comes from the behavior seen in the conditional
histograms of the joint distributions, presented in the next section.
3 Higher-order properties of joint densities
It has been observed in images that the conditional histograms of joint densities
from neighboring features (neighboring in scale, space, and/or orientation) exhibit
variance dependencies that cannot be accounted for by simple second-order models [11]. Figure 2 shows empirical conditional histograms for the four types of natural
signals we considered earlier. One can see that speech and stock-market data exhibit
the same variance dependency or "bow-tie" shape exhibited by images.
Figure 2: (Top) Empirical conditional histograms and (bottom) model conditional
density derived from the one dimensional marginals presented in the previous figure
assuming the data is sampled from a SIRP. Good correspondence validates the SIRP
assumption which is equivalent to our non-stationary scale model for slowly varying
scales.
The model of Equation 1 can easily account for this observation if we assume slowly
changing scales s(t). A possible explanation is that neighboring samples or features
exhibit a common scale. If two zero mean stochastic variables are scaled both with
the same factors their magnitude and variance will increase together. That is, as
the magnitudes of one variable increase so will the magnitude and the variance of
the other variable. This results in a broadening of the histogram of one variable
as one increases the value of the conditioning variable - resulting in a "bow-tie"
shaped conditional density.
4 Relationship to spherical invariant random process
A closely related class of signals to those in Equation 1 is the so-called Spherical
Invariant Random Process (SIRP). If the signals are short time Gaussian and the
powers vary slowly the class of signals described are approximately SIRPs. Despite
the restriction to Gaussian distributed z SIRPs have been shown to be a good
model for a range of stochastic processes with very different higher-order properties,
depending on the scale distributions Ps (s). They have been used in a variety of signal
processing applications [6]. Band-limited speech, in particular, has been shown to
be well described by SIRPs [3]. If z is multidimensional, such as a window of samples
in a time series or a multi-dimensional feature vector, one talks about Spherically
Invariant Random Vectors (SIRVs). Natural images have been modeled by what in
essence is closely related to SIRVs - an infinite mixture of zero mean Gaussian
features [11]. Similar models have also been used for financial time series [2].
The fundamental property of SIRPs is that the joint distribution of a SIRP is
entirely defined by a univariate characteristic function C_x(u) and the covariance Σ
of neighboring samples [6]. They are directly related to our scale-non-stationarity
model through a theorem by Kingman and Yao which states that any SIRP is
equivalent to a zero mean Gaussian process z(t) with an independent stochastic
scale s. Furthermore the univariate characteristic function C_x(u) specifies p_s(s)
and the 1D marginal p_x(x) and vice versa [6]. From the characteristic function
C_x(u) and the covariance Σ one can also construct all higher dimensional joint
densities. This leads to the following relation between the marginal densities of
various orders [3],

p_n(x) = π^(−n/2) f_n(x^T Σ⁻¹ x),   with x ∈ R^n and Σ = ⟨x x^T⟩,    (5)

f_{n+2}(s) = −(d/ds) f_n(s),   f_m(s) = π^(−1/2) ∫_{−∞}^{∞} f_{m+1}(s + y²) dy.   (6)
In particular these relations allow us to compute the joint density p₂(x(t), x(t+1))
from an empirically estimated marginal density p₁(x(t)) and the covariance of x(t)
and x(t+1). Comparing the resulting 2D joint density to the observed joint density
allows us to verify the assumption that the data is sampled from a SIRP. In so doing
we can more firmly assert that the observed two dimensional joint histograms can
in fact be explained as a Gaussian process with a non-stationary scale.

If we use zero mean Gaussian mixtures, p₁(x) = Σ_{i=1}^{M} m_i exp(−x²/σ_i²), as the
1D model distribution, the resulting joint distribution is simply p_n(x) =
Σ_{i=1}^{M} m_i exp(−x^T Σ⁻¹ x / σ_i²). If the model density is given by a Meijer-G function of λ²x², as suggested in [3], the 2D joint is the corresponding Meijer-G function of λ² x^T Σ⁻¹ x. In both cases it is assumed
that the data is normalized to unit variance.
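For the Gaussian-mixture case, building the model 2D joint from the fitted 1D marginal takes only a few lines. The sketch below is our own; the normalization constants, which the expressions above absorb into the mixture weights m_i, are written out explicitly here:

```python
import numpy as np

def sirp_joint2d(x1, x2, m, sigma, Sigma):
    """Model joint p(x1, x2) for a SIRP whose 1D marginal is the
    zero-mean Gaussian mixture sum_i m_i exp(-x^2 / sigma_i^2);
    Sigma is the 2x2 covariance of neighboring samples."""
    Sinv = np.linalg.inv(Sigma)
    det = np.linalg.det(Sigma)
    x = np.stack([x1, x2], axis=-1)
    # Quadratic form x^T Sigma^{-1} x, broadcast over the grid.
    q = np.einsum('...i,ij,...j->...', x, Sinv, x)
    return sum(mi * np.exp(-q / si**2) / (np.pi * si**2 * np.sqrt(det))
               for mi, si in zip(m, sigma))
```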
Brehm has used this approach to demonstrate that band-limited speech is well described by a SIRP [3] . In addition, we show here that the same is true for the image
features and stock market data presented above. The model conditional densities
shown in Figure 2 correspond well with the empirical conditional histograms. In
particular they exhibit the characteristic bow-tie structure. We emphasize that
these model 2D joint densities have been obtained only from the 1D marginal of
Figure 1 and the covariance of neighboring samples.
The deviations of the observed and model 2D joint distributions are likely due to
variable covariance itself, that is, not only does the overall scale or power vary
with time, but the components of the covariance matrix vary independently of each
other. For example in speech the covariance of neighboring samples is well known to
change considerably over time. Nevertheless, the surprising result is that a simple
scale non-stationarity model can reproduce the higher-order statistical properties
in a variety of natural signals.
5 Spectro-temporal linear basis for speech
As an example of the utility of this non-stationarity assumption, we analyze the
statistical properties of the powers of a single source, in particular for speech signals.
Motivated by the auditory spectro-temporal receptive field reported in [5] and work
on receptive fields and independent components, we are interested in finding a linear
basis of independent components in a spectro-temporal window of speech signals.
In [9, 8] we show that one can use second order statistics to uniquely recover sources
from a mixture provided that the mix is linear and the sources are non-stationary.
One can do so by finding a basis that guarantees uncorrelated signals at multiple
time intervals (multiple decorrelation algorithm (MDA)). Our present model argues
that features of natural signals such as the powers in different frequency bands can
be assumed non-stationary, while powers of independent signals are known to add linearly.
"We had a barbecue over the weekend at my house."
PCA
MDA
ICA-JADE
Figure 3: Spectro-temporal representation of speech. One pixel in the horizontal
direction corresponds to 16 ms. In the vertical direction 21 Bark scale power bands
are displayed. The upper diagram shows the log-powers for a 2.5 s segment of the
200 s recording used to compute the different linear bases. The three lower diagrams
show three sets of 15 linear basis components for 21x8 spectro-temporal segments of
the speech powers. The sets correspond to PCA, MDA, and ICA respectively. Note
that these are not log-powers, hence the smaller contribution of the high frequencies
as compared to the log-power plot on top.
We should be able therefore to identify with second order methods the
same linear components as with independent component algorithms where higher-order statistical assumptions are invoked.
We compute the powers in 21 frequency bands on a Bark scale for short consecutive
time intervals. We choose to find a basis for a segment of 21 bands and 8 neighboring time slices corresponding to 128 ms of signal between 0 and 4 kHz. We used half
overlapping windows of 256 samples such that for an 8 kHz signal neighboring time
slices are 16 ms apart. A set of 7808 such spectro-temporal segments were sampled
from 200 s of the same speech data presented previously. Figure 3 shows the results
obtained for a subspace of 15 components. One can see that the components obtained with MDA are quite similar to the result of ICA and differ considerably from
the principal components. From this we conclude that speech powers can in fact
be thought of as a linear combination of non-stationary independent components.
In general, the point we wish to make is to demonstrate the strength of second-order
superposition are met.
6
Conclusion
We have presented evidence that several high-order statistical properties of natural
signals can be explained by a simple scale non-stationary model. For four types of
natural signals, we have shown that a scale non-stationary model will reproduce the
high-kurtosis behavior of the marginal densities. Furthermore, for the case of scale
non-stationary with Gaussian density (SIRP), we have shown that we can reproduce
the variance dependency seen in conditional histograms of the joint density directly
from the empirical marginal densities. This leads to the conclusion that a scale nonstationary model (e.g. SIRP) is a good model for these natural signals. We have
shown that one can exploit the assumptions of this model to compute a linear basis
for natural signals without having to invoke higher order statistically techniques.
Though we do not claim that all higher-order properties or all natural signals can
be explained by a scale non-stationary model, it is remarkable that such a simple
model can account for a variety of the higher-order phenomena and for a variety of
signal types.
References
[1] E.M.L. Beale and C.L. Mallows. Scale mixing of symmetric distributions with
zero means. Annals of Mathematical Statitics, 30:1145-1151, 1959.
[2] T. P. Bollerslev, R. F. Engle, and D. B. Nelson. Arch models. In R. F. Engle
and D. L. McFadden, editors, Handbook of Econometrics, volume IV. NorthHolland, 1994.
[3] Helmut Brehm and Walter Stammler. Description and generation of spherically
invariant speech-model signals. Signal Processing, 12:119-141, 1987.
[4] Phil Brodatz. Textures: A Photographic Album for Artists and Designers.
Dover, 1999.
[5] R. deCharms, Christopher and M. Merzenich, Miachael. Characteristic neuros
in the primary auditory cortex of the awake primate using reverse correlation.
In M. Jordan, M. Kearns, and S. Solla, editors, Advances in Neural Information
Processing Systems 10, pages 124-130, 1998.
[6] Joel Goldman. Detection in the presence of spherically symmetric random vectors. IEEE Transactions on Information Theory, 22(1):52- 59, January 1976.
[7] M.G. Kendal and A. Stuart. The Advanced Theory of Statistics. Charles Griffin
& Company Limited, London, 1969.
[8] L. Parra and C. Spence. Convolutive blind source separation of non-stationary
sources. IEEE Trans. on Speech and Audio Processing, pages 320- 327, May
2000.
[9] Lucas Parra and Clay Spence. Separation of non-stationary sources. In
Stephen Roberts and Richard Everson, editors, Independent Components Analysis: Principles and Practice. Cambridge University Press, 200l.
[10] Akaysha Tang, Barak Pearlmutter, Dan Phung, and Scott Carter. Independent
components of magnetoencephalography. Neural Computation, submitted.
[11] Martin J. Wainwright and Eero P. Simoncelli. Scale mixtures of Gaussians
and the statistics of natural images. In S. A. Solla, T.K. Leen, and K.-R.
Miiller, editors, Advances in Neural Information Processing Systems 12, pages
855-861, Cambridge, MA, 2000. MIT Press.
| 1926 |@word version:1 nd:1 covariance:7 paid:1 moment:1 series:3 com:1 comparing:1 surprising:1 fn:2 shape:2 enables:1 motor:1 remove:1 plot:2 stationary:19 half:1 dover:1 short:2 lx:1 mathematical:1 constructed:1 dan:1 ica:2 market:7 behavior:2 p1:2 multi:1 spherical:2 goldman:1 company:1 window:3 dxf:1 provided:1 linearity:1 what:1 kind:1 finding:2 corporation:1 guarantee:1 temporal:7 assert:1 multidimensional:1 socio:1 tie:4 scaled:2 unit:1 engineering:1 consequence:1 despite:1 approximately:1 studied:1 conversely:1 limited:4 sarnoff:1 range:1 statistically:1 spence:3 mallow:1 practice:1 empirical:6 thought:1 composite:1 cannot:1 restriction:1 equivalent:4 demonstrated:1 phil:1 attention:1 independently:1 sharpness:1 financial:1 variation:3 annals:1 secondorder:1 trend:1 particularly:1 econometrics:1 cspence:1 observed:6 bottom:2 solla:2 highorder:1 segment:4 basis:8 easily:1 joint:15 stock:7 various:2 talk:1 xxt:1 weekend:1 sajda:1 walter:1 london:1 jade:1 ithroughout:1 quite:1 larger:1 otherwise:3 statistic:5 think:1 itself:1 validates:1 kurtosis:12 neighboring:10 bow:4 mixing:1 description:1 bollerslev:1 p:4 brodatz:1 depending:1 measured:2 p2:2 implies:1 come:1 met:1 differ:1 concentrate:1 direction:2 closely:2 stochastic:5 sand:2 require:1 exchange:1 parra:3 hold:2 considered:1 exp:1 claim:2 vary:3 consecutive:1 lparra:1 superposition:1 mit:1 gaussian:14 always:1 rather:2 pn:2 varying:3 derived:1 helmut:1 sense:1 akaysha:1 relation:2 irn:1 reproduce:3 interested:1 pixel:1 overall:1 orientation:1 lucas:2 spatial:1 integration:1 marginal:10 field:2 construct:1 shaped:1 having:1 sampling:2 stuart:1 richard:1 attempt:1 detection:1 stationarity:10 joel:1 mixture:6 analyzed:1 daily:1 unless:1 iv:1 minimal:1 instance:2 increased:1 modeling:1 earlier:1 statitics:1 zn:1 deviation:2 reported:1 dependency:3 varies:3 considerably:2 my:1 density:20 fundamental:1 invoke:1 together:2 yao:1 choose:1 possibly:1 slowly:2 kingman:1 account:2 de:1 explicitly:1 blind:2 doing:1 analyze:1 recover:1 encephalography:1 contribution:1 variance:7 characteristic:5 correspond:2 identify:1 artist:1 submitted:1 whenever:1 frequency:3 mi:1 sampled:6 auditory:2 ut:2 ubiquitous:1 clay:2 higher:15 day:1 leen:1 though:1 furthermore:2 biomedical:1 arch:1 correlation:2 horizontal:1 christopher:1 overlapping:1 mode:1 verify:1 true:1 normalized:1 equality:1 hence:1 merzenich:1 spherically:5 symmetric:3 attractive:1 uniquely:1 essence:2 speaker:1 m:3 demonstrate:2 pearlmutter:1 argues:1 upwards:1 image:16 instantaneous:2 invoked:1 recently:1 charles:1 common:2 empirically:1 khz:4 conditioning:1 volume:1 tail:2 marginals:1 refer:2 versa:1 cambridge:2 closing:1 had:1 cortex:1 add:1 base:1 female:1 perspective:1 apart:1 reverse:1 inequality:2 exploited:1 seen:2 additional:1 brehm:2 signal:44 stephen:1 multiple:2 photographic:1 sound:1 reduces:1 mix:1 simoncelli:1 long:2 equally:1 prediction:1 histogram:9 kernel:1 addition:1 interval:2 diagram:2 source:8 exhibited:1 recording:2 hz:2 db:1 jordan:1 nonstationary:2 presence:1 easy:1 variety:6 xj:1 fit:1 independence:2 identified:1 economic:1 engle:2 motivated:2 pca:2 utility:1 passed:1 miiller:1 speech:20 s4:2 band:8 carter:1 simplest:1 specifies:1 designer:1 estimated:1 arising:1 four:4 nevertheless:1 drawn:1 changing:2 fourth:1 separation:3 griffin:1 dy:1 scaling:1 entirely:1 guaranteed:1 simplification:1 correspondence:1 activity:4 mda:4 strength:1 phung:1 awake:1 x2:1 argument:1 px:7 martin:1 department:1 according:1 combination:1 smaller:1 visa:1 primate:1 explained:4 
invariant:6 rca:1 equation:2 previously:2 discus:1 gaussians:2 everson:1 apply:1 observe:1 beale:1 top:3 exploit:3 receptive:2 primary:1 dependence:1 exhibit:5 subspace:1 nelson:1 assuming:1 meg:6 length:1 index:1 relationship:1 modeled:1 ratio:1 robert:1 relate:2 decharms:1 rise:1 unknown:1 upper:1 vertical:1 observation:1 displayed:1 january:1 dsps:1 arbitrary:1 intensity:1 specified:1 acoustic:1 trans:1 able:1 suggested:1 scott:1 convolutive:1 including:1 explanation:1 wainwright:1 power:16 ia:1 decorrelation:1 natural:22 ps629:1 advanced:1 firmly:1 hm:1 columbia:2 sn:1 understanding:1 literature:1 bark:2 relative:1 synchronization:1 mcfadden:1 interesting:1 generation:1 remarkable:1 principle:1 editor:4 uncorrelated:1 pi:1 heavy:1 accounted:2 last:1 allow:1 barak:1 sparse:1 distributed:2 slice:2 xn:1 adaptive:1 transaction:1 alpha:3 spectro:6 observable:2 emphasize:1 handbook:1 xt1:1 assumed:3 conclude:1 eero:1 spectrum:1 decade:1 northholland:1 broadening:1 linearly:1 s2:3 noise:1 paul:1 ny:1 slow:1 foo:1 wish:1 xl:1 house:1 wavelet:2 meijer:2 tang:1 down:1 theorem:1 pz:5 evidence:3 texture:2 magnitude:4 album:1 sparseness:2 cx:3 logarithmic:1 lxl:1 simply:3 univariate:2 infinitely:1 likely:1 corresponds:1 ma:1 conditional:10 visio:1 magnetoencephalography:1 considerable:1 change:1 telephone:1 infinite:1 principal:1 kearns:1 called:1 indicating:2 cumulant:1 audio:1 phenomenon:2 correlated:1 |
Analysis of Bit Error Probability of
Direct-Sequence CDMA Multiuser
Demodulators
Toshiyuki Tanaka
Department of Electronics and Information Engineering
Tokyo Metropolitan University
Hachioji, Tokyo 192-0397, Japan
tanaka@eei.metro-u.ac.jp
Abstract
We analyze the bit error probability of multiuser demodulators for direct-sequence binary phase-shift-keying (DS/BPSK) CDMA channel with additive Gaussian noise. The problem of multiuser demodulation is cast
into the finite-temperature decoding problem, and replica analysis is applied to evaluate the performance of the resulting MPM (Marginal Posterior Mode) demodulators, which include the optimal demodulator and
the MAP demodulator as special cases. An approximate implementation of demodulators is proposed using analog-valued Hopfield model
as a naive mean-field approximation to the MPM demodulators, and its
performance is also evaluated by the replica analysis. Results of the performance evaluation show the effectiveness of the optimal demodulator and
the mean-field demodulator compared with the conventional one, especially in the cases of small information bit rate and low noise level.
1 Introduction
The CDMA (Code-Division-Multiple-Access) technique [1] is important as a fundamental
technology of digital communications systems, such as cellular phones. The important applications include realization of spread-spectrum multipoint-to-point communications systems, in which multiple users share the same communication channel. In the multipoint-to-point system, each user modulates his/her own information bit sequence using a spreading
code sequence before transmitting it, and the receiver uses the same spreading code sequence for demodulation to obtain the original information bit sequence. Different users
use different spreading code sequences so that the demodulation procedure randomizes
and thus suppresses multiple access interference effects of transmitted signal sequences
sent from different users.
The direct-sequence binary phase-shift-keying (DS/BPSK) [1] is the basic method among
various methods realizing CDMA, and a lot of studies have been done on it. Use of
Hopfield-type recurrent neural network has been proposed as an implementation of a multiuser demodulator [2]. In this paper, we analyze the bit error probability of the neural
multiuser demodulator applied to demodulation of DS/BPSK CDMA channel.
[Figure 1 diagram: information bits ξ_1, ..., ξ_N are modulated by the spreading code sequences {η_i^t}, summed across users, and corrupted by the Gaussian noise {ν^t} to produce the received signal {y^t}.]
Figure 1: DS/BPSK CDMA model
2 DS/BPSK CDMA system
We assume that a single Gaussian channel is shared by N users, each of which wishes to transmit his/her own information bit sequence. We also make the simplifying assumption that all the users are completely synchronized with each other, with respect not only to the chip timing but also to the information bit timing. We focus on any of the time intervals corresponding to the duration of one information bit. Let ξ_i ∈ {−1, 1} be the information bit to be transmitted by user i (i = 1, ..., N) during the time interval, and P be the number of the spreading code chips (clocks) per information bit. For simplicity, the spreading code sequences for the users are assumed to be random bit sequences {η_i^t; t = 1, ..., P}, where the η_i^t are independent and identically distributed (i.i.d.) binary random variables following Prob[η_i^t = ±1] = 1/2.

User i modulates the information bit ξ_i by the spreading code sequence and transmits the modulated sequence {ξ_i η_i^t; t = 1, ..., P} (with carrier modulation, in actual situations). Assuming that power control [3] is done perfectly, so that every transmitted sequence arrives at the receiver with the same intensity, the received signal sequence (after baseband demodulation) is {y^t; t = 1, ..., P}, with

    y^t = Σ_{i=1}^{N} η_i^t ξ_i + ν^t,        (1)

where ν^t ~ N(0, σ_s²) is i.i.d. Gaussian noise. This system is illustrated in Fig. 1.
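The channel model (1) is straightforward to simulate numerically. The following is a minimal sketch in Python/NumPy (the function and parameter names are my own, chosen to mirror the notation above; it is an illustration, not code from the paper):

    import numpy as np

    def simulate_channel(N=100, alpha=4.0, beta_s=1.0, seed=0):
        # One information-bit interval of the synchronized DS/BPSK CDMA channel, Eq. (1).
        rng = np.random.default_rng(seed)
        P = int(alpha * N)                           # chips per information bit, alpha = P/N
        xi = rng.choice([-1, 1], size=N)             # information bits xi_i
        eta = rng.choice([-1, 1], size=(P, N))       # spreading codes eta_i^t, Prob[+-1] = 1/2
        sigma_s2 = N / beta_s                        # noise variance, with beta_s = N / sigma_s^2
        nu = rng.normal(0.0, np.sqrt(sigma_s2), P)   # i.i.d. Gaussian noise nu^t
        y = eta @ xi + nu                            # received signal y^t
        return xi, eta, y

The same arrays (xi, eta, y) are reused by the demodulator sketches that follow.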
At the receiver side, one has to estimate the information bits {ξ_i} based on the knowledge of the received signal {y^t} and the spreading code sequences {η_i^t} for the users. The demodulator refers to the system which performs this task. The accuracy of the estimation depends on what demodulator one uses. Some demodulators are introduced in Sect. 3, and analytical results for their performance are derived in Sect. 4.
3 Demodulators
3.1 Conventional demodulator
The conventional demodulator (CD) [1-3] estimates the information bit ξ_i using the spreading code sequence {η_i^t; t = 1, ..., P} for the user i, by

    h_i ≡ (1/N) Σ_{t=1}^{P} y^t η_i^t.        (2)

We can rewrite h_i as

    h_i = α ξ_i + (1/N) Σ_{j≠i} Σ_{t=1}^{P} η_i^t η_j^t ξ_j + (1/N) Σ_{t=1}^{P} η_i^t ν^t,        (3)

where α ≡ P/N. The second and third terms of the right-hand side represent the effects of multiple access interference and noise, respectively. CD would give the correct information bit in the single-user (N = 1) and no-noise (ν^t ≡ 0) case, but the estimate may contain some errors in the multiple-user and/or noisy cases.
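Concretely, the conventional demodulator (2) is a single matched-filter correlation per user. A minimal sketch, continuing the simulation above (names are mine):

    def demodulate_cd(eta, y):
        # Conventional demodulator, Eq. (2): h_i = (1/N) sum_t y^t eta_i^t.
        N = eta.shape[1]
        h = (eta.T @ y) / N          # correlate the received signal with each user's code
        return np.sign(h)            # bit estimates sgn(h_i)

For example, np.mean(demodulate_cd(eta, y) != xi) then estimates the bit error probability P_b by simulation.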
3.2 MAP demodulator
The accuracy of the estimation would be significantly improved if the demodulator knows the spreading code sequences for all N users and makes full use of them by simultaneously estimating the information bits for all the users (the multiuser demodulator). This is the case, for example, for a base station receiving signals from many users. A common approach to multiuser demodulation is to use MAP decoding, which estimates the information bits {ξ̂_i} by maximizing the posterior probability p({ξ_i}|{y^t}). We call this kind of multiuser demodulator the MAP demodulator¹.
When we assume a uniform prior for the information bits, the posterior probability is explicitly given by

    p(s|{y^t}) = Z⁻¹ exp(−β_s H(s)),        (4)

where

    H(s) = (1/2) Σ_{i,j} w_ij s_i s_j − Σ_i h_i s_i,        (5)

β_s ≡ N/σ_s², s ≡ (s_i), h ≡ (h_i), and W ≡ (w_ij) is the sample covariance of the spreading code sequences,

    w_ij = (1/N) Σ_{t=1}^{P} η_i^t η_j^t.        (6)
The problem of MAP demodulation thus reduces to the following minimization problem:

    ξ̂^{(MAP)} = arg min_{s ∈ {−1,1}^N} H(s).        (7)
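Since the minimization (7) runs over 2^N configurations, exact MAP demodulation is feasible only for very small N. The brute-force sketch below (my own illustration, not a practical demodulator) makes the cost function explicit:

    from itertools import product

    def demodulate_map_bruteforce(eta, y):
        # Exhaustive minimization of H(s), Eq. (7); use only for small N.
        P, N = eta.shape
        W = (eta.T @ eta) / N                 # sample covariance w_ij, Eq. (6)
        h = (eta.T @ y) / N                   # matched-filter outputs, Eq. (2)
        best_s, best_H = None, np.inf
        for bits in product([-1, 1], repeat=N):
            s = np.array(bits)
            H = 0.5 * s @ W @ s - h @ s       # energy H(s), Eq. (5)
            if H < best_H:
                best_s, best_H = s, H
        return best_s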
3.3 MPM demodulator
Although the MAP demodulator is sometimes referred to as "optimal," actually it is not so in terms of the common measure of performance, i.e., the bit error probability P_b, which is related to the overlap M ≡ (1/N) Σ_{i=1}^{N} ξ_i ξ̂_i between the original information bits {ξ_i} and their estimates {ξ̂_i} as

    P_b = (1 − M)/2.        (8)

¹The MAP demodulator refers to the same one as what is frequently called the "maximum-likelihood (ML) demodulator" in the literature.
The 'MPM (Marginal Posterior Mode [4]) demodulator,' with the inverse temperature β, is defined as follows:

    ξ̂_i^{(MPM)} = sgn(⟨s_i⟩_β),        (9)

where ⟨·⟩_β refers to the average with respect to the distribution

    P_β(s) = Z(β)⁻¹ exp(−β H(s)).        (10)

Then, we can show that the MPM demodulator with β = β_s is the optimal one minimizing the bit error probability P_b. It is a direct consequence of a general argument on optimal decoders [5]. Note that the MAP demodulator corresponds to the MPM demodulator in the β → +∞ limit (the zero-temperature demodulator).
4 Analysis

4.1 Conventional demodulator
In the cases where we can assume that N and P are both large while α ≡ P/N = O(1), evaluation of the overlap M, and therefore the bit error probability P_b, for those demodulators is possible. For CD, simple application of the central limit theorem yields

    M = erf( √( α / (2(1 + 1/β_s)) ) ),        (11)

where

    erf(x) ≡ (2/√π) ∫_0^x e^{−t²} dt        (12)

is the error function.
4.2 MPM demodulator
For the MPM demodulator with inverse temperature β, we have used the replica analysis to evaluate the bit error probability P_b. Assuming that N and P are both large while α ≡ P/N = O(1), and that the macroscopic properties of the demodulator are self-averaging with respect to the randomness of the information bits, of the spreading codes, and of the noise, we evaluate the quenched average of the free energy ⟨⟨log Z⟩⟩ in the thermodynamic limit N → ∞, where ⟨⟨·⟩⟩ denotes averaging over the information bits and the noise.
Evaluation of the overlap M (within the replica-symmetric (RS) ansatz) requires solving a saddle-point problem for scalar variables {m, q, E, F}. The saddle-point equations are

    m = ∫ Dz tanh(√F z + E),        q = ∫ Dz tanh²(√F z + E),        (13)

    E = αβ / (1 + β(1 − q)),        F = αβ² [1 − 2m + q + 1/β_s] / [1 + β(1 − q)]²,

where Dz ≡ (1/√(2π)) e^{−z²/2} dz is the Gaussian measure. The overlap M is then given by

    M = ∫ Dz sgn(√F z + E),        (14)

from which P_b is evaluated via (8). This is the first main result of this paper.
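The RS equations (13) can be solved by damped fixed-point iteration, with the Gaussian averages evaluated by Gauss-Hermite quadrature. A minimal sketch (the damping, iteration count, and quadrature order are my own choices; convergence for all parameter values is not guaranteed):

    from numpy.polynomial.hermite_e import hermegauss

    def solve_rs_saddle_point(alpha, beta, beta_s, n_iter=2000, damp=0.5):
        # Iterate Eqs. (13); return the overlap M of Eq. (14).
        z, w = hermegauss(81)          # nodes/weights for the weight exp(-z^2/2)
        w = w / w.sum()                # normalize so that sum_i w_i f(z_i) ~ int Dz f(z)
        m, q = 0.5, 0.5
        for _ in range(n_iter):
            E = alpha * beta / (1.0 + beta * (1.0 - q))
            F = alpha * beta**2 * (1.0 - 2.0*m + q + 1.0/beta_s) \
                / (1.0 + beta * (1.0 - q))**2
            F = max(F, 1e-12)          # guard against transient negative values
            t = np.tanh(np.sqrt(F) * z + E)
            m = (1.0 - damp) * m + damp * (w @ t)
            q = (1.0 - damp) * q + damp * (w @ t**2)
        return w @ np.sign(np.sqrt(F) * z + E)   # M; then P_b = (1 - M) / 2

Setting beta = beta_s gives the optimal demodulator's curve, while large beta approaches the MAP result (15).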
4.3 MAP demodulator: Zero-temperature limit
Taking the zero-temperature limit β → +∞ of the result for the MPM demodulator yields the result for the MAP demodulator. Assuming that q → 1 as β → +∞, while β(1 − q) remains finite in this limit, the saddle-point equations reduce to

    M = m = erf( √( α / (2(2 − 2m + 1/β_s)) ) ).        (15)

It is found numerically, however, that the assumption q → 1 is not valid for small α, so that we have to solve the original saddle-point equations in such cases.
4.4 Optimal demodulator: The case β = β_s

Letting β = β_s in the result for the MPM demodulator gives the optimal demodulator
minimizing the bit error probability. In this case, it can be shown that m = q and E = F
hold for the solutions of the saddle-point equations (13).
4.5 Demodulator using naive mean-field approximation
Since solving the MAP or MPM demodulation problem is in general NP-complete, we have to consider approximate implementations of those demodulators which are sub-optimal. A straightforward choice is the mean-field approximation (MFA) demodulator, which uses the analog-valued Hopfield model as the naive mean-field approximation to the finite-temperature demodulation problem². The solution {m_i} of the mean-field equations

    m_i = tanh[ β( −Σ_j w_ij m_j + h_i ) ]        (16)

gives an approximation to {⟨s_i⟩_β}, from which we have the mean-field approximation to the MPM estimates, as

    ξ̂_i^{(MFA)} = sgn(m_i).        (17)
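A direct implementation of (16)-(17) is a damped synchronous iteration of the mean-field equations. A minimal sketch (the damping and the synchronous update schedule are my own choices; the original Hopfield-network dynamics may be formulated differently):

    def demodulate_mfa(eta, y, beta, n_iter=500, damp=0.3):
        # Naive mean-field (MFA) demodulator, Eqs. (16)-(17).
        N = eta.shape[1]
        W = (eta.T @ eta) / N
        h = (eta.T @ y) / N
        m = np.zeros(N)
        for _ in range(n_iter):
            m_new = np.tanh(beta * (-W @ m + h))   # Eq. (16)
            m = (1.0 - damp) * m + damp * m_new
        return np.sign(m)                          # Eq. (17)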
The macroscopic properties of the MFA demodulator can be derived by the replica analysis
as well, along the line proposed by Bray et al. [6] We have derived the following saddlepoint equations:
    m = ∫ Dz f(z),        q = ∫ Dz [f(z)]²,

    E = αβ / (1 + βx),        x = (1/√F) ∫ Dz z f(z),

    F = αβ² [1 − 2m + q + 1/β_s] / [1 + βx]²,        (18)

where f(z) is the function defined by

    f(z) = tanh[ √F z − E f(z) + E ].        (19)

f(z) is a single-valued function of z since E is positive. The overlap M is then calculated by

    M = ∫ Dz sgn(f(z)).        (20)
This is the second main result of this paper.
²The proposal by Kechriotis and Manolakos [2] is to use the Hopfield model for an approximation
to the MAP demodulation. The proposal in this paper goes beyond theirs in that the analog-valued
Hopfield model is used to approximate not the MAP demodulator in the zero-temperature limit but
the MPM demodulators directly, including the optimal one.
......" ......
0.01
0.0001
0':
>(,,
,,
0.01
,
,,
,,
,,
,
0 .0001
...
Opt.
10-8
0':
,
10-6
MAP
MFA
._._----_._ ..
,
------
....
10- 6
Opt.
10-8
CD
MAP
MFA
--
--_._._----_.
------
CD
10-10
a
(a) f3s
10
100
0.1
'.
,
...
\,
,
.'.,
.,
,
I
\,
a
=1
.' .
I
10-10
0.1
,,
,,
(b) f3s
'.
10
100
= 20
Figure 2: Bit error probability for various demodulators.
4.6 AT instability
The AT instability [7] refers to the bifurcation of a saddle-point solution without replica
symmetry from the replica-symmetric one. In this paper we follow the usual convention
and assume that the first such destabilization occurs in the so-called "replicon mode [8] ."
As the stability condition of the RS saddle-point solution for the MPM demodulator, we
obtain
    α − E² ∫ Dz sech⁴(√F z + E) = 0.        (21)
For the MFA demodulator, we have
    α − E² ∫ Dz [ (1 − f(z)²) / (1 + E(1 − f(z)²)) ]² = 0.        (22)
The RS solution is stable as long as the left-hand side of (21) or (22) is positive.
5 Performance evaluation
The saddle-point equations (13) and (18) can be solved numerically to evaluate the bit error probability P_b of the MPM demodulator and its naive mean-field approximation, respectively. We have investigated four demodulators: the optimal one (β = β_s), MAP, MFA (with β = β_s, i.e., the naive mean-field approximation to the optimal one), and CD. The results are summarized in Fig. 2 (a) and (b) for two cases with β_s = 1 and 20, respectively. Increasing α corresponds to relatively lowering the information bit rate, so that P_b should become small as α gets larger, which is consistent with the general trend observed in Fig. 2. The optimal demodulator shows consistently better performance than CD, as expected. The MAP demodulator marks almost the same performance as the optimal one (indeed the result of the MAP demodulator is nearly the same as that of the optimal demodulator in the case β_s = 1, so they are indistinguishable from each other in Fig. 2(a)).
We also found that the performance of the optimal, MAP, and MFA demodulators is significantly improved in the large-α region when the variance σ_s² of the noise is small relative to N, the number of the users. For example, in order to achieve a practical level of bit error probability, P_b ~ 10⁻⁵ say, in the β_s = 1 case the optimal and MAP demodulators allow an information bit rate 2 times faster than CD does. On the other hand, in the β_s = 20 case they allow an information bit rate as much as 20 times faster than CD, which demonstrates that significant process gain is achieved by the optimal and MAP demodulators in such cases.
The MFA demodulator with β = β_s showed performance competitive with the optimal one for the β_s = 1 case. Although the MFA demodulator fell behind the optimal and MAP demodulators in performance for the β_s = 20 case, it still had process gain which allows about 10 times faster information bit rate than CD does. Moreover, we observed, using (22), that the RS saddle-point solution for the MFA demodulator with β = β_s was stable with respect to replica symmetry breaking (RSB), and thus the RS ansatz was indeed valid
for the MFA solution. It suggests that the free energy landscape is rather simple for these
cases, making it easier for the MFA demodulator to find a good solution. This argument
provides an explanation as to why finite-temperature analog-valued Hopfield models, proposed heuristically by Kechriotis and Manolakos [2], exhibited better performance in their
numerical experiments. We also found that the RS saddle-point solution for the optimal
demodulator was stable with respect to RSB over the whole range investigated, whereas
the solution for the MAP demodulator was found to be unstable. This observation suggests
the possibility to construct efficient near-optimal demodulators using advanced mean-field
approximations, such as the TAP approach [9, 10].
Acknowledgments
This work is supported in part by Grant-in-Aid for Scientific Research from the Ministry
of Education, Science, Sports and Culture, Japan.
References
[1] M. K. Simon, J. K. Omura, R. A. Scholtz, and B. K. Levitt, Spread Spectrum Communications Handbook, Revised Ed., McGraw-Hill, 1994.
[2] G. I. Kechriotis and E. S. Manolakos, "Hopfield neural network implementation of
the optimal CDMA multiuser detector," IEEE Trans. Neural Networks, vol. 7, no. 1,
pp. 131-141,Jan. 1996.
[3] A. J. Viterbi, CDMA: Principles of Spread Spectrum Communication, Addison-Wesley,
Reading, Massachusetts, 1995.
[4] G. Winkler, Image Analysis, Random Fields and Dynamic Monte Carlo Methods,
Springer-Verlag, Berlin, Heidelberg, 1995.
[5] Y. Iba, "The Nishimori line and Bayesian statistics," J. Phys. A: Math. Gen., vol. 32,
no. 21, pp. 3875-3888, May 1999.
[6] A. J. Bray, H. Sompolinsky, and C. Yu, "On the 'naive' mean-field equations for spin
glasses," J. Phys. C: Solid State Phys., vol. 19, no. 32, pp. 6389-6406, Nov. 1986.
[7] J. R. L. de Almeida and D. J. Thouless, "Stability of the Sherrington-Kirkpatrick solution of a spin glass model," J. Phys. A: Math. Gen., vol. 11, no. 5, pp. 983-990, 1978.
[8] K. H. Fischer and J. A. Hertz, Spin Glasses, Cambridge University Press, Cambridge,
1991.
[9] D. J. Thouless, P. W. Anderson, and R. G. Palmer, "Solution of 'Solvable model of a
spin glass' ," Phil. Mag., vol. 35, no. 3, pp. 593-601 , 1977.
[10] Y. Kabashima and D. Saad, "The belief in TAP," in M. S. Kearns et al. (eds.), Advances
in Neural Information Processing Systems, vol. 11, The MIT Press, pp. 246-252, 1999.
1,015 | 1,928 | Kernel expansions with unlabeled examples
Martin Szummer
MIT AI Lab & CBCL
Cambridge, MA
szummer@ai.mit.edu
Tommi Jaakkola
MIT AI Lab
Cambridge, MA
tommi@ai.mit.edu
Abstract
Modern classification applications necessitate supplementing the few
available labeled examples with unlabeled examples to improve classification performance. We present a new tractable algorithm for exploiting unlabeled examples in discriminative classification. This is achieved
essentially by expanding the input vectors into longer feature vectors via
both labeled and unlabeled examples. The resulting classification method
can be interpreted as a discriminative kernel density estimate and is readily trained via the EM algorithm, which in this case is both discriminative
and achieves the optimal solution. We provide, in addition, a purely discriminative formulation of the estimation problem by appealing to the
maximum entropy framework. We demonstrate that the proposed approach requires very few labeled examples for high classification accuracy.
1 Introduction
In many modern classification problems such as text categorization, very few labeled examples are available but a large number of unlabeled examples can be readily acquired.
Various methods have recently been proposed to take advantage of unlabeled examples to
improve classification performance. Such methods include the EM algorithm with naive
Bayes models for text classification [1], the co-training framework [2], transduction [3, 4],
and maximum entropy discrimination [5] .
These approaches are divided primarily on the basis of whether they employ generative
modeling or are motivated by robust classification. Unfortunately, the computational effort
scales exponentially with the number of unlabeled examples for exact solutions in discriminative approaches such as transduction [3, 5]. Various approximations are available [4, 5]
but their effect remains unclear.
In this paper, we formulate a complementary discriminative approach to exploiting unlabeled examples, effectively by using them to expand the representation of examples. This
approach has several advantages including the ability to represent the true Bayes optimal
decision boundary and making explicit use of the density over the examples. It is also
computationally feasible as stated.
The paper is organized as follows . We start by discussing the kernel density estimate and
providing a smoothness condition, assuming labeled data only. We subsequently introduce
unlabeled data, define the expansion and formulate the EM algorithm for discriminative
training. In addition, we provide a purely discriminative version of the parameter estimation problem and formalize it as a maximum entropy discrimination problem. We then
demonstrate experimentally that various concerns about the approach are not warranted.
2 Kernel density estimation and classification
We start by assuming a large number of labeled examples D = {(x_1, ỹ_1), ..., (x_N, ỹ_N)}, where ỹ_i ∈ {−1, 1} and x_i ∈ R^d. A joint kernel density estimate can be written as

    P(x, y) = (1/N) Σ_{i=1}^{N} δ(y, ỹ_i) K(x, x_i)        (1)

where ∫ K(x, x_i) dμ(x) = 1 for each i. With an appropriately chosen kernel K, a function of N, P(x, y) will be consistent in the sense of converging to the joint density as N → ∞.
Given a fixed number of examples, the kernel functions K(x, x_i) may be viewed as conditional probabilities P(x|i), where i indexes the observed points. For the purposes of this paper, we assume a Gaussian form K(x, x_i) = N(x; x_i, σ²I). The labels ỹ_i assigned to the sampled points x_i may themselves be noisy and we incorporate P(y|i), a location-specific probability of labels. The resulting joint density model is

    P(x, y) = (1/N) Σ_{i=1}^{N} P(y|i) P(x|i).
Interpreting 1/N as a prior probability of the index variable i = 1, ..., N, the resulting model conforms to the graph depicted above. This is reminiscent of the aspect model for clustering of dyadic data [6]. There are two main differences. First, the number of aspects here equals the number of examples and the model is not suitable for clustering. Second, we do not search for the probabilities P(x|i) (kernels); instead they are associated with each observed example and are merely adjusted in terms of scale (kernel width). This restriction
yields a significant computational advantage in classification, which is the objective in this
paper.
The posterior probability of the label y given an example x is given by P(y|x) = Σ_i P(y|i)P(i|x), where P(i|x) ∝ P(x|i)/P(x), as P(i) is assumed to be uniform. The quality of the posterior probability depends both on how accurately the P(y|i) are known as well as on the properties of the membership probabilities P(i|x) (always known) that must be relatively smooth.
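Computationally, the membership probabilities P(i|x) are just normalized kernel evaluations. A minimal NumPy sketch (the function and variable names are mine):

    import numpy as np

    def membership_probs(X, x, sigma):
        # P(i|x) for Gaussian kernels K(x, x_i) = N(x; x_i, sigma^2 I).
        # X: (N, d) array of all labeled + unlabeled points; x: (d,) query point.
        sq_dists = np.sum((X - x) ** 2, axis=1)
        log_k = -sq_dists / (2.0 * sigma ** 2)   # log K(x, x_i) up to a constant
        log_k -= log_k.max()                     # numerical stabilization
        p = np.exp(log_k)
        return p / p.sum()                       # P(i|x), with uniform P(i)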
Here we provide a simple condition on the membership probabilities P(i|x) so that any noise in the sampled labels for the available examples would not preclude accurate decisions. In other words, we wish to ensure that the conditional probabilities P(y|x) can be evaluated accurately on the basis of the sampled estimate in Eq. (1). Removing the label noise provides an alternative way of setting the width parameter σ of the Gaussian kernels. The simple lemma below, obtained via standard large deviation methods, ties the appropriate choice of the kernel width σ to the squared norm of the membership probabilities P(i|x_j).
Lemma 1 Let I_N = {1, ..., N}. Given any δ > 0, ε > 0, and any collection of distributions p_{i|k} ≥ 0, Σ_{i∈I_N} p_{i|k} = 1 for k ∈ I_N, such that ||p_{·|k}||₂ ≤ ε/√(2 log(2N/δ)), ∀k ∈ I_N, and independent samples ỹ_i ∈ {−1, 1} from some P(y|i), i ∈ I_N, then

    P( ∃k ∈ I_N : | Σ_{i=1}^{N} ỹ_i p_{i|k} − Σ_{i=1}^{N} w_i p_{i|k} | > ε ) ≤ δ,

where w_i = P(y = 1|i) − P(y = −1|i) and the probability is taken over the independent samples.
The lemma applies to our case by setting p_{i|k} = P(i|x_k), with {ỹ_i} the sampled labels for the examples, and by noting that the sign of Σ_i w_i P(i|x) is the MAP decision rule from our model, P(y = 1|x) − P(y = −1|x). The lemma states that as long as the membership probabilities have appropriately bounded squared norm, the noise in the labeling is inconsequential for the classification decisions. Note, for example, that a distribution p_{i|k} = 1/N has ||p_{·|k}||₂ = 1/√N, implying that the conditions are achievable for large N. The squared norm of P(i|x) is directly controlled by the kernel width σ², and thus the lemma ties the kernel width with the accuracy of estimating the conditional probabilities P(y|x). Algorithms for adjusting the kernel width(s) on the basis of this will be presented
in a longer version of the paper.
3 The expansion and EM estimation
A useful way to view the resulting kernel density estimate is that each example x is represented by a vector of membership probabilities P(i|x), i = 1, ..., N. Such mixture distance representations have been used extensively; it can also be viewed as a Fisher score vector computed with respect to the adjustable weighting P(i). The examples in this new representation are classified by associating P(y|i) with each component and computing P(y|x) = Σ_i P(y|i)P(i|x). An alternative approach to exploiting kernel density estimates in classification is given by [7].
We now assume that we have labels for only a few examples, and our training data is {(x_1, ỹ_1), ..., (x_L, ỹ_L), x_{L+1}, ..., x_N}. In this case, we may continue to use the model defined above and estimate the free parameters, P(y|i), i = 1, ..., N, from the few labeled examples. In other words, we can maximize the conditional log-likelihood

    Σ_{l=1}^{L} log P(ỹ_l | x_l) = Σ_{l=1}^{L} log Σ_{i=1}^{N} P(ỹ_l | i) P(i | x_l)        (2)

where the first summation is only over the labeled examples and L ≪ N. Since the P(i|x_l)
are fixed, this objective function is jointly concave in the free parameters and lends itself
to a unique maximum value. The concavity also guarantees that this optimization is easily
performed via the EM algorithm [8].
Let p_{il} be the soft assignment for component i given (x_l, ỹ_l), i.e., p_{il} = P(i|x_l, ỹ_l) ∝ P(ỹ_l|i)P(i|x_l). The EM algorithm iterates between the E-step, where the p_{il} are recomputed from the current estimates of P(y|i), and the M-step, where we update P(y|i) ← Σ_{l: ỹ_l = y} p_{il} / Σ_l p_{il}.
This procedure may have to be adjusted in cases where the overall frequency of different
labels in the (labeled) training set deviates significantly from uniform. A simple rescaling
P(y|i) ← P(y|i)/L_y by the frequencies L_y and renormalization after each M-step would
probably suffice.
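The E- and M-steps above take only a few lines of code. A minimal sketch (my own implementation of the stated updates; labels are assumed to be in {−1, +1} and R[l, i] = P(i|x_l) is precomputed, e.g. with membership_probs above):

    def em_fit(R, y_lab, n_iter=100):
        # Estimate P(y=+1 | i) by EM from L labeled points.
        L, N = R.shape
        p = np.full(N, 0.5)                          # P(y=+1 | i), uninformative start
        for _ in range(n_iter):
            # E-step: p_il = P(i | x_l, y_l) proportional to P(y_l | i) P(i | x_l)
            lik = np.where(y_lab[:, None] == 1, p[None, :], 1.0 - p[None, :])
            P_il = lik * R
            P_il /= np.maximum(P_il.sum(axis=1, keepdims=True), 1e-12)
            # M-step: P(y=+1 | i) <- responsibility mass from positively labeled points
            pos = P_il[y_lab == 1].sum(axis=0)
            tot = P_il.sum(axis=0)
            p = np.where(tot > 0, pos / np.maximum(tot, 1e-12), 0.5)
        return p

Predictions for any x then follow from P(y = 1|x) = Σ_i P(y = 1|i) P(i|x).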
The runtime of this algorithm is O(LN). The discriminative formulation suggests that EM will provide reasonable parameter estimates P(y|i) for classification purposes. The quality of the solution, as well as the potential for overfitting, is contingent on the smoothness of the kernels or, equivalently, the smoothness of the membership probabilities P(i|x). Note, however, that whether or not P(y|i) will converge to the extreme values 0 or 1 is not an indication of overfitting. Actual classification decisions for unlabeled examples x_i (included in the expansion) need to be made on the basis of P(y|x_i) and not on the basis of P(y|i), which function as parameters.
4 Discriminative estimation
An alternative discriminative formulation is also possible, one that is more sensitive to the
decision boundary rather than probability values associated with the labels. To this end,
consider the conditional probability P(y|x) = Σ_i P(y|i)P(i|x). The decisions are made on the basis of the sign of the discriminant function

    f(x) = P(y = 1|x) − P(y = −1|x) = Σ_{i=1}^{N} w_i P(i|x)        (3)

where w_i = P(y = 1|i) − P(y = −1|i). This is similar to a linear classifier and there
are many ways of estimating the weights w_i discriminatively. The weights should remain bounded, however, i.e., w_i ∈ [−1, 1], so long as we wish to maintain the kernel density interpretation. Estimation algorithms with Euclidean norm regularization such as SVMs would not be appropriate in this sense. Instead, we employ the maximum entropy discrimination (MED) framework [5] and rely on the relation w_i = E{y_i} = Σ_{y_i = ±1} y_i P(y) to estimate the distribution P(y) over all the labels y = [y_1, ..., y_N]. Here y_i is a parameter associated with the ith example and should be distinguished from any observed labels. We can show that in this case the maximum entropy solution factors across the examples, P(y_1, ..., y_N) = Π_i P_i(y_i), and we can formulate the estimation problem directly in terms of the marginals P_i(y_i).
The maximum entropy formalism encodes the principle that label assignments P_i(y_i) for the examples should remain uninformative to the extent possible given the classification objective. More formally, given a set of L labeled examples (x_1, ỹ_1), ..., (x_L, ỹ_L), we maximize Σ_{i=1}^{N} H(y_i) − C Σ_{l=1}^{L} ξ_l subject to the classification constraints

    ỹ_l Σ_{i=1}^{N} E{y_i} P(i|x_l) ≥ γ − ξ_l,   l = 1, ..., L,        (4)

where H(y_i) is the entropy of y_i relative to the marginal P_i(y_i). Here γ specifies the target separation (γ ∈ [0, 1]) and the slack variables ξ_l ≥ 0 permit deviations from the target to ensure that a solution always exists. The solution is not very sensitive to these parameters, and γ = 0.1 and C = 40 worked well for many problems. The advantage of this formulation is that effort is spent only on those training examples whose classification is uncertain. Examples already classified correctly with a margin larger than γ are effectively ignored.
The optimization problem and algorithms are explained in the appendix.
5 Discussion of the expanded representation
The kernel expansion enables us to represent the Bayes optimal decision boundary provided
that the kernel density estimate is sufficiently accurate. With this representation, the EM
and MED algorithms actually estimate decision boundaries that are sensitive to the density
P(x). For example, labeled points in high-density regions will influence the boundary
more than in low-density regions . The boundary will partly follow the density, but unlike in
unsupervised methods, will adhere strongly to the labeled points. Moreover, our estimation
techniques limit the effect of outliers, as all points have a bounded weight w_i ∈ [−1, 1]
(spurious unlabeled points do not adversely affect the boundary).
As we impose smoothness constraints on the membership probabilities P(i|x), we also guarantee that the capacity of the resulting classifier need not increase with the number of unlabeled examples (in the fat shattering sense). Also, in the context of the maximum entropy formulation, if a point is not helpful for the classification constraints, then entropy is maximized for P_i(y = ±1) = 0.5, implying w_i = 0, and the point has no effect on the boundary.
If we dispense with the conditional probability interpretation of the kernels K, we are
free to choose them from a more general class of functions. For example, the kernels
no longer have to integrate to 1. An expansion of x in terms of these kernels can still
be meaningful; as a special case, when linear kernels are chosen, the expansion reduces
to weighting distances between points by the covariance of the data. Distinctions along
high variance directions then become easier to make, which is helpful when between-class
scatter is greater than within-class scatter.
Thus, even though the probabilistic interpretation is missing, a simple preprocessing step
can still help, e.g., support vector machines to take advantage of unlabeled data: we can
expand the inputs x in terms of kernels G from labeled and unlabeled points as in φ(x) = (1/Z)[G(x, x_1), ..., G(x, x_N)], where Z optionally normalizes the feature vector.
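Such an expansion is easy to apply as a preprocessing step before any off-the-shelf classifier. A sketch under the stated assumptions (Gaussian or linear G; the function names and the choice of Euclidean normalization for Z are mine):

    def kernel_expand(X_all, X, sigma=None):
        # phi(x) = (1/Z)[G(x, x_1), ..., G(x, x_N)] over all N labeled + unlabeled points.
        if sigma is None:
            Phi = X @ X_all.T                                   # linear G(x, x') = x . x'
        else:
            d2 = ((X[:, None, :] - X_all[None, :, :]) ** 2).sum(-1)
            Phi = np.exp(-d2 / (2.0 * sigma ** 2))              # Gaussian G
        Z = np.linalg.norm(Phi, axis=1, keepdims=True)          # optional normalization
        return Phi / np.maximum(Z, 1e-12)

The expanded features can then be fed directly to, e.g., a linear SVM.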
6 Results
We first address the potential concern that the expanded representation may involve too
many degrees of freedom and result in poor generalization. Figure la) demonstrates that
this is not the case and, instead, the test classification error approaches the limiting asymptotic rate exponentially fast. The problem considered was a DNA splice site classification
problem with 500 examples for which d = 100. Varying sizes of random subsets were
labeled and all the examples were used in the expansion as unlabeled examples. The error rate was computed on the basis of the remaining 500 - L examples without labels,
where L denotes the number of labeled examples. The results in the figure were averaged
across 20 independent runs. The exponential rate of convergence towards the limiting rate
is evidenced by the linear trend in the semilog figure la). The mean test errors shown in figure Ib) indicate that the purely discriminative training (MED) can contribute substantially
to the accuracy. The kernel width in these experiments was simply fixed to the median
distance to the 5th nearest neighbor from the opposite class. Results from other methods
of choosing the kernel width (the squared norm, adaptive) will be discussed in the longer
version of the paper.
Another concern is perhaps that the formulation is valid only in cases where we have a
large number of unlabeled examples. In principle, the method could deteriorate rapidly
after the kernel density estimate no longer can be assumed to give reasonable estimates.
Figure 2a) illustrates that this is not a valid interpretation. The problem here is to classify
DNA micro array experiments on the basis of the leukemia types that the tissues used in
the array experiments corresponded to. Each input vector for the classifier consists of the
expression levels of over 7000 genes that were included as probes in the arrays. The number
of examples available was 38 for training and 34 for testing. We included all examples
as unlabeled points in the expansion and randomly selected subsets of labeled training
examples, and measured the performance only on the test examples (which were of slightly
different type and hence more appropriate for assessing generalization). Figure 2 shows
rapid convergence for EM and the discriminative MED formulation. The "asymptotic"
level here corresponds to about one classification error among the 34 test examples. The
results were averaged over 20 independent runs.
7 Conclusion
We have provided a complementary framework for exploiting unlabeled examples in discriminative classification problems. The framework involves a combination of the ideas of
kernel density estimation and representational expansion of input vectors. A simple EM
Figure 1: a) A semilog plot of the test error rate for the EM formulation less the asymptotic
rate as a function of labeled examples. The linear trend in the figure implies that the error
rate approaches the asymptotic error exponentially fast. b) The mean test errors for EM,
MED and SVM as a function of the number of labeled examples. SVM does not use
unlabeled examples.
Figure 2: The mean test errors for the leukemia classification problem as a function of the
number of randomly chosen labeled examples. Results are given for both EM (lower line)
and MED (upper line) formulations.
algorithm is sufficient for finding globally optimal parameter estimates but we have shown
that a purely discriminative formulation can yield substantially better results within the
framework.
Possible extensions include using the kernel expansions with transductive algorithms that
enforce margin constraints also for the unlabeled examples [5] . Such combination can be
particularly helpful in terms of capturing the lower dimensional structure of the data. Other
extensions include analysis of the framework similarly to [9].
Acknowledgments
The authors gratefully acknowledge support from NTT and NSF. Szummer would also like
to thank Thomas Minka for many helpful discussions and insights.
References
[1] Nigam K. , McCallum A. , Thrun S., and Mitchell T. (2000) Text classification from
labeled and unlabeled examples. Machine Learning 39 (2):103-134.
[2] Blum A., Mitchell T. (1998) Combining Labeled and Unlabeled Data with Co-Training. In Proc. 11th Annual Conf. Computational Learning Theory, pp. 92-100.
[3] Vapnik V. (1998) Statistical learning theory. John Wiley & Sons.
[4] Joachims, T. (1999) Transductive inference for text classification using support vector
machines. International Conference on Machine Learning.
[5] Jaakkola T., Meila M., and Jebara T. (1999) Maximum entropy discrimination. In
Advances in Neural Information Processing Systems 12.
[6] Hofmann T., Puzicha J. (1998) Unsupervised Learning from Dyadic Data. International Computer Science Institute, TR-98-042.
[7] Tong S., Koller D. (2000) Restricted Bayes Optimal Classifiers. Proceedings AAAI.
[8] Miller D., Uyar T. (1996) A Mixture of Experts Classifier with Learning Based on
Both Labelled and Unlabelled Data. In Advances in Neural Information Processing
Systems 9, pp. 571-577.
[9] Castelli V., Cover T. (1996) The relative value of labeled and unlabeled samples in
pattern recognition with an unknown mixing parameter. IEEE Transactions on information theory 42 (6): 2102-2117.
A Maximum entropy solution
The unique solution to the maximum entropy estimation problem is found via introducing Lagrange multipliers {λ_l} for the classification constraints. The multipliers satisfy λ_l ∈ [0, C], where the lower bound comes from the inequality constraints and the upper bound from the linear margin penalties being minimized. To represent the solution and find the optimal setting of λ_l we must evaluate the partition function

    Z(λ) = e^{−γ Σ_l λ_l} Σ_y Π_{i=1}^{N} e^{Σ_l λ_l ỹ_l y_i P(i|x_l)}        (5)

         = e^{−γ Σ_l λ_l} Π_{i=1}^{N} ( e^{Σ_l ỹ_l λ_l P(i|x_l)} + e^{−Σ_l ỹ_l λ_l P(i|x_l)} )        (6)
that normalizes the maximum entropy distribution. Here ỹ denotes the observed labels. Minimizing the jointly convex log-partition function log Z(λ) with respect to the Lagrange multipliers leads to the optimal setting {λ_l*}. This optimization is readily done via an axis-parallel line search (e.g., the bisection method). The required gradients are given by
    ∂ log Z(λ)/∂λ_k = −γ + Σ_{i=1}^{N} tanh( Σ_l ỹ_l λ_l P(i|x_l) ) ỹ_k P(i|x_k)        (7)

                    = −γ + ỹ_k Σ_{i=1}^{N} E_{P*}{y_i} P(i|x_k)        (8)
(this is essentially the classification constraint). The expectation is taken with respect to the maximum entropy distribution P*(y_1, ..., y_N) = P_1*(y_1) ⋯ P_N*(y_N), where the components are P_i*(y_i) ∝ exp{Σ_l ỹ_l λ_l y_i P(i|x_l)}. The label averages w_i* = E_{P*}{y_i} = Σ_{y_i} y_i P_i*(y_i) are needed for the decision rule as well as in the optimization. We can identify these from above as w_i* = tanh(Σ_l ỹ_l λ_l P(i|x_l)) and they are readily evaluated. Finding the solution involves O(L²N) operations.
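A sketch of this optimization: sweep over the multipliers λ_l and, for each one, find the zero of the gradient (7) by bisection on [0, C] (the sweep schedule and tolerance are my own implementation choices):

    def med_fit(R_lab, y_lab, gamma=0.1, C=40.0, sweeps=50, tol=1e-6):
        # Axis-parallel minimization of log Z(lam), Eqs. (5)-(8).
        # R_lab[l, i] = P(i|x_l) for the L labeled points; y_lab[l] in {-1, +1}.
        L, N = R_lab.shape
        lam = np.zeros(L)

        def dlogZ(k, lam_k):
            lam_try = lam.copy()
            lam_try[k] = lam_k
            a = (y_lab * lam_try) @ R_lab        # a_i = sum_l y_l lam_l P(i|x_l)
            return -gamma + y_lab[k] * (np.tanh(a) @ R_lab[k])   # Eq. (7)

        for _ in range(sweeps):
            for k in range(L):
                # log Z is convex, so its partial derivative is nondecreasing in lam_k:
                # clip at the box [0, C], otherwise bisect for the stationary point.
                if dlogZ(k, 0.0) >= 0.0:
                    lam[k] = 0.0
                elif dlogZ(k, C) <= 0.0:
                    lam[k] = C
                else:
                    lo, hi = 0.0, C
                    while hi - lo > tol:
                        mid = 0.5 * (lo + hi)
                        lo, hi = (lo, mid) if dlogZ(k, mid) > 0.0 else (mid, hi)
                    lam[k] = 0.5 * (lo + hi)
        return np.tanh((y_lab * lam) @ R_lab)    # w_i* = E{y_i}, for the rule (3)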
Often the numbers of positive and negative training labels are imbalanced. The MED formulation (analogously to SVMs) can be adjusted by defining the margin penalties as C⁺ Σ_{l: ỹ_l = 1} ξ_l + C⁻ Σ_{l: ỹ_l = −1} ξ_l, where, for example, L⁺C⁺ = L⁻C⁻ equalizes the mean penalties. The coefficients C⁺ and C⁻ can also be modified adaptively during the estimation process to balance the rate of misclassification errors across the two classes.
1,016 | 1,929 | High-temperature expansions for learning
models of nonnegative data
Oliver B. Downs
Dept. of Mathematics
Princeton University
Princeton, NJ 08544
obdowns@princeton.edu
Abstract
Recent work has exploited boundedness of data in the unsupervised
learning of new types of generative model. For nonnegative data it was
recently shown that the maximum-entropy generative model is a Nonnegative Boltzmann Distribution not a Gaussian distribution, when the
model is constrained to match the first and second order statistics of the
data. Learning for practical sized problems is made difficult by the need
to compute expectations under the model distribution. The computational cost of Markov chain Monte Carlo methods and low fidelity of
naive mean field techniques has led to increasing interest in advanced
mean field theories and variational methods. Here I present a second-order mean-field approximation for the Nonnegative Boltzmann Machine
model, obtained using a "high-temperature" expansion. The theory is
tested on learning a bimodal 2-dimensional model, a high-dimensional
translationally invariant distribution, and a generative model for handwritten digits.
1 Introduction
Unsupervised learning of generative and feature-extracting models for continuous nonnegative data has recently been proposed [1], [2] . In [1], it was pointed out that the maximum
entropy distribution (matching Ist- and 2nd-order statistics) for continuous nonnegative
data is not Gaussian, and indeed that a Gaussian is not in general a good approximation
to that distribution. The true maximum entropy distribution is known as the Nonnegative Boltzmann Distribution (NNBD), (previously the rectified Gaussian distribution [3]) ,
which has the functional form
p(x)
= {o~exp[-E(X)]
if Xi ~ OVi,
if any Xi < 0,
(1)
where the energy function E(x) and normalisation constant Z are:
E(x)
Z
(3x T Ax - bT X,
= (
10;"20
dx exp[-E(x)].
(2)
(3)
In contrast to the Gaussian distribution, the NNBD can be multimodal in which case its
modes are confined to the boundaries of the nonnegative orthant.
The Nonnegative Boltzmann Machine (NNBM) has been proposed as a method for learning
the maximum likelihood parameters for this maximum entropy model from data. Without
hidden units, it has the stochastic-EM learning rule:
    ΔA_ij ∝ ⟨x_i x_j⟩_f − ⟨x_i x_j⟩_c,        (4)

    Δb_i ∝ ⟨x_i⟩_c − ⟨x_i⟩_f,        (5)

where the subscript "c" denotes a "clamped" average over the data, and the subscript "f" denotes a "free" average over the NNBD:

    ⟨f(x)⟩_c = (1/M) Σ_{μ=1}^{M} f(x^{(μ)}),        (6)

    ⟨f(x)⟩_f = ∫_{x ≥ 0} dx p(x) f(x).        (7)
This learning rule has hitherto been extremely computationally costly to implement, since naive variational/mean-field approximations for ⟨xxᵀ⟩_f are found empirically to be poor, leading to the need to use Markov chain Monte Carlo methods. This has made the NNBM impractical for application to high-dimensional data.
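For concreteness, one stochastic-EM update of Eqs. (4)-(7) looks as follows in NumPy; the sampler for the free averages (e.g. the slice sampler of [5]) is left abstract, and the learning rate and all names are my own:

    import numpy as np

    def nnbm_learning_step(A, b, X_data, sample_free, lr=0.01):
        # One update of the NNBM parameters via the learning rule, Eqs. (4)-(5).
        # X_data: (M, d) nonnegative training data;
        # sample_free(A, b, n): draws n samples from the model p(x).
        X_free = sample_free(A, b, len(X_data))
        xx_c = X_data.T @ X_data / len(X_data)      # clamped <x x^T>, Eq. (6)
        xx_f = X_free.T @ X_free / len(X_free)      # free    <x x^T>, Eq. (7)
        A = A + lr * (xx_f - xx_c)                  # Eq. (4)
        b = b + lr * (X_data.mean(axis=0) - X_free.mean(axis=0))  # Eq. (5)
        return A, b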
While the NNBD is generally skewed and hence has moments of order greater than 2, the
maximum-likelihood learning rule suggests that the distribution can be described solely in
terms of the 1st- and 2nd-order statistics of the data. With that in mind, I have pursued
advanced approximate models for the NNBM.
In the following section I derive a second-order approximation for ⟨x_i x_j⟩_f analogous to the TAP-Onsager correction for the mean-field Ising Model, using a high-temperature expansion [4]. This produces an analytic approximation for the parameters A_ij, b_i in terms of
the mean and cross-correlation matrix of the training data.
2 Learning approximate NNBM parameters using high-temperature
expansion
Here I use Taylor expansion of a "free energy" directly related to the partition function
of the distribution, Z in the fJ = 0 limit, to derive a second-order approximation for the
NNBM model parameters. In this free energy we embody the constraint that Eq. 5 is
satisfied:
where fJ is an "inverse temperature". There is a direct relationship between the "free energy", G and the normalisation, Z of the NNBD, Eq. 3.
-In Z
= G(fJ, m) + Constant(b, m)
(9)
Thus,
(10)
The Lagrange multipliers λ_i embody the constraint that ⟨x_i⟩_f matches the mean field of the patterns, m_i = ⟨x_i⟩_c. This effectively forces Δb = 0 in Eq. 5, with b_i = −λ_i(β). Since the Lagrange constraint is enforced for all temperatures, we can solve for the specific case β = 0:
    m_i = ⟨x_i⟩_f |_{β=0} = [ ∫_{x ≥ 0} x_i exp(−Σ_l λ_l(0)(x_l − m_l)) Π_k dx_k ] / [ ∫_{x ≥ 0} exp(−Σ_l λ_l(0)(x_l − m_l)) Π_k dx_k ] = 1/λ_i(0).        (11)

Note that this embodies the unboundedness of x_k in the nonnegative orthant, as compared to the equivalent term of Georges & Yedidia for the Ising model, m_i = tanh(λ_i(0)).
We consider Taylor expansion of Eq. 8 about the "high temperature" limit, β = 0:

    G(β, m) = G(0, m) + β (∂G/∂β)|_{β=0} + (β²/2) (∂²G/∂β²)|_{β=0} + ...        (12)
Since the integrand becomes factorable in Xi in this limit, the infinite temperature values of
G and its derivatives are analytically calculable.
    G(β, m)|_{β=0} = −Σ_k ln ∫_{x_k = 0}^{∞} exp( −λ_k(0)(x_k − m_k) ) dx_k        (13)

and, using Eq. 11,

    G(β, m)|_{β=0} = −Σ_k ln( (1/λ_k(0)) exp(λ_k(0) m_k) ) = −N − Σ_k ln m_k.        (14)
The first derivative is then as follows
    (∂G/∂β)|_{β=0} = ⟨ Σ_{i,j} A_ij x_i x_j + Σ_i (x_i − m_i) ∂λ_i/∂β ⟩_{β=0}        (15)

                   = Σ_{i,j} (1 + δ_ij) A_ij m_i m_j.        (16)
i,j
This term is exactly the result of applying naive mean-field theory to this system, as in [1].
Likewise we obtain the second derivative
~~~ Ip~o ~ - ( (~A';X'X;) ')
+
(pi
+
O';)A,;m,m;) ,
.8=0
+
(~AijXiXj L ~; (Xk t,}
=-
(17)
mk))
k
.8=0
L L Qijkl Aij Aklmimjmkml
i,j k,l
(18)
where Q_ijkl contains the integer coefficients arising from integration by parts in the first and second terms, and the (1 + δ_ij) in the second term of Eq. 17.
This expansion is to the same order as the TAP-Onsager correction term for the Ising model,
which can be derived by an analogous approach to the equivalent free-energy [4]. Substituting these results into Eq. 10, we obtain
    β ⟨x_i x_j⟩_f ≈ β (1 + δ_ij) m_i m_j − (β²/2) Σ_{k,l} Q_ijkl A_kl m_i m_j m_k m_l        (19)
We arrive at an analytic approximation for Aij as a function of the 1st and 2nd moments of
the data, using Eq. 19 in the learning rule, Eq. 4, setting ΔA_ij = 0 and solving the linear
equation for A.
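In matrix form this solve is a single linear system in the entries of A. A sketch (my own vectorization, assuming the system is nonsingular; the coefficient tensor Q_ijkl of Eq. 17 is taken as a given input, since the text fixes it only implicitly):

    def solve_A_second_order(m, C, Q, beta=1.0):
        # Solve the linear equation from Eqs. (4) and (19) for A.
        # m: (d,) data means <x>_c;  C: (d, d) data cross-correlations <x x^T>_c;
        # Q: (d, d, d, d) integer coefficient tensor from Eq. (17).
        d = len(m)
        rhs = beta * C - beta * (1.0 + np.eye(d)) * np.outer(m, m)
        mmmm = np.einsum('i,j,k,l->ijkl', m, m, m, m)
        Lmat = -(beta**2 / 2.0) * (Q * mmmm)
        A = np.linalg.solve(Lmat.reshape(d*d, d*d), rhs.reshape(-1))
        return A.reshape(d, d)

Equation (24) below then gives b directly from the solved A and the means m.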
We can obtain an equivalent expansion for λ_i(β) and hence b_i. To first order in β (equivalent to the order of β in the approximation for A), we have

    λ_i(β) ≈ λ_i(0) + β (∂λ_i/∂β)|_{β=0} + ...        (20)

Using Eqs. 11 & 15,

    (∂λ_i/∂β)|_{β=0} = −Σ_j (1 + δ_ij) A_ij m_j.        (21)-(23)

Hence

    b_i = −λ_i(β) ≈ −1/m_i + β Σ_j (1 + δ_ij) A_ij m_j.        (24)
The approach presented here makes an explicit approximation of the statistics required for the NNBM learning rule, ⟨xxᵀ⟩_f, which can be substituted in the fixed-point equation Eq. 4, and yields a linear equation in A to be solved. This is in contrast to the linear response theory approach of Kappen & Rodriguez [6] to the Boltzmann Machine, which exploits the relationship

    ∂² ln Z / ∂b_i ∂b_j = ⟨x_i x_j⟩ − ⟨x_i⟩⟨x_j⟩ = χ_ij        (25)
between the free energy and the covariance matrix X of the model. In the learning problem,
this produces a quadratic equation in A, the solution of which is non-trivial. Computationally efficient solutions of the linear response theory are then obtained by secondary
approximation of the 2nd-order term, compromising the fidelity of the model.
3 Learning a 'Competitive' Nonnegative Boltzmann Distribution
A visualisable test problem is that of learning a bimodal NNBD in 2 dimensions. Monte-Carlo slice sampling (see [1] & [5]) was used to generate 200 samples from a NNBD as shown in Fig. 1(a). The high-temperature expansion was then used to learn approximate parameters for the NNBM model of this data. A surface plot of the resulting model distribution is shown in Fig. 1(b); it is clearly a valid candidate generative distribution for the data. This is in strong contrast with a naive mean field (β = 0) model, which by construction would be unable to produce a multiple-peaked approximation, as previously described [1].
4 Orientation Tuning in Visual Cortex - a translationally invariant model
The neural network model of Ben-Yishai et al. [7] for orientation-tuning in visual cortex
has the property that its dynamics exhibit a continuum of stable states which are trans-
Figure 1: (a) Training data, generated from 2-dimensional 'competitive' NNBD, (b)
Learned model distribution, under the high temperature expansion.
lationally invariant across the network. The energy function of the network model is a
translationally invariant function of the angles of maximal response, θ_i, of the N neurons, and can be mapped directly onto the energy of the NNBM, as described in [1]:

    A_ij = γ( δ_ij + ε₀/N − (ε₀/N) cos( (2π/N)|i − j| ) ),        b_i = γ.        (26)
We can generate training data for the NNBM by sampling from the neural network model
with known parameters. It is easily shown that Aii has 2 equal negative eigenvalues, the
remainder being positive and equal in value. The corresponding pair of eigenvectors of A
are sinusoids of period equal to the width of the stable activation bumps of the network,
with a small relative phase.
Here, the NNBM parameters have been solved using the high-temperature expansion for
training data generated by Monte Carlo slice-sampling [5] from a 10-neuron model with parameters ε₀ = 4, γ = 100 in Eq. 26. Fig. 2 illustrates modal activity patterns of the learned
NNBM model distribution, found using gradient ascent of the log-likelihood function from
a random initialisation of the variables.
    Δx ∝ [−Ax + b]⁺        (27)
where the superscript + denotes rectification.
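These dynamics can be implemented as a projected gradient ascent; the sketch below rectifies the iterate (rather than the raw update of Eq. 27), which is my own implementation choice to keep x in the nonnegative orthant:

    def find_mode(A, b, x0, lr=0.01, n_iter=5000):
        # Ascend the NNBM log-likelihood from a random start, cf. Eq. (27).
        x = x0.copy()
        for _ in range(n_iter):
            x = np.maximum(x + lr * (-A @ x + b), 0.0)   # rectified ascent step
        return x

Restarting from several random x0 locates the distinct modes reported in Fig. 2.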
These modes of the approximate NNBM model are highly similar to the training patterns,
also the eigenvectors and eigenvalues of A exhibit similar properties between their learned
and training forms. This gives evidence that the approximation is successful in learning a
high-dimensional translationally invariant NNBM model.
5 Generative Model for Handwritten Digits
In figure 3, I show the results of applying the high-temperature NNBM to learning a generative model for the feature coactivations of the Nonnegative Matrix Factorization [2]
Figure 2: Upper: 2 modal states of the NNBM model density, located by gradient-ascent
of the log-likelihood from different random initialisations. Lower: The two negative-eigenvalue eigenvectors of A - a) in the learned model, and b) as used to generate the
training data.
decomposition of a database of the handwritten digits, 0-9. This problem contains none of
the space-filling symmetry of the visual cortex model, and hence requires a more strongly
multimodal generative model distribution to generate distinct digits. Here performance is
poor, although superior to uniformly-sampled feature activations.
6 Discussion
In this work, an approximate technique has been derived for directly determining the
NNBM parameters A, b in terms of the 1st- and 2nd-order statistics of the data, using
the method of high-temperature expansion. To second order this produces corrections to
the naive mean field approximation of the system analogous to the TAP term for the Ising
Model/Boltzmann Machine. The efficacy of this approximation has been demonstrated
in the pathological case of learning the 'competitive' NNBD, learning the translationally
invariant model in 10 dimensions, and a generative model for handwritten digits.
These results demonstrate an improvement in approximation to models in this class over
a naive mean field (β = 0) approach, without reversion to secondary assumptions such as
those made in the linear response theory for the Boltzmann Machine.
There is strong current interest in the relationship between TAP-like mean field theory,
variational approximation and belief-propagation in graphical models with loops. All of
these can be interpreted in terms of minimising an effective free energy of the system [8].
The distinction in the work presented here lies in choosing optimal approximate statistics
to learn the true model, under the assumption that satisfaction of the fixed-point equations
of the true model optimises the free energy. This compares favourably with variational
approaches which directly optimise an approximate model distribution.

Figure 3: Digit images generated with feature activations sampled from a) a uniform distribution, and b) a high-temperature NNBM model for the digits.
Methods of this type fail when they add spurious fixed points to the learning dynamics.
Future work will focus on understanding the origins of such fixed points, and the regimes
in which they lead to a poor approximation of the model parameters.
7 Acknowledgements
This work was inspired by the NIPS 1999 Workshop on Advanced Mean Field Methods.
The author is especially grateful to David MacKay and Gayle Wittenberg for comments on
early versions of this manuscript. I also acknowledge guidance from John Hopfield and
David Heckerman, detailed discussion with Bert Kappen, Daniel Lee and David Barber
and encouragement from Kim Midwood.
References
[1] Downs, DB, MacKay, DJC, & Lee, DD (2000). The Nonnegative Boltzmann Machine. Advances in Neural Information Processing Systems 12, 428-434.
[2] Lee, DD, and Seung, HS (1999). Learning the parts of objects by non-negative matrix factorization. Nature 401, 788-791.
[3] Socci, ND, Lee, DD, and Seung, HS (1998). The rectified Gaussian distribution. Advances in Neural Information Processing Systems 10, 350-356.
[4] Georges, A, & Yedidia, JS (1991). How to expand around mean-field theory using high-temperature expansions. Journal of Physics A 24, 2173-2192.
[5] Neal, RM (1997). Markov chain Monte Carlo methods based on 'slicing' the density function. Technical Report 9722, Dept. of Statistics, University of Toronto.
[6] Kappen, HJ & Rodriguez, FB (1998). Efficient learning in Boltzmann Machines using linear response theory. Neural Computation 10, 1137-1156.
[7] Ben-Yishai, R, Bar-Or, RL, & Sompolinsky, H (1995). Theory of orientation tuning in visual cortex. Proc. Nat. Acad. Sci. USA, 92(9):3844-3848.
[8] Yedidia, JS, Freeman, WT, & Weiss, Y (2000). Generalized Belief Propagation. Mitsubishi Electric Research Laboratory Technical Report, TR-2000-26.
Analysis of Linsker's Simulations
of Hebbian rules
David J. C. MacKay
Computation and Neural Systems
Caltech 164-30 CNS
Pasadena, CA 91125
mackay@aurel.cns.caltech.edu
Kenneth D. Miller
Department of Physiology
University of California
San Francisco, CA 94143 - 0444
ken@phyb.ucsf.edu
ABSTRACT
Linsker has reported the development of centre-surround receptive fields and oriented receptive fields in simulations of a Hebb-type equation in a linear network. The dynamics of the learning rule are analysed in terms of the eigenvectors of the covariance matrix of cell activities. Analytic and computational results for Linsker's covariance matrices, and some general theorems, lead to an explanation of the emergence of centre-surround and certain oriented structures.
Linsker [Linsker, 1986, Linsker, 1988] has studied by simulation the evolution of
weight vectors under a Hebb-type teacherless learning rule in a feed-forward linear
network. The equation for the evolution of the weight vector w of a single neuron,
derived by ensemble averaging the Hebbian rule over the statistics of the input
patterns, is:¹

\frac{\partial w_i}{\partial t} = k_1 + \sum_j (Q_{ij} + k_2)\, w_j, \quad \text{subject to} \quad -w_{\max} \le w_i \le w_{\max}. \qquad (1)
¹Our definition of equation 1 differs from Linsker's by the omission of a factor of 1/N before the sum term, where N is the number of synapses.
where Q is the covariance matrix of activities of the inputs to the neuron. The
covariance matrix depends on the covariance function, which describes the dependence of the covariance of two input cells' activities on their separation in the input
field, and on the location of the synapses, which is determined by a synaptic density
function. Linsker used a gaussian synaptic density function.
Depending on the covariance function and the two parameters k_1 and k_2, different weight structures emerge. Using a gaussian covariance function (his layer B → C), Linsker reported the emergence of non-trivial weight structures, ranging from saturated structures through centre-surround structures to bi-lobed oriented structures.
The analysis in this paper examines the properties of equation (1). We concentrate on the gaussian covariances in Linsker's layer B → C, and give an explanation
of the structures reported by Linsker. Several of the results are more general,
applying to any covariance matrix Q. Space constrains us to postpone general
discussion, and criteria for the emergence of centre-surround weight structures,
technical details, and discussion of other model networks, to future publications
[MacKay, Miller, 1990].
1 ANALYSIS IN TERMS OF EIGENVECTORS
We write equation (1) as a first order differential equation for the weight vector w:

\dot{w} = (Q + k_2 J)\, w + k_1 n, \qquad (2)

where J is the matrix J_{ij} = 1 \; \forall i, j, and n is the DC vector with n_i = 1 \; \forall i. This equation is linear, up to the hard limits on w_i. These hard limits define a hypercube in weight
space within which the dynamics are confined. We make the following assumption:
Assumption 1 The principal features of the dynamics are established before the
hard limits are reached. When the hypercube is reached, it captures and preserves
the existing weight structure with little subsequent change.
The matrix Q + k_2 J is symmetric, so it has a complete orthonormal set of eigenvectors² e^{(a)} with real eigenvalues \lambda_a. The linear dynamics within the hypercube can be characterised in terms of these eigenvectors, each of which represents an independently
evolving weight configuration. First, equation (2) has a fixed point at

w^{FP} = -k_1 (Q + k_2 J)^{-1} n. \qquad (3)
Second, relative to the fixed point, the component of w in the direction of an eigenvector grows or decays exponentially at a rate proportional to the corresponding eigenvalue. Writing w(t) = \sum_a w_a(t)\, e^{(a)}, equation (2) yields

w_a(t) - w_a^{FP} = (w_a(0) - w_a^{FP})\, e^{\lambda_a t}. \qquad (4)
²The indices a and b will be used to denote the eigenvector basis for w, while the indices i and j will be used for the synaptic basis.
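As a numerical sanity check of this eigenmode picture, the sketch below integrates equation (2) with an illustrative random Q (not Linsker's covariance) and compares the result against the closed-form solution from equations (3) and (4):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 20
B = rng.normal(size=(N, N))
Q = B @ B.T / N                      # toy symmetric covariance matrix
k1, k2 = 0.5, -0.2
J, n = np.ones((N, N)), np.ones(N)
M = Q + k2 * J

w0 = 0.01 * rng.normal(size=N)
dt, steps = 0.01, 200

# Forward-Euler integration of eq. (2), ignoring the hard limits on w_i
w = w0.copy()
for _ in range(steps):
    w += dt * (M @ w + k1 * n)

# Closed-form solution in the eigenbasis of M, from eqs. (3)-(4)
lam, E = np.linalg.eigh(M)
w_fp = -k1 * np.linalg.solve(M, n)                    # fixed point, eq. (3)
a0 = E.T @ (w0 - w_fp)                                # initial mode amplitudes
w_exact = w_fp + E @ (a0 * np.exp(lam * dt * steps))  # eq. (4)

print(np.max(np.abs(w - w_exact)))  # small, up to Euler discretization error
```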
Thus, the principal emergent features of the dynamics are determined by the following three factors:
1. The principal eigenvectors of Q + k_2 J, that is, the eigenvectors with largest positive eigenvalues. These are the fastest growing weight configurations.
2. Eigenvectors of Q + k_2 J with negative eigenvalue. Each is associated with an attracting constraint surface, the hyperplane defined by w_a = w_a^{FP}.
3. The location of the fixed point of equation (1). This is important for two reasons: a) it determines the location of the constraint surfaces; b) the fixed point gives a "head start" to the growth rate of eigenvectors e^{(a)} for which |w_a^{FP}| is large compared to |w_a(0)|.
2 EIGENVECTORS OF Q
We first examine the eigenvectors and eigenvalues of Q. The principal eigenvector of Q dominates the dynamics of equation (2) for k_1 = 0, k_2 = 0. The subsequent eigenvectors of Q become important as k_1 and k_2 are varied.
2.1 PROPERTIES OF CIRCULARLY SYMMETRIC SYSTEMS
If an operator commutes with the rotation operator, its eigenfunctions can be written as eigenfunctions of the rotation operator. For Linsker's system, in the continuum limit, the operator Q + k_2 J is unchanged under rotation of the system. So the eigenfunctions of Q + k_2 J can be written as the product of a radial function and one of the angular functions \cos l\theta, \sin l\theta, l = 0, 1, 2, \ldots. To describe these eigenfunctions we borrow from quantum mechanics the notation n = 1, 2, 3, \ldots and l = s, p, d, \ldots to denote the total number of nodes in the function (0, 1, 2, \ldots) and the number of angular nodes (0, 1, 2, \ldots) respectively. For example, "2s" denotes a centre-surround function with one radial node and no angular nodes (see figure 1).
For monotonic and non-negative covariance functions, we conjecture that the eigenfunctions of Q are ordered in eigenvalue by their numbers of nodes such that the eigenfunction [nl] has larger eigenvalue than either [(n+1)l] or [n(l+1)]. This conjecture is obeyed in all analytical and numerical results we have obtained.
2.2 ANALYTIC CALCULATIONS FOR k_2 = 0
We have solved analytically for the first three eigenfunctions and eigenvalues of the covariance matrix for layer B → C of Linsker's network, in the continuum limit (Table 1). 1s, the function with no changes of sign, is the principal eigenfunction of Q; 2p, the bi-lobed oriented function, is the second eigenfunction; and 2s, the centre-surround eigenfunction, is third.³ Figure 1(a) shows the first six eigenfunctions for layer B → C of [Linsker, 1986].
³2s is degenerate with 3d at k_2 = 0.
Table 1: The first three eigenfunctions of the operator Q(r, r') = e^{-(r-r')^2/2C}\, e^{-r'^2/2A}, where C and A denote the characteristic sizes of the covariance function and synaptic density function. r denotes two-dimensional spatial position relative to the centre of the synaptic arbor, and r = |r|. The eigenvalues \lambda are all normalised by the effective number of synapses.

Name   Eigenfunction                        \lambda/N
1s     e^{-r^2/2R}                          l\, C/A
2p     r \cos\theta\, e^{-r^2/2R}           l^2\, C/A
2s     (1 - r^2/r_0^2)\, e^{-r^2/2R}        l^3\, C/A

with R = (C/2)\,(1 + \sqrt{1 + 4A/C}), \; 0 < l < 1, and r_0^2 = 2A/\sqrt{1 + 4A/C}.
Figure 1: Eigenfunctions of the operator Q + k_2 J. Largest eigenvalue is in the top row. Eigenvalues (in arbitrary units): (a) k_2 = 0: 1s, 2.26; 2p, 1.0; 2s & 3d (only one 3d is shown), 0.41. (b) k_2 = -3: 2p, 1.0; 2s, 0.66; 1s, -17.8. The greyscale indicates the range from maximum negative to maximum positive synaptic weight within each eigenfunction. Eigenfunctions of the operator (e^{-(r-r')^2/2C} + k_2)\, e^{-r'^2/2A} were computed for C/A = 2/3 (as used by Linsker for most layer B → C simulations) on a circle of radius 12.5 grid intervals, with \sqrt{A} = 6.15 grid intervals.
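The computation described in this caption is straightforward to reproduce. The sketch below (grid radius and parameters only loosely matching the caption's values, chosen for speed) diagonalizes the discretized operator:

```python
import numpy as np

# Sample points on a disc (grid spacing 1), cf. the caption above.
R_max, sqrtA, k2 = 12, 6.15, -3.0
A = sqrtA ** 2
C = (2.0 / 3.0) * A          # C/A = 2/3 as in the caption

xs = np.arange(-R_max, R_max + 1)
X, Y = np.meshgrid(xs, xs)
mask = X**2 + Y**2 <= R_max**2
pts = np.stack([X[mask], Y[mask]], axis=1).astype(float)

d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)  # |r - r'|^2
r2 = (pts ** 2).sum(-1)                                   # |r'|^2
Op = (np.exp(-d2 / (2 * C)) + k2) * np.exp(-r2 / (2 * A))[None, :]

# Op is not symmetric (the density factor acts on r' only), so use a
# general eigensolver and sort eigenvalues by real part.
vals, vecs = np.linalg.eig(Op)
order = np.argsort(-vals.real)
print(vals.real[order][:4])  # at k2 = -3 the leading modes are 2p, 2s, ...
```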
3 THE EFFECTS OF THE PARAMETERS k_1 AND k_2
Varying k2 changes the eigenvectors and eigenvalues of the matrix Q + k 2J. Varying
kl moves the fixed point of the dynamics with respect to the origin. We now analyse
these two changes, and their effects on the dynamics.
Definition: Let \hat{n} be the unit vector in the direction of the DC vector n. We refer to (w \cdot \hat{n}) as the DC component of w. The DC component is proportional
to the sum of the synaptic strengths in a weight vector. For example, 2p and all
the other eigenfunctions with angular nodes have zero DC component. Only the
s-modes have a non-zero DC component.
3.1 GENERAL THEOREM: THE EFFECT OF k_2
We now characterise the effect of adding k 2 J to any covariance matrix Q.
Theorem 1 For any covariance matrix Q, the spectrum of eigenvectors and eigenvalues of Q + k 2 J obeys the following:
1. Eigenvectors of Q with no DC component, and their eigenvalues, are unaffected by k_2.
2. The other eigenvectors, with non-zero DC component, vary with k_2. Their eigenvalues increase continuously and monotonically with k_2 between asymptotic limits such that the upper limit of one eigenvalue is the lower limit of the eigenvalue above.
3. There is at most one negative eigenvalue.
4. All but one of the eigenvalues remain finite. In the limits k_2 \to \pm\infty there is a DC eigenvector \hat{n} with eigenvalue \to k_2 N, where N is the dimensionality of Q, i.e. the number of synapses.
The properties stated in this theorem, whose proof is in [MacKay, Miller, 1990], are summarised pictorially by the spectral structure shown in figure 2.
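Theorem 1 can also be probed numerically. The following sketch (with an arbitrary positive semidefinite Q, for illustration only) tracks the spectrum of Q + k_2 J as k_2 is varied:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8
B = rng.normal(size=(N, N))
Q = B @ B.T / N            # a positive semidefinite "covariance" matrix
J = np.ones((N, N))

for k2 in (-100.0, -1.0, 0.0, 1.0, 100.0):
    lam = np.linalg.eigvalsh(Q + k2 * J)   # ascending eigenvalues
    # At most one eigenvalue is negative, and the extreme DC eigenvalue
    # grows like k2 * N, in line with points 3 and 4 of the theorem.
    print(f"k2={k2:7.1f}  min={lam[0]:10.2f}  max={lam[-1]:10.2f}  "
          f"negatives={np.sum(lam < 0)}")
```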
3.2 IMPLICATIONS FOR LINSKER'S SYSTEM
For Linsker's circularly symmetric systems, all the eigenfunctions with angular nodes have zero DC component and are thus independent of k_2. The eigenvalues that vary with k_2 are those of the s-modes. The leading s-modes at k_2 = 0 are 1s, 2s; as k_2 is decreased to -\infty, these modes transform continuously into 2s, 3s respectively (figure 2).⁴ 1s becomes an eigenvector with negative eigenvalue, and it approaches the DC vector \hat{n}. This eigenvector enforces a constraint w \cdot \hat{n} = w^{FP} \cdot \hat{n}, and thus determines that the final average synaptic strength is equal to w^{FP} \cdot n / N.
Linsker used k_2 = -3 in [Linsker, 1986]. This value of k_2 is sufficiently large that the properties of the k_2 \to -\infty limit hold [MacKay, Miller, 1990], and in the following we concentrate interchangeably on k_2 = -3 and k_2 \to -\infty. The computed eigenfunctions for Linsker's system at layer B → C are shown in figure 1(b) for
⁴The 2s eigenfunctions at k_2 = 0 and k_2 = -\infty both have one radial node, but are not identical functions.
Figure 2: General spectrum of eigenvalues of Q + k_2 J as a function of k_2. A: Eigenvectors with DC component. B: Eigenvectors with zero DC component. C: Adjacent DC eigenvalues share a common asymptote. D: There is only one negative eigenvalue. The annotations in brackets refer to the eigenvectors of Linsker's system.
k2 = -3. The principal eigenfunction is 2p. The centre-surround eigenfunction 2s
is the principal symmetric eigenfunction, but it still has smaller eigenvalue than 2p.
3.3 EFFECT OF k_1
Varying kl changes the location of the fixed point of equation (2). From equation
(3), the fixed point is displaced from the origin only in the direction of eigenvectors
that have non-zero DC component, that is, only in the direction of the s-modes.
This has two important effects, as discussed in section 1: a) The s-modes are given
a head start in growth rate that increases as kl is increased. In particular, the
principal s-mode, the centre-surround eigenvector 2s, may outgrow the principal
eigenvector 2p. b) The constraint surface is moved when k_1 is changed. For large negative k_2, the constraint surface fixes the average synaptic strength in the final weight vector. To leading order in 1/k_2, Linsker showed that the constraint is:
\sum_j w_j = k_1 / |k_2|.⁵
3.4 SUMMARY OF THE EFFECTS OF k_1 AND k_2
We can now anticipate the explanation for the emergence of centre-surround cells: For k_1 = 0, k_2 = 0, the dynamics are dominated by 1s. The centre-surround
⁵To second order, this expression becomes \sum_i w_i = k_1 / |k_2 + \bar{q}|, where \bar{q} = \langle Q_{ij} \rangle is the average covariance (averaged over i and j). The additional term largely resolves the discrepancy between Linsker's g and k_1/|k_2| in [Linsker, 1986].
eigenfunction 2s is third in line behind 2p, the bi-lobed function. Making k_2 large and negative removes 1s from the lead. 2p becomes the principal eigenfunction and dominates the dynamics for k_1 ≈ 0, so that the circular symmetry is broken. Finally, increasing k_1/|k_2| gives a head start to the centre-surround function 2s. Increasing k_1/|k_2| also increases the final average synaptic strength, so large k_1/|k_2| also produces a large DC bias. The centre-surround regime therefore lies sandwiched between a 2p-dominated regime and an all-excitatory regime. k_1/|k_2| has to be large enough that 2s dominates over 2p, and small enough that the DC bias does not obscure the centre-surround structure. We estimate this parameter regime in [MacKay, Miller, 1990], and show that the boundary between the 2s- and 2p-dominated regimes found by simulated annealing on the energy function may be different from the boundary found by simulating the time-development of equation (1), which depends on the initial conditions.
4 CONCLUSIONS AND DISCUSSION
For Linsker's B → C connections, we predict four main parameter regimes for varying k_1 and k_2.⁶ These regimes, shown in figure 3, are dominated by the following weight structures:

k_2 = 0, k_1 = 0:                          The principal eigenvector of Q, 1s.
k_2 = large positive and/or k_1 = large:   The flat DC weight vector, which leads to the same saturated structures as 1s.
k_2 = large negative, k_1 ≈ 0:             The principal eigenvector of Q + k_2 J for k_2 → -∞, 2p.
k_2 = large negative, k_1 intermediate:    The principal circularly symmetric function which is given a head start, 2s.
Higher layers of Linsker's network can be analysed in terms of the same four regimes;
the principal eigenvectors are altered, so that different structures can emerge. The
development of the interesting cells in Linsker's system depends on the use of negative synapses and on the use of the terms k_1 and k_2 to enforce a constraint on the final percentages of positive and negative synapses. Both of these may be biologically problematic [Miller, 1990]. Linsker suggested that the emergence of centre-surround structures may depend on the peaked synaptic density function that he used [Linsker, 1986, page 7512]. However, with a flat density function, the eigenfunctions are qualitatively unchanged, and centre-surround structures can emerge by the same mechanism.
Acknowledgements
D.J.C.M. is supported by a Caltech Fellowship and a Studentship from SERC, UK. K.D.M. thanks M. P. Stryker for encouragement and financial support while this work was undertaken. K.D.M. was supported by an N.E.I. Fellowship and the In-
⁶Not counting the symmetric regimes (k_1, k_2) → (-k_1, k_2) in which all the weight structures are inverted in sign.
Figure 3: Parameter regimes for Linsker's system. The DC bias is approximately constant along the radial lines, so each of the regimes with large negative
k2 is wedge-shaped.
ternational Joint Research Project Bioscience Grant to M. P. Stryker (T. Tsumoto,
Coordinator) from the N.E.D.O., Japan.
This collaboration would have been impossible without the internet/NSF net, long
may their daemons flourish.
References
[Linsker, 1986] R. Linsker. From Basic Network Principles to Neural Architecture (series), PNAS USA, 83, Oct.-Nov. 1986, pp. 7508-7512, 8390-8394, 8779-8783.
[Linsker, 1988] R. Linsker. Self-Organization in a Perceptual Network, Computer, March 1988.
[Miller, 1990] K.D. Miller. "Correlation-based mechanisms of neural development," in Neuroscience and Connectionist Theory, M.A. Gluck and D.E. Rumelhart, Eds. (Lawrence Erlbaum Associates, Hillsboro NJ) (in press).
[MacKay, Miller, 1990] D.J.C. MacKay and K.D. Miller. "Analysis of Linsker's Simulations of Hebbian rules" (submitted to Neural Computation); and "Analysis of Linsker's application of Hebbian rules to linear networks" (submitted to Network).
A Mathematical Programming Approach to the
Kernel Fisher Algorithm
Sebastian Mika*, Gunnar Rätsch*, and Klaus-Robert Müller*⁺
*GMD FIRST.IDA, Kekuléstraße 7, 12489 Berlin, Germany
⁺University of Potsdam, Am Neuen Palais 10, 14469 Potsdam
{mika, raetsch, klaus}@first.gmd.de
Abstract
We investigate a new kernel-based classifier: the Kernel Fisher Discriminant (KFD). A mathematical programming formulation based on the observation that KFD maximizes the average margin permits an interesting
modification of the original KFD algorithm yielding the sparse KFD. We
find that both, KFD and the proposed sparse KFD, can be understood
in an unifying probabilistic context. Furthermore, we show connections
to Support Vector Machines and Relevance Vector Machines. From this
understanding, we are able to outline an interesting kernel-regression
technique based upon the KFD algorithm. Simulations support the usefulness of our approach.
1 Introduction
Recent years have shown an enormous interest in kernel-based classification algorithms,
primarily in Support Vector Machines (SVM) [2]. The success of SVMs seems to be triggered by (i) their good generalization performance, (ii) the existence of a unique solution,
and (iii) the strong theoretical background: structural risk minimization [12], supporting
the good empirical results. One of the key ingredients responsible for this success is the
use of Mercer kernels, allowing for nonlinear decision surfaces which even might incorporate some prior knowledge about the problem to solve. For our purpose, a Mercer kernel can be defined as a function k : \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}, for which some (nonlinear) mapping \Phi : \mathbb{R}^n \to \mathcal{F} into a feature space \mathcal{F} exists, such that k(x, y) = (\Phi(x) \cdot \Phi(y)). Clearly, the
use of such kernel functions is not limited to SVMs. The interpretation as a dot-product
in another space makes it particularly easy to develop new algorithms: take any (usually)
linear method and reformulate it using training samples only in dot-products, which are
then replaced by the kernel. Examples thereof, among others, are Kernel-PCA [9] and the
Kernel Fisher Discriminant (KFD [4]; see also [8, 1]).
In this article we consider algorithmic ideas for KFD. Interestingly KFD - although exhibiting a similarly good performance as SVMs - has no explicit concept of a margin. This
is noteworthy since the margin is often regarded as explanation for good generalization
in SVMs. We will give an alternative formulation of KFD which makes the difference
between both techniques explicit and allows a better understanding of the algorithms. Another advantage of the new formulation is that we can derive more efficient algorithms for
optimizing KFDs, that have e.g. sparseness properties or can be used for regression.
2 A Review of Kernel Fisher Discriminant
The idea of the KFD is to solve the problem of Fisher's linear discriminant in a kernel
feature space F , thereby yielding a nonlinear discriminant in the input space. First we
fix some notation. Let \{x_i \mid i = 1, \ldots, \ell\} be our training sample and y \in \{-1, 1\}^\ell be the vector of corresponding labels. Furthermore define \mathbf{1} \in \mathbb{R}^\ell as the vector of all ones, \mathbf{1}_1, \mathbf{1}_2 \in \mathbb{R}^\ell as binary (0,1) vectors corresponding to the class labels, and let I, I_1, and I_2 be appropriate index sets over \ell and the two classes, respectively (with \ell_i = |I_i|).
In the linear case, Fisher's discriminant is computed by maximizing the coefficient J(w) = (w^T S_B w)/(w^T S_W w) of between and within class variance, i.e. S_B = (m_2 - m_1)(m_2 - m_1)^T and S_W = \sum_{k=1,2} \sum_{i \in I_k} (x_i - m_k)(x_i - m_k)^T, where m_k denotes the sample
mean for class k. To solve the problem in a kernel feature space F one needs a formulation
which makes use of the training samples only in terms of dot-products. One first shows
[4], that there exists an expansion for w \in \mathcal{F} in terms of mapped training patterns, i.e.

w = \sum_{i=1}^{\ell} \alpha_i\, \Phi(x_i). \qquad (1)
Using some straightforward algebra, the optimization problem for the KFD can then be written as [5]:

J(\alpha) = \frac{(\alpha^T \mu)^2}{\alpha^T N \alpha} = \frac{\alpha^T M \alpha}{\alpha^T N \alpha}, \qquad (2)

where \mu_i = \frac{1}{\ell_i} K \mathbf{1}_i, \; N = K K^T - \sum_{i=1,2} \ell_i \mu_i \mu_i^T, \; \mu = \mu_2 - \mu_1, \; M = \mu\mu^T, and K_{ij} = (\Phi(x_i) \cdot \Phi(x_j)) = k(x_i, x_j). The projection of a test point onto the discriminant is computed by (w \cdot \Phi(x)) = \sum_{i \in I} \alpha_i\, k(x_i, x). As the dimension of the feature space is
usually much higher than the number of training samples some form of regularization
is necessary. In [4] it was proposed to add e.g. the identity or the kernel matrix K to N,
penalizing \|\alpha\|^2 or \|w\|^2, respectively (see also [3]).
There are several equivalent ways to optimize (2). One could either solve the generalized eigenproblem M\alpha = \lambda N\alpha, selecting the eigenvector \alpha with maximal eigenvalue \lambda, or compute \alpha \equiv N^{-1}(\mu_2 - \mu_1). Another way which will be detailed in the following exploits the special structure of problem (2).
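To make this review concrete, here is a minimal numpy sketch of the direct route just mentioned, computing \alpha \propto N^{-1}(\mu_2 - \mu_1); the Gaussian kernel and the ridge-style regularization of N are illustrative choices, not prescribed by the text:

```python
import numpy as np

def kfd_fit(X, y, width=1.0, reg=1e-3):
    """KFD via alpha ~ N^{-1}(mu2 - mu1), with N regularized by reg*I."""
    K = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1) / width)
    mus, N = [], np.zeros_like(K)
    for cls in (-1, 1):
        idx = (y == cls)
        mu = K[:, idx].mean(axis=1)          # mu_i = (1/l_i) K 1_i
        mus.append(mu)
        Kc = K[:, idx]
        N += Kc @ Kc.T - idx.sum() * np.outer(mu, mu)  # within-class scatter
    alpha = np.linalg.solve(N + reg * np.eye(len(y)), mus[1] - mus[0])
    return alpha, K

# Projections of the training points: (w . Phi(x)) = sum_i alpha_i k(x_i, x)
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (20, 2)), rng.normal(1, 1, (20, 2))])
y = np.array([-1] * 20 + [1] * 20)
alpha, K = kfd_fit(X, y)
proj = K @ alpha
print(proj[y == -1].mean(), proj[y == 1].mean())  # well-separated class means
```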
3 Casting KFD into a Quadratic Program
Although there exist many efficient off-the-shelf eigensolvers or Cholesky packages which could be used to optimize (2) there remain two problems: for a large sample size \ell the matrices N and M become unpleasantly large and the solutions \alpha are non-sparse (with no obvious way to introduce sparsity in e.g. the matrix inverse). In the following we show how KFD can be cast as a convex quadratic programming problem. This new formulation will prove helpful in solving the problems mentioned above and makes it much easier to gain a deeper understanding of KFD.
As a first step we exploit the facts that the matrix M is only rank one, i.e. \alpha^T M \alpha = (\alpha^T(\mu_2 - \mu_1))^2, and that with \alpha any multiple of \alpha is an optimal solution to (2). Thus we may fix \alpha^T(\mu_2 - \mu_1) to any non-zero value, say 2, and minimize \alpha^T N \alpha. This amounts to the following quadratic program:

\min_{\alpha} \; \alpha^T N \alpha + C\, \mathcal{P}(\alpha) \qquad (3)
subject to: \; \alpha^T(\mu_2 - \mu_1) = 2. \qquad (3a)
The regularization formerly incorporated in N is made visible explicitly here through the
operator P, where C is a regularization constant. This program still makes use of the
rather un-intuitive matrix N. This can be avoided by our final reformulation which can
be understood as follows : Fisher's Discriminant tries to minimize the variance of the data
along the projection whilst maximizing the distance between the average outputs for each
class. Considering the argumentation leading to (3) the following quadratic program does
exactly this:
\min_{\alpha, b, \xi} \; \|\xi\|^2 + C\, \mathcal{P}(\alpha) \qquad (4)
subject to: \; K\alpha + \mathbf{1} b = y + \xi, \qquad (4a)
\qquad\qquad \mathbf{1}_i^T \xi = 0 \; \text{for} \; i = 1, 2, \qquad (4b)
for \alpha, \xi \in \mathbb{R}^\ell, b \in \mathbb{R}, and C \in \mathbb{R}, C \ge 0. The constraint (4a), which can be read as
term IIel1 2 minimizes the variance of the error committed while the constraints (4b) ensure
that the average output for each class is the label, i.e. for ?llabels the average distance of
the projections is two. The following proposition establishes the link to KFD:
Proposition 1. For given C E TIt any optimal solution 0: to the optimization problem (3)
is also optimal for (4) and vice versa.
The formal, rather straightforward but lengthy, proof of Proposition 1 is omitted here. It
shows (i) that the feasible sets of (3) and (4) are identical with respect to 0: and (ii) that the
objective functions coincide. Formulation (4) has a number of appealing properties which
we will exploit in the following.
4 A Probabilistic Interpretation
We would like to point out the following connection (which is not specific to the formulation (4) of KFD): the Fisher discriminant is the Bayes optimal classifier for two normal distributions with equal covariance (i.e. KFD is Bayes optimal for two Gaussians in feature space). To see this connection to Gaussians consider a regression onto the labels of the form (w \cdot \Phi(x)) + b, where w is given by (1). Assuming a Gaussian noise model with variance \sigma^2 the likelihood can be written as

p(y \mid \alpha, \sigma^2) = \exp\Big(-\frac{1}{2\sigma^2} \sum_i \big((w \cdot \Phi(x_i)) + b - y_i\big)^2\Big) = \exp\Big(-\frac{1}{2\sigma^2} \|\xi\|^2\Big).
Now, assume some prior p(\alpha \mid C) over the weights with hyper-parameters C. Computing the posterior we would end up with the Relevance Vector Machine (RVM) [11]. An advantage of the RVM approach is that all hyper-parameters \sigma and C are estimated automatically. The drawback however is that one has to solve a hard, computationally expensive optimization problem. The following simplifications show how KFD can be seen as an approximation to this probabilistic approach. Assuming the noise variance \sigma is known (i.e. dropping all terms depending solely on \sigma) and taking the logarithm of the posterior p(y \mid \alpha, \sigma^2)\, p(\alpha \mid C), yields the following optimization problem

\min_{\alpha, b} \; \|\xi\|^2 - \log p(\alpha \mid C), \qquad (5)
subject to the constraint (4a). Interpreting the prior as a regularization operator P, introducing an appropriate weighting factor C, and adding the two zero-mean constraints (4b)
yields the KFD problem (4). The latter are necessary for classification as the two classes
are independently assumed to be zero-mean Gaussians. This probabilistic interpretation
has some appealing properties which we outline in the following:
Interpretation of outputs The probabilistic framework reflects the fact, that the outputs
produced by KFD can be interpreted as probabilities, thus making it possible to assign a
confidence to the final classification. This is in contrast to SVMs whose outputs can not
directly be seen as probabilities.
Noise models In the above illustration we assumed a Gaussian noise model and some yet
unspecified prior which was then interpreted as regularizer. Of course, one is not limited
to Gaussian models. E.g. assuming a Laplacian noise model we would get \|\xi\|_1 instead of \|\xi\|^2 in the objective (5) or (4), respectively. Table 1 gives a selection of different noise
models and their corresponding loss functions which could be used (cf. Figure 1 for an
illustration). All of them still lead to convex linear or quadratic programming problems in
the KFD framework.
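For reference, the losses of Table 1 below are easy to write down directly; this sketch mirrors the table's definitions, with parameter names \epsilon and \sigma as in the table:

```python
import numpy as np

def eps_insensitive(xi, eps=0.1):
    # |xi|_eps: zero inside the eps-tube, linear outside
    return np.maximum(np.abs(xi) - eps, 0.0)

def laplacian(xi):
    return np.abs(xi)

def gaussian(xi):
    return 0.5 * xi ** 2

def huber(xi, sigma=1.0):
    # quadratic near zero, linear in the tails
    quad = xi ** 2 / (2 * sigma)
    lin = np.abs(xi) - sigma / 2
    return np.where(np.abs(xi) <= sigma, quad, lin)

xi = np.linspace(-3, 3, 7)
print(huber(xi), eps_insensitive(xi))
```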
Table 1: Loss functions for the slack variables \xi and their corresponding density/noise models in a probabilistic framework [10].

            loss function                                         density model
\epsilon-ins.  |\xi|_\epsilon                                     \frac{1}{2(1+\epsilon)} \exp(-|\xi|_\epsilon)
Laplacian   |\xi|                                                 \frac{1}{2} \exp(-|\xi|)
Gaussian    \frac{1}{2}\xi^2                                      \frac{1}{\sqrt{2\pi}} \exp(-\frac{\xi^2}{2})
Huber's     \frac{\xi^2}{2\sigma} if |\xi| \le \sigma, else |\xi| - \frac{\sigma}{2}   \propto \exp(-\frac{\xi^2}{2\sigma}) if |\xi| \le \sigma, else \propto \exp(\frac{\sigma}{2} - |\xi|)
Regularizers Still open in this probabilistic interpretation is the choice of the prior or regularizer p(\alpha \mid C). One choice would be a zero-mean Gaussian as for the RVM. Assuming again that this Gaussian's variance C is known and a multiple of the identity, this would lead to a regularizer of the form \mathcal{P}(\alpha) = \|\alpha\|^2. Crucially, choosing a single, fixed variance parameter for all \alpha we would not achieve sparsity as in RVM anymore. But of course any other choice, e.g. from Table 1, is possible. Especially interesting is the choice of a Laplacian prior, which in the optimization procedure would correspond to an \ell_1-loss on the \alpha's, i.e. \mathcal{P}(\alpha) = \|\alpha\|_1. This choice leads to sparse solutions in the KFD as the \ell_1-norm can be seen as an approximation to the \ell_0-norm. In the following we call this particular setting sparse KFD (SKFD).
Figure 1: Illustration of Gaussian, Laplacian, Huber's robust, and \epsilon-insensitive loss functions (dotted) and corresponding densities (solid).
Regression and connection to SVM Considering the program (4) it is rather simple to modify the KFD approach for regression. Instead of \pm 1 outputs y we now have real-valued
y's. And instead of two classes there is only one class left. Thus, we can use KFD for
regression as well by simply dropping the distinction between classes in constraint (4b).
The remaining constraint requires the average error to be zero while the variance of the
errors is minimized.
This as well gives a connection to SVM regression (e.g. [12]), where one uses the \epsilon-insensitive loss for \xi (cf. Table 1) and a K-regularizer, i.e. \mathcal{P}(\alpha) = \alpha^T K \alpha = \|w\|^2.
Finally, we can as well draw the connection to a SVM classifier. In SVM classification one
is maximizing the (smallest) margin, traded off against the complexity controlled by \|w\|^2. In contrast, besides parallels in the algorithmic formulation, in KFD there is no explicit concept of
a margin. Instead, implicitly the average margin, i.e. the average distance of samples from
different classes, is maximized.
Optimization Besides a more intuitive understanding, the formulation (4) allows for deriving more efficient algorithms as well. Using a sparsity regularizer (i.e. SKFD) one could
employ chunking techniques during the optimization of (4). However, the problem of selecting a good working set is not solved yet, and contrary to e.g. SVM, for KFD all samples
will influence the final solution via the constraints (4a), not just the ones with \alpha_i \ne 0. Thus these samples cannot simply be eliminated from the optimization problem. Another interesting option induced by (4) is to use a sparsity regularizer and a linear loss function,
e.g. the Laplacian loss (cf. Table 1). This results in a linear program which we call linear
sparse KFD (LSKFD). This can very efficiently be solved by column generation techniques
known from mathematical programming. A final possibility to optimize (4) for the standard KFD problem (i.e. quadratic loss and regularizer) is described in [6]. Here one uses
a greedy approximation scheme which iteratively constructs a (sparse) solution to the full
problem. Such an approach is straightforward to implement and much faster than solving a quadratic program, provided that the number of non-zero \alpha's necessary to get a good
approximation to the full solution is small.
5 Experiments
In this section we present some experimental results targeting at (i) showing that the KFD
and some of its variants proposed here are capable of producing state of the art results
and (ii) comparing the influence of different settings for the regularization \mathcal{P}(\alpha) and the loss function applied to \xi in kernel-based classifiers.
The Output Distribution In an initial experiment we compare the output distributions
generated by a SVM and the KFD (cf. Figure 2). By maximizing the smallest margin and
using linear slack variables for patterns which do not achieve a reasonable margin, the
SVM produces a training output sharply peaked around \pm 1 with Laplacian tails inside the margin area (the inside margin area is the interval [-1, 1], the outside area its complement). In contrast, KFD produces normal distributions which have a small variance along the discriminating direction. Comparing the distributions on the training set to those on the test
set, there is almost no difference for KFD. In this sense the direction found on the training
data is consistent with the test data. For SVM the output distribution on the test set is significantly different. In the example given in Figure 2 the KFD performed slightly better than
SVM (1.5% vs. 1.7%; for both the best parameters found by 5-fold cross validation were
used), a fact that is surprising looking only on the training distribution (which is perfectly
separated for SVM but has some overlap for KFD).
[Figure 2 panels: output histograms for SVM and KFD on training and test sets.]
Figure 2: Comparison of output distributions on training and test set for SVM and KFD for
optimal parameters on the ringnorm dataset (averaged over 100 different partitions). It is
clearly observable, that the training and test set distributions for KFD are almost identical
while they are considerably different for SVM.
Performance To evaluate the performance of the various KFD approaches on real data
sets we performed an extensive comparison to SVM¹. The results in Table 2 show the
¹Thanks to M. Zwitter and M. Soklic for the breast cancer data. All data sets used in the experiments can be obtained via http://www.first.gmd.de/~raetsch/.
average test error and the standard deviation of the averages' estimation, over 100 runs
with different realizations of the datasets. To estimate the necessary parameters, we ran
5-fold cross validation on the first five realizations of the training sets and took the model
parameters to be the median over the five estimates (see [7] for details of the experimental
setup).
From Table 2 it can be seen that both SVM and the KFD variants on average perform equally well. In terms of (4), KFD denotes the formulation with quadratic regularizer, SKFD the one with \ell_1-regularizer, and LSKFD the one with \ell_1-regularizer and \ell_1 loss on \xi. The comparable performance might be seen as an indicator that maximizing the smallest margin or the average margin does not make a big difference on the data sets studied. The same seems to be true for using different regularizer and loss functions. Noteworthy is the significantly higher degree of sparsity for KFD.
Regression Just to show that the proposed KFD regression works in principle, we conducted a toy experiment on the sine function (cf. Figure 3). In terms of the number of
support vectors we obtain similarly sparse results as with RVMs [11], i.e. a much smaller
number of non-zero coefficients than in SVM regression. A thorough evaluation is currently being carried out.
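A sketch of such a toy experiment is given below. The closed-form solve, with the zero-mean-error constraint (4b) enforced by centering, assumes the quadratic regularizer \mathcal{P}(\alpha) = \|\alpha\|^2 and is our reconstruction, not necessarily the optimizer used for Figure 3:

```python
import numpy as np

def kfd_regress(X, y, width=4.0, C=0.01):
    """KFD regression: program (4) without class labels, P(alpha)=||alpha||^2.
    Centering K and y enforces the zero-mean-error constraint 1' xi = 0."""
    K = np.exp(-(X[:, None] - X[None, :]) ** 2 / width)
    Kc = K - K.mean(axis=0)              # subtract column means
    yc = y - y.mean()
    alpha = np.linalg.solve(Kc.T @ Kc + C * np.eye(len(y)), Kc.T @ yc)
    b = (y - K @ alpha).mean()           # bias chosen so the mean error is zero
    return alpha, b, K

# Noise-free sine on 100 equally spaced points, width c = 4.0, C = 0.01,
# matching the left-panel setup in the Figure 3 caption below.
x = np.linspace(-10, 10, 100)
y = np.sin(x)
alpha, b, K = kfd_regress(x, y)
pred = K @ alpha + b
print(np.abs(pred - y).max())
```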
Figure 3: Illustration of KFD regression. The left panel shows a fit to the noise-free sine function sampled on 100 equally spaced points, the right panel with Gaussian noise of std. dev. 0.2 added. In both cases we used the RBF-kernel \exp(-\|x-y\|^2/c) of width c = 4.0 and c = 3.0, respectively. The regularization was C = 0.01 and C = 0.1 (small dots training samples, circled dots SVs).
Table 2: Comparison between KFD, sparse KFD (SKFD), sparse KFD with linear loss on \xi (LSKFD), and SVMs (see text). All experiments were carried out with RBF-kernels \exp(-\|x-y\|^2/c). Best result in bold face, second best in italics. The numbers in brackets denote the fraction of expansion coefficients which were zero.

            SVM                KFD          SKFD               LSKFD
Banana      11.5±0.07 (78%)    10.8±0.05    11.2±0.48 (86%)    10.6±0.04 (92%)
B.Cancer    26.0±0.47 (42%)    25.8±0.46    25.2±0.44 (88%)    25.8±0.47 (88%)
Diabetes    23.5±0.17 (57%)    23.2±0.16    23.1±0.18 (97%)    23.6±0.18 (97%)
German      23.6±0.21 (58%)    23.7±0.22    23.6±0.23 (96%)    24.1±0.23 (98%)
Heart       16.0±0.33 (51%)    16.1±0.34    16.4±0.31 (88%)    16.0±0.36 (96%)
Ringnorm     1.7±0.01 (62%)     1.5±0.01     1.6±0.01 (85%)     1.5±0.01 (94%)
F.Sonar     32.4±0.18 (9%)     33.2±0.17    33.4±0.17 (67%)    34.4±0.23 (99%)
Thyroid      4.8±0.22 (79%)     4.2±0.21     4.3±0.18 (88%)     4.7±0.22 (89%)
Titanic     22.4±0.10 (10%)    23.2±0.20    22.6±0.17 (8%)     22.5±0.20 (95%)
Waveform     9.9±0.04 (60%)     9.9±0.04    10.1±0.04 (81%)    10.2±0.04 (96%)
6 Conclusion and Outlook
In this work we showed how KFD can be reformulated as a mathematical programming
problem. This allows a better understanding of KFD and interesting extensions: First, a
probabilistic interpretation gives new insights about connections to RVM, SVM and regularization properties. Second, using a Laplacian prior, i.e. an \ell_1 regularizer, yields the sparse
algorithm SKFD. Third, the more general modeling permits a very natural KFD algorithm
for regression. Finally, due to the quadratic programming formulation, we can use tricks
known from SVM literature like chunking or active set methods for solving the optimization problem. However the optimal choice of a working set is not completely resolved and
is still an issue of ongoing research. In this sense sparse KFD inherits some of the most appealing properties of both, SVM and RVM: a unique, mathematical programming solution
from SVM and a higher sparsity together with interpretable outputs from RVM.
Our experimental studies show a competitive performance of our new KFD algorithms if
compared to SVMs. This indicates that neither the margin nor sparsity nor a specific output distribution alone seem to be responsible for the good performance of kernel-machines.
Further theoretical and experimental research is therefore needed to learn more about this
interesting question. Our future research will also investigate the role of output distributions and their difference between training and test set.
Acknowledgments This work was partially supported by grants of the DFG (JA 379/7-1, 9-1). Thanks to K. Tsuda for helpful comments and discussions.
References
[1] G. Baudat and F. Anouar. Generalized discriminant analysis using a kernel approach. Neural Computation, 12(10):2385-2404, 2000.
[2] B.E. Boser, I.M. Guyon, and V.N. Vapnik. A training algorithm for optimal margin classifiers. In D. Haussler, editor, Proceedings of the 5th Annual ACM Workshop on Computational Learning Theory, pages 144-152, 1992.
[3] J.H. Friedman. Regularized discriminant analysis. Journal of the American Statistical Association, 84(405):165-175, 1989.
[4] S. Mika, G. Rätsch, J. Weston, B. Schölkopf, and K.-R. Müller. Fisher discriminant analysis with kernels. In Y.-H. Hu, J. Larsen, E. Wilson, and S. Douglas, editors, Neural Networks for Signal Processing IX, pages 41-48. IEEE, 1999.
[5] S. Mika, G. Rätsch, J. Weston, B. Schölkopf, A.J. Smola, and K.-R. Müller. Invariant feature extraction and classification in kernel spaces. In S.A. Solla, T.K. Leen, and K.-R. Müller, editors, Advances in Neural Information Processing Systems 12, pages 526-532. MIT Press, 2000.
[6] S. Mika, A.J. Smola, and B. Schölkopf. An improved training algorithm for kernel fisher discriminants. In Proceedings AISTATS 2001. Morgan Kaufmann, 2001. to appear.
[7] G. Rätsch, T. Onoda, and K.-R. Müller. Soft margins for AdaBoost. Machine Learning, 42(3):287-320, March 2001. also NeuroCOLT Technical Report NC-TR-1998-021.
[8] V. Roth and V. Steinhage. Nonlinear discriminant analysis using kernel functions. In S.A. Solla, T.K. Leen, and K.-R. Müller, editors, Advances in Neural Information Processing Systems 12, pages 568-574. MIT Press, 2000.
[9] B. Schölkopf, A.J. Smola, and K.-R. Müller. Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation, 10:1299-1319, 1998.
[10] A.J. Smola. Learning with Kernels. PhD thesis, Technische Universität Berlin, 1998.
[11] M.E. Tipping. The relevance vector machine. In S.A. Solla, T.K. Leen, and K.-R. Müller, editors, Advances in Neural Information Processing Systems 12, pages 652-658. MIT Press, 2000.
[12] V.N. Vapnik. The nature of statistical learning theory. Springer Verlag, New York, 1995.
Active inference in concept learning
Jonathan D. Nelson
Javier R. Movellan
Department of Cognitive Science
University of California, San Diego
La Jolla, CA 92093-0515
jnelson@cogsci.ucsd.edu
Department of Cognitive Science
University of California, San Diego
La Jolla, CA 92093-0515
movellan@inc.ucsd.edu
Abstract
People are active experimenters, not just passive observers,
constantly seeking new information relevant to their goals. A
reasonable approach to active information gathering is to ask
questions and conduct experiments that maximize the expected
information gain, given current beliefs (Fedorov 1972, MacKay
1992, Oaksford & Chater 1994). In this paper we present results
on an exploratory experiment designed to study people's active
information gathering behavior on a concept learning task
(Tenenbaum 2000). The results of the experiment are analyzed in
terms of the expected information gain of the questions asked by
subjects.
In scientific inquiry and in everyday life, people seek out information relevant to
perceptual and cognitive tasks. Scientists perform experiments to uncover causal
relationships; people saccade to informative areas of visual scenes, turn their head
towards surprising sounds, and ask questions to understand the meaning of concepts .
Consider a person learning a foreign language, who notices that a particular word,
"tikos," is used for baby moose, baby penguins, and baby cheetahs. Based on those
examples, he or she may attempt to discover what tikos really means. Logically,
there are an infinite number of possibilities. For instance, tikos could mean baby
animals, or simply animals, or even baby animals and antique telephones. Yet a
few examples are enough for human learners to form strong intuitions about what
meanings are most likely.
Suppose you can point to a baby duck, an adult duck, or an antique telephone, to
inquire whether that object is "tikos." Your goal is to figure out what "tikos"
means. Which question would you ask? Why? When the goal is to learn as much as
possible about a set of concepts, a reasonable strategy is to choose those questions
which maximize the expected information gain, given current beliefs (Fedorov
1972, MacKay 1992, Oaksford & Chater 1994). In this paper we present
preliminary results on an experiment designed to quantify the information value of
the questions asked by subjects on a concept learning task.
1.1 Tenenbaum's number concept task
Tenenbaum (2000) developed a Bayesian model of number concept learning. The
model describes the intuitive beliefs shared by humans about simple number
concepts, and how those beliefs change as new information is obtained, in terms of
subjective probabilities. Suppose a subject has been told that the number 16 is
consistent with some unknown number concept. With its current parameters, the
model predicts that the subjective probability that the number 8 will also be
consistent with that concept is about 0.35. Tenenbaum (2000) included both mathematical and interval concepts in his number concept space. Interval concepts were sets of numbers between n and m, where 1 ≤ n ≤ 100, and n ≤ m ≤ 100, such as numbers between 5 and 8, and numbers between 10 and 35. There were 33 mathematical concepts: odd numbers, even numbers, square numbers, cube numbers, prime numbers, multiples of n (3 ≤ n ≤ 12), powers of n (2 ≤ n ≤ 10), and numbers ending in n (1 ≤ n ≤ 9). Tenenbaum conducted a number concept learning
experiment with 8 subjects and found a correlation of 0.99 between the average
probability judgments made by subjects and the model predictions. To evaluate
how well Tenenbaum's model described our population of subjects, we replicated
his study, with 81 subjects. We obtained a correlation of .87 between model
predictions and average subject responses. Based on these results we decided to
extend Tenenbaum's experiment, and allow subjects to actively ask questions about
number concepts, instead of just observing examples given to them. We used
Tenenbaum's model to obtain estimates of the subjective probabilities of the
different concepts given the examples at hand. We hypothesized that the questions
asked by subjects would have high information value, when information value was
calculated according to the probability estimates produced by Tenenbaum's model.
1.2 Infomax sampling
Consider the following problem. A subject is given examples of numbers that are
consistent with a particular concept, but is not told the concept itself. Then the
subject is allowed to pick a number, to test whether it follows the same concept as
the examples given. For example, the subject may be given the numbers 2, 6 and 4
as examples of the underlying concept and she may then choose to ask whether the
number 8 is also consistent with the concept. Her goal is to guess the correct
concept.
We formalize the problem using standard probabilistic notation: random variables
are represented with capital letters and specific values taken by those variables are
represented with small letters. The random variable C represents the correct concept
on a given trial. Notation of the form "C=c" is shorthand for the event that the
random variable C takes the specific value c. We represent the examples given to
the subjects by the random vector X. The subject beliefs about which concepts are
probable prior to the presentation of any examples are represented by the probability
function $P(C = c)$. The subject beliefs after the examples are presented are
represented by $P(C = c \mid X = x)$. For example, if $c$ is the concept even numbers
and $x$ the numbers "2, 6, 4", then $P(C = c \mid X = x)$ represents subjects' posterior
probability that the correct concept is even numbers, given that 2, 6, and 4 are
positive examples of that concept. The binary random variable $Y_n$ represents
whether the number $n$ is a member of the correct concept. For example, $Y_8 = 1$
represents the event that 8 is an element of the correct concept, and $Y_8 = 0$ the event
that 8 is not. In our experiment subjects are allowed to ask a question of the form
"is the number $n$ an element of the concept?". This is equivalent to finding the value
taken by the random variable $Y_n$, in our formalism.
We evaluate how good a question is in terms of the information about the correct
concept expected for that question, given the example vector X=x. The expected
information gain for the question "Is the number n an element of the concept?" is
given by the following formula:
$$I(C; Y_n \mid X = x) = H(C \mid X = x) - H(C \mid Y_n, X = x),$$

where $H(C \mid X = x)$ is the uncertainty (entropy) about the concept $C$ given the
example numbers in $x$,

$$H(C \mid X = x) = -\sum_{c} P(C = c \mid X = x) \log_2 P(C = c \mid X = x),$$

and

$$H(C \mid Y_n, X = x) = -\sum_{c \in C} P(C = c \mid X = x) \sum_{v=0}^{1} P(Y_n = v \mid C = c, X = x) \log_2 P(C = c \mid Y_n = v, X = x)$$

is the uncertainty about $C$ given the active question $Y_n$ and the example vector $x$.
We consider only binary questions, of the form "is n consistent with the concept?"
so the maximum information value of any question in our experiment is one bit.
Note how information gain is relative to a probability model P of the subjects'
internal beliefs. Here we approximate these subjective probabilities using
Tenenbaum's (2000) number concept model.
An information-maximizing strategy (infomax) prescribes asking the question with
the highest expected information gain, e.g., the question that minimizes the expected
entropy, over all concepts. Another strategy of interest is confirmatory sampling,
which consists of asking questions whose answers are most likely to confirm current
beliefs. In other domains it has been proposed that subjects have a bias to use
confirmatory strategies regardless of their information value (Klayman & Ha 1987,
Popper 1959, Wason 1960). Thus, it is interesting to see whether people use a
confirmatory strategy on our concept learning task and to evaluate how informative
such a strategy would be.
2 Human sampling in the number concept game
Twenty-nine undergraduate students, recruited from Cognitive Science Department
classes at the University of California, San Diego, participated in the experiment.¹
Subjects gave informed consent, and received either partial course credit for
required study participation, or extra course credit, for their participation. The
experiment began with the following instructions:
Often it is possible to have a good idea about the state of the world, without
completely knowing it. People often learn from examples, and this study explores
how people do so. In this experiment, you will be given examples of a hidden
number rule. These examples will be randomly chosen from the numbers between 1
and 100 that follow the rule. The true rule will remain hidden, however. Then you
will be able to test an additional number, to see if it follows that same hidden rule.
Finally, you will be asked to give your best estimation of what the true hidden rule
is, and the chances that you are right. For instance, if the true hidden rule were
"multiples of 11 ", you might see the examples 22 and 66. If you thought the rule
were " multiples of 1 I ", but also possibly "even numbers ", you could test a number
of your choice, between 1-100, to see if it also follows the rule.
of your choice, between 1-100, to see if it also follows the rule.
¹ Full stimuli are posted at http://hci.ucsd.edu/~jnelson/pages/study.html
On each trial subjects first saw a set of examples from the correct concept. For
instance, if the concept were even numbers, subjects might see the numbers "2, 6, 4"
as examples. Subjects were then given the opportunity to test a number of their
choice. Subjects were given feedback on whether the number they tested was an
element of the correct concept.
We wrote a computer program that uses the probability estimates provided by
Tenenbaum (2000) model to compute the information value of any possible question
in the number concept task. We used this program to evaluate the information value
of the questions asked by subjects, the questions asked by an infomax strategy, the
questions asked by a confirmatory strategy, and the questions asked by a random
sampling strategy. The infomax strategy was determined as described above. The
random strategy consisted of randomly testing a number between 1 and 100 with
equal probability. The confirmatory strategy consisted of testing the number
(excluding the examples) that had the highest posterior probability, as given by
Tenenbaum's model, of being consistent with the correct concept.
3
Results
Results for nine representative trials are discussed. The trials are grouped into three
types, according to the posterior beliefs of Tenenbaum's model, after the example
numbers have been seen. The average information value of subjects' questions, and
of each simulated sampling strategy, are given in Table 1. The specific questions
subjects asked are considered in Sections 3.1-3.3.
Trial type         Single example,         Multiple example,         Interval
                   high uncertainty        low uncertainty

Examples           16     60     37        16,8,  60,80,  81,25,     16,23,  60,51,  81,98,
                                           2,64   10,30   4,36       19,20   57,55   96,93

Subjects           .70    .72    .73       .00    .06     0.00       .47     .37     .31
Infomax            .97    1.00   1.00      .01    .32     0.00       1.00    .99     1.00
Confirmation       .97    1.00   1.00      .00    .00     0.00       0.00    0.00    0.00
Random             .35    .54    .52       .00    .04     0.00       .17     .20     .14
Table 1. Information value, as assessed using the subjective probabilities in
Tenenbaum's number concept model, of several sampling strategies. Information
value is measured in bits.
3.1 Single example, high uncertainty trials
On these trials Tenenbaum's model is relatively uncertain about the correct concepts
and gives some probability to many concepts. Interestingly, the confirmatory
strategy is identical to the infomax strategy on each of these trials, suggesting that a
confirmatory sampling strategy may be optimal under conditions of high
uncertainty. Consider the trial with the example number 16. On this trial, the
concepts powers of 4, powers of 2, and square numbers each have moderate
posterior probability (.28, .14, and .09, respectively).
These trials provided the best qualitative agreement between infomax predictions
and subjects' sampling behavior. Unfortunately the results are inconclusive since on
these trials both infomax and confirmatory strategy make the same predictions. On
the trial with the example number 16, subjects' modal response (8 of 29 subjects)
was to test the number 4. This was actually the most informative question,
according to Tenenbaum's model. Several subjects (8 of 29) tested other square
numbers, such as 49, 36, or 25, which also have high information value, relative to
Tenenbaum's number concept model (Figure 1). Subjects' questions also had a high
information value on the trial with the example number 37, and the trial with the
example number 60.
Figure 1. Information value of sampling each number, in bits, given that the
number 16 is consistent with the correct concept.
3.2 Multiple example, low uncertainty trials
On these trials Tenenbaum's model gives a single concept very high posterior
probability. When there is little or no information value in any question, infomax
makes no particular predictions regarding which questions are best. Most subjects
tested numbers that were consistent with the most likely concept, but not
specifically given as examples. This behavior matches the confirmatory strategy.
On the trial with the examples 81, 25,4, and 36, the model gave probability 1.00 to
the concept square numbers. On this trial, the most commonly tested numbers were
49 (11 of 29 subjects) and 9 (4 of 29 subjects). No sample is expected to be
informative on this trial, because overall uncertainty is so low.
On the trial with the example numbers 60, 80, 10, and 30, the model gave
probability .94 to the concept multiples of 10, and probability .06 to the concept
multiples of 5. On this trial, infomax tested odd multiples of 5, such as 15, each of
which had expected information gain of 0.3 bits. The confirmatory strategy tested
non-example multiples of 10, such as 50, and had an information value of Obits.
Most subjects (17 of 29) followed the confirmatory strategy; some subjects (5 of 29)
followed the infomax strategy.
3.3 Interval trials
It is desirable to consider situations in which: (1) the questions asked by the
infomax strategy are different than the questions asked by the confirmatory strategy,
and (2) the choice of questions matters, because some questions have high
information value. Trials in which the correct concept is an interval of numbers
provide such situations. Consider the trial with the example numbers 16, 23, 19,
and 20. On this trial, and the other "interval" trials, the concept learning model is
certain that the correct concept is of the form numbers between m and n, because the
observed examples rule out all the other concepts. However, the model is not
certain of the precise endpoints, such as whether the concept is numbers between 16
and 23, or numbers between 16 and 24, etc. Infomax tests numbers near to, but
outside of, the range spanned by the examples, such as 14 or 26, in this example
(See Figure 2 at left).
What do subjects do? Two patterns of behavior, each observed on all three interval
trials, can be distinguished . The first is to test numbers outside of, but near to, the
range of observed examples. On the trial with example numbers between 16 and 23,
a total of 15 of 29 subjects tested numbers between 10-15, or 24-30. This behavior
is qualitatively similar to infomax.
The second pattern of behavior, which is shown by about one third of the subjects,
consists of testing (non-example) numbers within the range spanned by the observed
examples. If one is certain that the concept at hand is an interval then asking about
numbers within the range spanned by the observed examples provides no
information (Figure 2 at left). Yet some subjects consistently ask about these
numbers. Based on this surprising result, we went back to the results of Experiment
1, and reanalyzed the accuracy of Tenenbaum's model on trials in which the model
gave high probability to interval concepts. We found that on such trials the model
significantly deviated from the subjects' beliefs. In particular, subjects gave
probability lower than one that non-example numbers within the range spanned by
observed examples were consistent with the true concept. The model, however,
gives all numbers within the range spanned by the examples probability 1. See
Figure 2, at right, and note the difference between subjective probabilities (points)
and the model's estimate of these probabilities (solid line). We hypothesize that the
apparent uninformativeness of the questions asked by subjects on these trials is due
to imperfections in the current version of Tenenbaum's model, and are working to
improve the model's descriptive accuracy, to test this hypothesis.
Figure 2. Information value, relative to Tenenbaum's model, of sampling each
number, given the example numbers 16, 23, 19, and 20 (left). In this case the model
is certain that the correct concept is some interval of numbers; thus, it is not
informative to ask questions about numbers within the range spanned by that
examples. At right, the probability that each number is consistent with the correct
concept. Our subjects' mean probability rating is denoted with points (n = 81, from
our first experiment). Tenenbaum's model's approximation of these probabilities is
denoted with the solid line.
4 Discussion
This paper presents exploratory work in progress that attempts to analyze active
inference from the point of view of the rational approach to cognition (Anderson,
1990; Oaksford and Chater, 1994).
First we performed a large scale replication of Tenenbaum's number concept
experiment (Tenenbaum, 2000), in which subjects estimated the probability that
each of several test numbers was consistent with the same concept as some
example numbers. We found a correlation of 0.87 between our subjects' average
probability estimates and the probabilities predicted by Tenenbaum's model. We
then extended Tenenbaum's experiment by allowing subjects to ask questions about
the concepts at hand. Our goal was to evaluate the information value of the
questions asked by subjects.
We found that in some situations, a simple
confirmatory strategy maximizes information gain. We also found that the current
version of Tenenbaum's number concept model has significant imperfections, which
limit its ability to estimate the informativeness of subjects' questions. We expect
that modifications to Tenenbaum's model will enable infomax to predict sampling
behavior in the number concept domain. We are performing simulations to explore
this point. We are also working to generalize the infomax analysis of active
inference to more complex and natural problems.
Acknowledgments
We thank Josh Tenenbaum, Gedeon Deak, Jeff Elman, Iris Ginzburg, Craig
McKenzie, and Terry Sejnowski for their ideas; and Kent Wu and Dan Bauer for
their help in this research. The first author was partially supported by a Pew
graduate fellowship during this research.
References
Anderson, J. R. (1990). The adaptive character of thought. New Jersey: Erlbaum.
Fedorov, V. V. (1972). Theory of optimal experiments. New York: Academic Press.
Klayman, J.; Ha, Y. (1987). Confirmation, disconfirmation, and information in
hypothesis testing. Psychological Review, 94, 211-228.
MacKay, D. J. C. (1992). Information-based objective functions for active data
selection. Neural Computation, 4, 590-604.
Oaksford, M.; Chater, N. (1994). A rational analysis of the selection task as optimal
data selection. Psychological Review, 101, 608-631.
Popper, K. R. (1959). The logic of scientific discovery. London: Hutchinson.
Tenenbaum, J. B. (2000). Rules and similarity in concept learning. In Advances in
Neural Information Processing Systems, 12, Solla, S. A., Leen, T. K., Mueller, K.-R. (eds.), 59-65.
Wason, P. C. (1960). On the failure to eliminate hypotheses in a conceptual task.
Quarterly Journal of Experimental Psychology, 12, 129-140.
1,020 | 1,932 | Probabilistic Semantic Video Indexing
Milind R. Naphade, Igor Kozintsev and Thomas Huang
Department of Electrical and Computer Engineering
University of Illinois at Urbana-Champaign
{milind, igor,huang}@ifp.uiuc.edu
Abstract
We propose a novel probabilistic framework for semantic video indexing. We define probabilistic multimedia objects (multijects)
to map low-level media features to high-level semantic labels. A
graphical network of such multijects (multinet) captures scene context by discovering intra-frame as well as inter-frame dependency
relations between the concepts. The main contribution is a novel
application of a factor graph framework to model this network.
We model relations between semantic concepts in terms of their
co-occurrence as well as the temporal dependencies between these
concepts within video shots. Using the sum-product algorithm [1]
for approximate or exact inference in these factor graph multinets,
we attempt to correct errors made during isolated concept detection by forcing high-level constraints. This results in a significant
improvement in the overall detection performance.
1 Introduction
Research in video retrieval has traditionally focussed on the paradigm of query-by-example (QBE) using low-level features [2]. Query by keywords/key-phrases (QBK)
(preferably semantic) instead of examples has motivated recent research in semantic
video indexing. For this, we need models which capture the feature representation
corresponding to these keywords. A QBK system can support semantic retrieval for
a small set of keywords and also act as the first step in QBE systems to narrow down
the search. The difficulty lies in the gap between low-level media features and high-level semantics. Recent attempts to address this include detection of audio-visual
events like explosion [3] and semantic visual templates [4].
We propose a statistical pattern recognition approach for training probabilistic multimedia objects (multijects) which map the high level concepts to low-level audiovisual features. We also propose a probabilistic factor graph framework, which models the interaction between concepts within each video frame as well as across the
video frames within each video shot. Factor graphs provide an elegant framework to
represent the stochastic relationship between concepts, while the sum-product algorithm provides an efficient tool to perform learning and inference in factor graphs.
Using exact as well as approximate inference (through loopy probability propagation) we show that there is significant improvement in the detection performance.
2 Proposed Framework
To support retrieval based on high-level queries like 'Explosion on a beach', we need
models for the event explosion and site beach. User queries might similarly involve
sky, helicopter, car-chase etc. Detection of some of these concepts may be possible,
while some others may not be directly observable. To support such queries, we
proposed a probabilistic multimedia object (multiject) [3] as shown in Figure 1 (a),
which has a semantic label and which summarizes a time sequence of features from
multiple media. A Multiject can belong to any of the three categories: objects (car,
man, helicopter), sites (outdoor, beach), or events (explosion, man-walking).
Intuitively it is clear that the presence of certain multijects suggests a high possibility of detecting certain other multijects. Similarly some multijects are less likely to
occur in the presence of others. The detection of sky and water boosts the chances
of detecting a beach, and reduces the chances of detecting Indoor. It might also be
possible to detect some concepts and infer more complex concepts based on their
relation with the detected ones. Detection of human speech in the audio stream and
a face in the video stream may lead to the inference of human talking. To integrate
all the multijects and model their interaction, we propose the network of multijects
which we term as multinet. A conceptual figure of a multinet is shown in Figure
1 (b) with positive (negative) signs indicating positive (negative) interaction. In
[Figure 1 artwork: (a) a multiject node with video-feature and audio-feature inputs, annotated P(concept = Outdoor | features, other multijects) = 0.7; (b) a network of multijects]
Figure 1: (a) A probabilistic multimedia object. (b) A conceptual multinet.
Section 5 we present a factor graph multinet implementation.
3 Video segmentation and Feature Extraction
We have digitized movies of different genres to create a large database of a few
hours of video data. The video clips are segmented into shots using the algorithm
in [5]. We then perform spatio-temporal segmentation [2] within each shot to obtain and track regions homogeneous in color and motion separated by strong edges.
Large dominant regions are labeled manually. Each region is then processed to
extract features characterizing the color (3-channel histogram [3]), texture (statistical properties of the Gray-level Co-occurrence matrices at 4 different orientations
[6]), structure (edge direction histogram [7]), motion (affine motion parameters)
and shape (moment invariants [8]). Details about the extracted features can be
found in [9]. For sites we use color, texture and structural features (84 elements)
and for objects and events we use all features (98 elements)l . Audio features are
extracted as in [10]. For training our multiject and multinet models we use 1800
frames from different video shots and for testing our framework we use 9400 frames.
Since consecutive images within a shot are correlated, the video data is subsampled
to create the training and testing without redundancy.
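As one concrete illustration of the feature computation, the snippet below builds a normalized 3-channel color histogram for a segmented region; the bin count per channel and the pixel-array convention are assumptions made for this sketch, and the texture, structure, motion and shape features would be appended to the same vector analogously.

    import numpy as np

    def color_histogram(region_pixels, bins=8):
        """3-channel color histogram of a segmented region.

        region_pixels: (K, 3) array of color values in 0..255 for the
        K pixels of one region. Eight bins per channel is an assumption
        for this sketch, not the paper's exact setting.
        """
        hist = []
        for ch in range(3):
            h, _ = np.histogram(region_pixels[:, ch], bins=bins, range=(0, 256))
            hist.append(h / max(len(region_pixels), 1))   # normalize per channel
        return np.concatenate(hist)

    region = np.random.randint(0, 256, size=(500, 3))   # stand-in region pixels
    print(color_histogram(region).shape)                # (24,)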
4 Modeling semantic concepts using Multijects
We use an identical approach to model concepts in video and audio (independently
and jointly). The following site multijects are used in our experiments: sky, water,
forest, rocks and snow. Audio-only multijects (human-speech, music) can be found
in [10] and audio-visual multijects (explosion) in [3]. Detection of multijects is
performed on every segmented region² within each video frame. Let the feature
vector for region $j$ be $X_j$. We model the semantic concept as a binary random
variable and define the two hypotheses $H_0$ and $H_1$ as

$$H_0 : X_j \sim P_0(X_j), \qquad H_1 : X_j \sim P_1(X_j) \qquad (1)$$

where $P_0(X_j)$ and $P_1(X_j)$ denote the class-conditional probability density functions conditioned on the null hypothesis (concept absent) and the true hypothesis
(concept present). $P_0(X_j)$ and $P_1(X_j)$ are modeled using a mixture of Gaussian
components for the site multijects³. For objects and events (in video and audio),
hidden Markov models replace the Gaussian mixture models, and the feature vectors for
all the frames within a shot constitute the time series modeled. The detection
performance for the five site multijects on the test-set is given in Table 1.
multiject          Rocks   Sky    Snow   Water   Forest
Detection (%)      77      81.8   81.5   79.4    85.1
False Alarm (%)    24.1    11.9   12.9   15.6    14.9
Table 1: Maximum likelihood binary classification performance for site multijects.
4.1 Frame level semantic features
Since multijects are used as semantic feature detectors at a regional level, it is easy
to define multiject-based semantic features at the frame level by integrating the
region-level classification. We check each region for each concept individually and
obtain probabilities of each concept being present or absent in the region. Imperfect
segmentation does not hurt us too much since these soft decisions are modified in
the multinet based on high-level constraints. Defining a binary random variable Rij
(Rij = 1/0 if concept present/absent) and assuming uniform priors on the presence
or absence of a concept in any region we can use Bayes' rule to obtain:
$$P(R_{ij} = 1 \mid X_j) = \frac{P(X_j \mid R_{ij} = 1)}{P(X_j \mid R_{ij} = 1) + P(X_j \mid R_{ij} = 0)} \qquad (2)$$
Defining binary random variables $F_i$, $i \in \{1, \ldots, N\}$ ($N$ is the number of concepts) to
take on value 1 if concept i is present in the frame and value 0 otherwise, we use the
¹ Automatic feature selection is not addressed here.
² We thank Prof. Chang and D. Zhong for the algorithm [2].
³ $P_0(X_j)$ used 5 Gaussian components, while $P_1(X_j)$ used 10. The number of mixing
components can be fixed experimentally and could be different for optimal performance.
In general, models for $H_0$ are represented better with more components than those for $H_1$.
OR function to combine soft decisions for each concept from all regions to obtain
$F_i$. Let $X = \{X_1, \ldots, X_M\}$ ($M$ is the number of regions in a frame); then

$$P(F_i = 0 \mid X) = \prod_{j=1}^{M} P(R_{ij} = 0 \mid X_j) \quad \text{and} \quad P(F_i = 1 \mid X) = 1 - P(F_i = 0 \mid X) \qquad (3)$$
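A minimal numerical sketch of Equations (2) and (3) follows; the class-conditional likelihood values are stand-ins for what the multiject's mixture models would actually produce for each region.

    import numpy as np

    def region_posterior(lik1, lik0):
        """Equation (2): P(R_ij = 1 | X_j) under uniform priors."""
        return lik1 / (lik1 + lik0)

    def frame_presence(region_posteriors):
        """Equation (3): OR-fusion of region posteriors into P(F_i = 1 | X)."""
        p_absent = np.prod([1.0 - p for p in region_posteriors])
        return 1.0 - p_absent

    # Stand-in likelihoods (P1(Xj), P0(Xj)) for three regions in one frame.
    region_p = [region_posterior(l1, l0)
                for l1, l0 in [(0.9, 0.3), (0.2, 0.8), (0.5, 0.5)]]
    print(frame_presence(region_p))   # frame-level soft decision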
5 The multinet as a factor graph
To model the interaction between multijects in a multinet, we propose to use a
factor graph [1] framework. Factor graphs subsume graphical models like Bayesian
nets and Markov random fields and have been successfully applied in the area of
channel error correction coding [1] and, specifically, iterative decoding. Let $\mathbf{x} = \{x_1, x_2, \ldots, x_n\}$ be a vector of variables. A factor graph visualizes the factorization
of a global function $f(\mathbf{x})$. Let $f(\mathbf{x})$ factor as

$$f(\mathbf{x}) = \prod_{i=1}^{m} f_i(\mathbf{x}^{(i)}) \qquad (4)$$

where $\mathbf{x}^{(i)}$ is the set of variables of the function $f_i$. A factor graph for $f$ is defined as
the bipartite graph with two vertex classes Vf and Vv of sizes m and n respectively
such that the ith node in Vf is connected to the jth node in Vv iff fi is a function
of $x_j$. Figure 2 (a) shows a simple factor graph representation of $f(x, y, z) =
f_1(x, y) f_2(y, z)$ with function nodes $f_1, f_2$ and variable nodes $x, y, z$.
Many signal processing and learning problems are formulated as optimizing a global
function f(x) marginalized for a subset of its arguments. The algorithm which allows us to perform this efficiently, though in most cases only approximately, is called
the sum-product algorithm. The sum-product algorithm works by computing
messages at the nodes using a simple rule and then passing the messages between
nodes according to a reasonable schedule. A message from a function node to a
variable node is the product of all messages incoming to the function node with the
function itself, marginalized for the variable associated with the variable node. A
message from a variable node to a function node is simply the product of all messages incoming to the variable node from other functions connected to it. Pearl's
probability propagation working on a Bayesian net is equivalent to the sum-product
algorithm applied to the corresponding factor graph. If the factor graph is a tree,
exact inference is possible using a single forward and backward pass of
messages. For all other cases inference is approximate and the message passing is
iterative [1] leading to loopy probability propagation. This has a direct bearing on
our problem because relations between semantic concepts are complicated and in
general contain numerous cycles (e.g., see Figure 1 (b)).
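To ground the message-update rules, here is a complete sum-product computation on the toy tree of Figure 2 (a), $f(x, y, z) = f_1(x, y) f_2(y, z)$, with binary variables; the potential values are arbitrary illustrations, and because this graph is cycle-free a single pass yields the exact marginal.

    import numpy as np

    # Toy potentials; f1[x, y] and f2[y, z] with all variables binary.
    f1 = np.array([[0.9, 0.1],
                   [0.2, 0.8]])
    f2 = np.array([[0.7, 0.3],
                   [0.4, 0.6]])

    # Leaf variables x and z send all-ones messages to their factors.
    msg_x = np.ones(2)
    msg_z = np.ones(2)

    # Factor-to-variable messages: multiply incoming messages into the
    # factor, then marginalize out every variable except the target.
    msg_f1_to_y = msg_x @ f1      # sum over x of f1[x, y] * msg_x[x]
    msg_f2_to_y = f2 @ msg_z      # sum over z of f2[y, z] * msg_z[z]

    # Marginal of y: normalized product of its incoming messages.
    p_y = msg_f1_to_y * msg_f2_to_y
    p_y /= p_y.sum()
    print(p_y)

    # Brute-force check over all 8 configurations.
    joint = f1[:, :, None] * f2[None, :, :]        # joint[x, y, z]
    print(joint.sum(axis=(0, 2)) / joint.sum())    # matches p_y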
5.1 Relating semantic concepts in a factor graph
We now describe a frame-level factor graph to model the probabilistic relations
between various frame-level semantic features Fi obtained using Equation 3. To
capture the co-occurrence relationship between the five semantic concepts at the
frame-level, we define a function node which is connected to the five variable nodes
representing the concepts as shown in Figure 2 (b). This function node represents
$P(F_1, F_2, F_3, \ldots, F_N)$. The function nodes below the five variable nodes denote the
messages passed by the OR function of Equation 3 (P(Fi = 1), P(Fi = 0)). These
are then propagated to the function node. At the function node the messages are
multiplied by the function which is estimated from the co-occurrence of the concepts
in the training set. The function node then sends back messages summarized for
each variable. This modifies the soft decisions at the variable nodes according to
the high-level relationship between the five concepts. In general, the distribution
[Figure 2 artwork: panel labels read "joint density function of 5 semantic concepts" and "fusion at frame level using OR function"]
Figure 2: (a) An example of a simple factor graph. (b) A multinet: accounting for
concept dependencies using a single function. (c) Another multinet: replacing the
function in (b) by a product of 10 local functions.
at the function node in Figure 2 (b) is exponential in the number of concepts
(N) and the computational cost may increase quickly. To alleviate this we can
enforce a factorization of the function in Figure 2 (b) as a product of a set of
local functions where each local function accounts for co-occurrence of two variables
only. This modification to the graph in Figure 2 (b) is shown in Figure 2 (c).
Each function in Figure 2 (c) represents the joint probability mass of those two
variables that are its arguments (and there are $\binom{N}{2} = 10$ such functions for $N = 5$), thus reducing
the complexity. The factor graph is no longer a tree and exact inference becomes
hard as the number of loops grows. We then apply iterative techniques based on
the sum-product algorithm to overcome this. We can also incorporate temporal
[Figure 3 artwork: (a) a dynamic multinet with unfactored global distribution for each frame; (b) a dynamic multinet with factored global distribution for each frame; in both, a Markov chain links the multinet states at frames t-1 and t to account for temporal dependency]
Figure 3: (a) Replicating the multinet in Figure 2 (b) for each frame in a shot and
introducing temporal dependencies between the value of each concept in consecutive
frames. (b) Repeating this for Figure 2 (c).
dependencies. This can be done by replicating the slice of factor graph in Figure
2 (b) or (c) as many times as the number of frames within a single video shot and
by introducing a first order Markov chain for each concept. Figures 3 (a) and (b)
show two consecutive time slices and extend the models in Figures 2 (b) and (c)
respectively. The horizontal links in Figures 3 (a), (b) connect the variable node
for each concept in a time slice to the corresponding variable node in the next time
slice through a function modeling the transition probability. This framework now
becomes a dynamic probabilistic network. For inference, messages are iteratively
passed locally within each slice. This is followed by message passing across the
time slices in the forward direction and then in the backward direction. Accounting
for temporal dependencies thus leads to temporal smoothing of the soft decisions
within each shot.
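The smoothing for a single concept chain can be sketched with the standard forward-backward recursions; the sticky transition matrix and the per-frame evidence below are illustrative stand-ins for the transition probabilities estimated from training data and the soft decisions produced by each multinet slice.

    import numpy as np

    def forward_backward(evidence, trans, prior):
        """Smooth per-frame soft decisions for one binary concept.

        evidence: (T, 2) array, evidence[t, v] prop. to P(obs_t | F = v)
        trans:    (2, 2) matrix, trans[u, v] = P(F_t = v | F_{t-1} = u)
        prior:    (2,) initial distribution over {absent, present}
        """
        T = len(evidence)
        alpha = np.zeros((T, 2))
        beta = np.ones((T, 2))
        alpha[0] = prior * evidence[0]
        alpha[0] /= alpha[0].sum()
        for t in range(1, T):                      # forward pass
            alpha[t] = (alpha[t - 1] @ trans) * evidence[t]
            alpha[t] /= alpha[t].sum()
        for t in range(T - 2, -1, -1):             # backward pass
            beta[t] = trans @ (evidence[t + 1] * beta[t + 1])
            beta[t] /= beta[t].sum()
        post = alpha * beta
        return post / post.sum(axis=1, keepdims=True)

    trans = np.array([[0.9, 0.1], [0.1, 0.9]])     # assumed sticky chain
    evidence = np.array([[0.3, 0.7], [0.6, 0.4],
                         [0.2, 0.8], [0.25, 0.75]])
    print(forward_backward(evidence, trans, prior=np.array([0.5, 0.5])))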
6 Results
We compare detection performance of the multijects with and without accounting
for the concept dependencies and temporal dependencies. The reference system
performs multiject detection by thresholding soft-decisions (i.e., $P(F_i \mid X)$) at the
frame-level. The proposed schemes are then evaluated by thresholding the soft decisions obtained after message passing using the structures in Figures 2 (b), (c)
(conceptual dependencies) and Figures 3 (a), (b) (conceptual and temporal dependencies). We use receiver operating characteristics (ROC) curves which show a plot
of the probability of detection plotted against the probability of false alarms for
different values of a parameter (the threshold in our case).
Figure 4 shows the ROC curves for the overall performance over the test-set across
all the five multijects. The three curves in Figure 4 (a) correspond to the performance using isolated frame-level classification, the factor graph in Figure 2 (b) and
the factor graph in Figure 2 (c) with ten iterations of loopy propagation. The curves
in Figure 4 (b) correspond to isolated detection followed by temporal smoothing,
the dynamic multinet in Figure 3 (a) and the one in Figure 3 (b) respectively. From
Figure 4: ROC curves for overall performance using isolated detection and two
factor graph representations. (a) With static multinets (b) With dynamic multinets.
Figure 4 we observe that there is significant improvement in detection performance
by using the multinet to model the dependencies between concepts than without
using it. This improvement is especially stark for low $P_f$, where the detection rate improves by more than 22% for a threshold corresponding to $P_f = 0.1$. Interestingly,
detection based on the factorized functions (Figure 2 (c)) performs better than
the one based on the unfactorized function. This suggests that the factorized
function is a better representative and can be estimated more reliably due to fewer
parameters being involved. Also by using models in Figure 3, which account for
temporal dependencies across video frames and by performing smoothing using the
forward backward algorithm, we see further improvement in detection performance
in Figure 4 (b). The detection rate corresponding to $P_f = 0.1$ is 68% for the
static multinet (Figure 2 (c)) and 72% for its dynamic counterpart (Figure 3 (b)).
Comparison of ROC curves with and without temporal smoothing (not shown here
due to lack of space) reveals that temporal smoothing results in better detection
irrespective of the threshold or configuration.
7 Conclusions and Future Research
We propose a probabilistic framework for detecting semantic concepts using multijects and multinets. We present implementations of static and dynamic multinets
using factor graphs. We show that there is significant improvement in detection performance by accounting for the interaction between semantic concepts and temporal
dependency amongst the concepts. The multinet architecture imposes no restrictions on the classifiers used in the multijects and we can improve performance by
using better multiject models. Our framework can be easily expanded to integrate
multiple modalities if they have not been integrated in the multijects to account for
the loose coupling between audio and visual streams in movies. It can also support
inference of concepts that are observed not through media features but through
their relation to those concepts which are observed in media features.
References
[1] F. Kschischang, B. Frey, and H.-A. Loeliger, "Factor graphs and the sum-product
algorithm," submitted to IEEE Trans. Inform. Theory, July 1998.
[2] D. Zhong and S. F. Chang, "Spatio-temporal video search using the object-based
video representation," in Proceedings of the IEEE International Conference on Image
Processing, vol. 2, Santa Barbara, CA, Oct. 1997, pp. 21-24.
[3] M. Naphade, T. Kristjansson, B. Frey, and T . S. Huang, "Probabilistic multimedia
objects (multijects): A novel approach to indexing and retrieval in multimedia systems," in Proceedings of the fifth IEEE International Conference on Image Processing,
vol. 3, Chicago, IL, Oct 1998, pp. 536-540.
[4] S. F. Chang, W. Chen, and H. Sundaram, "Semantic visual templates - linking features to semantics," in Proceedings of the fifth IEEE International Conference on
Image Processing, vol. 3, Chicago, IL, Oct 1998, pp. 531- 535.
[5] M. Naphade, R. Mehrotra, A. M. Ferman, J. Warnick, T. S. Huang, and A. M.
Tekalp, "A high performance shot boundary detection algorithm using multiple cues,"
in Proceedings of the fifth IEEE International Conference on Image Processing, vol. 2,
Chicago, IL, Oct 1998, pp . 884-887.
[6] R. Jain, R. Kasturi, and B. Schunck, Machine Vision. MIT Press and McGraw-Hill,
1995.
[7] A. K. Jain and A. Vailaya, "Shape-based retrieval: A case study with trademark
image databases," Pattern Recognition, vol. 31, no. 9, pp. 1369- 1390, 1998.
[8] S. Dudani, K. Breeding, and R. McGhee, "Aircraft identification by moment invariants," IEEE Trans. on Computers, vol. C-26, pp. 39- 45, Jan 1977.
[9] M. R. Naphade and T. S. Huang, "A probabilistic framework for semantic indexing
and retrieval in video," to appear in IEEE International Conference on Multimedia
and Expo, New York, NY, July 2000. http://www.ifp.uiuc.edu/~milind/cpapers.html
[10] M. R. Naphade and T. S. Huang, "Stochastic modeling of soundtrack for efficient
segmentation and indexing of video," in SPIE IS&T Storage and Retrieval for
Multimedia Databases, vol. 3972, Jan 2000, pp. 168-176.
1,021 | 1,933 | Sex with Support Vector Machines
Baback Moghaddam
Mitsubishi Electric Research Laboratory
Cambridge MA 02139, USA
baback@merl.com
Ming-Hsuan Yang
University of Illinois at Urbana-Champaign
Urbana, IL 61801 USA
mhyang@vision.ai.uiuc.edu
Abstract
Nonlinear Support Vector Machines (SVMs) are investigated for
visual sex classification with low resolution "thumbnail" faces (21-by-12 pixels) processed from 1,755 images from the FERET face
database. The performance of SVMs is shown to be superior to
traditional pattern classifiers (Linear, Quadratic, Fisher Linear Discriminant, Nearest-Neighbor) as well as more modern techniques
such as Radial Basis Function (RBF) classifiers and large ensemble-RBF networks. Furthermore, the SVM performance (3.4% error)
is currently the best result reported in the open literature.
1 Introduction
In recent years, SVMs have been successfully applied to various tasks in computational face-processing. These include face detection [14], face pose discrimination [12] and face recognition [16]. Although facial sex classification has
attracted much attention in the psychological literature [1, 4, 8, 15], relatively few
computational learning methods have been proposed. We will briefly review and
summarize the prior art in facial sex classification.¹
Golomb et al. [10] trained a fully connected two-layer neural network, SEXNET, to
identify sex from 30-by-30 face images. Their experiments on a set of 90 photos (45
males and 45 females) gave an average error rate of 8.1% compared to an average
error rate of 11.6% from a study of five human subjects. Brunelli and Poggio [2]
¹ Sex classification is also referred to as gender classification (for political correctness).
However, given the two distinct biological classes, the scientifically correct term is sex
classification. Gender often denotes a fuzzy continuum of feminine ↔ masculine [1].
Figure 1: Sex classifier
developed HyperBF networks for sex classification in which two competing RBF
networks, one for male and the other for female, were trained using 16 geometric
features as inputs (e.g., pupil to eyebrow separation, eyebrow thickness, and nose
width). The results on a data set of 168 images (21 males and 21 females) show
an average error rate of 21%. Using similar techniques as Golomb et al. [10] and
Cottrell and Metcalfe [6], Tamura et al. [18] used multi-layer neural networks
to classify sex from face images at multiple resolutions (from 32-by-32 to 8-by-8
pixels). Their experiments on 30 test images show that their network was able to
determine sex from 8-by-8 images with an average error rate of 7%. Instead of using
a vector of gray levels to represent faces, Wiskott et al. [20] used labeled graphs of
two-dimensional views to describe faces. The nodes were represented by waveletbased local "jets" and the edges were labeled with distance vectors similar to the
geometric features in [3]. They used a small set of controlled model graphs of males
and females to encode "general face knowledge," in order to generate graphs of new
faces by elastic graph matching. For each new face, a composite reconstruction
was generated using the nodes in the model graphs. The sex of the majority of
nodes used in the composite graph was used for classification. The error rate of
their experiments on a gallery of 112 face images was 9.8%. Recently, Gutta et
al. [11] proposed a hybrid classifier based on neural networks (RBFs) and inductive
decision trees with Quinlan's C4.5 algorithm. Experiments with 3000 FERET faces
of size 64-by-72 pixels yielded an error rate of 4%.
2 Sex Classifiers
A generic sex classifier is shown in Figure 1. An input facial image x generates a
scalar output f(x) whose polarity - sign of f(x) - determines class membership. The
magnitude II f (x II can usually be interpreted as a measure of belief or certainty in the
decision made. Nearly all binary classifiers can be viewed in these terms; for densitybased classifiers (Linear, Quadratic and Fisher) the output function f(x) is a log
likelihood ratio, whereas for kernel-based classifiers (Nearest-Neighbor, RBFs and
SVMs) the output is a "potential field" related to the distance from the separating
boundary.
2.1 Support Vector Machines
A Support Vector Machine is a learning algorithm for pattern classification and
regression [19, 5]. The basic training principle behind SVMs is finding the optimal
linear hyperplane such that the expected classification error for unseen test samples
is minimized - i.e., good generalization performance. According to the structural
risk minimization inductive principle [19], a function that classifies the training data
accurately and which belongs to a set of functions with the lowest VC dimension
[5] will generalize best regardless of the dimensionality of the input space. Based
on this principle, a linear SVM uses a systematic approach to finding a class of
functions with the lowest VC dimension. For linearly non-separable data, SVMs
can (nonlinearly) map the input to a high dimensional feature space where a linear
hyperplane can be found. Although there is no guarantee that a linear separable
solution will always exist in the high dimensional space, in practice it is quite feasible
to construct a working solution.
Given a labeled set of $M$ training samples $(\mathbf{x}_i, y_i)$, where $\mathbf{x}_i \in \mathbb{R}^N$ and $y_i$ is the
associated label ($y_i \in \{-1, 1\}$), an SVM classifier finds the optimal hyperplane that
correctly separates (classifies) the data points while maximizing the distance of
either class from the hyperplane (the margin). Vapnik [19] shows that maximizing
the margin distance is equivalent to minimizing the VC dimension in constructing
an optimal hyperplane. Computing the best hyperplane is posed as a constrained
optimization problem and solved using quadratic programming techniques. The
discriminant hyperplane is defined by the level set of
$$f(\mathbf{x}) = \sum_{i=1}^{M} y_i \, \alpha_i \, k(\mathbf{x}, \mathbf{x}_i) + b$$
where $k(\cdot, \cdot)$ is a kernel function and the sign of $f(\mathbf{x})$ determines the membership
of $\mathbf{x}$. Constructing an optimal hyperplane is equivalent to finding all the nonzero
$\alpha_i$. Any vector $\mathbf{x}_i$ that corresponds to a nonzero $\alpha_i$ is a support vector (SV) of
the optimal hyperplane. A desirable feature of SVMs is that the number of training
points which are retained as support vectors is usually quite small, thus providing
a compact classifier.
For a linear SVM, the kernel function is just a simple dot product in the input
space while the kernel function in a nonlinear SVM effectively projects the samples
to a feature space of higher (possibly infinite) dimension via a nonlinear mapping
function:
$$\Phi : \mathbb{R}^N \mapsto F^M, \qquad M \gg N$$
and then constructs a hyperplane in F. The motivation behind this mapping is
that it makes possible a larger class of discriminative functions with which to find a
linear hyperplane in the high dimensional feature space. Using Mercer's theorem [7],
the expensive calculations required in projecting samples into the high dimensional
feature space can be replaced by a much simpler kernel function satisfying the
condition
$$k(\mathbf{x}, \mathbf{x}_i) = \Phi(\mathbf{x}) \cdot \Phi(\mathbf{x}_i)$$
where $\Phi$ is the implicit nonlinear projection. Several kernel functions, such as
polynomials and radial basis functions, have been shown to satisfy Mercer's theorem
Figure 2: Automatic face alignment system [13].
Figure 3: Some processed FERET faces, at high resolution
and have been used successfully in nonlinear SVMs. In fact, by using different
kernel functions, SVMs can implement a variety of learning machines, some of
which coincide with classical architectures. Nevertheless, automatic selection of the
"right" kernel function and its associated parameters remains problematic and in
practice one must resort to trial and error for model selection.
A radial basis function (RBF) network is also a kernel-based technique for improved
generalization, but it is based instead on regularization theory [17]. The number of
radial bases and their centers in a conventional RBF network is predetermined,
often by k-means clustering. In contrast, a SVM with the same RBF kernel
will automatically determine the number and location of the centers, as well as
the weights and thresholds that minimize an upper bound on the expected risk.
Recently, Evgeniou et al. [9] have shown that both SVMs and RBFs can be
formulated under a unified framework in the context of Vapnik's theory of statistical
learning [19]. As such, SVMs provide a more systematic approach to classification
than classical RBF and various other neural networks.
3 Experiments
In our study, 256-by-384 pixel FERET "mug-shots" were pre-processed using an
automatic face-processing system which compensates for translation, scale, as well
as slight rotations. Shown in Figure 2, this system is described in detail in
[13] and uses maximum-likelihood estimation for face detection, affine warping
for geometric shape alignment and contrast normalization for ambient lighting
variations. The resulting output "face-prints" in Figure 2 were standardized to
80-by-40 (full) resolution. These "face-prints" were further sub-sampled to 21-by-12 pixel "thumbnails" for our low resolution experiments. Figure 3 shows a few
examples of processed face-prints (note that these faces contain little or no hair
information). A total of 1755 thumbnails (1044 males and 711 females) were used
Classifier                            Error Rate
                                      Overall    Male      Female
SVM with Gaussian RBF kernel          3.38%      2.05%     4.79%
SVM with Cubic polynomial kernel      4.88%      4.21%     5.59%
Large ensemble-RBF                    5.54%      4.59%     6.55%
Classical RBF                         7.79%      6.89%     8.75%
Quadratic classifier                  10.63%     9.44%     11.88%
Fisher linear discriminant            13.03%     12.31%    13.78%
Nearest neighbor                      27.16%     26.53%    28.04%
Linear classifier                     58.95%     58.47%    59.45%
Table 1: Experimental results with thumbnails.
in our experiments. For each classifier, the average error rate was estimated with
5-fold cross validation (CV), i.e., a 5-way dataset split, with 4/5th used for
training and 1/5th used for testing, with 4 subsequent non-overlapping rotations.
The average size of the training set was 1496 (793 males and 713 females) and the
average size of the test set was 259 (133 males and 126 females).
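In modern terms the protocol could be reproduced roughly as follows with scikit-learn; the random stand-in data and the hyperparameter settings are assumptions for this sketch, not the paper's actual thumbnails or tuned values.

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    # Stand-in data: 1755 thumbnails flattened to 21 * 12 = 252 values.
    rng = np.random.default_rng(0)
    X = rng.random((1755, 21 * 12))
    y = rng.integers(0, 2, size=1755)        # 1 = male, 0 = female (stand-in)

    clf = SVC(kernel="rbf", C=1.0, gamma="scale")    # assumed hyperparameters
    scores = cross_val_score(clf, X, y, cv=5)        # 5-fold CV accuracy
    print("mean 5-fold error: %.2f%%" % (100 * (1 - scores.mean())))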
Figure 4: Error rates of various classifiers
The SVM classifier was first tested with various kernels in order to explore the space
of models and performance. A Gaussian RBF kernel was found to perform the best
(in terms of error rate), followed by a cubic polynomial kernel as second best. In the
large ensemble-RBF experiment, the number of radial bases was incremented until
the error fell below a set threshold. The average number of radial bases in the large
ensemble-RBF was found to be 1289 which corresponds to 86% of the training set.
The number of radial bases for classical RBF networks was heuristically set to 20
prior to actual training and testing. Quadratic, Linear and Fisher classifiers were
implemented using Gaussian distributions and in each case a likelihood ratio test
was used for classification. The average error rates of all the classifiers tested with
21-by-12 pixel thumbnails are reported in Table 1 and summarized in Figure 4.
The SVMs out-performed all other classifiers, although the performance of large
ensemble-RBF networks was close to SVMs. However, nearly 90% of the training
set was retained as radial bases by the large ensemble-RBF. In contrast, the number
of support vectors found by both SVMs was only about 20% of the training set. We
also applied SVMs to classification based on high resolution images. The Gaussian
and cubic kernel performed equally well at both low and high resolutions with only
a slight 1% error rate difference. We note that as indicated in Table 1, all the
classifiers had higher error rates in classifying females, most likely due to the general
lack of prominent and distinct facial features in female faces, as compared to males.
4 Discussion
We have presented a comprehensive evaluation of various classification methods
for determination of sex from facial images. The non-triviality of this task (made
even harder by our "hairless" low resolution faces) is demonstrated by the fact that
a linear classifier had an error rate of 60% (i.e., worse than a random coin flip).
Furthermore, an acceptable error rate (≈5%) for the large ensemble-RBF network
required storage of 86% of the training set (SVMs required about 20%). Storage of
the entire dataset in the form of the nearest-neighbor classifier yielded too high an
error rate (30%). Clearly, SVMs succeeded in the difficult task of finding a near-optimal class partition in face space with the added economy of a small number of
support faces.
Given the relative success of previous studies with low resolution faces it is reassuring that 21-by-12 faces can, in fact, be used for reliable sex classification.
However, most of the previous studies used datasets of relatively few faces,
consequently with little statistical significance in the reported results. The most
directly comparable study to ours is that of Gutta et al. [11], which also used
FERET faces. With a dataset of 3000 faces at a resolution of 64-by-72, their hybrid
RBF/Decision-Tree classifier achieved a 4% error rate. In our study, with 1800 faces
at a resolution of 21-by-12, a Gaussian kernel SVM was able to achieve a 3.4% error
rate. Both studies use extensive cross validation to estimate the error rates. Given
our results with SVMs, it is clear that better performance at even lower resolutions
is possible with this learning technique.
References
[1] V. Bruce, A. M. Burton, N. Dench, E. Hanna, P. Healey, O. Mason, A. Coombes,
R. Fright, and A. Linney. Sex discrimination: How do we tell the difference between
male and female faces? Perception, 22:131-152, 1993.
[2] R. Brunelli and T. Poggio. Hyperbf networks for gender classification. In Proceedings
of the DARPA Image Understanding Workshop, pages 311-314, 1992.
[3] R. Brunelli and T. Poggio. Face recognition: Features vs. templates. IEEE
Transactions on Pattern Analysis and Machine Intelligence, 15(10), October 1993.
[4] A. M. Burton, V. Bruce, and N. Dench. What's the difference between men and
women? Evidence from facial measurement. Perception, 22:153-176, 1993.
[5] C. Cortes and V. Vapnik. Support vector networks. Machine Learning, 20, 1995.
[6] Garrison W. Cottrell. Empath: Face, emotion, and gender recognition using holons.
In Advances in Neural Information Processing Systems, pages 564-571, 1991.
[7] R. Courant and D. Hilbert. Methods of Mathematical Physics, volume 1. Interscience,
New York, 1953.
[8] B. Edelman, D. Valentin, and H. Abdi. Sex classification of face areas: how well can
a linear neural network predict human performance. Journal of Biological System,
6(3):241-264, 1998.
[9] Theodoros Evgeniou, Massimiliano Pontil, and Tomaso Poggio. A unified framework
for regularization networks and support vector machines. Technical Report AI Memo
No. 1654, MIT, 1999.
[10] B. A. Golomb, D. T. Lawrence, and T. J. Sejnowski. Sexnet: A neural network
identifies sex from human faces. In Advances in Neural Information Processing
Systems, pages 572-577, 1991.
[11] S. Gutta, H. Wechsler, and P. J. Phillips. Gender and ethnic classification. In
Proceedings of the IEEE International Automatic Face and Gesture Recognition, pages
194-199, 1998.
[12] J. Huang, X. Shao, and H. Wechsler. Face pose discrimination using support vector
machines. In Proc. of 14th Int'l Conf. on Pattern Recognition (ICPR '98), pages 154-156, August 1998.
[13] B. Moghaddam and A. Pentland. Probabilistic visual learning for object representation. IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-19(7):696-710, July 1997.
[14] E. Osuna, R. Freund, and F. Girosi. Training support vector machines: an application
to face detection. In Proceedings of the IEEE Computer Society Conference on
Computer Vision and Pattern Recognition, pages 130-136, 1997.
[15] A. J. O'Toole, T. Vetter, N. F. Troje, and H. H. Bulthoff. Sex classification is better
with three-dimensional structure than with image intensity information. Perception,
26:75- 84, 1997.
[16] P. J. Phillips. Support vector machines applied to face recognition. In M. S. Kearns,
S. Solla, and D. Cohen, editors, Advances in Neural Information Processing Systems
11, volume 11, pages 803-809. MIT Press, 1998.
[17] T. Poggio and F. Girosi. Networks for approximation and learning. Proceedings of
the IEEE, 78(9):1481-1497, 1990.
[18] S. Tamura, H. Kawai, and H. Mitsumoto. Male/female identification from 8 x 6 very
low resolution face images by neural network. Pattern Recognition, 29(2):331-335,
1996.
[19] V. Vapnik. The Nature of Statistical Learning Theory. Springer, 1995.
[20] Laurenz Wiskott, Jean-Marc Fellous, Norbert Kruger, and Christoph von der
Malsburg. Face recognition and gender determination. In Proceedings of the
International Workshop on Automatic Face and Gesture Recognition, pages 92-97,
1995.
1,022 | 1,934 | Computing with Finite and Infinite Networks
Ole Winther*
Theoretical Physics, Lund University
Sölvegatan 14 A, S-223 62 Lund, Sweden
winther@nimis.thep.lu.se
Abstract
Using statistical mechanics results, I calculate learning curves (average
generalization error) for Gaussian processes (GPs) and Bayesian neural
networks (NNs) used for regression. Applying the results to learning a
teacher defined by a two-layer network, I can directly compare GP and
Bayesian NN learning. I find that a GP in general requires O(d^s) training
examples to learn input features of order s (d is the input dimension),
whereas a NN can learn the task with a number of training examples of the
order of the number of adjustable weights. Since a GP can be considered as an infinite
NN, the results show that even in the Bayesian approach, it is important
to limit the complexity of the learning machine. The theoretical findings
are confirmed in simulations with analytical GP learning and a NN mean
field algorithm.
1 Introduction
Non-parametric kernel methods such as Gaussian Processes (GPs) and Support Vector Machines (SVMs) are closely related to neural networks (NNs). These may be considered as
single layer networks in a possible infinite dimensional feature space. Both the Bayesian
GP approach and SVMs regularize the learning problem so that only a finite number of the
features (dependent on the amount of data) is used.
Neal [1] has shown that Bayesian NNs converge to GPs in the limit of infinite number of
hidden units and furthermore argued that (1) there is no reason to believe that real-world
problems should require only a 'small' number of hidden units and (2) there are in the
Bayesian approach no reasons (besides computational) to limit the size of the network.
Williams [2] has derived kernels allowing for efficient computation with both infinite feedforward and radial basis networks.
In this paper, I show that learning with a finite rather than an infinite network can make a
profound difference by studying the case where the task to be learned is defined by a large
but finite two-layer NN. A theoretical analysis of the Bayesian approach to learning this
task shows that the Bayesian student makes a learning transition from a linear model to
specialized non-linear one when the number of examples is of the order of the number of
adjustable weights in the network. This effect-which is also seen in the simulations-is a
consequence of the finite complexity of the network. In an infinite network, i.e. a GP on the
*http://www.thep.lu.se/tf2/staff/winther/
other hand such a transition will not occur. It will eventually learn the task but it requires
O(d^s) training examples to learn features of order s, where d is the input dimension.
Here, I focus entirely on regression. However, the basic conclusions regarding learning
with kernel methods and NNs turn out to be valid more generally, e.g. for classification
(unpublished results and [3]).
I consider the usual Bayesian setup of supervised learning: a training set D_N = {(x_i, y_i) | i = 1,...,N} (x ∈ R^d and y ∈ R) is known and the output for the new input x is predicted by the function f(x), which is sampled from the prior distribution of model outputs. I will consider both a Gaussian process prior and the prior implied by a large (but finite) two-layer network. The output noise is taken to be Gaussian, so the likelihood becomes p(y|f(x)) = exp(−(y − f(x))²/2σ²)/√(2πσ²). The error measure is minus the log-likelihood and the Bayes regressor (which minimizes the expected error) is the posterior mean prediction

    ⟨f(x)⟩ = E_f[ f(x) Π_i p(y_i|f(x_i)) ] / E_f[ Π_i p(y_i|f(x_i)) ]    (1)

where I have introduced E_f, f = (f(x_1), ..., f(x_N), f(x)), to denote an average with respect to the model output prior.
Gaussian processes. In this case, the model output prior is by definition Gaussian,

    p(f) = exp(−½ fᵀC⁻¹f) / √(det(2πC))    (2)

where C is the covariance matrix. The covariance matrix is computed from the kernel (covariance function) C(x, x'). Below I give an explicit example corresponding to an infinite two-layer network.
Bayesian neural networks. The output of the two-layer NN is given by f(x, w, W) = (1/√K) Σ_{k=1}^{K} W_k φ(w_k · x), where an especially convenient choice of transfer function in what follows is φ(z) = ∫_{−∞}^{z} dt e^{−t²/2}/√(2π). I consider a Bayesian framework (with fixed known hyperparameters) with a weight prior that factorizes over hidden units, p(w, W) = Π_k [p(w_k)p(W_k)], and Gaussian input-to-hidden weights w_k ~ N(0, Σ).
From Bayesian NNs to GPs. The prior over outputs for the Bayesian neural network is p(f) = ∫ dw dW p(w, W) Π_i δ(f(x_i) − f(x_i, w, W)). In the infinite hidden unit limit, K → ∞, when p(W_k) has zero mean and finite, say unit, variance, it follows from the central limit theorem (CLT) that the prior distribution converges to a Gaussian process f ~ N(0, C) with kernel [1, 2]
    C(x, x') = ∫ dw p(w) φ(w · x) φ(w · x') = (2/π) arcsin( xᵀΣx' / √((1 + xᵀΣx)(1 + x'ᵀΣx')) )    (3)
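As a numerical sketch (ours), the kernel of eq. (3) can be evaluated directly; the choice Σ = (τ/d)I anticipates eq. (12):

    import numpy as np

    def infinite_net_kernel(x, xp, Sigma):
        # C(x, x') = (2/pi) arcsin( x^T S x' / sqrt((1 + x^T S x)(1 + x'^T S x')) )
        num = x @ Sigma @ xp
        den = np.sqrt((1.0 + x @ Sigma @ x) * (1.0 + xp @ Sigma @ xp))
        return (2.0 / np.pi) * np.arcsin(num / den)

    d, tau = 10, 10.0
    Sigma = (tau / d) * np.eye(d)
    x, xp = np.random.randn(d), np.random.randn(d)
    print(infinite_net_kernel(x, xp, Sigma))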
The rest of the paper deals with theoretical statistical mechanics analysis and simulations
for GPs and Bayesian NNs learning tasks defined by either a NN or a GP. For the simulations, I use analytical GP learning (scaling like O(N³)) [4] and a TAP mean field algorithm
for Bayesian NN.
2 Statistical mechanics of learning
The aim of the average case statistical mechanics analysis is to derive learning curves, i.e.
the expected generalization error as a function of the number of training examples. The
generalization error of the Bayes regressor ⟨f(x)⟩, eq. (1), is

    ε_g = ⟨⟨ (y − ⟨f(x)⟩)² ⟩⟩    (4)

where double brackets ⟨⟨...⟩⟩ = ∫ Π_i [dx_i dy_i p(x_i, y_i)] ... denote an average over both training examples and the test example (x, y). Rather than using eq. (4) directly, ε_g will, as usually done, be derived from the average of the free energy −⟨⟨ln Z⟩⟩, where the partition function is given by

    Z = E_f [ (2πσ²)^(−N/2) exp( −(1/2σ²) Σ_i (y_i − f(x_i))² ) ]    (5)
I will not give many details of the actual calculations here since it is beyond the scope of
the paper, but only outline some of the basic assumptions.
2.1 Gaussian processes
The calculation for Gaussian processes is given in another NIPS contribution [5]. The basic
assumption made is that y − f(x) becomes Gaussian with zero mean¹ under an average over the training examples, y − f(x) ≈ N(0, ⟨⟨(y − f(x))²⟩⟩). This assumption can be justified
by the CLT when f(x) is a sum of many random parts contributing on the same scale.
Corrections to the Gaussian assumption may also be calculated [5]. The free energy may
be written in terms of a set of order parameters which is found by saddlepoint integration. Assuming that the teacher is noisy, y = f*(x) + η with ⟨⟨η²⟩⟩ = σ*², the generalization error is given by the following equations, which depend upon an order parameter v:

    ε_g = [ σ*² + ⟨⟨f*²(x)⟩⟩ − ∂_v ( v² Ê_f⟨⟨f(x)f*(x)⟩⟩² ) ] / [ 1 + (v²/N) ∂_v Ê_f⟨⟨f²(x)⟩⟩ ]    (6)

    N/v = σ² + Ê_f⟨⟨f²(x)⟩⟩    (7)

where the new normalized measure Ê_f[...] ∝ E_f[ exp(−v⟨⟨f²(x)⟩⟩/2) ... ] has been introduced.
Kernels in feature space. By performing a Karhunen-Loève expansion, f(x) can be written as a linear perceptron with weights w_p in a possibly infinite feature space

    f(x) = Σ_p w_p √λ_p φ_p(x)    (8)

where the features φ_p(x) are orthonormal eigenvectors of the covariance function with eigenvalues λ_p: ∫ dx p(x) C(x', x) φ_p(x) = λ_p φ_p(x') and ∫ dx p(x) φ_p'(x) φ_p(x) = δ_pp'. The teacher f*(x) may also be expanded in terms of the features:

    f*(x) = Σ_p a_p √λ_p φ_p(x).

Using the orthonormality, the averages may be found: ⟨⟨f²(x)⟩⟩ = Σ_p λ_p w_p², ⟨⟨f(x)f*(x)⟩⟩ = Σ_p λ_p w_p a_p and ⟨⟨f*²(x)⟩⟩ = Σ_p λ_p a_p². For a Gaussian process prior,

¹Generalization to non-zero mean is straightforward.
the prior over the weights is a spherical Gaussian, w ~ N(0, I). Averaging over w, the saddlepoint equations can be written in terms of the number of examples N, the noise levels σ² and σ*², the eigenvalues of the covariance function λ_p and the teacher projections a_p:

    ε_g = ( σ*² + Σ_p λ_p a_p² / (1 + vλ_p)² ) ( 1 − N Σ_p [λ_p/(1 + vλ_p)]² ( σ² + Σ_p λ_p/(1 + vλ_p) )⁻² )⁻¹    (9)

    v = N ( σ² + Σ_p λ_p/(1 + vλ_p) )⁻¹    (10)
These eqs. are valid for a fixed teacher. However, eq. (9) may also be averaged over the distribution of teachers. In the Bayes optimal scenario, the teacher is sampled from the same prior as the student and σ² = σ*². Thus a_p ~ N(0, 1), implying ā_p² = 1, where the average over the teacher is denoted by an overline. In this case the equations reduce to the Bayes optimal result first derived by Sollich [6]: ε_g = ε_g^Bayes = N/v.
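For concreteness, eqs. (9)-(10) can be solved numerically; the sketch below (ours) iterates eq. (10) to a fixed point and then evaluates eq. (9), with the eigenvalue and teacher-projection arrays left as placeholders:

    import numpy as np

    def gp_learning_curve_point(N, lam, a2, sigma2, sigma2_star, n_iter=500):
        # lam: eigenvalues lambda_p (degeneracies expanded); a2: projections a_p^2
        v = N / sigma2                                        # initial guess
        for _ in range(n_iter):
            v = N / (sigma2 + np.sum(lam / (1.0 + v * lam)))  # eq. (10)
        num = sigma2_star + np.sum(lam * a2 / (1.0 + v * lam) ** 2)
        # denominator of eq. (9), using v**2/N = N/(sigma2 + G)**2 from eq. (10)
        den = 1.0 - (v ** 2 / N) * np.sum(lam ** 2 / (1.0 + v * lam) ** 2)
        return num / den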
Learning finite nets. Next, I consider the case where the teacher is the two-layer network
f*(x) = f(x, w, W) and the GP student uses the infinite net kernel eq. (3). The average over the teacher corresponds to an average over the weight prior and, since the teacher average gives f*(x)f*(x') = C(x, x'), I get

    ā_p² λ_p = ∫ dx dx' p(x) p(x') C(x, x') φ_p(x) φ_p(x') = λ_p    (11)
where the eigenvalue equation and the orthonormality have been used. The theory therefore
predicts that a GP student (with the infinite network kernel) will have the same learning
curve irrespectively of the number of hidden units of the NN teacher. This result is a direct
consequence of the Gaussian assumption made for the average over examples. However,
what is more surprising is that it is found to be a very good approximation in simulations
down to K = 1, i.e. a simple perceptron with a sigmoid non-linearity.
Inner product kernels. I specialize to inner product kernels C(x, x') = c(x · x'/d) and consider large input dimensionality d and input components which are iid with zero mean and unit variance. The eigenvectors are products of the input components, φ_ρ(x) = Π_{m∈ρ} x_m, and are indexed by subsets ρ of input indices, e.g. ρ = {1, 2, 42} [3]. The eigenvalues are λ_ρ = c^(|ρ|)(0)/d^|ρ| with degeneracy n_|ρ| = (d choose |ρ|) ≈ d^|ρ|/|ρ|!, where |ρ| is the cardinality (in the example above |ρ| = 3). Plugging these results into eqs. (9) and (10), it follows that to learn features that are of order s in the inputs, O(d^s) examples are needed. The same behavior has been predicted for learning in SVMs [3].
The infinite net eq. (3) reduces to an inner product covariance function for Σ = (τ/d) I (τ controls the degree of non-linearity of the rule) and large d, x · x ≈ d:

    C(x, x') = c(x · x'/d) = (2/π) arcsin( τ x · x' / (d(1 + τ)) )    (12)
Figure 1 shows learning curves for GPs for the infinite network kernel. The mismatch between theory and simulations is expected to be due to O(1/d)-corrections to the eigenvalues λ_p. The figure clearly shows that learning of the different order features takes place on different scales. The stars on the ε_g-axis show the theoretical prediction of the asymptotic error for N = O(d), O(d³), ... (the teacher is an odd function).
2.2 Bayesian neural networks
The limit of large but finite NNs allows for efficient computation since the prior over
functions can be approximated by a Gaussian. The hidden-to-output weights are for sim-
[Two panels: ε_g versus N; left panel small N = O(d) (N up to 80), right panel large N = O(d³) (N up to 2000).]
Figure 1: Learning curves for Gaussian processes with the infinite network kernel (d = 10, τ = 10 and σ² = 0.01) for two scales of training examples. The full line is the theoretical prediction for the Bayes optimal GP scenario. The two other curves (almost on top of each other, as predicted by theory) are simulations for the Bayes optimal scenario (dotted line) and for GP learning a neural network with K = 30 hidden units (dash-dotted line).
plicity set to one and we introduce the 'fields' h_k(x) = w_k · x and write the output as f(x, w) = f(h(x)) = (1/√K) Σ_k φ(h_k(x)), h(x) = (h_1(x), ..., h_K(x)). In the following, I discuss the TAP mean field algorithm used to find an approximation to the Bayes regressor and briefly the theoretical statistical mechanics analysis for the NN task.
Mean field algorithm. The derivation sketched here is a straightforward generalization of previous results for neural networks [7]. The basic cavity assumption [7, 8] is that for large d, K and for a suitable input distribution, the predictive distribution p(f(x)|D_N) is Gaussian:

    p(f(x)|D_N) ≈ N( ⟨f(x)⟩, ⟨f²(x)⟩ − ⟨f(x)⟩² ).

The predictive distribution for the fields h(x) is also assumed to be Gaussian,

    p(h(x)|D_N) ≈ N( ⟨h(x)⟩, V ),

where V = ⟨h(x)h(x)ᵀ⟩ − ⟨h(x)⟩⟨h(x)⟩ᵀ. Using these assumptions, I get an approximate Bayes regressor
    ⟨f(x)⟩ ≈ (1/√K) Σ_k φ( ⟨h_k(x)⟩ / √(1 + V_kk) )    (13)
To make predictions, we therefore need the two first moments of the weights, since ⟨h_k(x)⟩ = ⟨w_k⟩ · x and V_kl = Σ_mn x_m x_n (⟨w_mk w_nl⟩ − ⟨w_mk⟩⟨w_nl⟩). We can simplify this in the large d limit by taking the inputs to be iid with zero mean and unit variance: V_kl ≈ ⟨w_k · w_l⟩ − ⟨w_k⟩ · ⟨w_l⟩. This approximation can be avoided at a substantial computational cost [8]. Furthermore, ⟨w_k · w_l⟩ turns out to be equal to the prior covariance δ_kl τ/d [7]. The following exact relation is obtained for the mean weights

    ⟨w_k⟩ = (τ/d) Σ_i α_ki x_i,  α_ki = ∂ ln p(y_i|D_N\(x_i, y_i)) / ∂⟨h_k(x_i)⟩_\i    (14)
where
    p(y_i|D_N\(x_i, y_i)) = ∫ dh(x_i) p(y_i|h(x_i)) p(h(x_i)|D_N\(x_i, y_i)) .
[Generalization error (0.01 to 0.05) versus N/dK (2 to 10).]
Figure 2: Learning curves for Bayesian NNs and GPs. The dashed line is simulations for the TAP mean field algorithm (d = 30, K = 5, τ = 1 and σ² = 0.01) learning a corresponding NN task, i.e. an approximation to the Bayes optimal scenario. The dash-dotted line is the simulations for GPs learning the NN task. Virtually on top of that curve is the curve for the Bayes optimal GP scenario (dotted line). The full lines are the theoretical prediction. Up to N = N_c = 2.51 dK, the learning curves for Bayesian NNs and GPs coincide. At N_c, the statistical mechanics theory predicts a first order transition to a specialized solution for the NN Bayes optimal scenario (lower full line).
p(y_i|h(x_i)) is the likelihood and p(h(x_i)|D_N\(x_i, y_i)) is a predictive distribution for h(x_i) for a training set where the ith example has been left out. In accordance with the above, I assume p(h(x_i)|D_N\(x_i, y_i)) ≈ N(⟨h(x_i)⟩_\i, V). Finally, generalizing the relation found in Refs. [7, 8], I can relate the reduced mean to the full posterior mean:

    ⟨h_k(x_i)⟩_\i = ⟨h_k(x_i)⟩ − Σ_l V_kl α_li

to express everything in terms of ⟨w_k⟩ and α_ki, k = 1,...,K and i = 1,...,N.
The mean field eqs. are solved by iteration in D:k; and (Wmk) following the recipe given in
Ref. [8]. The algorithm is tested using a teacher sampled from the NN prior, i.e. the Bayes
optimal scenario. Two types of solutions are found: a linear symmetric one and a non-linear specialized one. In the symmetric solution, ⟨w_k⟩ = ⟨w_l⟩ and ⟨w_k⟩ · ⟨w_k⟩ = O(τ/dK). This means that the machine is linear (when τ ≪ K). For N = O(dK), a transition to a specialized solution occurs, where each ⟨w_k⟩, k = 1,...,K, aligns to a distinct weight vector of the teacher and ⟨w_k⟩ · ⟨w_k⟩ = O(τ/d). The Bayesian student thus learns the linear features for N = O(d). However, unlike the GP, it learns all of the remaining non-linear features for N = O(dK). The resulting empirical learning curve, averaged over 25 independent runs, is shown in figure 2. It turned out that setting ⟨h_k(x_i)⟩_\i = ⟨h_k(x_i)⟩ was a necessary heuristic in order to find the specialized solution. The transition to the specialized solution, although very abrupt for the individual run, is smeared out because it occurs at different N for each run.
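A structural sketch of one such iteration is given below (ours). The single-site gradient used for the α update is a linear-Gaussian placeholder; the actual recipe of [8] uses the cumulative-Gaussian network likelihood:

    import numpy as np

    def grad_loo_loglik(y_i, h_red, sigma2, K):
        # Placeholder: treats the output as linear in the fields,
        # y ~ N(sum_k h_k / sqrt(K), sigma2); stands in for the true model.
        return (y_i - h_red.sum() / np.sqrt(K)) / (sigma2 * np.sqrt(K)) * np.ones(K)

    def tap_mean_field(X, y, K, tau, sigma2, n_sweeps=50, eta=0.3):
        N, d = X.shape
        alpha = np.zeros((K, N))                 # alpha_{ki}
        V = tau * np.eye(K)                      # field covariance, kept fixed here
        W = 0.01 * np.random.randn(K, d)         # <w_k>
        for _ in range(n_sweeps):
            for i in range(N):
                h = W @ X[i]                     # full posterior mean fields
                h_red = h - V @ alpha[:, i]      # reduced (cavity) fields
                a_new = grad_loo_loglik(y[i], h_red, sigma2, K)
                alpha[:, i] = (1 - eta) * alpha[:, i] + eta * a_new  # damped update
            W = (tau / d) * alpha @ X            # eq. (14)
        return W, alpha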
The theoretical learning curve is also shown in figure 2. It has been derived by generalizing the results of Ref. [9] for the Gibbs algorithm to the Bayes optimal scenario. The picture that emerges is in accordance with the empirical findings. The transition to the specialized solution is predicted to be first order, i.e. with a discontinuous jump in the relevant order parameters at the number of examples N_c(σ², τ), where the specialized solution
becomes the physical solution (i.e. the lowest free energy solution).
The mean field algorithm cannot completely reproduce the theoretical predictions because
the solution gets trapped in the meta-stable symmetric solution. This is often observed
for first order transitions and should also be observable in the Monte Carlo approach to
Bayesian NNs [1].
3 Discussion
Learning a finite two-layer regression NN using (1) the Bayes optimal algorithm and (2)
the Bayes optimal algorithm for an infinite network (implemented by a GP) is compared.
It is found that the Bayes optimal algorithm can have a very superior performance.
This can be explained as an entropic effect: the infinite network will, although the correct finite network solution is included a priori, have a vanishing probability of finding this solution. The finite network, on the other hand, is much more constrained with respect to the functions it implements. It can thus, even in the Bayesian setting, give a great payoff to limit
complexity.
For d-dimensional inner product kernel with iid input distribution, it is found that it in
general requires O(d^s) training examples to learn features of O(s). Unpublished results
and [3] show that these conclusions remain true also for SVM and GP classification.
For SVM hand-written digit recognition, fourth order kernels give good results in practice. Since N = O(10⁴)-O(10⁵), it can be concluded that the 'effective' dimension d_effective = O(10), against typically d = 400, i.e. some inputs must be very correlated
and/or carry very little information. It could therefore be interesting to develop methods
to measure the effective dimension and to extract the important lower dimensional features
rather than performing the classification directly from the images.
Acknowledgments
I am thankful to Manfred Opper for valuable discussions and for sharing his results with
me and to Klaus-Robert Muller for discussions at NIPS. This research is supported by the
Swedish Foundation for Strategic Research.
References
[1] R. Neal, Bayesian Learning for Neural Networks, Lecture Notes in Statistics, Springer (1996).
[2] C. K. I. Williams, Computing with Infinite Networks, in Neural Information Processing Systems
9, Eds. M. C. Mozer, M. I. Jordan and T. Petsche, 295-301, MIT Press (1997).
[3] R. Dietrich, M. Opper and H. Sompolinsky, Statistical Mechanics of Support Vector Machines,
Phys. Rev. Lett. 82, 2975-2978 (1999).
[4] C. K. I. Williams and C. E. Rasmussen, Gaussian Processes for Regression, In Advances in
Neural Information Processing Systems 8 (NIPS'95). Eds. D. S. Touretzky, M. C. Mozer and
M. E. Hasselmo, 514-520, MIT Press (1996).
[5] D. Malzahn and M. Opper, In this volume.
[6] P. Sollich, Learning Curves for Gaussian Processes, In Advances in Neural Information Processing Systems 11 (NIPS'98), Eds. M. S. Kearns, S. A. Solla, and D. A. Cohn, 344-350, MIT
Press (1999).
[7] M. Opper and O. Winther, Mean Field Approach to Bayes Learning in Feed-Forward Neural
Networks, Phys. Rev. Lett. 76,1964-1967 (1996).
[8] M. Opper and O. Winther, Gaussian Processes for Classification: Mean Field Algorithms, Neural Computation 12,2655-2684 (2000).
[9] M. Ahr, M. Biehl and R. Urbanczik, Statistical physics and practical training of soft-committee
machines Eur. Phys. J. B 10,583 (1999).
1,023 | 1,935 | Constrained Independent Component Analysis
Wei Lu and Jagath C. Rajapakse
School of Computer Engineering
Nanyang Technological University, Singapore 639798
email: asjagath@ntu.edu.sg
Abstract
The paper presents a novel technique of constrained independent
component analysis (CICA) to introduce constraints into the classical ICA and solve the constrained optimization problem by using
Lagrange multiplier methods. This paper shows that CICA can
be used to order the resulting independent components in a specific
manner and normalize the demixing matrix in the signal separation
procedure. It can systematically eliminate the ICA's indeterminacy
on permutation and dilation. The experiments demonstrate the use
of CICA in ordering of independent components while providing
normalized demixing processes.
Keywords: Independent component analysis, constrained independent component analysis, constrained optimization, Lagrange multiplier methods
1 Introduction
Independent component analysis (ICA) is a technique to transform a multivariate random signal into a signal with components that are mutually independent
in complete statistical sense [1]. There has been a growing interest in research for
efficient realization of ICA neural networks (ICNNs). These neural algorithms provide adaptive solutions to satisfy independent conditions after the convergence of
learning [2, 3, 4].
However, ICA only defines the directions of independent components. The magnitudes of independent components and the norms of demixing matrix may still be
varied. Also, the order of the resulting components is arbitrary. In general, ICA has
such an inherent indeterminacy on dilation and permutation. Such indeterminacy cannot be reduced further without additional assumptions and constraints [5].
Therefore, constrained independent component analysis (CICA) is proposed as a
way to provide a unique ICA solution with certain characteristics on the output by
introducing constraints:
- To avoid the arbitrary ordering on output components: statistical measures
give indices to sort them in order, and evenly highlight the salient signals.
- To produce unity transform operators: normalization of the demixing channels reduces the dilation effect on the resulting components. It may recover the exact
original sources.
With such conditions applied, the ICA problem becomes a constrained optimization
problem. In the present paper, Lagrange multiplier methods are adopted to provide
an adaptive solution to this problem. It can be well implemented as an iterative
updating system of neural networks, referred to as ICNNs. The next section briefly gives an
introduction to the problem, analysis and solution of Lagrange multiplier methods.
Then the basic concept of ICA will be stated. And Lagrange multiplier methods are
utilized to develop a systematic approach to CICA. Simulations are performed to
demonstrate the usefulness of the analytical results and indicate the improvements
due to the constraints.
2 Lagrange Multiplier Methods
Lagrange multiplier methods introduce Lagrange multipliers to resolve a constrained
optimization iteratively. A penalty parameter is also introduced to fit the condition
so that the local convexity assumption holds at the solution. Lagrange multiplier
methods can handle problems with both equality and inequality constraints.
The constrained nonlinear optimization problems that Lagrange multiplier methods deal with take the following general form:

    minimize f(X), subject to g(X) ≤ 0, h(X) = 0    (1)

where X is a matrix or a vector of the problem arguments, f(X) is an objective function, g(X) = [g₁(X) ... g_m(X)]ᵀ defines a set of m inequality constraints and h(X) = [h₁(X) ... h_n(X)]ᵀ defines a set of n equality constraints. Because Lagrangian methods cannot directly deal with inequality constraints g_i(X) ≤ 0, it is possible to transform inequality constraints into equality constraints by introducing a vector of slack variables z = [z₁ ... z_m]ᵀ, resulting in equality constraints p_i(X) = g_i(X) + z_i² = 0, i = 1,...,m.
Based on the transformation, the corresponding simplified augmented Lagrangian
function for problem (1) is defined as:
    L(X, μ, λ) = f(X) + (1/2γ) Σ_i ( [max{0, ĝ_i(X)}]² − μ_i² ) + λᵀh(X) + (γ/2)‖h(X)‖²    (2)

where μ = [μ₁ ... μ_m]ᵀ and λ = [λ₁ ... λ_n]ᵀ are two sets of Lagrange multipliers, γ is the scalar penalty parameter, ĝ_i(X) equals μ_i + γg_i(X), ‖·‖ denotes the Euclidean norm, and the (γ/2)‖·‖² term is the penalty term that ensures the optimization problem satisfies the local convexity assumption ∇²_X L > 0. We use the
For discrete problems, the changes in the augmented Lagrangian function can be
defined as ~x?(X, f-L, A) to achieve the saddle point in the discrete variable space.
The iterative equations to solve the problem in eq.(2) are given as follows:
X(k + 1) = X(k) - ~x?(X(k),f-L(k),A(k))
f-L(k + 1) = f-L(k) + "Ip(X(k)) = max{O,g(X(k))}
A(k + 1) = A(k) + "Ih(X(k))
where k denotes the iterative index and g(X(k))
= f-L(k) + "I g(X(k)).
(3)
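A generic numerical sketch of this scheme (ours; the toy problem is for illustration only):

    import numpy as np

    def augmented_lagrangian(f_grad, g, g_jac, x0, gamma=10.0, lr=0.01, n_iter=2000):
        # Minimize f subject to g(x) <= 0 via eqs. (2)-(3); equality constraints
        # would add the update lambda <- lambda + gamma * h(x).
        x, mu = x0.copy(), np.zeros(len(g(x0)))
        for _ in range(n_iter):
            g_hat = np.maximum(0.0, mu + gamma * g(x))      # g_hat = max{0, mu + gamma g}
            x = x - lr * (f_grad(x) + g_jac(x).T @ g_hat)   # descent on eq. (2)
            mu = np.maximum(0.0, mu + gamma * g(x))         # multiplier update, eq. (3)
        return x, mu

    # Toy problem: minimize (x - 2)^2 subject to x <= 1; converges to x = 1, mu = 2
    x, mu = augmented_lagrangian(
        f_grad=lambda x: 2 * (x - 2),
        g=lambda x: x - 1.0,
        g_jac=lambda x: np.eye(1),
        x0=np.array([0.0]),
    )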
3 Unconstrained ICA
Let the time varying input signal be x = (x₁, x₂, ..., x_N)ᵀ and the signal of interest consisting of independent components (ICs) be c = (c₁, c₂, ..., c_M)ᵀ, where generally M ≤ N. The signal x is considered to be a linear mixture of the independent components c: x = Ac, where A is an N × M mixing matrix with full column rank. The goal of general ICA is to obtain a linear M × N demixing matrix W to recover the independent components c with minimal knowledge of A and c; normally M = N. Then, the recovered components u are given by u = Wx.
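A two-source illustration of the model (ours):

    import numpy as np

    rng = np.random.default_rng(0)
    c = rng.laplace(size=(2, 1000))   # two independent (super-Gaussian) sources
    A = rng.normal(size=(2, 2))       # unknown full-rank mixing matrix
    x = A @ c                         # observed mixtures
    W = np.linalg.inv(A)              # ideal demixing matrix; ICA must find W from x alone
    u = W @ x                         # recovers c exactly (up to order and sign in general)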
In the present paper, the contrast function used is the mutual information M of the output signal, which is defined in terms of entropies to measure independence:

    M(u) = Σ_i H(u_i) − H(u)    (4)

where H(u_i) is the marginal entropy of component u_i and H(u) is the joint entropy of the output. M is non-negative and equals zero when the components are completely independent.
While minimizing M, the learning equation for the demixing matrix W to perform ICA is given by [1]:

    ΔW ∝ W⁻ᵀ + φ(u)xᵀ    (5)

where W⁻ᵀ is the transpose of the inverse matrix W⁻¹ and φ(u) is a nonlinear function depending on the activation functions of the neurons or the p.d.f. of the sources [1].
With the above assumptions, the exact components c are indeterminate because of
possible dilation and permutation. The independent components and the columns
of A and the rows of W can only be estimated up to a multiplicative constant. The
definitions of normal ICA imply no ordering of independent components [5].
4 Constrained ICA
In practice, the ordering of independent components is quite important to separate
non-stationary signals or interested signals with significant statistical characters.
Eliminating indeterminacy in the permutation and dilation is useful to produce a
unique ICA solution with systematically ordered signals and normalized demixing
matrix. This section presents an approach to CICA by enhancing classical ICA
procedure using Lagrange multiplier methods to obtain unique ICs.
4.1 Ordering of Independent Components
The independent components are ordered in a descending manner according to a certain statistical measure defined as index I(u). The constrained optimization problem for CICA is then defined as follows:

    minimize    mutual information M(W)
    subject to  g(W) ≤ 0,  g(W) = [g₁(W) ... g_{M−1}(W)]ᵀ    (6)

where g(W) is a set of (M−1) inequality constraints, g_i(W) = I(u_{i+1}) − I(u_i) defines the descending order and I(u_i) is the index of some statistical measure of output component u_i, e.g. variance or normalized kurtosis.
Using Lagrange multiplier methods, the augmented Lagrangian function is defined based on eq. (2) as:

    L(W, μ) = M(W) + (1/2γ) Σ_{i=1}^{M−1} ( [max{0, ĝ_i(W)}]² − μ_i² )    (7)

With discrete solutions applied, the change of an individual element w_ij can be formulated by minimizing eq. (7):

    Δw_ij ∝ Δ_{w_ij} L(W(k), μ(k))
          = Δ_{w_ij} M(W(k)) + [ max{0, ĝ_{i−1}(W(k))} − max{0, ĝ_i(W(k))} ] I'(u_i(k)) x_j    (8)

where I'(·) is the first derivative of the index measure.
The iterative equation for finding the individual multipliers μ_i is

    μ_i(k+1) = max{ 0, μ_i(k) + γ [ I(u_{i+1}(k)) − I(u_i(k)) ] }    (9)
With the learning equation of the normal ICNN given in (5) and the multipliers' iterative equation (9), the iterative procedure to determine the demixing matrix W is given as follows:

    ΔW ∝ Δ_W L(W, μ) = W⁻ᵀ + ν(u)xᵀ    (10)

where

    ν(u) = [ φ₁(u₁) − μ₁ I'(u₁),
             φ₂(u₂) + (μ₁ − μ₂) I'(u₂),
             ...,
             φ_{M−1}(u_{M−1}) + (μ_{M−2} − μ_{M−1}) I'(u_{M−1}),
             φ_M(u_M) + μ_{M−1} I'(u_M) ]ᵀ.
We apply measures of variance and kurtosis as examples to impose the ordering among the signals. The index functions I and their corresponding first derivatives I' are then as below.
    variance:  I_var(u_i) = E{u_i²},   I'_var(u_i) = 2u_i    (11)

    kurtosis:  I_kur(u_i) = E{u_i⁴} / E²{u_i²},   I'_kur(u_i) = 4u_i³/E²{u_i²} − 4E{u_i⁴}u_i/E³{u_i²}    (12)
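One learning step of this ordered rule might look as follows (our sketch; the batch moment estimates and the choice φ(u) = −tanh(u) are implementation assumptions, and M = N is assumed so that W is square):

    import numpy as np

    def cica_order_step(W, x, mu, gamma=0.1, lr=0.001):
        # x: (N, T) whitened batch; mu: (M-1,) multipliers; returns updated W, mu
        u = W @ x
        m2 = np.mean(u ** 2, axis=1, keepdims=True)
        m4 = np.mean(u ** 4, axis=1, keepdims=True)
        I = (m4 / m2 ** 2).ravel()                              # kurtosis index, eq. (12)
        Iprime = 4 * u ** 3 / m2 ** 2 - 4 * m4 * u / m2 ** 3    # its derivative
        mu = np.maximum(0.0, mu + gamma * (I[1:] - I[:-1]))     # eq. (9)
        # multiplier differences: [-mu_1, mu_1 - mu_2, ..., mu_{M-1}]
        dmu = np.concatenate(([-mu[0]], mu[:-1] - mu[1:], [mu[-1]]))
        nu = -np.tanh(u) + dmu[:, None] * Iprime                # nu(u) of eq. (10)
        T = x.shape[1]
        W = W + lr * (np.linalg.inv(W).T + nu @ x.T / T)        # eq. (10)
        return W, mu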
The signal with the largest variance carries the majority of the information that the input signals consist of. The ordering based on variance sorts the components by the information magnitude needed to reconstruct the original signals. However, it should be used together with other preprocessing or constraints, such as PCA or normalization, because normal ICA's indeterminacy on the dilation of the demixing matrix may cause the variance of the output components to be amplified or reduced.
Normalized kurtosis is a kind of 4th-order statistical measure. The kurtosis of a
stationary signal to be extracted is constant under the situation of indeterminacy on
signals' amplitudes. Kurtosis shows the high order statistical character. Any signal
can be categorized into super-Gaussian, Gaussian and sub-Gaussianly distributed
ones by using kurtosis. The components are ordered from sparse (i.e. super-Gaussian) to dense (i.e. sub-Gaussian) distributions. Kurtosis has been widely
used to produce one-unit ICA [7]. In contrast to their sequential extraction, our
approach can extract and order the components in parallel.
4.2 Normalization of Demixing Matrix
The definition of ICA implies an indeterminacy in the norm of the mixing and
demixing matrix, which is in contrast to, e.g., PCA. Rather than estimating the unknown mixing matrix A, the rows of the demixing matrix W can be
normalized by applying a constraint term in the ICA energy function to establish
a normalized demixing channel. The constrained ICA problem is then defined as
follows:
    minimize    mutual information M(W)
    subject to  h(W) = [h₁(W) ... h_M(W)]ᵀ = 0    (13)

where h(W) defines a set of M equality constraints, h_i(w_i) = w_iᵀw_i − 1 (i = 1,...,M), which set the row norms of the demixing matrix W equal to 1.
Using Lagrange multiplier methods, the augmented Lagrangian function is defined
based on eq.(2) as:
    L(W, λ) = M(W) + λᵀ diag[WWᵀ − I] + (γ/2) ‖diag[WWᵀ − I]‖²    (14)

where diag[·] denotes the operation that selects the diagonal elements of a square matrix as a vector.
By applying the discrete Lagrange multiplier method, the iterative equation minimizing the augmented function for the individual multiplier λ_i is

    λ_i(k+1) = λ_i(k) + γ ( w_iᵀ w_i − 1 )    (15)
and the iterative equation of the demixing matrix W is given as follows:

    ΔW ∝ Δ_W L(W, λ) = W⁻ᵀ + φ(u)xᵀ + Θ(W)    (16)

where the ith row of Θ(W) is θ_i(w_i) = 2λ_i w_iᵀ.
Assume c is the normalized source with unit variance such that E{ccᵀ} = I, and that the input signal x is processed by a prewhitening matrix P such that p = Px obeys E{ppᵀ} = I. Then, with the normalized demixing matrix W, the network output u contains the exact independent components with unit magnitude, i.e. u_i contains one ±c_j for some non-duplicative assignment j → i.
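A corresponding sketch of the updates (15)-(16) (ours; the constraint term is applied with a minus sign so that a positive λ_i shrinks an over-long row, a sign convention we assume is absorbed in Θ(W), and φ(u) = −tanh(u) is again an implementation assumption):

    import numpy as np

    def cica_norm_step(W, x, lam, gamma=0.1, lr=0.001):
        # x: (N, T) whitened batch; lam: (M,) multipliers; W assumed square
        u = W @ x
        T = x.shape[1]
        lam = lam + gamma * (np.sum(W ** 2, axis=1) - 1.0)   # eq. (15)
        theta = -2.0 * lam[:, None] * W                      # norm-penalty term of eq. (16)
        W = W + lr * (np.linalg.inv(W).T - np.tanh(u) @ x.T / T + theta)
        return W, lam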
5 Experiments and Results
The CICA algorithms were simulated in MATLAB version 5. The learning procedure ran 500 iterations with a certain learning rate. All signals were preprocessed by a whitening process to have zero mean and unit variance. The accuracy of the
recovered components compared to the source components was measured by the
signal to noise ratio (SNR) in dB, where signal power was measured by the variance
of the source component, and noise was the mean square error between the sources
and recovered ones. The performance of the network separating the signals into ICs
was measured by an individual performance index (IPI) of the permutation error Ei
for the ith output:

    E_i = Σ_{j=1}^{N} |p_ij| / max_k |p_ik| − 1    (17)
where p_ij are the elements of the permutation matrix P = WA. The IPI is close to zero when the corresponding output is closely independent of the other components.
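For concreteness, both reported metrics can be computed as below (our sketch; E_i follows the row-wise reading of eq. (17)):

    import numpy as np

    def snr_db(source, recovered):
        # Signal power = variance of the source; noise = MSE between the two
        return 10 * np.log10(np.var(source) / np.mean((source - recovered) ** 2))

    def ipi(P):
        # Individual performance index per output row, P = W A
        absP = np.abs(P)
        return absP.sum(axis=1) / absP.max(axis=1) - 1.0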
5.1 Ordering ICs in Signal Separation
Three independent random signals distributed in Gaussian, sub- and super-Gaussian
manner were simulated. Their statistical configurations were similar to those used
in [1]. These source signals c were mixed with a random matrix to derive the inputs to the network. The networks were trained to obtain the 3 × 3 demixing matrix using the kurtosis-constrained CICA algorithm of eqs. (10) and (12) to separate the three independent components in a complete ICA manner.
The source components, mixed input signals and the resulting output waveforms
are shown in figure 1 (a), (b) and (c), respectively. The network separated and
[Waveform plots; x-axis: samples in time series, 0 to 500, for each panel.]
Figure 1: Result of extraction of one super-Gaussian, one Gaussian and one sub-Gaussian signal in kurtosis descent order. Normalized kurtosis measurements are κ4(y1) = 32.82, κ4(y2) = -0.02 and κ4(y3) = -1.27. (a) Source components, (b) input mixtures and (c) resulting components.
sorted the output components in decreasing order of kurtosis, where component y1 had kurtosis 32.82 (> 0, super-Gaussian), y2 had -0.02 (≈ 0, Gaussian) and y3 had -1.27 (< 0, sub-Gaussian). The final performance index value of 0.28 and the output components' average SNR of 15 dB show that all three independent components were well separated too.
5.2 Demixing Matrix Normalization
Three deterministic signals and one Gaussian noise signal were simulated in this experiment. All signals were independently generated with unit variance and mixed with a random mixing matrix. All input mixtures were preprocessed by a whitening process to have zero mean and unit variance. The signals were separated using both unconstrained ICA and constrained ICA as given by eqs. (5) and (16), respectively. Table 1 compares the resulting demixing matrices, row norms, variances of the separated components and SNR values. The dilation effect can be seen from the difference
                       Demixing Matrix W              Norms   Variance   SNR
    uncons.   y1    0.90   0.08  -0.12  -0.82         1.23    1.50       4.55
    ICA       y2   -0.06   1.11  -0.07   0.07         1.11    1.24      10.88
              y3    0.07   0.07   1.47  -0.09         1.47    2.17      21.58
              y4    1.04   0.08   0.04   1.16         1.56    2.43      16.60
    cons.     y1    0.65   0.43  -0.02  -0.61         0.99    0.98       4.95
    ICA       y2   -0.37   0.91   0.05   0.20         1.01    1.02      13.94
              y3    0.01  -0.04   1.00  -0.04         1.00    1.00      25.04
              y4    0.65   0.07   0.02   0.76         1.00    1.00      22.56
Table 1: Comparison of the demixing matrix elements, row norms, output variances and resulting components' SNR values for ICA, and for CICA with normalization.
among the components' variances caused by the non-normalized demixing matrix in unconstrained ICA. The CICA algorithm with the normalization constraint normalized the rows of the demixing matrix and separated the components with variances remaining at unity. Therefore, the source signals are exactly recovered without any dilation. The increase of the separated components' SNR values using CICA can also be seen in the table. The source components, input mixtures and components separated using normalization are given in figure 2. It shows that the resulting signals from CICA exactly match the source signals in the sense of waveforms and amplitudes.
Figure 2: (a) Four deterministic source components with unit variances, (b) mixture inputs and (c) components recovered through the normalized demixing channel W.
6 Conclusion
We present an approach of constrained ICA using Lagrange multiplier methods to
eliminate the indeterminacy of permutation and dilation which are present in classical ICA. Our results provide a technique for systematically enhancing the ICA's
usability and performance using constraints that are not restricted to the conditions
treated in this paper. More useful constraints can be considered in similar manners
to further improve the outputs of ICA in other practical applications. Simulation
results demonstrate the accuracy and the usefulness of the proposed algorithms.
References
[1] Jagath C. Rajapakse and Wei Lu. Unified approach to independent component
networks. In Second International ICSC Symposium on NEURAL COMPUTATION (NC'2000), 2000.
[2] A. Bell and T. Sejnowski. An information-maximization approach to blind separation and blind deconvolution. Neural Computation, 7(6):1129-1159, 1995.
[3] S. Amari, A. Cichocki, and H. Yang. A new learning algorithm for blind signal
separation. In Advances in Neural Information Processing Systems 8, 1996.
[4] T-W. Lee, M. Girolami, and T. Sejnowski. Independent component analysis using an extended infomax algorithm for mixed sub-Gaussian and super-Gaussian
sources. Neural Computation, 11(2):409-433, 1999.
[5] P. Comon. Independent component analysis: A new concept? Signal Processing,
36:287-314, 1994.
[6] Dimitri P. Bertsekas. Constrained optimization and Lagrange multiplier methods. New York: Academic Press, 1982.
[7] A. Hyvärinen and Erkki Oja. Simple neuron models for independent component
analysis. Neural Systems, 7(6):671-687, December 1996.
1,024 | 1,936 | Programmable Reinforcement Learning Agents
David Andre and Stuart J. Russell
Computer Science Division, UC Berkeley, CA 94702
{dandre,russell}@cs.berkeley.edu
Abstract
We present an expressive agent design language for reinforcement learning that allows the user to constrain the policies considered by the learning process. The language includes standard features such as parameterized subroutines, temporary interrupts, aborts, and memory variables, but
also allows for unspecified choices in the agent program. For learning
that which isn't specified, we present provably convergent learning algorithms. We demonstrate by example that agent programs written in the
language are concise as well as modular. This facilitates state abstraction
and the transferability of learned skills.
1 Introduction
The field of reinforcement learning has recently adopted the idea that the application of
prior knowledge may allow much faster learning and may indeed be essential if realworld environments are to be addressed. For learning behaviors, the most obvious form
of prior knowledge provides a partial description of desired behaviors. Several languages
for partial descriptions have been proposed, including Hierarchical Abstract Machines
(HAMs) [8], semi-Markov options [12], and the MAXQ framework [4].
This paper describes extensions to the HAM language that substantially increase its expressive power, using constructs borrowed from programming languages. Obviously, increasing expressiveness makes it easier for the user to supply whatever prior knowledge
is available, and to do so more concisely. (Consider, for example, the difference between
wiring up Boolean circuits and writing Java programs.) More importantly, the availability
of an expressive language allows the agent to learn and generalize behavioral abstractions
that would be far more difficult to learn in a less expressive language. For example, the
ability to specify parameterized behaviors allows multiple behaviors such as WalkEast,
WalkNorth, WalkWest, WalkSouth to be combined into a single behavior Walk(d),
where d is a direction parameter. Furthermore, if a behavior is appropriately parameterized, decisions within the behavior can be made independently of the "calling context" (the
hierarchy of tasks within which the behavior is being executed). This is crucial in allowing
behaviors to be learned and reused as general skills.
Our extended language includes parameters, interrupts, aborts (i.e., interrupts without resumption), and local state variables. Interrupts and aborts in particular are very important
in physical behaviors-more so than in computation-and are crucial in allowing for modularity in behavioral descriptions. These features are all common in robot programming
languages [2, 3, 5]; the key element of our approach is that behaviors need only be partially described; reinforcement learning does the rest.
To tie our extended language to existing reinforcement learning algorithms, we utilize Parr
and Russell's [8] notion of the joint semi-Markov decision process (SMDP) created when
a HAM is composed with an environment (modeled as an MDP). The joint SMDP state
space consists of the cross-product of the machine states in the HAM and the states in the
original MDP; the dynamics are created by the application of the HAM in the MDP. Parr
and Russell showed that an optimal solution to the joint SMDP is both learnable and yields
an optimal solution to the original MDP in the class of policies expressed by the HAM (so-called hierarchical optimality). Furthermore, Parr and Russell show that the joint SMDP
can be reduced to an equivalent SMDP with a state space consisting only of the states
where the HAM does not specify an action, which reduces the complexity of the SMDP
problem that must be solved. We show that these results hold for our extended language of
Programmable HAMs (PHAMs).
To demonstrate the usefulness of the new language, we show a small, complete program for
a complex environment that would require a much larger program in previous formalisms.
We also show experimental results verifying the convergence of the learning process for
our language.
2 Background
An MDP is a 4-tuple, (S, A, T, R), where S is a set of states, A is a set of actions, T is a
probabilistic transition function mapping S × A × S → [0,1], and R is a reward function
mapping S × A × S to the reals. In this paper, we focus on infinite-horizon MDPs with a
discount factor β. A solution to an MDP is an optimal policy π* that maps from S → A and
achieves maximum expected discounted reward for the agent. An SMDP (semi-Markov
decision process) allows for actions that take more than one time step. T is modified to
be a mapping from S × A × S × N → [0, 1], where N is the natural numbers; i.e., it specifies
a distribution over both output states and action durations. R is then a mapping from
S × A × S × N to the reals. The discount factor, β, is generalized to be a function, β(s, a), that
represents the expected discount factor when action a is taken in state s. Our definitions
follow those common in the literature [9, 6, 4].
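The generalized discount makes the SMDP Bellman backup look almost like the MDP one. As a minimal sketch in Python, with an assumed container layout (the transition table and its field order are our own, not from the paper):

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

@dataclass
class SMDP:
    # transitions[s][a] -> list of (prob, next_state, duration, reward);
    # beta(s, a) is the expected discount factor for taking a in s.
    states: List[int]
    actions: List[int]
    transitions: Dict[int, Dict[int, List[Tuple[float, int, int, float]]]]
    beta: Callable[[int, int], float]

def q_backup(m: SMDP, Q: Dict[Tuple[int, int], float], s: int, a: int) -> float:
    """One Bellman backup: Q(s,a) = E[R + beta(s,a) max_a' Q(s',a')]."""
    total = 0.0
    for prob, s_next, _duration, reward in m.transitions[s][a]:
        best_next = max(Q.get((s_next, a2), 0.0) for a2 in m.actions)
        total += prob * (reward + m.beta(s, a) * best_next)
    return total
```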
The HAM language [8] provides for partial specification of agent programs. A HAM program consists of a set of partially specified Moore machines. Transitions in each machine
may depend stochastically on (features of) the environment state, and the outputs of each
machine are primitive actions or nonrecursive invocations of other machines. The states
in each machine can be of four types: {start, stop, action, choice}. Each machine has a
single distinguished start state and may have one or more distinguished stop states. When
a machine is invoked, control starts at the start state; stop states return control back to the
calling machine. An action state executes an action. A call state invokes another machine
as a subroutine. A choice state may have several possible next states; after learning, the
choice is reduced to a single next state.
3 Programmable HAMs
Consider the problem of creating a HAM program for the Deliver-Patrol domain presented
in Figure 1, which has 38,400 states. In addition to delivering mail and picking up occasional additional rewards while patrolling (both of which require efficient navigation and
safe maneuvering), the robot must keep its battery charged (lest it be stranded) and its
camera lens clean (lest it crash). It must also decide whether to move quickly (incurring
collision risk) or slowly (delaying reward), depending on circumstances.
Because all the 5 x 5 "rooms" are similar, one can write a "traverse the room" HAM routine
that works in all rooms, but a different routine is needed for each direction (north-south,
south-north, east-west, etc.). Such redundancy suggests the need for a "traverse the room"
routine that is parameterized by the desired direction.
Consider also the fact that the robot should clean its camera lens whenever it gets dirty.
[Figure 1 graphics: the Deliver-Patrol grid world (a) and the Root(), Work(), and DoDelivery PHAM diagrams (b)]
Figure 1: (a) The Deliver-Patrol world. Mail appears at M and must be delivered to the appropriate location. Additional rewards appear sporadically at A, B, C, and D. The robot's battery
may be recharged at R. The robot is penalized for colliding with walls and "furniture" (small circles). (b) Three of the PHAMs in the partial specification for the Deliver-Patrol world. Right-facing
half-circles are start states, left-facing half-circles are stop states, hexagons are call states, ovals are
primitive actions, and squares are choice points. z1 and z2 are memory variables. When arguments to
call states are in braces, then the choice is over the arguments to pass to the subroutine. The Root()
PHAM specifies an interrupt to clean the camera lens whenever it gets dirty; the Work() PHAM
interrupts its patrolling whenever there is mail to be delivered.
[Figure 2 graphics: a room with the ToDoor(dest,sp) policy arrows (a) and the ToDoor(dest,sp) and Move(dir) PHAM diagrams (b)]
Figure 2: (a) A room in the Deliver-Patrol domain. The arrows in the drawing of the room indicate the behavior specified by the p() transition function in ToDoor(dest,sp). Two arrows indicate
a "fast" move (fN, fS, fE, fW), whereas a single arrow indicates a slow move (N, S, E, W). (b) The
ToDoor(dest,sp) and Move(dir) PHAMs.
[Figure 3 graphics: the Nav(dest,sp), Patrol(), and DoAll() PHAM diagrams]
Figure 3: The remainder of the PHAMs for the Deliver-Patrol domain. Nav(dest,sp) leaves route
choices to be learned through experience. Similarly, Patrol() does not specify the sequence of locations to check.
In the HAM language, this conditional action must be inserted after every state in every
HAM. An interrupt mechanism with appropriate scoping would obviate the need for such
widespread mutilation.
The PHAM language has these additional characteristics. We provide here an informal
summary of the language features that enable concise agent programs to be written. The
9 PHAMs for the Deliver-Patrol domain are presented in Figure l(b), Figure 2(b), and
Figure 3. The corresponding HAM program requires 63 machines, many of which have
significantly more states than their PHAM counterparts.
The PHAM language adds several structured programming constructs to the HAM language. To enable this, we introduce two additional types of states in the PHAM: internal
states, which execute an internal computational action (such as setting memory variables
to a function of the current state), and null states, which have no direct effect and are used
for computational convenience.
Parameterization is key for expressing concise agent specifications, as can be seen in
the Deliver-Patrol task. Subroutines take a number of parameters, θ1, θ2, ..., θk, the values of which must be filled in by the calling subroutine (and can depend on any function
of the machine, parameter, memory, and environment state). In Figure 2(b), the subroutine Move(dir) is shown. The dir parameter is supplied by the NavRoom subroutine. The
ToDoor( dest,speed) subroutine is for navigating a single room of the agent's building. The
p() is a transition function that stores a parameterized policy for getting to each door. The
policy for (N, f) (representing the North door, going fast) is shown in Figure 2(a). Note
that by using parameters, the control for navigating a room is quite modular, and is written
once, instead of once for each direction and speed.
Aborts and interrupts allow for modular agent specification. As well as the camera-lens
interrupt described earlier, the robot needs to abort its current activity if the battery is low
and should interrupt its patrolling activity if mail arrives for delivery. The PHAM language
allows abort conditions to be specified at the point where a subroutine is invoked within
a calling routine; those conditions are in force until the subroutine exits. For each abort
condition, an "abort handler" state is specified within the calling routine, to which control
returns if the condition becomes true. (For interrupts, normal execution is resumed once
the handler completes.) Graphically, aborts are depicted as labelled dotted lines (e.g., in the
DoAll() PHAM in Figure 3), and interrupts are shown as labelled dashed lines with arrows
on both ends (e.g., in the Work() PHAM in Figure 1(b)).
Memory variables are a feature of nearly every programming language. Some previous
research has been done on using memory variables in reinforcement learning in partially
observable domains [10]. For an example of memory use in our language, examine the
DoDelivery subroutine in Figure 1(b), where z2 is set to another memory value (set in
Nav(dest,sp)). z2 is then passed as a variable to the Nav subroutine. Computational functions such as dest in the Nav(dest,sp) subroutine are restricted to be recursive functions
taking effectively zero time. A PHAM is assumed to have a finite number of memory variables, Z1, ..., Zn, which can be combined to yield the memory state, Z. Each memory
variable has finite domain D(Zi). The agent can set memory variables by using internal
states, which are computational action states with actions in the following format: (set
Zi ψ(m, θ, x, Z)), where ψ(m, θ, x, Z) is a function taking the machine, parameter, environment, and memory state as parameters. The transition function, parameter-setting
functions, and choice functions take the memory state into account as well.
4 Theoretical Results
Our results mirror those obtained in [9]. In summary (see also Figure 4): The composition
H ∘ M of a PHAM H with the underlying MDP M is defined using the cross product of
states in H and M. This composition is in fact an SMDP. Furthermore, solutions to H ∘ M
yield optimal policies for the original MDP, among those policies expressed by the PHAM.
Finally, H ∘ M may be reduced to an equivalent SMDP whose states are just the choice
points, i.e., the joint states where the machine state is a choice state. See [1] for the proofs.
Definition 1 (Programmable Hierarchical Abstract Machines: PHAMs) A PHAM is a
tuple H = (μ, Θ, δ, ρ, Σ, I, μ_I, A, μ_A, Z, Ψ), where μ is the set of machine states in H, Θ
is the space of possible parameter settings, δ is the transition function, mapping μ × Θ ×
Z × X × μ to [0,1], ρ is a mapping from μ × Θ × Z × X × Θ to [0,1] and expresses the
parameter choice function, Σ maps from μ × Θ × Z × X to subsets of μ and expresses the
allowed choices at choice states, I(m) returns the interrupt condition at a call state, μ_I(m)
specifies the handler of an interrupt, A(m) returns the abort condition at a call state,
μ_A(m) specifies the handler of an abort, Z is the set of possible memory configurations,
and Ψ(m) is a complex function expressing which computational internal function is used
at internal states, and to which memory variable the result is assigned.
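As a reading aid only, the tuple can be transliterated into a container type; the Python names below are ours, and the callable signatures merely mirror the domains listed in Definition 1:

```python
from dataclasses import dataclass
from typing import Callable, Dict, FrozenSet

@dataclass
class PHAMSpec:
    machine_states: FrozenSet[str]                            # mu
    parameter_space: FrozenSet[tuple]                         # Theta
    delta: Callable[[str, tuple, tuple, object, str], float]  # transition fn
    rho: Callable[[str, tuple, tuple, object, tuple], float]  # parameter choice
    sigma: Callable[[str, tuple, tuple, object], FrozenSet[str]]  # choice sets
    interrupt_cond: Dict[str, Callable[[object, tuple], bool]]    # I(m)
    interrupt_handler: Dict[str, str]                         # mu_I(m)
    abort_cond: Dict[str, Callable[[object, tuple], bool]]    # A(m)
    abort_handler: Dict[str, str]                             # mu_A(m)
    memory_configs: FrozenSet[tuple]                          # Z
    psi: Dict[str, Callable[..., tuple]]                      # Psi(m)
```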
Theorem 1 For any MDP M and any PHAM H, the operation of H in M induces a joint
SMDP, called H ∘ M. If π is an optimal solution for H ∘ M, then the primitive actions
specified by π constitute an optimal policy for M among those consistent with H.
The state space of H ∘ M may be enormous. As is illustrated in Figure 4, however, we
can obtain significant further savings, just as in [9]. First, not all pairs of PHAM and MDP
states will be reachable from the initial state; second, the complexity of the induced SMDP
is solely determined by the number of reachable choice points.
Theorem 2 For any MDP M and PHAM H, let C be the set of choice points in H ∘ M.
There exists an SMDP called reduce(H ∘ M) with states C such that the optimal policy for
reduce(H ∘ M) corresponds to an optimal policy for M among those consistent with H.
The reduced SMDP can be solved by offline, model-based techniques using the method
given in [9] for constructing the reduced model. Alternatively, and much more simply,
we can solve it using online, model-free HAMQ-learning [8], which learns directly in the
reduced state space of choice points. Starting from a choice state ω where the agent takes
action a, the agent keeps track of the reward r_tot and discount β_tot accumulated on the way
to the next choice point, ω′. On each step, the agent encounters reward r_i and discount
β_i (note that β_i is 1 exactly when the agent transitions only in the PHAM and not in the
MDP), and updates the totals as follows:

r_tot ← r_tot + β_tot r_i;    β_tot ← β_tot β_i.

The agent maintains a Q-table, Q(ω, a), indexed by choice state and action. When the
agent gets to the next choice state, ω′, it updates the Q-table as follows:

Q(ω, a) ← (1 − α) Q(ω, a) + α [r_tot + β_tot max_u Q(ω′, u)].
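A compact sketch of this update in Python follows; the class layout is ours, and we assume each entry of steps is the (reward, discount) pair observed on one primitive step between choice points:

```python
from collections import defaultdict

class HAMQLearner:
    def __init__(self, alpha=0.1):
        self.Q = defaultdict(float)   # keyed by (choice_point, action)
        self.alpha = alpha

    @staticmethod
    def accumulate(steps):
        """Fold per-step rewards r_i and discounts beta_i into (r_tot, beta_tot);
        beta_i is 1 for PHAM-internal transitions, where no MDP time passes."""
        r_tot, beta_tot = 0.0, 1.0
        for r_i, beta_i in steps:
            r_tot += beta_tot * r_i
            beta_tot *= beta_i
        return r_tot, beta_tot

    def update(self, w, a, r_tot, beta_tot, w_next, actions_next):
        # Q(w,a) <- (1-alpha) Q(w,a) + alpha [r_tot + beta_tot max_u Q(w',u)]
        best = max((self.Q[(w_next, u)] for u in actions_next), default=0.0)
        self.Q[(w, a)] = ((1.0 - self.alpha) * self.Q[(w, a)]
                          + self.alpha * (r_tot + beta_tot * best))
```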
We have the following theorem.
Theorem 3 For a PHAM H and an MDP M, HAMQ-learning will converge to an optimal policy for reduce(H ∘ M), with probability 1, with appropriate restrictions on the
learning rate.
5
Expressiveness of the PHAM language
As shown by Parr [9], the HAM language is at least as expressive as some existing action
languages including options [12] and full-β models [11]. The PHAM language is substantially more expressive than HAMs. As mentioned earlier, the Deliver-Patrol PHAM
program has 9 machines whereas the HAM program requires 63. In general, the additional
number of states required to express a PHAM as a pure HAM is |V(Z) × C × Θ|, where
V(Z) is the memory state space, C is the set of possible abort/interrupt contexts, and Θ is
the total parameter space. We also developed a PHAM program for the 3,700-state maze
world used by Parr and Russell [8]. The HAM used in their experiments had 37 machines;
the PHAM program requires only 7.
[Figure 4 diagram: a PHAM fragment, the composition H ∘ M, and reduce(H ∘ M)]
Figure 4: A schematic illustration of the formal results. (1) The top two diagrams are of a PHAM
fragment with 1 choice state and 3 action states (of which one, labelled d, is the start state). The
MDP has 4 states, and action d always leads to state 1 or 4. The composition, H ∘ M, is shown in
(2). Note that there are no incoming arcs to the states <c, 2> or <c, 3>. In (3), reduce(H ∘ M) is
shown. There are only 2 states in the reduced SMDP because there are no incoming arcs to the states
<c, 2> or <c, 3>.
[Figure 5 plot: policy value vs. number of primitive steps (in 10,000s) for Optimal, PHAM-easy, PHAM-hard, and Q-learning]
Figure 5: Learning curves for the Deliver-Patrol domain, averaged over 25 runs. X-axis: number of
primitive steps. Y-axis: value of the policy measured by ten 5,000-step trials. PHAM-hard refers to
the PHAMs given in this paper. PHAM-easy refers to a more complete PHAM, leaving unspecified
only the speed of travel for each activity.
With respect to the induced choice points, the Deliver-Patrol PHAM induces 7,816 choice
points in the joint SMDP, compared with 38,400 in the original MDP. Furthermore, only
15,800 Q-values must be learned, compared with 307,200 for flat Q-learning. Figure 5
shows empirical results for the Deliver-Patrol problem, indicating that Q-learning with a
suitable PHAM program is far faster than flat Q-learning. (Parr and Russell observed similar results for the maze world, where HAMQ-learning finds a good policy in 270,000 iterations compared to 9,000,000 for flat Q-learning.) Note that equivalent HAM and PHAM
programs yield identical reductions in the number of choice points and identical speedups
in Q-learning. Thus, one might argue that PHAMs do not offer any advantage over HAMs,
as they can express the same set of behaviors. However, this would be akin to arguing that
the Java programming language offers nothing over Boolean circuits. Ease of expression
and the ability to utilize greater modularity can greatly ease the task of coding reinforcement learning agents that take advantage of prior knowledge.
An interesting feature of PHAMs was observed in the Deliver-Patrol domain. The initial
PHAM program was constructed on the assumption that the agent should patrol among A,
B, C, D unless there is mail to be delivered. However, the specific rewards are such that
the optimal behavior is to loiter in the mail room until mail arrives, thereby avoiding costly
delays in mail delivery. The PHAM-Q learning agents learned this optimal behavior by
"retargeting" the N av routine to stay in the mail room rather than go to the specified destination. This example demonstrates the difference between constraining behavior through
structure and constraining behavior through subgoals: the former method may give the
agent greater flexibility but may yield "surprising" results. In another experiment, we constrained the PHAM further to prevent loitering. As expected, the agent learned a suboptimal
policy in which Nav had the intended meaning of travelling to a specified destination. This
experience suggests a natural debugging cycle in which the agent designer may examine
learned behaviors and adjust the PHAM program accordingly.
The additional features of the PHAM language allow direct expression of programs from
other formalisms that are not easily expressed using HAMs. For example, programs in
Dietterich's MAXQ language [4] are written easily as PHAMs, but not as HAMs because
the MAXQ language allows parameters. The language of teleo-reactive (TR) programs [7,
2] relies on a prioritized set of condition-action rules to achieve a goal. Each action can
itself be another TR program. The TR architecture can be implemented directly in PHAMs
using the abort mechanism [1].
6 Future work
Our long-term goal in this project is to enable true cross-task learning of skilled behavior.
This requires state abstraction in order to learn choices within PHAMs that are applicable
in large classes of circumstances rather than just to each invocation instance separately.
Dietterich [4] has derived conditions under which state abstraction can be done within his
MAXQ framework without sacrificing recursive optimality (a weaker form of optimality
than hierarchical optimality). We have developed a similar set of conditions, based on a
new form of value function decomposition, such that PHAM learning maintains hierarchical optimality. This decomposition critically depends on the modularity of the programs
introduced by the language extensions presented in this paper.
Recently, we have added recursion and complex data structures to the PHAM language,
incorporating it into a standard programming language (Lisp). This provides the PHAM
programmer with a very powerful set of tools for creating adaptive agents.
References
[1] D. Andre. Programmable HAMs. www.cs.berkeley.edu/~dandre/pham.ps, 2000.
[2] S. Benson and N. Nilsson. Reacting, planning and learning in an autonomous agent. In K. Furukawa, D. Michie, and S. Muggleton, editors, Machine Intelligence 14. 1995.
[3] G. Berry and G. Gonthier. The Esterel synchronous programming language: Design, semantics, implementation. Science of Computer Programming, 19(2):87-152, 1992.
[4] T. G. Dietterich. State abstraction in MAXQ hierarchical RL. In NIPS 12, 2000.
[5] R. J. Firby. Modularity issues in reactive planning. In AIPS 96, pages 78-85. AAAI Press, 1996.
[6] L. P. Kaelbling, M. L. Littman, and A. W. Moore. Reinforcement learning: A survey. JAIR, 4:237-285, 1996.
[7] N. J. Nilsson. Teleo-reactive programs for agent control. JAIR, 1:139-158, 1994.
[8] R. Parr and S. J. Russell. Reinforcement learning with hierarchies of machines. In NIPS 10, 1998.
[9] R. Parr. Hierarchical Control and Learning for MDPs. PhD thesis, UC Berkeley, 1998.
[10] L. Peshkin, N. Meuleau, and L. Kaelbling. Learning policies with external memory. In ICML, 1999.
[11] R. Sutton. Temporal abstraction in reinforcement learning. In ICML, 1995.
[12] R. Sutton, D. Precup, and S. Singh. Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. Artificial Intelligence, 112(1):181-211, February 1999.
1,025 | 1,937 | Emergence of movement sensitive
neurons' properties by learning a sparse
code for natural moving images
Rafal Bogacz
Dept. of Computer Science
University of Bristol
Bristol BS8 1UB, U.K.
Malcolm W. Brown Christophe Giraud-Carrier
Dept. of Anatomy
Dept. of Computer Science
University of Bristol
University of Bristol
Bristol BS8 1TD, U.K.
Bristol BS8 1UB, U.K.
R.Bogacz@bristol.ac.uk  M.W.Brown@bristol.ac.uk
cgc@cs.bris.ac.uk
Abstract
Olshausen & Field demonstrated that a learning algorithm that
attempts to generate a sparse code for natural scenes develops a
complete family of localised, oriented, bandpass receptive fields,
similar to those of 'simple cells' in V1. This paper describes an
algorithm which finds a sparse code for sequences of images that
preserves information about the input. This algorithm when trained
on natural video sequences develops bases representing the
movement in particular directions with particular speeds, similar to
the receptive fields of the movement-sensitive cells observed in
cortical visual areas. Furthermore, in contrast to previous
approaches to learning direction selectivity, the timing of neuronal
activity encodes the phase of the movement, so the precise timing
of spikes is crucially important to the information encoding.
1 Introduction
It was suggested by Barlow [3] that the goal of early sensory processing is to reduce
redundancy in sensory information and the activity of sensory neurons encodes
independent features. Neural modelling can give some insight into how these neural
nets may learn and operate. Atick & Redlich [1] showed that training a neural
network on patches of natural images, aiming to remove pair-wise correlation
between neuronal responses, results in neurons having centre-surround receptive
fields resembling those of retinal ganglion neurons. Olshausen & Field [11,12]
demonstrated that a learning algorithm that attempts to generate a sparse code for
natural scenes while preserving information about the visual input, develops a
complete family of localised, oriented, bandpass receptive fields, similar to those of
simple-cells in V1. The activities of the neurons implementing this coding signal the
presence of edges, which are basic components of natural images. Olshausen &
Field chose their algorithm to create a sparse representation because it possesses a
higher degree of statistical independence among its outputs [11]. Similar receptive
fields were also obtained by training a neural net so as to make the responses of
neurons as independent as possible [4]. Other authors [14,16,5] have shown that
direction selectivity of the simple-cells may also emerge from unsupervised
learning. However, there is no agreed way of how the receptive fields of neurons
that encode movements are created.
This paper describes an algorithm which finds a sparse code for sequences of
images that preserves the critical information about the input. This algorithm,
trained on natural video images, develops bases representing movements in
particular directions at particular speeds, similar to the receptive fields of the
movement-sensitive cells observed in early visual areas [9,2]. The activities of the
neurons implementing this encoding signal the presence of edges moving with
certain speeds in certain directions, with each neuron having its preferred speed and
direction. Furthermore, in contrast to all the previous approaches, the timing of
neural activity encodes the movement's phase, so the precise timing of spikes is
crucially important for information coding.
The proposed algorithm is an extension of the one proposed by Olshausen & Field.
Hence it is a high level algorithm, which cannot be directly implemented in a
biologically plausible neural network. However, a plausible neural network
performing a similar task can be developed. The proposed algorithm is described in
Section 2. Sections 3 and 4 show the methods and the results of simulations. Finally,
Section 5 discusses how the algorithm differs from the previous approaches, and the
implications of the presented results.
2
Description of the algorithm
Since the proposed algorithm is an extension of the one described by Olshausen &
Field [11 ,12], this section starts with a brief introduction of the main ideas of their
algorithm. They assume that an image x can be represented in terms of a linear
superposition of basis functions Ai. For clarity of notation, let us represent both
images and bases as vectors created by concatenating rows of pixels as shown in
Figure 1, and let each number in the vector describe the brightness of the
corresponding pixel. Let the basis functions Ai form the columns of a matrix A . Let
the weighting of the above mentioned linear superposition (which changes from one
image to the next) be given by a vector s:
x=As
(1)
The image x may be encoded, for example using the inverted transformation where
it exists. Hence, the image code s is determined by the choice of basis functions Ai.
Olshausen & Field [11,12] try to find bases that result in a code s that preserves
information about the original image x and that is sparse. Therefore, they minimise
the following cost function with respect to A, where λ denotes a constant
determining the importance of sparseness [11]:

E = -[preserved information in s about x] - λ [sparseness of s]    (2)
The algorithm proposed in this paper is similar, but it takes into consideration the
temporal order of images. Let us divide time into intervals (to be able to treat it as
discrete) and denote the image observed at time t and the code generated by x^t and
s^t, respectively. The Olshausen & Field algorithm assumes that image x is a linear
superposition (mixture) of s. By contrast, our algorithm assumes that images are
convolved mixtures of s, i.e., s^t depends not only on x^t but also on x^{t-1}, x^{t-2}, ..., x^{t-(T-1)}
(i.e., s^t depends on the T preceding x^t). Therefore, each basis function may also be
[Figure 1 graphic: an image flattened row by row into a vector x]
Figure 1: Representing images as vectors.
[Figure 2 graphic: images x^1..x^6, their code coefficients s_i^t, and the two basis sequences A_1, A_2]
Figure 2: Encoding of an image sequence. In the example, there are two basis
functions, each described by T = 3 vectors. The first basis encodes movement to
the right, the second encodes movement down. A sequence x of 6 images is
shown on the top and the corresponding code s below. A "spike" over a
coefficient s_i^t denotes that s_i^t = 1; the absence of a "spike" denotes s_i^t = 0.
represented as a sequence of vectors A_i^0, A_i^1, ..., A_i^{T-1} (corresponding to a sequence
of images). These vectors create columns of the mixing matrices A^0, A^1, ..., A^{T-1}.
Each coefficient s_i^t describes how strongly the basis function A_i is present in the last
T images. This relationship is illustrated in Figure 2 and is expressed by Equation 3.
x^t = Σ_{τ=0}^{T-1} A^τ s^{t+τ}    (3)
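A small NumPy sketch of Equation 3, under an assumed array layout (A[tau] holds the τ-th frame of every basis as a column; all names below are ours):

```python
import numpy as np

def reconstruct(A, s):
    """Convolved mixture of Equation 3: x^t = sum_tau A^tau s^{t+tau}.

    A: (T, n_pixels, n_bases) array; s: (P, n_bases) array of coefficients.
    Frame t needs s^t..s^{t+T-1}, so only P-T+1 frames are fully determined.
    """
    T, P = A.shape[0], s.shape[0]
    x = np.zeros((P - T + 1, A.shape[1]))
    for t in range(P - T + 1):
        for tau in range(T):
            x[t] += A[tau] @ s[t + tau]
    return x
```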
In the proposed algorithm, the basis functions A are also found by optimising the
cost function of Equation 2. The detailed method of this minimisation is described
below, and this paragraph gives its overview. In each optimisation step, a sequence
x of P image patches is selected from a random position in the video sequence (P ≥
2T). Each of the optimisation steps consists of two operations. Firstly, the sequence
of coefficient vectors s which minimises the cost function E for the images x is
found. Secondly, the basis matrices A are modified in the direction opposite to the
gradient of E over A, thus minimising the cost function. These two operations are
repeated for different sequences of image patches.
In Equation 2, the term "preserved information in s about x" expresses how well x
may be reconstructed on the basis of s. In particular, it is defined as the negative of
the square of the reconstruction error. The reconstruction error is the difference
between the original image sequence x and the sequence of images r reconstructed
from s. The sequence r may be reconstructed from s in the following way:
r^t = Σ_{τ=0}^{T-1} A^τ s^{t+τ}    (4)
The precise definition of the cost function is then given by:
E = Σ_{t=T}^{P-T+1} [ Σ_a (x_a^t - r_a^t)² + λ Σ_i C(s_i^t / σ) ]    (5)
In Equation 5, C is a nonlinear function, and σ is a scaling constant. Images at the
start and end of the sequence (e.g., x^1, x^P) may share some bases with images not in
the sequence (e.g., x^0, x^{-1}, x^{P+1}). To avoid this problem, only the middle images are
reconstructed and only for them is the reconstruction error computed in the cost
function. In particular, only images from T to P-T+1 are reconstructed - since the
assumed length of the bases is T, those images contain only the bases whose other
parts are also contained in the sequence. Since only images from T to P-T+1 are
reconstructed, it is clear from Equation 4 that only coefficients s^T to s^P need to be
found. These considerations explain the limits of the outer summations in both
terms of Equation 5.
For each image sequence, in the first operation, the coefficients s^T, s^{T+1}, ..., s^P
minimising E are found using an optimisation method. Minus the gradient of E over
s is given by:
-∂E/∂s_i^t = 2 Σ_{τ=0}^{T-1} Σ_j (x_j^{t-τ} - r_j^{t-τ}) A_{j,i}^τ - (λ/σ) C′(s_i^t/σ)    (6)
In the second operation, the bases A are modified so as to minimise E:

ΔA_{j,i}^τ = η Σ_t (x_j^t - r_j^t) s_i^{t+τ}    (7)
In Equation 7, η denotes the learning rate. The vector length of each basis function
A_i is adapted over time so as to maintain equal variance on each coefficient s_i, in
exactly the same way as described in [12].
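The two operations of one optimisation step can be sketched as follows; the array conventions follow the earlier sketch, C(x) = ln(1+x²) as in the experiments, and a plain gradient on s stands in for the conjugate gradient solver actually used:

```python
import numpy as np

def d_cost(u):
    # derivative of the sparseness penalty C(u) = ln(1 + u^2)
    return 2.0 * u / (1.0 + u ** 2)

def neg_grad_s(x, r, A, s, lam, sigma):
    """Minus the gradient of E over s (Equation 6); s^t enters the
    reconstruction of frame x^{t-tau} through basis frame A^tau."""
    T = A.shape[0]
    g = np.zeros_like(s)
    for t in range(s.shape[0]):
        for tau in range(T):
            tp = t - tau
            if 0 <= tp < x.shape[0]:
                g[t] += 2.0 * A[tau].T @ (x[tp] - r[tp])
        g[t] -= (lam / sigma) * d_cost(s[t] / sigma)
    return g

def update_bases(A, x, r, s, eta):
    """Gradient step on the bases (Equation 7)."""
    for tau in range(A.shape[0]):
        for t in range(x.shape[0]):
            A[tau] += eta * np.outer(x[t] - r[t], s[t + tau])
    return A
```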
3
Methods of simulations
The proposed algorithm was implemented in Matlab except for finding s minimising
E, which was implemented in C++, using the conjugate gradient method for the sake
of speed. In the implementation, the original codes of Olshausen & Field were used
and modified (downloaded from http://redwood.ucdavis.edu/bruno/sparsenet.html).
Many parameters of the proposed algorithm were taken from [11]. In particular,
C(x) = ln(1+x²), σ is the standard deviation of pixels' colours in the images, λ is set
up such that λ/σ = 0.14, and η = 1. ΔA is averaged over 100 image sequences, and
hence the bases A are updated with the average of ΔA every 100 optimisation steps.
The length of an image sequence P is set up such that P = 3T.
The proposed algorithm was tested on two types of video sequences: 'toy' problems
and natural video sequences. Each of the toy sequences consisted of 10 frames of 100x100 pixels. In the sequence, there were 20 moving lines. Each line was either
horizontal or vertical and 1 pixel thick. Each line was either black or white, which
corresponded to positive or negative values of the elements of x vectors (the grey
background corresponded to zero). Each horizontal line moved up or down, each
vertical - left or right, with the speed of one pixel per frame.
Then the algorithm was tested on five natural video sequences showing moving
people or animals. In each optimisation step, a sequence of image patches was
selected from a randomly chosen video. The video sequences were preprocessed.
First, to remove the static aspect of the images, from each frame the previous one
was subtracted, i.e., each image encoded the difference between two successive
frames of the video. This simple operation reduces redundancy in data since the
corresponding pixels in the successive frames tend to have similar colours. An
analogous operation may be performed by the retina, since the ganglion cells
typically respond to the changes in light intensity [10].
Then, to remove the pair-wise correlation between pixels of the same frame, Zero-phase Component Analysis (ZCA) [4] was applied to each of the patches from the
selected sequence, i.e., x^t := W x^t, where W = ⟨x^t (x^t)′⟩^{-1/2}, i.e., W is equal to the
inverted square root of the covariance matrix of x. The filters in W have centre-surround receptive fields resembling those of retinal ganglion neurons [4].
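A sketch of this whitening step (the eigendecomposition route and the small regulariser eps are our own choices):

```python
import numpy as np

def zca_whiten(X, eps=1e-8):
    """ZCA: multiply by the inverse square root of the covariance matrix.
    X holds one image patch per column."""
    C = X @ X.T / X.shape[1]          # covariance <x x'>
    d, E = np.linalg.eigh(C)          # C = E diag(d) E'
    W = E @ np.diag(1.0 / np.sqrt(d + eps)) @ E.T
    return W @ X
```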
1,026 | 1,938 | Learning and Tracking Cyclic Human
Motion
D.Ormoneit
Dept. of Computer Science
Stanford University
Stanford, CA 94305
ormoneit@cs.stanford.edu
H. Sidenbladh
Royal Institute of Technology (KTH),
CVAP/NADA,
S-100 44 Stockholm, Sweden
hedvig@nada.kth.se
M. J. Black
Dept. of Computer Science
Brown University, Box 1910
Providence, RI 02912
black@cs.brown.edu
T. Hastie
Dept. of Statistics
Stanford University
Stanford, CA 94305
hastie@stat.stanford.edu
Abstract
We present methods for learning and tracking human motion in
video. We estimate a statistical model of typical activities from a
large set of 3D periodic human motion data by segmenting these
data automatically into "cycles". Then the mean and the principal components of the cycles are computed using a new algorithm
that accounts for missing information and enforces smooth transitions between cycles. The learned temporal model provides a
prior probability distribution over human motions that can be used
in a Bayesian framework for tracking human subjects in complex
monocular video sequences and recovering their 3D motion.
1
Introduction
The modeling and tracking of human motion in video is important for problems as
varied as animation, video database search, sports medicine, and human-computer
interaction. Technically, the human body can be approximated by a collection of
articulated limbs and its motion can be thought of as a collection of time-series
describing the joint angles as they evolve over time. A key challenge in modeling
these joint angles involves decomposing the time-series into suitable temporal primitives. For example, in the case of repetitive human motion such as walking, motion
sequences decompose naturally into a sequence of "motion cycles" . In this work,
we present a new set of tools that carry out this segmentation automatically using
the signal-to-noise ratio of the data in an aligned reference domain. This procedure
allows us to use the mean and the principal components of the individual cycles in
the reference domain as a statistical modeL Technical difficulties include missing information in the motion time-series (resulting from occlusions) and the necessity of
enforcing smooth transitions between different cycles. To deal with these problems,
we develop a new iterative method for functional Principal Component Analysis
(PCA). The learned temporal model provides a prior probability distribution over
human motions that can be used in a Bayesian framework for tracking. The details
of this tracking framework are described in [7] and are briefly summarized here.
Specifically, the posterior distribution of the unknown motion parameters is represented using a discrete set of samples and is propagated over time using particle
filtering [3 , 7]. Here the prior distribution based on the PCA representation improves the efficiency of the particle filter by constraining the samples to the most
likely regions of the parameter space. The resulting algorithm is able to track human subjects in monocular video sequences and to recover their 3D motion under
changes in their pose and against complex unknown backgrounds.
Previous work on modeling human motion has focused on the recognition of activities using Hidden Markov Models (HMM's), linear dynamical models, or vector
quantization (see [7, 5] for a summary of related work). These approaches typically
provide a coarse approximation to the underlying motion. Alternatively, explicit
temporal curves corresponding to joint motion may be derived from biometric studies or learned from 3D motion-capture data. In previous work on principal component analysis of motion data, the 3D motion curves corresponding to particular
activities had typically to be hand-segmented and aligned [1, 7, 8]. By contrast,
this paper details an automated method for segmenting the data into individual
activities, aligning activities from different examples, modeling the statistical variation in the data, dealing with missing data, enforcing smooth transitions between
cycles, and deriving a probabilistic model suitable for a Bayesian interpretation. We
focus here on cyclic motions which are a particularly simple but important class of
human activities [6]. While Bayesian methods for tracking 3D human motion have
been suggested previously [2 , 4], the prior information obtained from the functional
PCA proves particularly effective for determining a low-dimensional representation
of the possible human body positions [8 , 7].
2
Learning
Training data provided by a commercial motion capture system describes the
evolution of m = 19 relative joint angles over a period of about 500 to 5000 frames.
We refer to the resulting multivariate time-series as a "motion sequence" and we
use the notation z_i(t) ≡ {z_{a,i}(t) | a = 1, ..., m} for t = 1, ..., T_i to denote the angle measurements. Here T_i denotes the length of sequence i and a = 1, ..., m
is the index for the individual angles. Altogether, there are n = 20 motion
sequences in our training set. Note that missing observations occur frequently
as body markers are often occluded during motion capture. An associated set
I_{a,i} ≡ {t ∈ {1, ..., T_i} | z_{a,i}(t) is not missing} indicates the positions of valid data.
2.1
Sequence Alignment
Periodic motion is composed of repetitive "cycles" which constitute a natural unit
of statistical modeling and which must be identified in the training data prior to
building a model. To avoid error-prone manual segmentation we present alignment
procedures that segment the data automatically by separately estimating the cycle length and a relative offset parameter for each sequence. The cycle length is
computed by searching for the value p that maximizes the "signal-to-noise ratio":
stn_ratio_i(p) = Σ_{a=1}^{m} signal_{i,a}(p) / noise_{i,a}(p)    (1)
[Figure 1 graphics]
Figure 1: Left: Signal-to-noise ratio of a representative set of angles as a function
of the candidate period length. Right: Aligned representation of eight walking
sequences.
where noise_{i,a}(p) is the variation in the data that is not explained by the mean
cycle, z̄, and signal_{i,a}(p) measures the signal intensity.¹ In Figure 1 we show the
individual signal-to-noise ratios for a subset of the angles as well as the accumulated
signal-to-noise ratio as functions of p in the range {50, 51, ..., 250}. Note the peak
of these values around the optimal cycle length p = 126. Note also that the signal-to-noise ratio of the white noise series in the first row is approximately constant,
warranting the unbiasedness of our approach.
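One plausible reading of this criterion in code (the paper defers the exact definitions of signal and noise to [5], so the variance-based choices below are assumptions):

```python
import numpy as np

def stn_ratio(z, p):
    """Signal-to-noise ratio of one angle series z for candidate period p."""
    n_cycles = len(z) // p
    folded = z[:n_cycles * p].reshape(n_cycles, p)     # one cycle per row
    mean_cycle = np.nanmean(folded, axis=0)
    noise = np.nanmean((folded - mean_cycle) ** 2)     # residual variance
    signal = np.nanvar(mean_cycle)                     # variance of mean cycle
    return signal / max(noise, 1e-12)

def best_period(series, candidates=range(50, 251)):
    """Accumulate the ratio over all angles a of one sequence (Equation 1)."""
    scores = [sum(stn_ratio(z, p) for z in series) for p in candidates]
    return list(candidates)[int(np.argmax(scores))]
```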
Next, we estimate the offset parameters, o, to align multiple motion sequences in
a common domain. Specifically, we choose o(1), o(2), ..., o(n) so that the shifted
motion sequences minimize the deviation from a common prototype model by analogy to the signal-to-noise criterion (1). An exhaustive search for the optimal offset
combination is computationally infeasible. Instead, we suggest the following iterative procedure: We initialize the offset values to zero in Step 1, and we define a
reference signal r_a in Step 2 so as to minimize the deviation with respect to the
aligned data. This reference signal is a periodically constrained regression spline
that ensures smooth transitions at the boundaries between cycles. Next, we choose
the offsets of all sequences so that they minimize the prediction error with respect
to the reference signal (Step 3). By contrast to the exhaustive search, this operation
requires O(Σ_{i=1}^{n} p(i)) comparisons. Because the solution of the first iteration may
be suboptimal, we construct an improved reference signal using the current offset
estimates, and use this signal in turn to improve the offset estimates. Repeating
these steps, we obtain an iterative optimization algorithm that is terminated if the
improvement falls below a given threshold. Because Steps 2 and 3 both decrease the
prediction error, the algorithm converges monotonically. Figure 1 (right)
shows eight joint angles of a walking motion, aligned using this procedure.
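A simplified sketch of this loop for a single angle; the plain mean cycle below stands in for the periodically constrained regression spline, and a fixed iteration count replaces the paper's threshold test:

```python
import numpy as np

def fold(z, p, o):
    """Shift series z by offset o and cut it into complete cycles of length p."""
    shifted = np.roll(z, -o)
    return shifted[: (len(shifted) // p) * p].reshape(-1, p)

def align_offsets(seqs, p, n_iter=10):
    offsets = np.zeros(len(seqs), dtype=int)                 # Step 1
    for _ in range(n_iter):
        ref = np.nanmean(np.vstack([fold(z, p, o)            # Step 2
                                    for z, o in zip(seqs, offsets)]), axis=0)
        for i, z in enumerate(seqs):                         # Step 3
            errs = [np.nansum((fold(z, p, o) - ref) ** 2) for o in range(p)]
            offsets[i] = int(np.argmin(errs))
    return offsets
```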
2.2
Functional PCA
The above alignment procedures segment the training data into a collection of
cycle-data called "slices". Next, we compute the principal components of these
slices, which can be interpreted as the major sources of variation in the data. The
algorithm is as follows:
¹The mean cycle is obtained by "folding" the original sequence into the domain
{1, ..., p}. For brevity, we don't provide formal definitions here; see [5].
1. For a = 1, ..., m and i = 1, ..., n:
(a) Dissect z_{i,a} into K_i cycles of length p(i), marking missing values at both ends. This gives a new set of time series z_{k,a}^(1) for k = 1, ..., K_i, where K_i = ⌈T_i / p(i)⌉ + 1. Let I_{k,a} be the new index set for this series.
(b) Compute functional estimates ẑ_{k,a} in the domain [0,1].
(c) Resample the data in the reference domain, imputing missing observations. This gives yet another time-series z_{k,a}^(2)(j) := ẑ_{k,a}(j/T) for j = 0, 1, ..., T.
2. Stack the "slices" z_{k,a}^(2) obtained from all sequences row-wise into a (Σ_i K_i) × mT design matrix X.
3. Compute the row-mean μ of X, and let X^(1) := X - 1μ′, where 1 is a vector of ones.
4. Slice by slice, compute the Fourier coefficients of X^(1), and store them in a new matrix, X^(2). Use the first 20 coefficients only.
5. Compute the Singular Value Decomposition of X^(2): X^(2) = USV′.
6. Reconstruct X^(2), using the rank-q approximation to S: X^(3) = U S_q V′.
7. Apply the Inverse Fourier Transform and add 1μ′ to obtain X^(4).
8. Impute the missing values in X using the corresponding values in X^(4).
9. Evaluate ||X - X^(4)||. Stop if the performance improvement is below 10^-6; otherwise, go to Step 3.
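A condensed sketch of the SVD/imputation loop (Steps 3, 5, 6, 8, and 9); the Fourier smoothing of Steps 4 and 7 is omitted here for brevity, and missing entries are assumed to be coded as NaN:

```python
import numpy as np

def pca_with_imputation(X, q, tol=1e-6, max_iter=100):
    miss = np.isnan(X)
    X = np.where(miss, np.nanmean(X, axis=0), X)   # crude initial fill
    prev_err = np.inf
    for _ in range(max_iter):
        mu = X.mean(axis=0)                                    # Step 3
        U, S, Vt = np.linalg.svd(X - mu, full_matrices=False)  # Step 5
        X4 = U[:, :q] @ np.diag(S[:q]) @ Vt[:q] + mu           # Step 6 (rank q)
        err = np.linalg.norm((X - X4)[~miss])                  # Step 9
        X[miss] = X4[miss]                                     # Step 8
        if prev_err - err < tol:
            break
        prev_err = err
    return X, mu, Vt[:q]
```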
Our algorithm addresses several difficulties. First, even though the individual motion sequences are aligned in Figure 1, they are still sampled at different frequencies
in the reference domain due to the different alignment parameters. This problem
is accommodated in Step 1c by resampling after computing a functional estimate
in continuous time in Step 1b. Second, missing data in the design matrix X means
we cannot simply use the Singular Value Decomposition (SVD) of X^(1) to obtain
the principal components. Instead we use an iterative approximation scheme [9] in
which we alternate between an SVD step (4 through 7) and a data imputation step
(8), where each update is designed so as to decrease the matrix distance between X
and its reconstruction, X^(4). Finally, we need to ensure that the mean estimates and
the principal components produce a smooth motion when recombined into a new
sequence. Specifically, the approximation of an individual cycle must be periodic in
the sense that its first two derivatives match at the left and the right endpoint. This
is achieved by translating the cycles into a Fourier domain and by truncating high-frequency coefficients (Step 4). Then we compute the SVD in the Fourier domain
in Step 5, and we reconstruct the design matrix using a rank-q approximation in
Steps 6 and 7, respectively. In Step 8 we use the reconstructed values as improved
estimates for the missing data in X, and then we repeat Steps 4 through 7 using
these improved estimates. This iterative process is continued until the performance
improvement falls below a given threshold. As its output, the algorithm generates
the imputed design matrix, X, as well as its principal components.
3
Bayesian Tracking
In tracking, our goal is to calculate the posterior probability distribution over 3D
human poses given a sequence of image measurements, I_t. The high dimensionality
of the body model makes this calculation computationally demanding. Hence, we
use the learned model above to constrain the body motions to valid walking motions.
Towards that end, we use the SVD of X^(2) to formulate a prior distribution for
Bayesian tracking.
Formally, let θ(t) ≡ (θ_a(t) | a = 1, ..., m) be a random vector of the relative joint
angles at time t; i.e., the value of a motion sequence, z_i(t), at time t is interpreted
as the i-th realization of θ(t). Then θ(t) can be written in the form

θ(t) = μ̃(ψ_t) + Σ_{k=1}^{q} c_{t,k} ṽ_k(ψ_t),    (2)

where ṽ_k is the Fourier inverse of the k-th column of V, rearranged as a T × m matrix; similarly, μ̃ denotes the rearranged mean vector μ. ṽ_k(ψ) is the ψ-th column
of ṽ_k, and the c_{t,k} are time-varying coefficients. ψ_t ∈ {0, ..., T-1} maps absolute time
onto relative cycle positions or phases, and ρ_t denotes the speed of the motion
such that ψ_{t+1} = (ψ_t + ρ_t) mod T. Given representation (2), body positions are
characterized entirely by the low-dimensional state vector φ_t = (c_t, ψ_t, ρ_t, τ_t^g, θ_t^g),
where c_t = (c_{t,1}, ..., c_{t,q}) and where τ_t^g and θ_t^g represent the global 3D translation
and rotation of the torso, respectively. Hence the problem is to calculate the
posterior distribution of φ_t given images up to time t. Due to the Markovian
structure underlying φ_t, this posterior distribution is given recursively by:

p(φ_t | Ī_t) ∝ p(I_t | φ_t) ∫ p(φ_t | φ_{t-1}) p(φ_{t-1} | Ī_{t-1}) dφ_{t-1}.    (3)
Here p(I_t | φ_t) is the likelihood of observing the image I_t given the parameters, and p(φ_{t−1} | I_{t−1}) is the posterior probability from the previous instant. p(φ_t | φ_{t−1}) is a temporal prior probability distribution that encodes how the parameters φ_t change over time. The elements of the Bayesian approach are summarized below; for details the reader is referred to [7].
Generative Image Model. Let M(I_t, φ_t) be a function that takes image texture at time t and, given the model parameters, maps it onto the surfaces of the 3D model using the camera model. Similarly, let M^{−1}(·) take a 3D model and project its texture back into the image. Given these functions, the generative model of images at time t + 1 can be viewed as a mapping from the image at time t to images at time t + 1:

I_{t+1} = M^{−1}(M(I_t, φ_t), φ_{t+1}) + η,   η ∼ G(0, σ),

where G(0, σ) denotes a Gaussian distribution with zero mean and standard deviation σ, and σ depends on the viewing angle of the limb with respect to the camera and increases as the limb is viewed more obliquely (see [7] for details).
Temporal Prior. The temporal prior, p(φ_t | φ_{t−1}), models how the parameters describing the body configuration are expected to vary over time. The individual components of φ, (c_t, ψ_t, ρ_t, τ^g_t, θ^g_t), are assumed to follow a random walk with Gaussian increments.
Likelihood Model. Given the generative model above we can compare the image at time t − 1 to the image I_t at t. Specifically, we compute this likelihood term separately for each limb. To avoid numerical integration over image regions, we generate n_s pixel locations stochastically. Denoting the i-th sample for limb j as x_{j,i}, we obtain the following measure of discrepancy:

E = Σ_{i=1}^{n_s} ( I_t(x_{j,i}) − M^{−1}(M(I_{t−1}, φ_{t−1}), φ_t)(x_{j,i}) )².   (4)

As an approximate likelihood term we use

p(I_t | φ_t) = Π_j [ q(α_j)/(√(2π) σ(α_j)) exp(−E/(2σ(α_j)² n_s)) + (1 − q(α_j)) p_occluded ],   (5)
Figure 2: Tracking of person walking, 10000 samples. Upper rows: frames 0, 10, 20,
30, 40, 50 with the projection of the expected model configuration overlaid. Lower row:
expected 3D configuration in the same frames.
where p_occluded is a constant probability that a limb is occluded, α_j is the angle between the limb j principal axis and the image plane of the camera, σ(α_j) is a function that increases with narrow viewing angles, and q(α_j) = cos(α_j) if limb j is non-occluded, or 0 if limb j is occluded.
Particle Filter. As is typical for tracking problems, the posterior distribution may well be multi-modal due to the nonlinearity of the likelihood function. Hence, we use a particle filter for inference, where the posterior is represented as a weighted set of state samples which are propagated in time. In detail, we use N_s ≈ 10^4 particles in our experiments. Details of this algorithm can be found in [3, 7].
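A minimal sketch of one such propagate-weight step is shown below. The function signature, the diffusion parameter, and the abstract `likelihood` callable are illustrative assumptions; the likelihood corresponds to eq. (5) only schematically.

```python
import numpy as np

def particle_filter_step(particles, weights, image, likelihood, sigma):
    """One step of a generic particle filter: resample, diffuse the state
    with Gaussian increments (the temporal prior), and reweight by the
    image likelihood.  A schematic sketch, not the authors' code."""
    n = len(particles)
    # Resample proportionally to the current weights.
    idx = np.random.choice(n, size=n, p=weights)
    particles = particles[idx]
    # Temporal prior: random walk with Gaussian increments on each component.
    particles = particles + sigma * np.random.randn(*particles.shape)
    # Reweight each particle by the likelihood p(I_t | phi_t).
    weights = np.array([likelihood(image, phi) for phi in particles])
    weights /= weights.sum()
    # The expected configuration used for display is the weighted mean.
    expected_phi = (weights[:, None] * particles).sum(axis=0)
    return particles, weights, expected_phi
```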
4
Experiment
To illustrate the method we show an example of tracking a walking person in a
cluttered scene in Figure 2. The 3D motion is recovered from a monocular sequence
using only the motion between frames. To visualize the posterior distribution we
display the projection of the 3D model corresponding to the expected value of the model parameters, Σ_{i=1}^{N_s} p_i φ_t^i, where p_i is the likelihood of sample φ_t^i. All parameters were initialized manually with a Gaussian prior at time t = 0. The
learned model is able to generalize to the subject in the sequence who was not part
of the training set.
5
Conclusions
We described an automated method for learning periodic human motions from
training data using statistical methods for detecting the length of the periods in the
data, segmenting it into cycles, and optimally aligning the cycles. We also presented
a PCA method for building a statistical eigen-model of the motion curves that copes
with missing data and enforces smoothness between the beginning and ending of a
motion cycle. The learned eigen-curves are used as a prior probability distribution
in a Bayesian tracking framework. Tracking in monocular image sequences was
performed using a particle filtering technique, and results were shown for a cluttered image sequence.
Acknowledgements. We thank M. Gleicher for generously providing the 3D
motion-capture data and M. Kamvysselis and D. Fleet for many discussions on
human motion and Bayesian estimation. Portions of this work were supported by
the Xerox Corporation and we gratefully acknowledge their support.
References
[1] A. Bobick and J. Davis. An appearance-based representation of action. ICPR,
1996.
[2] T-J. Cham and J. Rehg. A multiple hypothesis approach to figure tracking.
CVPR, pp. 239-245, 1999.
[3] M. Isard and A. Blake. Contour tracking by stochastic propagation of conditional density. ECCV, pp. 343-356, 1996.
[4] M. E. Leventon and W. T. Freeman. Bayesian estimation of 3-d human motion
from an image sequence. Tech. Report TR-98-06, Mitsubishi Electric Research
Lab, 1998.
[5] D. Ormoneit, H. Sidenbladh, M. Black, T. Hastie. Learning and tracking human motion using functional analysis. Submitted: IEEE Workshop on Human
Modeling, Analysis and Synthesis, 2000.
[6] S.M. Seitz and C.R. Dyer. Affine invariant detection of periodic motion. CVPR,
pp. 970-975, 1994.
[7] H. Sidenbladh, M. J. Black, and D. J. Fleet. Stochastic tracking of 3D human
figures using 2D image motion. To appear, ECCV-2000, Dublin, Ireland.
[8] Y. Yacoob and M. Black. Parameterized modeling and recognition of activities
in temporal surfaces. CVIU, 73(2):232-247, 1999.
[9] G. Sherlock, M. Eisen, O. Alter, D. Botstein, P. Brown, T. Hastie, and R. Tibshirani. "Imputing missing data for gene expression arrays," 2000, Working
Paper, Department of Statistics, Stanford University.
Periodic Component Analysis:
An Eigenvalue Method for Representing
Periodic Structure in Speech
Lawrence K. Saul and Jont B. Allen
{lsaul,jba}@research.att.com
AT&T Labs, 180 Park Ave, Florham Park, NJ 07932
Abstract
An eigenvalue method is developed for analyzing periodic structure in
speech. Signals are analyzed by a matrix diagonalization reminiscent of
methods for principal component analysis (PCA) and independent component analysis (ICA). Our method, called periodic component analysis (πCA), uses constructive interference to enhance periodic components
of the frequency spectrum and destructive interference to cancel noise.
The front end emulates important aspects of auditory processing, such as
cochlear filtering, nonlinear compression, and insensitivity to phase, with
the aim of approaching the robustness of human listeners. The method
avoids the inefficiencies of autocorrelation at the pitch period: it does not
require long delay lines, and it correlates signals at a clock rate on the
order of the actual pitch, as opposed to the original sampling rate. We
derive its cost function and present some experimental results.
1 Introduction
Periodic structure in the time waveform conveys important cues for recognizing and understanding speech[1]. At the end of an English sentence, for example, rising versus falling
pitch indicates the asking of a question; in tonal languages, such as Chinese, it carries linguistic information. In fact, early in the speech chain-prior to the recognition of words or
the assignment of meaning-the auditory system divides the frequency spectrum into periodic and non-periodic components. This division is geared to the recognition of phonetic
features[2]. Thus, a voiced fricative might be identified by the presence of periodicity in
the lower part of the spectrum, but not the upper part. In complicated auditory scenes, periodic components of the spectrum are further segregated by their fundamental frequency[3].
This enables listeners to separate simultaneous speakers and explains the relative ease of
separating male versus female speakers, as opposed to two recordings of the same voice[4].
The pitch and voicing of speech signals have been extensively studied[5]. The simplest
method to analyze periodicity is to compute the autocorrelation function on sliding windows of the speech waveform. The peaks in the autocorrelation function provide estimates
of the pitch and the degree of voicing. In clean wideband speech, the pitch of a speaker
can be tracked by combining a peak-picking procedure on the autocorrelation function
with some form of smoothing[6], such as dynamic programming. This method, however,
does not approach the robustness of human listeners in noise, and at best, it provides an
extremely gross picture of the periodic structure in speech. It cannot serve as a basis
for attacking harder problems in computational auditory scene analysis, such as speaker
separation[7], which require decomposing the frequency spectrum into its periodic and
non-periodic components.
The correlogram is a more powerful method for analyzing periodic structure in speech. It
looks for periodicity in narrow frequency bands. Slaney and Lyon[8] proposed a perceptual pitch detector that autocorrelates multichannel output from a model of the auditory
periphery. The auditory model includes a cochlear filterbank and periodicity-enhancing
nonlinearities. The information in the correlogram is summed over channels to produce an
estimate of the pitch. This method has two compelling features: (i) by measuring autocorrelation, it produces pitch estimates that are insensitive to phase changes across channels;
(ii) by working in narrow frequency bands, it produces estimates that are robust to noise.
This method, however, also has its drawbacks. Computing multiple autocorrelation functions is expensive. To avoid aliasing in upper frequency bands, signals must be correlated
at clock rates much higher than the actual pitch. From a theoretical point of view, it is
unsatisfying that the combination of information across channels is not derived from some
principle of optimality. Finally, in the absence of conclusive evidence for long delay lines
(≈10 ms) in the peripheral auditory system, it seems worthwhile, for both scientists and engineers, to study ways of detecting periodicity that do not depend on autocorrelation.
In this paper, we develop an eigenvalue method for analyzing periodic structure in speech.
Our method emulates important aspects of auditory processing but avoids the inefficiencies
of autocorrelation at the pitch period. At the same time, it is highly robust to narrowband
noise and insensitive to phase changes across channels. Note that while certain aspects of
the method are biologically inspired, its details are not intended to be biologically realistic.
2
Method
We develop the method in four stages. These stages are designed to convey the main technical ideas of the paper: (i) an eigenvalue method for combining and enhancing weakly
periodic signals; (ii) the use of Hilbert transforms to compensate for phase changes across
channels; (iii) the measurement of periodicity by efficient sinusoidal fits; and (iv) the hierarchical analysis of information across different frequency bands.
2.1
Cross-correlation of critical bands
Consider the multichannel output of a cochlear filterbank. If the input to this filterbank
consists of noisy voiced speech, the output will consist of weakly periodic signals from
different critical bands. Can we combine these signals to enhance the periodic signature
of the speaker's pitch? We begin by studying a mathematical idealization of the problem.
Given n real-valued signals, {x_i(t)}_{i=1}^n, what linear combination s(t) = Σ_i w_i x_i(t) maximizes the periodic structure at some fundamental frequency f_0, or equivalently, at some pitch period τ = 1/f_0? Ideally, the linear combination should use constructive interference to enhance periodic components of the spectrum and destructive interference to cancel noise. We measure the periodicity of the combined signal by the cost function:
c(w, τ) = [ Σ_t |s(t + τ) − s(t)|² ] / [ Σ_t |s(t)|² ],   with   s(t) = Σ_i w_i x_i(t).   (1)
Here, for simplicity, we have assumed that the signals are discretely sampled and that the period τ is an integer multiple of the sampling interval. The cost function c(w, τ) measures the normalized prediction error, with the period τ acting as a prediction lag. Expanding the right hand side in terms of the weights w_i gives:

c(w, τ) = [ Σ_{ij} w_i w_j A_{ij}(τ) ] / [ Σ_{ij} w_i w_j B_{ij} ],   (2)

where the matrix elements A_{ij}(τ) are determined by the cross-correlations,

A_{ij}(τ) = Σ_t [ x_i(t)x_j(t) + x_i(t+τ)x_j(t+τ) − x_i(t)x_j(t+τ) − x_i(t+τ)x_j(t) ],

and the matrix elements B_{ij} are the equal-time cross-correlations, B_{ij} = Σ_t x_i(t)x_j(t).
Note that the denominator and numerator of eq. (2) are both quadratic forms in the weights w_i. By the Rayleigh-Ritz theorem of linear algebra, the weights w_i minimizing eq. (2) are given by the eigenvector of the matrix B^{-1}A(τ) with the smallest eigenvalue. For fixed τ, this solution corresponds to the global minimum of the cost function c(w, τ).
Thus, matrix diagonalization (or simply computing the bottom eigenvector, which is often
cheaper) provides a definitive answer to the above problem.
The matrix diagonalization which optimizes eq. (2) is reminiscent of methods for principal component analysis (PCA) and independent component analysis (ICA)[9]. Our method, which by analogy we call periodic component analysis (πCA), uses an eigenvalue principle to combine periodicity cues from different parts of the frequency spectrum.
The eigenvalue method in the previous section has one obvious shortcoming: it cannot
compensate for phase changes across channels. In particular, the real-valued linear combination 8(t) = L i WiX;(t) cannot align the peaks of signals that are (say) 11'/2 radians out
of phase, even though such an alignment-prior to combining the signals-would significantly reduce the normalized prediction error in eq. (1).
A simple extension of the method overcomes this shortcoming. Given real-valued signals, {x;(t)} , we consider the analytic signals, {x;(t)}, whose imaginary components are
computed by Hilbert transforms[lO]. The Fourier series of these signals are related by:
X;(t) =
L D:k COS(Wkt + ?k)
?:::::>
x;(t)
=L
k
D:k e;(Wkt+?k).
(3)
k
We now reconsider the problem of the previous section, looking for the linear combination
of analytic signals, 8(t) = L; w;x;(t), that minimizes the cost function in eq. (1). In this
setting, moreover, we allow the weights W; to be complex so that they can compensate for
phase changes across channels. Eq. (2) generalizes in a straightforward way to:
e(w ,7)=
L;j wi wj A;j(7)
L ;j w;* wjB;j '
(4)
where A (7) and B are Hermitian matrices with matrix elements
A;j(7) =
L
[x;(t)Xj(t)
+ x;(t + 7)Xj(t + 7) -
x;(t)Xj(t + 7) - x;(t
+ 7)Xj(t)]
t
and B;j = Lt x;(t)Xj(t). Again, the optimal weights W; are given by the eigenvector
corresponding to the smallest eigenvalue of the matrix B- 1 A (7). (Note that all the eigenvalues of this matrix are real because the matrix is Hermitian.)
Our analysis so far suggests a simple-minded approach to investigating periodic structure
in speech. In particular, consider the following algorithm for pitch tracking. The first
step of the algorithm is to pass speech through a cochlear filterbank and compute analytic
signals, x̃_i(t), via Hilbert transforms. The next step is to diagonalize the matrices B^{-1}A(τ) on sliding windows of x̃_i(t) over a range of pitch periods, τ ∈ [τ_min, τ_max]. The final step is to estimate the pitch periods by the values of τ that minimize the cost function, eq. (1),
for each sliding window. One might expect such an algorithm to be relatively robust to
noise (because it can zero the weights of corrupted channels), as well as insensitive to
phase changes across channels (because it can absorb them with complex weights).
Despite these attractive features, the above algorithm has serious deficiencies. Its worst
shortcoming is the amount of computation needed to estimate the pitch period, τ. Note that the analysis step requires computing n² cross-correlation functions, Σ_t x̃_i*(t)x̃_j(t+τ), and diagonalizing the n × n matrix, B^{-1}A(τ). This step is unwieldy for three reasons: (i) the burden of recomputing cross-correlations for different values of τ, (ii) the high sampling rates required to avoid aliasing in upper frequency bands, and (iii) the poor scaling with the number of channels, n. We address these concerns in the following sections.
2.3 Extracting the fundamental
Further signal processing is required to create multichannel output whose periodic structure can be analyzed more efficiently. Our front end, shown in Fig. 1, is designed to analyze voiced speech with fundamental frequencies in the range f_0 ∈ [f_min, f_max], where f_max < 2f_min. The one-octave restriction on f_0 can be lifted by considering parallel, overlapping implementations of our front end for different frequency octaves.
The stages in our front end are inspired by important aspects of auditory processing[10].
Cochlear filtering is modeled by a Bark scale filterbank with contiguous passbands. Next,
we compute narrowband envelopes by passing the outputs of these filters through two nonlinearities: half-wave rectification and cube-root compression. These operations are commonly used to model the compressive unidirectional response of inner hair cells to movement along the basilar membrane. Evidence for comparison of envelopes in the peripheral
auditory system comes from experiments on comodulation masking release[11]. Thus, the
next stage of our front end creates a multichannel array of signals by pairwise multiplying envelopes from nearby parts of the frequency spectrum. Allowed pairs consist of any
two envelopes, including an envelope with itself, that might in principle contain energy
at two consecutive harmonics of the fundamental. Multiplying these harmonics-just like
multiplying two sine waves-produces intermodulation distortion with energy at the sum
and difference frequencies. The energy at the difference frequency creates a signature of
"residue" pitch at fa. The energy at the sum frequency is removed by bandpass filtering to
frequencies [fmin'!max] and aggressively downsampling to a sampling rate fs = 4fmin.
Finally, we use Hilbert transforms to compute the analytic signal in each channel, which
we call Xi(t).
In sum, the stages of the front end create an array of bandlimited analytic signals, x̃_i(t), that, while derived from different parts of the frequency spectrum, have energy concentrated at the fundamental frequency, f_0. Note that the bandlimiting of these channels to frequencies [f_min, f_max] where f_max < 2f_min removes the possibility that a channel contains periodic energy at any harmonic other than the fundamental. In voiced speech, this has the effect that periodic channels contain noisy sine waves with frequency f_0.
Figure 1: Signal processing in the front end (speech waveform → cochlear filterbank → half-wave rectification and cube-root compression → pairwise multiplication → bandlimiting and downsampling → compute analytic signals).
How can we combine these "baseband" signals to enhance the periodic signature of a
speaker's pitch? The nature of these signals leads to an important simplification of the
problem. As opposed to measuring the autocorrelation at lag τ, as in eq. (1), here we can measure the periodicity of the combined signal by a simple sinusoidal fit. Let Δ = 2π f_0/f_s denote the phase accumulated per sample by a sine wave with frequency f_0 at sampling rate f_s, and let s(t) = Σ_i w_i x̃_i(t) denote the combined signal. We measure the periodicity of the combined signal by

c(w, Δ) = [ Σ_t |s(t+1) − s(t)e^{iΔ}|² ] / [ Σ_t |s(t)|² ] = [ Σ_{ij} w_i* w_j A_{ij}(Δ) ] / [ Σ_{ij} w_i* w_j B_{ij} ],   (5)

where the matrix B is again formed by computing equal-time cross-correlations, and the matrix A(Δ) has elements

A_{ij}(Δ) = Σ_t [ x̃_i*(t)x̃_j(t) + x̃_i*(t+1)x̃_j(t+1) − e^{−iΔ} x̃_i*(t)x̃_j(t+1) − e^{iΔ} x̃_i*(t+1)x̃_j(t) ].

For fixed Δ, the optimal weights w_i are given by the eigenvector corresponding to the smallest eigenvalue of the matrix B^{-1}A(Δ).
Note that optimizing the cost function in eq. (5) over the phase, Δ, is equivalent to optimizing over the fundamental frequency, f_0, or the pitch period, τ. The structure of this cost function makes it much easier to optimize than the earlier measure of periodicity in eq. (1). For instance, the matrix elements A_{ij}(Δ) depend only on the equal-time and one-sample-lagged cross-correlations, which do not need to be recomputed for different values of Δ. Also, the channels x̃_i(t) appearing in this cost function are sampled at a clock rate on the order of f_0, as opposed to the original sampling rate of the speech. Thus, the few cross-correlations that are required can be computed with many fewer operations. These properties lead to a more efficient algorithm than the one in the previous section. The improved algorithm, working with baseband signals, estimates the pitch by optimizing eq. (5) over w and Δ for sliding windows of x̃_i(t). One problem still remains, however: the need to invert and diagonalize large numbers of n × n matrices, where the number of channels, n, may be prohibitively large. This final obstacle is removed in the next section.
2.4 Hierarchical analysis
We have developed a fast recursive algorithm to locate a good approximation to the minimum of eq. (5). The recursive algorithm works by constructing and diagonalizing 2 × 2 matrices, as opposed to the n × n matrices required for an exact solution. Our approximate
algorithm also provides a hierarchical analysis of the frequency spectrum that is interesting
in its own right. A sketch of the algorithm is given below.
The base step of the recursion estimates a value Δ_i for each individual channel by minimizing the error of a sinusoidal fit:

c_i(Δ_i) = [ Σ_t |x̃_i(t+1) − x̃_i(t)e^{iΔ_i}|² ] / [ Σ_t |x̃_i(t)|² ].   (6)

The minimum of the right hand side can be computed by setting its derivative to zero and solving a quadratic equation in the variable e^{iΔ_i}. If this minimum does not correspond to a legitimate value of f_0 ∈ [f_min, f_max], the i-th channel is discarded from future analysis, effectively setting its weight w_i to zero. Otherwise, the algorithm passes three arguments to a higher level of the recursion: the values of Δ_i and c_i(Δ_i), and the channel x̃_i(t) itself.
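For a single channel, the minimizing phase has a simple closed form. The sketch below uses Δ_i = arg Σ_t x̃_i*(t) x̃_i(t+1), which is our own way of carrying out the derivative-setting step the text describes; the function name and interface are assumptions.

```python
import numpy as np

def base_step(x, f_min, f_max, fs):
    """Base step of the recursion: fit a sine to one analytic channel x.
    Returns (Delta, cost), or None if Delta falls outside the legal range."""
    r = np.vdot(x[:-1], x[1:])               # sum_t conj(x(t)) x(t+1)
    delta = np.angle(r)                      # minimizing phase per sample
    f0 = delta * fs / (2 * np.pi)
    if not (f_min <= f0 <= f_max):
        return None                          # discard this channel
    err = np.sum(np.abs(x[1:] - x[:-1] * np.exp(1j * delta)) ** 2)
    cost = err / np.sum(np.abs(x) ** 2)      # c_i(Delta_i) of eq. (6)
    return delta, cost
```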
The recursive step of the algorithm takes as input two auditory "substreams", s_l(t) and s_u(t), derived from "lower" and "upper" parts of the frequency spectrum, and returns as output a single combined stream, s(t) = w_l s_l(t) + w_u s_u(t). In the first step of the recursion, the substreams correspond to individual channels x̃_i(t), while in the k-th step, they correspond to weighted combinations of 2^{k−1} channels. Associated with the substreams are phases, Δ_l and Δ_u, corresponding to estimates of f_0 from different parts of the frequency spectrum. The combined stream is formed by optimizing eq. (5) over the two-component weight vector, w = [w_l, w_u]. Note that the eigenvalue problem in this case involves only a 2 × 2 matrix, as opposed to an n × n matrix. The value of Δ determines the period of the combined stream; in practice, we optimize it over the interval defined by Δ_l and Δ_u. Conveniently, this interval tends to shrink at each level of the recursion.

Figure 2: Measures of pitch (f_0) and periodicity in nested regions of the frequency spectrum. The nodes in this tree describe periodic structure in the vowel /u/ from 400-1080 Hz. The nodes in the first (bottom) layer describe periodicity cues in individual channels; the nodes in the k-th layer measure cues integrated across 2^{k−1} channels.
The algorithm works in a bottom-up fashion . Channels are combined pairwise to form
streams, which are in turn combined pairwise to form new streams. Each stream has a
pitch period and a measure of periodicity computed by optimizing eq. (5). We order the
channels so that streams are derived from contiguous (or nearly contiguous) parts of the frequency spectrum. Fig. 2 shows partial output of this recursive procedure for a windowed
segment of the vowel luI. Note how as one ascends the tree, the combined streams have
greater periodicity and less variance in their pitch estimates. This shows explicitly how
the algorithm integrates information across narrow frequency bands of speech. The recursive output also suggests a useful representation for studying problems, such as speaker
separation, that depend on grouping different parts of the spectrum by their estimates of fo.
3
Experiments
We investigated the performance of our algorithm in simple experiments on synthesized
vowels. Fig. 3 shows results from experiments on the vowel /u/. The pitch contours in these plots were computed by the recursive algorithm in the previous section, with f_min = 80 Hz, f_max = 140 Hz, and 60 ms windows shifted in 10 ms intervals. The solid curves show
the estimated pitch contour for the clean wideband waveform, sampled at 8 kHz. The
left panel shows results for filtered versions of the vowel, bandlimited to four different
frequency octaves. These plots show that the algorithm can extract the pitch from different
parts of the frequency spectrum. The right panel shows the estimated pitch contours for the
vowel in 0 dB white noise and four types of -20 dB bandlimited noise. The signal-to-noise
ratios were computed from the ratio of (wideband) speech energy to noise energy. The
white noise at 0 dB presents the most difficulty; by contrast, the bandlimited noise leads
to relatively few failures, even at -20 dB. Overall, the algorithm is quite robust to noise and filtering. (Note that the particular frequency octaves used in these experiments had no
special relation to the filters in our front end.) The pitch contours could be further improved
by some form of smoothing, but this was not done for the plots shown.
Figure 3: Tracking the pitch of the vowel /u/ in corrupted speech. Left panel (bandlimited speech): pitch contours (Hz) versus time (sec) for the clean wideband waveform and for versions bandlimited to the octaves 250-500, 500-1000, 1000-2000, and 2000-4000 Hz. Right panel (noisy speech): contours for 0 dB white noise and for -20 dB bandlimited noise in the same four octaves.
4 Discussion
Many aspects of this work need refinement. Perhaps the most important is the initial filtering into narrow frequency bands. While narrow filters have the ability to resolve individual
harmonics, overly narrow filters, which reduce all speech input to sine waves, do not adequately differentiate periodic versus noisy excitation. We hope to replace the Bark scale filterbank in Fig. 1 by one that optimizes this tradeoff. We also want to incorporate adaptation and gain control into the front end, so as to improve the performance in nonstationary
listening conditions. Finally, beyond the problem of pitch tracking, we intend to develop
the hierarchical representation shown in Fig. 2 for harder problems in phoneme recognition
and speaker separation[7]. These harder problems seem to require a method, like ours, that
decomposes the frequency spectrum into its periodic and non-periodic components.
References
[1] Stevens, K. N. 1999. Acoustic Phonetics. MIT Press: Cambridge, MA.
[2] Miller, G. A. and Nicely, P. E. 1955. An analysis of perceptual confusions among some English
consonants. Journal of the Acoustical Society of America 27, 338-352.
[3] Bregman, A. S. 1994. Auditory Scene Analysis: the Perceptual Organization of Sound. MIT
Press: Cambridge, MA.
[4] Brokx, J. P. L. and Noteboom, S. G. 1982. Intonation and the perceptual separation of simultaneous voices. J. Phonetics 10, 23-26.
[5] Hess, W. 1983. Pitch Determination of Speech Signals: Algorithms and Devices. Springer-Verlag.
[6] Talkin, D. 1995. A Robust Algorithm for Pitch Tracking (RAPT). In Kleijn, W. B. and Paliwal,
K. K. (Eds.), Speech Coding and Synthesis, 497-518. Elsevier Science.
[7] Roweis, S. 2000. One microphone source separation. In Tresp, V., Dietterich, T., and Leen, T.
(Eds.), Advances in Neural Information Processing Systems 13. MIT Press: Cambridge, MA.
[8] Slaney, M. and Lyon, R. F. 1990. A perceptual pitch detector. In Proc. ICASSP-90, 1, 357-360.
[9] Molgedey, L. and Schuster, H. G. 1994. Separation of a mixture of independent signals using
time delayed correlations. Phys. Rev. Lett. 72(23), 3634-3637.
[10] Hartmann, W. A. 1997. Signals, Sound, and Sensation. Springer-Verlag.
[11] Hall, J. W., Haggard, M. P., and Fernandes, M. A. 1984. Detection in noise by spectro-temporal pattern analysis. J. Acoust. Soc. Am. 76, 50-56.
and Neural Networks
A. G. Barto
Dept. of Computer and
Information Science
Univ. of Massachusetts
Amherst, MA 01003
R. S. Sutton
GTE Laboratories Inc.
Waltham, MA 02254
c.
J. C. H. Watkins
25B Framfield
Highbury, London
N51UU
ABSTRACT
Decision making tasks that involve delayed consequences are very
common yet difficult to address with supervised learning methods.
If there is an accurate model of the underlying dynamical system,
then these tasks can be formulated as sequential decision problems
and solved by Dynamic Programming. This paper discusses reinforcement learning in terms of the sequential decision framework
and shows how a learning algorithm similar to the one implemented
by the Adaptive Critic Element used in the pole-balancer of Barto,
Sutton, and Anderson (1983), and further developed by Sutton
(1984), fits into this framework. Adaptive neural networks can
play significant roles as modules for approximating the functions
required for solving sequential decision problems.
1
INTRODUCTION
Most neural network research on learning assumes the existence of a supervisor or
teacher knowledgeable enough to supply desired, or target, network outputs during
training. These network learning algorithms are function approximation methods
having various useful properties. Other neural network research addresses the question of where the training information might come from. Typical of this research
is that into reinforcement learning systems; these systems learn without detailed
instruction about how to interact successfully with reactive environments. Learning tasks involving delays between actions and their consequences are particularly
difficult to address with supervised learning methods, and special reinforcement
learning algorithms have been developed to handle them. In this paper, reinforcement learning is related to the theory of sequential decision problems and to the
computational methods known as Dynamic Programming (DP). DP methods are
not learning methods because they rely on complete prior knowledge of the task,
but their theory is nevertheless relevant for understanding and developing learning
methods.
An example of a sequential decision problem involving delayed consequences is the
version of the pole-balancing problem studied by Barto, Sutton, and Anderson
(1983). In this problem the consequences of control decisions are not immediately
available because training information comes only in the form of a "failure signal"
occurring when the pole falls past a critical angle or when the cart hits an end of
the track. The learning system used by Barto et al. (1983), and subsequently systematically explored by Sutton (1984), consists of two different neuron-like adaptive
elements: an Associative Search Element (ASE), which implemented and adjusted
the control rule, or decision policy, and an Adaptive Critic Element (ACE), which
used the failure signal to learn how to provide useful moment-to-moment evaluation
of control decisions. The focus of this paper is the algorithm implemented by the
ACE: What computational task does this algorithm solve, and how does it solve it?
Sutton (1988) analyzed a class of learning rules which includes the algorithm used
by the ACE, calling them Temporal Difference, or TD, algorithms. Although Sutton briefly discussed the relationship between TD algorithms and DP, he did not
develop this perspective. Here, we discuss an algorithm slightly different from the
one implemented by the ACE and call it simply the "TD algorithm" (although the
class of TD algorithms includes others as well). The earliest use of a TD algorithm
that we know of was by Samuel (1959) in his checkers player. Werbos (1977) was
the first we know of to suggest such algorithms in the context of DP, calling them
"heuristic dynamic programming" methods. The connection to dynamic programming has recently been extensively explored by Watkins (1989), who uses the term
"incremental dynamic programming." Also related is the "bucket brigade" used
in classifier systems (see Liepins et al., 1989), the adaptive controller developed by
Witten (1977), and certain animal learning models (see Sutton and Barto, to appear). Barto, Sutton, and Watkins (to appear) discuss the relationship between TD
algorithms and DP more extensively than is possible here and provide references to
other related research.
2
OPTIMIZING DELAYED CONSEQUENCES
Many problems require making decisions whose consequences emerge over time periods of variable and uncertain duration. Decision-making strategies must be formed
that take into account expectations of both the short-term and long-term consequences of decisions. The theory of sequential decision problems is highly developed
and includes formulations of both deterministic and stochastic problems (the books
by Bertsekas, 1976, and Ross, 1983, are two of the many relevant texts). This theory concerns problems such as the following special case of a stochastic problem.
A decision maker (DM) interacts with a discrete-time stochastic dynamical system
in such a way that, at each time step, the DM observes the system's current state
and selects an action. After the action is performed, the DM receives (at the next
time step) a certain amount of payoff that depends on the action and the current
state, and the system makes a transition to a new state determined by the current
state, the action, and random disturbances. Upon observing the new state, the DM
chooses another action and continues in this manner for a sequence of time steps.
The objective of the task is to form a rule for the DM to use in selecting actions,
called a policy, that maximizes a measure of the total amount of payoff accumulated
over time. The amount of time over which this measure is computed is the horizon
of the problem, and a maximizing policy is an optimal policy. One commonly studied measure of cumulative payoff is the expected infinite-horizon discounted return,
defined below. Because the objective is to maximize a measure of cumulative payoff,
both short- and long-term consequences of decisions are important. Decisions that
produce high immediate payoff may prevent high payoff from being received later
on, and hence such decisions should not necessarily be included in optimal policies.
More formally (following the presentation of Ross, 1983), a policy is a mapping, denoted π, that assigns an action to each state of the underlying system (for simplicity, here we consider only the special case of deterministic policies). Let x_t denote the system state at time step t; if the DM uses policy π, the action it takes at step t is a_t = π(x_t). After the action is taken, the system makes a transition from state x = x_t to state y = x_{t+1} with probability P_{xy}(a_t). At time step t + 1, the DM receives a payoff, r_{t+1}, with expected value R(x_t, a_t). For any policy π and state x, one can define the expected infinite-horizon discounted return (which we simply call the expected return) under the condition that the system begins in state x, the DM continues to use policy π throughout the future, and γ, 0 ≤ γ < 1, is the discount factor:

E_π { Σ_{t=0}^∞ γ^t r_{t+1} | x_0 = x },   (1)

where x_0 is the initial system state, and E_π is the expectation assuming the DM uses policy π. The objective of the decision problem is to form a policy that maximizes the expected return defined by Equation 1 for each state x.
3
DYNAMIC PROGRAMMING
Dynamic Programming (DP) is a collection of computational methods for solving
stochastic sequential decision problems. These methods require a model of the
dynamical system underlying the decision problem in the form of the state transition probabilities, P_{xy}(a), for all states x and y and actions a, as well as knowledge of the
function, R( x, a), giving the payoff expectations for all states x and actions a. There
are several different DP methods, all of which are iterative methods for computing
optimal policies, and all of which compute sequences of different types of evaluation
functions. Most relevant to the TD algorithm is the evaluation function for a given
policy. This function assigns to each state the expected value of the return assuming
the problem starts in that state and the given policy is used. Specifically, for policy
π and discount factor γ, the evaluation function, V_γ^π, assigns to each state, x, the expected return given the initial state x:

V_γ^π(x) = E_π { Σ_{t=0}^∞ γ^t r_{t+1} | x_0 = x }.
For each state, the evaluation function provides a prediction of the return that will
accrue throughout the future whenever this state is encountered if the given policy
is followed. If one can compute the evaluation function for a state merely from
observing that state, this prediction is effectively available immediately upon the
system entering that state. Evaluation functions provide the means for assessing
the temporally extended consequences of decisions in a temporally local manner.
It can be shown (e.g., Ross, 1983) that the evaluation function V_γ^π is the unique function satisfying the following condition for each state x:

V_γ^π(x) = R(x, π(x)) + γ Σ_y P_{xy}(π(x)) V_γ^π(y).   (2)
DP methods for solving this system of equations (i.e., for determining V_γ^π) typically proceed through successive approximations. For dynamical systems with large
state sets the solution requires considerable computation. For systems with continuous state spaces, DP methods require approximations of evaluation functions
(and also of policies). In their simplest form, DP methods rely on lookup-table
representations of these functions, based on discretizations of the state space in
continuous cases, and are therefore exponential in the state space dimension. In
fact, Richard Bellman, who introduced the term Dynamic Programming (Bellman,
1957), also coined the phrase "curse of dimensionality" to describe the difficulty of
representing these functions for use in DP. Consequently, any advance in function
approximation methods, whether due to theoretical insights or to the development
of hardware having high speed and high capacity, can be used to great advantage
in DP. Artificial neural networks therefore have natural applications in DP.
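To make the successive-approximation idea concrete, here is a minimal tabular sketch of evaluating a fixed policy under eq. (2). The data layout (P as per-action transition matrices, R as a payoff table) is an assumption for illustration; it presumes the complete model knowledge that DP requires.

```python
import numpy as np

def evaluate_policy(P, R, pi, gamma, tol=1e-8):
    """Successive approximation to the evaluation function of eq. (2).
    P[a][x, y]: transition probabilities; R[x, a]: expected payoffs;
    pi[x]: the action the deterministic policy takes in state x."""
    n = len(pi)
    V = np.zeros(n)
    while True:
        # One sweep of the fixed-point iteration for eq. (2).
        V_new = np.array([R[x, pi[x]] + gamma * P[pi[x]][x] @ V
                          for x in range(n)])
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new
```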
Because DP methods rely on complete prior knowledge of the decision problem,
they are not learning methods. However, DP methods and reinforcement learning
methods are closely related, and many concepts from DP are relevant to the case
of incomplete prior knowledge. Payoff values correspond to the available evaluation
signals (the "primary reinforcers"), and the values of an evaluation function correspond to improved evaluation signals (the "secondary reinforcers") such a those
produced by the ACE. In the simplest reinforcement learning systems, the role of
the dynamical system model required by DP is played by the real system itself. A
reinforcement learning system improves performance by interacting directly with
the real system. A system model is not required. 1
1 Although reinforcement learning methods can greatly benefit from such models (Sutton, to
appear).
4
THE TD ALGORITHM
The TD algorithm approximates V_γ^π for a given policy π in the absence of knowledge
of the transition probabilities and the function determining expected payoff values.
Assume that each system state is represented by a feature vector, and that V_γ^π can
be approximated adequately as a function in a class of parameterized functions of
the feature vectors, such as a class of functions parameterized by the connection
weights of a neural network. Letting φ(x_t) denote the feature vector representing state x_t, let the estimated evaluation of x_t be

V_t(x_t) = f(v_t, φ(x_t)),

where v_t is the weight vector at step t and f depends on the class of models assumed. In terms of a neural network, φ(x_t) is the input vector at time t, and V_t(x_t) is the output at time t, assuming no delay across the network.
If we knew the true evaluations of the states, then we could define as an error the
difference between the true evaluations and the estimated evaluations and adjust
the weight vector v_t according to this error using supervised-learning methods.
However, it is unrealistic to assume such knowledge in sequential decision tasks.
Instead the TD algorithm uses the following update rule to adjust the weight vector:
v_{t+1} = v_t + α [ r_{t+1} + γ V_t(x_{t+1}) − V_t(x_t) ] (∂f/∂v)(v_t, φ(x_t)).   (3)

In this equation, α is a positive step-size parameter, r_{t+1} is the payoff received at time step t + 1, V_t(x_{t+1}) is the estimated evaluation of the state at t + 1 using the weight vector v_t (i.e., V_t(x_{t+1}) = f(v_t, φ(x_{t+1}))),² and (∂f/∂v)(v_t, φ(x_t)) is the gradient of f with respect to v_t evaluated at φ(x_t). If f is the inner product of v_t and φ(x_t), this gradient is just φ(x_t), as it is for a single linear ACE element. In the case of an appropriate feedforward network, this gradient can be computed by the error backpropagation method as illustrated by Anderson (1986). One can think of Equation 3 as the usual supervised-learning rule using r_{t+1} + γV_t(x_{t+1}) as the "target" output in the error term.
To understand why the TD algorithm uses this target, assume that the DM is
using a fixed policy for selecting actions. The output of the critic at time step t,
V_t(x_t), is intended to be a prediction of the return that will accrue after time step t. Specifically, V_t(x_t) should be an estimate for the expected value of

r_{t+1} + γ r_{t+2} + γ² r_{t+3} + ⋯,

where r_{t+k} is the payoff received at time step t + k. One way to adjust the weights
would be to wait forever and use the actual return as a target. More practically,
² Instead of using v_t to evaluate the state at t + 1, the learning rule used by the ACE by Barto et al. (1983) uses v_{t+1}. This closely approximates the algorithm described here if the weights change slowly.
one could wait n time steps and use what Watkins (1989) calls the n-step truncated
return as a target:
r_{t+1} + γ r_{t+2} + γ² r_{t+3} + ⋯ + γ^{n−1} r_{t+n}.

However, it is possible to do better than this. One can use what Watkins calls the corrected n-step truncated return as a target:

r_{t+1} + γ r_{t+2} + γ² r_{t+3} + ⋯ + γ^{n−1} r_{t+n} + γⁿ V_t(x_{t+n}),

where V_t(x_{t+n}) is the estimated evaluation of state x_{t+n} using the weight values at time t. Because V_t(x_{t+n}) is an estimate of the expected return from step t + n + 1 onwards, γⁿ V_t(x_{t+n}) is an estimate for the missing terms in the n-step truncated return from state x_t. To see this, note that V_t(x_{t+n}) approximates

r_{t+n+1} + γ r_{t+n+2} + γ² r_{t+n+3} + ⋯.

Multiplying through by γⁿ, this equals

γⁿ r_{t+n+1} + γ^{n+1} r_{t+n+2} + ⋯,

which is the part of the series missing from the n-step truncated return. The weight update rule for the TD algorithm (Equation 3) uses the corrected 1-step truncated
return as a target, and using the n-step truncated return for n > 1 produces obvious
generalizations of this learning rule at the cost of requiring longer delay lines for
implementation.
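The corrected truncated return is simple to compute once the payoffs and the critic's end-state estimate are available; the following short sketch makes the formula concrete (the function name and argument layout are our own).

```python
import numpy as np

def corrected_truncated_return(rewards, v_end, gamma, n):
    """Corrected n-step truncated return:
    r_{t+1} + gamma r_{t+2} + ... + gamma^{n-1} r_{t+n} + gamma^n V_t(x_{t+n}).
    `rewards` holds r_{t+1}..r_{t+n}; `v_end` is V_t(x_{t+n})."""
    discounts = gamma ** np.arange(n)
    return float(discounts @ np.asarray(rewards[:n])) + gamma**n * v_end
```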
The above justification of the TD algorithm is based on the assumption that the
critic's output V_t(x) is in fact a useful estimate of the expected return starting
from any state x. Whether this estimate is good or bad, however, the expected
value of the n-step corrected truncated return is always better (Watkins, 1989).
Intuitively, this is true because the n-step corrected truncated return includes more
data, namely the payoffs r_{t+k}, k = 1, ..., n. Surprisingly, as Sutton (1988) shows,
the corrected truncated return is often a better estimate of the actual expected
return than is the actual return itself.
Another way to explain the TD algorithm is to refer to the system of equations from
DP (Equation 2), which the evaluation function for a given policy must satisfy. One
can obtain an error based on how much the current estimated evaluation function, V_t, departs from the desired condition given by Equation 2 for the current state, x_t:

R(x_t, a_t) + γ Σ_y P_{x_t,y}(a_t) V_t(y) − V_t(x_t).

But the function R and the transition probabilities, P_{x_t,y}(a_t), are not known. Consequently, one substitutes r_{t+1}, the payoff actually received at step t + 1, for the expected value of this payoff, R(x_t, a_t), and substitutes the current estimated evaluation of the state actually reached in one step for the expectation of the estimated evaluations of states reachable in one step. That is, one uses V_t(x_{t+1}) in place of Σ_y P_{x_t,y}(a_t) V_t(y). Using the resulting error in the usual supervised-learning rule
yields the TD algorithm (Equation 3).
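A minimal sketch of the resulting update for a linear critic follows; the paper allows any parameterized model (e.g., a multilayer network), so the linear form and the names below are illustrative assumptions of ours:

```python
import numpy as np

def td0_update(w, features, x_t, x_t1, r_t1, gamma, alpha):
    """One TD step for a linear critic V(x) = w . phi(x).

    The sampled Bellman error r_{t+1} + gamma*V(x_{t+1}) - V(x_t)
    plays the role of the supervised error term, and phi(x_t) is
    the gradient of V(x_t) with respect to the weights w.
    """
    phi_t, phi_t1 = features(x_t), features(x_t1)
    delta = r_t1 + gamma * np.dot(w, phi_t1) - np.dot(w, phi_t)
    return w + alpha * delta * phi_t
```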
5 USING THE TD ALGORITHM
We have described the TD algorithm above as a method for approximating the
evaluation function associated with a fixed policy. However, if the fixed policy and
the underlying dynamical system are viewed together as an autonomous dynamical
system, i.e., a system without input, then the TD algorithm can be regarded purely
as a prediction method, a view taken by Sutton (1988). The predicted quantity
can be a discounted sum of any observable signal, not just payoff. For example, in
speech recognition, the signal might give the identity of a word at the word's end,
and the prediction would provide an anticipatory indication of the word's identity.
Unlike other adaptive prediction methods, the TD algorithm does not require fixing
a prediction time interval.
More relevant to the topic of this paper, the TD algorithm can be used as a component in methods for improving policies. The pole-balancing system of Barto et
al. (1983; see also Sutton, 1984) provides one example in which the policy changes
while the TD algorithm operates. The ASE of that system changes the policy by attempting to improve it according to the current estimated evaluation function. This
approach is most closely related to the policy improvement algorithm of DP (e.g.,
see Bertsekas, 1976; Ross, 1983) and is one of several ways to use TD-like methods
for improving policies; others are described by Watkins (1989) and Werbos (1987).
6 CONCLUSION
Decision making problems involving delayed consequences can be formulated as
stochastic sequential decision problems and solved by DP if there is a complete
and accurate model of the underlying dynamical system. Due to the computational
cost of exact DP methods and their reliance on complete and exact models, there
is a need for methods that can provide approximate solutions and that do not require this amount of prior knowledge. The TD algorithm is an incremental, on-line
method for approximating the evaluation function associated with a given policy
that does not require a system model. The TD algorithm directly adjusts a parameterized model of the evaluation function, a model that can take the form of
an artificial neural network. The TD learning process is a Monte-Carlo approximation to a successive approximation method of DP. This perspective provides the
necessary framework for extending the theory of TD algorithms as well as that of
other algorithms used in reinforcement learning. Adaptive neural networks can play
significant roles as modules for approximating the required functions.
Acknowledgements
A. G. Barto's contribution was supported by the Air Force Office of Scientific Research, Bolling AFB, through grants AFOSR-87-0030 and AFOSR-89-0526.
References
C. W. Anderson. (1986) Learning and Problem Solving with Multilayer Connectionist Systems. PhD thesis, University of Massachusetts, Amherst, MA.
A. G. Barto, R. S. Sutton, and C. W. Anderson. (1983) Neuronlike elements that
can solve difficult learning control problems. IEEE Transactions on Systems, Man,
and Cybernetics, 13:835-846.
A. G. Barto, R. S. Sutton, and C. Watkins. (to appear) Learning and sequential
decision making. In M. Gabriel and J. W. Moore, editors, Learning and Computational Neuroscience. The MIT Press, Cambridge, MA.
R. E. Bellman. (1957) Dynamic Programming. Princeton University Press, Princeton, NJ.
D. P. Bertsekas. (1976) Dynamic Programming and Stochastic Control. Academic
Press, New York.
Liepins, G. E., Hilliard, M.R., Palmer, M., and Rangarajan, G. (1989) Alternatives
for classifier system credit assignment. Proceedings of the Eleventh International
Joint Conference on Artificial Intelligence, 756-761.
S. Ross. (1983) Introduction to Stochastic Dynamic Programming. Academic Press,
New York.
A. L. Samuel. (1959) Some studies in machine learning using the game of checkers.
IBM Journal of Research and Development, 210-229.
R. S. Sutton. (1984) Temporal Credit Assignment in Reinforcement Learning. PhD
thesis, University of Massachusetts, Amherst, MA.
R. S. Sutton. (1988) Learning to predict by the methods of temporal differences.
Machine Learning, 3:9-44.
R. S. Sutton (to appear) First results with Dyna, an integrated architecture for
learning planning and reacting. Proceedings of the 1990 AAAI Symposium on Planning in Uncertain, Unpredictable, or Changing Environments.
R. S. Sutton and A. G. Barto. (to appear) Time-derivative models of Pavlovian
reinforcement. In M. Gabriel and J. W. Moore, editors, Learning and Computational
Neuroscience. The MIT Press, Cambridge, MA.
C. J. C. H. Watkins. (1989) Learning from Delayed Rewards. PhD thesis, Cambridge University, Cambridge, England.
P. J. Werbos. (1977) Advanced forecasting methods for global crisis warning and
models of intelligence. General Systems Yearbook, 22:25-38.
P. J. Werbos. (1987) Building and understanding adaptive systems: A statistical/numerical approach to factory automation and brain research. IEEE Transactions on Systems, Man, and Cybernetics, 17:7-20.
I. H. Witten. (1977) An adaptive optimal controller for discrete-time Markov
environments. Information and Control, 34:286-295.
Discovering Hidden Variables:
A Structure-Based Approach
Gal Elidan
Noam Lotner
Nir Friedman
Daphne Koller
Hebrew University
Stanford University
{galel,noaml,nir}@cs.huji.ac.il
koller@cs.stanford.edu
Abstract
A serious problem in learning probabilistic models is the presence of hidden variables. These variables are not observed, yet interact with several
of the observed variables. As such, they induce seemingly complex dependencies among the latter. In recent years, much attention has been
devoted to the development of algorithms for learning parameters, and
in some cases structure, in the presence of hidden variables. In this paper, we address the related problem of detecting hidden variables that
interact with the observed variables. This problem is of interest both for
improving our understanding of the domain and as a preliminary step that
guides the learning procedure towards promising models. A very natural
approach is to search for "structural signatures" of hidden variables substructures in the learned network that tend to suggest the presence of
a hidden variable. We make this basic idea concrete, and show how to
integrate it with structure-search algorithms. We evaluate this method on
several synthetic and real-life datasets, and show that it performs surprisingly well.
1 Introduction
In the last decade there has been a great deal of research focused on the problem of learning
Bayesian networks (BNs) from data (e.g., [7]). An important issue is the existence of
hidden variables that are never observed, yet interact with observed variables. Naively, one
might think that, if a variable is never observed, we can simply ignore its existence. At
a certain level, this intuition is correct. We can construct a network over the observable
variables which is an I-map for the marginal distribution over these variables, i.e., captures
all the dependencies among the observed variables. However, this approach is weak from a
variety of perspectives. Consider, for example, the network in Figure 1(a). Assume that the
data is generated from such a dependency model, but that the node H is hidden. A minimal
I-map for the marginal distribution is shown in Figure 1(b). From a pure representation
perspective, this network is clearly less useful. It contains 12 edges rather than 6, and the
nodes have much bigger families. Hence, as a representation of the process in the domain,
it is much less meaningful. From the perspective of learning these networks from data, the
marginalized network has significant disadvantages. Assuming all the variables are binary,
it uses 59 parameters rather than 17, leading to substantial data fragmentation and thereby
to nonrobust parameter estimates. Moreover, with limited amounts of data the induced
network will usually omit several of the dependencies in the model.
When a hidden variable is known to exist, we can introduce it into the network and apply known BN learning algorithms. If the network structure is known, algorithms such as
Figure 1: Hidden variable simplifies structure. (a) with hidden variable; (b) no hidden variable.
EM [3, 9] or gradient ascent [2] can learn parameters. If the structure is not known, the
Structural EM (SEM) algorithm of [4] can be used to perform structure learning with missing data. However, we cannot simply introduce a "floating" hidden variable and expect
SEM to place it correctly. Hence, both of these algorithms assume that some other mechanism introduces the hidden variable in approximately the right location in the network.
Somewhat surprisingly, only little work has been done on the problem of automatically
detecting that a hidden variable might be present in a certain position in the network.
In this paper, we investigate what is arguably the most straightforward approach for inducing the existence of a hidden variable. This approach, briefly mentioned in [7], is roughly
as follows: We begin by using standard Bayesian model selection algorithms to learn a
structure over the observable variables. We then search the structure for substructures,
which we call semi-cliques, that seem as if they might be induced by a hidden variable.
We temporarily introduce the hidden variable in a way that breaks up the clique, and then
continue learning based on that new structure. If the resulting structure has a better score,
we keep the hidden variable. Surprisingly, this very basic technique does not seem to have
been pursued. (The approach of [10] is similar on the surface, but is actually quite different;
see Section 5.) We provide a concrete and efficient instantiation of this approach and show
how to integrate it with existing learning algorithms such as SEM. We apply our approach
to several synthetic and real datasets, and show that it often provides a good initial placement for the introduced hidden variable. We can therefore use it as a preprocessing step for
SEM, substantially reducing the SEM search space.
2 Learning Structure of Bayesian Networks
Consider a finite set $X = \{X_1, \ldots, X_n\}$ of discrete random variables where each variable $X_i$ may take on values from a finite set. A Bayesian network is an annotated directed acyclic graph that encodes a joint probability distribution over $X$. The nodes of the graph correspond to the random variables $X_1, \ldots, X_n$. Each node is annotated with a conditional probability distribution that represents $P(X_i \mid \mathrm{Pa}(X_i))$, where $\mathrm{Pa}(X_i)$ denotes the parents of $X_i$ in $G$. A Bayesian network $B$ specifies a unique joint probability distribution over $X$ given by: $P_B(X_1, \ldots, X_n) = \prod_{i=1}^n P_B(X_i \mid \mathrm{Pa}(X_i))$.
The problem of learning a Bayesian network can be stated as follows. Given a training
set $D = \{x[1], \ldots, x[M]\}$ of instances of $X$, find a network $B$ that best matches $D$. The
common approach to this problem is to introduce a scoring function that evaluates each
network with respect to the training data, and then to search for the best network according
to this score. The scoring function most commonly used to learn Bayesian networks is the
Bayesian scoring metric [8]. Given a scoring function, the structure learning task reduces
to a problem of searching over the combinatorial space of structures for the structure that
maximizes the score. The standard approach is to use a local search procedure that changes
one arc at a time. Greedy hill-climbing with random restarts is typically used.
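As an illustration, the greedy hill-climbing loop can be sketched as follows; the `score` and `neighbors` callables are hypothetical placeholders of ours (a Bayesian scoring metric and a generator of single-arc add/delete/reverse modifications), not part of the original text:

```python
def greedy_hill_climb(score, initial_net, neighbors):
    """Greedy structure search: repeatedly move to the best-scoring
    network one arc-change away, stopping at a plateau (a local
    maximum from which no single change improves the score)."""
    current, current_score = initial_net, score(initial_net)
    while True:
        best, best_score = None, current_score
        for candidate in neighbors(current):  # single arc add/delete/reverse
            s = score(candidate)
            if s > best_score:
                best, best_score = candidate, s
        if best is None:                      # plateau reached
            return current
        current, current_score = best, best_score
```

Random restarts, as mentioned above, would simply rerun this loop from several perturbed initial networks and keep the best result.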
The problem of learning in the presence of partially observable data (or known hidden
variables) is computationally and conceptually much harder. In the case of a fixed network
structure, the Expectation Maximization (EM) algorithm of [3] can be used to search for a
(local) maximum likelihood (or maximum a posteriori) assignment to the parameters. The
structural EM algorithm of [4] extends this idea to the realm of structure search. Roughly
speaking, the algorithm uses an E-step as part of structure search. The current model structure, as well as its parameters, is used for computing expected sufficient statistics for
other candidate structures. The candidate structures are scored based on these expected
sufficient statistics. The search algorithm then moves to a new candidate structure. We can
then run EM again, for our new structure, to get the desired expected sufficient statistics.
3 Detecting Hidden Variables
We motivate our approach for detecting hidden variables by considering the simple example
discussed in the introduction. Consider the distribution represented by the network shown
in Figure 1(a), where $H$ is a hidden variable. The variable $H$ was the keystone for the conditional independence assumptions in this network. As a consequence, the marginal distribution over the remaining variables has almost no structure: each $Y_j$ depends on all the $X_i$'s, and the $Y_j$'s themselves are also fully connected. A minimal I-map for this distribution is shown in Figure 1(b). We can show that this phenomenon is a typical effect of removing a hidden variable:
Proposition 3.1: Let $G$ be a network over the variables $X_1, \ldots, X_n, H$. Let $I$ be the conditional independence statements (statements of the form $I(X; Y \mid Z)$) that are implied by $G$ and do not involve $H$. Let $G'$ be the graph over $X_1, \ldots, X_n$ that contains an edge from $X_i$ to $X_j$ whenever $G$ contains such an edge, and in addition: $G'$ contains a clique over the children $Y_j$ of $H$, and $G'$ contains an edge from any parent $X_i$ of $H$ to any child $Y_j$ of $H$. Then $G'$ is a minimal I-map for $I$.
We want to define a procedure that will suggest candidate hidden variables by finding
structures of this type in the context of a learning algorithm. We will apply our procedure to
networks induced by standard structure learning algorithms [7]. Clearly, it is unreasonable
to hope that there is an exact mapping between substructures that have the form described in
Proposition 3.1 and hidden variables. Learned networks are rarely an exact reflection of the
minimal I-map for the underlying distribution. We therefore use a somewhat more flexible
definition, which allows us to detect potential hidden variables. For a node $X$ and a set of nodes $\mathbf{Y}$, we define $\Delta(X; \mathbf{Y})$ to be the set of neighbors of $X$ (parents or children) within the subset $\mathbf{Y}$. We define a semi-clique to be a set of nodes $Q$ where each node $X \in Q$ is linked to at least half of $Q$: $|\Delta(X; Q)| \geq \frac{1}{2}|Q|$. (This revised definition is the strictest criterion that still accepts a minimally relaxed 4-clique, one with just a single neighbor missing.)
We propose a simple heuristic for finding semi-cliques in the graph. We first observe that
each semi-clique must contain a seed which is easy to spot; this seed is a 3-vertex clique.
Proposition 3.2: Any semi-clique of size 4 or more contains a clique of size 3.
The first phase of the algorithm is a search for all 3-cliques in the graph. The algorithm then
tries to expand each of them into a maximal semi-clique in a greedy way. More precisely,
at each iteration the algorithm attempts to add a node to the "current" semi-clique. If the
expanded set satisfies the semi-clique property, then it is set as the new "current" clique.
These tests are repeated until no additional variable can be added to the semi-clique. The
algorithm outputs the expansions found based on the different 3-clique "seeds". We note
that this greedy procedure does not find all semi-cliques. The exceptions are typically
two semi-cliques that are joined by a small number of edges, making a larger legal semiclique. These cases are of less interest to us, because they are less likely to arise from the
marginalization of a hidden variable.
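For concreteness, the seed-and-expand procedure can be sketched as follows. This is an illustrative Python sketch of ours, assuming the learned network is given as an undirected adjacency structure; the enumeration of 3-clique seeds is deliberately naive:

```python
from itertools import combinations

def find_semi_cliques(adj):
    """Greedy semi-clique detection. `adj` maps each node to the set
    of its neighbors (parents or children) in the learned network.
    Every 3-clique is used as a seed and grown one node at a time
    while each member stays linked to at least half of the set."""
    def is_semi_clique(q):
        return all(2 * len(adj[x] & q) >= len(q) for x in q)

    seeds = [set(c) for c in combinations(adj, 3)
             if all(b in adj[a] for a, b in combinations(c, 2))]
    found = []
    for q in seeds:
        grew = True
        while grew:
            grew = False
            for v in set(adj) - q:
                if is_semi_clique(q | {v}):
                    q, grew = q | {v}, True
                    break
        if q not in found:
            found.append(q)
    return found
```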
In the second phase, we convert each of the semi-cliques to a structure candidate containing
a new hidden node. Suppose Q is a semi-clique. Our construction introduces a new variable
H, and replaces all of the incoming edges into variables in Q by edges from H. Parents of
nodes in Q are then made to be parents of H, unless the edge results in a cycle. This process
results in the removal of all intra-clique edges and makes H a proxy for all "outside"
influences on the nodes in the clique.
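The conversion to a structure candidate can be sketched in the same style (again our illustration, not the paper's code; here the network is directed, given as a node-to-parents map, and the ancestor test is a plain graph walk):

```python
def introduce_hidden(parents, clique, h="H"):
    """Turn a semi-clique into a structure candidate: a new hidden
    node H replaces all incoming edges into the clique, the former
    parents of its members become parents of H (skipping any edge
    that would close a directed cycle), and H becomes the sole
    parent of each clique member."""
    new = {x: set(ps) for x, ps in parents.items()}
    outside = set().union(*(new[x] for x in clique)) - set(clique)
    for x in clique:
        new[x] = {h}                     # H is a proxy for outside influence
    new[h] = set()
    for p in sorted(outside):
        if not is_ancestor(new, h, p):   # adding p -> H must stay acyclic
            new[h].add(p)
    return new

def is_ancestor(parents, anc, node):
    """True if `anc` is an ancestor of `node` in the node -> parents map."""
    stack, seen = [node], set()
    while stack:
        for p in parents.get(stack.pop(), ()):
            if p == anc:
                return True
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return False
```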
In the third phase, we evaluate each of these candidate structures in attempt to find the
most useful hidden variable. There are several possible ways in which this candidate can
be utilized by the learning algorithm. We propose three approaches. The simplest assumes
that the network structure, after the introduction of the hidden variable, is fixed. In other
words, we assume that the "true" structure of the network is indeed the result of applying
our transformation to the input network (which was produced by the first stage of learning).
We can then simply fit the parameters using EM, and score the resulting network.
We can improve this idea substantially by noting that our simple transformation of the
semi-clique does not typically recover the true underlying structure of the original model.
In our construction, we chose to make the hidden variable H the parent of all the nodes in
the semi-clique, and eliminate all other incoming edges to variables in the clique. Clearly,
this construction is very limited. There might well be cases where some of the edges in the
clique are warranted even in the presence of the hidden variable. It might also be the case
that some of the edges from H to the semi-clique variables should be reversed. Finally,
it is plausible that some nodes were included in the semi-clique accidentally, and should
not be directly correlated with H . We could therefore allow the learning algorithm - the
SEM algorithm of [4] - to adapt the structure after the hidden variable is introduced. One
approach is to use SEM to fine-tune our model for the part of the network we just changed:
the variables in the semi-clique and the new hidden variable. Therefore, in the second
approach we fix the remaining structure, and consider only adaptations of the edges within
this set of variables. This restriction substantially reduces the search space for the SEM
algorithm. The third approach allows full structural adaptation over the entire network.
This offers the SEM algorithm greater flexibility, but is computationally more expensive.
To summarize our approach: In the first phase we analyze the network learned using conventional structure search to find semi-cliques that indicate potential locations of hidden
variables. In the second phase we convert these semi-cliques into structure candidates
(each containing a new hidden variable). Finally, in the third phase we evaluate each of
these structures (possibly using them as a seed for further search) and return the best scoring network we find.
The main assumption of our approach is that we can find "structural signatures" of hidden
variables via semi-cliques. As we discussed above, it is unrealistic to expect the learned
network G to have exactly the structure described in Proposition 3.1. On the one hand,
learned networks often have spurious edges resulting from statistical noise, which might
cause fragments of the network to resemble these structures even if no hidden variable is
involved. On the other hand, there might be edges that are missing or reversed. Spurious
edges are less problematic. At worst, they will lead us to propose a spurious hidden variable
which will be eliminated by the subsequent evaluation step. Our definition of semi-clique,
with its more flexible structure, partially deals with the problem of missing edges. However,
if our data is very sparse, so that standard learning algorithms will be very reluctant to
produce clusters with many edges, the approach we propose will not work.
4 Experimental Results
Our aim is to evaluate the success of our procedure in detecting hidden variables. To do
so, we evaluated our procedure on both synthetic and real-life data sets. The synthetic data
sets were sampled from Bayesian networks that appear in the literature. We then created a
training set in which we "hid" one variable. We chose to hide variables that are "central"
in the network (i.e., variables that are the parents of several children). The synthetic data
sets allow for a controlled evaluation, and for generating training and testing data sets of
any desired size. However, the data is generated from a distribution that indeed has only
a single hidden variable. A more realistic benchmark is real data, that may contain many
confounding influences. In this case, of course, we do not have a generating model to
compare against.
Insurance: A 27-node network developed to evaluate driver's insurance applications [2].
We hid the variables Accident, Age, MakeModel, and VehicleYear (A, G, M, V in Figure 2). Alarm: A 37-node network [1] developed to monitor ICU patients. We hid the
variables HR, intubation, LVFailure, and VentLung (H, I, L, V in Figure 2). Stock Data:
[Figure 2 plots appear here: bar panels for Insurance 1k, Alarm 1k, Alarm 10k, Stock, and TB, comparing the Original, Hidden, and Naive methods; see the caption below.]
Figure 2: Comparison of the different approaches. Each point in the graph corresponds to
a network learned by one of the methods. The graphs on the bottom row show the log of
the Bayesian score. The graphs on the top row show log-likelihood of an independent test
set. In all graphs, the scale is normalized to the performance of the No-hidden network,
shown by the dashed line at "0".
A real-life dataset that traces the daily change of 20 major US technology stocks for several years (1516 trading days). These values were discretized to three categories: "up", "no
change", and "down". TB: A real-life dataset that records information about 2302 tuberculosis patients in the San Francisco county (courtesy of Dr. Peter Small, Stanford Medical
Center). The data set contains demographic information such as gender, age, ethnic group,
and medical information such as HIV status, TB infection type, and other test results.
In each data set, we applied our procedure as follows. First, we used a standard model selection procedure to learn a network from the training data (without any hidden variables).
In our implementation, we used standard greedy hill-climbing search that stops when it
reaches a plateau it cannot escape. We supplied the learned network as input to the cliquedetecting algorithm which returned a set of candidate hidden variables. We then used each
candidate as the starting point for a new learning phase. The Hidden procedure returns the
highest-scoring network that results from evaluating the different putative hidden variables.
To gauge the quality of this learning procedure, we compared it to two "strawmen" approaches. The Naive strawman [4] initializes the learning with a network that has a single
hidden variable as parent of all the observed variables. It then applies SEM to get an improved network. This process is repeated several times, where each time a random perturbation (e.g., edge addition) is applied to help SEM to escape local maxima. The Original
strawman, which applied only in synthetic data set, is to use the true generating network on
the data set. That is, we take the original network (that contains the variable we hid) and
use standard parametric EM to learn parameters for it. This strawman corresponds to cases
where the learner has additional prior knowledge about domain structure.
We quantitatively evaluated each of these networks in two ways . First, we computed the
Bayesian score of each network on the training data. Second, we computed the logarithmic
loss of predictions made by these networks on independent test data. The results are shown
in Figure 2. In this evaluation, we used the performance of No-Hidden as the baseline for
comparing the other methods. Thus, a positive score of say 100 in Figure 2 indicates a
score which is larger by 100 than the score of No-Hidden. Since scores are the logarithm
of the Bayesian posterior probability of structures (up to a constant), this implies that such
a structure is $2^{100}$ times more probable than the structure found by No-Hidden.
We can see that, in most cases, the network learned by Hidden outperforms the network
learned by No-hidden. In the artificial data sets, Original significantly outperforms our
algorithm on test data. This is no surprise: Original has complete knowledge of the structure which generated the test data. Our algorithm can only evaluate networks according to
their score; indeed, the scores of the networks found by Hidden are better than those of
Original in 12 out of 13 cases tested. Thus, we see that the "correct" structure does not
usually have the highest Bayesian score. Our approach usually outperforms the network
learned by Naive. This improvement is particularly significant in the real-life datasets.
As discussed in Section 3, there are three ways that a learning algorithm can utilize the
original structure proposed by our algorithm. As our goal was to find the best model for
the domain, we ran all three of them in each case, and chose the best resulting network. In
all of our experiments, the variant that fixed the candidate structure and learned parameters
for it resulted in scores that were significantly worse than the networks found by the variants that employed structure search. The networks trained by this variant also performed
much worse on test data. This highlights the importance of structure search in evaluating a
potential hidden variable. The initial structure candidate is often too simplified; on the one
hand, it forces too many independencies among the variables in the semi-clique, and on the
other, it can add too many parents to the new hidden variable.
The comparison between the two variants that use search is more complex. In many cases,
the variant that gives the SEM complete flexibility in adapting the network structure did
not find a better scoring network than the variant that only searches for edges in the area of
the new variable. In the cases it did lead to improvement, the difference in score was not
significantly larger. Since the variant that restricts SEM is computationally cheaper (often
by an order of magnitude), we believe that it provides a good tradeoff between model
quality and computational cost.
The structures found by our procedure are quite appealing. For example, in the stock
market data, our procedure constructs a hidden variable that is the parent of several stocks:
Microsoft, Intel, Dell, CISCO, and Yahoo. A plausible interpretation of this variable is
"strong" market vs. "stationary" market. When the hidden variable has the "strong" value,
all the stocks have higher probability for going up. When the hidden variable has the
"stationary" probability, these stocks have much higher probability of being in the "no
change" value. We do note that in the learned networks there were still many edges between
the individual stocks. Thus, the hidden variable serves as a general market trend, while the
additional edges make better description of the correlations between individual stocks. The
model we learned for the TB patient dataset was also interesting. One value of the hidden
variable captures two highly dominant segments of the population: older, HIV-negative,
foreign-born Asians, and younger, HIV-positive, US-born blacks. The hidden variable's
children distinguished between the two aggregated subpopulations using the HIV-result
variable, which was also a parent of most of them. We believe that, had we allowed the
hidden variable to have three values, it would have separated these populations.
5 Discussion and Future Work
In this paper, we propose a simple and intuitive algorithm for finding plausible locations
for hidden variables in BN learning. It attempts to detect structural signatures of a hidden
variable in the network learned by standard structure search. We presented experiments
showing that our approach is reasonably successful at producing better models. To our
knowledge, this paper is also the first to provide systematic empirical tests of any approach
to the task of discovering hidden variables.
The problem of detecting hidden variables has received surprisingly little attention. Spirtes
et at. [11] suggest an approach that detects patterns of conditional independencies that can
only be generated in the presence of hidden variables. This approach suffers from two
limitations. First, it is sensitive to failure in few of the multiple independence tests it uses.
Second, it only detects hidden variables that are forced by the qualitative independence
constraints. It cannot detect situations where the hidden variable provides a more succinct
model of a distribution that can be described by a network without a hidden variable (as in
the simple example of Figure 1).
Martin and VanLehn [10] propose an alternative approach that appears, on the surface, to
be similar to ours. They start by checking correlations between all pairs of variables. This
results in a "dependency" graph in which there is an edge from X to Y if their correlation is
above a predetermined threshold. Then they construct a two-layered network that contains
independent hidden variables in the top level, and observables in the bottom layer, such that
every dependency between two observed variables is "explained" by at least one common
hidden parent. This approach suffers from three important drawbacks. First, it does not
eliminate from consideration correlations that can be explained by direct edges among the
observables. Thus, it forms clusters even in cases where the dependencies can be fully
explained by a standard Bayesian network structure. Moreover, since it only examines
pairwise dependencies, it cannot detect conditional independencies, such as X -+ Y -+ Z,
from the data. (In this case, it would learn a hidden variable that is the parent of all three
variables.) Finally, this approach learns a restricted form of networks that requires many
hidden variables to represent dependencies among variables. Thus, it has limited utility in
distinguishing "true" hidden variables from artifacts of the representation.
We plan to test further enhancements to the algorithm in several directions. First, other
possibilities for structural signatures (for example the structure resulting from a many parent - many children configuration) may expand the range of variables we can discover.
Second, our clique-discovering procedure is based solely on the structure of the network
learned. Additional information, such as the confidence of learned edges [6, 5], might help
the procedure avoid spurious signatures. Third, we plan to experiment with multi-valued
hidden variables and better heuristics for selecting candidates out of the different proposed
networks. Finally, we are considering approaches for dealing with sparse data, when the
structural signatures do not manifest. Information-theoretic measures might provide a more
statistical signature for the presence of a hidden variable.
Acknowledgements
This work was supported in part by ISF grant 244/99, Israeli Ministry of Science grant
2008-1-99. Nir Friedman was supported by Alon fellowship, and by the generosity of the
Sacher foundation.
References
[1] I. Beinlich, G. Suermondt, R. Chavez, and G. Cooper. The ALARM monitoring system. In
Proc. 2nd European Conf. on AI and Medicine, 1989.
[2] J. Binder, D. Koller, S. Russell, and K. Kanazawa. Adaptive probabilistic networks with hidden
variables. Machine Learning, 29:213-244, 1997.
[3] A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via
the EM algorithm. J. Royal Stat. Soc., B 39:1-38, 1977.
[4] N. Friedman. The Bayesian structural EM algorithm. In UAI, 1998.
[5] N. Friedman and D. Koller. Being Bayesian about Network Structure. In UAI, 2000.
[6] N. Friedman, M. Goldszmidt, and A. Wyner. Data analysis with Bayesian networks: A bootstrap
approach. In UAI, 1999.
[7] D. Heckerman. A tutorial on learning with Bayesian networks. In Learning in Graphical
Models. 1998.
[8] D. Heckerman, D. Geiger, and D. M. Chickering. Learning Bayesian networks: The combination of knowledge and statistical data. Machine Learning, 20: 197- 243, 1995.
[9] S. L. Lauritzen. The EM algorithm for graphical association models with missing data. Comp.
Stat. and Data Ana., 19:191-201, 1995.
[10] J. Martin and K. VanLehn. Discrete factor analysis: Learning hidden variables in Bayesian
networks. Technical report, Department of Computer Science, University of Pittsburgh, 1995.
[11] P. Spirtes, C. Glymour, and R. Scheines. Causation, Prediction and Search. Springer-Verlag,
1993.
Multiagent Planning with Factored MDPs
Carlos Guestrin
Computer Science Dept
Stanford University
guestrin@cs.stanford.edu
Daphne Koller
Computer Science Dept
Stanford University
koller@cs.stanford.edu
Ronald Parr
Computer Science Dept
Duke University
parr@cs.duke.edu
Abstract
We present a principled and efficient planning algorithm for cooperative multiagent dynamic systems. A striking feature of our method is that the coordination
and communication between the agents is not imposed, but derived directly from
the system dynamics and function approximation architecture. We view the entire multiagent system as a single, large Markov decision process (MDP), which
we assume can be represented in a factored way using a dynamic Bayesian network (DBN). The action space of the resulting MDP is the joint action space of
the entire set of agents. Our approach is based on the use of factored linear value
functions as an approximation to the joint value function. This factorization of
the value function allows the agents to coordinate their actions at runtime using
a natural message passing scheme. We provide a simple and efficient method
for computing such an approximate value function by solving a single linear program, whose size is determined by the interaction between the value function
structure and the DBN. We thereby avoid the exponential blowup in the state and
action space. We show that our approach compares favorably with approaches
based on reward sharing. We also show that our algorithm is an efficient alternative to more complicated algorithms even in the single agent case.
1 Introduction
Consider a system where multiple agents, each with its own set of possible actions and its
own observations, must coordinate in order to achieve a common goal. We want to find a
mechanism for coordinating the agents? actions so as to maximize their joint utility. One
obvious approach to this problem is to represent the system as a Markov decision process
(MDP), where the ?action? is a joint action for all of the agents and the reward is the total
reward for all of the agents. Unfortunately, the action space is exponential in the number
of agents, rendering this approach impractical in most cases. Alternative approaches to
this problem have used local optimization for the different agents, either via reward/value
sharing [11, 13] or direct policy search [10].
We present a novel approach based on approximating the joint value function as a linear
combination of local value functions, each of which relates only to the parts of the system
controlled by a small number of agents. We show how such factored value functions allow the agents to find a globally optimal joint action using a very natural message passing
scheme. We provide a very efficient algorithm for computing such a factored approximation to the true value function using a linear programming approach. This approach is of
independent interest, as it is significantly faster and compares very favorably to previous
approximate algorithms for single agent MDPs. We also compare our multiagent algorithm
to the multiagent reward and value sharing algorithms of Schneider et al. [11], showing that
our algorithm achieves superior performance which in fact is close to the achievable optimum for this class of problems.
[Figure 1 appears here: (a) a coordination graph over agents A1-A4 with local functions Q1-Q4; (b) a two-time-slice DBN over state variables X1-X4, X1'-X4' and actions A1-A4.]
Figure 1: (a) Coordination graph for a 4-agent problem. (b) A DBN for a 4-agent MDP.
2 Cooperative Action Selection
We begin by considering a simpler problem of selecting a globally optimal joint action in order to maximize the joint immediate value achieved by a set of agents. Suppose we have a collection of $g$ agents, where each agent $j$ chooses an action $a_j$; we use $\mathbf{a}$ to denote the joint action. Each agent $j$ has a local Q function $Q_j$, which represents its local contribution to the total utility function. The agents are jointly trying to maximize $Q = \sum_j Q_j$. An agent's local $Q_j$ function might be influenced by its action and those of some other agents; we define the scope $\mathrm{Scope}[Q_j] \subseteq \{A_1, \ldots, A_g\}$ to be the set of agents whose action influences $Q_j$. Each $Q_j$ may be further decomposed as a linear combination of functions that involve fewer agents; in this case, the complexity of the algorithm may be further reduced.
Our task is to select a joint action $\mathbf{a}$ that maximizes $\sum_j Q_j(\mathbf{a})$. The fact that the $Q_j$ depend on the actions of multiple agents forces the agents to coordinate their action choices. We can represent the coordination requirements of the system using a coordination graph, where there is a node for each agent and an edge between two agents if they must directly coordinate their actions to optimize some particular $Q_j$. Fig. 1(a) shows the coordination graph for an example where $Q = Q_1(a_1, a_2) + Q_2(a_2, a_4) + Q_3(a_1, a_3) + Q_4(a_3, a_4)$. A
graph structure suggests the use of a cost network [5], which can be solved using non-serial
dynamic programming [1] or a variable elimination algorithm which is virtually identical
to variable elimination in a Bayesian network.
The key idea is that, rather than summing all functions and then doing the maximization, we maximize over variables one at a time. Specifically, when maximizing over $a_l$, only summands involving $a_l$ participate in the maximization. Let us begin our optimization with agent 4. To optimize $A_4$, functions $Q_1$ and $Q_3$ are irrelevant. Hence, we obtain:
$$\max_{a_1, a_2, a_3} \left[ Q_1(a_1, a_2) + Q_3(a_1, a_3) + \max_{a_4} \left[ Q_2(a_2, a_4) + Q_4(a_3, a_4) \right] \right].$$
We see that to optimally choose $a_4$, the agent must know the values of $a_2$ and $a_3$. In effect, it is computing a conditional strategy, with a (possibly) different action choice for each action choice of agents 2 and 3. Agent 4 can summarize the value that it brings to the system in the different circumstances using a new function $e_4(A_2, A_3)$ whose value at the point $a_2, a_3$ is the value of the internal $\max$ expression. Note that $e_4$ introduces a new induced communication dependency between agents 2 and 3, the dashed line in Fig. 1(a). Our problem now reduces to computing $\max_{a_1, a_2, a_3} Q_1(a_1, a_2) + Q_3(a_1, a_3) + e_4(a_2, a_3)$, having one fewer agent.
Next, agent 3 makes its decision, giving $\max_{a_1, a_2} Q_1(a_1, a_2) + e_3(a_1, a_2)$, where $e_3(a_1, a_2) = \max_{a_3} \left[ Q_3(a_1, a_3) + e_4(a_2, a_3) \right]$. Agent 2 now makes its decision, giving $e_2(a_1) = \max_{a_2} \left[ Q_1(a_1, a_2) + e_3(a_1, a_2) \right]$, and agent 1 can now simply choose the action $a_1$ that maximizes $e_1 = \max_{a_1} e_2(a_1)$.
We can recover the maximizing set of actions by performing the entire process in reverse: The maximizing choice for $e_1$ selects the action $a_1^*$ for agent 1. To fulfill its commitment to agent 1, agent 2 must choose the value $a_2^*$ which maximizes $e_2(a_1^*)$. This, in turn, forces agent 3 and then agent 4 to select their actions appropriately.
In general, the algorithm maintains a set $\mathcal{F}$ of functions, which initially contains $\{Q_1, \ldots, Q_g\}$. The algorithm then repeats the following steps: (1) Select an uneliminated agent $A_l$. (2) Take all $e_1, \ldots, e_L \in \mathcal{F}$ whose scope contains $A_l$. (3) Define a new function $e = \max_{a_l} \sum_j e_j$ and introduce it into $\mathcal{F}$; the scope of $e$ is $\bigcup_{j=1}^{L} \mathrm{Scope}[e_j] - \{A_l\}$.
As above, the maximizing action choices are recovered by sending messages in the reverse direction. The cost of this algorithm is linear in the number of new "function values" introduced, or exponential in the induced width of the coordination graph [5]. Furthermore, each
agent does not need to communicate directly with every other agent, instead the necessary
communication bandwidth will also be the induced width of the graph, further simplifying
the coordination mechanism. We note that this algorithm is essentially a special case of the
algorithm used to solve influence diagrams with multiple parallel decisions [7] (as is the
one in the next section). However, to our knowledge, these ideas have not been applied to
the problem of coordinating the decision making process of multiple collaborating agents.
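A compact sketch of this elimination scheme, with the functions stored as explicit tables, is given below. It is an illustration of ours: the table representation and the names are assumptions, and a real system would distribute these steps across the agents rather than run them centrally.

```python
import itertools

def eliminate(functions, order, domains):
    """Non-serial dynamic programming on a coordination graph.
    `functions` is a list of (scope, table) pairs: scope is a tuple
    of agent names and table maps a tuple of their actions (in scope
    order) to a payoff. Agents are maximized out in `order`; returns
    the maximal total payoff and one maximizing joint action."""
    funcs, choices = list(functions), []
    for agent in order:
        touching = [f for f in funcs if agent in f[0]]
        funcs = [f for f in funcs if agent not in f[0]]
        scope = tuple(sorted({v for s, _ in touching for v in s} - {agent}))
        table, best = {}, {}
        for ctx in itertools.product(*(domains[v] for v in scope)):
            assign = dict(zip(scope, ctx))
            vals = []
            for a in domains[agent]:
                assign[agent] = a
                vals.append((sum(t[tuple(assign[v] for v in s)]
                                 for s, t in touching), a))
            table[ctx], best[ctx] = max(vals)   # value and argmax action
        funcs.append((scope, table))
        choices.append((agent, scope, best))
    value = sum(t[()] for _, t in funcs)        # all scopes are now empty
    action = {}                                 # reverse pass: commit actions
    for agent, scope, best in reversed(choices):
        action[agent] = best[tuple(action[v] for v in scope)]
    return value, action
```

For the example above, `functions` would hold four tables, one each for $Q_1(a_1, a_2)$, $Q_2(a_2, a_4)$, $Q_3(a_1, a_3)$, $Q_4(a_3, a_4)$, and the elimination order 4, 3, 2, 1 reproduces the derivation step by step.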
3 One-Step Lookahead
We now consider two elaborations to the action selection problem of the previous section. First, we assume that the agents are acting in a space described by a set of discrete state variables, $\mathbf{X} = \{X_1, \ldots, X_n\}$, where each $X_i$ takes on values in some finite domain $\mathrm{Dom}(X_i)$. A state $\mathbf{x}$ defines a value $x_i \in \mathrm{Dom}(X_i)$ for each variable $X_i$. The scope of the local $Q_j$ functions that comprise the value can include both action choices and state variables. We assume that the agents have full observability of the relevant state variables, so by itself, this extension is fairly trivial: The $Q_j$ functions define a conditional cost network. Given a particular state $\mathbf{x}$, the agents instantiate the state variables and then solve the cost network as in the previous section. However, we note that the agents do not actually need to have access to all of the state variables: agent $j$ only needs to observe the variables that are in the scope of its local $Q_j$ function, thereby decreasing considerably the amount of information each agent needs to observe.
The second extension is somewhat more complicated: We assume that the agents are trying to maximize the sum of an immediate reward and a value that they expect to receive one step in the future. We describe the dynamics of the system using a dynamic decision network (DDN) [4]. Let $X_i$ denote the variable $X_i$ at the current time and $X_i'$ the variable at the next step. The transition graph of a DDN is a two-layer directed acyclic graph whose nodes are $\{X_1, \ldots, X_n, A_1, \ldots, A_g, X_1', \ldots, X_n'\}$, and where only nodes in $\{X_1', \ldots, X_n'\}$ have parents. We denote the parents of $X_i'$ in the graph by $\mathrm{Parents}(X_i')$. For simplicity of exposition, we assume that $\mathrm{Parents}(X_i') \subseteq \{X_1, \ldots, X_n, A_1, \ldots, A_g\}$. (This assumption can be relaxed, but our algorithm becomes somewhat more complex.) Each node $X_i'$ is associated with a conditional probability distribution (CPD) $P(X_i' \mid \mathrm{Parents}(X_i'))$. The transition probability $P(\mathbf{x}' \mid \mathbf{x}, \mathbf{a})$ is then defined to be $\prod_i P(x_i' \mid \mathbf{u}_i)$, where $\mathbf{u}_i$ is the value in $(\mathbf{x}, \mathbf{a})$ of the variables in $\mathrm{Parents}(X_i')$. The immediate rewards are a set of functions $R_1, \ldots, R_g$, and the next-step values are a set of functions $h_1, \ldots, h_k$. Here, we assume that both the $R_i$'s and the $h_j$'s are functions that depend only on a small set of variables.
Fig. 1(b) shows a DDN for a simple four-agent problem, where the ovals represent the variables $X_i$ and $X_i'$ and the rectangles the agent actions. The diamond nodes in the first time step represent the immediate reward, while the $h$ nodes in the second time step represent the future value associated with a subset of the state variables.
For any setting $\mathbf{x}$ of the state variables, the agents aim to maximize
$$Q(\mathbf{x}) = \max_{a_1, \dots, a_g} \Big[ \sum_j r_j(\mathbf{x}, \mathbf{a}) + \sum_{\mathbf{x}'} P(\mathbf{x}' \mid \mathbf{x}, \mathbf{a}) \sum_j h_j(\mathbf{x}') \Big],$$
i.e., the immediate reward plus the expected value of the next state. The expectation is a summation over an exponential number of future states. As shown in [8], this can be simplified substantially. For example, if we consider the function $h_4(X_4')$ in Fig. 1(b), we can see that its expected value is a function only of the small set of state variables and actions that are parents of $X_4'$ in the DDN. More generally, we define the backprojection
$$g_j(\mathbf{x}, \mathbf{a}) = \sum_{\mathbf{x}'} P(\mathbf{x}' \mid \mathbf{x}, \mathbf{a})\, h_j(\mathbf{x}').$$
Recall our assumption that the scope of each $h_j$ is only a small subset of variables $\mathbf{C}_j$. Then, the scope of $g_j$ is $\Gamma(\mathbf{C}_j) = \bigcup_{X_i' \in \mathbf{C}_j} \mathrm{Parents}(X_i')$. Specifically, $g_j(\mathbf{x}, \mathbf{a}) = \sum_{\mathbf{c}'} P(\mathbf{c}' \mid \mathbf{x}, \mathbf{a})\, h_j(\mathbf{c}')$, where $\mathbf{c}'$ is a value of $\mathbf{C}_j$. Note that the cost of the computation depends linearly on $|\mathrm{Dom}(\Gamma(\mathbf{C}_j))|$, which depends on $\mathbf{C}_j$ (the scope of $h_j$) and on the complexity of the process dynamics.
By replacing the expectation with the backprojection, we can once again generate a set of local functions $Q_j = r_j + g_j$, and apply our coordination graph algorithm unchanged.
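The computational point is that the backprojection never enumerates the exponential set of next states: it sums only over $\mathrm{Dom}(\mathbf{C}_j)$. A sketch of this computation (ours), under the same assumed CPD layout as in the previous sketch:

```python
from itertools import product

def backproject(scope, h, cpds, parents, domains, x, a):
    """g_j(x, a) = sum over c' in Dom(C_j) of P(c' | x, a) * h_j(c').
    scope: tuple of next-step variables C_j; h maps assignments of the
    scope to values; cpds/parents are laid out as in the previous sketch."""
    joint = {**x, **a}
    g = 0.0
    for c in product(*(domains[v] for v in scope)):
        p = 1.0
        for v, val in zip(scope, c):        # P(c' | x, a) factors per variable
            u = tuple(joint[par] for par in parents[v])
            p *= cpds[v][(val, u)]
        g += p * h[c]
    return g
```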
4 Markov Decision Processes
We now turn our attention to the substantially more complex case where the agents are acting in a dynamic environment, and are jointly trying to maximize their expected long-term return. The Markov Decision Process (MDP) framework formalizes this problem. An MDP is defined as a 4-tuple $(\mathbf{X}, \mathbf{A}, R, P)$ where: $\mathbf{X}$ is a finite set of $N = |\mathbf{X}|$ states; $\mathbf{A}$ is a set of actions; $R$ is a reward function $R : \mathbf{X} \times \mathbf{A} \mapsto \mathbb{R}$, such that $R(\mathbf{x}, \mathbf{a})$ represents the reward obtained in state $\mathbf{x}$ after taking action $\mathbf{a}$; and $P$ is a Markovian transition model where $P(\mathbf{x}' \mid \mathbf{x}, \mathbf{a})$ represents the probability of going from state $\mathbf{x}$ to state $\mathbf{x}'$ with action $\mathbf{a}$. We assume that the MDP has an infinite horizon and that future rewards are discounted exponentially with a discount factor $\gamma \in [0, 1)$. The optimal value function $V^*$ is defined so that the value of a state must be the maximal value achievable by any action at that state. More precisely, we define $Q_V(\mathbf{x}, \mathbf{a}) = R(\mathbf{x}, \mathbf{a}) + \gamma \sum_{\mathbf{x}'} P(\mathbf{x}' \mid \mathbf{x}, \mathbf{a})\, V(\mathbf{x}')$, and the Bellman operator $T^*$ to be $T^* V(\mathbf{x}) = \max_{\mathbf{a}} Q_V(\mathbf{x}, \mathbf{a})$. The optimal value function $V^*$ is the fixed point of $T^*$: $V^* = T^* V^*$.
A stationary policy $\pi$ for an MDP is a mapping $\pi : \mathbf{X} \mapsto \mathbf{A}$, where $\pi(\mathbf{x})$ is the action the agent takes at state $\mathbf{x}$. For any value function $V$, we can define the policy obtained by acting greedily relative to $V$: $\mathrm{Greedy}(V)(\mathbf{x}) = \arg\max_{\mathbf{a}} Q_V(\mathbf{x}, \mathbf{a})$. The greedy policy relative to the optimal value function $V^*$ is the optimal policy $\pi^* = \mathrm{Greedy}(V^*)$.
There are several algorithms for computing the optimal policy. One is via linear programming. Numbering the states in $\mathbf{X}$ as $\mathbf{x}_1, \dots, \mathbf{x}_N$, our variables are $V_1, \dots, V_N$, where $V_i$ represents $V(\mathbf{x}_i)$. The LP is:

Minimize: $\sum_i \alpha(\mathbf{x}_i)\, V_i$ ;
Subject to: $V_i \ge R(\mathbf{x}_i, \mathbf{a}) + \gamma \sum_j P(\mathbf{x}_j \mid \mathbf{x}_i, \mathbf{a})\, V_j \quad \forall\, \mathbf{x}_i \in \mathbf{X},\ \mathbf{a} \in \mathbf{A}.$

The state relevance weights $\alpha$ are any convex weights, with $\alpha(\mathbf{x}) > 0$ and $\sum_{\mathbf{x}} \alpha(\mathbf{x}) = 1$.
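For an MDP small enough to enumerate explicitly, this LP can be handed directly to an off-the-shelf solver. The sketch below is our own (using SciPy, which the paper does not mention); it rewrites the constraints into the $A_{ub} v \le b_{ub}$ form that `linprog` expects:

```python
import numpy as np
from scipy.optimize import linprog

def solve_mdp_lp(R, P, gamma, alpha):
    """R: (N, nA) rewards; P: (nA, N, N) with P[a, i, j] = P(x_j | x_i, a);
    alpha: (N,) state relevance weights.  Returns the optimal values V."""
    N, nA = R.shape
    A_ub, b_ub = [], []
    for a in range(nA):
        # V_i >= R(x_i, a) + gamma * sum_j P(x_j | x_i, a) V_j, rearranged
        # for linprog as:  (gamma * P[a] - I) V <= -R[:, a]
        A_ub.append(gamma * P[a] - np.eye(N))
        b_ub.append(-R[:, a])
    res = linprog(c=alpha, A_ub=np.vstack(A_ub), b_ub=np.concatenate(b_ub),
                  bounds=[(None, None)] * N)
    return res.x
```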
In our setting, the state space is exponentially large, with one state for each assignment $\mathbf{x}$ to $\mathbf{X}$. We use the common approach of restricting attention to value functions that are compactly represented as a linear combination of basis functions $H = \{h_1, \dots, h_k\}$. A linear value function over $H$ is a function $V$ that can be written as $V(\mathbf{x}) = \sum_{j=1}^{k} w_j\, h_j(\mathbf{x})$ for some coefficients $\mathbf{w} = (w_1, \dots, w_k)$.
The LP approach can be adapted to use this value function representation [12]:

Variables: $w_1, \dots, w_k$ ;
Minimize: $\sum_j \alpha_j\, w_j$ ;
Subject to: $\sum_j w_j\, h_j(\mathbf{x}_i) \ge R(\mathbf{x}_i, \mathbf{a}) + \gamma \sum_{\mathbf{x}'} P(\mathbf{x}' \mid \mathbf{x}_i, \mathbf{a}) \sum_j w_j\, h_j(\mathbf{x}') \quad \forall\, \mathbf{x}_i \in \mathbf{X},\ \mathbf{a} \in \mathbf{A};$

where $\alpha_j = \sum_i \alpha(\mathbf{x}_i)\, h_j(\mathbf{x}_i)$. This transformation has the effect of reducing the number of free variables in the LP to $k$, but the number of constraints remains $|\mathbf{X}| \cdot |\mathbf{A}|$. There is, in general, no guarantee as to the quality of the approximation $\sum_j w_j\, h_j$, but recent work of de Farias and Van Roy [3] provides some analysis of the error relative to that of the best possible approximation in the subspace, and some guidance as to selecting the $\alpha$'s so as to improve the quality of the approximation.
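Continuing the sketch above (same imports), substituting $V = H\mathbf{w}$ for an $N \times k$ basis matrix $H$ leaves only the $k$ weights as free variables; the constraints here are still enumerated explicitly, which is precisely what Section 5 goes on to avoid:

```python
def solve_approx_lp(R, P, gamma, alpha, H):
    """Same LP with V restricted to H @ w for an (N, k) basis matrix H;
    only the k weights remain as LP variables."""
    N, nA = R.shape
    A_ub = np.vstack([(gamma * P[a] - np.eye(N)) @ H for a in range(nA)])
    b_ub = np.concatenate([-R[:, a] for a in range(nA)])
    res = linprog(c=alpha @ H, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * H.shape[1])
    return res.x                            # the weight vector w
```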
5 Factored MDPs
Factored MDPs [2] allow the representation of large structured MDPs by using a dynamic Bayesian network to represent the transition model. Our representation of the one-step transition dynamics in Section 3 is precisely a factored MDP, where we factor not only the states but also the actions. In [8], we proposed the use of factored linear value functions to approximate the value function in a factored MDP. These value functions are a weighted linear combination of basis functions, as above, but each basis function is restricted to depend only on a small subset of state variables. The $h$ functions in Fig. 1(b) are an example. If we had a value function $V$ represented in this way, then we could use our algorithm of Section 3 to implement $\mathrm{Greedy}(V)$ by having the agents use our message passing coordination algorithm at each step. (Here we have only one function $h_j$ per agent, but our approach extends trivially to the case of multiple $h$ functions.)
In previous work [9, 6], we presented algorithms for computing approximate value functions of this form for factored MDPs. These algorithms can circumvent the exponential
blowup in the number of state variables, but explicitly enumerate the action space of the
MDP, making them unsuitable for the exponentially large action space in multiagent MDPs.
We now provide a novel algorithm based on the LP of the previous section. In particular,
we show how we can solve this LP exactly in closed form, without explicitly enumerating
the exponentially many constraints.
Our first task is to compute the coefficients $\alpha_j$ in the objective function. Note that $\alpha_j = \sum_{\mathbf{x}} \alpha(\mathbf{x})\, h_j(\mathbf{x}) = \sum_{\mathbf{c} \in \mathrm{Dom}(\mathbf{C}_j)} \alpha(\mathbf{c})\, h_j(\mathbf{c})$, as basis $h_j$ has scope restricted to $\mathbf{C}_j$. Here, $\alpha(\mathbf{c})$ represents the marginal of the state relevance weights $\alpha$ over $\mathbf{C}_j$. Thus, the coefficients $\alpha_j$ can be pre-computed efficiently if $\alpha$ is represented compactly by its marginals $\alpha(\mathbf{C}_j)$. Our experiments used uniform weights $\alpha(\mathbf{x}) = 1/|\mathbf{X}|$; thus, $\alpha(\mathbf{c}) = 1/|\mathrm{Dom}(\mathbf{C}_j)|$.
We must now deal with the exponentially large constraint set. Using the backprojection from Section 3, we can rewrite our constraints as:
$$\sum_j w_j\, h_j(\mathbf{x}) \;\ge\; \sum_j r_j(\mathbf{x}, \mathbf{a}) + \gamma \sum_j w_j\, g_j(\mathbf{x}, \mathbf{a}) \qquad \forall\, \mathbf{x}, \mathbf{a};$$
where $g_j(\mathbf{x}, \mathbf{a}) = \sum_{\mathbf{x}'} P(\mathbf{x}' \mid \mathbf{x}, \mathbf{a})\, h_j(\mathbf{x}')$. Note that this exponentially large set of linear constraints can be replaced by a single, equivalent, non-linear constraint:
$$0 \;\ge\; \max_{\mathbf{x}, \mathbf{a}} \Big[ \sum_j r_j(\mathbf{x}, \mathbf{a}) + \sum_j w_j \big( \gamma\, g_j(\mathbf{x}, \mathbf{a}) - h_j(\mathbf{x}) \big) \Big].$$
In a factored MDP, the reward function is represented as the sum of local rewards $\sum_j r_j$. Furthermore, the basis $h_j$ and its backprojection $g_j$ are also functions that depend only on a small set of variables. Thus, the right side of the constraint can be viewed as the sum of restricted scope functions parameterized by $\mathbf{w}$. For a fixed $\mathbf{w}$, we can compute the maximum over $(\mathbf{x}, \mathbf{a})$ using a cost network, as in Section 2. If $\mathbf{w}$ is not specified, the maximization induces a family of cost networks parameterized by $\mathbf{w}$. As we showed in [6], we can turn this cost network into a compact set of LP constraints on the free variable $\mathbf{w}$.
More generally, suppose we wish to enforce the constraint $0 \ge \max_{\mathbf{e}} \sum_j f_j^{\mathbf{w}}(\mathbf{e})$, where each $f_j^{\mathbf{w}}$ has a restricted scope. Here, the superscript $\mathbf{w}$ indicates that each $f_j$ might be multiplied by a weight $w_j$, but this dependency is linear. Consider the cost network used to maximize $\sum_j f_j^{\mathbf{w}}$; let $e$ be any function used in the network, including the original $f_j$'s, and let $\mathbf{Z}$ be its scope. For any assignment $\mathbf{z}$ to $\mathbf{Z}$, we introduce a variable $u^e_{\mathbf{z}}$, whose value represents $e(\mathbf{z})$, into the linear program. For the initial functions $f_j^{\mathbf{w}}$, we include the constraint that $u^{f_j}_{\mathbf{z}} = f_j^{\mathbf{w}}(\mathbf{z})$. As $f_j^{\mathbf{w}}$ is linear in $\mathbf{w}$, this constraint is linear in the LP variables. Now, consider a new function $e$ introduced into the network by eliminating a variable $X_l$. Let $e_1, \dots, e_L$ be the functions extracted from the network, with scopes $\mathbf{Z}_1, \dots, \mathbf{Z}_L$ respectively. As in the cost network, we want that $u^e_{\mathbf{z}} = \max_{x_l} \sum_j u^{e_j}_{(\mathbf{z}, x_l)[\mathbf{Z}_j]}$, where $(\mathbf{z}, x_l)[\mathbf{Z}_j]$ is the value of $\mathbf{Z}_j$ in the instantiation $(\mathbf{z}, x_l)$. We enforce this by introducing a set of constraints into our LP: $u^e_{\mathbf{z}} \ge \sum_j u^{e_j}_{(\mathbf{z}, x_l)[\mathbf{Z}_j]}$ for each $x_l$. The last function generated in the elimination, $e^{\mathrm{last}}$, has an empty domain. We introduce the additional constraint $0 \ge u^{e^{\mathrm{last}}}$, which is equivalent to the global constraint $0 \ge \max_{\mathbf{e}} \sum_j f_j^{\mathbf{w}}(\mathbf{e})$.

Offline:
1. Select a set of restricted scope basis functions $H = \{h_1, \dots, h_k\}$.
2. Apply the efficient LP-based approximation algorithm offline (Section 5) to compute the coefficients $\mathbf{w}$ of the approximate value function $V = \sum_j w_j h_j$.
3. Use the one-step lookahead planning algorithm (Section 3) with $V$ as a value function estimate to compute local $Q_j$ functions for each agent.

Online: At state $\mathbf{x}$:
1. Each agent instantiates $Q_j$ with the values of the state variables in the scope of $Q_j$.
2. Agents apply the coordination graph algorithm (Section 2) with the local $Q_j$ functions to coordinate an approximately optimal global action.

Figure 2: Algorithm for multiagent planning with factored MDPs
In the case of cooperative multiagent MDPs, the actions of the individual agents become variables in the cost network, so that the set of network variables is simply $\mathbf{X} \cup \mathbf{A}$. The functions $f_j$ are simply the local functions corresponding to the rewards $r_j$, the bases $h_j$ and their backprojections $g_j$. We can thus write down constraints that enforce
$$\sum_j w_j\, h_j(\mathbf{x}) \;\ge\; R(\mathbf{x}, \mathbf{a}) + \gamma \sum_{\mathbf{x}'} P(\mathbf{x}' \mid \mathbf{x}, \mathbf{a}) \sum_j w_j\, h_j(\mathbf{x}')$$
over the entire exponential state space and joint action space using a number of constraints which is only exponential in the induced tree width of the cost network, rather than exponential in the number of actions and state variables in the problem.

A traditional single agent is, of course, a special case of the multiagent case. The LP approach described in this section provides an attractive alternative to the methods described in [9] and [6]. In particular, our approach requires that we solve a single LP, whose size is essentially the size of the cost network. The approach of [6] (which is substantially more efficient than that of [9]) requires that we solve an LP for each step in policy iteration, and each LP contains constraints corresponding to multiple cost networks (whose number depends on the complexity of the policy representation). Furthermore, the LP approach eliminates the restrictions on the action model made in [9, 6].

Our overall algorithm for multiagent planning with factored MDPs is shown in Fig. 2.
6 Experimental results
We first validate our approximate LP approach by comparing the quality of the solution
to the approximate policy iteration (PI) approach of [6]. As the approximate PI algorithm
is not able to deal with the exponentially large action spaces of multiagent problems, we
compare these two approaches on the single agent SysAdmin problem presented in [6], on
a unidirectional ring network of up to 32 machines (over 4 billion states). As shown in
Fig. 3(b), our new approximate LP algorithm for factored MDPs is significantly faster than
the approximate PI algorithm. In fact, approximate PI with single-variable basis functions
is more costly than the LP approach using basis functions over consecutive triples
of variables. As shown in Fig. 3(c), for singleton basis functions, the approximate PI policy
obtains slightly better performance for some problem sizes. However, as we increase the
number of basis functions for the approximate LP formulation, the value of the resulting
policy is much better. Thus, in this problem, our new approximate linear programming
formulation allows us to use more basis functions and to obtain a resulting policy of higher
value, while still maintaining a faster running time.
We constructed a multiagent version of the SysAdmin problem, applied to various net-
[Figure 3 graphics: (a) the network topologies used (server, star, unidirectional ring, ring of rings); (b) total running time in minutes and (c) discounted reward of the final policy (averaged over 50 trials of 100 steps), each plotted against the number of machines for PI single basis and LP single, pair, and triple bases.]
Figure 3: (a) Network topologies used in our experiments. Graphs: Approximate LP versus
approximate PI on single agent SysAdmin on unidirectional ring: (b) running time; (c)
estimated value of policy.
[Figure 4 graphics: (a) total running time in seconds against the number of agents for LP single and pair bases on the star and ring-of-rings topologies; (b) and (c) estimated value per agent (100 runs) against the number of agents for LP single/pair basis, distributed reward, and distributed value function, with the utopic maximum value shown for reference.]
Figure 4: (a) Running time for approximate LP for increasing number of agents. Policy performance of approximate LP and DR/DRF: (b) on "star"; (c) on "ring of rings".
work architectures shown in Fig. 3(a). Each machine is associated with an agent $A_i$ and two variables: Status $S_i \in \{\text{good}, \text{faulty}, \text{dead}\}$, and Load $L_i \in \{\text{idle}, \text{loaded}, \text{process successful}\}$. A dead machine increases the probability that its neighbors will become faulty and die. The system receives a reward of 1 if a process terminates successfully. If the Status is faulty, processes take longer to terminate. If the machine dies, the process is lost. Each agent $A_i$ must decide whether machine $i$ should be rebooted, in which case the Status becomes good and any running process is lost. For a network of $n$ machines, the number of states in the MDP is $9^n$ and the joint action space contains $2^n$ possible actions; e.g., a problem with $n = 30$ agents has over $10^{28}$ states and a billion possible actions.
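For intuition about the domain, here is a purely illustrative one-step simulator for the ring topology; every probability below is a placeholder we invented for the sketch, not a CPD from the experiments:

```python
import random

def sysadmin_step(status, load, reboot, rng=random):
    """One transition of an n-machine unidirectional ring.  All probabilities
    here are made-up placeholders, not the paper's CPDs."""
    n = len(status)
    new_status, new_load, reward = [], [], 0
    for i in range(n):
        if reboot[i]:                        # reboot: status good, process lost
            new_status.append("good")
            new_load.append("idle")
            continue
        p_fail = 0.05 + (0.30 if status[(i - 1) % n] == "dead" else 0.0)
        s = status[i]
        if s != "dead" and rng.random() < p_fail:
            s = "faulty" if s == "good" else "dead"
        l = load[i]
        if s == "dead":
            l = "idle"                       # a dead machine loses its process
        elif l == "idle":
            l = "loaded"
        elif l == "loaded":
            p_done = 0.2 if s == "faulty" else 0.5   # faulty: slower to finish
            if rng.random() < p_done:
                l = "done"
                reward += 1                  # process terminated successfully
        else:                                # "done" -> pick up a new process
            l = "idle"
        new_status.append(s)
        new_load.append(l)
    return new_status, new_load, reward
```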
We implemented the factored approximate linear programming and the message passing coordination algorithms in Matlab, using CPLEX as the LP solver. We experimented
with two types of basis functions: "single", which contains an indicator basis function for each value of each $S_i$ and $L_i$; and "pair", which, in addition, contains indicators over joint assignments of the Status variables of neighboring agents. We use a discount factor of $\gamma = 0.95$.
As shown in Fig. 4(a), the total running time of the algorithm grows linearly in the
number of agents, for each fixed network and basis type. This is the expected asymptotic
behavior, as each problem has a fixed induced tree width of the cost network. (The induced
tree width for pair basis on the "ring of rings" problem was too large.)
For comparison, we also implemented the distributed reward (DR) and distributed value
function (DRF) algorithms of Schneider et al. [11]. Here we used 10000 learning iterations,
with fixed initial learning and exploration rates and a schedule that decayed them after 5000 iterations; the observations for each agent were the status and load of its
machine. The results of the comparison are shown in Fig. 4(b) and (c). We also computed
a utopic upper bound on the value of the optimal policy by removing the (negative) effect
of the neighbors on the status of the machines. This is a loose upper bound, as a dead
neighbor increases the probability of a machine dying considerably. For both network
topologies tested, the estimated value of the approximate LP solution using single basis
was significantly higher than that of the DR and DRF algorithms. Note that the single
basis solution requires no coordination when acting, so this is a "fair" comparison to DR
and DRF which also do not communicate while acting. If we allow for pair bases, which
implies agent communication, we achieve a further improvement in terms of estimated
value. The policies obtained tended to be intuitive: e.g., for the "star" topology with pair basis, if the server becomes faulty, it is rebooted even if loaded, but for the clients, the
agent waits until the process terminates or the machine dies before rebooting.
7 Conclusion
We have provided a principled and efficient approach to planning in multiagent domains.
Rather than placing a priori restrictions on the communication structure between agents,
we first choose the form of an approximate value function and derive the optimal communication structure given the value function architecture. This approach provides a unified
view of value function approximation and agent communication. We use a novel linear
programming technique to find an approximately optimal value function. The inter-agent
communication and the LP avoid the exponential blowup in the state and action spaces,
having computational complexity dependent, instead, upon the induced tree width of the
coordination graph used by the agents to negotiate their action selection. By exploiting
structure in both the state and action spaces, we can deal with considerably larger MDPs
than those described in previous work. In a family of multiagent network administration problems with over $10^{28}$ states and over a billion actions, we have demonstrated near-optimal performance which is superior to a priori reward or value sharing schemes. We believe
the methods described herein significantly further extend the efficiency, applicability and
general usability of factored value functions and models for the control of dynamic systems.
Acknowledgments: This work was supported by ONR under MURI "Decision Making Under Uncertainty", the Sloan Foundation, and the first author was also supported by a Siebel scholarship.
References
[1] U. Bertele and F. Brioschi. Nonserial Dynamic Programming. Academic Press, 1972.
[2] C. Boutilier, T. Dean, and S. Hanks. Decision theoretic planning: Structural assumptions and
computational leverage. Journal of Artificial Intelligence Research, 11:1-94, 1999.
[3] D.P. de Farias and B. Van Roy. The linear programming approach to approximate dynamic
programming. submitted to the IEEE Transactions on Automatic Control, January 2001.
[4] T. Dean and K. Kanazawa. A model for reasoning about persistence and causation. Computational Intelligence, 5(3):142-150, 1989.
[5] R. Dechter. Bucket elimination: A unifying framework for reasoning. Artificial Intelligence,
113(1-2):41-85, 1999.
[6] C. Guestrin, D. Koller, and R. Parr. Max-norm projections for factored MDPs. In Proc. 17th
IJCAI, 2001.
[7] F. Jensen, F. Jensen, and S. Dittmer. From influence diagrams to junction trees. In Uncertainty in Artificial Intelligence: Proceedings of the Tenth Conference, pages 367-373, Seattle,
Washington, July 1994. Morgan Kaufmann.
[8] D. Koller and R. Parr. Computing factored value functions for policies in structured MDPs. In
Proceedings of the Sixteenth International Joint Conference on Artificial Intelligence (IJCAI99). Morgan Kaufmann, 1999.
[9] D. Koller and R. Parr. Policy iteration for factored MDPs. In Proc. 16th UAI, 2000.
[10] L. Peshkin, N. Meuleau, K. Kim, and L. Kaelbling. Learning to cooperate via policy search. In
Proc. 16th UAI, 2000.
[11] J. Schneider, W. Wong, A. Moore, and M. Riedmiller. Distributed value functions. In Proc.
16th ICML, 1999.
[12] P. Schweitzer and A. Seidmann. Generalized polynomial approximations in Markovian decision
processes. Journal of Mathematical Analysis and Applications, 110:568-582, 1985.
[13] D. Wolpert, K. Wheller, and K. Tumer. General principles of learning-based multi-agent systems. In Proc. 3rd Agents Conference, 1999.
Generalizable Relational Binding from
Coarse-coded Distributed Representations
Randall C. O'Reilly
Department of Psychology
University of Colorado Boulder
345 UCB
Boulder, CO 80309
Richard S. Busby
Department of Psychology
University of Colorado Boulder
345 UCB
Boulder, CO 80309
oreilly@psych.colorado.edu
Richard.Busby@Colorado.EDU
Abstract
We present a model of binding of relationship information in a spatial
domain (e.g., square above triangle) that uses low-order coarse-coded
conjunctive representations instead of more popular temporal synchrony
mechanisms. Supporters of temporal synchrony argue that conjunctive
representations lack both efficiency (i.e., combinatorial numbers of units
are required) and systematicity (i.e., the resulting representations are
overly specific and thus do not support generalization to novel exemplars). To counter these claims, we show that our model: a) uses far
fewer hidden units than the number of conjunctions represented, by using coarse-coded, distributed representations where each unit has a broad
tuning curve through high-dimensional conjunction space, and b) is capable of considerable generalization to novel inputs.
1 Introduction
The binding problem as it is classically conceived arises when different pieces of information are processed by entirely separate units. For example, we can imagine there are
neurons that separately code for the shape and color of objects, and we are viewing a scene
having a red triangle and a blue square (Figure 1). Because color and shape are encoded
separately in this system, the internal representations do not discriminate this situation from
one where we are viewing a red square and a blue triangle. This is the problem. Broadly
speaking, there are two solutions to it. Perhaps the most popular solution is to imagine
that binding is encoded by some kind of transient signal, such as temporal synchrony (e.g.,
von der Malsburg, 1981; Gray, Engel, Konig, & Singer, 1992; Hummel & Holyoak, 1997).
Under this solution, the red and triangle units should fire together, as should the blue and
square units, with each group firing out of phase with the other.
The other solution can be construed as solving the problem by questioning its fundamental
assumption: that information is encoded completely separately in the first place (which
is so seductive that it typically goes unnoticed). Instead, one can imagine that color and
shape information are encoded together (i.e., conjunctively). In the red-triangle blue-square
example, some neurons encode the conjunction of red and triangle, while others encode
the conjunction of blue and square. Because these units are explicitly sensitive to these
[Figure 1 graphics: (a) a red triangle and a blue square activate the feature units Red, Blue, Triangle, and Square; (b) from these activations alone, the rest of the brain cannot tell which color goes with which shape.]
Figure 1: Illustration of the binding problem. a) Visual inputs (red triangle, blue square) activate
separate representations of color and shape properties. b) However, just the mere activation of these
features does not distinguish for the rest of the brain the alternative scenario of a blue triangle and a
red square. Red is indicated by dashed outline and blue by a dotted outline.
obj1:       RS RC RS RT RS RC RS RT RC   RT RC RT GS GC GS GT GC GT
obj2:       GC GS GT GS BC BS BT BS GT   GC BT BC BC BS BT BS BT BC
R:           1  1  1  1  1  1  1  1  1    1  1  1  0  0  0  0  0  0
G:           1  1  1  1  0  0  0  0  1    1  0  0  1  1  1  1  1  1
B:           0  0  0  0  1  1  1  1  0    0  1  1  1  1  1  1  1  1
S:           1  1  1  1  1  1  1  1  0    0  0  0  1  1  1  1  0  0
C:           1  1  0  0  1  1  0  0  1    1  1  1  1  1  0  0  1  1
T:           0  0  1  1  0  0  1  1  1    1  1  1  0  0  1  1  1  1
RC,GS,BT:    0  1  0  1  0  1  1  0  1    0  1  0  1  0  1  0  1  0
Table 1: Solution to the binding problem by using representations that encode combinations of
input features (i.e., color and shape), but achieve greater efficiency by representing multiple such
combinations. Obj1 and obj2 show the features of the two objects (R = Red, G = Green, B = Blue,
S = Square, C = Circle, T = Triangle), and remaining columns show 6 localist units and one coarsecoded conjunctive unit. Adding this one conjunctive unit is enough to disambiguate the inputs.
conjunctions, they will not fire to a red square or a blue triangle, and thereby avoid the
binding problem. The obvious problem with this solution, and one reason it has been
largely rejected in the literature, is that it would appear to require far too many units to cover
all of the possible conjunctions that need to be represented: a combinatorial explosion.
However, the combinatorial explosion problem is predicated on another seductive notion:
that separate units are used for each possible conjunction. In short, both the binding problem itself and the problem with the conjunctive solution derive from localist assumptions
about neural coding. In contrast, these problems can be greatly reduced by simply thinking
in terms of distributed representations, where each unit encodes some possibly-difficult to
describe amalgam of input features, such that individual units are active at different levels
for different inputs, and many such units are active for each input (Hinton, McClelland, &
Rumelhart, 1986). Therefore, the input is represented by a complex distributed pattern of
activation over units, and each unit can exhibit varying levels of sensitivity to the featural
conjunctions present in the input. The binding problem is largely avoided because a different pattern of activation will be present for a red-triangle, blue-square input as compared to
a red-square, blue-triangle input.
These kinds of distributed representations can be difficult to understand. This is probably
a significant reason why the ability of distributed representations to resolve the binding
problem goes under-appreciated. However, we can analyze special cases of these representations to gain some insight. One such special case is shown in Table 1 from O'Reilly
and Munakata (2000). Here, we add one additional distributed unit to an otherwise localist
featural encoding like that shown in Figure 1. This unit has a coarse-coded conjunctive
representation, meaning that instead of coding for a single conjunction, it codes for several
possible conjunctions. The table shows that if this set of conjunctions is chosen wisely,
this single unit can enable the distributed pattern of activation across all units to distinguish
between any two possible combinations of stimulus inputs. A more realistic system will
have a larger number of partially redundant coarse-coded conjunctive units that will not
require such precise representations from each unit. A similar demonstration was recently
provided by Mel and Fiser (2000) in an analysis of distributed, low-order conjunctive representations (resembling 'Wickelfeatures'; Wickelgren, 1969; Seidenberg & McClelland,
1989) in the domain of textual inputs. However, they did not demonstrate that a neural network learning mechanism would develop these representations, or that they could support
systematic generalization to novel inputs.
2 Learning Generalizable Relational Bindings
We present here a series of models that test the ability of existing neural network learning
mechanisms to develop low-order coarse-coded conjunctive representations in a challenging binding domain. Specifically, we focus on the problem of relational binding, which
provides a link to higher-level cognitive function, and speaks to the continued use of structured representations in these domains. Furthermore, we conduct a critical test of these
models in assessing their ability to generalize to novel inputs after moderate amounts of
training. This is important because conjunctive representations might appear to limit generalization as these representations are more specific than purely localist representations.
Indeed the inability to generalize is considered by some the primary limitation of conjunctive binding mechanisms (Holyoak & Hummel, 2000).
2.1 Relational Binding, Structured Representations, and Higher-level Cognition
A number of existing models rely on structured representations because they are regarded
as essential for encoding complex relational information and other kinds of data structures
that are used in symbolic models (e.g., lists, trees, sequences) (e.g., Touretzky, 1986; Shastri & Ajjanagadde, 1993; Hummel & Holyoak, 1997). A canonical example of a structured
representation is a propositional encoding (e.g., LIKES cats milk) that has a main relational
term (LIKES) that operates on a set of slot-like arguments that specify the items entering into the relationship. The primary advantages of such a representation are that it is
transparently systematic or productive (anything can be put in the slots), and it is typically
easy to compose more elaborate structures from these individual propositions (e.g., this
proposition can have other propositions in its slots instead of just basic symbols).
The fundamental problem with structured representations, regardless of what implements
them, is that they cannot be easily learned. To date, there have been no structured representation models that exhibit powerful learning of the form typically associated with
neural networks. There are good reasons to believe that this reflects basic tradeoffs
between complex structured representations and powerful learning mechanisms (Elman,
1991; St John & McClelland, 1990; O'Reilly & Munakata, 2000). Essentially, structured
representations are discrete and fragile, and therefore do not admit to gradual changes over
learning. In contrast, neural networks employ massively-parallel, graded processing that
can search out many possible solutions at the same time, and optimize those that seem to
make graded improvements in performance. In contrast, the discrete character of structured
representations requires exhaustive combinatorial search in high-dimensional spaces.
To provide an alternative to these structured representations, our models test a simple example of relational encoding, focusing on easily-visualized spatial relationships, which can
be thought of in propositional terms as for example (LEFT-OF square triangle).
[Figure 2 graphics: network layers Location, Object, and Relation (right, left, above, below), a Hidden layer, the Input layer, and a Question layer with units what?, where?, relation-obj?, and relation-loc?.]
Figure 2: Spatial relationship binding model. Objects are represented by distributed patterns of activation over 8 features per location within a 4x4 array of locations. Inputs have two objects, arranged vertically or horizontally. The network answers questions posed by the Question input ("what?", "where?", and "what relationship?"); the answers require binding of object, location, and relationship information.
3 Spatial Relationship Binding Model
The spatial relationship binding model is shown in Figure 2. The overall framework for
training the network is to present it with input patterns containing objects in different locations, and ask it various questions about these input displays. These questions ask about the
identity and location of objects (i.e., "what?" and "where?"), and the relationships between the two objects (e.g., "where is object1 relative to object2?"). To answer these questions correctly, the network must bind object, location, and relationship information accurately in the hidden layer. Otherwise, it will confuse the two objects and their locations and
relationships. Furthermore, we encoded the objects using distributed representations over
features, so these features must be correctly bound into the same object.
Specifically, objects are represented by distributed patterns of activation over 8 features per
location, in a 4x4 location array. Inputs have two different objects, arranged either vertically
or horizontally. The network answers different questions about the objects posed by the
Question input. For the "what?" question, the location of one of the objects is activated
as an input in the Location layer, and the network must produce the correct object features
for the object in that location. We also refer to this target object as the agent object. For
the "where?" question, the object features for the agent object are activated in the Object
layer, and the network must produce the correct location activation for that object. For
the "relation-obj?" question, the object features for the agent object are activated, and the
network must activate the relationship between this object and the other object (referred to
as the patient object), in addition to activating the location for the agent object. For the
"relation-loc?" question, the location of the agent object is activated, and the network must
activate the relationship between this object and the patient object, in addition to activating
the object features for the agent object.
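As a concrete picture of the task, here is a sketch of sampling the input displays (our own construction; the grid coordinate convention, and fixing the agent as the left/top object, are simplifying assumptions not spelled out in the text):

```python
from itertools import combinations
import random

N_FEATURES, GRID = 8, 4
OBJECTS = list(combinations(range(N_FEATURES), 2))   # the 28 two-feature objects

def sample_display(rng=random):
    """Two distinct objects in adjacent cells.  Rows are assumed to grow
    downward from the top, and the agent is always the left/top object;
    a full generator would also sample the reverse arrangements."""
    while True:
        r, c = rng.randrange(GRID), rng.randrange(GRID)
        dr, dc = rng.choice([(0, 1), (1, 0)])        # horizontal or vertical
        if r + dr < GRID and c + dc < GRID:
            break
    agent, patient = rng.sample(OBJECTS, 2)
    relation = "left" if dc == 1 else "above"        # agent relative to patient
    return {"agent": agent, "agent_loc": (r, c),
            "patient": patient, "patient_loc": (r + dr, c + dc),
            "relation": relation}
```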
This network architecture has a number of nice properties. For example, it has only one
object and location encoding layer, both of which can act as either an input or an output.
This is better than an alternative architecture having separate slots representing the agent
and patient objects, because such slot-based encodings solve the binding problem by having
separate role-specific units, which becomes implausible as the number of different roles and
objects multiply. Note that supporting the dual input/output roles requires an interactive
(recurrent, bidirectionally-connected) network (O'Reilly, 2001, 1998).
[Figure 3 graphics: four panels (a-d) of hidden unit weight diagrams, each showing one unit's weights from the Location, Object, Relation, and Question (what?, where?, rel-obj?, rel-loc?) layers and the Input layer.]
Figure 3: Hidden unit representations (values are weights into a hidden unit from all other layers)
showing units (a & b) that bind object, location, & relationship information via low-order conjunctions, and other units that have systematic representations of location (c) and object features (d).
There are four levels of questions we can ask about this network. First, we can ask if standard neural network learning mechanisms are capable of solving this challenging binding
problem. They are. Second, we can ask whether the network actually develops coarsecoded distributed representations. It does. Third, we can ask if these networks can generalize to novel inputs (both novel objects and novel locations for existing objects). They
can. Finally, we can ask whether there are differences in how well different kinds of
learning algorithms generalize, specifically comparing the Leabra algorithm with purely
error-driven networks, as was recently done in other generalization tests with interactive
networks (O'Reilly, 2001). This paper showed that interactive networks generalize significantly worse than comparable feedforward networks, but that good generalization can be
achieved by adding additional biases or constraints on the learning mechanisms in the form
of inhibitory competition and Hebbian learning in the Leabra algorithm. These results are
replicated here, with Leabra generalization being roughly twice as good as other interactive
algorithms.
[Figure 4 graphics: (a) generalization error on familiar objects and (b) on novel objects, plotted against the number of patients per agent and location for networks trained on 200, 300, or 400 agent-location combinations.]
Figure 4: Generalization results (proportion errors on testing set) for the spatial relationship binding
model using the Leabra algorithm as a function of the number of training items, specified as number
of agent, location combinations and number of patient, locations per each agent, location. a) shows
results for testing on familiar objects in novel locations. b) shows results for testing on novel objects
that were never trained before.
3.1 Detailed Results
First, we examined the representations that developed in the network's hidden layer (Figure 3). Many units encoded low-order combinations (conjunctions) of object, location,
and relationship features (Figure 3a & b). This is consistent with our hypothesis. Other
units also encoded more systematic representations of location without respect to objects
(Figure 3c) and object features without respect to location (Figure 3d).
To test the generalization capacity of the networks, we trained on only 26 of the 28 possible
objects that can be composed out of 8 features with two units active, and only a subset of
all 416 possible agent object x location combinations. We trained on 200, 300, and 400
such combinations. For each agent object-location input, there are 150 different patient
object-location combinations per agent object-location, and we trained on 4, 10, 20, and 40,
selected at random, for each different level of agent object-location combination training.
At the most (400x40) there were a total of 16000 unique inputs trained out of a total possible
of 62400, which amounts to about 1/4 of the training space. At the least (200x4) only
roughly 1.3% of the training space was covered.
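The coverage fractions quoted here follow directly from the counts above; a few lines of arithmetic (ours) make them explicit:

```python
trained_objects, locations, patients_per = 26, 16, 150
total = trained_objects * locations * patients_per      # 26 * 16 * 150 = 62400
for n_agent, n_patient in [(400, 40), (300, 20), (200, 4)]:
    items = n_agent * n_patient
    print(f"{n_agent}x{n_patient}: {items} items, "
          f"{100 * items / total:.1f}% of the space")
```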
The ability of the network to generalize to the 26 familiar objects in novel locations was
tested by measuring performance on a random sample of 640 of the untrained agent objectlocation combinations. The results for the Leabra algorithm are shown in Figure 4a. As one
would expect, the number of training patterns improves generalization in a roughly proportional manner. Importantly, the network is able to generalize to a high level of performance,
getting roughly 95% correct after training on only 25% of the training space (400x40), and
achieving roughly 80% correct after training on only roughly 10% of the space (300x20).
The ability of the network to generalize to novel objects was tested by simply presenting
the two novel objects as agents in all possible locations, with a random sampling of 20
different patients (which were the familiar objects), for a total of 640 different testing items
(Figure 4b). Generalization on these novel objects was roughly comparable to the familiar objects, except there was an apparent ceiling point at roughly 15% generalization error
where the generalization did not improve even with more training. Overall, the network
performed remarkably well on these novel objects, and future work will explore generalization with fewer training objects.
To evaluate the extent to which the additional biologically-motivated biases in the Leabra
algorithm are contributing to these generalization results, we ran networks using the contrastive Hebbian learning algorithm (CHL) and the Almeida-Pineda (AP) recurrent back-
[Figure 5 graphics: generalization error on familiar objects for Leabra versus CHL, plotted against the number of patients per agent and location.]
Figure 5: Generalization results for different algorithms on the spatial relationship binding task (see
previous figure for details on measures) in the 400 x 10 or 20 conditions.
propagation algorithm, as in O'Reilly (2001). Both of these algorithms work in interactive,
bidirectionally-connected networks, which are required for this task. Standard AP was
unable to learn the task, we suspected because it does not preserve the symmetry of the
weights as is required for stable settling. Attempts to rectify this problem by enforcing symmetric weight changes did not succeed either. The results for CHL (Figure 5) replicated earlier results (O'Reilly, 2001) in showing that the additional biases in Leabra produced generalization performance roughly twice as good as CHL's.
4 Discussion
These networks demonstrate that existing, powerful neural network learning algorithms can
learn representations that perform complex relational binding of information. Specifically,
these networks had to bind together object identity, location, and relationship information to answer a number of questions about input displays containing two objects. This
supports our contention that rich distributed representations containing coarse-coded conjunctive encodings can effectively perform binding. It is critical to appreciate that these
distributed representations are highly efficient, encoding over 62400 unique input configurations with only 200 hidden units. Furthermore, these representations are systematic,
in that they support generalization to novel inputs after training on a fraction of the input
space.
Despite these initial successes, more work needs to be done to extend this approach to
other kinds of domains that require binding. One early example of such an application is
the St John and McClelland (1990) sentence gestalt model, which was able to sequentially process words in a sentence and construct a distributed internal representation of the
meaning of the sentence (the sentence gestalt). This model was limited in that it required
extremely large numbers of training trials and an elaborate training control mechanism.
However, these limitations were eliminated in a recent replication of this model based on
the Leabra algorithm (O'Reilly & Munakata, 2000). We plan to extend this model to handle
a more complex corpus of sentences to more fully push the relational binding capacities of
the model.
Finally, it is important to emphasize that we do not think that these low-order conjunctive
representations are entirely sufficient to resolve the binding problems that arise in the cortex. One important additional mechanism is the use of selective attention to focus neural
processing on coherent subsets of information present in the input (e.g., on individual objects, people, or conversations). The interaction between such a selective attentional system
and a complex object recognition system was modeled in O?Reilly and Munakata (2000).
In this model, selective attention was an emergent process deriving from excitatory interac-
tions between a spatial processing pathway and the object processing pathway, combined
with surround inhibition as implemented by inhibitory interneurons. The resulting model
was capable of sequentially processing individual objects when multiple such objects were
simultaneously present in the input.
Acknowledgments
This work was supported by ONR grant N00014-00-1-0246 and NSF grant IBN-9873492.
5 References
Elman, J. L. (1991). Distributed representations, simple recurrent networks, and grammatical structure. Machine Learning, 7, 195-225.
Gray, C. M., Engel, A. K., Konig, P., & Singer, W. (1992). Synchronization of oscillatory neuronal responses in cat striate cortex: temporal properties. Visual Neuroscience, 8, 337-347.
Hinton, G. E., McClelland, J. L., & Rumelhart, D. E. (1986). Distributed representations. In D. E. Rumelhart, J. L. McClelland, & PDP Research Group (Eds.), Parallel distributed processing. Volume 1: Foundations (Chap. 3, pp. 77-109). Cambridge, MA: MIT Press.
Holyoak, K. J., & Hummel, J. E. (2000). The proper treatment of symbols in a connectionist architecture. In E. Dietrich, & A. Markman (Eds.), Cognitive dynamics: Conceptual and representational change in humans and machines. Mahwah, NJ: Lawrence Erlbaum Associates.
Hummel, J. E., & Holyoak, K. J. (1997). Distributed representations of structure: A theory of analogical access and mapping. Psychological Review, 104(3), 427-466.
Mel, B. A., & Fiser, J. (2000). Minimizing binding errors using learned conjunctive features. Neural Computation, 12, 731-762.
O'Reilly, R. C. (1998). Six principles for biologically-based computational models of cortical cognition. Trends in Cognitive Sciences, 2(11), 455-462.
O'Reilly, R. C. (2001). Generalization in interactive networks: The benefits of inhibitory competition and Hebbian learning. Neural Computation, 13, 1199-1242.
O'Reilly, R. C., & Munakata, Y. (2000). Computational explorations in cognitive neuroscience: Understanding the mind by simulating the brain. Cambridge, MA: MIT Press.
Seidenberg, M. S., & McClelland, J. L. (1989). A distributed, developmental model of word recognition and naming. Psychological Review, 96, 523-568.
Shastri, L., & Ajjanagadde, V. (1993). From simple associations to systematic reasoning: A connectionist representation of rules, variables, and dynamic bindings using temporal synchrony. Behavioral and Brain Sciences, 16, 417-494.
St John, M. F., & McClelland, J. L. (1990). Learning and applying contextual constraints in sentence comprehension. Artificial Intelligence, 46, 217-257.
Touretzky, D. S. (1986). BoltzCONS: Reconciling connectionism with the recursive nature of stacks and trees. Proceedings of the 8th Annual Conference of the Cognitive Science Society (pp. 522-530). Hillsdale, NJ: Lawrence Erlbaum Associates.
von der Malsburg, C. (1981). The correlation theory of brain function. MPI Biophysical Chemistry, Internal Report 81-2. In E. Domany, J. L. van Hemmen, & K. Schulten (Eds.), Models of neural networks, II (1994). Berlin: Springer.
Wickelgren, W. A. (1969). Context-sensitive coding, associative memory, and serial order in (speech) behavior. Psychological Review, 76, 1-15.
recently:2 contention:1 volume:1 extend:2 association:1 significant:1 refer:1 surround:1 cambridge:2 tuning:1 had:1 rectify:1 stable:1 access:1 cortex:2 inhibition:1 gt:4 add:1 showed:1 recent:1 moderate:1 driven:1 scenario:1 massively:1 n00014:1 onr:1 success:1 der:2 greater:1 additional:5 redundant:1 signal:1 dashed:1 ii:1 multiple:2 hebbian:3 naming:1 serial:1 coded:7 basic:2 essentially:1 patient:9 achieved:1 addition:2 remarkably:1 separately:3 rest:2 probably:1 seem:1 obj:6 feedforward:1 enough:1 easy:1 psychology:2 architecture:3 domany:1 tradeoff:1 supporter:1 fragile:1 x40:2 whether:2 motivated:1 six:1 speech:1 speaking:1 detailed:1 covered:1 amount:2 processed:1 mcclelland:8 visualized:1 reduced:1 wisely:1 canonical:1 transparently:1 dotted:1 inhibitory:3 nsf:1 neuroscience:2 overly:1 conceived:1 per:7 correctly:2 blue:14 broadly:1 discrete:2 group:2 four:1 achieving:1 fraction:1 powerful:3 place:1 comparable:2 entirely:2 layer:7 bound:1 distinguish:2 display:2 g:6 annual:1 constraint:2 scene:1 encodes:1 argument:1 extremely:1 department:2 structured:10 leabra:9 combination:10 across:1 character:1 b:4 biologically:2 randall:1 boulder:4 ceiling:1 mechanism:9 singer:2 know:1 mind:1 simulating:1 alternative:3 remaining:1 unnoticed:1 reconciling:1 malsburg:2 graded:2 society:1 locs:6 appreciate:1 question:18 primary:2 rt:4 striate:1 exhibit:2 separate:5 link:1 unable:1 capacity:2 attentional:1 berlin:1 argue:1 extent:1 reason:3 enforcing:1 code:2 modeled:1 relationship:18 illustration:1 demonstration:1 minimizing:1 difficult:2 x20:1 shastri:2 proper:1 perform:2 neuron:2 supporting:1 situation:1 relational:9 hinton:2 precise:1 pdp:1 gc:4 stack:1 obj1:3 propositional:2 required:4 specified:1 sentence:6 coherent:1 learned:2 textual:1 able:2 below:1 pattern:7 green:1 memory:1 critical:2 rely:1 settling:1 representing:2 improve:1 featural:2 nice:1 literature:1 review:3 understanding:1 contributing:1 relative:1 synchronization:1 fully:1 expect:1 limitation:2 proportional:1 foundation:1 agent:24 sufficient:1 consistent:1 suspected:1 principle:1 excitatory:1 supported:1 appreciated:1 bias:3 understand:1 distributed:21 grammatical:1 curve:1 benefit:1 cortical:1 van:1 rich:1 doesn:1 replicated:2 avoided:1 far:2 gestalt:2 nov:1 emphasize:1 active:3 sequentially:2 corpus:1 conceptual:1 search:2 seidenberg:2 why:1 table:3 disambiguate:1 object1:1 learn:2 nature:1 symmetry:1 untrained:1 complex:6 domain:5 did:3 main:1 arise:1 mahwah:1 neuronal:1 referred:1 hemmen:1 elaborate:2 schulten:1 third:1 specific:3 showing:2 symbol:2 list:1 essential:1 rel:11 adding:2 effectively:1 milk:1 confuse:1 push:1 simply:2 explore:1 wickelfeatures:1 bidirectionally:2 visual:2 horizontally:2 partially:1 binding:30 springer:1 ma:2 succeed:1 slot:5 identity:2 considerable:1 change:3 specifically:4 except:1 operates:1 total:3 discriminate:1 munakata:5 ucb:2 internal:3 support:4 almeida:1 people:1 arises:1 inability:1 evaluate:1 tested:2 |
1,032 | 1,943 | A Quantitative Model of Counterfactual
Reasoning
Michael Ramscar
Division of Informatics
University of Edinburgh
Edinburgh, Scotland
michael@dai.ed.ac.uk
Daniel Yarlett
Division of Informatics
University of Edinburgh
Edinburgh, Scotland
dany@cogsci.ed.ac.uk
Abstract
In this paper we explore two quantitative approaches to the modelling of counterfactual reasoning, a linear and a noisy-OR model, based on information contained in conceptual dependency networks. Empirical data is acquired in a study and the fit of the models compared to it. We conclude by considering the appropriateness of non-parametric approaches to counterfactual reasoning, and examining the prospects for other parametric approaches in the future.
1 Introduction
If robins didn't have wings would they still be able to fly, eat worms or build nests? Previous work on counterfactual reasoning has tended to characterise the processes by which questions such as these are answered in purely qualitative terms, either focusing on the factors determining their onset and consequences (see Roese, 1997, for a review); the qualitative outline of their psychological characteristics (Kahneman and Miller, 1986; Byrne and Tasso, 1999); or else their logical or schematic properties (Lewis, 1973; Goodman, 1983). And although Pearl (2000) has described a formalism addressing quantitative aspects of counterfactual reasoning, this model has yet to be tested empirically. Furthermore, the non-parametric framework in which it is proposed means certain problems attach to it as a cognitive model, as discussed in Section 6.
To date then, the quantitative processes underlying human counterfactual reasoning have
proven surprisingly recalcitrant to philosophical, psychological and linguistic analysis. In
this paper we propose two parametric models of counterfactual reasoning for a specific
class of counterfactuals: those involving modifications to our conceptual knowledge. The
models we present are intended to capture the constraints operative on this form of inference at the computational level. Having outlined the models, we present a study which
compares their predictions with the judgements of participants about corresponding counterfactuals. Finally, we conclude by raising logistical and methodological doubts about a
non-parametric approach to the problem, and considering future work to extend the current
models.
2 Counterfactuals and Causal Dependencies
One of the main difficulties in analysing counterfactuals is that they refer to alternative
ways that things could be, but it's difficult to specify exactly which alternatives they pick out. For example, to answer the counterfactual question we began this paper with we clearly need to examine the possible states of affairs in which robins don't have wings in
order to see whether they will still be able to fly, eat worms and build nests in them. But the
problem is that we can imagine many possible ways in which robins can be without wings
(for instance, at an extreme we can imagine a situation in which the robin genotype failed to evolve beyond the plankton stage), not all of which will be relevant when it comes to
reasoning counterfactually.
In the alternatives envisaged by a counterfactual some things are clearly going to differ
from the way they are in the actual world, while others are going to remain unchanged.
And specifying which things will be affected, and which things will be unaffected, by
a counterfactual supposition is the crux of the issue. Counterfactual reasoning seems to
revolve around causal dependencies: if something depends on a counterfactual supposition
then it should differ from the way it is in the actual world, otherwise it should remain
just as it is. The challenge is to specify exactly what depends on what in the world (and, crucially, to what degree, if we are interested in the quantitative aspects of counterfactual reasoning) in order that we can arrive at appropriate counterfactual inferences. Clearly
some information about our representation of dependency relations is required.
3 Dependency Information
Fortunately, data is available about people's representations of dependencies, albeit in a
limited domain. As part of an investigation into feature centrality, Sloman, Love and Ahn
(1998) explored the idea that a feature is central to a concept to the degree that other features
depend on it. To test this idea empirically they derived dependency networks for four
concepts (robin, apple, chair and guitar) by asking people to rate on a scale of 0 to 3 how
strongly they thought the features of the four concepts depended on one another. One of
the dependency structures derived from this process is depicted in Figure 1.
4 Parametric Models
The models we present here simulate counterfactual reasoning about a concept by operating on conceptual networks such as the one in Figure 1. A counterfactual supposition
is entertained by setting the activation of the counterfactually manipulated feature to an
appropriate level. Inference then proceeds via an iterative algorithm which propagates the
effect of manipulating the selected feature throughout the network.
In order to do this we make two main assumptions about cause-effect interactions. First we
assume that a node representing an effect, $e$, will be expected to change as a function of (i) the degree to which a node representing its cause, $c$, has itself changed, and (ii) the degree to which $e$ depends on $c$. Second, we also assume that multiple cause nodes, $c_1, \ldots, c_n$, will affect a target node, $e$, independently of one another and in a cumulative fashion. This
means that the proposed models do not attempt to deal with interactions between causes.
The first assumption seems warranted by recent empirical work (Yarlett & Ramscar, in
preparation). And while the second assumption is certainly not true in all instances (interaction effects are certainly possible), there do seem to be multiple schemas that can be
adopted in causal reasoning (Kelley, 1967), and it may be that the parametric assumptions
of the two models correspond to a form of reasoning that predominates.
[Figure 1 appears here: a graph over the features of robin (its nodes include wings, flies, eats worms, builds nests, lays eggs, feathers, beak, red breast, two legs, living, moves, eats, small, and chirps), with arrows between dependent features.]
Figure 1: Dependency network for the concept robin. An arrow drawn from feature A to
feature B means that A depends on B. Note (i) that only the strongest dependency links
are shown, but that all dependency information was used in the simulations; (ii) there
is a numeric strength associated with every dependency connection, although this is not
shown in the diagram; and (iii) the proposed models propagate information in the opposite
direction to the dependency connections.
4.1 Causal Dependency Networks
The dependency networks obtained by Sloman, Love and Ahn (1998) were collected by
asking people to consider features in a pairwise fashion, independently of all other features. However, causal inference requires that the causal impact of multiple features on a
target node be combined. Therefore some preprocessing needs to be done to the raw dependency networks to define a causal dependency network suitable for using in counterfactual
inference. The original dependency networks can each be represented as a matrix $D^k$, in which $D^k_{ij}$ represents the strength with which feature $i$ depends on feature $j$ in concept $k$, as judged by the original participants. The modified causal dependency networks, $C^k$, are defined as follows:

$$C^k_{ij} = \frac{D^k_{ij}}{Z^k_i} \qquad (1)$$

where

$$Z^k_i = \begin{cases} \sum_l D^k_{il} & \text{if } \sum_l D^k_{il} > 3 \\ 3 & \text{otherwise} \end{cases} \qquad (2)$$
This transformation achieves two things. Firstly it normalises the weights to be in the range
0 to 1, instead of the range 0 to 3 that the original ratings occupied. Secondly it normalises
the strength with which each input node is connected to a target node with respect to the
sum of all other inputs to the target. This means that multiple inputs to a target node cannot
activate the target any more than a single input.
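As a concrete illustration of this preprocessing step, here is a minimal Python sketch under the reconstruction of equations (1) and (2) given above; the function name, the NumPy representation, and the threshold of 3 in the normaliser are our assumptions rather than the authors' code.

    import numpy as np

    def causal_dependency_matrix(D):
        """Normalise a raw dependency matrix D (ratings in [0, 3], where
        D[i, j] is how strongly feature i depends on feature j) into a
        causal matrix C whose weights lie in [0, 1] and whose rows (the
        inputs to each target) sum to at most 1."""
        D = np.asarray(D, dtype=float)
        # Divide each row by its total input, but never by less than 3
        # (assumed), so a single input cannot drive a target above 1.
        Z = np.maximum(D.sum(axis=1, keepdims=True), 3.0)
        return D / Z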
4.2 Parametric Propagation Schemes
We can now define how inference proceeds in the two parametric models: the linear and
the noisy-OR models. Let $m$ denote the feature being counterfactually manipulated ("has wings" in our example), and let $a$ be a matrix in which each component $a^{(t)}_i$ represents the amount the model predicts feature $i$ to have changed as a result of the counterfactual modification to $m$, after $t$ iterations. To initialise both models, all predicted levels of change for features other than the manipulated feature, $m$, are initialised to 0:

$$a^{(0)}_i = 0 \quad \text{for all } i \neq m \qquad (3)$$
4.2.1 Linear Model
The update rules for each iteration of the linear model are defined as follows. The manipulated feature is set to an initial activation level of 1, indicating it has been counterfactually modified.¹ All other features have their activations set as specified below:

$$a^{(t+1)}_i = \sum_j C_{ij}\, a^{(t)}_j \qquad (4)$$
This condition states that a feature is expected to change in proportion to the degree to
which the features that influence it have changed, given the initial alteration made to the
manipulated feature $m$, and the degree to which they affect it. The general robustness of
linear models of human judgements (Dawes, 1979) provides grounds for expecting a good
correlation between the linear model and human counterfactual judgements.
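A sketch of one linear update under the notation reconstructed above; the clamping of the manipulated feature anticipates section 4.2.3, and the vectorised form is our choice.

    import numpy as np

    def linear_step(C, a, m, value=1.0):
        """One iteration of the linear model (equation 4): each feature's
        predicted change is the dependency-weighted sum of the changes of
        the features it depends on, with feature m held at `value`."""
        a_next = C @ a          # a_next[i] = sum_j C[i, j] * a[j]
        a_next[m] = value       # clamp the counterfactual manipulation
        return a_next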
4.2.2 Noisy-OR Model
The second model uses the noisy-OR gate (Pearl, 1988) to describe the propagation of
information in causal inference. The noisy-OR gate assumes that each cause has an independent probability of failing to produce the effect, and that the effect will only be absent
if all its associated causes fail to produce it. In the counterfactual model noisy-OR propagation is therefore formalised as follows:
$$a^{(t+1)}_i = 1 - \prod_j \left(1 - C_{ij}\, a^{(t)}_j\right) \qquad (5)$$
The questions people were asked to validate the two models measured how strongly they
would believe in different features of a concept, if a specific feature was subtracted. This
can be interpreted as the degree to which their belief in the target feature would vary given
the presence and the absence of the manipulated feature. Accordingly, the output of the
noisy-OR model was the difference in activation of each node when the manipulated node
was set to 1 and 0 respectively.²
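A corresponding sketch of one noisy-OR update; the model's output is then the difference between the activations reached with the manipulated feature clamped to 1 and to 0 (see footnote 2). Again, names and vectorisation are our assumptions.

    import numpy as np

    def noisy_or_step(C, a, m, value=1.0):
        """One iteration of the noisy-OR model (equation 5): feature i
        changes unless every cause j independently fails to change it,
        each cause succeeding with probability C[i, j] * a[j]."""
        a_next = 1.0 - np.prod(1.0 - C * a[None, :], axis=1)
        a_next[m] = value       # clamp the counterfactual manipulation
        return a_next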
4.2.3 Clamping
Because of the existence of loops in the dependency networks, if the counterfactually manipulated node is not clamped to its initial value activation can feed back through the network and change this value. This is likely to be undesirable, because it will mean the
network will converge to a state in which the required counterfactual manipulation has not
been successfully maintained, and that therefore its consequences have not been properly
assimilated. The empirical performance of the two models was therefore considered when
the activation of the manipulated node was clamped to its initial value, and not clamped. The clamping constraint bears a close similarity to Pearl's (2000) $do(\cdot)$ operator, which prevents causes of a random variable affecting its value when an intervention has occurred in order to bring that value about.

¹ Note that the performance of the linear model does not depend crucially on the activation of $m$ being set to 1, as solutions for $a$ at convergence are simply multiples of the initial value selected and hence will not affect the correlational results.

² This highlights an interesting difference in the output of the two models: the linear model outputs the degree to which a feature is expected to change as a result of a counterfactual manipulation directly, whereas the noisy-OR model outputs probabilities which need to be converted into an expected degree of change (expressed in Pearl's causal calculus as $P(e \mid do(m{=}1)) - P(e \mid do(m{=}0))$).
4.2.4 Convergence
Propagation continues in both models until the activations for the features converge:

$$\left|a^{(t+1)}_i - a^{(t)}_i\right| < \epsilon \quad \text{for all } i \qquad (6)$$
The models thus offer a spreading activation account of the changes induced in a conceptual
network as a result of a counterfactual manipulation, their iterative nature allowing the
effect of non-local influences to be accommodated.
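Putting the pieces together, a minimal driver that clamps the manipulated feature and iterates either of the update rules sketched above until the convergence test of equation (6) is met; the tolerance and iteration cap are our choices.

    import numpy as np

    def propagate(step, C, m, value=1.0, tol=1e-6, max_iters=10_000):
        """Iterate a propagation rule (linear_step or noisy_or_step above)
        from the counterfactual manipulation until convergence."""
        a = np.zeros(C.shape[0])
        a[m] = value
        for _ in range(max_iters):
            a_next = step(C, a, m, value)
            if np.max(np.abs(a_next - a)) < tol:   # equation (6)
                return a_next
            a = a_next
        return a

    # Noisy-OR output per footnote 2: contrast do(m) = 1 with do(m) = 0.
    # delta = propagate(noisy_or_step, C, m, 1.0) - propagate(noisy_or_step, C, m, 0.0)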
5 Testing the Models
In order to test the validity of the two models we empirically studied people's intuitions
about how they would expect concepts to change if they no longer possessed characteristic
features. For example, participants were asked to imagine that robins did not in fact have
wings. They were then asked to rate how strongly they agreed or disagreed with statements
such as "If robins didn't have wings, they would still be able to fly". The task clearly requires participants to engage in counterfactual reasoning: robins do in fact have wings, in normal contexts at least, so participants are required to modify their standard conceptual
representation in order to find out how this affects their belief in the other aspects of robins.
5.1 Method
Three features were chosen from each of the four concepts for which dependency information was available. These features were selected as having low, medium and high levels
of centrality, as reported by Sloman, Love and Ahn (1998, Study 1). This was to ensure
that counterfactuals revolving around more and less important features of a concept were
considered in the study.
Each selected feature formed the basis of a counterfactual manipulation. For example, if
the concept was robin and the selected feature was "has wings", then subjects were asked to imagine that robins didn't have wings. Participants were then asked how strongly they believed that the concept in question would still possess each of its remaining features if it no longer possessed the selected feature. For example, they would read "If robins didn't have wings, they would still be able to fly" and be asked to rate how strongly they agreed
with it.
Ratings were elicited on a 1-7 point scale anchored by "strongly disagree" at the lower end and "strongly agree" at the upper end. The ratings provided by participants can be regarded
as estimates of how much people expect the features of a concept to change if the concept
were counterfactually modified in the specified way. If the models are good ones we would
therefore expect there to be a correlation between their predictions and the judgements of
the participants.
5.2 Design and Materials
Participants were randomly presented with 4 of the 12 counterfactual manipulations, and
were asked to rate their agreement with counterfactual statements about the remaining,
Counterfactual Concept          n    Linear Model            Noisy-OR Model
                                     Clamped    Non-Clamped  Clamped    Non-Clamped
robin-wings                     13   -0.870**   -0.044       -0.739**   -0.062
robin-lays-eggs                 13   -0.521*    -0.105       -0.278      0.121
robin-eats-worms                13   -0.066     -0.069       -0.009     -0.017
chair-back                       8   -0.451      0.191       -0.178      0.148
chair-arms                       8   -0.530      0.042       -0.358      0.036
chair-holds-people               8   -0.815**   -0.928**     -0.917**   -0.957**
guitar-neck                      8   -0.760*    -0.242       -0.381     -0.181
guitar-makes-sound               8   -0.889**   -0.920**     -0.939**    0.895**
guitar-used-by-music-groups      8    0.235      0.225        0.290      0.263
apple-grows-on-trees             8   -0.748*    -0.838**     -0.905**   -0.921**
apple-edible                     8   -0.207      0.361       -0.288      0.000
apple-stem                       8   -0.965**   -0.948**     -0.961**   -0.893**
Mean                                 -0.549     -0.273       -0.472     -0.131
Table 1: The correlation between the linear and noisy-OR models, in the clamped and non-clamped conditions, with participants' empirical judgements about corresponding inferences. All comparisons were one-tailed (* p < .05; ** p < .01).
unmanipulated features of the concept. People read an introductory passage for each inference in which they were asked to "Imagine that robins didn't have wings. If this was true, how much would you agree or disagree with the following statements..." They were then
asked to rate their agreement with the specific inferences.
5.3 Participants
38 members of the Division of Informatics, University of Edinburgh, took part in the study.
All participants were volunteers, and no reward was offered for participation.
5.4 Results
The correlation of the two models, in the clamped and non-clamped conditions, is shown in Table 1. A repeated-measures ANOVA revealed that there was a main effect of clamping, no main effect of propagation method, and no interaction effect. The correlations of both the linear (Wilcoxon Test, one-tailed) and the noisy-OR model (Wilcoxon Test, one-tailed) differed significantly from 0 when clamping was used.
5.5 Discussion
The simulation results show that clamping is necessary to the success of the counterfactual
models; this thus constitutes an empirical validation of Pearl's use of the $do(\cdot)$
operator in modelling counterfactuals. In addition, both the models capture the empirical
patterns with some degree of success, so further work is required to tease them apart.
6 Exploring Non-Parametric Approaches
The models of counterfactual reasoning we have presented both make parametric assumptions. Although non-parametric models in general offer greater flexibility, there are two
main reasons, one logistical and one methodological, why applying them in this context
may be problematic.
6.1 A Logistical Reason: Conditional Probability Tables
Bayesian Belief Networks (BBNs) define conditional dependence relations in terms of
graph structures like the dependency structures used by the present model. This makes
them an obvious choice of normative model for counterfactual inference. However, there
are certain problems that make the application of a non-parametric BBN to counterfactual
reasoning problematic.
For non-parametric inference a joint conditional probability table needs to be defined for
all the variables $x_1, x_2, \ldots, x_n$ upon which a target node $y$ is conditioned. In other words, it's not sufficient to know $P(y \mid x_1), P(y \mid x_2), \ldots, P(y \mid x_n)$ alone; instead, $P(y \mid x_1, x_2, \ldots, x_n)$ is required. This leads to a combinatorial explosion in the number of parameters required. If $c$ is a vector of $n$ elements in which $c_i$ represents the number of discrete classes that the random variable $x_i$ can take, then the number of conditional probabilities required to compute the interaction between $y$ and $x_1, \ldots, x_n$ in the general case is:

$$\prod_{i=1}^{n} c_i \qquad (7)$$
On the assumption that features can normally be represented by two classes (present
or absent), the number of probability judgements required to successfully apply a non-parametric BBN to all four of Sloman, Love and Ahn's (1998) concepts is 3888. Aside
from the obvious logistical difficulties in obtaining estimates of this number of parameters
from people, attribution theorists suggest that simplifying assumptions are often made in
causal inference (Kelley, 1972). If this is the case then it should be possible to specify a
parametric model which appropriately captures these patterns, as we have attempted to do
with the models in this paper, thus obviating the need for a fully general non-parametric
approach.
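The combinatorial point is easy to check numerically; a tiny sketch of the per-target parameter count under our reconstruction of equation (7):

    from math import prod

    def cpt_size(class_counts):
        """Conditional probabilities needed to condition one target node
        jointly on n variables: one entry per joint configuration of the
        causes (equation 7)."""
        return prod(class_counts)

    print(cpt_size([2] * 5))    # 5 binary causes  -> 32 entries
    print(cpt_size([2] * 10))   # 10 binary causes -> 1024 entries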
6.2 A Methodological Reason: Patterns of Interaction
Parametric models are special cases of non-parametric models: this means that a non-parametric model will be able to capture patterns of interaction between causes that a parametric model may be unable to express. A risk concomitant with the generality of non-parametric models is that they can gloss over important limitations in human inference.
Although a non-parametric approach, with exhaustively estimated conditional probability
parameters, would likely fit people's counterfactual judgements satisfactorily, it would not
inform us about the limitations in our ability to process causal interactions. A parametric
approach, however, allows one to adopt an incremental approach to modelling in which
such limitations can be made explicit: parametric models can be generalised when there is
empirical evidence that they fail to capture a particular kind of interaction. Parametric approaches go hand-in-hand, then, with an empirical investigation of our treatment of causal
interactions. Obtaining a good fit with data is not of sole importance in cognitive modelling: it is also important for the model to make explicit the assumptions it is predicated
on, and parametric approaches allow this to be done, hopefully making causal principles
explicit which would otherwise lie latent in an exhaustive conditional probability table.
7 Closing Thoughts
Given the lack of quantitative models of counterfactual reasoning, we believe the models
we have presented in this paper constitute a significant contribution to our understanding
of this process. Notably, the models achieved a significant correlation across a sizeable
data-set (111 data-points), with no free parameters. However, there are limitations to the
current models. As stated, the models both assume that causal factors contribute independently to a target factor, and this is clearly not always the case. Although a non-parametric
Bayesian model with an exhaustive conditional probability table could accommodate all
possible interaction effects between causal factors, as argued in the previous section, this
would not necessarily be all that enlightening. It is up to further empirical work to unearth
the principles underpinning our processing of causal interactions (e.g., Kelley, 1972); these
principles can then be made explicit in future parametric models to yield a fuller understanding of human inference. In the future we intend to examine our treatment of causal
interactions empirically, in order to reach a better understanding of the appropriate way to
model counterfactual reasoning.
Acknowledgements
We would like to thank Tom Griffiths, Brad Love, Steven Sloman and Josh Tenenbaum for
their discussion of the ideas presented in this paper.
References
[1] Byrne R.M.J. and Tasso A. (1999). Counterfactual Reasoning with Factual, Possible, and Counterfactual Conditionals, Memory & Cognition, 27(4), 726-740.
[2] Dawes R.M. (1979). The Robust Beauty of Improper Linear Models in Decision Making, American Psychologist, 34, 571-582.
[3] Goodman N. (1983; 4th edition). Fact, Fiction, and Forecast, Harvard University Press, Cambridge, Massachusetts.
[4] Griffiths T. (August 2001). Assessing Interventions in Linear Belief Networks.
[5] Kahneman D. and Miller D.T. (1986). Norm Theory: Comparing Reality to its Alternatives,
Psychological Review, 93(2), 136-153.
[6] Kahneman D., Slovic P. and Tversky A. (1982; eds.). Judgment Under Uncertainty: Heuristics
and Biases, Cambridge University Press, Cambridge, UK.
[7] Kelley H.H. (1972). Causal Schemata and the Attribution Process. In Jones, Kanouse, Kelley,
Nisbett, Valins and Weiners (eds.), Attribution: Perceiving the Causes of Behavior, Chapter 9, 151174, General Learning Press, Morristown, New Jersey.
[8] Lewis D.K. (1973). Counterfactuals, Harvard University Press, Cambridge, Massachusetts.
[9] Murphy K.P., Weiss Y. and Jordan M.I. (1999). Loopy Belief Propagation for Approximate
Inference: An Empirical Study, Proceedings of Uncertainty in AI, 467-475.
[10] Pearl J. (1988). Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference,
Morgan Kaufmann, San Mateo, California.
[11] Pearl J. (2000). Causality: Models, Reasoning, and Inference, Cambridge University Press,
Cambridge.
[12] Roese N.J. (1997). Counterfactual Thinking, Psychological Bulletin, 121, 133-148.
[13] Sloman S., Love B.C. and Ahn W.K. (1998). Feature Centrality and Conceptual Coherence,
Cognitive Science, 22(2), 189-228.
[14] Yarlett D.G. and Ramscar M.J.A. (2001). Structural Determinants of Counterfactual Reasoning,
Proceedings of the 23rd Annual Conference of the Cognitive Science Society, 1154-1159.
[15] Yarlett D.G. and Ramscar M.J.A. (in preparation). Uncertainty in Causal and Counterfactual
Inference.
1,033 | 1,944 | Convergence of Optimistic and
Incremental Q-Learning
Eyal Even-Dar*
Yishay Mansour†
Abstract
We show the convergence of two deterministic variants of Q-learning. The first is the widely used optimistic Q-learning, which initializes the Q-values to large initial values and then follows a greedy policy with respect to the Q-values. We show that setting the initial value sufficiently large guarantees convergence to an $\epsilon$-optimal policy. The second is a new and novel algorithm, incremental Q-learning, which gradually promotes the values of actions that are not taken. We show that incremental Q-learning converges, in the limit, to the optimal policy. Our incremental Q-learning algorithm can be viewed as a derandomization of $\epsilon$-greedy Q-learning.
1
Introduction
One of the challenges of Reinforcement Learning is learning in an unknown environment. The environment is modeled by an MDP and we can only observe the
trajectory of states, actions and rewards generated by the agent wandering in the
MDP. There are two basic conceptual approaches to the learning problem. The first
is model based, where we first reconstruct a model of the MDP, and then find an optimal policy for the approximate model. Recently polynomial time algorithms have been developed for this approach, initially in [7] and later extended in [3].
The second are direct methods that update their estimated policy after each step.
The most popular of the direct methods is Q-learning [13].
Q-learning uses the information observed to approximate the optimal value function,
from which one can construct an optimal policy. There are various proofs that Q-learning converges, in the limit, to the optimal value function, under very mild conditions [1, 11, 12, 8, 6, 2]. In a recent result the convergence rates of Q-learning are computed and an interesting dependence on the learning rates is exhibited [4]. Q-learning is an off-policy algorithm that can be run on top of any strategy. Although it is an off-policy algorithm, in many cases its estimated value function is used to guide the selection of actions. Being always greedy with respect to the value function may result in poor performance, due to the lack of exploration, and often randomization is used to guarantee proper exploration.
We show the convergence of two deterministic strategies. The first is optimistic
*School of Computer Science, Tel-Aviv University, Tel-Aviv, Israel. evend@cs.tau.ac.il
†School of Computer Science, Tel-Aviv University, Israel. mansour@cs.tau.ac.il
Q-learning, which initializes the estimates to large values and then follows a greedy policy. Optimistic Q-learning is widely used in applications and has been recognized as having good convergence in practice [10]. We prove that optimistic Q-learning, with the right setting of initial values, converges to a near optimal policy. This is not the first theoretical result showing that optimism helps in reinforcement learning; however, previous results were concerned with model based methods [7, 3]. We show the convergence of the widely used optimistic Q-learning, thus explaining and supporting the results observed in practice.
Our second result is a new and novel deterministic algorithm, incremental Q-learning, which gradually promotes the values of actions that are not taken. We show that the frequency of sub-optimal actions vanishes, in the limit, and that the strategy defined by incremental Q-learning converges, in the limit, to the optimal policy (rather than a near optimal policy). Another view of incremental Q-learning is as a derandomization of $\epsilon$-greedy Q-learning. The $\epsilon$-greedy Q-learning performs a sub-optimal action every $1/\epsilon$ times in expectation, while incremental Q-learning performs a sub-optimal action every $(Q(s, a(s)) - Q(s, b))/\epsilon$ times. Furthermore, by taking the appropriate values it can be made similar to the Boltzmann machine.
2
The Model
We define a Markov Decision process (MDP) as follows
Definition 2.1 A Markov Decision process (MDP) $M$ is a 4-tuple $(S, A, P, R)$, where $S$ is a set of states, $A$ is a set of actions, $P_{i,j}(a)$ is the transition probability from state $i$ to state $j$ when performing action $a \in A$ in state $i$, and $R_M(s, a)$ is the reward received when performing action $a$ in state $s$.

A strategy for an MDP assigns, at each time $t$, for each state $s$ a probability for performing action $a \in A$, given a history $F_{t-1} = \{s_1, a_1, r_1, \ldots, s_{t-1}, a_{t-1}, r_{t-1}\}$ which includes the states, actions and rewards observed until time $t - 1$. While executing a strategy $\pi$ we perform at time $t$ action $a_t$ in state $s_t$ and observe a reward $r_t$ (distributed according to $R_M(s, a)$), and the next state $s_{t+1}$, distributed according to $P_{s_t, s_{t+1}}(a_t)$. We combine the sequence of rewards into a single value called the return, and our goal is to maximize the return. In this work we focus on the discounted return, which has a parameter $\gamma \in (0, 1)$; the discounted return of policy $\pi$ is $V^{\pi}_M = \sum_{t=0}^{\infty} \gamma^t r_t$, where $r_t$ is the reward observed at time $t$.

We assume that $R_M(s, a)$ is non-negative and bounded by $R_{max}$, i.e., $\forall s, a: 0 \leq R_M(s, a) \leq R_{max}$. This implies that the discounted return is bounded by $V_{max} = R_{max}/(1 - \gamma)$.

We define a value function for each state $s$, under policy $\pi$, as $V^{\pi}_M(s) = E[\sum_{i=0}^{\infty} \gamma^i r_i]$, where the expectation is over a run of policy $\pi$ starting at state $s$, and a state-action value function $Q^{\pi}_M(s, a) = E[R_M(s, a)] + \gamma \sum_{s'} P_{s,s'}(a) V^{\pi}_M(s')$.
Let $\pi^*$ be an optimal policy which maximizes the return from any start state. This implies that for any policy $\pi$ and any state $s$ we have $V^{\pi^*}_M(s) \geq V^{\pi}_M(s)$, and $\pi^*(s) = \arg\max_a \left(E[R_M(s, a)] + \gamma \sum_{s'} P_{s,s'}(a) V^*(s')\right)$. We use $V^*_M$ and $Q^*_M$ for $V^{\pi^*}_M$ and $Q^{\pi^*}_M$, respectively. We say that a policy $\pi$ is $\epsilon$-optimal if $\|V^{\pi^*}_M - V^{\pi}_M\|_{\infty} \leq \epsilon$.
Given a trajectory let $T_{s,a}$ be the set of times at which we perform action $a$ in state $s$, $T_s = \bigcup_a T_{s,a}$ be the times when state $s$ is visited, $T_{s,\mathrm{not}(a)} = T_s \setminus T_{s,a}$ be the set of times where in state $s$ an action $a' \neq a$ is performed, and $T_{\mathrm{not}(s)} = \bigcup_{s' \neq s} T_{s'}$ be the set of times at which a state $s' \neq s$ is visited. Also, $[\#(s, a, t)]$ is the number of times action $a$ is performed in state $s$ up to time $t$, i.e., $|T_{s,a} \cap [1, t]|$.
Finally, throughout the paper we assume that the MDP is a uni-chain (see [9]),
namely that from every state we can reach any other state.
3
Q-Learning
The Q-Learning algorithm [13] estimates the state-action value function (for discounted return) as follows:
$$Q_{t+1}(s, a) = (1 - \alpha_t(s, a)) Q_t(s, a) + \alpha_t(s, a)(r_t(s, a) + \gamma V_t(s'))$$

where $s'$ is the state reached from state $s$ when performing action $a$ at time $t$, and $V_t(s) = \max_a Q_t(s, a)$. We assume that $\alpha_t(s', a') = 0$ for $t \notin T_{s',a'}$. A learning rate $\alpha_t$ is well-behaved if for every state-action pair $(s, a)$: (1) $\sum_{t=1}^{\infty} \alpha_t(s, a) = \infty$ and (2) $\sum_{t=1}^{\infty} \alpha_t^2(s, a) < \infty$. If the learning rate is well-behaved and every state-action pair is performed infinitely often then Q-learning converges to $Q^*$ with probability 1 (see [1, 11, 12, 8, 6]).
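A minimal sketch of this update in Python; the tabular arrays, the visit-count bookkeeping, and the polynomial learning-rate form (used later in the paper) are our choices.

    import numpy as np

    def q_update(Q, counts, s, a, r, s_next, gamma, omega=1.0):
        """One Q-learning update with learning rate
        alpha_t(s, a) = (1 / #(s, a, t)) ** omega."""
        counts[s, a] += 1
        alpha = (1.0 / counts[s, a]) ** omega
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])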
The convergence of Q-learning holds using any exploration policy, and only requires that each state-action pair is executed infinitely often. The greedy policy with respect to the Q-values tries to exploit continuously; however, since it does not explore properly, it might result in poor return. At the other extreme, the random policy continuously explores, but its actual return may be very poor. An interesting compromise between the two extremes is the $\epsilon$-greedy policy, which is widely used in practice [10]. This policy executes the greedy policy with probability $1 - \epsilon$ and the random policy with probability $\epsilon$. This balance between exploration and exploitation both guarantees convergence and often good performance. Common to many of the exploration techniques is the use of randomization, which is also a very natural choice. In this work we explore strategies which perform exploration but avoid randomization, using deterministic strategies.
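For contrast with the deterministic strategies studied below, a sketch of the randomized $\epsilon$-greedy action selection:

    import numpy as np

    def epsilon_greedy(Q, s, eps, rng=None):
        """Greedy action with probability 1 - eps, uniform otherwise."""
        rng = rng or np.random.default_rng()
        if rng.random() < eps:
            return int(rng.integers(Q.shape[1]))
        return int(Q[s].argmax())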
4
Optimistic Q-Learning
Optimistic Q-learning is a simple greedy algorithm with respect to the Q-values,
where the initial Q-values are set to large values, larger than their optimal values.
We show that optimistic Q-learning converges to an $\epsilon$-optimal policy if the initial Q-values are set sufficiently large.

Let $\beta_T = \prod_{i=1}^{T}(1 - \alpha_i)$. We set the initial conditions of the Q-values as follows:

$$\forall s, a: \quad Q_0(s, a) = \frac{1}{\beta_T} V_{max},$$

where $T = T(\epsilon, \delta, |S|, |A|, \alpha)$ will be specified later. Let $\eta_{i,\tau} = \alpha_i \beta_\tau / \beta_i = \alpha_i \prod_{j=i+1}^{\tau}(1 - \alpha_j)$. Note that

$$Q_{t+1}(s, a) = (1 - \alpha_\tau) Q_t(s, a) + \alpha_\tau (r_\tau + \gamma V_t(s')) = \beta_\tau Q_0(s, a) + \sum_{i=1}^{\tau} \eta_{i,\tau} r_i(s, a) + \gamma \sum_{i=1}^{\tau} \eta_{i,\tau} V_{t_i}(s_i),$$

where $\tau = [\#(s, a, t)]$ and $s_i$ is the next state arrived at time $t_i$ when action $a$ is performed for the $i$th time in state $s$.
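A sketch of the optimistic initialization and the purely greedy behaviour that follows it; we assume learning rates strictly below 1 (here $\alpha_i = 1/(i+1)$) so that $\beta_T > 0$, which is our assumption rather than the paper's exact schedule.

    import numpy as np

    def optimistic_init(n_states, n_actions, v_max, T):
        """Q_0(s, a) = V_max / beta_T, beta_T = prod_{i<=T} (1 - alpha_i)."""
        alphas = 1.0 / (np.arange(1, T + 1) + 1.0)   # assumed alpha_i = 1/(i+1)
        beta_T = np.prod(1.0 - alphas)               # equals 1 / (T + 1) here
        return np.full((n_states, n_actions), v_max / beta_T)

    def greedy_action(Q, s):
        """Optimistic Q-learning then simply acts greedily on Q."""
        return int(Q[s].argmax())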
First we show that as long as action $a$ has been performed at most $T$ times in state $s$, i.e., $\tau = [\#(s, a, t)] \leq T$, we have $Q_t(s, a) \geq V_{max}$. Later we will use this to show that action $a$ is performed at least $T$ times in state $s$.

Lemma 4.1 In optimistic Q-learning, for any state $s$, action $a$ and time $t$ such that $\tau = [\#(s, a, t)] \leq T$, we have $Q_t(s, a) \geq V_{max} \geq Q^*(s, a)$.

Lemma 4.1 follows from the following observation:

$$Q_t(s, a) = \beta_\tau Q_0(s, a) + \sum_{i=1}^{\tau} \eta_{i,\tau} r_i(s, a) + \gamma \sum_{i=1}^{\tau} \eta_{i,\tau} V_{t_i}(s_i) \geq \beta_\tau Q_0(s, a) = \frac{\beta_\tau}{\beta_T} V_{max} \geq V_{max} \geq V^*(s).$$
Now we bound $T$ as a function of the algorithm parameters (i.e., $\epsilon$, $\delta$, $|S|$, $|A|$) and the learning rate. We need to set $T$ large enough to guarantee that with probability $1 - \delta$, after any $t > T$ updates using the given learning rate, the deviation from the true value is at most $\epsilon$. Formally, given a sequence $X_t$ of i.i.d. random variables with zero mean and bounded by $V_{max}$, and a learning rate $\alpha_t = (1/[\#(s, a, t)])^{\omega}$, let $Z_{t+1} = (1 - \alpha_t) Z_t + \alpha_t X_t$. A time $T(\epsilon, \delta)$ is an initialization time if $\Pr[\forall t \geq T: Z_t \leq \epsilon] \geq 1 - \delta$. The following lemma bounds the initialization time as a function of the parameter $\omega$ of the learning rate.

Lemma 4.2 The initialization time for $X_t$ and $\alpha_t$ is at most $T(\epsilon, \delta) = c\left((V_{max}/\epsilon)^{1/\omega}\left(\ln(1/\delta) + \ln(V_{max}/\epsilon)\right)\right)$, for some constant $c$.
We define a modified process, in which we update using the optimal value function,
rather than our current estimate. For $t \geq 1$ we have

$$\hat{Q}_{t+1}(s, a) = (1 - \alpha_t(s, a)) \hat{Q}_t(s, a) + \alpha_t(s, a)(r_t(s, a) + \gamma V^*(s')),$$

where $s'$ is the next state. The following lemma bounds the difference between $Q^*$ and $\hat{Q}_t$.

Lemma 4.3 Consider optimistic Q-learning and let $T = T(\epsilon, \delta)$ be the initialization time. Then with probability $1 - \delta$, for any $t > T$, we have $Q^*(s, a) - \hat{Q}_t(s, a) \leq \epsilon$.
Proof: Let $\tau = [\#(s, a, t)]$. By definition we have

$$\hat{Q}_t(s, a) = \beta_\tau Q_0(s, a) + \sum_{i=1}^{\tau} \eta_{i,\tau} r_i + \gamma \sum_{i=1}^{\tau} \eta_{i,\tau} V^*(s_i).$$

This implies that

$$Q^*(s, a) - \hat{Q}_t(s, a) = -\beta_\tau Q_0(s, a) + error_r[s, a, t] + error_v[s, a, t],$$

where $error_r[s, a, t] = E[R(s, a)] - \sum_{i=1}^{\tau} \eta_{i,\tau} r_i$ and $error_v[s, a, t] = \gamma\left(E[V^*(s') \mid s, a] - \sum_{i=1}^{\tau} \eta_{i,\tau} V^*(s_i)\right)$. We bound both $error_r[s, a, t]$ and $error_v[s, a, t]$ using Lemma 4.2. Therefore, with probability $1 - \delta$, we have $Q^*(s, a) - \hat{Q}_t(s, a) \leq \epsilon$ for any $t \geq T$. Q.E.D.
Next we bound the difference between our estimate $V_t(s)$ and $V^*(s)$.

Lemma 4.4 Consider optimistic Q-learning and let $T = T((1-\gamma)\epsilon, \delta/(|S||A|))$ be the initialization time. With probability at least $1 - \delta$, for any state $s$ and time $t$, we have $V^*(s) - V_t(s) \leq \epsilon$.
Proof: By Lemma 4.3 we have that with probability $1 - \delta$, for every state $s$, action $a$ and time $t$, $Q^*(s, a) - \hat{Q}_t(s, a) \leq (1-\gamma)\epsilon$. We show by induction on $t$ that $V^*(s) - V_t(s) \leq \epsilon$ for every state $s$. For $t = 0$ we have $V_0(s) \geq V_{max}$ and hence the claim holds. For the inductive step, assume it holds up to time $t$ and show that it holds for time $t + 1$. Let $(s, a)$ be the state-action pair executed at time $t + 1$. If $[\#(s, a, t+1)] \leq T$ then by Lemma 4.1, $V_{t+1}(s) \geq V_{max} \geq V^*(s)$, and the induction claim holds. Otherwise, let $a^*$ be the optimal action at state $s$; then

$$V^*(s) - V_{t+1}(s) \leq Q^*(s, a^*) - Q_{t+1}(s, a^*) = Q^*(s, a^*) - \hat{Q}_{t+1}(s, a^*) + \hat{Q}_{t+1}(s, a^*) - Q_{t+1}(s, a^*) \leq (1-\gamma)\epsilon + \gamma \sum_{i=1}^{\tau} \eta_{i,\tau}\left(V^*(s_i) - V_{t_i}(s_i)\right),$$

where $\tau = [\#(s, a, t)]$, $t_i$ is the time when action $a$ is performed for the $i$-th time in state $s$, and $s_i$ is the corresponding next state. Since $t_i \leq t$, by the inductive hypothesis we have that $V^*(s_i) - V_{t_i}(s_i) \leq \epsilon$, and therefore

$$V^*(s) - V_{t+1}(s) \leq (1 - \gamma)\epsilon + \gamma\epsilon = \epsilon. \quad \text{Q.E.D.}$$
Lemma 4.5 Consider optimistic Q-learning and let $T = T((1-\gamma)\epsilon, \delta/(|S||A|))$ be the initialization time. With probability at least $1 - \delta$, any state-action pair $(s, a)$ that is executed infinitely often is $\epsilon$-optimal, i.e., $V^*(s) - Q^*(s, a) \leq \epsilon$.

Proof: Given a trajectory let $U'$ be the set of state-action pairs that are executed infinitely often, and let $M'$ be the original MDP $M$ restricted to $U'$. For $M'$ we can use the classical convergence proofs, and claim that $V_t(s)$ converges to $V_{M'}(s)$ and that $Q_t(s, a)$, for $(s, a) \in U'$, converges to $Q_{M'}(s, a)$, both with probability 1. Since $(s, a) \in U'$ is performed infinitely often, $Q_t(s, a)$ converges to $V_t(s) = V_{M'}(s)$ and therefore $Q_{M'}(s, a) = V_{M'}(s)$. By Lemma 4.4, with probability $1 - \delta$ we have that $V^*_M(s) - V_t(s) \leq \epsilon$; therefore $V^*_M(s) - Q^*_M(s, a) \leq V^*_M(s) - Q_{M'}(s, a) \leq \epsilon$. Q.E.D.
A simple corollary is that if we set $\epsilon$ small enough, e.g., $\epsilon < \min_{(s,a)}\{V^*(s) - Q^*(s,a) \mid V^*(s) \neq Q^*(s,a)\}$, then optimistic Q-learning converges to the optimal policy. Another simple corollary is the following theorem.

Theorem 4.6 Consider optimistic Q-learning and let $T = T((1-\gamma)\epsilon, \delta/(|S||A|))$ be the initialization time. For any constant $\Delta$, with probability at least $1 - \delta$ there is a time $T_\Delta > T$ such that at any time $t > T_\Delta$ the strategy defined by optimistic Q-learning is $(\epsilon + \Delta)/(1 - \gamma)$-optimal.
5
Incremental Q-Learning
In this section we describe a new algorithm that we call incremental Q-learning. The
main idea of the algorithm is to achieve a deterministic tradeoff between exploration
and exploitation.
Incremental Q-learning is a greedy policy with respect to the estimated Q-values plus a promotion term. The promotion term of a state-action pair $(s, a)$ is promoted each time the action $a$ is not executed in state $s$, and zeroed each time action $a$ is executed. We show that in incremental Q-learning every state-action pair is taken infinitely often, which implies standard convergence of the estimates. We show that the fraction of time in which sub-optimal actions are executed vanishes in the limit. This implies that the strategy defined by incremental Q-learning converges, in the limit, to the optimal policy. Incremental Q-learning estimates the Q-function as in Q-learning:

$$Q_{t+1}(s, a) = (1 - \alpha_t(s, a)) Q_t(s, a) + \alpha_t(s, a)(r_t(s, a) + \gamma V_t(s')),$$

where $s'$ is the next state reached when performing action $a$ in state $s$ at time $t$. The promotion term $A_t$ is defined as follows:

$$A_{t+1}(s, a) = 0: \quad t \in T_{s,a}$$
$$A_{t+1}(s, a) = A_t(s, a) + \psi([\#(s, a, t)]): \quad t \in T_{s,\mathrm{not}(a)}$$
$$A_{t+1}(s, a) = A_t(s, a): \quad t \in T_{\mathrm{not}(s)},$$

where $\psi(i)$ is a promotion function which in our case depends only on the number of times we performed $(s, a')$, $a' \neq a$, since the last time we performed $(s, a)$. We say that a promotion function $\psi$ is well-behaved if: (1) the function $\psi$ converges to zero, i.e., $\lim_{i \to \infty} \psi(i) = 0$, and (2) $\psi(1) = 1$ and $\psi(k) > \psi(k+1) > 0$. For example, $\psi(i) = 1/i$ is a well-behaved promotion function.

Incremental Q-learning is a greedy policy with respect to $S_t(s, a) = Q_t(s, a) + A_t(s, a)$. First we show that $Q_t$, in incremental Q-learning, converges to $Q^*$.
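Before turning to the analysis, a minimal sketch of one interaction step under this strategy; our reading is that $\psi$ is applied, for each unchosen action, to the number of visits since that action was last taken, which we track explicitly.

    import numpy as np

    def incremental_step(Q, A, skips, s, psi):
        """Act greedily on S_t = Q_t + A_t, zero the chosen action's
        promotion, and promote every other action in s by psi of the
        number of visits to s since that action was last taken."""
        a = int((Q[s] + A[s]).argmax())
        for b in range(Q.shape[1]):
            if b == a:
                A[s, b] = 0.0
                skips[s, b] = 0
            else:
                skips[s, b] += 1
                A[s, b] += psi(skips[s, b])
        return a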
Lemma 5.1 Consider incremental Q-learning using a well-behaved learning rate and a well-behaved promotion function. Then $Q_t$ converges to $Q^*$ with probability 1.
Proof: Since the learning rate is well-behaved, we need only to show that each state
action pair is performed infinitely often. We show that for each state that is visited
infinitely often, all of its actions are performed infinitely often. Since the MDP is
uni-chain this will imply that with probability 1 we reach all states infinitely often,
which completes the proof.
Assume that state s is visited infinitely often. Since s is visited infinitely often,
there has to be a non-empty subset of the actions $A'$ which are performed infinitely often in $s$. The proof is by contradiction, namely assume that $A' \neq A$. Let $t_1$ be the last time that an action not in $A'$ is performed in state $s$. Since $\psi$ is well-behaved, $\psi(t_1)$ is constant for a fixed $t_1$, which implies that $A_t(s, a)$ diverges for $a \notin A'$. Therefore, eventually we reach a time $t_2 > t_1$ such that $A_{t_2}(s, a) > V_{max}$ for every $a \notin A'$. Since the actions in $A'$ are performed infinitely often, there is a time $t_3 > t_2$ such that each action $a' \in A'$ is performed at least once in $[t_2, t_3]$. This implies that $A_{t_3}(s, a) > V_{max} + A_{t_3}(s, a')$ for any $a' \in A'$ and $a \notin A'$. Therefore, some action $a \in A \setminus A'$ will be performed after $t_1$, contradicting our assumption.
Q.E.D.
The following lemma shows that the frequency of sub-optimal actions vanishes.
Lemma 5.2 Consider incremental Q-learning using a well-behaved learning rate and a well-behaved promotion function. Let $f_t(s, a) = |T_{s,a} \cap [1, t]| / |T_s \cap [1, t]|$ and let $(s, a)$ be any sub-optimal state-action pair. Then $\lim_{t \to \infty} f_t(s, a) = 0$, with probability 1.
The intuition behind Lemma 5.2 is the following. Let $a^*$ be an optimal action in state $s$ and $a$ be a sub-optimal action. By Lemma 5.1, with probability 1 both

[Figure 1 appears here. Figure 1: Example of a 50-state MDP, where the discount factor $\gamma$ is 0.9. The learning rate of both incremental and $\epsilon$-greedy Q-learning is set to 0.8; the x-axis shows the number of steps (in units of $10^3$). The dashed line represents $\epsilon$-greedy Q-learning.]
$Q_t(s, a^*)$ converges to $Q^*(s, a^*) = V^*(s)$ and $Q_t(s, a)$ converges to $Q^*(s, a)$. This implies, intuitively, that $A_t(s, a)$ has to be at least $V^*(s) - Q^*(s, a) = h > 0$ for $(s, a)$ to be executed. Since the promotion function is well-behaved, the number of time steps required until $A_t(s, a)$ changes from 0 to $h$ increases after each time we perform $(s, a)$. Since the inter-time between executions of $(s, a)$ diverges, the frequency $f_t(s, a)$ vanishes.
The following corollary gives a quantitative bound.
Corollary 5.3 Consider incremental Q-learning with learning rate $\alpha_t(s, a) = 1/[\#(s, a, t)]$ and $\psi(k) = 1/e^k$. Let $(s, a)$ be a sub-optimal state-action pair. The number of times $(s, a)$ is performed in the first $n$ visits to state $s$ is $\Theta\left(\ln(n) / (V^*(s) - Q^*(s, a))\right)$, for sufficiently large $n$.
Furthermore, the return obtained by incremental Q-learning converges to the optimal return.

Corollary 5.4 Consider incremental Q-learning using a well-behaved learning rate and a well-behaved promotion function. For every $\epsilon$ there exists a time $T_\epsilon$ such that for any $t > T_\epsilon$ the strategy $\pi$ defined by incremental Q-learning is $\epsilon$-optimal with probability 1.
6
Experiments
In this section we show some experimental results, comparing incremental Q-learning and $\epsilon$-greedy Q-learning. One can consider incremental Q-learning as a derandomization of $\epsilon_t$-greedy Q-learning, where the promotion function satisfies $\psi_t = \epsilon_t$.

The experiment was made on an MDP which includes 50 states and two actions per state. Each state-action pair's immediate reward is randomly chosen in the interval $[0, 10]$. For each state and action $(s, a)$ the next-state transition is random, i.e., for every state $s'$ we have a random variable $X^{s,a}_{s'} \in [0, 1]$ and $P_{s,s'}(a) = X^{s,a}_{s'} / \sum_{s''} X^{s,a}_{s''}$. For the $\epsilon_t$-greedy Q-learning we have $\epsilon_t = 10000/t$ at time $t$, while for the incremental we have $\psi_t = 10000/t$. Each result in the experiment is an average of ten different runs. In Figure 1 we observe similar behavior of the two algorithms. This experiment demonstrates the strong experimental connection between these methods. We plan to further investigate the theoretical connection between $\epsilon$-greedy, the Boltzmann machine, and incremental Q-learning.
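A sketch of the random MDP generator behind this experiment, under our reading of the (partly garbled) transition description, with next-state weights drawn uniformly and normalised into distributions:

    import numpy as np

    def random_mdp(n_states=50, n_actions=2, seed=0):
        """Rewards uniform on [0, 10]; P[s, a] is a random distribution
        obtained by normalising uniform [0, 1] weights X[s, a, s']."""
        rng = np.random.default_rng(seed)
        R = rng.uniform(0.0, 10.0, size=(n_states, n_actions))
        X = rng.uniform(0.0, 1.0, size=(n_states, n_actions, n_states))
        P = X / X.sum(axis=-1, keepdims=True)
        return R, P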
7
Acknowledgements
This research was supported in part by a grant from the Israel Science Foundation.
References
[1] D. P. Bertsekas and J. N. Tsitsiklis. Neuro-Dynamic Programming. Athena Scientific, Belmont, MA, 1996.
[2] V. S. Borkar and S. P. Meyn. The O.D.E. method for convergence of stochastic approximation and reinforcement learning. SIAM J. Control, 38(2):447-69, 2000.
[3] R. I. Brafman and M. Tennenholtz. R-max - a general polynomial time algorithm for near-optimal reinforcement learning. In IJCAI, 2001.
[4] E. Even-Dar and Y. Mansour. Learning rates for Q-learning. In COLT, 2001.
[5] J. C. Gittins and D. M. Jones. A dynamic allocation index for the sequential design of experiments. Progress in Statistics, pages 241-266, 1974.
[6] T. Jaakkola, M. I. Jordan, and S. P. Singh. On the convergence of stochastic iterative dynamic programming algorithms. Neural Computation, 6, 1994.
[7] M. Kearns and S. Singh. Efficient reinforcement learning: theoretical framework and algorithms. In ICML, 1998.
[8] M. Littman and Cs. Szepesvari. A generalized reinforcement learning model: convergence and applications. In ICML, 1996.
[9] M. L. Puterman. Markov Decision Processes - Discrete Stochastic Dynamic Programming. John Wiley & Sons, Inc., New York, NY, 1994.
[10] R. S. Sutton and A. G. Barto. Reinforcement Learning. MIT Press, 1998.
[11] J. N. Tsitsiklis. Asynchronous stochastic approximation and Q-learning. Machine Learning, 16:185-202, 1994.
[12] C. Watkins and P. Dayan. Q-learning. Machine Learning, 8(3/4):279-292, 1992.
[13] C. Watkins. Learning from Delayed Rewards. PhD thesis, Cambridge University, 1989.
1,034 | 1,945 | Natural Language Grammar Induction using a
Constituent-Context Model
Dan Klein and Christopher D. Manning
Computer Science Department
Stanford University
Stanford, CA 94305-9040
{klein, manning}@cs.stanford.edu
Abstract
This paper presents a novel approach to the unsupervised learning of syntactic analyses of natural language text. Most previous work has focused
on maximizing likelihood according to generative PCFG models. In contrast, we employ a simpler probabilistic model over trees based directly
on constituent identity and linear context, and use an EM-like iterative
procedure to induce structure. This method produces much higher quality analyses, giving the best published results on the ATIS dataset.
1 Overview
To enable a wide range of subsequent tasks, human language sentences are standardly given
tree-structure analyses, wherein the nodes in a tree dominate contiguous spans of words
called constituents, as in figure 1(a). Constituents are the linguistically coherent units in
the sentence, and are usually labeled with a constituent category, such as noun phrase (NP)
or verb phrase (VP). An aim of grammar induction systems is to figure out, given just the
sentences in a corpus S, what tree structures correspond to them. In this sense, the grammar
induction problem is an incomplete data problem, where the complete data is the corpus
of trees T , but we only observe their yields S. This paper presents a new approach to this
problem, which gains leverage by directly making use of constituent contexts.
It is an open problem whether entirely unsupervised methods can produce linguistically
accurate parses of sentences. Due to the difficulty of this task, the vast majority of statistical parsing work has focused on supervised learning approaches to parsing, where one
uses a treebank of fully parsed sentences to induce a model which parses unseen sentences
[7, 3]. But there are compelling motivations for unsupervised grammar induction. Building
supervised training data requires considerable resources, including time and linguistic expertise. Investigating unsupervised methods can shed light on linguistic phenomena which
are implicit within a supervised parser's supervisory information (e.g., unsupervised systems often have difficulty correctly attaching subjects to verbs above objects, whereas for
a supervised parser, this ordering is implicit in the supervisory information). Finally, while
the presented system makes no claims to modeling human language acquisition, results on
whether there is enough information in sentences to recover their structure are important
data for linguistic theory, where it has standardly been assumed that the information in the
data is deficient, and strong innate knowledge is required for language acquisition [4].
    (S (NP (NN Factory) (NNS payrolls))
       (VP (VBD fell)
           (PP (IN in) (NN September))))

    Node    Constituent         Context
    S       NN NNS VBD IN NN    <> - <>
    NP      NN NNS              <> - VBD
    VP      VBD IN NN           NNS - <>
    PP      IN NN               VBD - <>
    NN_1    NN                  <> - NNS
    NNS     NNS                 NN - VBD
    VBD     VBD                 NNS - IN
    IN      IN                  VBD - NN
    NN_2    NN                  IN - <>

    Empty   Context
    0       <> - NN
    1       NN - NNS
    2       NNS - VBD
    3       VBD - IN
    4       IN - NN
    5       NN - <>

Figure 1: Example parse tree with the constituents and contexts for each tree node. (Here <> marks a sentence boundary.)
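To make the two notions concrete, a small Python sketch that enumerates the (constituent, context) pairs of a bracketed tree, reproducing the labeled rows of Figure 1; the tree encoding and the `<>` boundary marker are our own.

    def constituents_and_contexts(tree, sentence):
        """A tree is a leaf position (int) or (label, children).  For each
        labeled node, emit its label, the terminals it dominates (the
        constituent), and the bordering terminals (the linear context)."""
        pairs = []

        def span(node):
            if isinstance(node, int):
                return node, node + 1
            label, children = node
            ends = [span(c) for c in children]
            i, j = ends[0][0], ends[-1][1]
            left = sentence[i - 1] if i > 0 else "<>"
            right = sentence[j] if j < len(sentence) else "<>"
            pairs.append((label, " ".join(sentence[i:j]), (left, right)))
            return i, j

        span(tree)
        return pairs

    tags = ["NN", "NNS", "VBD", "IN", "NN"]
    tree = ("S", [("NP", [0, 1]), ("VP", [2, ("PP", [3, 4])])])
    for label, constituent, context in constituents_and_contexts(tree, tags):
        print(label, "|", constituent, "|", context)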
2 Previous Approaches
One aspect of grammar induction where there has already been substantial success is the
induction of parts-of-speech. Several different distributional clustering approaches have
resulted in relatively high-quality clusterings, though the clusters' resemblance to classical parts-of-speech varies substantially [9, 15]. For the present work, we take the part-of-speech induction problem as solved and work with sequences of parts-of-speech rather
than words. In some ways this makes the problem easier, such as by reducing sparsity,
but in other ways it complicates the task (even supervised parsers perform relatively poorly
with the actual words replaced by parts-of-speech).
Work attempting to induce tree structures has met with much less success. Most grammar
induction work assumes that trees are generated by a symbolic or probabilistic context-free
grammar (CFG or PCFG). These systems generally boil down to one of two types. Some
fix the structure of the grammar in advance [12], often with an aim to incorporate linguistic constraints [2] or prior knowledge [13]. These systems typically then attempt to find
the grammar production parameters Θ which maximize the likelihood P(S|Θ) using the
inside-outside algorithm [1], which is an efficient (dynamic programming) instance of the
EM algorithm [8] for PCFGs. Other systems (which have generally been more successful) incorporate a structural search as well, typically using a heuristic to propose candidate
grammar modifications which minimize the joint encoding of data and grammar using an
MDL criterion, which asserts that a good analysis is a short one, in that the joint encoding
of the grammar and the data is compact [6, 16, 18, 17]. These approaches can also be seen
as likelihood maximization where the objective function is the a posteriori likelihood of
the grammar given the data, and the description length provides a structural prior.
The ?compact grammar? aspect of MDL is close to some traditional linguistic argumentation which at times has argued for minimal grammars on grounds of analytical [10] or
cognitive [5] economy. However, the primary weakness of MDL-based systems does not
have to do with the objective function, but the search procedures they employ. Such systems end up growing structures greedily, in a bottom-up fashion. Therefore, their induction
quality is determined by how well they are able to heuristically predict what local intermediate structures will fit into good final global solutions.
A potential advantage of systems which fix the grammar and only perform parameter search
is that they do compare complete grammars against each other, and are therefore able to
detect which give rise to systematically compatible parses. However, although early work
showed that small, artificial CFGs could be induced with the EM algorithm [12], studies with
large natural language grammars have generally suggested that completely unsupervised
EM over PCFGs is ineffective for grammar acquisition. For instance, Carroll and Charniak
[2] describe experiments running the EM algorithm from random starting points, which
produced widely varying learned grammars, almost all of extremely poor quality.¹
1 We duplicated one of their experiments, which used grammars restricted to rules of the form
x → x y | y x, where there is one category x for each part-of-speech (such a restricted CFG is
isomorphic to a dependency grammar). We began reestimation from a grammar with uniform rewrite
It is well-known that EM is only locally optimal, and one might think that the locality
of the search procedure, not the objective function, is to blame. The truth is somewhere
in between. There are linguistic reasons to distrust an ML objective function. It encourages the symbols and rules to align in ways which maximize the truth of the conditional
independence assumptions embodied by the PCFG. The symbols and rules of a natural language grammar, on the other hand, represent syntactically and semantically coherent units,
for which a host of linguistic arguments have been made [14]. None of these have anything to do with conditional independence; traditional linguistic constituency reflects only
grammatical regularities and possibilities for expansion. There are expected to be strong
connections across phrases (such as dependencies between verbs and their selected arguments). It could be that ML over PCFGs and linguistic criteria align, but in practice they do
not always seem to. Experiments with both artificial [12] and real [13] data have shown that
starting from fixed, correct (or at least linguistically reasonable) structure, EM produces a
grammar which has higher log-likelihood than the linguistically determined grammar, but
lower parsing accuracy.
However, we additionally conjecture that EM over PCFGs fails to propagate contextual cues
efficiently. The reason we expect an algorithm to converge on a good PCFG is that there
seem to be coherent categories, like noun phrases, which occur in distinctive environments,
like between the beginning of the sentence and the verb phrase. In the inside-outside algorithm, the product of inside and outside probabilities α_j(p, q)β_j(p, q) is the probability
of generating the sentence with a j constituent spanning words p through q: the outside
probability captures the environment, and the inside probability the coherent category. If
we had a good idea of what VPs and NPs looked like, then if a novel NP appeared in an
NP context, the outside probabilities should pressure the sequence to be parsed as an NP .
However, what happens early in the EM procedure, when we have no real idea about the
grammar parameters? With randomly-weighted, complete grammars over a symbol set X,
we have observed that a frequent, short, noun phrase sequence often does get assigned to
some category x early on. However, since there is not a clear overall structure learned,
there is only very weak pressure for other NPs, even if they occur in the same positions,
to also be assigned to x, and the reestimation process goes astray. To enable this kind of
constituent-context pressure to be effective, we propose the model in the following section.
3 The Constituent-Context Model
We propose an alternate parametric family of models over trees which is better suited for
grammar induction. Broadly speaking, inducing trees like the one shown in figure 1(a) can
be broken into two tasks. One is deciding constituent identity: where the brackets should
be placed. The second is deciding what to label the constituents. These tasks are certainly
correlated and are usually solved jointly. However, the task of labeling chosen brackets is
essentially the same as the part-of-speech induction problem, and the solutions cited above
can be adapted to cluster constituents [6]. The task of deciding brackets, is the harder task.
For example, the sequence DT NN IN DT NN ([the man in the moon]) is virtually always a
noun phrase when it is a constituent, but it is only a constituent 66% of the time, because
the IN DT NN is often attached elsewhere ([we [sent a man] [to the moon]]). Figure 2(a)
probabilities. Figure 4 shows that the resulting grammar (DEP - PCFG) is not as bad as conventional
wisdom suggests. Carroll and Charniak are right to observe that the search spaces is riddled with
pronounced local maxima, and EM does not do nearly so well when randomly initialized. The need
for random seeding in using EM over PCFGs is two-fold. For some grammars, such as one over a set X
of non-terminals in which any x₁ → x₂ x₃, xᵢ ∈ X is possible, it is needed to break symmetry. This
is not the case for dependency grammars, where symmetry is broken by the yields (e.g., a sentence
noun verb can only be covered by a noun or verb projection). The second reason is to start the search
from a random region of the space. But unless one does many random restarts, the uniform starting
condition is better than most extreme points in the space, and produces superior results.
Figure 2: The most frequent examples of (a) different constituent labels (NP, VP, PP) and (b) constituents and non-constituents, in the vector space of linear contexts, projected onto the first two principal components. Clustering is effective for labeling, but not detecting constituents.
shows the 50 most frequent constituent sequences of three types, represented as points
in the vector space of their contexts (see below), projected onto their first two principal
components. The three clusters are relatively coherent, and it is not difficult to believe that
a clustering algorithm could detect them in the unprojected space. Figure 2(a), however,
shows 150 sequences which are parsed as constituents at least 50% of the time along with
150 which are not, again projected onto the first two components. This plot at least suggests
that the constituent/non-constituent classification is less amenable to direct clustering.
Thus, it is important that an induction system be able to detect constituents, either implicitly
or explicitly. A variety of methods of constituent detection have been proposed [11, 6],
usually based on information-theoretic properties of a sequence's distributional context.
However, here we rely entirely on the following two simple assumptions: (i) constituents
of a parse do not cross each other, and (ii) constituents occur in constituent contexts. The
first property is self-evident from the nature of the parse trees. The second is an extremely
weakened version of classic linguistic constituency tests [14].
Let σ be a terminal sequence. Every occurrence of σ will be in some linear context c(σ) = x–y, where x and y are the adjacent terminals or sentence boundaries. Then we can view any tree t over a sentence s as a collection of sequences and contexts, one of each for every node in the tree, plus one for each inter-terminal empty span, as in figure 1(b). Good trees will include nodes whose yields frequently occur as constituents and whose contexts frequently surround constituents. Formally, we use a conditional exponential model of the form:

$$P(t|s,\Theta) = \frac{\exp\big(\sum_{(\sigma,c)\in t} \lambda_\sigma f_\sigma + \lambda_c f_c\big)}{\sum_{t':\,\mathrm{yield}(t')=s} \exp\big(\sum_{(\sigma,c)\in t'} \lambda_\sigma f_\sigma + \lambda_c f_c\big)}$$

We have one feature f_σ(t) for each sequence σ whose value on a tree t is the number of nodes in t with yield σ, and one feature f_c(t) for each context c representing the number of times c is the context of the yield of some node in the tree.² No joint features over c and σ are used, and, unlike many other systems, there is no distinction between constituent types. We model only the conditional likelihood of the trees, P(T|S,Θ), where Θ = {λ_σ, λ_c}.

We then use an iterative EM-style procedure to find a local maximum P(T|S,Θ) of the completed data (trees) T, where P(T|S,Θ) = ∏_{t∈T, s=yield(t)} P(t|s,Θ). We initialize Θ such that each λ is zero and initialize T to any arbitrary set of trees. In alternating steps, we first fix the parameters Θ and find the most probable single tree structure t* for each sentence s according to P(t|s,Θ), using a simple dynamic program. For any Θ this produces the set of parses T* which maximizes P(T|S,Θ). Since T* maximizes this quantity, if T′ is the former set of trees, P(T*|S,Θ) ≥ P(T′|S,Θ). Second, we fix the trees and estimate new parameters Θ. The task of finding the parameters Θ* which maximize P(T|S,Θ) is simply the well-studied task of fitting our exponential model to maximize the conditional likelihood of the fixed parses. Running, for example, a conjugate gradient (CG) ascent on Θ will produce the desired Θ*. If Θ′ is the former parameters, then we will have P(T|S,Θ*) ≥ P(T|S,Θ′). Therefore, each iteration will increase P(T|S,Θ) until convergence.³ Note that our parsing model is not a generative model, and this procedure, though clearly related, is not exactly an instance of the EM algorithm. We merely guarantee that the conditional likelihood of the data completions is increasing. Furthermore, unlike in EM where each iteration increases the marginal likelihood of the fixed observed data, our procedure increases the conditional likelihood of a changing complete data set, with the completions changing at every iteration as we reparse.

² So, for the tree in figure 1(a), P(t|s) ∝ exp(λ_{NN NNS} + λ_{VBD IN NN} + λ_{IN NN} + λ_{⋄–VBD} + λ_{NNS–⋄} + λ_{VBD–⋄} + λ_{⋄–NNS} + λ_{NN–VBD} + λ_{NNS–IN} + λ_{VBD–NN} + λ_{IN–⋄}), where ⋄ marks a sentence boundary.
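To make the scoring step concrete, the following is a minimal sketch (in Python, with names of our own choosing) of the unnormalized log-score of a candidate bracketing under this model; the normalizing sum over all trees, which the dynamic program handles, is omitted, and so are the empty-span features:

```python
def tree_log_score(sent, brackets, lam_sigma, lam_c):
    """Unnormalized log-score of a bracketing under the constituent-context
    model: the sum of yield weights and context weights over the tree's
    spans. Weights default to 0.0 for unseen yields/contexts; empty spans
    are omitted for brevity."""
    padded = ["<s>"] + list(sent) + ["</s>"]
    score = 0.0
    for i, j in brackets:
        score += lam_sigma.get(tuple(sent[i:j]), 0.0)        # yield feature
        score += lam_c.get((padded[i], padded[j + 1]), 0.0)  # context feature
    return score
```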
Several implementation details were important in making the system work well. First, tiebreaking was needed, most of all for the first round. Initially, the parameters are zero, and
all parses are therefore equally likely. To prevent bias, all ties were broken randomly.
Second, like so many statistical NLP tasks, smoothing was vital. There are features in our
model for arbitrarily long yields and most yield types occurred only a few times. The most
severe consequence of this sparsity was that initial parsing choices could easily become
frozen. If λ_σ for some yield σ was either 1 or 0, which was usually the case for rare yields, σ would either be locked into always occurring or never occurring, respectively. Not only did we want to push the λ_σ values close to zero, we also wanted to account for the fact that most spans are not constituents.⁴ Therefore, we expect the distribution of the λ_σ to be skewed towards low values.⁵ A greater amount of smoothing was needed for the
first few iterations, while much less was required in later iterations.
Finally, parameter estimation using a CG method was slow and difficult to smooth in the desired manner, and so we used the smoothed relative frequency estimates λ_σ = count(f_σ)/(count(σ) + M) and λ_c = count(f_c)/(count(c) + N). These estimates ensured that the λ values were between 0 and 1, and gave the desired bias towards non-constituency. These estimates were fast and surprisingly effective, but do not guarantee non-decreasing conditional likelihood (though the conditional likelihood was increasing in practice).⁶
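A minimal sketch of this re-estimation step, assuming parses are represented as sets of (i, j) spans and with placeholder values for the smoothing constants M and N:

```python
from collections import defaultdict
from itertools import combinations

def ccm_reestimate(sentences, parses, M=10.0, N=10.0):
    """Re-estimate yield and context weights from the current parses using
    the smoothed relative-frequency estimates; each parse is the set of
    (i, j) bracket spans chosen for the corresponding sentence."""
    span_hits, span_total = defaultdict(float), defaultdict(float)
    ctx_hits, ctx_total = defaultdict(float), defaultdict(float)
    for sent, brackets in zip(sentences, parses):
        padded = ["<s>"] + list(sent) + ["</s>"]
        for i, j in combinations(range(len(sent) + 1), 2):
            sigma = tuple(sent[i:j])           # candidate yield
            ctx = (padded[i], padded[j + 1])   # linear context x-y
            span_total[sigma] += 1.0
            ctx_total[ctx] += 1.0
            if (i, j) in brackets:             # counted as a constituent
                span_hits[sigma] += 1.0
                ctx_hits[ctx] += 1.0
    # smoothed relative frequencies, biased towards non-constituency
    lam_sigma = {s: span_hits[s] / (span_total[s] + M) for s in span_total}
    lam_c = {c: ctx_hits[c] / (ctx_total[c] + N) for c in ctx_total}
    return lam_sigma, lam_c
```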
4 Results
In all experiments, we used hand-parsed sentences from the Penn Treebank. For training,
we took the approximately 7500 sentences in the Wall Street Journal (WSJ) section which
contained 10 words or fewer after the removal of punctuation. For testing, we evaluated the
system by comparing the system?s parses for those same sentences against the supervised
parses in the treebank. We consider each parse as a set of constituent brackets, discarding
all trivial brackets.7 We calculated the precision and recall of these brackets against the
treebank parses in the obvious way.
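Concretely, the bracket precision and recall computation amounts to set matching over spans; a minimal sketch follows (this is the simple set-matching version; the EVALB conventions used later for the ATIS comparison differ in some details):

```python
def bracket_prf(gold, guess):
    """Unlabeled bracket precision/recall/F1 over sets of (i, j) spans,
    with trivial brackets (length one, whole sentence) already removed."""
    gold, guess = set(gold), set(guess)
    hits = len(gold & guess)
    p = hits / len(guess) if guess else 0.0
    r = hits / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1
```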
3 In practice, we stopped the system after 10 iterations, but final behavior was apparent after 4–8.
4 In a sentence of length n, there are (n + 1)(n + 2)/2 total (possibly size zero) spans, but only 3n constituent spans: n − 1 of size ≥ 2, n of size 1, and n + 1 empty spans.
5 Gaussian priors for the exponential model accomplish the former goal, but not the latter.
6 The relative frequency estimators had a somewhat subtle positive effect. Empty spans have no
effect on the model when using CG fitting, as all trees include the same empty spans. However,
including their counts improved performance substantially when using relative frequency estimators.
This is perhaps an indication that a generative version of this model would be advantageous.
7 We discarded both brackets of length one and brackets spanning the entire sentence, since all of
these are impossible to get incorrect, and hence ignored sentences of length ≤ 2 during testing.
Figure 3: Alternate parse trees for the sentence "The screen was a sea of red": (a) the Penn Treebank tree (deemed correct), (b) the one found by our system CCM, and (c) the one found by DEP-PCFG.
(a) WSJ sentences:

Method     UP     UR     F1     NP UR   PP UR   VP UR
LBRANCH    20.5   24.2   22.2    28.9     6.3     0.6
RANDOM     29.0   31.0   30.0    42.8    23.6    26.3
DEP-PCFG   39.5   42.3   40.9    69.7    44.1    22.8
RBRANCH    54.1   67.5   60.0    38.3    44.5    85.8
CCM        60.1   75.4   66.9    83.8    71.6    66.3
UBOUND     78.2  100.0   87.8   100.0   100.0   100.0

(b) ATIS corpus:

System     UP     UR     F1     CB
EMILE      51.6   16.8   25.4   0.84
ABL        43.6   35.6   39.2   2.12
CDC-40     53.4   34.6   42.0   1.46
RBRANCH    39.9   46.4   42.9   2.18
CCM        54.4   46.8   50.3   1.61

Figure 4: Comparative accuracy on WSJ sentences (a) and on the ATIS corpus (b). UR = unlabeled recall; UP = unlabeled precision; F1 = the harmonic mean of UR and UP; CB = crossing brackets. Separate recall values are shown for three major categories.
To situate the results of our system, figure 4(a) gives the values of several parsing strategies. CCM is our constituent-context model. DEP - PCFG is a dependency PCFG model [2]
trained using the inside-outside algorithm. Figure 3 shows sample parses to give a feel for
the parses the systems produce. We also tested several baselines. RANDOM parses randomly. This is an appropriate baseline for an unsupervised system. RBRANCH always
chooses the right-branching chain, while LBRANCH always chooses the left-branching
chain. RBRANCH is often used as a baseline for supervised systems, but exploits a systematic right-branching tendency of English. An unsupervised system has no a priori reason
to prefer right chains to left chains, and LBRANCH is well worse than RANDOM. A system
need not beat RBRANCH to claim partial success at grammar induction. Finally, we include an upper bound. All of the parsing strategies and systems mentioned here give fully
binary-branching structures. Treebank trees, however, need not be fully binary-branching,
and generally are not. As a result, there is an upper bound UBOUND on the precision and
F1 scores achievable when structurally confined to binary trees.
Clearly, CCM is parsing much better than the RANDOM baseline and the DEP - PCFG induced
grammar. Significantly, it also out-performs RBRANCH in both precision and recall, and,
to our knowledge, it is the first unsupervised system to do so. To facilitate comparison
with other recent systems, figure 4(b) gives results where we trained as before but used
(all) the sentences from the distributionally different ATIS section of the treebank as a test
set. For this experiment, precision and recall were calculated using the EVALB system of
measuring precision and recall (as in [6, 17]) ? EVALB is a standard for parser evaluation,
but complex, and unsuited to evaluating unlabeled constituency. EMILE and ABL are lexical
systems described in [17]. The results for CDC-40, from [6], reflect training on much more
data (12M words). Our system is superior in terms of both precision and recall (and so F1).
These figures are certainly not all that there is to say about an induced grammar; there are a
number of issues in how to interpret the results of an unsupervised system when comparing
with treebank parses. Errors come in several kinds. First are innocent sins of commission. Treebank trees are very flat; for example, there is no analysis of the inside of many
short noun phrases ([two hard drives] rather than [two [hard drives]]). Our system gives a
Sequence    Example            CORRECT  FREQUENCY  ENTROPY  DEP-PCFG  CCM
DT NN       the man                1        2         2        1       1
NNP NNP     United States          2        1         –        2       2
CD CD       4 1/2                  3        9         –        5       5
JJ NNS      daily yields           4        7         3        4       4
DT JJ NN    the top rank           5        –         –        7       6
DT NNS      the people             6        –         –        –      10
JJ NN       plastic furniture      7        3         7        3       3
CD NN       12 percent             8        –         –        –       9
IN NN       on Monday              9        –         9        –       –
IN DT NN    for the moment        10        –         –        –       –
NN NNS      fire trucks           11        –         6        –       8
NN NN       fire truck            22        8        10        –       7
TO VB       to go                 26        –         1        6       –
DT JJ       ?the big              78        6         –        –       –
IN DT       *of the               90        4         –       10       –
PRP VBZ     ?he says              95        –         –        8       –
PRP VBP     ?they say            180        –         –        9       –
NNS VBP     ?people are         =350        –         4        –       –
NN VBZ      ?value is           =532       10         5        –       –
NN IN       *man from           =648        5         –        –       –
NNS VBD     ?people were        =648        –         8        –       –

Figure 5: Top non-trivial sequences by actual treebank constituent counts, linear frequency, scaled context entropy, and in DEP-PCFG and CCM learned models' parses.
(usually correct) analysis of the insides of such NPs, for which it is penalized on precision
(though not recall or crossing brackets). Second are systematic alternate analyses. Our
system tends to form modal verb groups and often attaches verbs first to pronoun subjects
rather than to objects. As a result, many VPs are systematically incorrect, boosting crossing bracket scores and impacting VP recall. Finally, the treebank's grammar is sometimes
an arbitrary, and even inconsistent standard for an unsupervised learner: alternate analyses may be just as good.8 Notwithstanding this, we believe that the treebank parses have
enough truth in them that parsing scores are a useful component of evaluation.
Ideally, we would like to inspect the quality of the grammar directly. Unfortunately, the
grammar acquired by our system is implicit in the learned feature weights. These are not
by themselves particularly interpretable, and not directly comparable to the grammars produced by other systems, except through their functional behavior. Any grammar which
parses a corpus will have a distribution over which sequences tend to be analyzed as constituents. These distributions can give a good sense of what structures are and are not being
learned. Therefore, to supplement the parsing scores above, we examine these distributions.
Figure 5 shows the top scoring constituents by several orderings. These lists do not say
very much about how long, complex, recursive constructions are being analyzed by a given
system, but grammar induction systems are still at the level where major mistakes manifest
themselves in short, frequent sequences. CORRECT ranks sequences by how often they
occur as constituents in the treebank parses. DEP - PCFG and CCM are the same, but use
counts from the DEP - PCFG and CCM parses. As a baseline, FREQUENCY lists sequences by
how often they occur anywhere in the sentence yields. Note that the sequence IN DT (e.g.,
?of the?) is high on this list, and is a typical error of many early systems. Finally, ENTROPY
is the heuristic proposed in [11] which ranks by context entropy. It is better in practice than
FREQUENCY, but that isn't self-evident from this list. Clearly, the lists produced by the
CCM system are closer to correct than the others. They look much like a censored version
of the FREQUENCY list, where sequences which do not co-exist with higher-ranked ones
have been removed (e.g., IN DT often crosses DT NN). This observation may explain a good
part of the success of this method.
Another explanation for the surprising success of the system is that it exploits a deep fact
about language. Most long constituents have some short, frequent equivalent, or proform,
which occurs in similar contexts [14]. In the very common case where the proform is a
single word, it is guaranteed constituency, which will be transmitted to longer sequences
8 For example, transitive sentences are bracketed [subject [verb object]] (The president [executed
the law]) while nominalizations are bracketed [[possessive noun] complement] ([The president's execution] of the law), an arbitrary inconsistency which is unlikely to be learned automatically.
via shared contexts (categories like PP which have infrequent proforms are not learned well
unless the empty sequence is in the model; interestingly, the empty sequence appears to
act as the proform for PPs, possibly due to the highly optional nature of many PPs).
5 Conclusions
We have presented an alternate probability model over trees which is based on simple
assumptions about the nature of natural language structure. It is driven by the explicit
transfer between sequences and their contexts, and exploits both the proform phenomenon
and the fact that good constituents must tile in ways that systematically cover the corpus
sentences without crossing. The model clearly has limits. Lacking recursive features, it
essentially must analyze long, rare constructions using only contexts. However, despite, or
perhaps due to its simplicity, our model predicts bracketings very well, producing higher
quality structural analyses than previous methods which employ the PCFG model family.
Acknowledgements. We thank John Lafferty, Fernando Pereira, Ben Taskar, and Sebastian Thrun for comments and discussion. This paper is based on work supported in part by
the National Science Foundation under Grant No. IIS-0085896.
References
[1] James K. Baker. Trainable grammars for speech recognition. In D. H. Klatt and J. J. Wolf, editors, Speech Communication Papers for the 97th Meeting of the ASA, pages 547–550, 1979.
[2] Glenn Carroll and Eugene Charniak. Two experiments on learning probabilistic dependency grammars from corpora. In C. Weir, S. Abney, R. Grishman, and R. Weischedel, editors, Working Notes of the Workshop Statistically-Based NLP Techniques, pages 1–13. AAAI Press, 1992.
[3] Eugene Charniak. A maximum-entropy-inspired parser. In NAACL 1, pages 132–139, 2000.
[4] Noam Chomsky. Knowledge of Language. Prager, New York, 1986.
[5] Noam Chomsky and Morris Halle. The Sound Pattern of English. Harper & Row, New York, 1968.
[6] Alexander Clark. Unsupervised induction of stochastic context-free grammars using distributional clustering. In The Fifth Conference on Natural Language Learning, 2001.
[7] Michael John Collins. Three generative, lexicalised models for statistical parsing. In ACL 35/EACL 8, pages 16–23, 1997.
[8] A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B, 39:1–38, 1977.
[9] Steven Finch and Nick Chater. Distributional bootstrapping: From word class to proto-sentence. In Proceedings of the Sixteenth Annual Conference of the Cognitive Science Society, pages 301–306, Hillsdale, NJ, 1994. Lawrence Erlbaum.
[10] Zellig Harris. Methods in Structural Linguistics. University of Chicago Press, Chicago, 1951.
[11] Dan Klein and Christopher D. Manning. Distributional phrase structure induction. In The Fifth Conference on Natural Language Learning, 2001.
[12] K. Lari and S. J. Young. The estimation of stochastic context-free grammars using the inside-outside algorithm. Computer Speech and Language, 4:35–56, 1990.
[13] Fernando Pereira and Yves Schabes. Inside-outside reestimation from partially bracketed corpora. In ACL 30, pages 128–135, 1992.
[14] Andrew Radford. Transformational Grammar. Cambridge University Press, Cambridge, 1988.
[15] Hinrich Schütze. Distributional part-of-speech tagging. In EACL 7, pages 141–148, 1995.
[16] Andreas Stolcke and Stephen M. Omohundro. Inducing probabilistic grammars by Bayesian model merging. In Grammatical Inference and Applications: Proceedings of the Second International Colloquium on Grammatical Inference. Springer Verlag, 1994.
[17] M. van Zaanen and P. Adriaans. Comparing two unsupervised grammar induction systems: Alignment-based learning vs. EMILE. Technical Report 2001.05, University of Leeds, 2001.
[18] J. G. Wolff. Learning syntax and meanings through optimization and distributional analysis. In Y. Levy, I. M. Schlesinger, and M. D. S. Braine, editors, Categories and Processes in Language Acquisition, pages 179–215. Lawrence Erlbaum, Hillsdale, NJ, 1988.
| 1945 |@word version:3 achievable:1 advantageous:1 open:1 heuristically:1 propagate:1 pressure:3 harder:1 moment:1 initial:1 series:1 score:4 charniak:4 united:1 interestingly:1 contextual:1 comparing:3 surprising:1 must:2 parsing:11 john:2 subsequent:1 chicago:2 wanted:1 seeding:1 plot:1 interpretable:1 v:1 generative:4 selected:1 cue:1 fewer:1 beginning:1 short:5 provides:1 detecting:1 node:7 boosting:1 nnp:2 monday:1 simpler:1 along:1 direct:1 become:1 incorrect:2 dan:2 fitting:2 inside:8 manner:1 acquired:1 inter:1 tagging:1 expected:1 behavior:2 themselves:2 frequently:2 growing:1 examine:1 terminal:4 inspired:1 decreasing:1 automatically:1 actual:2 increasing:2 baker:1 maximizes:2 hinrich:1 what:6 kind:2 substantially:2 finding:1 bootstrapping:1 nj:2 guarantee:2 every:3 act:1 innocent:1 shed:1 tie:1 exactly:1 ensured:1 scaled:1 unit:2 penn:2 grant:1 producing:1 positive:1 before:1 local:3 tends:1 mistake:1 consequence:1 limit:1 despite:1 encoding:2 approximately:1 might:1 plus:1 acl:2 weakened:1 studied:1 suggests:2 co:1 pcfgs:3 range:1 locked:1 statistically:1 testing:2 practice:4 recursive:2 x3:1 procedure:7 significantly:1 projection:1 ccm:10 word:8 induce:3 chomsky:2 symbolic:1 get:2 onto:3 close:2 unlabeled:3 context:28 impossible:1 conventional:1 equivalent:1 lexical:1 maximizing:1 go:2 starting:3 focused:2 simplicity:1 rule:3 estimator:2 lexicalised:1 dominate:1 classic:1 president:2 feel:1 construction:2 parser:5 infrequent:1 programming:1 us:1 crossing:4 recognition:1 particularly:1 pps:2 distributional:7 labeled:1 predicts:1 bottom:1 observed:2 taskar:1 steven:1 solved:2 capture:1 region:1 prager:1 ordering:2 removed:1 substantial:1 mentioned:1 environment:2 broken:3 dempster:1 colloquium:1 ideally:1 dynamic:2 trained:2 rewrite:1 eacl:2 asa:1 distinctive:1 learner:1 completely:1 easily:1 joint:3 represented:1 unsuited:1 fast:1 describe:1 effective:3 artificial:2 labeling:2 outside:7 whose:3 heuristic:2 stanford:3 widely:1 apparent:1 say:4 grammar:48 cfg:2 unseen:1 schabes:1 syntactic:1 think:1 jointly:1 laird:1 final:2 sequence:22 advantage:1 frozen:1 analytical:1 indication:1 took:1 propose:3 product:1 frequent:5 pronoun:1 poorly:1 tiebreaking:1 sixteenth:1 description:1 asserts:1 inducing:2 pronounced:1 constituent:44 convergence:1 empty:7 cluster:3 regularity:1 sea:3 produce:7 generating:1 wsj:2 comparative:1 ben:1 object:3 andrew:1 completion:2 dep:9 strong:2 c:1 come:1 met:1 correct:6 stochastic:2 human:2 enable:2 hillsdale:2 argued:1 fix:4 f1:4 wall:1 probable:1 ground:1 deciding:3 exp:3 cb:2 lawrence:2 predict:1 claim:2 major:2 early:4 utze:1 estimation:2 linguistically:4 label:2 vbp:2 reflects:1 weighted:1 clearly:4 always:5 gaussian:1 aim:2 rather:3 varying:1 evalb:2 chater:1 linguistic:10 rank:3 likelihood:13 prp:2 contrast:1 greedily:1 baseline:5 sense:2 detect:3 posteriori:1 cg:3 economy:1 inference:2 nn:43 typically:2 entire:1 unlikely:1 initially:1 overall:1 classification:1 issue:1 priori:1 impacting:1 noun:8 smoothing:2 initialize:2 marginal:1 never:1 look:1 unsupervised:13 nearly:1 np:15 others:1 report:1 employ:3 few:2 randomly:4 resulted:1 national:1 abl:2 replaced:1 fire:2 attempt:1 detection:1 possibility:1 highly:1 evaluation:2 certainly:2 severe:1 mdl:3 weakness:1 punctuation:1 bracket:11 extreme:1 analyzed:2 light:1 alignment:1 chain:4 amenable:1 accurate:1 closer:1 partial:1 daily:1 censored:1 unless:2 tree:31 incomplete:2 initialized:1 desired:3 schlesinger:1 minimal:1 complicates:1 stopped:1 instance:3 modeling:1 compelling:1 cfgs:1 contiguous:1 
cover:1 measuring:1 maximization:1 phrase:9 rare:2 uniform:2 successful:1 erlbaum:2 commission:1 dependency:5 varies:1 accomplish:1 finch:1 nns:20 chooses:2 cited:1 international:1 probabilistic:4 systematic:2 michael:1 again:1 reflect:1 aaai:1 vbd:22 possibly:2 tile:1 worse:1 cognitive:2 style:1 account:1 potential:1 transformational:1 zellig:1 explicitly:1 bracketed:3 later:1 break:1 view:1 analyze:1 red:3 start:1 recover:1 minimize:1 yves:1 accuracy:2 moon:2 efficiently:1 correspond:1 yield:13 wisdom:1 vp:7 weak:1 bayesian:1 plastic:1 produced:3 none:1 expertise:1 drive:2 published:1 argumentation:1 explain:1 sebastian:1 against:3 acquisition:4 pp:6 frequency:8 james:1 obvious:1 boil:1 gain:1 dataset:1 duplicated:1 recall:9 knowledge:4 manifest:1 subtle:1 appears:1 higher:4 dt:22 supervised:7 restarts:1 wherein:1 improved:1 modal:1 evaluated:1 though:4 furthermore:1 just:2 implicit:3 anywhere:1 until:1 hand:2 working:1 parse:5 christopher:2 quality:6 perhaps:2 resemblance:1 believe:2 innate:1 supervisory:2 building:1 effect:2 facilitate:1 naacl:1 former:3 hence:1 assigned:2 riddled:1 alternating:1 adjacent:1 sin:1 round:1 self:2 encourages:1 skewed:1 during:1 branching:5 anything:1 adriaans:1 criterion:2 syntax:1 evident:2 complete:4 theoretic:1 omohundro:1 performs:1 syntactically:1 percent:1 meaning:1 harmonic:1 novel:2 began:1 superior:2 common:1 functional:1 overview:1 attached:1 nn2:1 occurred:1 atis:3 he:1 interpret:1 surround:1 cambridge:2 language:14 blame:1 had:2 carroll:3 longer:1 align:2 showed:1 recent:1 driven:1 possessive:1 verlag:1 binary:3 success:5 arbitrarily:1 inconsistency:1 meeting:1 scoring:1 seen:1 transmitted:1 greater:1 somewhat:1 payroll:1 converge:1 maximize:4 fernando:2 ii:2 stephen:1 sound:1 smooth:1 technical:1 cross:2 long:4 host:1 equally:1 essentially:2 iteration:6 represent:1 sometimes:1 confined:1 whereas:1 want:1 sch:1 unlike:2 fell:1 comment:1 ineffective:1 subject:3 deficient:1 induced:3 virtually:1 sent:1 ascent:1 inconsistent:1 tend:1 seem:2 unprojected:1 lafferty:1 structural:4 leverage:1 intermediate:1 vital:1 enough:2 stolcke:1 variety:1 independence:2 fit:1 gave:1 weischedel:1 andreas:1 idea:2 whether:2 speech:10 speaking:1 york:1 jj:4 deep:1 ignored:1 generally:4 useful:1 clear:1 covered:1 amount:1 locally:1 morris:1 category:8 constituency:5 exist:1 correctly:1 klein:3 broadly:1 group:1 changing:2 prevent:1 vast:1 merely:1 almost:1 reasonable:1 family:2 distrust:1 prefer:1 vb:1 comparable:1 entirely:2 bound:2 guaranteed:1 furniture:1 fold:1 truck:2 annual:1 adapted:1 occur:6 constraint:1 x2:1 flat:1 aspect:2 argument:2 span:8 extremely:2 attempting:1 relatively:3 conjecture:1 department:1 according:2 alternate:5 manning:3 poor:1 conjugate:1 across:1 em:15 ur:7 making:2 modification:1 happens:1 restricted:2 resource:1 lari:1 count:7 needed:3 end:1 observe:2 appropriate:1 occurrence:1 assumes:1 clustering:6 running:2 include:3 completed:1 nlp:2 top:3 linguistics:1 somewhere:1 exploit:3 parsed:4 giving:1 classical:1 society:2 objective:4 already:1 quantity:1 looked:1 occurs:1 parametric:1 primary:1 strategy:2 traditional:2 september:1 gradient:1 separate:1 thank:1 thrun:1 majority:1 street:1 astray:1 trivial:2 reason:4 induction:17 spanning:2 length:4 difficult:2 unfortunately:1 executed:1 noam:2 rise:1 implementation:1 perform:2 upper:2 inspect:1 observation:1 discarded:1 beat:1 optional:1 communication:1 smoothed:1 verb:9 arbitrary:3 standardly:2 complement:1 required:2 trainable:1 sentence:26 connection:1 nick:1 coherent:5 learned:7 
distinction:1 able:3 suggested:1 usually:6 below:1 pattern:1 appeared:1 sparsity:2 program:1 including:2 royal:1 explanation:1 natural:7 difficulty:2 rely:1 ranked:1 leeds:1 representing:1 halle:1 deemed:1 transitive:1 embodied:1 isn:1 text:1 prior:3 eugene:2 acknowledgement:1 removal:1 relative:3 law:2 lacking:1 fully:3 par:18 expect:2 cdc:2 attache:1 emile:3 clark:1 foundation:1 rubin:1 vbz:2 treebank:13 editor:3 systematically:3 cd:3 production:1 row:1 compatible:1 elsewhere:1 penalized:1 placed:1 surprisingly:1 free:3 english:2 supported:1 bias:2 wide:1 attaching:1 fifth:2 van:1 grammatical:3 boundary:1 calculated:2 evaluating:1 made:1 collection:1 projected:3 situate:1 compact:2 implicitly:1 vps:2 ml:2 global:1 investigating:1 reestimation:3 corpus:7 assumed:1 xi:1 search:6 iterative:2 glenn:1 abney:1 additionally:1 nature:3 transfer:1 ca:1 symmetry:2 expansion:1 complex:2 did:1 motivation:1 big:1 grishman:1 screen:3 fashion:1 slow:1 ny:1 precision:8 fails:1 position:1 structurally:1 explicit:1 pereira:2 exponential:3 factory:1 candidate:1 levy:1 young:1 down:1 bad:1 discarding:1 symbol:3 list:6 workshop:1 pcfg:17 merging:1 supplement:1 notwithstanding:1 execution:1 occurring:2 push:1 easier:1 locality:1 suited:1 entropy:5 zaanen:1 simply:1 likely:1 contained:1 partially:1 radford:1 springer:1 wolf:1 truth:3 harris:1 conditional:9 identity:2 goal:1 klatt:1 weir:1 towards:2 nn1:1 man:4 considerable:1 hard:2 shared:1 determined:2 except:1 reducing:1 semantically:1 typical:1 principal:2 wolff:1 called:1 total:1 isomorphic:1 tendency:1 distributionally:1 rarely:1 formally:1 people:3 latter:1 harper:1 collins:1 alexander:1 incorporate:2 ofspeech:1 proto:1 tested:1 phenomenon:2 correlated:1 |
1,035 | 1,946 | On Kernel-Target Alignment
Nello Cristianini
BIOwulf Technologies
nello@support-vector.net
Andre Elisseeff
BIOwulf Technologies
andre@barnhilltechnologies.com
John Shawe-Taylor
Royal Holloway, University of London
john@cs.rhul.ac.uk
Jaz Kandola
Royal Holloway, University of London
jaz@cs.rhul.ac.uk
Abstract
We introduce the notion of kernel-alignment, a measure of similarity between two kernel functions or between a kernel and a target
function. This quantity captures the degree of agreement between
a kernel and a given learning task, and has very natural interpretations in machine learning, leading also to simple algorithms for
model selection and learning. We analyse its theoretical properties,
proving that it is sharply concentrated around its expected value,
and we discuss its relation with other standard measures of performance. Finally we describe some of the algorithms that can be
obtained within this framework, giving experimental results showing that adapting the kernel to improve alignment on the labelled
data significantly increases the alignment on the test set, giving
improved classification accuracy. Hence, the approach provides a
principled method of performing transduction.
Keywords: Kernels, alignment, eigenvectors, eigenvalues, transduction
1 Introduction
Kernel-based learning algorithms [1] are modular systems formed by a general-purpose learning element and by a problem-specific kernel function. It is crucial for
the performance of the system that the kernel function somehow fits the learning
target, that is that in the feature space the data distribution is somehow correlated
to the label distribution. Several results exist showing that generalization takes
place only when such correlation exists (nofreelunch; luckiness), and many classic
estimators of performance (eg the margin) can be understood as estimating this
relation. In other words, selecting a kernel in this class of systems amounts to the
classic feature and model selection problems in machine learning.
Measuring the similarity between two kernels, or the degree of agreement between
a kernel and a given target function, is hence an important problem both for conceptual and for practical reasons. As an example, it is well known that one can
obtain complex kernels by combining or manipulating simpler ones, but how can
one predict whether the resulting kernel is better or worse than its components?
What a kernel does is to virtually map data into a feature space so that their relative
positions in that space are what matters. The degree of clustering achieved in that
space, and the relation between the clusters and the labeling to be learned, should
be captured by such an estimator.
Alternatively, one could regard kernels as 'oracles' or 'experts' giving their opinion
on whether two given points belong to the same class or not. In this case, the
correlation between experts (seen as random variables) should provide an indication
of their similarity.
We will argue that, if one were in possession of this information, the ideal kernel for
a classification target y(x) would be K(x, z) = y(x)y(z). One way of estimating
the extent to which the kernel achieves the right clustering is to compare the sum
of the within class distances with the sum of the between class distances. This will
correspond to the alignment between the kernel and the ideal kernel y(x)y(z). By
measuring the similarity of this kernel with the kernel at hand - on the training
set - one can assess the degree of fitness of such kernel. The measure of similarity
that we propose, 'kernel alignment' would give in this way a reliable estimate of its
expected value, since it is sharply concentrated around its mean.
In this paper we will motivate and introduce the notion of Alignment (Section 2);
prove its concentration (Section 3); discuss its implications for the generalisation
of a simple classifier (Section 4) and deduce some simple algorithms (Section 5) to
optimize it and finally report on some experiments (Section 6).
2 Alignment
Given an (unlabelled) sample S = {x₁, ..., x_m}, we use the following inner product between Gram matrices: ⟨K₁, K₂⟩_F = Σ_{i,j=1}^m K₁(x_i, x_j) K₂(x_i, x_j).

Definition 1 (Alignment) The (empirical) alignment of a kernel k₁ with a kernel k₂ with respect to the sample S is the quantity

$$\hat{A}(S, k_1, k_2) = \frac{\langle K_1, K_2\rangle_F}{\sqrt{\langle K_1, K_1\rangle_F \langle K_2, K_2\rangle_F}},$$

where K_i is the kernel matrix for the sample S using kernel k_i.
This can also be viewed as the cosine of the angle between two bi-dimensional vectors K₁ and K₂, representing the Gram matrices. If we consider K₂ = yy′, where y is the vector of {−1, +1} labels for the sample, then

$$\hat{A}(S, K, yy') = \frac{\langle K, yy'\rangle_F}{\sqrt{\langle K, K\rangle_F \langle yy', yy'\rangle_F}} = \frac{\langle K, yy'\rangle_F}{m\sqrt{\langle K, K\rangle_F}}, \quad \text{since } \langle yy', yy'\rangle_F = m^2.$$
We will occasionally omit the arguments K or y when these are understood from the context or when y forms part of the sample. In the next section we will see how this definition provides us with a method for selecting kernel parameters and also for combining kernels.
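Computed on Gram matrices, the alignment is a one-liner; a minimal sketch using NumPy (the function names are ours):

```python
import numpy as np

def alignment(K1, K2):
    """Empirical alignment of two Gram matrices:
    <K1, K2>_F / sqrt(<K1, K1>_F <K2, K2>_F)."""
    num = np.sum(K1 * K2)
    return num / np.sqrt(np.sum(K1 * K1) * np.sum(K2 * K2))

def target_alignment(K, y):
    """Alignment of a Gram matrix with the label kernel yy'."""
    y = np.asarray(y, dtype=float)
    return alignment(K, np.outer(y, y))
```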
3 Concentration
The following theorem shows that the alignment is not too dependent on the training
set 8. This result is expressed in terms of 'concentration'. Concentration means that
the probability of an empirical estimate deviating from its mean can be bounded
as an exponentially decaying function of that deviation.
This will have a number of implications for the application and optimisation of the
alignment. For example if we optimise the alignment on a random sample we can
expect it to remain high on a second sample. Furthermore we will show in the next
section that if the expected value of the alignment is high, then there exist functions
that generalise well. Hence, the result suggests that we can optimise the alignment
on a training set and expect to keep high alignment and hence good performance
on a test set. Our experiments will demonstrate that this is indeed the case.
The theorem makes use of the following result due to McDiarmid. Note that E_S is the expectation operator under the selection of the sample.
Theorem 2 (McDiarmid [4]) Let X₁, ..., Xₙ be independent random variables taking values in a set A, and assume that f : Aⁿ → ℝ satisfies for 1 ≤ i ≤ n

$$\sup_{x_1,\dots,x_n,\hat{x}_i} |f(x_1,\dots,x_n) - f(x_1,\dots,\hat{x}_i,x_{i+1},\dots,x_n)| \le c_i.$$

Then for all ε > 0,

$$P\{|f(X_1,\dots,X_n) - \mathbb{E}f(X_1,\dots,X_n)| \ge \epsilon\} \le 2\exp\left(\frac{-2\epsilon^2}{\sum_i c_i^2}\right).$$
Theorem 3 The sample based estimate of the alignment is concentrated around its expected value. For a kernel with feature vectors of norm 1, we have that

$$P^m\{S : |\hat{A}(S) - A(y)| \ge \epsilon\} \le \delta, \quad \text{where } \epsilon = C(S)\sqrt{8\ln(2/\delta)/m}, \qquad (1)$$

for a non-trivial function C(S) and value A(y).
Proof: Let

$$\hat{A}_1(S) = \frac{1}{m^2}\sum_{i,j=1}^m y_i y_j k(x_i, x_j), \qquad \hat{A}_2(S) = \frac{1}{m^2}\sum_{i,j=1}^m k(x_i, x_j)^2, \qquad A(y) = \frac{\mathbb{E}_S[\hat{A}_1(S)]}{\sqrt{\mathbb{E}_S[\hat{A}_2(S)]}}.$$

First note that Â(S) = Â₁(S)/√(Â₂(S)). Define A₁ = E_S[Â₁(S)] and A₂ = E_S[Â₂(S)]. First we make use of McDiarmid's theorem to show that Â_i(S) are concentrated for i = 1, 2. Consider the training set S′ = S \ {(x_i, y_i)} ∪ {(x̂_i, ŷ_i)}. We must bound the difference

$$|\hat{A}_j(S) - \hat{A}_j(S')| \le \frac{1}{m^2}\big(2 \cdot 2(m-1)\big) < \frac{4}{m},$$

for j = 1, 2. Hence, we have c_i = 4/m for all i and we obtain from an application of McDiarmid's theorem, for j = 1 and 2,

$$P\{|\hat{A}_j(S) - A_j| \ge \epsilon\} \le 2\exp\left(\frac{-\epsilon^2 m}{8}\right).$$

Setting ε = √(8 ln(2/δ)/m), the right hand sides are less than or equal to δ/2. Hence, with probability at least 1 − δ, we have |Â_j(S) − A_j| < ε for j = 1, 2. But whenever these two inequalities hold, we have

$$|\hat{A}(S) - A(y)| = \left|\frac{\hat{A}_1(S)}{\sqrt{\hat{A}_2(S)}} - \frac{A_1}{\sqrt{A_2}}\right| \le C(S)\,\epsilon$$

for a function C(S) of the empirical quantities, as required. □
Remark. We could also define the true alignment, based on the input distribution P, as follows: given functions f, g : X² → ℝ, we define ⟨f, g⟩_P = ∫_{X²} f(x, z) g(x, z) dP(x) dP(z). Then the alignment of a kernel k₁ with a kernel k₂ is the quantity

$$A(k_1, k_2) = \frac{\langle k_1, k_2\rangle_P}{\sqrt{\langle k_1, k_1\rangle_P \langle k_2, k_2\rangle_P}}.$$
Then it is possible to prove that asymptotically as m tends to infinity the empirical
alignment as defined above converges to the true alignment. However if one wants
to obtain unbiased convergence it is necessary to slightly modify its definition by
removing the diagonal, since for finite samples it biases the expectation by receiving
too large a weight. With this modification A(y) in the statement of the theorem becomes the true alignment. We prefer not to pursue this avenue further for simplicity
in this short article, we just note that the change is not significant.
4 Generalization
In this section we consider the implications of high alignment for the generalisation
of a classifier. By generalisation we mean the test error err(h) = P(h(x) ≠ y).
Our next observation relates the generalisation of a simple classification function
to the value of the alignment. The function we consider is the expected Parzen
window estimator h(x) = sign(f(x)) = sign(E_{(x′,y′)}[y′ k(x′, x)]). This corresponds
to thresholding a linear function f in the feature space. We will show that if
there is high alignment then this function will have good generalisation. Hence, by
optimising the alignment we may expect Parzen window estimators to perform well.
We will demonstrate that this prediction does indeed hold good in experiments.
Theorem 4 Given any δ > 0, with probability 1 − δ over a randomly drawn training set S, the generalisation accuracy of the expected Parzen window estimator h(x) = sign(E_{(x′,y′)}[y′ k(x′, x)]) is bounded from above by

$$\mathrm{err}(h(x)) \le 1 - \hat{A}(S) + \epsilon + \big(m\sqrt{\hat{A}_2(S)}\big)^{-1}, \quad \text{where } \epsilon = C(S)\sqrt{\tfrac{8}{m}\ln\tfrac{4}{\delta}}.$$
Proof: (sketch) We assume throughout that the kernel has been normalised so that k(x, x) = 1 for all x. First observe that by Theorem 3, with probability greater than 1 − δ/2, |A(y) − Â(S)| ≤ ε. The result will follow if we show that with probability greater than 1 − δ/2 the generalisation error of h can be upper bounded by 1 − A(y) + (mC)⁻¹, where C = √(E_S[Â₂(S)]). Consider the quantity A(y) from Theorem 3:

$$A(y) = \frac{1}{C}\,\mathbb{E}_S\Big[\frac{1}{m^2}\sum_{i,j=1}^m y_i y_j k(x_i, x_j)\Big] = \frac{1}{C}\,\mathbb{E}_S\Big[\frac{1}{m^2}\sum_{i\neq j} y_i y_j k(x_i, x_j)\Big] + \frac{1}{Cm}.$$

But

$$\mathbb{E}_S\Big[\frac{1}{m^2}\sum_{i\neq j} y_i y_j k(x_i, x_j)\Big] = \frac{m-1}{m}\,\mathbb{E}_{(x,y)}[y f(x)], \quad\text{while}\quad \frac{(m-1)^2}{C^2 m^2}\,\mathbb{E}_{(x,y),(x',y')}\big[k(x, x')^2\big] \le 1.$$

Hence, if a = P(f(x) ≠ y), then since the kernel is normalised,

$$\mathbb{E}_{(x,y)}[y f(x)] \le 1 \times P(f(x) = y) + 0 \times P(f(x) \neq y) = 1 - a,$$

and combining these bounds we have a ≤ 1 − A(y) + (Cm)⁻¹. □
An empirical estimate of the function f would be the Parzen window function.
The expected margin of the empirical function is concentrated around the expected
margin of the expected Parzen window. Hence, with high probability we can bound
the error of f̂ in terms of the empirically estimated alignment Â(S). This is omitted due to lack of space. The concentration of f̂ is considered in [3].
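A minimal sketch of the empirical Parzen window rule analysed above, given a precomputed test-by-train kernel matrix:

```python
import numpy as np

def parzen_window_predict(K_test_train, y_train):
    """Empirical Parzen window classifier: the sign of the mean
    label-weighted kernel evaluation, an empirical estimate of
    sign(E[y' k(x', x)]). K_test_train[i, j] = k(x_test_i, x_train_j)."""
    scores = K_test_train @ np.asarray(y_train, dtype=float) / len(y_train)
    return np.sign(scores)
```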
5 Algorithms
The concentration of the alignment can be directly used for tuning a kernel family
to the particular task, or for selecting a kernel from a set, with no need for training.
The probability that the level of alignment observed on the training set will be out by more than ε from its expectation for one of the kernels is bounded by δ, where ε is given by equation (1) for

$$\epsilon = C(S)\sqrt{\frac{8}{m}\left(\ln|N| + \ln\frac{2}{\delta}\right)},$$

where |N| is the size of the
set from which the kernel has been chosen. In fact we will select from an infinite
family of kernels. Providing a uniform bound for such a class would require covering
numbers and is beyond the scope of this paper. One of the main consequences of
the definition of kernel alignment is in providing a practical criterion for combining
kernels. We will justify the intuitively appealing idea that two kernels with a certain
alignment with a target that are not aligned to each other, will give rise to a more
aligned kernel combination. In particular we have that
This shows that if two kernels with equal alignment to a given target yare also
completely aligned to each other, then IIKI + K211F = IIKlllF + IIK211F and the
alignment of the combined kernel remains the same. If on the other hand the
kernels are not completely aligned, then the alignment of the combined kernel is
correspondingly increased.
To illustrate the approach we will take to optimising the kernel, consider a kernel that can be written in the form k(x, x′) = Σ_k μ_k y^k(x) y^k(x′), where all the y^k are orthogonal with respect to the inner product defined on the training set S, ⟨y, y′⟩_S = Σ_{i=1}^m y_i y′_i. Assume further that one of them, y^t, is the true label vector. We can now evaluate the alignment as Â(y) ≈ μ_t/√(Σ_k μ_k²). In terms of the Gram matrix this is written as K_ij = Σ_k μ_k y_i^k y_j^k, where y_i^k is the i-th label of the k-th classification. This special case is approximated by the decomposition into eigenvectors of the kernel matrix K = Σ_i λ_i v_i v_i′, where v′ denotes the transpose of v and v_i is the i-th eigenvector with eigenvalue λ_i. In other words, the more peaked the spectrum the more aligned (specific) the kernel can be.

If by chance the eigenvector of the largest eigenvalue λ₁ corresponds to the target labeling, then we will give to that labeling a fraction λ₁/√(Σ_i λ_i²) of the weight that
we can allocate to different possible labelings. The larger the emphasis of the kernel
on a given target, the higher its alignment.
In the previous subsection we observed that combining non-aligned kernels that are
aligned with the target yields a kernel that is more aligned to the target. Consider
the base kernels K_i = v_i v_i′, where v_i are the eigenvectors of K, the kernel matrix
for both labeled and unlabeled data. Instead of choosing only the most aligned
ones, one could use a linear combination, with the weights proportional to their
alignment (to the available labels): K = Σ_i f(α_i) v_i v_i′, where α_i is the alignment of the kernel K_i, and f(α) is a monotonically increasing function (e.g. the identity or
an exponential). Note that a recombination of these rank 1 kernels was made in
so-called latent semantic kernels [2]. The overall alignment of the new kernel with
the labeled data should be increased, and the new kernel matrix is expected also
to be more aligned to the unseen test labels (because of the concentration, and the
assumption that the split was random).
Moreover, in general one can set up an optimization problem, aimed at finding the optimal α, that is the parameters that maximize the alignment of the combined kernel with the available labels. Given K = Σ_i α_i v_i v_i′, using the orthonormality of the v_i and that ⟨vv′, uu′⟩_F = ⟨v, u⟩², the alignment can be written as

$$\hat{A}(y) = \frac{\langle K, yy'\rangle_F}{\sqrt{\langle yy', yy'\rangle_F}\sqrt{\sum_{ij}\alpha_i\alpha_j\langle v_i v_i', v_j v_j'\rangle_F}} = \frac{\sum_i \alpha_i \langle v_i, y\rangle^2}{m\sqrt{\sum_i \alpha_i^2}}.$$

Hence we have the following optimization problem:

$$\text{maximise} \quad W(\alpha) = \frac{\sum_i \alpha_i \langle v_i, y\rangle^2}{\sqrt{\sum_i \alpha_i^2}}. \qquad (2)$$

Setting derivatives to zero we obtain ⟨v_i, y⟩² − Λα_i = 0, and hence α_i ∝ ⟨v_i, y⟩², giving the overall alignment Â(y) = √(Σ_i ⟨v_i, y⟩⁴)/m.
This analysis suggests the following transduction algorithm. Given a partially labelled set of examples optimise its alignment by adapting the full kernel matrix by
recombining its rank one eigenmatrices v_i v_i′ using the coefficients α_i determined by measuring the alignment between v_i and y on the labelled examples. Our results
suggest that we should see a corresponding increase in the alignment on the unlabelled part of the set, and hence a reduction in test error when using a Parzen
window estimator. Results of experiments testing these predictions are given in the
next section.
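A compact sketch of this transduction step is given below; measuring each eigenvector's alignment with y via its squared projection onto the labelled coordinates, and using that value directly as the coefficient, are assumptions of this sketch:

```python
import numpy as np

def align_kernel(K, y_labelled, labelled_idx):
    """Recombine the rank-one eigenmatrices v_i v_i' of K with coefficients
    alpha_i proportional to the squared projection of the partial label
    vector onto each eigenvector, measured on the labelled points only."""
    _, eigvecs = np.linalg.eigh(K)            # columns are eigenvectors v_i
    proj = eigvecs[labelled_idx, :].T @ np.asarray(y_labelled, dtype=float)
    alphas = proj ** 2                        # alpha_i ~ <v_i, y>^2
    return (eigvecs * alphas) @ eigvecs.T     # G = sum_i alpha_i v_i v_i'
```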
6 Experiments
We applied the transduction algorithm designed to take advantage of our results
by optimizing alignment with the labeled part of the dataset using the spectral
method described above. All of the results are averaged over 20 random splits with
the standard deviation given in brackets.
Table 1 shows the alignments of the Gram matrices to the label matrix for different sizes of training set. The index indicates the percentage of training points. The K matrices are before adaptation, while the G matrices are after optimisation of the alignment using equation (2). The results on the left are for Breast Cancer data using a linear kernel, while the results on the right are for Ionosphere data.

            Breast                            Ionosphere
        Train Align      Test Align       Train Align      Test Align
K80     0.076 (0.007)    0.092 (0.029)    0.207 (0.020)    0.240 (0.083)
G80     0.228 (0.012)    0.219 (0.041)    0.240 (0.016)    0.257 (0.059)
K50     0.075 (0.016)    0.084 (0.017)    0.210 (0.031)    0.216 (0.033)
G50     0.242 (0.023)    0.181 (0.043)    0.257 (0.023)    0.202 (0.015)
K20     0.072 (0.022)    0.081 (0.006)    0.227 (0.057)    0.210 (0.015)
G20     0.273 (0.037)    0.034 (0.046)    0.326 (0.023)    0.118 (0.017)

Table 1: Mean and associated standard deviation alignment values using a linear kernel on the Breast (left two columns) and Ionosphere (right two columns) data.
The left two columns of Table 2 show the alignment values for Breast Cancer data using a Gaussian kernel, together with the performance of an SVM classifier trained with the given Gram matrix in the third column. The right two columns show the performance of the Parzen window classifier on the test set for Breast linear kernel (left column) and Ionosphere (right column).

Table 2: Breast alignment (cols 1, 2) and SVM error for a Gaussian kernel (sigma = 6) (col 3), Parzen window error for Breast (col 4) and Ionosphere (col 5).
The results clearly show that optimising the alignment on the training set does
indeed increase its value in all but one case by more than the sum of the standard
deviations. Furthermore, as predicted by the concentration this improvement is
maintained in the alignment measured on the test set with both linear and Gaussian
kernels in all but one case (20% train with the linear kernel). The results for
Ionosphere are less conclusive. Again as predicted by the theory the larger the
alignment the better the performance that is obtained using the Parzen window
estimator. The results of applying an SVM to the Breast Cancer data using a
Gaussian kernel show a very slight improvement in the test error for both 80% and
50% training sets.
7 Conclusions
We have introduced a measure of performance of a kernel machine that is much
easier to analyse than standard measures (e.g. the margin) and that provides much
simpler algorithms. We have discussed its statistical and geometrical properties,
demonstrating that it is a well motivated and formally useful quantity.
By identifying that the ideal kernel matrix has a structure of the type yy', we have
been able to transform a measure of similarity between kernels into a measure of
fitness of a given kernel. The ease and reliability with which this quantity can be
estimated using only training set information prior to training makes it an ideal
tool for practical model selection. We have given preliminary experimental results
that largely confirm the theoretical analysis and augur well for the use of this tool
in more sophisticated model (kernel) selection applications.
References
[1] N. Cristianini and J. Shawe-Taylor. An Introduction to Support Vector Machines. Cambridge University Press, 2000. See also the web site www.support-vector.net.
[2] Nello Cristianini, Huma Lodhi, and John Shawe-Taylor. Latent semantic kernels
for feature selection. Technical Report NC-TR-00-080, NeuroCOLT Working
Group, http://www.neurocolt.org, 2000.
[3] L. Devroye, L. Györfi, and G. Lugosi. A Probabilistic Theory of Pattern Recognition. Number 31 in Applications of Mathematics. Springer, 1996.
1,036 | 1,947 | An Efficient Clustering Algorithm Using Stochastic Association Model and Its Implementation Using Nanostructures
Takashi Morie, Tomohiro Matsuura, Makoto Nagata, and Atsushi Iwata
Graduate School of Advanced Sciences of Matter, Hiroshima University
Higashi-hiroshima, 739-8526 Japan.
http://www.dsl.hiroshima-u.ac.jp
morie@dsl.hiroshima-u.ac.jp
Abstract
This paper describes a clustering algorithm for vector quantizers using a
?stochastic association model?. It offers a new simple and powerful softmax adaptation rule. The adaptation process is the same as the on-line
K-means clustering method except for adding random fluctuation in the
distortion error evaluation process. Simulation results demonstrate that
the new algorithm can achieve efficient adaptation as high as the "neural gas" algorithm, which is reported as one of the most efficient clustering
methods. It is a key to add uncorrelated random fluctuation in the similarity evaluation process for each reference vector. For hardware implementation of this process, we propose a nanostructure, whose operation
is described by a single-electron circuit. It positively uses fluctuation in
quantum mechanical tunneling processes.
1 Introduction
Vector quantization (VQ) techniques are used in a wide range of applications, including
speech and image processing, and data compression. VQ techniques encode a data manifold
using only a finite set of reference vectors w = (w_1, ..., w_N). A data vector v is represented by the best-matching or "winning" reference vector w_{c(v)}, which minimizes the average distortion error:

E = ∫ d^D v P(v) ‖v − w_{c(v)}‖²,   (1)

where P(v) is the probability distribution of data vectors over the manifold.
Various clustering algorithms to obtain the best reference vectors have been reported. Here,
we treat on-line training, in which the data point distribution is not given a priori, but instead
a stochastic sequence of incoming sample data points drives the adaptation procedure.
The straightforward approach is the well-known on-line K-means clustering algorithm, in
which only the nearest reference vector to the sample vector is adjusted;
Δw_i = ε δ_{i,c(v)} (v − w_i),   (2)

where ε is the step size and δ_{i,c(v)} is the Kronecker delta. However, this simple clustering
algorithm is often stuck in a local minimum. To avoid this difficulty, a common approach is to introduce a "soft-max" adaptation rule that not only adjusts the "winning" reference vector but also affects other reference vectors depending on their proximity to v.
The maximum-entropy (ME) algorithm [1] adjusts all reference vectors w_i depending on the Euclidean distance to v:

Δw_i = ε e^{−β ‖v − w_i‖²} (v − w_i),   (3)

where parameter β defines the proximity.
The Kohonen self-organization map (SOM) algorithm [2] is another well-known model:

Δw_i = ε h_σ(i, c(v)) (v − w_i).   (4)

In this model, every reference vector is assigned to a site of a lattice. Each time a sample vector is presented, not only the "winning" reference vector is adjusted but also the reference vectors assigned to the lattice sites adjacent to the winner are updated, according to a function h_σ which is typically chosen to be a Gaussian:

h_σ(i, c) = exp( −‖r_i − r_c‖² / (2σ²) ),   (5)

where σ is a parameter that defines the proximity and r_i denotes the lattice position assigned to the i-th reference vector.
The neural-gas (NG) clustering algorithm [3] is a powerful soft-max adaptation rule, in which all reference vectors are adjusted depending on the "neighborhood ranking":

Δw_i = ε h_λ(k_i(v, w)) (v − w_i),   (6)

where k_i(v, w) is the ranking of w_i, which depends on v and the whole set w = (w_1, ..., w_N): k_i = 0 for the reference vector closest to v, k_i = 1 for the second closest, and so on. The function h_λ is typically as follows:

h_λ(k) = e^{−k/λ},   (7)

where parameter λ defines the proximity. This algorithm exhibits faster convergence to smaller distortion errors, but it consumes more computational power, especially for the sorting. An efficient version of the NG clustering that adjusts only the several reference vectors having upper ranking was also proposed [4].
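For concreteness, the following Python sketch (our own illustration, not taken from the original implementations) performs one on-line adaptation step for the K-means rule of eq. (2) and the NG rule of eqs. (6) and (7); w is an (N, d) array of reference vectors and v a d-dimensional sample, with eps and lam playing the roles of ε and λ:

    import numpy as np

    def kmeans_step(w, v, eps):
        # On-line K-means, eq. (2): move only the winning vector toward v.
        c = np.argmin(np.sum((w - v) ** 2, axis=1))   # index of the winner
        w[c] += eps * (v - w[c])
        return w

    def neural_gas_step(w, v, eps, lam):
        # Neural gas, eqs. (6)-(7): every vector moves, weighted by its rank.
        d = np.sum((w - v) ** 2, axis=1)       # squared distances to sample v
        k = np.argsort(np.argsort(d))          # neighborhood rank k_i of each vector
        h = np.exp(-k / lam)                   # soft-max factor h_lambda(k)
        w += eps * h[:, None] * (v - w)
        return w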
In the next section, we propose a new efficient soft-max adaptation algorithm. It employs the stochastic association model that we have proposed related to single-electron
circuits [5], [6]. In Sec. 3, it is demonstrated from simulation results that this new clustering algorithm is as powerful as the other algorithms. In Sec. 4, we propose a nanostructure
based on a single-electron circuit for implementing the stochastic association model.
2 Stochastic association algorithm
A usual associative memory is defined as a system that deterministically extracts the vector
most similar to the input vector from the stored reference vectors. This just corresponds to
the process choosing the winning reference vector for a certain data vector in all conventional clustering algorithms.
In our stochastic association (SA) model, the association probability depends on the similarity between the input and the reference vectors. The SA algorithm extracts not only the
reference vector most similar to the input but also other similar reference vectors with the
probability depending on the similarity.
In the SA algorithm, stochastic fluctuation is added in the evaluation process of the distortion error D_i between data vector v and reference vector w_i. We propose this algorithm inspired by the quantum mechanical property of single-electron circuits as described in Sec. 4, and we expect that such fluctuation helps to avoid getting stuck in local minima of E.

[Figure 1: horizontal axis "Distance"; distributions A(r_n) of the fluctuating evaluation results R_i and R_n, centered at the distances D_i and D_n of reference vectors w_i and w_n from the data vector.]
Figure 1: Probability distribution in the evaluation of the distortion error between the data vector and each reference vector.
The distortion error D_i can be the squared Euclidean distance ‖v − w_i‖² or the Manhattan distance Σ_j |v_j − w_{ij}|. The evaluation result is represented by

R_i = D_i + r_i,   (8)

where r_i is a random variable with probability distribution function A(r). Therefore, the evaluation result R_i is also considered as a random variable. The probability that R_i has value R is represented by

Pr(R_i = R) = A(R − D_i).   (9)
The winning reference vector w_c is determined by

R_c = min_i R_i.   (10)

The probability that reference vector w_i becomes the winner when R_i has value R for a certain data vector is given by the product of A(R − D_i) and the probability that R_j > R for all j ≠ i, as shown in Fig. 1. Therefore, the probability P_i that w_i becomes the winner is given by integrating it with respect to R:

P_i = ∫ dR A(R − D_i) ∏_{j≠i} S_j(R),   (11)

S_j(R) = ∫_R^∞ dR′ A(R′ − D_j).   (12)
If the winning reference vector is updated as expressed by eq. (2), the SA model can provide
a new soft-max adaptation rule. Figure 2 shows an architecture for clustering processing
using the SA model. The distortion error between the input vector and each stored reference vector is evaluated in parallel with stochastic fluctuation. The winner-take-all circuit
deterministically extracts the winner, and the winning reference vector is only updated with
a constant value. As in the K-means algorithm, only one reference vector is adjusted for
each adaptation step and the update value for the selected reference vector is independent of
similarity or proximity. However, unlike the K-means algorithm, the adjusted vector is not
always the most similar reference vector, and sometimes other similar vectors are adjusted.
The total adjusting tendency in the SA algorithm seems similar to the NG or ME algorithm
because the probability of reference vector selection is determined by the neighborhood
ranking and the distances between each reference vector and a given data vector.
[Figure 2: block diagram — input vector v → distortion-error evaluation with stochastic fluctuation against the stored reference vectors w_i → winner-take-all → update of only the winning vector w_c.]
Figure 2: Architecture for clustering processing using the SA model.
[Figure 3 panels: (a) SA, (b) ME; snapshots at t = 0, t_max = 5000, and t_max = 50000.]
Figure 3: Test problem and clustering results by the SA and ME algorithms. Data samples are uniformly distributed in the square regions, and points represent reference vectors. Both algorithms use the same initial state.
3 Simulation results
In order to test the performance of the SA algorithm in minimizing the distortion error and
to compare it with the other soft-max approaches, we performed the same simulation of model clustering described in Ref. [3]. The data clusters are of square shape within a two-dimensional input space, as shown in Fig. 3. In the simulation, the number of clusters was 15, and that of reference vectors was 60. We averaged the results of 50 simulation runs, for each of which not only the initialization of the reference vectors but also the placement of the 15 clusters was chosen randomly.

The SA algorithm in this simulation used the squared Euclidean distance as the distortion error D_i and the normal distribution as the probability distribution A(r) of the stochastic fluctuation:
A(r) = (1/(√(2π) σ)) exp( −r² / (2σ²) ).   (13)
[Figure 4: performance α versus the total number of adaptation steps t_max (0 to 50000); curves for the maximum-entropy (ME), SOM, neural-gas (NG), and stochastic association (SA) algorithms. Optimized annealing parameters used in the simulation:

algorithm (parameter)   initial   final
ME  (β)                 1         10000
SOM (σ)                 2         0.01
NG  (λ)                 10        0.01
SA  (σ)                 0.2       0.0001
all (ε)                 0.5       0.005 ]

Figure 4: Clustering performance of the SA algorithm compared with the other clustering methods. The optimized parameters used in the simulation are also shown.
Figure 3 shows an example of clustering by the SA algorithm compared with that by the ME algorithm. The result of the SA algorithm demonstrates nearly perfect clustering for t_max = 50000. In contrast, the clustering result by the ME algorithm is not as good, although the parameters used were optimized.

Here, all the clustering algorithms including the SA algorithm use an annealing procedure
.8+7 0
" ! 0 # #%$'&)(
"
8
*%+ )9(2
#
,
(14)
where
is the total number of adaptation steps. The values optimized by numerous
preliminary simulations are shown in Fig. 4, which were used in the simulation described
here.
, .-
8 7" /10 ! /
8
2/
In order to compare the performance of the algorithms, we used a performance measure
, where
is the minimal distortion error in this problem. The
relationships between
and for the four algorithms are shown in Fig. 4. The clustering performance of the SA algorithm is nearly equal to that of the NG algorithm, which is
the most efficient clustering method in this test problem. The number of adaptation steps
to reach the steady state and the distortion error at the steady state in the SA algorithm are
nearly the same as those in the NG algorithm.
,
3
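The exponential schedule of eq. (14) is a one-liner; the function below is a sketch using the reconstructed notation g_i, g_f:

    def anneal(g_init, g_final, t, t_max):
        # Exponential annealing, eq. (14): g(t) = g_i * (g_f / g_i) ** (t / t_max).
        return g_init * (g_final / g_init) ** (t / t_max)

For example, the SA fluctuation width over 50000 steps would be sigma_t = anneal(0.2, 0.0001, t, 50000), using the values of Fig. 4.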
We also performed other simulations, one of which was vector quantization of a real image ("Lena", 256 × 256 pixels, 8-bit grayscale). In this case, the SOM demonstrated the best performance, and the SA algorithm also had nearly equal performance.
Consequently, comparing with the other soft-max algorithms, the SA algorithm has nearly
the best clustering performance. Moreover, it does not require a sorting process unlike the
NG algorithm nor a searching process of adjacent lattice sites unlike the SOM; only one
reference vector is adjusted per adaptation step. Thus, the computational power required
by the SA algorithm is much less than that required by the other soft-max algorithm. If the
number of reference vectors is , the total updating steps of reference vectors in the SA
algorithm are
times as many as those in the other algorithms. Thus, the SA algorithm
is the most efficient clustering method.
!
D1
Vd2
Dc
Energy
Ah
Vr3
Vr2
Av
Co
MOSFET
(b)
C2
Ah
C1
Vd2
Vd1
C3
: Electron e M C2
Cj : 0.1aF
C1 : 0.06aF
C2 : 0.02aF
C3 (parasitic) : 0.002aF
Co : 100aF
D1
Vr1
Dv1
Av
Ne ~ ? |Vdi ? Vri|
Co
D1
Position of eM
10-4
10-6
10-8
H-H state
L-H state
i
200
Vbg
Dc
0
t0
(d)
D5
Dv3
D1
Position of eM
-400
C1
Cj
Dc
Dc
400
Position of eM
Vr2
D5
D1
-400
Data unmatched (L-H state)
Energy
Vr1
0
Position of eM
Energy (meV)
Vd1
D1
400
Energy (meV)
Data matched (H-H state)
(c)
Vd3
eM moving time tM (sec)
(a)
300
Temperature (K)
2
Figure 5: Nanostructure evaluating Hamming distance. (a) Schematic of nanostructure,
where dot arrays are extremely enlarged compared with a MOSFET to emphasize the dot
structures. (b) Single-electron circuit. (c) Potential profile in dot array
. (d)
moving
time for bit comparator operation.
4 Nanostructure implementing SA model
The key for implementing the SA model is adding random fluctuation as expressed by
eq. (8). We have already proposed single-electron circuits and nanostructures evaluating
Hamming distance for the SA model [5]-[9].
Figure 5(a) and (b) show a nanostructure and the corresponding single-electron circuit, respectively, which are the most sophisticated version of our circuits and structures [9]. The nanostructure consists of plural dot structures arranged on a MOS transistor gate electrode. Each dot structure consists of the 1-D dot arrays A_h (dots D_1–D_5 with center dot D_c in Fig. 5) and A_v (D_{v1}–D_{v3}); from Monte Carlo single-electron circuit simulation, the number of dots at a side of A_h should be more than 3. The dot diameter assumed is around 1 nm. The capacitance C_o corresponds to the gate capacitance of an ultrasmall MOS transistor. An electron e_M is introduced in array A_h, which is for example performed by using Fowler–Nordheim tunneling from the grounded plate over D_c. Electron e_M, which is initially located at D_c, can move along array A_h through the tunneling junctions C_j, but it cannot move to A_v, which is coupled only through normal (non-tunneling) capacitors. Digital (High/Low) voltages V_{di} and V_{ri} are applied at the two edges of A_h; they correspond to elements of the data and reference vectors, respectively. Each dot structure simply works as an exclusive-NOR logic gate (bit comparator) with random fluctuation, as explained below.
If the two digital data bits (V_d and V_r) are matched, electron e_M stabilizes at the center dot D_c; otherwise e_M moves to an off-center position. After stabilizing e_M, by changing the voltages V_d, V_r and the back-gate voltage V_{bg}, the vertical dot array A_v detects whether e_M stays at D_c or not; only if e_M stays at D_c is A_v polarized and an electron induced at the gate electrode of the MOSFET. The total number of induced electrons (N_e) is proportional to the number of dot structures with matched bits; thus the Hamming distance can be measured by counting the induced electrons using the ultrasmall MOS transistor. (If one of the input digital data is applied through an inverter, the number of unmatched bits can be calculated.)
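In software terms, each dot structure behaves like an XNOR gate whose output occasionally flips. The toy model below counts matched bits with a flip probability standing in for the tunneling statistics; this is an illustrative abstraction of ours, not a device-level model:

    import numpy as np

    def noisy_match_count(data_bits, ref_bits, p_flip, rng):
        # Ideal XNOR outputs: True where the data and reference bits agree.
        match = np.equal(data_bits, ref_bits)
        # Stochastic comparator errors, e.g., when the detection timing is off.
        flips = rng.random(len(match)) < p_flip
        # Analogue of the induced-electron count N_e.
        return int(np.sum(match ^ flips))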
The detail of the operation stabilizing e_M is as follows. Because of the charging energy of e_M itself, the total energy as a function of the position of e_M in array A_h has two peaks at the midpoints of each side of the array, and has minimal values at D_c and at both ends of the array, as shown in Fig. 5(c). The energy barrier height for e_M at D_c is assumed to be larger than the thermal energy at room temperature.

In the L-L state, the energy at the off-center minima rises up, thus e_M is most strongly stabilized at D_c. On the other hand, in the H-L (L-H) or H-H state, the energy barrier is lower than that of the L-L state, thus e_M can more easily overcome the barrier by using thermal noise. Figure 5(d) shows the relation between the operation temperature and the time (t_M) required until e_M moves off-center, which was obtained by Monte Carlo single-electron circuit simulation. The moving process assisted by thermal noise is purely stochastic, thus t_M scatters over a wide range. However, because the energy barrier height in the H-L (L-H) states is lower than that in the H-H state, as shown in Fig. 5(c), there exists a certain time span t_0 within which e_M in the H-L (L-H) states moves off-center while e_M in the H-H state stays at D_c. At room temperature (300 K), t_0 is several microseconds in this case, although t_M depends on the tunneling resistance. If the detection process starts after t_0, a nearly perfect exclusive-NOR (bit comparison) operation is achieved. On the other hand, if the start timing is shifted from t_0, an arbitrary amount of fluctuation can be included in the bit comparison result. Thus, we utilize quantum mechanical tunneling processes assisted by thermal noise in this structure, which is similar to the phenomenon known as stochastic resonance.
Although digital data are treated in the above explanation, analog data can be treated in the
same circuit by using pulse-width modulation (PWM) signals, which have a digital amplitude and an analog pulse width [10]. Therefore, instead of the Hamming distance, the
Manhattan distance can be evaluated by using this nanostructure. Because random fluctuation is naturally added in our nanostructure, it can implement the calculation expressed
by eq. (8). The annealing procedure described by eqs. (13) and (14) can be performed by
changing the time scale in the stabilization operation; that means the scaling of pulse-width
modulation.
The proposed nanostructure has not yet been fabricated using the present VLSI technology,
but the basic technology related to nanocrystalline floating-dot MOSFET devices, which
are closely related to our structure, is now being developed [11]-[13]. Furthermore, wellcontrolled self-assembly processes using molecular manipulation technology, especially
using DNA [14], would be utilized to fabricate our nanostructure. Thus, it could be constructed in the near future.
5 Conclusions
The stochastic association algorithm offers a simple and powerful soft-max adaptation rule
for vector quantizers. Although it is the same as the simple on-line K-means clustering
method except for adding random fluctuation in the distortion error evaluation process, our
new method has an efficient adaptation performance as high as the neural-gas (NG) or the
SOM algorithms. Moreover, our method needs no additional process such as sorting and
only one reference vector is adjusted at each adaptation step; thus the computational effort
is much smaller compared with the conventional soft-max clustering algorithms.
By employing the nanostructure proposed in this paper, very high performance clustering
hardware could be constructed.
Acknowledgments
The authors wish to thank Prof. Masataka Hirose for his support and encouragement. This
work has been supported in part by Grants-in-aid for the Core Research for Evolutional
Science and Technology (CREST) from the Japan Science and Technology Corporation (JST).
References
[1] K. Rose, E. Gurewitz, and G. C. Fox, "Statistical Mechanics and Phase Transitions in Clustering," Physical Review Letters, vol. 65, no. 8, pp. 945-948, 1990.
[2] T. Kohonen, Self-Organization and Associative Memory, Springer-Verlag, Berlin, 1984.
[3] T. M. Martinetz, S. G. Berkovich, and K. J. Schulten, "'Neural-Gas' Network for Vector Quantization and its Application to Time-Series Prediction," IEEE Trans. Neural Networks, vol. 4, pp. 558-569, 1993.
[4] S. Rovetta and R. Zunino, "Efficient Training of Neural Gas Vector Quantizers with Analog Circuit Implementation," IEEE Trans. Circuits & Syst., vol. 46, pp. 688-698, 1999.
[5] M. Saen, T. Morie, M. Nagata, and A. Iwata, "A Stochastic Associative Memory Using Single-Electron Tunneling Devices," IEICE Trans. Electron., vol. E81-C, no. 1, pp. 30-35, 1998.
[6] T. Yamanaka, T. Morie, M. Nagata, and A. Iwata, "A Single-Electron Stochastic Associative Processing Circuit Robust to Random Background-Charge Effects and Its Structure Using Nanocrystal Floating-Gate Transistors," Nanotechnology, vol. 11, no. 3, pp. 154-160, 2000.
[7] T. Morie, T. Matsuura, S. Miyata, T. Yamanaka, M. Nagata, and A. Iwata, "Quantum Dot Structures Measuring Hamming Distance for Associative Memories," Superlattices & Microstructures, vol. 27, no. 5/6, pp. 613-616, 2000.
[8] T. Matsuura, T. Morie, M. Nagata, and A. Iwata, "A Multi-Quantum-Dot Associative Circuit Using Thermal-Noise Assisted Tunneling," in Ext. Abs. of Int. Conf. on Solid State Devices and Materials, pp. 306-307, Sendai, Japan, Aug. 2000.
[9] T. Morie, T. Matsuura, M. Nagata, and A. Iwata, "Quantum Dot Structures Measuring Hamming Distance for Associative Memories," in Extended Abstracts, 4th International Workshop on Quantum Functional Devices (QFD2000), pp. 210-213, Kanazawa, Japan, Nov. 2000.
[10] A. Iwata and M. Nagata, "A Concept of Analog-Digital Merged Circuit Architecture for Future VLSI's," IEICE Trans. Fundamentals, vol. E79-A, no. 2, pp. 145-157, 1996.
[11] S. Tiwari, F. Rana, H. Hanafi, A. Hartstein, E. F. Crabbé, and K. Chan, "A Silicon Nanocrystals Based Memory," Appl. Phys. Lett., vol. 68, no. 10, pp. 1377-1379, 1996.
[12] A. Kohno, H. Murakami, M. Ikeda, H. Nishiyama, S. Miyazaki, and M. Hirose, "Transient Characteristics of Electron Charging in Si-Quantum-Dot Floating Gate MOS Memories," in Ext. Abs. of Int. Conf. on Solid State Devices and Materials, pp. 124-125, Sendai, Japan, Aug. 2000.
[13] R. Ohba, N. Sugiyama, J. Koga, K. Uchida, and A. Toriumi, "Novel Si Quantum Memory Structure with Self-Aligned Stacked Nanocrystalline Dots," in Ext. Abs. of Int. Conf. on Solid State Devices and Materials, pp. 122-123, Sendai, Japan, Aug. 2000.
[14] R. A. Kiehl, "Nanoelectronic Array Architecture," in Extended Abstracts, 4th International Workshop on Quantum Functional Devices (QFD2000), pp. 49-51, Kanazawa, Japan, Nov. 2000.
1,037 | 1,948 | Risk Sensitive Particle Filters
Sebastian Thrun, John Langford, Vandi Verma
School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213
{thrun,jcl,vandi}@cs.cmu.edu
Abstract
We propose a new particle filter that incorporates a model of costs when
generating particles. The approach is motivated by the observation that
the costs of accidentally not tracking hypotheses might be significant in
some areas of state space, and next to irrelevant in others. By incorporating a cost model into particle filtering, states that are more critical to the
system performance are more likely to be tracked. Automatic calculation
of the cost model is implemented using an MDP value function calculation that estimates the value of tracking a particular state. Experiments in
two mobile robot domains illustrate the appropriateness of the approach.
1 Introduction
In recent years, particle filters [3, 7, 8] have found widespread application in domains with
noisy sensors, such as computer vision and robotics [2, 5]. Particle filters are powerful
tools for Bayesian state estimation in non-linear systems. The key idea of particle filters is
to approximate a posterior distribution over unknown state variables by a set of particles,
drawn from this distribution.
This paper addresses a primary deficiency of particle filters: Particle filters are insensitive
to costs that might arise from the approximate nature of the particle representation. Their
only criterion for generating a particle is the posterior likelihood of a state. To illustrate this
point, consider the example of a Space Shuttle. Failures of the engine system are extremely
unlikely, even in the presence of evidence to the contrary. Should we therefore not track
the possibility of such failures, just because they are unlikely? If failure to track such lowlikelihood events may incur high costs?such as a mission failure?these variables should
be tracked even when their posterior probability is low. This observation suggests that costs
should be taken into consideration when generating particles in the filtering process.
This paper proposes a particle filter that generates particles according to a distribution that
combines the posterior probability with a risk function. The risk function measures the
importance of a state location on future cumulative costs. We obtain this risk function via
an MDP that calculates the approximate future risk of decisions made in a particular state.
Experimental results in two robotic domains illustrate that our approach yields significantly
better results than a particle filter insensitive to costs.
2 The ?Classical? Particle Filter
Particle filters are a popular means of estimating the state of partially observable controllable Markov chains [3], sometimes referred to as dynamical systems [1]. To do so, particle
filters require two types of information: data, and a probabilistic generative model of the
system. The data generally comes in two flavors: measurements (e.g., camera images) and
controls (e.g., robot motion commands). The measurement at time will be denoted ,
and denotes the control asserted in the time interval
. Thus, the data is given by
and
Following common notation in the controls literature, we use the subscript to refer to an
event at time and the superscript to denote all events leading up to time .
Particle filters, like any member of the family of Bayes filters such as Kalman filters and
HMMs, estimate the posterior distribution of the state of the dynamical system conditioned
&%
on the data, !#"$
. They do so via the following recursive formula
'#" $ % ( ' $ " %*) '#" $ " +, % !#" + $ +
+, %- " +,
(1)
(
where is a normalization constant. To calculate this posterior, three probability distri-
butions are required, which together are commonly referred as the probabilistic model of
%
the dynamical system: (1) A measurement model ! $ " , which describes the probability
of measuring when the system is in state " . (2) A control model !#",$ ."*+, % , which
characterizes the effect of controls ! on the system state by specifying the probability that
the system is in state " after executing control in state " +, . (3) An initial state distri%
bution '#"/ , which specifies the user?s knowledge about the initial system state. See [2, 5]
for examples of such models in practical applications.
Eqn. 1 is easily derived under the common assumption that the system is Markov:

p(x_t | z^t, u^t) = η p(z_t | x_t, z^{t−1}, u^t) p(x_t | z^{t−1}, u^t)
                  = η p(z_t | x_t) p(x_t | z^{t−1}, u^t)
                  = η p(z_t | x_t) ∫ p(x_t | u_t, x_{t−1}) p(x_{t−1} | z^{t−1}, u^{t−1}) dx_{t−1}.   (2)
Notice that this filter, in the general form stated here, is commonly known as a Bayes filter. Approximations to Bayes filters include the Kalman filter, the hidden Markov model, binary filters, and of course the particle filter. In many applications, the key concern in implementing this probabilistic filter is the continuous nature of the states x_t, controls u_t, and measurements z_t. Even in discrete applications, the state space is often too large to compute the entire posterior in reasonable time.
The particle filter addresses these concerns by approximating the posterior using sets of
state samples (particles):
X_t = { x_t^[i] },   i = 1, ..., M.   (3)

The set X_t consists of M particles x_t^[i], for some large number M (e.g., M = 1,000). Together, these particles approximate the posterior p(x_t | z^t, u^t). X_t is calculated recursively. Initially, at time t = 0, the particles x_0^[i] are generated from the initial state distribution p(x_0). The t-th particle set X_t is then calculated recursively from X_{t−1} as follows:
1:  set X̄_t = ∅ and X_t = ∅
2:  for i = 1 to M do
3:      pick the i-th sample x_{t−1}^[i] ∈ X_{t−1}
4:      draw x_t^[i] ~ p(x_t | u_t, x_{t−1}^[i])
5:      set w_t^[i] = p(z_t | x_t^[i])
6:      add ⟨x_t^[i], w_t^[i]⟩ to X̄_t
7:  endfor
8:  for j = 1 to M do
9:      draw an x_t^[i] from X̄_t with probability proportional to w_t^[i]
10:     add x_t^[i] to X_t
11: endfor
Lines 2 through 7 generate a new set of particles that incorporates the control u_t. Lines 8 through 11 apply a technique known as importance-weighted resampling [11] to account for the measurement z_t. It is a well-known fact that (for large M) the resulting weighted particles are asymptotically distributed according to the desired posterior [12], p(x_t | z^t, u^t).
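As a concrete reference, here is a minimal Python transcription of Lines 1-11 for generic model functions; sample_motion and measurement_prob are placeholder names (ours) for p(x_t | u_t, x_{t−1}) and p(z_t | x_t), which the application must supply:

    import numpy as np

    def particle_filter_step(X_prev, u, z, sample_motion, measurement_prob, rng):
        # One update of the basic particle filter (Lines 1-11).
        M = len(X_prev)
        X_bar = [sample_motion(x, u, rng) for x in X_prev]      # Lines 3-4
        w = np.array([measurement_prob(z, x) for x in X_bar])   # Line 5
        w = w / w.sum()                                         # normalize weights
        idx = rng.choice(M, size=M, p=w)                        # Lines 8-9: resample
        return [X_bar[i] for i in idx]                          # Line 10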
In recent years, researchers have actively developed various extensions of the basic particle
filter, capable of coping with degenerate situations that are often relevant in practice [3, 7,
8]. The common aim of this rich body of literature, however, is to generate samples from
the posterior p(x_t | z^t, u^t). If different controls at different states incur drastically different costs, generating samples according to the posterior runs the risk of not capturing important events that warrant action. Overcoming this deficiency is the very aim of this paper.
3 Risk Sensitive Particle Filters
This section describes a modified particle filter that is sensitive to the risk arising from the
approximate nature of the particle representation. To arrive at a notion of risk, our approach
requires a cost function

c : (x, u) ↦ c(x, u) ∈ ℝ.   (4)
This function assigns real-valued costs to states and control. From a decision theoretic
point of view, the goal of risk sensitive sampling is to generate particles that minimize
the cumulative increase in cost due to the particle approximation. To translate this into a
practical algorithm, we extend the basic paradigm in two ways. First, we modify the basic
particle filter so that particles are generated in a risk-sensitive way, where the risk is a function of c. Second, an appropriate risk function is defined that approximates the cumulative expected costs relative to tracking individual states. This risk function is calculated
using value iteration.
3.1 Risk-Sensitive Sampling
Risk-sensitive sampling generates particles factoring in a risk function r(x). Formally, all we have to ask of a risk function is that it be positive and finite almost everywhere. Not all risk functions will be equally useful, however, so deriving the "right" risk function is important. Decision theory gives us a framework for deciding what the "right" action is in any given state. By considering approximation errors due to Monte Carlo sampling in decision theory and making a sequence of rough approximations, we can arrive at the choice of r(x), which is discussed further below. The full derivation is omitted for lack of space. For now, let us simply assume we are given a suitable risk function.

Risk-sensitive particle filters generate samples that are distributed according to

γ_t r(x_t) p(x_t | z^t, u^t) = γ_t r(x_t) η p(z_t | x_t) ∫ p(x_t | u_t, x_{t−1}) p(x_{t−1} | z^{t−1}, u^{t−1}) dx_{t−1}.   (5)

Here γ_t is a normalization constant that ensures that the term in (5) is indeed a probability distribution. Thus, the probability that a state sample x_t^[i] is part of X_t is not only a function of its posterior probability, but also of the risk r(x_t^[i]) associated with that sample.

Sampling from (5) is easily achieved by the following two modifications of the basic particle filter algorithm. First, the initial set of particles x_0^[i] is generated from the distribution

γ_0 r(x_0) p(x_0).   (6)

Second, Line 5 of the particle filter algorithm is replaced by the following assignment:

set w_t^[i] = r(x_t^[i]) r(x_{t−1}^[i])^{−1} p(z_t | x_t^[i]).   (7)
We conjecture that this simple modification results in a particle filter with samples distributed according to γ_t r(x_t) p(x_t | z^t, u^t). Our conjecture is obviously true for the base case t = 0, since the risk function was explicitly incorporated in the construction of X_0 (see eqn. 6). By induction, let us assume that the particles in X_{t−1} are distributed according to γ_{t−1} r(x_{t−1}) p(x_{t−1} | z^{t−1}, u^{t−1}). Then Line 3 of the modified algorithm generates x_{t−1}^[i] ~ γ_{t−1} r(x_{t−1}) p(x_{t−1} | z^{t−1}, u^{t−1}). Line 4 gives us x_t^[i] ~ γ_{t−1} r(x_{t−1}) p(x_t | u_t, x_{t−1}) p(x_{t−1} | z^{t−1}, u^{t−1}). Samples generated in Line 9 are distributed according to

w_t^[i] γ_{t−1} r(x_{t−1}) p(x_t | u_t, x_{t−1}) p(x_{t−1} | z^{t−1}, u^{t−1}).   (8)

Substituting in the modified weight (eqn. 7), we find the final sample distribution:

r(x_t) r(x_{t−1})^{−1} p(z_t | x_t) γ_{t−1} r(x_{t−1}) p(x_t | u_t, x_{t−1}) p(x_{t−1} | z^{t−1}, u^{t−1})
  = γ_{t−1} r(x_t) p(z_t | x_t) p(x_t | u_t, x_{t−1}) p(x_{t−1} | z^{t−1}, u^{t−1}).   (9)

This term is, up to the normalization constant γ_t γ_{t−1}^{−1} η, equivalent to the desired distribution (5) (see also eqn. 1), which proves our conjecture. Thus, the risk-sensitive particle filter successfully generates samples from a distribution that factors in the risk r.
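In code, the only change relative to the basic filter of Section 2 is the weight of Line 5; a sketch, with risk standing in for r(x):

    import numpy as np

    def risk_sensitive_step(X_prev, u, z, sample_motion, measurement_prob, risk, rng):
        # Particle filter step with the risk-sensitive weights of eq. (7).
        M = len(X_prev)
        X_bar = [sample_motion(x, u, rng) for x in X_prev]
        w = np.array([risk(xn) / risk(xo) * measurement_prob(z, xn)
                      for xn, xo in zip(X_bar, X_prev)])
        w = w / w.sum()
        idx = rng.choice(M, size=M, p=w)
        return [X_bar[i] for i in idx]

Since r is positive and finite almost everywhere, the division by risk(xo) is well defined.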
3.2 The Risk Function
The remaining question is: What is an appropriate risk function r? How important is it to track a state x? Our approach rests on the assumption that there are two possible
situations, one in which the state is tracked well, and one in which the state is tracked
poorly. In the first situation, we assume that any controller will basically choose the right
control, whereas in the second situation, it is reasonable to assume that controls are selected
anywhere between random and in the worst possible way. To complete this model, we
assume that with small probability, the state estimator might move from "well-tracked" to "lost track" and vice versa.
These assumptions are sufficient to formulate an MDP that models the effect of tracking
accuracy on the expected costs. The MDP is defined over an augmented state space ⟨x, b⟩ (see also [10]), where b ∈ {0, 1} is a binary state variable that models the event that the estimator tracks the state with sufficient (b = 1) or insufficient (b = 0) accuracy. The various probabilities of the MDP are easily obtained from the known probability distributions via the natural assumption that the variable b is conditionally independent of the system state x:
p(x_t, b_t | u_t, x_{t−1}, b_{t−1}) = p(x_t | u_t, x_{t−1}) p(b_t | b_{t−1})
p(z_t | x_t, b_t) = p(z_t | x_t)
p(x_0, b_0) = p(x_0) p(b_0)
c(x_t, b_t, u_t) = c(x_t, u_t)   (10)
The expressions on the left hand side define all necessary components of the augmented
model. The only unspecified terms on the right hand side are the initial tracking probability p(b_0) and the transition probabilities for the state estimator p(b_t | b_{t−1}). The former must be set in accordance with the initial knowledge state (e.g., 1 if the initial system state is known, 0 if it is unknown). For the latter, we adopt a model where with high likelihood the tracking state is retained (p(b_t = b_{t−1}) is close to 1) and with low likelihood it changes.
The MDP is solved via value iteration. To model the effect of poor tracking on the control
policy, our approach uses the following value iteration rule (stated here without discounting
for simplicity), in which V denotes the value function and Q is an auxiliary variable:

Q(x, b, u) = c(x, u) + Σ_{b′} ∫ V(x′, b′) p(b′ | b) p(x′ | u, x) dx′

V(x, b) = { min_u Q(x, b, u)                                           if b = 1
          { γ max_u Q(x, b, u) + (1 − γ) |U|^{−1} Σ_{u∈U} Q(x, b, u)   if b = 0   (11)
, i.e., the state is estimated suf N,
ficiently accurately, it is assumed that the controller acts by minimizing costs. If
however, the controller adopts a mixture of picking the worst possible
control , and a
random control. These two options
are
traded
off
by
the
gain
factor
,
which
controls the
suggests
that
poor
state
estimation
leads
to
the worst
?pessimism? of the
approach.
N is more optimistic, in that control is assumed to be random.
possible control.
Our
experiments
have yielded somewhat indifferent results relative to the choice of , and we
use
N3 for all experiments reported here.
Finally, the risk is defined as the difference between the value function that arises from
accurate versus inaccurate state estimation:

r(x) = V(x, b = 0) − V(x, b = 1).   (12)

Under mild assumptions, r(x) can be shown to be strictly positive.
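On a discretized problem, the risk of eq. (12) can be computed by tabular value iteration over the augmented state ⟨x, b⟩. The sketch below assumes finite state and control sets and adds a discount factor for convergence; the array shapes and names are our own:

    import numpy as np

    def risk_function(cost, P_x, P_b, gamma, discount=0.95, n_iter=200):
        # cost[x, u]; P_x[u, x, y] = p(y | u, x); P_b[b, c] = p(c | b).
        S, U = cost.shape
        V = np.zeros((S, 2))                            # V[x, b]
        for _ in range(n_iter):
            # Q[x, b, u] = c(x, u) + discount * E[ V(x', b') ], eq. (11)
            EV = np.einsum("uxy,bc,yc->xbu", P_x, P_b, V)
            Q = cost[:, None, :] + discount * EV
            V = np.empty((S, 2))
            V[:, 1] = Q[:, 1, :].min(axis=1)            # b = 1: cost-minimizing control
            V[:, 0] = (gamma * Q[:, 0, :].max(axis=1)   # b = 0: pessimistic mixture
                       + (1.0 - gamma) * Q[:, 0, :].mean(axis=1))
        return V[:, 0] - V[:, 1]                        # r(x), eq. (12)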
4 Experimental Results
We have applied our approach to two complementary real-world robotic domains: robot
localization, and mobile robot diagnostics. Both yield superior results using our new risk
sensitive approach when compared to the standard particle filter.
4.1 Mobile Robot Localization
Our first evaluation domain involves the problem of localizing a mobile robot from sensor
data [2]. In our experiments, we focused on the most difficult of all localization problems:
Figure 1: (a) Robot Pearl, as it interacts with elderly people at an assisted living facility in Oakmont,
PA. (b) Occupancy grid map. Shown here are also three testing locations labeled A, B, and C, and
regions of high costs (black contours).
Figure 2: (a) Risk function r: the darker a location, the higher the risk. This function, which is
used in the proposal distribution, is derived from the immediate risk function shown in Figure 1b. (b)
Sample of a uniform distribution, taking into consideration the risk function.
                                               standard filter   risk-sensitive filter
steps to re-localize when ported to A          120 ± 13.7        89.3 ± 12.3
steps to re-localize when ported to B          301 ± 35.2        203 ± 37.6
steps to re-localize when ported to C          63.2 ± 6.2        53.2 ± 7.7
number of violations after global kidnapping   96.1 ± 14.1       57.4 ± 10.3

Table 1: Localization results for the kidnapped robot problem, which emulates a total localization failure. Our new approach requires consistently fewer steps for re-localization in high-cost areas, and therefore incurs less cost.
the kidnapped robot problem [4]. Here a well-localized robot is "tele-ported" to some
unknown location and has to recover from this event. This problem plays an important
role in evaluating the robustness of a localization algorithm. Figure 1a shows the robot
Pearl, which has recently been deployed in an assisted living facility as an assistant to the
elderly and cognitively frail. Our study is motivated by the fact that some of the robot's
operational area is a densely cluttered dining room, where the robot is not allowed to cross
certain boundaries due to the danger of physically harming people. These boundaries are
illustrated by the black contours shown in Figure 1b, which also depicts an occupancy grid
map of the facility. Beyond the boundaries, the robot's sensors are somewhat insufficient to
avoid collisions, since they can only sense obstacles at one specific height (34 cm).
Figure 2a shows the risk function r, projected into 2D. The darker a location, the higher
the risk. A sample set drawn from this risk function is shown in Figure 2b. This sample
set represents a uniform posterior. Since risk sensitive particle filters incorporate the risk
[Figure 3 panels: (a) photograph of the rover; (b) kinematic model with wheels W_1–W_4, wheel velocities v_1–v_4, steering angle θ, and axes S_x, S_y, R_x, R_y; (c) x–y plot of the rover trajectory.]

Figure 3: (a) The Hyperion rover, a mobile robot being developed at CMU. (b) Kinematic model. (c) Rover position at time steps 1, 10, 22 and 35.
[Figure 4: panel grids for 100, 1,000, 10,000, and 100,000 samples; rows show the most likely state, the average sample variance, and the median error under 1–0 loss, each plotted against the time step (0–40).]

Figure 4: Tracking curves obtained with (a) plain particle filters, and (b) our new risk-sensitive filter. The bottom curves show the error, which is much smaller for our new approach.
function into the sampling process, the density of samples is proportional to the risk function r.
Numerical results are summarized in Table 1, using data collected in the facility at dinner
time. We ran two types of experiments: First, we kidnapped the robot to any of the locations
marked A, B, and C in Figure 1, and measured the number of sensor readings required to
recover from this global failure. All three locations are within the high-risk area so the
recovery time is significantly shorter than with plain particle filters. Second, we measured
the number of times a simple-minded planner that always looks at the most likely pose
would violate the safety constraint. Here we find that our approach is almost twice as
safe as the conventional particle filter, at virtually the same computational expense. All
experiments were repeated 20 times, and rely on real-world data and operating conditions.
4.2 Mobile Robot Diagnosis
In some domains, particle filters simply cannot be applied in real time because of a large
number of high loss and low probability events. One example is the fault detection domain
illustrated in Figure 3. Our evaluation involves a data set where a rover is driven with a
variety of different control inputs in the normal operation mode. At the 22nd time step, wheel #3 becomes stuck and locked against a rock. The wheel is then driven in the backward direction, fixing the problem. The rover returns to the normal operation mode and continues to operate normally until the gear on wheel #4 breaks at a later time step. This fault is not recoverable, and the controller just alters its input based on this state. Notice that
both failures lead to very similar sensor measurements, despite the fact that they are caused
by quite different events.
Tracking results in Figure 4 show that our approach yields superior results to the standard
particle filter. Even though failures are very unlikely, our approach successfully identifies
them due to the high risk associated with such a failure while the plain particle filter essentially fails to do so. The estimation error is shown in the bottom row of Figure 4, which is
practically zero for our approach when 1,000 or more samples are used. Vanilla particle filters exhibit non-zero error even with 100,000 samples. However, it is important to notice that these results were obtained using simulated data and a hand-tuned loss function approach.
5 Discussion
We have proposed a particle filter algorithm that considers a cost model when generating
samples. The key idea is that particles are generated in proportion to their posterior likelihood and to the risk that arises relative to a control goal. An MDP algorithm was developed
that computes the risk function as a differential cumulative cost. Experimental results in
two robotic domains show the superior performance of our new approach.
An alternative approach for solving the problem addressed in this paper would be to analyze
the estimation process as a partially observable Markov decision process (POMDP) [6].
Bounds on the performance loss due to the approximate nature of particle filters can be
found in [9]. Pursuing the problem of risk-sensitive particle generation within the POMDP
framework might be a promising future line of research.
Acknowledgment
The authors thank Dieter Fox and Wolfram Burgard, who generously provided some of the localization software on which this research is built. Financial support by DARPA (TMR,
MARS, CoABS and MICA programs) and NSF (ITR, Robotics, and CAREER programs)
is gratefully acknowledged.
References
[1] X. Boyen and D. Koller. Tractable inference for complex stochastic processes. In Proc. UAI-98.
[2] F. Dellaert, D. Fox, W. Burgard, and S. Thrun. Monte Carlo localization for mobile robots. In Proc. ICRA-99.
[3] A. Doucet, J.F.G. de Freitas, and N.J. Gordon, editors. Sequential Monte Carlo Methods In
Practice. Springer, 2001.
[4] S. Engelson. Passive Map Learning and Visual Place Recognition. PhD thesis, Computer
Science Department, Yale University, 1994.
[5] M. Isard and A. Blake. CONDENSATION: conditional density propagation for visual tracking.
International Journal of Computer Vision, 29(1):5?28, 1998.
[6] L.P. Kaelbling, M.L. Littman, and A.R. Cassandra. Planning and acting in partially observable
stochastic domains. Artificial Intelligence, 101(1-2):99?134, 1998.
[7] J. Liu and R. Chen. Sequential Monte Carlo methods for dynamic systems. Journal of the American Statistical Association, 93:1032-1044, 1998.
[8] M. Pitt and N. Shephard. Filtering via simulation: Auxiliary particle filters. Journal of the American Statistical Association, 94:590-599, 1999.
[9] P. Poupart, L.E. Ortiz, and C. Boutilier. Value-directed sampling methods for monitoring
POMDPs. In Proc. UAI-2001.
[10] N. Roy and S. Thrun. Coastal navigation with mobile robot. In Proc. NIPS-99.
[11] D.B. Rubin. Using the SIR algorithm to simulate posterior distributions. In Bayesian Statistics
3. Oxford Univ. Press, 1988.
[12] M.A. Tanner. Tools for Statistical Inference. Springer, 1996.
1,038 | 1,949 | A Parallel Mixture of SVMs for Very Large Scale Problems
Ronan Collobert*
Université de Montréal, DIRO
CP 6128, Succ. Centre-Ville
Montréal, Québec, Canada
collober@iro.umontreal.ca

Samy Bengio
IDIAP
CP 592, rue du Simplon 4
1920 Martigny, Switzerland
bengio@idiap.ch

Yoshua Bengio
Université de Montréal, DIRO
CP 6128, Succ. Centre-Ville
Montréal, Québec, Canada
bengioy@iro.umontreal.ca
Abstract
Support Vector Machines (SVMs) are currently the state-of-the-art models for
many classification problems but they suffer from the complexity of their training algorithm which is at least quadratic with respect to the number of examples.
Hence, it is hopeless to try to solve real-life problems having more than a few
hundreds of thousands examples with SVMs. The present paper proposes a
new mixture of SVMs that can be easily implemented in parallel and where
each SVM is trained on a small subset of the whole dataset. Experiments on a
large benchmark dataset (Forest) as well as a difficult speech database , yielded
significant time improvement (time complexity appears empirically to locally
grow linearly with the number of examples) . In addition, and that is a surprise,
a significant improvement in generalization was observed on Forest.
1
Introduction
Recently a lot of work has been done around Support Vector Machines [9], mainly due to
their impressive generalization performances on classification problems when compared to other
algorithms such as artificial neural networks [3, 6]. However, SVMs require to solve a quadratic
optimization problem which needs resources that are at least quadratic in the number of training
examples, and it is thus hopeless to try solving problems having millions of examples using
classical SVMs.
In order to overcome this drawback, we propose in this paper to use a mixture of several SVMs,
each of them trained only on a part of the dataset. The idea of an SVM mixture is not new,
although previous attempts such as Kwok's paper on Support Vector Mixtures [5] did not train
the SVMs on part of the dataset but on the whole dataset and hence could not overcome the
*Part of this work has been done while Ronan Collobert was at IDIAP, CP 592, rue du Simplon 4,
1920 Martigny, Switzerland.
time complexity problem for large datasets. We propose here a simple method to train such
a mixture, and we will show that in practice this method is much faster than training only
one SVM, and leads to results that are at least as good as one SVM. We conjecture that the
training time complexity of the proposed approach with respect to the number of examples is
sub-quadratic for large data sets. Moreover this mixture can be easily parallelized, which could
improve again significantly the training time.
The organization of the paper goes as follows: in the next section, we briefly introduce the SVM
model for classification. In section 3 we present our mixture of SVMs, followed in section 4 by
some comparisons to related models. In section 5 we show some experimental results, first on a
toy dataset, then on two large real-life datasets. A short conclusion then follows .
2
Introduction to Support Vector Machines
Support Vector Machines (SVMs) [9] have been applied to many classification problems, generally yielding good performance compared to other algorithms. The decision function is of the
form

f(x) = Σ_{i=1}^{N} α_i y_i K(x, x_i) + b   (1)
where x ∈ ℝ^d is the d-dimensional input vector of a test example, y ∈ {−1, 1} is a class label, x_i
is the input vector for the ith training example, y_i is its associated class label, N is the number
of training examples, K(x, x_i) is a positive definite kernel function, and α = {α_1, ..., α_N} and
b are the parameters of the model. Training an SVM consists in finding α that minimizes the
objective function
Q(α) = −Σ_{i=1}^{N} α_i + (1/2) Σ_{i=1}^{N} Σ_{j=1}^{N} α_i α_j y_i y_j K(x_i, x_j)   (2)
subject to the constraints

Σ_{i=1}^{N} α_i y_i = 0   (3)

and

0 ≤ α_i ≤ C   ∀i.   (4)
The kernel K(x, x_i) can have different forms, such as the Radial Basis Function (RBF):

K(x_i, x_j) = exp( −‖x_i − x_j‖² / σ² )   (5)

with parameter σ.
Therefore, to train an SVM, we need to solve a quadratic optimization problem, where the
number of parameters is N. This makes the use of SVMs for large datasets difficult: computing
K(x_i, x_j) for every training pair would require O(N²) computation, and solving may take up
to O(N³). Note however that current state-of-the-art algorithms appear to have training time
complexity scaling much closer to O(N²) than O(N³) [2].
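To make the quadratic barrier concrete, here is a minimal sketch (ours, not the SVMTorch code used in the paper) of the Gram matrix computation for the RBF kernel of Eq. (5); both its memory and its time grow as O(N²):

    import numpy as np

    def rbf_gram_matrix(X, sigma):
        # K[i, j] = exp(-||x_i - x_j||^2 / sigma^2), as in Eq. (5).
        # The N x N matrix is what makes plain SVM training impractical
        # when N reaches the millions.
        sq_norms = np.sum(X ** 2, axis=1)
        sq_dists = sq_norms[:, None] + sq_norms[None, :] - 2.0 * (X @ X.T)
        return np.exp(-np.maximum(sq_dists, 0.0) / sigma ** 2)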
3
A New Conditional Mixture of SVMs
In this section we introduce a new type of mixture of SVMs. The output of the mixture for an
input vector x is computed as follows:

f(x) = h( Σ_{m=1}^{M} w_m(x) s_m(x) )   (6)
where M is the number of experts in the mixture, s_m(x) is the output of the mth expert
given input x, wm(x) is the weight for the mth expert given by a "gater" module taking also
x in input, and h is a transfer function which could be for example the hyperbolic tangent for
classification tasks. Here each expert is an SVM, and we took a neural network for the gater in
our experiments. In the proposed model, the gater is trained to minimize the cost function
C = Σ_{i=1}^{N} [f(x_i) − y_i]².   (7)
To train this model, we propose a very simple algorithm:
1. Divide the training set into M random subsets of size near N/M.
2. Train each expert separately over one of these subsets.
3. Keeping the experts fixed, train the gater to minimize (7) on the whole training set.
4. Reconstruct M subsets: for each example (Xi,Yi),
? sort the experts in descending order according to the values Wm(Xi),
? assign the example to the first expert in the list which has less than (N/M + c)
examples*, in order to ensure a balance between the experts.
5. If a termination criterion is not fulfilled (such as a given number of iterations or a
validation error going up), goto step 2.
Note that step 2 of this algorithm can be easily implemented in parallel as each expert can
be trained separately on a different computer. Note also that step 3 can be an approximate
minimization (as usually done when training neural networks).
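The loop above can be summarised in a few lines of Python. This is a sketch of steps 1-5, not the authors' implementation; `train_svm` and `train_gater` are hypothetical stand-ins for an SVM trainer and the neural-network gater, the latter assumed to return an object whose `weights(x)` method gives one gating weight per expert.

    import numpy as np

    def train_svm_mixture(X, y, M, n_outer, train_svm, train_gater, c=1):
        N = len(X)
        rng = np.random.default_rng(0)
        assign = rng.integers(0, M, size=N)          # step 1: random subsets
        cap = N // M + c                             # at most N/M + c per expert
        experts, gater = None, None
        for _ in range(n_outer):                     # step 5: outer iterations
            experts = [train_svm(X[assign == m], y[assign == m])
                       for m in range(M)]            # step 2: one SVM per subset
            gater = train_gater(experts, X, y)       # step 3: minimize Eq. (7)
            counts = np.zeros(M, dtype=int)          # step 4: reassignment
            for i in range(N):
                for m in np.argsort(-gater.weights(X[i])):
                    if counts[m] < cap:              # keep the subsets balanced
                        assign[i] = m
                        counts[m] += 1
                        break
        return experts, gater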
4
Other Mixtures of SVMs
The idea of mixture models is quite old and has given rise to very popular algorithms, such
as the well-known Mixture of Experts [4] where the cost function is similar to equation (7) but
where the gater and the experts are trained, using gradient descent or EM, on the whole dataset
(and not subsets) and their parameters are trained simultaneously. Hence such an algorithm
is quite demanding in terms of resources when the dataset is large, if training time scales like
O(NP) with p > 1.
In the more recent Support Vector Mixture model [5], the author shows how to replace the
experts (typically neural networks) by SVMs and gives a learning algorithm for this model.
Once again the resulting mixture is trained jointly on the whole dataset , and hence does not
solve the quadratic barrier when the dataset is large.
In another divide-and-conquer approach [7], the authors propose to first divide the training set
using an unsupervised algorithm to cluster the data (typically a mixture of Gaussians), then
train an expert (such as an SVM) on each subset of the data corresponding to a cluster, and
finally recombine the outputs of the experts. Here, the algorithm does indeed train separately the
experts on small datasets, like the present algorithm, but there is no notion of a loop reassigning
the examples to experts according to the prediction made by the gater of how well each expert
performs on each example. Our experiments suggest that this element is essential to the success
of the algorithm.
Finally, the Bayesian Committee Machine [8] is a technique to partition the data into several
subsets, train SVMs on the individual subsets and then use a specific combination scheme based
on the covariance of the test data to combine the predictions. This method scales linearly in the
*where c is a small positive constant. In the experiments, c = 1.
number of training data, but is in fact a transductive method, as it cannot operate on a single
test example. Like in the previous case, this algorithm assigns the examples randomly to the
experts (however the Bayesian framework would in principle allow to find better assignments).
Regarding our proposed mixture of SVMs, if the number of experts grows with the number
of examples, and the number of outer loop iterations is a constant, then the total training
time of the experts scales linearly with the number of examples. Indeed, given N the total
number of examples, choose the number of experts M such that the ratio N/M is a constant r.
Then, if k is the number of outer loop iterations, and if the training time for an SVM with r
examples is O(r^β) (empirically β is slightly above 2), the total training time of the experts is
O(k r^β M) = O(k r^{β−1} N), where k, r and β are constants, which gives a total training time
of O(N). In particular for β = 2 that gives O(krN). The actual total training time should
however also include k times the training time of the gater, which may potentially grow more
rapidly than O(N). However, it did not appear to be the case in our experiments, thus yielding
apparent linear training time. Future work will focus on methods to reduce the gater training
time and guarantee linear training time per outer loop iteration.
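As a back-of-the-envelope check of this argument (our own illustrative cost model, not a measurement from the paper), the expert-training cost of step 2 can be written as a function of N:

    def expert_training_cost(N, M, k, beta=2.0):
        # k outer iterations, M experts, each trained on r = N / M examples
        # at cost r**beta: total k * M * r**beta = k * r**(beta - 1) * N,
        # i.e. linear in N once the subset size r is held constant.
        r = N / M
        return k * M * r ** beta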
5
Experiments
In this section, we present three sets of experiments comparing the new mixture of SVMs to
other machine learning algorithms. Note that all the SVMs in these experiments have been
trained using SVMTorch [2].
5.1
A Toy Problem
In the first series of experiments, we first tested the mixture on an artificial toy problem for
which we generated 10,000 training examples and 10,000 test examples. The problem had two
non-linearly separable classes and had two input dimensions. On Figure 1 we show the decision
surfaces obtained first by a linear SVM, then by a Gaussian SVM, and finally by the proposed
mixture of SVMs. Moreover, in the latter, the gater was a simple linear function and there were
two linear SVMs in the mixture†. This artificial problem thus shows clearly that the algorithm
seems to work, and is able to combine, even linearly, very simple models in order to produce a
non-linear decision surface.
5.2
A Large-Scale Realistic Problem: Forest
For a more realistic problem, we did a series of experiments on part of the UCI Forest dataset‡.
We modified the 7-class classification problem into a binary classification problem where the
goal was to separate class 2 from the other 6 classes. Each example was described by 54 input
features, each normalized by dividing by the maximum found on the training set. The dataset
had more than 500,000 examples and this allowed us to prepare a series of experiments as follows :
? We kept a separate test set of 50,000 examples to compare the best mixture of SVMs
to other learning algorithms.
? We used a validation set of 10,000 examples to select the best mixture of SVMs , varying
the number of experts and the number of hidden units in the gater.
? We trained our models on different training sets, using from 100,000 to 400,000 examples.
? The mixtures had from 10 to 50 expert SVMs with Gaussian kernel and the gater was
an MLP with between 25 and 500 hidden units.
†Note that the transfer function h(·) was still a tanh(·).
‡The Forest dataset is available on the UCI website at the following address:
ftp://ftp.ics.uci.edu/pub/machine-learning-databases/covtype/covtype.info.
(a) Linear SVM
(b) Gaussian SVM
(c) Mixture of two linear
SVMs
Figure 1: Comparison of the decision surfaces obtained by (a) a linear SVM, (b) a Gaussian
SVM, and (c) a linear mixture of two linear SVMs, on a two-dimensional classification toy
problem.
Note that since the number of examples was quite large, we selected the internal training parameters such as the σ of the Gaussian kernel of the SVMs or the learning rate of the gater
using a held-out portion of the training set. We compared our models to
? a single MLP, where the number of hidden units was selected by cross-validation between
25 and 250 units,
? a single SVM, where the parameter of the kernel was also selected by cross-validation,
? a mixture of SVMs where the gater was replaced by a constant vector, assigning the
same weight value to every expert.
Table 1 gives the results of a first series of experiments with a fixed training set of 100,000
examples. To select among the variants of the gated SVM mixture we considered performance
over the validation set as well as training time. All the SVMs used σ = 1.7. The selected model
had 50 experts and a gater with 150 hidden units. A model with 500 hidden units would have
given a performance of 8.1 % over the test set but would have taken 621 minutes on one machine
(and 388 minutes on 50 machines).
                       Train Error (%)   Test Error (%)   Time (min, 1 cpu)   Time (min, 50 cpu)
one MLP                17.56             18.15            12                  -
one SVM                16.03             16.76            3231                -
uniform SVM mixture    19.69             20.31            85                  2
gated SVM mixture       5.91              9.28            237                 73
Table 1: Comparison of performance between an MLP (100 hidden units), a single SVM, a
uniform SVM mixture where the gater always outputs the same value for each expert, and finally
a mixture of SVMs as proposed in this paper.
As it can be seen, the gated SVM outperformed all models in terms of training and test error.
Note that the training error of the single SVM is high because its hyper-parameters were selected
to minimize error on the validation set (other values could yield to much lower training error but
larger test error). It was also much faster, even on one machine, than the SVM and since the
mixture could easily be parallelized (each expert can be trained separately) , we also reported
the time it took to train on 50 machines. In a first attempt to understand these results, one
can at least say that the power of the model does not lie only in the MLP gater, since a single
MLP was pretty bad, it is neither only because we used SVMs, since a single SVM was not
as good as the gated mixture, and it was not only because we divided the problem into many
sub-problems since the uniform mixture also performed badly. It seems to be a combination of
all these elements.
We also did a series of experiments in order to see the influence of the number of hidden units
of the gater as well as the number of experts in the mixture. Figure 2 shows the validation error
of different mixtures of SVMs, where the number of hidden units varied from 25 to 500 and the
number of experts varied from 10 to 50. There is a clear performance improvement when the
number of hidden units is increased, while the improvement with additional experts exists but
is not as strong. Note however that the training time increases also rapidly with the number of
hidden units while it slightly decreases with the number of experts if one uses one computer per
expert.
[3-D plot: validation error as a function of the number of hidden units of the gater (25 to 500) and the number of experts (10 to 50); axis data omitted.]
Figure 2: Comparison of the validation error of different mixtures of SVMs with various number
of hidden units and experts.
In order to find how the algorithm scaled with respect to the number of examples, we then
compared the same mixture of experts (50 experts, 150 hidden units in the gater) on different
training set sizes. Figure 3 shows the training time of the mixture of SVMs trained on training
sets of sizes from 100,000 to 400,000. It seems that, at least in this range and for this particular
dataset, the mixture of SVMs scales linearly with respect to the number of examples, and not
quadratically as a classical SVM. It is interesting to see for instance that the mixture of SVMs
was able to solve a problem of 400,000 examples in less than 7 hours (on 50 computers) while it
would have taken more than one month to solve the same problem with a single SVM.
Finally, figure 4 shows the evolution of the training and validation errors of a mixture of 50
SVMs gated by an MLP with 150 hidden units, during 5 iterations of the algorithm. This
should convince that the loop of the algorithm is essential in order to obtain good performance.
It is also clear that the empirical convergence of the outer loop is extremely rapid.
5.3
Verification on Another Large-Scale Problem
In order to verify that the results obtained on Forest were replicable on other large-scale problems, we tested the SVM mixture on a speech task. We used the Numbers95 dataset [1] and
[Plots for Figures 3 and 4; axis data omitted. Left: training time (minutes) vs. number of training examples (×10^5). Right: train and validation error (%) vs. number of training iterations.]
Figure 3: Comparison of the training time
of the same mixture of SVMs (50 experts,
150 hidden units in the gater) trained on
different training set sizes, from 100,000 to
400,000.
Figure 4: Comparison of the training and
validation errors of the mixture of SVMs as
a function of the number of training iterations.
turned it into a binary classification problem where the task was to separate silence frames from
non-silence frames. The total number of frames was around 540,000. The training set
contained 100,000 randomly chosen frames out of the first 400,000 frames. The disjoint validation set contained 10,000 randomly chosen frames out of the first 400,000 frames also. Finally,
the test set contained 50,000 randomly chosen frames out of the last 140,000 frames. Note that
the validation set was used here to select the number of experts in the mixture, the number of
hidden units in the gater, and σ. Each frame was parameterized using standard methods used
in speech recognition (J-RASTA coefficients, with first and second temporal derivatives) and was
thus described by 45 coefficients, but we used in fact an input window of three frames, yielding
135 input features per example.
Table 2 shows a comparison between a single SVM and a mixture of SVMs on this dataset. The
number of experts in the mixture was set to 50, the number of hidden units of the gater was set
to 50, and the σ of the SVMs was set to 3.0. As can be seen, the mixture of SVMs was again
many times faster than the single SVM (even on 1 cpu only) but yielded similar generalization
performance.
                       Train Error (%)   Test Error (%)   Time (min, 1 cpu)   Time (min, 50 cpu)
one SVM                0.98              7.57             6787                -
gated SVM mixture      4.41              7.32             851                 65
Table 2: Comparison of performance between a single SVM and a mixture of SVMs on the
speech dataset.
6
Conclusion
In this paper we have presented a new algorithm to train a mixture of SVMs that gave very good
results compared to classical SVMs either in terms of training time or generalization performance
on two large scale difficult databases. Moreover, the algorithm appears to scale linearly with
the number of examples, at least between 100,000 and 400,000 examples.
These results are extremely encouraging and suggest that the proposed method could allow
training SVM-like models for very large multi-million data sets in a reasonable time. If training
of the neural network gater with stochastic gradient takes time that grows much less than
quadratically, as we conjecture it to be the case for very large data sets (to reach a "good enough"
solution), then the whole method is clearly sub-quadratic in training time with respect to the
number of training examples. Future work will address several questions: how to guarantee
linear training time for the gater as well as for the experts? can better results be obtained by
tuning the hyper-parameters of each expert separately? Does the approach work well for other
types of experts?
Acknowledgments
RC would like to thank the Swiss NSF for financial support (project FN2100-061234.00). YB
would like to thank the NSERC funding agency and NCM 2 network for support.
References
[1] RA. Cole, M. Noel, T. Lander, and T. Durham. New telephone speech corpora at CSLU.
Proceedings of the European Conference on Speech Communication and Technology, EUROSPEECH, 1:821- 824, 1995.
[2] R Collobert and S. Bengio. SVMTorch: Support vector machines for large-scale regression
problems. Journal of Machine Learning Research, 1:143- 160, 200l.
[3] C. Cortes and V. Vapnik. Support vector networks. Machine Learning, 20:273- 297, 1995.
[4] Robert A. Jacobs, Michael I. Jordan, Steven J. Nowlan, and Geoffrey E. Hinton. Adaptive
mixtures of local experts. Neural Computation, 3(1):79- 87, 1991.
[5] J. T. Kwok. Support vector mixture for classification and regression problems. In Proceedings
of the International Conference on Pattern Recognition (ICPR) , pages 255-258, Brisbane,
Queensland, Australia, 1998.
[6] E. Osuna, R Freund, and F. Girosi. Training support vector machines: an application to
face detection. In IEEE conference on Computer Vision and Pattern Recognition, pages
130- 136, San Juan, Puerto Rico, 1997.
[7] A. Rida, A. Labbi, and C. Pellegrini. Local experts combination trough density decomposition. In International Workshop on AI and Statistics (Uncertainty'99). Morgan Kaufmann,
1999.
[8] V. Tresp. A bayesian committee machine. Neural Computation, 12:2719-2741,2000.
[9] V. N. Vapnik. The nature of statistical learning theory. Springer, second edition, 1995.
1,039 | 195 |
Training Stochastic Model Recognition
Algorithms as Networks can lead to Maximum
Mutual Information Estimation of Parameters
John S. Bridle
Royal Signals and Radar Establishment
Great Malvern
Worcs.
UK
WR14 3PS
ABSTRACT
One of the attractions of neural network approaches to pattern
recognition is the use of a discrimination-based training method.
We show that once we have modified the output layer of a multilayer perceptron to provide mathematically correct probability distributions, and replaced the usual squared error criterion with a
probability-based score, the result is equivalent to Maximum Mutual Information training, which has been used successfully to improve the performance of hidden Markov models for speech recognition. If the network is specially constructed to perform the recognition computations of a given kind of stochastic model based classifier then we obtain a method for discrimination-based training of
the parameters of the models. Examples include an HMM-based
word discriminator, which we call an 'Alphanet'.
1
INTRODUCTION
It has often been suggested that one of the attractions of an adaptive neural network
(NN) approach to pattern recognition is the availability of discrimination-based
training (e.g. in Multilayer Perceptrons (MLPs) using Back-Propagation). Among
the disadvantages of NN approaches are the lack of theory about what can be
computed with any particular structure, what can be learned, how to choose a
network architecture for a given task, and how to deal with data (such as speech) in
which an underlying sequential structure is of the essence. There have been attempts
to build internal dynamics into neural networks, using recurrent connections, so that
they might deal with sequences and temporal patterns [1, 2], but there is a lack of
relevant theory to inform the choice of network type.
Hidden Markov models (HMMs) are the basis of virtually all modern automatic
speech recognition systems. They can be seen as an extension of the parametric
statistical approach to pattern recognition, to deal (in a simple but principled way)
with temporal patterning. Like most parametric models, HMMs are usually trained
using within-class maximum-likelihood (ML) methods, and an EM algorithm due to
Baum and Welch is particularly attractive (see for instance [3]). However, recently
some success has been demonstrated using discrimination-based training methods,
such as the so-called Maximum Mutual Information criterion [4] and Corrective
Training [5].
This paper addresses two important questions:
? How can we design Neural Network architectures with at least the desirable
properties of methods based on stochastic models (such as hidden Markov
models)?
? What is the relationship between the inherently discriminative neural network
training and the analogous MMI training of stochastic models?
We address the first question in two steps. Firstly, to make sure that the outputs
of our network have the simple mathematical properties of conditional probability
distributions over class labels we recommend a generalisation of the logistic nonlinearity; this enables us (but does not require us) to replace the usual squared error
criterion with a more appropriate one, based on relative entropy. Secondly, we
also have the option of designing networks which exactly implement the recognition
computations of a given stochastic model method. (The resulting 'network' may be
rather odd, and not very 'neural', but this is engineering, not biology.) As a contribution to the investigation of the second question, we point out that optimising
the relative entropy criterion is exactly equivalent to performing Maximum Mutual
Information Estimation.
By way of illustration we describe three 'networks' which implement stochastic
model classifiers, and show how discrimination training can help.
2
TRAINABLE NETWORKS AS PARAMETERISED CONDITIONAL DISTRIBUTION FUNCTIONS
We consider a trainable network, when used for pattern classification, as a vector
function Q(x, θ) from an input vector x to a set of indicators of class membership,
{Q_j}, j = 1, ..., N. The parameters θ modify the transfer function. In a multilayer perceptron, for instance, the parameters would be values of weights. Typically,
we have a training set of pairs (x_t, c_t), t = 1, ..., T, of inputs and associated true
class labels, and we have to find a value for θ which specialises the function so that
it is consistent with the training set. A common procedure is to minimise E(θ), the
sum of the squares of the differences between the network outputs and true class
indicators, or targets:

E(θ) = Σ_{t=1}^{T} Σ_{j=1}^{N} ( Q_j(x_t, θ) − δ_{j,c_t} )²,

where δ_{j,c} = 1 if j = c, otherwise 0. E and Q will be written without the θ argument
where the meaning is clear, and we may drop the t subscript.
It is well known that the value of F(x) which minimises the expected value of
(F(x) − y)² is the expected value of y given x. The expected value of δ_{j,c_t} is
P(C = j | X = x_t), the probability that the class associated with x_t is the jth class.
From now on we shall assume that the desired output of a classifier network is this
conditional probability distribution over classes, given the input.
The outputs must satisfy certain simple constraints if they are to be interpretable as
a probability distribution. For any input, the outputs must all be positive and they
must sum to unity. The use of logistic nonlinearities at the outputs of the network
ensures positivity, and also ensures that each output is less than unity. These
constraints are appropriate for outputs that are to be interpreted as probabilities
of Boolean events, but are not sufficient for 1-from-N classifiers.
Given a set of unconstrained values, Vj(:e), we can ensure both conditions by using
a Normalised Exponential transformation:
Q_j(x) = e^{V_j(x)} / Σ_k e^{V_k(x)}.
This transformation can be considered a multi-input generalisation of the logistic,
operating on the whole output layer. It preserves the rank order of its input values,
and is a differentiable generalisation of the 'winner-take-all' operation of picking the
maximum value. For this reason we like to refer to it as soft max. Like the logistic,
it has a simple implementation in transistor circuits [6].
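In code, the transformation is a one-liner plus normalisation (a sketch; the subtraction of the maximum is a standard numerical safeguard and leaves the outputs unchanged):

    import numpy as np

    def softmax(V):
        # Q_j = exp(V_j) / sum_k exp(V_k): positive outputs summing to one,
        # so they can be read as a conditional distribution over the classes.
        V = np.asarray(V, dtype=float)
        e = np.exp(V - V.max())
        return e / e.sum()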
If the network is such that we can be sure the values we have are all positive, it may
be more appropriate just to normalise them. In particular, if we can treat them as
likelihoods of the data given the possible classes, L_j(x) = P(X = x | C = j), then
normalisation produces the required conditional distribution (assuming equal prior
probabilities for the classes).
3
RELATIVE ENTROPY SCORING FOR CLASSIFIERS
In this section we introduce an information-theoretic criterion for training 1-from-N classifier networks, to replace the squared error criterion, both for its intrinsic
interest and because of the link to discriminative training of stochastic models.
The RE-based score we use is J = −Σ_{t=1}^{T} Σ_{j=1}^{N} P_{jt} log Q_j(x_t), where P_{jt} is the
probability of class j associated with input x_t in the training set. If as usual the
training set specifies only one true class, c_t, for each x_t, then P_{j,t} = δ_{j,c_t} and

J = −Σ_{t=1}^{T} log Q_{c_t}(x_t),

the sum of the logs of the outputs for the correct classes.
J can be derived from the Relative Entropy of distribution Q with respect to the
true conditional distribution P, averaged over the input distribution:

J = ∫ dx P(X = x) G(Q | P),   where   G(Q | P) = −Σ_c P(c | x) log [ Q_c(x) / P(c | x) ].
G is variously known as relative information, cross entropy, asymmetric divergence,
directed divergence, I-divergence, and Kullback-Leibler number. RE scoring is the
basis for the Boltzmann Machine learning algorithm [7] and has also been proposed
and used for adaptive networks with continuous-valued outputs [8, 9, 10, 11], but
usually in the form appropriate to separate logistic units and independent Boolean
targets. An exception is [12].
There is another way of thinking about this 'log-of correct-output' score. Assume
that the way we would use the outputs of the network is that, rather than choosing
the class with the largest output, we choose randomly, picking from the distribution
specified by the outputs. (Pick class j with probability Qj.) The probability of
choosing the class c_t for training sample x_t is simply Q_{c_t}(x_t). The probability of
choosing the correct class labels for all the training set is ∏_{t=1}^{T} Q_{c_t}(x_t). We simply
seek to maximise this probability, or what is equivalent, to minimise minus its log:

J = −Σ_{t=1}^{T} log Q_{c_t}(x_t).
In order to compute the partial derivatives of J with respect to the parameters of
the network, we first need ∂J/∂Q_j = −P_{jt}/Q_j. The details of the back-propagation
depend on the form of the network, but if the final non-linearity is a normalised
exponential (softmax),

Q_j(x) = exp(V_j(x)) / Σ_k exp(V_k(x)),

then [6]

∂J_t/∂V_j = Q_j(x_t) − δ_{j,c_t}.
We see that the derivative before the output nonlinearity is the difference between
the corresponding output and a one-from-N target. We conclude that softmax
output stages and 1-from-N RE scoring are natural partners.
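Both facts fit in a short function (a sketch in modern array notation, not Bridle's original code): the score is the summed negative log of the correct-class outputs, and the gradient at the softmax inputs is 'output minus one-hot target'.

    import numpy as np

    def re_score_and_grad(V, targets):
        # V: (T, N) pre-softmax values; targets: (T,) true class indices.
        # Returns J = -sum_t log Q_{c_t}(x_t) and dJ/dV, whose row t is
        # Q(x_t) - delta_{j, c_t}.
        V = V - V.max(axis=1, keepdims=True)
        Q = np.exp(V)
        Q /= Q.sum(axis=1, keepdims=True)
        rows = np.arange(len(targets))
        J = -np.sum(np.log(Q[rows, targets]))
        grad = Q.copy()
        grad[rows, targets] -= 1.0
        return J, grad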
4
DISCRIMINATIVE TRAINING
In stochastic model (probability-density) based pattern classification we usually
compute likelihoods of the data given models for each class, P(x | c), and choose
the class with highest likelihood. This is justified if we assume equal priors P(c)
(this can be generalised) and see that the denominator P(x) = Σ_c P(x | c)P(c) is
the same for all classes.

It is also usual to train such classifiers by maximising the data likelihood given
the correct classes. Maximum Likelihood (ML) training is appropriate if we are
choosing from a family of pdfs which includes the correct one. In most real-life
applications of pattern classification we do not have knowledge of the form of the
data distributions, although we may have some useful ideas. In that case ML may
be a rather bad approach to pdf estimation for the purpose of pattern classification,
because what matters is the relative densities.

An alternative is to optimise a measure of success in pattern classification, and this
can make a big difference to performance, particularly when the assumptions about
the form of the class pdfs are badly wrong.

To make the likelihoods produced by a SM classifier look like NN outputs we can
simply normalise them:

Q_j(x) = L_j(x) / Σ_k L_k(x).

Then we can use Neural Network optimisation methods to adjust the parameters.

For discrimination training of sets of stochastic models, Bahl et al. suggest maximising the Mutual Information, I, between the training observations and the choice
of the corresponding correct class. (The mutual information between two random
variables is a sum, weighted by the joint probability, of the MI of the joint events:
I(X, Y) = Σ_{(x,y)} P(X=x, Y=y) log [ P(X=x, Y=y) / (P(X=x) P(Y=y)) ].) Here

I(X, C) = Σ_t log [ P(C=c_t, X=x_t) / (P(C=c_t) P(X=x_t)) ] = Σ_t log [ P(C=c_t | X=x_t) / P(C=c_t) ].

P(C=c_t | X=x_t) should be read as the probability that we choose the correct class
for the tth training example. If we are choosing classes according to the conditional
distribution computed using parameters θ, then P(C=c_t | X=x_t) = Q_{c_t}(x_t, θ), and

I(X, C) = Σ_t log Q_{c_t}(x_t, θ) − Σ_t log P(C=c_t).

If the second term involving the priors is fixed, we are left with maximising
Σ_t log Q_{c_t}(x_t, θ) = −J. So minimising our J criterion is also maximising Bahl's
mutual information. (Also see [13].)
5
STOCHASTIC MODEL CLASSIFIERS AS NETWORKS
5.1
EXAMPLE ONE: A PAIR OF MULTIVARIATE GAUSSIANS
The conditional distribution for a pair of multivariate Gaussian densities with the
same arbitrary covariance matrix is a logistic function of a weighted sum of the
input coordinates (plus a constant). Therefore, even if we make such incorrect
assumptions as equal priors and spherical unit covariances, it is still possible to find
values for the parameters of the model (the positions of the means of the assumed
distributions) for which the form of the conditional distribution is correct. (The
means may be far from the means of the true distributions and from the data
means.) Of course in this case we have the alternative of using a weighted-sum
logistic unit to compute the conditional probability: the parameters are then the
weights.
5.2
EXAMPLE TWO: A MULTI-CLASS GAUSSIAN CLASSIFIER
Consider a model in which the distributions for each class are multi-variate Gaussian, with equal isotropic unit variances, and different means, {m_j}. The probability distribution over class labels, given an observation x, is P(c = j | x) =
e^{V_j} / Σ_k e^{V_k}, where V_j = −‖x − m_j‖². This can be interpreted as a one-layer
feed-forward non-linear network. The usual weighted sums are replaced by squared
Euclidean distances, and the usual logistic output non-linearities are replaced by a
normalised exponential.
For a particular two-dimensional 10-class problem, derived from Peterson and Barney's formant data, we have demonstrated [6] that training such a network can
cause the m_j to move from their "natural" positions at the data means (the in-class
maximum likelihood estimates), and this can improve classification performance on
unseen data (from 68% correct to 78%).
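A sketch of this one-layer 'network' (ours; it assumes, as in the text, equal priors and unit isotropic variances, with the means m as the only trainable parameters, initialised at the class data means and then moved by training on J):

    import numpy as np

    def gaussian_class_posteriors(x, m):
        # m: (n_classes, d) class means. V_j = -||x - m_j||^2 replaces the
        # usual weighted sum; a normalised exponential gives P(c = j | x).
        V = -np.sum((m - x) ** 2, axis=1)
        e = np.exp(V - V.max())
        return e / e.sum()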
5.3
EXAMPLE THREE: ALPHANETS
Consider a set of hidden Markov models (HMMs), one for each word, each parameterised by a set of state transition probabilities, {a_{ij}^k}, and observation likelihood
functions {b_j^k(x)}, where a_{ij}^k is the probability that in model k state i will be followed by state j, and b_j^k(x) is the likelihood of model k emitting observation x from
state j. For simplicity we insist that the end of the word pattern corresponds to
state N of a model.
The likelihood, L^k(x_1^M), of model k generating a given sequence x_1^M = x_1, ..., x_M
is a sum, over all state sequences s_1, ..., s_M, of the joint likelihood of that state
sequence and the data:

L^k(x_1^M) = Σ_{s_1...s_M} ∏_{t=2}^{M} a_{s_{t−1} s_t}^k b_{s_t}^k(x_t),   with s_M = N.
This can be computed efficiently via the forward recursion [3]

α_j^k(t) = b_j^k(x_t) Σ_i α_i^k(t−1) a_{ij}^k,   giving   L^k(x_1^M) = α_N^k(M),

which we can think of as a recurrent network. (Note that t is used as a time index
here.) If the observation sequence x_1^M could only have come from one of a set of known,
equally likely models, then the posterior probability that it was from model k is

P(r = k | x_1^M) = Q_k(x_1^M) = L^k(x_1^M) / Σ_r L^r(x_1^M).
These numbers are the output of our special "recurrent neural network" for isolated
word discrimination, which we call an "Alphanet" [14]. Backpropagation of partial
derivatives of the J score has the form of the backward recurrence used in the
Baum-Welch algorithm, but they include discriminative terms, and we obtain the
gradient of the relative entropy/mutual information.
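A minimal sketch of the Alphanet forward pass (our notation; pi is the initial state distribution, and a practical implementation would rescale the alphas to avoid underflow, which is omitted here):

    import numpy as np

    def word_likelihood(b, A, pi):
        # Forward recursion: alpha_j(t) = b_j(x_t) * sum_i alpha_i(t-1) * a_ij.
        # b: (M, S) observation likelihoods, A: (S, S) transitions,
        # pi: (S,) initial distribution. The word likelihood is alpha_N(M),
        # with the word end identified with the last state.
        alpha = pi * b[0]
        for t in range(1, len(b)):
            alpha = b[t] * (alpha @ A)
        return alpha[-1]

    def alphanet_posteriors(word_models):
        # Q_k = L_k / sum_r L_r, assuming equally likely word models.
        L = np.array([word_likelihood(b, A, pi) for (b, A, pi) in word_models])
        return L / L.sum()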
6
CONCLUSIONS
Discrimination-based training is different from within-class parameter estimation,
and it may be useful. (Also see [15].) Discrimination-based training for stochastic
models and for networks are not distinct, and in some cases can be mathematically
identical.
The notion of specially constructed 'network' architectures which implement stochastic model recognition algorithms provides a way to construct fertile hybrids. For
instance, a Gaussian classifier (or an HMM classifier) can be preceded by a nonlinear transformation (perhaps based on semilinear logistic units) and all the parameters
of the system adjusted together. This seems a useful approach to automating the
discovery of 'feature detectors'.
© British Crown Copyright 1990
References
[1] R P Lippmann. Review of neural networks for speech recognition. Neural
Computation, 1(1), 1989.
[2] R L Watrous. Connectionist speech recognition using the temporal flow model.
In Proc. IEEE Workshop on Speech Recognition, June 1988.
[3] A B Poritz. Hidden Markov models: a guided tour. In Proc. IEEE Int. Conf.
Acoustics Speech and Signal Processing, pages 7-13, 1988.
[4] L R Bahl, P F Brown, P V de Souza, and R L Mercer. Maximum mutual
information estimation of hidden Markov model parameters. In Proc. IEEE
Int. Conf. Acoustics Speech and Signal Processing, pages 49-52, 1986.
[5] L R Bahl, P F Brown, P V de Souza, and R L Mercer. A new algorithm for the
estimation of HMM parameters. In Proc. IEEE Int. Conf. Acoustics Speech
and Signal Processing, pages 493-496, 1988.
[6] J S Bridle. Probabilistic interpretation of feedforward classification network
outputs, with relationships to statistical pattern recognition. In F Fogelman-Soulie and J Herault, editors, Neuro-computing: algorithms, architectures and
applications, Springer-Verlag, 1989.
[7] D H Ackley, G E Hinton, and T J Sejnowski. A learning algorithm for Boltzmann machines. Cognitive Science, 9:147-168, 1985.
[8] L Gillick. Probability scores for backpropagation networks. July 1987. Personal communication.
[9] G E Hinton. Connectionist Learning Procedures. Technical Report CMU-CS-87-115, Carnegie Mellon University Computer Science Department, June 1987.
[10] E B Baum and F Wilczek. Supervised learning of probability distributions
by neural networks. In D Anderson, editor, Neural Information Processing
Systems, pages 52-61, Am. Inst. of Physics, 1988.
[11] S Solla, E Levin, and M Fleisher. Accelerated learning in layered neural networks. Complex Systems, January 1989.
[12] E Yair and A Gersho. The Boltzmann Perceptron Network: a soft classifier.
In D Touretzky, editor, Advances in Neural Information Processing Systems 1,
San Mateo, CA: Morgan Kaufmann, 1989.
[13] P S Gopalakrishnan, D Kanevsky, A Nadas, D Nahamoo, and M A Picheny.
Decoder selection based on cross-entropies. In Proc. IEEE Int. Conf. Acoustics
Speech and Signal Processing, pages 20-23, 1988.
[14] J S Bridle. Alphanets: a recurrent 'neural' network architecture with a hidden
Markov model interpretation. Speech Communication, Special Neurospeech
issue, February 1990.
[15] L Niles, H Silverman, G Tajchman, and M Bush. How limited training data
can allow a neural network to out-perform an 'optimal' classifier. In Proc.
IEEE Int. Conf. Acoustics Speech and Signal Processing, 1989.
1,040 | 1,950 | A hierarchical model of complex cells in
visual cortex for the binocular perception
of motion-in-depth
Silvio P. Sabatini, Fabio Solari, Giulia Andreani,
Chiara Bartolozzi, and Giacomo M. Bisio
Department of Biophysical and Electronic Engineering
University of Genoa, 1-16145 Genova, ITALY
silvio@dibe.unige.it
Abstract
A cortical model for motion-in-depth selectivity of complex cells in
the visual cortex is proposed. The model is based on a time extension of the phase-based techniques for disparity estimation. We
consider the computation of the total temporal derivative of the
time-varying disparity through the combination of the responses of
disparity energy units. To take into account the physiological plausibility, the model is based on the combinations of binocular cells
characterized by different ocular dominance indices. The resulting
cortical units of the model show a sharp selectivity for motion-in-depth that has been compared with that reported in the literature
for real cortical cells.
1
Introduction
The analysis of a dynamic scene implies estimates of motion parameters to infer
spatio-temporal information about the visual world. In particular, the perception
of motion-in-depth (MID), i.e. the capability of discriminating between forward
and backward movements of objects from an observer, has important implications
for navigation in dynamic environments. In general, a reliable estimate of motion-in-depth can be gained by considering the dynamic stereo correspondence problem
in the stereo image signals acquired by a binocular vision system. Fig. 1 shows
the relationships between an object moving in the 3-D space and its geometrical
projections in the right and left retinas. In a first approximation, the positions of
corresponding points are related by a 1-D horizontal shift, the disparity, along the
direction of the epipolar lines. Formally, the left and right observed intensities from
the two eyes, respectively I^L(x) and I^R(x), are related as I^L(x) = I^R[x + δ(x)],
where δ(x) is the horizontal binocular disparity. If an object moves from P to
Q its disparity changes and projects different velocities (v_L, v_R) on the retinas.
[Box in Fig. 1: δ(t+Δt) = (x_QL − x_QR) ≈ a(D − Z_Q)/D² ;  Δδ = δ(t+Δt) − δ(t) ;  Δδ/Δt = [(x_QL − x_PL) − (x_QR − x_PR)]/Δt ;  V_z ≈ (Δδ/Δt) D²/a ≈ (v_L − v_R) D²/a.]
Figure 1: The dynamic stereo correspondence problem. A moving object in the 3-D
space projects different trajectories onto the left and right retinas. The differences
between the two trajectories carry information about motion-in-depth.
Thus, the Z component of the object's motion (i.e., its motion-in-depth) Vz can
be approximated in two ways [1]: (1) by the rate of change of disparity, and (2)
by the difference between retinal velocities, as is evidenced in the box in Fig. 1.
The predominance of one measure on the other one corresponds to different hypotheses on the architectural solutions adopted by visual cortical cells to encode
dynamic 3-D visual information. Recently, numerous experimental and computational studies (see e.g., [2] [3] [4] [5]) addressed this issue, by analyzing the binocular
spatio-temporal properties of simple and complex cells. The fact that the resulting
disparity tuning does not vary with time, and that most of the cells in the primary visual cortex have the same motion preference for the two eyes, led to the
conclusion that these cells are not tuned to motion-in-depth. In this paper, we
demonstrate that, within a phase-based disparity encoding scheme, such cells relay
phase temporal derivative components that can be combined, at a higher level, to
yield a specific motion-in-depth selectivity. The rationale of this statement relies
upon analytical considerations on phase-based dynamic stereopsis, as a time extension of the well-known phase-based techniques for disparity estimation [6] [7].
The resulting model is based on the computation of the total temporal derivative of
the disparity through the combination of the outputs of binocular disparity energy
units [4] [5] characterized by different ocular dominance indices. Since each energy
unit is just a binocular Adelson and Bergen's motion detector, this establishes a
link between the information contained in the total rate of change of the binocular
disparity and that held by the interocular velocity differences.
2
Phase-based dynamic stereopsis
In the last decades, a computational approach for stereopsis that relies on the phase
information contained in the spectral components of the stereo image pair, has been
proposed [6] [7]. Spatially-localized phase measures on the left and right images can
be obtained by filtering operations with a complex-valued quadrature pair of Gabor
filters h(x, k_0) = e^{−x²/(2σ²)} e^{i k_0 x}, where k_0 is the peak frequency of the filter and σ
relates to its spatial extension. The resulting convolutions with the left and right
binocular signals can be expressed as Q(x) = ρ(x)e^{iφ(x)} = C(x) + iS(x), where
ρ(x) = √(C²(x) + S²(x)) and φ(x) = arctan(S(x)/C(x)) denote their amplitude
and phase components, respectively, and C(x) and S(x) are the responses of the
quadrature pair of filters. Hence, binocular disparity can be predicted by δ(x) =
[φ^L(x) − φ^R(x)]/k(x), where k(x) = [φ_x^L(x) + φ_x^R(x)]/2, with φ_x the spatial derivative of
phase φ, is the average instantaneous frequency of the bandpass signal which, under
a linear phase model, can be approximated by the peak frequency k_0 of the Gabor
filter. Extending to the time domain, the disparity of a point moving with the motion
field can be estimated by:
δ[x(t), t] = ( φ^L[x(t), t] − φ^R[x(t), t] ) / k_0   (1)
where phase components are computed from the spatiotemporal convolutions of the
stereo image pair Q(x, t) = C(x, t) + iS(x, t) with directionally tuned Gabor filters
with a central frequency p = (k_0, ω_0). For spatiotemporal locations where the linear
phase approximation still holds (φ ≈ k_0 x + ω_0 t), the phase differences in Eq. (1)
provide only spatial information, useful for reliable disparity estimates.
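A one-dimensional sketch of this estimator (our own minimal version; a practical system would pool over several spatial scales and discard low-amplitude points where phase is unreliable):

    import numpy as np

    def phase_disparity(img_left, img_right, k0, sigma):
        # delta(x) = (phi_L(x) - phi_R(x)) / k0 from complex Gabor responses.
        x = np.arange(-4 * sigma, 4 * sigma + 1)
        gabor = np.exp(-x ** 2 / (2 * sigma ** 2)) * np.exp(1j * k0 * x)
        qL = np.convolve(img_left, gabor, mode='same')    # C_L + i S_L
        qR = np.convolve(img_right, gabor, mode='same')   # C_R + i S_R
        dphi = np.angle(qL * np.conj(qR))                 # phi_L - phi_R, wrapped
        return dphi / k0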
2.1
Motion-in-depth
If disparity is defined with respect to the spatial coordinate x_L, by differentiating
with respect to time its total rate of variation can be written as

dδ/dt = ∂δ/∂t + (v_L / k_0)( φ_x^L − φ_x^R )   (2)
where v_L is the horizontal component of the velocity signal on the left retina. Considering the conservation property of local phase measurements [8], image velocities
can be computed from the temporal evolution of constant phase contours, and thus

v_L = −φ_t^L / φ_x^L ,   v_R = −φ_t^R / φ_x^R ,   (3)

with φ_t = ∂φ/∂t. Combining Eq. (3) with Eq. (2) we obtain dδ/dt = (v_R − v_L) φ_x^R / k_0,
where (v_R − v_L) is the phase-based interocular velocity difference along the epipolar
lines. When the spatial tuning frequency k_0 of the Gabor filter approaches the
instantaneous spatial frequency of the left and right convolution signals, one can
derive the following approximated expressions:
dδ/dt ≈ ∂δ/∂t = ( φ_t^L − φ_t^R ) / k_0 ≈ v_R − v_L.   (4)
The partial derivative of the disparity can be directly computed by convolutions
(S, C) of stereo image pairs and by their temporal derivatives (St, Ct):
∂δ/∂t = [ (S_t^L C^L − S^L C_t^L) / ((S^L)² + (C^L)²) − (S_t^R C^R − S^R C_t^R) / ((S^R)² + (C^R)²) ] · 1/k_0   (5)
thus avoiding explicit calculation and differentiation of phase, and the attendant
problem of phase unwrapping. Considering that, at first approximation, (S^L)² +
(C^L)² ≈ (S^R)² + (C^R)², and that these terms are scantly discriminant for motion-in-depth, we can formulate the cortical model taking into account the numerator
terms only.
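In code, the quantity kept by the model is a simple combination of the quadrature responses and their temporal derivatives (a sketch; in the full scheme these responses come from the spatiotemporal filtering described next):

    def disparity_rate(CL, SL, CtL, StL, CR, SR, CtR, StR, k0):
        # Numerator terms of Eq. (5): S_t C - S C_t for each eye (each equals
        # rho^2 * phi_t); their difference, divided by k0, approximates
        # d(delta)/dt once the near-equal amplitude denominators are dropped.
        return ((StL * CL - SL * CtL) - (StR * CR - SR * CtR)) / k0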
2.2
The cortical model
If one prefilters the image signal to extract some temporal frequency sub-band,
S(x, t) ≈ g ∗ S(x, t) and C(x, t) ≈ g ∗ C(x, t), and evaluates the temporal changes
in that sub-band, differentiation can be attained by convolutions on the data with
appropriate bandpass temporal filters:

S'(x, t) ≈ g' ∗ S(x, t) ;   C'(x, t) ≈ g' ∗ C(x, t).

S' and C' approximate S_t and C_t, respectively, if g and g' are a quadrature pair of
temporal filters, e.g. g(t) = e^{−t/τ} sin(ω_0 t) and g'(t) = e^{−t/τ} cos(ω_0 t). From a modeling perspective, that approximation allows us to express derivative operations in
terms of convolutions with a set of spatio-temporal filters, whose shapes resemble
those of simple cell receptive fields (RFs) of the primary visual cortex. Though, it
is worthy to note that a direct interpretation of the computational model is not biologically plausible. Indeed, in the computational scheme (see Eq. (5)), the temporal
variations of phases are obtained by processing monocular images separately and
then the resulting signals are binocularly combined to give an estimate of motion-in-depth in each spatial location. To employ binocular RFs from the beginning, as
they exist for most of the cells in the visual cortex, we manipulated the numerator
by rewriting it as the combination of terms characterized by a dominant contribution for the ipsilateral eye and a non-dominant contribution for the contralateral
eye. These contributions are referable to binocular disparity energy units [5] built
from two pairs of binocular direction selective simple cells with left and right RFs
weighted by an ocular dominance index α ∈ [0, 1]. The "tilted" spatio-temporal RFs
of simple cells of the model are obtained by combining separable RFs according to
an Adelson and Bergen's scheme [9]. It can be demonstrated that the information
about motion-in-depth can be obtained with a minimum number of eight binocular
simple cells, four with a left and four with a right ocular dominance, respectively
(see Fig. 2):
$$
\begin{aligned}
S_1 &= (1-\alpha)(C_t^L + S^L) - \alpha(C^R - S_t^R) &\quad S_5 &= \alpha(C_t^L + S^L) - (1-\alpha)(C^R - S_t^R)\\
S_2 &= (1-\alpha)(C^L - S_t^L) + \alpha(C_t^R + S^R) &\quad S_6 &= \alpha(C^L - S_t^L) + (1-\alpha)(C_t^R + S^R)\\
S_3 &= (1-\alpha)(C_t^L - S^L) - \alpha(C^R + S_t^R) &\quad S_7 &= \alpha(C_t^L - S^L) - (1-\alpha)(C^R + S_t^R)\\
S_4 &= (1-\alpha)(C^L + S_t^L) + \alpha(C_t^R - S^R) &\quad S_8 &= \alpha(C^L + S_t^L) + (1-\alpha)(C_t^R - S^R)
\end{aligned}
$$

$$C_{11} = S_1^2 + S_2^2\ ;\quad C_{12} = S_5^2 + S_6^2\ ;\quad C_{13} = S_3^2 + S_4^2\ ;\quad C_{14} = S_7^2 + S_8^2$$

$$C_{21} = C_{12} - C_{11}\ ;\qquad C_{22} = C_{13} - C_{14}$$

$$C_3 = (1 - 2\alpha)\left(S_t^L C^L - S^L C_t^L - S_t^R C^R + S^R C_t^R\right).$$
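A minimal sketch of the hierarchy above (our own illustration; the exact signs are reconstructions from the damaged text and should be treated as assumptions, although they are algebraically consistent with the stated form of $C_3$):

```python
import numpy as np

def motion_in_depth_response(CL, SL, CLt, SLt, CR, SR, CRt, SRt, alpha):
    """Hierarchy of Fig. 2: 8 binocular simple cells -> energy units -> C3.

    CL, SL (CR, SR)  : even/odd spatial Gabor outputs, left (right) eye
    CLt, SLt, ...    : temporal-derivative outputs (convolutions with g')
    alpha            : ocular dominance index in [0, 1]
    """
    a, b = alpha, 1.0 - alpha
    # left-dominant direction-selective simple cells
    S1 = b*(CLt + SL) - a*(CR - SRt);  S2 = b*(CL - SLt) + a*(CRt + SR)
    S3 = b*(CLt - SL) - a*(CR + SRt);  S4 = b*(CL + SLt) + a*(CRt - SR)
    # right-dominant counterparts (alpha and 1-alpha exchanged)
    S5 = a*(CLt + SL) - b*(CR - SRt);  S6 = a*(CL - SLt) + b*(CRt + SR)
    S7 = a*(CLt - SL) - b*(CR + SRt);  S8 = a*(CL + SLt) + b*(CRt - SR)
    # layer-1 disparity energy units and layer-2 opponent units
    C11, C12 = S1**2 + S2**2, S5**2 + S6**2
    C13, C14 = S3**2 + S4**2, S7**2 + S8**2
    C21, C22 = C12 - C11, C13 - C14
    # algebraically, C21 + C22 = 4*(1-2*alpha)*(SLt*CL - SL*CLt - SRt*CR + SR*CRt)
    return C21 + C22
```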
The output of the higher complex cell in the hierarchy ($C_3$) truly encodes motion-in-depth information. It is worth noting that for a balanced ocular dominance ($\alpha = 0.5$) the cell loses its selectivity.
3 Results
To assess model performance we derived cells' responses to drifting sinusoidal gratings with different speeds in the left and right eye. The spatial frequency of the gratings was chosen central to the RF's bandwidth. For each layer, the tuning characteristics of the cells are analyzed as sensitivity maps in the $(x_L, x_R)$ and $(v_L, v_R)$ domains for the static and dynamic properties, respectively. The $(x_L, x_R)$ map represents the binocular RF [5] of a cell, evidencing its disparity tuning. The $(v_L, v_R)$ response represents the binocular tuning curve of the velocities along the epipolar lines. To better evidence motion-in-depth sensitivity, we represent as polar plots the responses of the model cells with respect to the interocular velocity ratio for 12 different motion trajectories in depth (labeled 1 to 12) [10]. The cells of the cortical model exhibit properties and typical profiles similar to those observed in the visual cortex [5][10]. The middle two layers (see insets A and B in Fig. 2) exhibit a strong selectivity to static disparity, but no specific tuning to motion-in-depth. On the contrary, the output cell $C_3$ shows a narrow tuning to the $Z$ direction of the object's motion, while lacking disparity tuning (see inset C in Fig. 2).
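A sketch of the kind of test stimulus we assume was used (drifting sinusoidal gratings whose interocular velocity ratio sweeps twelve directions in depth); the parameter values are illustrative only:

```python
import numpy as np

def drifting_gratings(k, vL, vR, x, t):
    """Left/right sinusoidal gratings of spatial frequency k drifting at vL, vR."""
    IL = np.cos(k * (x[None, :] - vL * t[:, None]))
    IR = np.cos(k * (x[None, :] - vR * t[:, None]))
    return IL, IR

x = np.linspace(0.0, 10.0, 256)
t = np.linspace(0.0, 1.0, 64)
# twelve motion-in-depth trajectories, parameterized by the vL/vR ratio
for angle in np.linspace(0.0, 2.0 * np.pi, 12, endpoint=False):
    vL, vR = np.cos(angle), np.sin(angle)
    IL, IR = drifting_gratings(k=2.0, vL=vL, vR=vR, x=x, t=t)
```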
To consider more biologically plausible RFs for the simple cells, we included a coefficient $\beta$ in the scheme used to obtain tilted RFs in the space-time domain (e.g., $C + \beta S_t$). This coefficient takes into account the simple cell response to the non-preferred direction. We demonstrated analytically (results not shown here) that the resulting effect is a constant term that multiplies the cortical model output. In this way, the model is based on more realistic simple cells without losing its functionality, provided that the basic direction-selective units maintain a significant direction selectivity index. To analyze the effect of the architectural parameters on the model performance, we systematically varied the ocular dominance index $\alpha$ and introduced a weight $\gamma$ representing the inhibition strength of the afferent signals to the complex cells in layer 2. The resulting direction-in-depth polar plots are shown in Fig. 3. The $\alpha$ parameter yields a strong effect on the response profile: if $\alpha = 0.5$ there is no direction-in-depth selectivity; according to whether $\alpha > 0.5$ or $\alpha < 0.5$, cells exhibit a tuning to opposite directions in depth. As $\alpha$ approaches the boundary values 0 or 1, the binocular model reduces to a monocular one. A decrease of the inhibition strength $\gamma$ yields cells characterized by a less selective response to direction-in-depth, whereas an increase of $\gamma$ diminishes their response amplitude.
4 Discussion and conclusions
There are at least two binocular cues that can be used to determine the MID [1]: binocular combination of monocular velocity signals, or the rate of change of retinal disparity. Assuming a phase-based disparity encoding scheme [6], we demonstrated that the information held in the interocular velocity difference is the same as
Figure 2: Functional representation of the proposed cortical architecture. Each branch groups cells belonging to an ocular dominance column. The afferent signals from left and right ocular dominance columns are combined in layer 3. The basic units are binocular simple cells tuned to motion directions ($S_1, \dots, S_8$). The responses of the complex cells in layers 1, 2 and 3 are obtained by linear and nonlinear combinations of the outputs of those basic units. See text. White squares denote excitatory synapses whereas black squares denote inhibitory ones.
[Figure 3 panels: rows $\alpha = 0.3,\ 0.7,\ 0.9$; columns $\gamma = 0.5,\ 1.0,\ 2.0$; polar plots of direction-in-depth responses over the 12 motion trajectories.]
Figure 3: Effects on the direction-in-depth selectivity of the systematic variation of the model's parameters $\alpha$ and $\gamma$. The responses are normalized to the largest amplitude value.
that derived by the evaluation of the total derivative of the binocular disparity. The resulting computation relies upon spatio-temporal differentials of the left and right retinal phases that can be approximated by linear filtering operations with spatio-temporal RFs. Accordingly, we proposed a cortical model for the generation of binocular motion-in-depth selective cells as a hierarchical combination of binocular energy complex cells. It is worth noting that the phase response and the associated characteristic disparity of simple and complex cells in layers 1 and 2 do not change with time, but the amplitudes of their responses carry information on temporal phase derivatives, which can be related to both retinal velocities and temporal changes in disparity. Moreover, the model evidences the different roles of simple and complex cells. Simple cells provide a Gabor-like spatio-temporal transformation of the visual space, on which to base a variety of visual functions (perception of form, depth, motion). Complex cells, by proper combinations of the same signals provided by simple cells, actively eliminate sensitivity to a selected set of parameters, thus becoming specifically tuned to different features, such as disparity but not motion-in-depth (layers 1 and 2), or motion-in-depth but not disparity (layer 3).
Acknowledgments
This work was partially supported by the UNIGE-2000 Project "Spatio-temporal Operators for the Analysis of Motion in Depth from Binocular Images".
References
[1] J. Harris and S. N. J. Watamaniuk. Speed discrimination of motion-in-depth using binocular cues. Vision Research, 35(7):885-896, 1995.
[2] N. Qian and S. Mikaelian. Relationship between phase and energy methods for disparity computation. Neural Comp., 12(2):279-292, 2000.
[3] Y. Chen, Y. Wang, and N. Qian. Modelling V1 disparity tuning to time-varying stimuli. J. Neurophysiol., pages 504-600, 2001.
[4] D. J. Fleet, H. Wagner, and D. J. Heeger. Neural encoding of binocular disparity: energy models, position shift and phase shift. Vision Research, 17:345-398, 1996.
[5] I. Ohzawa, G. C. DeAngelis, and R. D. Freeman. Encoding of binocular disparity by complex cells in the cat's visual cortex. J. Neurophysiol., 77:2879-2909, 1997.
[6] T. D. Sanger. Stereo disparity computation using Gabor filters. Biol. Cybern., 59:405-418, 1988.
[7] D. J. Fleet, A. D. Jepson, and M. Jenkin. Phase-based disparity measurements. CVGIP: Image Understanding, 53:198-210, 1991.
[8] D. J. Fleet and A. D. Jepson. Computation of component image velocity from local phase information. International Journal of Computer Vision, 1:77-104, 1990.
[9] E. H. Adelson and J. R. Bergen. Spatiotemporal energy models for the perception of motion. J. Opt. Soc. Amer., 2:284-321, 1985.
[10] W. Spileers, G. A. Orban, B. Gulyas, and H. Maes. Selectivity of cat area 18 neurons for direction and speed in depth. J. Neurophysiol., 63(4):936-954, 1990.
A Sequence Kernel and its Application to Speaker Recognition
William M. Campbell
Motorola Human Interface Lab
7700 S. River Parkway
Tempe, AZ 85284
Bill.Campbell@motorola.com
Abstract
A novel approach for comparing sequences of observations using an
explicit-expansion kernel is demonstrated. The kernel is derived using
the assumption of the independence of the sequence of observations and
a mean-squared error training criterion. The use of an explicit expansion kernel reduces classifier model size and computation dramatically,
resulting in model sizes and computation one-hundred times smaller in
our application. The explicit expansion also preserves the computational
advantages of an earlier architecture based on mean-squared error training. Training using standard support vector machine methodology gives
accuracy that significantly exceeds the performance of state-of-the-art
mean-squared error training for a speaker recognition task.
1 Introduction
Comparison of sequences of observations is a natural and necessary operation in speech
applications. Several recent approaches using support vector machines (SVMs) have been
proposed in the literature. The first set of approaches attempts to model emission probabilities for hidden Markov models [1, 2]. This approach has been moderately successful
in reducing error rates, but suffers from several problems. First, large training sets result
in long training times for support vector methods. Second, the emission probabilities must
be approximated [3], since the output of the support vector machine is not a probability.
A more recent method for comparing sequences is based on the Fisher kernel proposed by
Jaakkola and Haussler [4]. This approach has been explored for speech recognition in [5].
The application to speaker recognition is detailed in [6]. We propose an alternative kernel
based upon polynomial classifiers and the associated mean-squared error (MSE) training
criterion [7]. The advantage of this kernel is that it preserves the structure of the classifier
in [7] which is both computationally and memory efficient.
We consider the application of text-independent speaker recognition; i.e., determining or
verifying the identity of an individual through voice characteristics. Text-independent
recognition implies that knowledge of the text of the speech data is not used. Traditional
methods for text-independent speaker recognition are vector quantization [8], Gaussian
mixture models [9], and artificial neural networks [8]. A state-of-the-art approach based
on polynomial classifiers was presented in [7]. The polynomial approach has several ad-
vantages over traditional methods: 1) it is extremely computationally efficient for identification, 2) the classifier is discriminative, which eliminates the need for a background or
cohort model [10], and 3) the method generates small classifier models.
In Section 2, we describe polynomial classifiers and the associated scoring process. In
Section 3, we review the process for mean-squared error training. Section 4 introduces the
new kernel. Section 5 compares the new kernel approach to the standard mean-squared
error training approach.
2 Polynomial classifiers for sequence data
We start by considering the problem of speaker verification, a two-class problem. In this case, the goal is to determine the correctness of an identity claim (e.g., a user id was entered in the system) from a voice input: the decision to be made is whether the claim is valid or whether an impostor is trying to break into the system. We motivate the classification process from a probabilistic viewpoint.
For the verification application, a decision is made from a sequence of observations $\mathbf{x}_1, \dots, \mathbf{x}_N$ extracted from the speech input. We decide based on the output of a discriminant function using a polynomial classifier of the form $f(\mathbf{x}) = \mathbf{w}^t \mathbf{p}(\mathbf{x})$, where $\mathbf{w}$ is the vector of classifier parameters (model) and $\mathbf{p}(\mathbf{x})$ is an expansion of the input space into the vector of monomials of degree $K$ or less. For example, if $\mathbf{x} = [x_1\ x_2]^t$ and $K = 2$, then

$$\mathbf{p}(\mathbf{x}) = [1\ \ x_1\ \ x_2\ \ x_1^2\ \ x_1 x_2\ \ x_2^2]^t \qquad (1)$$

Note that we do not use a nonlinear activation function as is common in higher-order neural networks; this allows us to find a closed form solution for training. Also, note that we use a bold $\mathbf{p}$ to avoid confusion with probabilities.
If the polynomial classifier is trained with a mean-squared error training criterion and target values of 1 for the speaker's frames and 0 for the impostors' frames, then $f(\mathbf{x})$ will approximate the a posteriori probability $p(\mathrm{spk} \mid \mathbf{x})$ [11]. We can then find the probability of the entire sequence, $p(\mathrm{spk} \mid \mathbf{x}_1, \dots, \mathbf{x}_N)$, as follows. Assuming independence of the observations [12] gives

$$p(\mathrm{spk} \mid \mathbf{x}_1, \dots, \mathbf{x}_N) = \frac{p(\mathrm{spk})}{p(\mathbf{x}_1, \dots, \mathbf{x}_N)} \prod_{i=1}^{N} \frac{p(\mathrm{spk} \mid \mathbf{x}_i)\, p(\mathbf{x}_i)}{p(\mathrm{spk})} \qquad (2)$$

For the purposes of classification, we can discard the terms that do not depend on the class, $p(\mathbf{x}_i)$ and $p(\mathbf{x}_1, \dots, \mathbf{x}_N)$. We take the logarithm of both sides to get the discriminant function

$$d'(\bar{\mathbf{x}}) = \sum_{i=1}^{N} \log p(\mathrm{spk} \mid \mathbf{x}_i) \qquad (3)$$

where we have used the shorthand $\bar{\mathbf{x}}$ to denote the sequence $\mathbf{x}_1, \dots, \mathbf{x}_N$. We use two terms of the Taylor series, $\log z \approx z - 1$, to approximate the discriminant function and also normalize by the number of frames $N$ to obtain the final discriminant function

$$d(\bar{\mathbf{x}}) = \frac{1}{N} \sum_{i=1}^{N} p(\mathrm{spk} \mid \mathbf{x}_i) \qquad (4)$$

Note that we have discarded the $-1$ in this discriminant function since this will not affect the classification decision. The key reason for using the Taylor approximation is that it reduces computation without significantly affecting classifier accuracy.

Now assume we have a polynomial function $f(\mathbf{x}) = \mathbf{w}^t \mathbf{p}(\mathbf{x}) \approx p(\mathrm{spk} \mid \mathbf{x})$; we call the vector $\mathbf{w}$ the speaker model. Substituting the polynomial function into (4) gives

$$d(\bar{\mathbf{x}}) = \frac{1}{N} \sum_{i=1}^{N} \mathbf{w}^t \mathbf{p}(\mathbf{x}_i) = \mathbf{w}^t\, \bar{\mathbf{p}}_{\bar{\mathbf{x}}} \qquad (5)$$

where we have defined the mapping $\bar{\mathbf{x}} \mapsto \bar{\mathbf{p}}_{\bar{\mathbf{x}}}$ as

$$\bar{\mathbf{p}}_{\bar{\mathbf{x}}} = \frac{1}{N} \sum_{i=1}^{N} \mathbf{p}(\mathbf{x}_i) \qquad (6)$$

We summarize the scoring method. For a sequence of input vectors $\bar{\mathbf{x}}$ and a speaker model $\mathbf{w}$, we construct $\bar{\mathbf{p}}_{\bar{\mathbf{x}}}$ using (6). We then score using the speaker model, $\mathrm{score} = \mathbf{w}^t \bar{\mathbf{p}}_{\bar{\mathbf{x}}}$. Since we are performing verification, if the score is above a threshold then we declare the identity claim valid; otherwise, the claim is rejected as an impostor attempt. More details on this probabilistic scoring method can be found in [13].
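To make the scoring pipeline concrete, here is a minimal sketch (ours, not the paper's code; the function names are illustrative) of the monomial expansion of (1), the sequence mapping of (6), and the score of (5):

```python
import numpy as np
from itertools import combinations_with_replacement

def poly_expand(x, degree):
    """Vector of all monomials of a 1-D array x up to the given degree (incl. the 1)."""
    terms = [1.0]
    for d in range(1, degree + 1):
        for idx in combinations_with_replacement(range(len(x)), d):
            terms.append(np.prod(x[list(idx)]))
    return np.array(terms)

def sequence_map(frames, degree):
    """Eq. (6): average the expanded frames of one utterance into a single vector."""
    return np.mean([poly_expand(x, degree) for x in frames], axis=0)

def score(w, frames, degree):
    """Eq. (5): the sequence score is a single inner product with the speaker model."""
    return w @ sequence_map(frames, degree)
```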
Extending the sequence scoring framework to the case of identification (i.e., identifying the speaker from a list of speakers by voice) is straightforward. In this case, we construct speaker models $\mathbf{w}_k$ for each speaker $k$ and then choose the speaker which maximizes $\mathbf{w}_k^t\, \bar{\mathbf{p}}_{\bar{\mathbf{x}}}$ (assuming equal prior probability of each speaker). Note that identification has low computational complexity, since we must only compute one inner product to determine each speaker's score.
3 Mean-squared error training
We next review how to train the polynomial classifier to approximate the probability $p(\mathrm{spk} \mid \mathbf{x})$; this process will help us set notation for the following sections. Let $\mathbf{w}$ be the desired speaker model and $y(\mathbf{x})$ the ideal output; i.e., $y(\mathbf{x}) = 1$ for the speaker's frames and $y(\mathbf{x}) = 0$ for the impostors' frames. The resulting problem is

$$\mathbf{w}^{*} = \arg\min_{\mathbf{w}} E\!\left[\left(\mathbf{w}^{t}\mathbf{p}(\mathbf{x}) - y(\mathbf{x})\right)^{2}\right] \qquad (7)$$

where $E$ denotes expectation. This criterion can be approximated using the training set as

$$\mathbf{w}^{*} = \arg\min_{\mathbf{w}} \left[\, \sum_{i=1}^{N_{\mathrm{spk}}} \left(\mathbf{w}^{t}\mathbf{p}(\mathbf{x}_i) - 1\right)^{2} + \sum_{i=1}^{N_{\mathrm{imp}}} \left(\mathbf{w}^{t}\mathbf{p}(\mathbf{z}_i)\right)^{2} \right] \qquad (8)$$

Here, the speaker's training data is $\mathbf{x}_1, \dots, \mathbf{x}_{N_{\mathrm{spk}}}$, and the anti-speaker data is $\mathbf{z}_1, \dots, \mathbf{z}_{N_{\mathrm{imp}}}$. (Anti-speakers are designed to have the same statistical characteristics as the impostor set.)
The training method can be written in matrix form. First, define $\mathbf{M}_{\mathrm{spk}}$ as the matrix whose rows are the polynomial expansions of the speaker's data; i.e.,

$$\mathbf{M}_{\mathrm{spk}} = \begin{bmatrix} \mathbf{p}(\mathbf{x}_1)^t \\ \vdots \\ \mathbf{p}(\mathbf{x}_{N_{\mathrm{spk}}})^t \end{bmatrix} \qquad (9)$$

Define a similar matrix for the impostor data, $\mathbf{M}_{\mathrm{imp}}$, and

$$\mathbf{M} = \begin{bmatrix} \mathbf{M}_{\mathrm{spk}} \\ \mathbf{M}_{\mathrm{imp}} \end{bmatrix} \qquad (10)$$

The problem (8) then becomes

$$\mathbf{w}^{*} = \arg\min_{\mathbf{w}} \left\| \mathbf{M}\mathbf{w} - \mathbf{o} \right\|^{2} \qquad (11)$$

where $\mathbf{o}$ is the vector consisting of $N_{\mathrm{spk}}$ ones followed by $N_{\mathrm{imp}}$ zeros (i.e., the ideal output). The problem (11) can be solved using the method of normal equations,

$$\mathbf{M}^t\mathbf{M}\,\mathbf{w} = \mathbf{M}^t\mathbf{o} \qquad (12)$$

We rearrange (12) to

$$\mathbf{M}^t\mathbf{M}\,\mathbf{w} = \mathbf{M}_{\mathrm{spk}}^t\,\mathbf{1} \qquad (13)$$

where $\mathbf{1}$ is the vector of all ones. If we define $\mathbf{R} = \mathbf{M}^t\mathbf{M}$ and solve for $\mathbf{w}$, then (13) becomes

$$\mathbf{w} = \mathbf{R}^{-1}\,\mathbf{M}_{\mathrm{spk}}^t\,\mathbf{1} \qquad (14)$$
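A corresponding training sketch (again illustrative; it reuses poly_expand from the previous sketch and solves the normal equations directly, which assumes the expansion dimension is moderate and that there are enough frames for R to be invertible):

```python
import numpy as np

def train_mse_model(spk_frames, imp_frames, degree):
    """Solve Eqs. (12)-(14): R w = M_spk^T 1 with R = M^T M."""
    M_spk = np.array([poly_expand(x, degree) for x in spk_frames])
    M_imp = np.array([poly_expand(x, degree) for x in imp_frames])
    M = np.vstack([M_spk, M_imp])              # Eq. (10)
    R = M.T @ M
    rhs = M_spk.T @ np.ones(len(M_spk))        # M_spk^T 1
    w = np.linalg.solve(R, rhs)                # Eq. (14), without forming R^-1
    return w, R
```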
4 The naive a posteriori sequence kernel
We can now combine the methods from Sections 2 and 3 to obtain a novel sequence comparison kernel in a straightforward manner. Combine the speaker model from (14) with the scoring equation from (5) to obtain the classifier score

$$\mathrm{score}(\bar{\mathbf{x}}) = \bar{\mathbf{p}}_{\bar{\mathbf{x}}}^t\, \mathbf{R}^{-1}\, \mathbf{M}_{\mathrm{spk}}^t\, \mathbf{1} = N_{\mathrm{spk}}\, \bar{\mathbf{p}}_{\bar{\mathbf{x}}}^t\, \mathbf{R}^{-1}\, \bar{\mathbf{p}}_{\mathrm{spk}} \qquad (15)$$

Now $\mathbf{R} \approx \mathbf{M}_{\mathrm{imp}}^t\mathbf{M}_{\mathrm{imp}}$ (because of the large anti-speaker population), so that, dropping the scale factor, (15) becomes

$$\mathrm{score}(\bar{\mathbf{x}}) = \bar{\mathbf{p}}_{\mathrm{spk}}^t\, \mathbf{R}^{-1}\, \bar{\mathbf{p}}_{\bar{\mathbf{x}}} \qquad (16)$$

where $\bar{\mathbf{p}}_{\mathrm{spk}}$ is $\frac{1}{N_{\mathrm{spk}}}\mathbf{M}_{\mathrm{spk}}^t\mathbf{1}$ (note that this is exactly the same as mapping the training data using (6)), and $\mathbf{R}$ is (approximately) the correlation matrix of the anti-speaker data, which is the same for all speakers.

The scoring method in (16) is the basis of our sequence kernel. Given two sequences of speech feature vectors, $\bar{\mathbf{x}}$ and $\bar{\mathbf{y}}$, we compare them by mapping $\bar{\mathbf{x}} \mapsto \bar{\mathbf{p}}_{\bar{\mathbf{x}}}$ and $\bar{\mathbf{y}} \mapsto \bar{\mathbf{p}}_{\bar{\mathbf{y}}}$ and then computing

$$K_{\mathrm{NAPS}}(\bar{\mathbf{x}}, \bar{\mathbf{y}}) = \bar{\mathbf{p}}_{\bar{\mathbf{x}}}^t\, \mathbf{R}^{-1}\, \bar{\mathbf{p}}_{\bar{\mathbf{y}}} \qquad (17)$$

We call $K_{\mathrm{NAPS}}$ the naive a posteriori sequence kernel, since scoring assumes independence of observations and training approximates the a posteriori probabilities. The value $K_{\mathrm{NAPS}}(\bar{\mathbf{x}}, \bar{\mathbf{y}})$ can be interpreted as scoring using a polynomial classifier on the sequence $\bar{\mathbf{y}}$, see (5), with the MSE model trained from the feature vectors $\bar{\mathbf{x}}$ (or vice versa, because of symmetry).
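Under the same assumptions, the kernel of (17) reduces to a few lines (reusing sequence_map from the earlier sketch and the matrix R from the training sketch):

```python
import numpy as np

def naps_kernel(frames_x, frames_y, R, degree):
    """Eq. (17): K(x, y) = p_x^T R^{-1} p_y for two utterances."""
    bx = sequence_map(frames_x, degree)
    by = sequence_map(frames_y, degree)
    return bx @ np.linalg.solve(R, by)   # solve avoids explicitly inverting R
```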
- " # ! #
! !
##
$
G& 6 & L& 6
' #
(
#' H ) @ @ - # @ # +*
@BA
Several observations should be made about the NAPS kernel. First, scoring
complexity can
using
be reduced dramatically in training by the following trick. We factor
the Cholesky decomposition. Then
:<8
2
. I.e., if we transform
all the sequence data by
before training, the sequence kernel is a simple inner product.
For our application in Section 5, this reduces training time from hours per speaker down
to
seconds on a Sun Ultra ,
MHz. Second, since the NAPS kernel explicitly
performs the expansion to ?feature space?, we can simplify the output of the support vector
machine. Suppose
is the (soft) output of the SVM,
%G6
:<8
-
(18)
(
!
H
#'
) @ @
# @ #
(19)
@A
*
6 6 , ! . That is, once we train the support
where '
vector machine, we can
collapse all the support vectors down into a single model , where is the quantity in
We can simplify this to
-
parenthesis in (19). Third, although the NAPS kernel is reminiscent of the Mahalanobis
distance, it is distinct. No assumption of equal covariance matrices for different classes
(speakers) is made for the new kernel?the kernel covariance matrix is a mixture of the
individual class covariances. Also, the kernel is not a distance measure?no subtraction of
means occurs as in the Mahalanobis distance.
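The two tricks above can be sketched as follows (illustrative code; it assumes the support vectors are the unwhitened mapped utterances and that alphas, labels and bias come from any off-the-shelf SVM trainer):

```python
import numpy as np

# Whitening trick: with R = L L^T (Cholesky), K(x, y) = (L^{-1} p_x)^T (L^{-1} p_y),
# so transforming every mapped utterance once makes the kernel a plain dot product.
L = np.linalg.cholesky(R)            # R from the training sketch above

def whiten(b):
    return np.linalg.solve(L, b)     # L^{-1} b without explicit inversion

# Collapsing a trained SVM per Eq. (19); support_vecs holds the unwhitened
# mapped utterances p_i as rows, alphas and labels the SVM coefficients.
def collapse_svm(alphas, labels, support_vecs, bias):
    w = np.linalg.solve(R, (alphas * labels) @ support_vecs)  # R^{-1} sum_i a_i y_i p_i
    return w, bias                   # afterwards f(x) = w^T p_x + bias
```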
5 Results
5.1 Setup
The NAPS kernel was tested on the standard speaker recognition database YOHO [14] collected from 138 speakers. Utterances in the database consist of combination lock phrases of fixed length; e.g., "23-45-56." Enrollment and verification sessions were recorded at distinct
times. (Enrollment is the process of collecting data for training and generating a speaker
model. Verification is the process of testing the system; i.e., the user makes an identity
claim and then this hypothesis is verified.) For each speaker, enrollment consisted of four
sessions each containing twenty-four utterances. Verification consisted of ten separate sessions with four utterances per session (again per speaker). Thus, there are 40 tests of the
speaker's identity and 40*137 = 5480 possible impostor attempts on a speaker. For clarity,
we emphasize that enrollment and verification session data is completely separate.
To extract features for each of the utterances, we used standard speech processing. Each utterance was broken up into frames of 30 ms each with a frame rate of 100 frames/sec. The mean was removed from each frame, and the frame was preemphasized with the filter $H(z) = 1 - 0.97z^{-1}$. A Hamming window was applied and then 12 linear prediction coefficients were found. The resulting coefficients were transformed to 12 cepstral coefficients. Endpointing was performed to eliminate non-speech frames. This typically resulted in a few hundred observations per utterance.
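A sketch of this front end under the (reconstructed) settings above; the LP analysis and LPC-to-cepstrum conversion are omitted for brevity, and the constants are assumptions carried over from the text:

```python
import numpy as np

def analysis_frames(signal, fs, frame_ms=30, rate_hz=100, pre=0.97):
    """Frame, mean-remove, pre-emphasize and Hamming-window a waveform."""
    flen, hop = int(fs * frame_ms / 1000), int(fs / rate_hz)
    n = 1 + (len(signal) - flen) // hop
    out = []
    for i in range(n):
        f = signal[i * hop : i * hop + flen].astype(float)
        f = f - f.mean()                  # remove the frame mean
        f[1:] = f[1:] - pre * f[:-1]      # pre-emphasis filter 1 - 0.97 z^-1
        out.append(f * np.hamming(flen))  # Hamming window before LP analysis
    return np.stack(out)
```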
For verification, we measure performance in terms of the pooled and average equal error
rates (EER). The average EER is found by averaging the individual EER for each speaker.
The individual EER is the error rate at the threshold where the false accept rate (FAR) equals the false reject rate (FRR). The pooled EER is found by setting a constant threshold across the
entire population. When the FAR equals the FRR for the entire population this is termed
the pooled EER. For identification, the misclassification error rate is used.
To eliminate bias in verification, we trained the first 69 speakers against the first 69 and the second 69 against the second 69 (as in [7]). We then performed verification using the second 69 as impostors to the first 69 speakers' models and vice versa. This ensures that the impostors are unknown. For identification, we trained all 138 speakers against each other.
5.2 Experiments
We trained support vector machines for each speaker using the software tool SVMTorch [15] and the NAPS kernel (17). The 12 cepstral features were mapped to a dimension-455 vector using a 3rd degree polynomial classifier. Single utterances (i.e., "23-45-56") were converted to single vectors using the mapping (6) and then transformed with the Cholesky factor to reduce computation. We cross-validated using the first three enrollment sessions as training and the fourth enrollment session as a test to determine the best tradeoff between margin and error; the best performing value of the tradeoff parameter was used with the final SVMTorch training. Using the identical set of features and the same methodology, classifier models were also trained using the mean-squared error criterion with the method in [7]. For final testing, all enrollment sessions were used for training, and all verification sessions were used for testing.
Results for verification and identification are shown in Table 1. The new kernel method reduces error rates considerably: the average EER is reduced by 38%, the pooled EER is reduced by 47%, and the identification error rate is reduced by 42%. Storing the support vectors of each trained SVM directly requires substantial space (in single precision floating point); using the model size reduction method in Section 4, each speaker model collapses to a single 455-dimensional vector of under 2 kilobytes, over a hundred times reduction in size.
Table 1: Comparison of structural risk minimization and MSE training

                    MSE      NAPS SVM
    Average EER     1.63%    1.01%
    Pooled EER      2.76%    1.45%
    ID error rate   4.71%    2.72%
We also plotted scores for all speakers versus a threshold; see Figure 1. We normalized the scores for the MSE and SVM approaches to the same range for comparison. One can easily see the reduction in pooled EER from the graph. Note also the dramatic shifting of the FRR curve to the right for the SVM training, resulting in substantially better error rates than the MSE training. For instance, at a fixed low FAR, the SVM training method gives a several times lower FRR than the MSE training method.
Figure 1: FAR/FRR rates for the entire population versus a threshold for the SVM and
MSE training methods.
6 Conclusions and future work
A novel kernel for comparing sequences in speech applications was derived, the NAPS kernel. This data-dependent kernel was motivated by using a probabilistic scoring method and mean-squared error training. Experiments showed that incorporating this kernel in an SVM training architecture yielded performance superior to that of the MSE training criterion. Reductions in error rates of up to nearly a factor of two were observed while retaining the efficiency of the original MSE classifier architecture.
The new kernel method is also applicable to more general situations. Potential applications
include using the approach with radial basis functions, application to automatic speech
recognition, and extending to an SVM/HMM architecture.
References
[1] Vincent Wan and William M. Campbell, "Support vector machines for verification and identification," in Neural Networks for Signal Processing X, Proceedings of the 2000 IEEE Signal Processing Workshop, 2000, pp. 775-784.
[2] Aravind Ganapathiraju and Joseph Picone, "Hybrid SVM/HMM architectures for speech recognition," in Speech Transcription Workshop, 2000.
[3] John C. Platt, "Probabilities for SV machines," in Advances in Large Margin Classifiers, Alexander J. Smola, Peter L. Bartlett, Bernhard Schölkopf, and Dale Schuurmans, Eds., pp. 61-74. The MIT Press, 2000.
[4] Tommi S. Jaakkola and David Haussler, "Exploiting generative models in discriminative classifiers," in Advances in Neural Information Processing 11, M. S. Kearns, S. A. Solla, and D. A. Cohn, Eds. 1998, pp. 487-493, The MIT Press.
[5] Nathan Smith, Mark Gales, and Mahesan Niranjan, "Data-dependent kernels in SVM classification of speech patterns," Tech. Rep. CUED/F-INFENG/TR.387, Cambridge University Engineering Department, 2001.
[6] Shai Fine, Jiří Navrátil, and Ramesh A. Gopinath, "A hybrid GMM/SVM approach to speaker recognition," in Proceedings of the International Conference on Acoustics, Speech, and Signal Processing, 2001.
[7] William M. Campbell and Khaled T. Assaleh, "Polynomial classifier techniques for speaker verification," in Proceedings of the International Conference on Acoustics, Speech, and Signal Processing, 1999, pp. 321-324.
[8] Kevin R. Farrell, Richard J. Mammone, and Khaled T. Assaleh, "Speaker recognition using neural networks and conventional classifiers," IEEE Trans. on Speech and Audio Processing, vol. 2, no. 1, pp. 194-205, Jan. 1994.
[9] Douglas A. Reynolds, "Automatic speaker recognition using Gaussian mixture speaker models," The Lincoln Laboratory Journal, vol. 8, no. 2, pp. 173-192, 1995.
[10] Michael J. Carey, Eluned S. Parris, and John S. Bridle, "A speaker verification system using alpha-nets," in Proceedings of the International Conference on Acoustics, Speech, and Signal Processing, 1991, pp. 397-400.
[11] Jürgen Schürmann, Pattern Classification, John Wiley and Sons, Inc., 1996.
[12] Lawrence Rabiner and Biing-Hwang Juang, Fundamentals of Speech Recognition, Prentice-Hall, 1993.
[13] William M. Campbell and C. C. Broun, "A computationally scalable speaker recognition system," in Proceedings of EUSIPCO, 2000, pp. 457-460.
[14] Joseph P. Campbell, Jr., "Testing with the YOHO CD-ROM voice verification corpus," in Proceedings of the International Conference on Acoustics, Speech, and Signal Processing, 1995, pp. 341-344.
[15] Ronan Collobert and Samy Bengio, "Support vector machines for large-scale regression problems," Tech. Rep. IDIAP-RR 00-17, IDIAP, 2000.
Agglomerative Multivariate Information Bottleneck
Noam Slonim, Nir Friedman, Naftali Tishby
School of Computer Science & Engineering, Hebrew University, Jerusalem 91904, Israel
{noamm, nir, tishby}@cs.huji.ac.il
Abstract
The information bottleneck method is an unsupervised model independent data
organization technique. Given a joint distribution P(A, B), this method constructs a new variable T that extracts partitions, or clusters, over the values of A
that are informative about B. In a recent paper, we introduced a general principled framework for multivariate extensions of the information bottleneck method
that allows us to consider multiple systems of data partitions that are inter-related.
In this paper, we present a new family of simple agglomerative algorithms to
construct such systems of inter-related clusters. We analyze the behavior of these
algorithms and apply them to several real-life datasets.
1 Introduction
The information bottleneck (IB) method of Tishby et al [14] is an unsupervised nonparametric data organization technique. Given a joint distribution P(A, B), this method
constructs a new variable T that represents partitions of A which are (locally) maximizing
the mutual information about B. In other words, the variable T induces a sufficient partition, or informative features of the variable A with respect to B. The construction of T
finds a tradeoff between the information about A that we try to minimize, J(T; A), and
the information about B which we try to maximize, J(T ; B). This approach is particularly
useful for co-occurrence data, such as words and documents [12], where we want to capture what information one variable (e.g., use of a word) contains about the other (e.g., the
document).
In a recent paper, Friedman et al. [4] introduce multivariate extension of the IB principle.
This extension allows us to consider cases where the data partition is relevant with respect
to several variables, or where we construct several systems of clusters simultaneously. In
this framework, we specify the desired interactions by a pair of Bayesian networks. One
network, Gin, represents which variables are compressed versions of the observed variables
- each new variable compresses its parents in the network. The second network, Gout>
defines the statistical relationship between these new variables and the observed variables
that should be maintained.
Similar to the original IB, in Friedman et al. we formulated the general principle as a
tradeoff between the (multi) information each network carries. On the one hand, we want to
minimize the information maintained by G in and on the other to maximize the information
maintained by Gout. We also provide a characterization of stationary points in this tradeoff
as a set of self-consistent equations. Moreover, we prove that iterations of these equations
converges to a (local) optimum. Then, we describe a deterministic annealing procedure
that constructs a solution by tracking the bifurcation of clusters as it traverses the tradeoff
curve, similar to the original IB method.
In this paper, we consider an alternative approach to solving multivariate IB problems
which is motivated by the success of the agglomerative IB of Slonim and Tishby [11]. As
shown there, a bottom-up greedy agglomeration is a simple heuristic procedure that can
yield good solutions to the original IB problem. Here we extend this idea in the context of
multivariate IB problems. We start by analyzing the cost of agglomeration steps within this
framework. This both elucidates the criteria that guide greedy agglomeration and provides efficient local evaluation rules for agglomeration steps. This construction results in a novel family of information theoretic agglomerative clustering algorithms that can be specified using the graphs Gin and Gout. We demonstrate the performance of some of
these algorithms for document and word clustering and gene expression analysis.
2 Multivariate Information Bottleneck
A Bayesian network structure G is a DAG that specifies interactions among variables [8].
A distribution $P$ is consistent with $G$ (denoted $P \models G$) if $P(X_1, \dots, X_n) = \prod_i P(X_i \mid Pa_{X_i}^G)$, where $Pa_{X_i}^G$ are the parents of $X_i$ in $G$. Our main interest is in the information that the variables $X_1, \dots, X_n$ contain about each other. A quantity that captures this is the multi-information, given by

$$\mathcal{I}(X_1, \dots, X_n) = D\!\left( P(X_1, \dots, X_n)\ \big\|\ P(X_1)\cdots P(X_n) \right)$$

where $D(p\|q)$ is the familiar Kullback-Leibler divergence [2].

Proposition 2.1 [4] Let $G$ be a DAG over $\{X_1, \dots, X_n\}$, and let $P \models G$ be a distribution. Then, $\mathcal{I}^G(X_1, \dots, X_n) = \sum_i I(X_i; Pa_{X_i}^G)$.

That is, the multi-information is the sum of local mutual information terms between each variable and its parents (denoted $\mathcal{I}^G$).
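For concreteness, a small helper (our own illustration) that computes one such local term, the mutual information $I(X; Y)$, from a joint probability table:

```python
import numpy as np

def mutual_information(pxy):
    """I(X;Y) in nats from a joint probability table pxy[i, j]."""
    px = pxy.sum(axis=1, keepdims=True)   # marginal p(x)
    py = pxy.sum(axis=0, keepdims=True)   # marginal p(y)
    mask = pxy > 0
    return float(np.sum(pxy[mask] * np.log(pxy[mask] / (px @ py)[mask])))
```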
Friedman et al. define the multivariate IB problem as follows. Suppose we are given a set of observed variables, $\mathbf{X} = \{X_1, \dots, X_n\}$, and their joint distribution $P(X_1, \dots, X_n)$. We want to "construct" new variables $\mathbf{T}$, where the relations between the observed variables and these new compression variables are specified using a DAG $G_{in}$ over $\mathbf{X} \cup \mathbf{T}$ in which the variables in $\mathbf{T}$ are leafs. Thus, each $T_j$ is a stochastic function of a set of variables $U_j = Pa_{T_j}^{G_{in}} \subseteq \mathbf{X}$. Once these are set, we have a joint distribution over the combined set of variables: $P(\mathbf{X}, \mathbf{T}) = P(\mathbf{X}) \prod_j P(T_j \mid U_j)$.

The "relevant" information that we want to preserve is specified by another DAG, $G_{out}$. This graph specifies, for each $T_j$, which variables it predicts. These are simply its children in $G_{out}$. More precisely, we want to predict each $X_i$ (or $T_j$) by $V_{X_i} = Pa_{X_i}^{G_{out}}$ (resp. $V_{T_j} = Pa_{T_j}^{G_{out}}$), its parents in $G_{out}$. Thus, we think of $\mathcal{I}^{G_{out}}$ as a measure of how much information the variables in $\mathbf{T}$ maintain about their target variables.

The Lagrangian can then be defined as

$$\mathcal{L} = \mathcal{I}^{G_{out}} - \beta^{-1}\, \mathcal{I}^{G_{in}} \qquad (1)$$

with a tradeoff parameter (Lagrange multiplier) $\beta$.¹ The variation is done subject to the normalization constraints on the partition distributions. Thus, we balance between the information $\mathbf{T}$ loses about $\mathbf{X}$ in $G_{in}$ and the information it preserves in $G_{out}$.
Friedman et al. [4] show that stationary points of this Lagrangian satisfy a set of selfconsistent equations. Moreover, they show that iterating these equations converges to a
¹Notice that under this formulation we would like to maximize $\mathcal{L}$. An equivalent definition [4] would be to minimize $\tilde{\mathcal{L}} = \mathcal{I}^{G_{in}} - \beta\, \mathcal{I}^{G_{out}}$.
stationary point of the tradeoff. Then, extending the procedure of Tishby et al [14], they
propose a procedure that searches for a solution of the IB equations using a 'deterministic
annealing' approach [9]. This is a top-down hierarchical algorithm that starts from a single cluster for each $T_j$ at $\beta \to 0$, and then undergoes a cascade of cluster splits as $\beta$ is being "cooled". This determines "soft" trees of clusters (one for each $T_j$) that describe solutions at different tradeoff values of $\beta$.
3 The Agglomerative Procedure
For the original IB problem, Slonim and Tishby [11] introduced a simpler procedure that
performs greedy bottom-up merging of values. Several successful applications of this algorithm have already been presented for a variety of real-world problems [10, 12, 13, 15]. The main focus of the current work is extending this approach to the multivariate IB problem. As we will show, this will lead to further insights about the method, and also provide rather simple and intuitive clustering procedures.
We consider procedures that start with a set of clusters for each T j (usually the most
fine-grained solution we can consider where T j = U j ) and then iteratively reduce the
cardinality of one of the $T_j$'s by merging two values $t_j^l$ and $t_j^r$ of $T_j$ into a single value $\bar{t}_j$. To formalize this notion we must define the membership probability of a new cluster $\bar{t}_j$ resulting from merging $\{t_j^l, t_j^r\} \Rightarrow \bar{t}_j$ in $T_j$. This is done rather naturally by

$$p(\bar{t}_j \mid U_j) = p(t_j^l \mid U_j) + p(t_j^r \mid U_j) \qquad (2)$$

In other words, we view the event $\bar{t}_j$ as the union of the events $t_j^l$ and $t_j^r$.
Given the membership probabilities, at each step we can also draw the connection between $T_j$ and the other variables. This is done using the following proposition, which is based on the conditional independence assumptions given in $G_{in}$.

Proposition 3.1 Let $Y, Z \subseteq \mathbf{X} \cup \mathbf{T} \setminus \{T_j\}$. Then,

$$p(Y \mid \bar{t}_j, Z) = \pi_{l,Z}\, p(Y \mid t_j^l, Z) + \pi_{r,Z}\, p(Y \mid t_j^r, Z) \qquad (3)$$

where $\Pi_Z = \{\pi_{l,Z}, \pi_{r,Z}\} = \left\{ \frac{p(t_j^l \mid Z)}{p(\bar{t}_j \mid Z)},\ \frac{p(t_j^r \mid Z)}{p(\bar{t}_j \mid Z)} \right\}$, i.e., the merger distribution can be conditioned on $Z$.
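A minimal sketch of such a merger for the simplest case of a single prediction variable $Y$ (our own illustration; the arrays and their shapes are assumptions):

```python
import numpy as np

def merge_clusters(p_t, p_y_given_t, l, r):
    """Merge clusters l and r: Eq. (2) for the prior, Prop. 3.1 for predictions.

    p_t         : cluster prior, shape (k,)
    p_y_given_t : conditionals p(Y | t), shape (k, |Y|)
    """
    p_new = p_t[l] + p_t[r]
    pi_l, pi_r = p_t[l] / p_new, p_t[r] / p_new
    y_new = pi_l * p_y_given_t[l] + pi_r * p_y_given_t[r]
    # build the reduced arrays: drop r, overwrite l with the merged cluster
    keep = [i for i in range(len(p_t)) if i != r]
    p_t2, p_y2 = p_t[keep].copy(), p_y_given_t[keep].copy()
    li = keep.index(l)
    p_t2[li], p_y2[li] = p_new, y_new
    return p_t2, p_y2
```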
In particular, this proposition allows us to evaluate all the predictions defined in $G_{out}$ and all the information terms in $\mathcal{L}$ that involve $T_j$.

The crucial question in an agglomerative process is of course which pair to merge at each step. We know that the merger "cost" in our terms is exactly the difference in the values of $\mathcal{L}$ before and after the merger. Let $T_j^{bef}$ and $T_j^{aft}$ denote the random variables that correspond to $T_j$ before and after the merger, respectively. Thus, the values of $\mathcal{L}$ before and after the merger are calculated based on $T_j^{bef}$ and $T_j^{aft}$. The merger cost is then simply given by

$$\Delta\mathcal{L}(t_j^l, t_j^r) = \mathcal{L}^{bef} - \mathcal{L}^{aft} \qquad (4)$$

The greedy procedure evaluates all the potential mergers (for all $T_j$) and then applies the best one (i.e., the one that minimizes $\Delta\mathcal{L}(t_j^l, t_j^r)$). This is repeated until all the variables in
T degenerate into trivial clusters. The resulting set of trees describes a range of solutions
at different resolutions.
This agglomerative approach is different in several important aspects from the deterministic annealing approach described above. In that approach, by "cooling" (i.e., increasing) $\beta$, we move along a tradeoff curve from the trivial, single-cluster solution toward solutions with higher resolutions that preserve more information in $G_{out}$. In contrast, in the agglomerative approach we progress in the opposite direction. We start with a high resolution clustering and, as the merging process continues, we move toward more and more compact solutions. During this process $\beta$ is kept constant and the driving force is the reduction in the cardinality of the $T_j$'s. Therefore, we are able to look for good solutions in different resolutions for a fixed tradeoff parameter $\beta$. Since the merging does not attempt
directly to maintain the (stationary) self-consistent "soft" membership probabilities, we do
not expect the self-consistent equations to hold at solutions found by the agglomerative
procedure. On the other hand, the agglomerative process is much simpler to implement
and fully deterministic. As we will show, it provides sufficiently good solutions for the IB
problem in many situations.
4 Local Merging Criteria
In the procedure we outlined above, at every step there are $O(|T_j|^2)$ possible mergers of values of $T_j$ (for every $j$). A direct calculation of the costs of all these potential mergers is typically infeasible. However, it turns out that one may calculate $\Delta\mathcal{L}(t_j^l, t_j^r)$ while examining only the probability distributions that involve $t_j^l$ and $t_j^r$ directly. Generalizing the results of [11] for the original IB, we now develop a closed-form formula for $\Delta\mathcal{L}(t_j^l, t_j^r)$.
To describe this result we need the following definition. The Jensen-Shannon ($JS$) divergence [7, 3] between two probabilities $p_1, p_2$ is given by

$$JS_\Pi[p_1, p_2] = \pi_1\, D(p_1 \| \bar{p}) + \pi_2\, D(p_2 \| \bar{p})$$

where $\Pi = \{\pi_1, \pi_2\}$ is a normalized probability and $\bar{p} = \pi_1 p_1 + \pi_2 p_2$. The $JS$ divergence equals zero if and only if both its arguments are identical. It is upper bounded and symmetric, though it is not a metric. One interpretation of the $JS$-divergence relates it to the (logarithmic) measure of the likelihood that the two sample distributions originate from the most likely common source, denoted by $\bar{p}$. In addition, we need the notation $V_{X_i}^{-j} = V_{X_i} \setminus \{T_j\}$ (similarly for $V_{T_\ell}^{-j}$).
Theorem 4.1 Let $t_j^l, t_j^r \in T_j$ be two clusters. Then $\Delta\mathcal{L}(t_j^l, t_j^r) = p(\bar{t}_j)\cdot d(t_j^l, t_j^r)$, where

$$
\begin{aligned}
d(t_j^l, t_j^r) =\ & \sum_{i:\, T_j \in V_{X_i}} E_{p(\cdot \mid \bar{t}_j)}\!\left[ JS_\Pi\!\left( p(X_i \mid t_j^l, V_{X_i}^{-j}),\ p(X_i \mid t_j^r, V_{X_i}^{-j}) \right) \right] \\
+\ & \sum_{\ell:\, T_j \in V_{T_\ell}} E_{p(\cdot \mid \bar{t}_j)}\!\left[ JS_\Pi\!\left( p(T_\ell \mid t_j^l, V_{T_\ell}^{-j}),\ p(T_\ell \mid t_j^r, V_{T_\ell}^{-j}) \right) \right] \\
+\ & JS_\Pi\!\left( p(V_{T_j} \mid t_j^l),\ p(V_{T_j} \mid t_j^r) \right) - \beta^{-1}\, JS_\Pi\!\left( p(U_j \mid t_j^l),\ p(U_j \mid t_j^r) \right)
\end{aligned}
$$
A detailed proof of this theorem will be given elsewhere. Thus, the merger cost is a multiplication of the weight of the merger components ($p(\bar{t}_j)$) with their "distance", given by $d(t_j^l, t_j^r)$. Notice that due to the properties of the $JS$-divergence, this distance is symmetric. In addition, the last term in this distance has the opposite sign to the first three terms.
Thus, the distance between two clusters is a tradeoff between these two factors. Roughly
speaking, we may say that the distance is minimized for pairs that give similar predictions
about the variables connected with T j in Gout and have different predictions (minimum
overlap) about the variables connected with T j in Gin. We notice also the analogy between
this result and the main theorem in [4]. In [4] the optimization is governed by the KL
divergences between data and cluster's centroids, or by the likelihood that the data was
generated by the centroid distribution. Here the optimization is controlled through the $JS$ divergences, i.e., the likelihood that the two clusters have a common source.
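The $JS$ divergence at the heart of this cost is a few lines of code (a sketch, in natural logarithms, assuming strictly positive prior weights):

```python
import numpy as np

def kl(p, q):
    """Kullback-Leibler divergence D(p || q) in nats."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def js(pi1, p1, pi2, p2):
    """Jensen-Shannon divergence JS_Pi[p1, p2] with prior (pi1, pi2)."""
    pbar = pi1 * p1 + pi2 * p2
    return pi1 * kl(p1, pbar) + pi2 * kl(p2, pbar)
```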
Next, we notice that after applying a merger, only a small portion of the other mergers
costs change. The following proposition characterizes these costs.
[Figure 1 panels: (a) Original Bottleneck, $\mathcal{L} = I(T; B) - \beta^{-1} I(T; A)$; (b) Parallel Bottleneck, $\mathcal{L} = I(T_1, T_2; B) - \beta^{-1}\left(I(T_1; A) + I(T_2; A)\right)$; (c) Symmetric Bottleneck, $\mathcal{L} = I(T_A; T_B) - (\beta^{-1} - 1)\left(I(T_A; A) + I(T_B; B)\right)$.]
Figure 1: The source and target networks and the corresponding Lagrangian for the three examples we consider.
Proposition 4.2 The merger $\{t_j^l, t_j^r\} \Rightarrow \bar{t}_j$ in $T_j$ can change the cost $\Delta\mathcal{L}(t_\ell^l, t_\ell^r)$ only if $p(\bar{t}_j, t_\ell) > 0$ and $T_j$, $T_\ell$ co-appear in some information term in $\mathcal{I}^{G_{out}}$.
This proposition is particularly useful when we consider "hard" clustering, where $T_j$ is a (deterministic) function of $U_j$. In this case, $p(\bar{t}_j, t_\ell)$ is often zero (especially when $T_j$ and $T_\ell$ compress similar variables, i.e., $U_j \cap U_\ell \neq \emptyset$). In particular, after the merger $\{t_j^l, t_j^r\} \Rightarrow \bar{t}_j$, we do not have to reevaluate merger costs of other values of $T_j$, except for mergers of $\bar{t}_j$ with each of these values.
In the case of hard clustering we also find that $I(T_j; U_j) = H(T_j)$ (where $H(p)$ is Shannon's entropy). Roughly speaking, we may say that $H(p)$ decreases for less balanced probability distributions $p$. Therefore, increasing $\beta^{-1}$ will result in a tendency to look for less balanced "hard" partitions, and vice versa. This is reflected by the fact that the last term in $d(t_j^l, t_j^r)$ then simplifies through $JS_\Pi(p(U_j \mid t_j^l), p(U_j \mid t_j^r)) = H(\Pi)$.
5 Examples
We now briefly consider three examples of the general methodology. For brevity, we focus on the simpler case of hard clustering. We first consider the example shown in Figure 1(a). This choice of graphs results in the original IB problem. The merger cost in this case is given by

$$\Delta\mathcal{L}(t^l, t^r) = p(\bar{t})\cdot\left( JS_\Pi\!\left(p(B \mid t^l),\ p(B \mid t^r)\right) - \beta^{-1} H(\Pi) \right) \qquad (5)$$

Note that for $\beta^{-1} \to 0$ we get exactly the algorithm presented in [11].
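Putting the pieces together, a sketch of the greedy agglomerative loop for this special case (reusing js and merge_clusters from the sketches above; a naive $O(|T|^2)$ scan per step, whereas a practical implementation would cache pair costs as suggested by Proposition 4.2):

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

def agglomerative_ib(p_ab, n_clusters, inv_beta=0.0):
    """Greedy agglomerative IB for Fig. 1(a): start with T = A, merge via Eq. (5)."""
    p_ab = np.asarray(p_ab, dtype=float)
    p_t = p_ab.sum(axis=1)            # cluster priors; assumes every a has p(a) > 0
    p_b_t = p_ab / p_t[:, None]       # predictions p(B | t)
    while len(p_t) > n_clusters:
        best, best_cost = None, np.inf
        for l in range(len(p_t)):
            for r in range(l + 1, len(p_t)):
                p_bar = p_t[l] + p_t[r]
                pi_l, pi_r = p_t[l] / p_bar, p_t[r] / p_bar
                cost = p_bar * (js(pi_l, p_b_t[l], pi_r, p_b_t[r])
                                - inv_beta * entropy(np.array([pi_l, pi_r])))
                if cost < best_cost:
                    best, best_cost = (l, r), cost
        p_t, p_b_t = merge_clusters(p_t, p_b_t, *best)
    return p_t, p_b_t
```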
One simple extension of the original IB is the parallel bottleneck [4]. In this case we introduce two variables $T_1$ and $T_2$, as in Figure 1(b), both of them functions of $A$. Similarly to the original IB, $G_{out}$ specifies that $T_1$ and $T_2$ should predict $B$. We can think of this requirement as an attempt to decompose the information $A$ contains about $B$ into two "orthogonal" components. In this case, the merger cost for $T_1$ is given by

$$\Delta\mathcal{L}(t_1^l, t_1^r) = p(\bar{t}_1)\cdot\left( E_{p(\cdot \mid \bar{t}_1)}\!\left[ JS_\Pi\!\left(p(B \mid t_1^l, T_2),\ p(B \mid t_1^r, T_2)\right) \right] - \beta^{-1} H(\Pi) \right) \qquad (6)$$
Finally, we consider the symmetric bottleneck [4, 12]. In this case, we want to compress $A$ into $T_A$ and $B$ into $T_B$ so that $T_A$ extracts the information $A$ contains about $B$, and at the same time $T_B$ extracts the information $B$ contains about $A$. The DAG $G_{in}$ of Figure 1(c) captures the form of the compression. The choice of $G_{out}$ is less obvious and several alternatives are described in [4]. Here, we concentrate on only one option, shown in Figure 1(c). In this case we attempt to make each of $T_A$ and $T_B$ sufficient to separate $A$ from $B$. Thus, on one hand we attempt to compress, and on the other hand we attempt to make $T_A$ and $T_B$ as informative about each other as possible. The merger cost in $T_A$ is given by

$$\Delta\mathcal{L}(t_A^l, t_A^r) = p(\bar{t}_A)\cdot\left( JS_\Pi\!\left(p(T_B \mid t_A^l),\ p(T_B \mid t_A^r)\right) - (\beta^{-1} - 1)\, H(\Pi) \right) \qquad (7)$$

while for merging in $T_B$ we get an analogous expression.
6 Applications
We examine a few applications of the examples presented above. As one data set we
used a subset of the 20 newsgroups corpus [6], where we randomly chose 2000 documents evenly distributed among the 4 science discussion groups (sci.crypt, sci.electronics, sci.med and sci.space).² Our pre-processing included ignoring file headers (and the subject lines), lowering upper case and ignoring words that contain non-'a..z' characters. Given this document set we can evaluate the joint probability $p(W, D)$, which is the probability that a random word position is equal to $w \in W$ and at the same time the document is $d \in D$. We sorted all words by their contribution to $I(W; D)$ and used only the 2000 "most informative" ones, ending up with a joint probability with $|W| = |D| = 2000$.

²We used the same subset already used in [12].
We first used the original IB to cluster W , while trying to preserve the information about
D. This was already done in [12] with $\beta^{-1} = 0$, but in this new experiment we took $\beta^{-1} = 0.15$. Recall that increasing $\beta^{-1}$ results in a tendency to find less balanced clusters. Indeed, while for $\beta^{-1} = 0$ we got relatively balanced word clusters (high $H(T_W)$), for $\beta^{-1} = 0.15$ the probability $p(T_W)$ is much less smooth. For 50 word clusters, one cluster contained almost half of the words, while the other clusters were typically much smaller. Since the algorithm also tries to maximize $I(T_W; D)$, the words merged into the big cluster are usually the less informative words about $D$. Thus, a word must be highly informative to stay out of this cluster. In this sense, increasing $\beta^{-1}$ is equivalent to inducing a "noise filter" that leaves only the most informative features in specific clusters. In Figure 2 we present $p(D \mid t_w)$ for several clusters $t_w \in T_W$. Clearly, words that passed the "filter" form much more informative clusters about the real structure of $D$. A more formal demonstration of this effect is given in the right panel of Figure 2. For a given compression level (i.e., a given $I(T_W; W)$), we see that taking $\beta^{-1} = 0.15$ preserves much more information about $D$.
Tw and T D , an approximated approach require only two steps. First we find Tw. Second,
we project each d E D into the low dimensional space defined by T w , and use this more
robust representation to extract document clusters TD. Approximately, we are trying to
find Tw and TD that will maximize I(Tw; TD)' This two-phase IB algorithm was shown
in [12] to be significantly superior to six other document clustering methods, when the
performance are measured by the correlation of the obtained document clusters with the
real newsgroup categories. Here we use the same procedure, but for finding Tw we take
(3- 1 = 0.15 (instead of zero). Using the above intuition we predict this will induce a
cleaner representation for the document set. Indeed, the averaged correlation of TD (for
lTD I = 4) with the original categories was 0.65, while for (3-1 = 0 it was 0.58 (the average
is taken over different number of word clusters, ITw I = 10, 11...50). Similar results were
obtained for all the 9 other subsets of the 20 newsgroups corpus described in [12].
As a second data set we used the gene expression measurements of about 6800 genes in 72 samples of Leukemia [5]. The sample annotations included type of leukemia (ALL vs. AML), type of cells, source of sample, gender and donating hospital. We removed genes that were not expressed in the data and normalized the measurements of each sample to get a joint probability $P(G, A)$ over genes and samples (with a uniform prior on samples). We sorted all genes by their contribution to $I(G; A)$ and chose the 500 most informative ones, which capture 47% of the original information, ending up with a joint probability with $|A| = 72$ and $|G| = 500$.
We first used an exact implementation of the symmetric IB with alternating mergers between
[Figure 2 panels: $p(D \mid t_w)$ for five word clusters (c1 through c5); the recoverable panel titles list each cluster's five most frequent words (e.g., "algorithm secure security encryption classified", "analog mode signal input output", "acid vitamin calcium intake kidney", "ames planetary nasa space"); the lower right panel shows the two information curves, $I(T_W; D)$ versus $I(T_W; W)$, for $\beta^{-1} = 0$ and $\beta^{-1} = 0.15$.]
category, 501 - 1000 to sci. electronics, 1001 - 1500 to sci.med and 1501 - 2000 to sci. space.
In the title of each panel we see the 5 most frequent words in the cluster. The 'big' cluster (upper
left panel) is clearly less informative about the structure of D. In the lower right panel we see the
two information curves. Given some compression level, for (3- 1 = 0.15 we preserve much more
information about D than for (3-1 = O.
For |TA| = 2 we found an almost perfect
correlation with the ALL vs. AML annotations (with only 4 exceptions). For |TA| = 8 and
|TG| = 10 we found again high correlation between our sample clusters and the different
sample annotations. For example, one cluster contained 10 samples that were all annotated
as ALL type, taken from male patients in the same hospital. Almost all of these 10 were also
annotated as T-cells, taken from bone marrow. Looking at p(TA | TG) we see that given the
third genes cluster (which contained 17 genes) the probability of the above specific samples
cluster is especially high. Further such analysis might yield additional insights about the
structure of this data and will be presented elsewhere.
Finally, to demonstrate the performance of the parallel IB we apply it to the same data.
Using the parallel IB algorithm (with β⁻¹ = 0) we clustered the arrays A into two clustering hierarchies, T1 and T2, that try together to capture the information about G. For
|Tj| = 4 we find that each I(Tj; G) preserves about 15% of the original information. However, taking |Tj| = 2 (i.e., again, just 4 clusters) we see that the combination of the hierarchies, I(T1, T2; G), preserves 21% of the original information. We then compared the
two partitions we found against sample annotations. We found that the first hierarchy with
|T1| = 2 almost perfectly matches the split between B-cells and T-cells (among the 47 samples for which we had this annotation). The second hierarchy, with |T2| = 2, separates a
cluster of 18 samples, almost all of which are ALL samples taken from the bone marrow of
patients from the same hospital. These results demonstrate the ability of the algorithm to
extract in parallel different meaningful independent partitions of the data.
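The combined information I(T1, T2; G) reported above treats the pair of cluster labels as one compound variable; the following sketch (ours) computes it for two hard partitions of the samples.

```python
import numpy as np

def multi_partition_information(pag, assign1, assign2):
    """I(T1, T2; G) for two hard partitions of the samples A, given a
    joint p(A, G) (rows: samples, cols: genes). Each pair (t1, t2)
    is treated as a single compound cluster label."""
    n1, n2 = assign1.max() + 1, assign2.max() + 1
    joint = np.zeros((n1 * n2, pag.shape[1]))
    for a in range(pag.shape[0]):
        joint[assign1[a] * n2 + assign2[a]] += pag[a]
    pt = joint.sum(axis=1, keepdims=True)
    pg = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log(joint[nz] / (pt @ pg)[nz])).sum())
```

Comparing this quantity with the single-hierarchy values I(T1; G) and I(T2; G) shows how much extra information the second, independent partition adds.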
7 Discussion
The analysis presented by this work makes it possible to implement a family of novel agglomerative
clustering algorithms. All of these algorithms are motivated by one variational framework
given by the multivariate IB method. Unlike most other clustering techniques, this is a
principled, model independent approach, which aims directly at the extraction of informative structures about given observed variables. It is thus very different from maximum-likelihood estimation of some mixture model and relies on fundamental information theoretic notions, similar to rate distortion theory and channel coding. In fact the multivariate
IB can be considered as a multivariate coding result. The fundamental tradeoff between the
compressed multi-information I^in and the preserved multi-information I^out provides a
generalized coding limiting function, similar to the information curve in the original IB
and to the rate distortion function in lossy compression. Despite the only local optimality
of the resulting solutions, this information theoretic quantity - the fraction of the multi-information that is extracted by the clusters - provides an objective figure of merit for the
obtained clustering schemes.
The suggested approach of this paper has several practical advantages over the 'deterministic annealing' algorithms suggested in [4], as it is simpler, fully deterministic and
non-parametric. There is no need to identify cluster splits, which is usually rather tricky.
Though agglomeration procedures do not scale linearly with the sample size as top down
methods do, there exist several heuristics to improve the complexity of these algorithms
(e.g., [1]).
While a typical initialization of an agglomerative procedure induces "hard" clustering
solutions, all of the above analysis holds for "soft" clustering as well. Moreover, as already
noted in [11], the obtained "hard" partitions can be used as a platform to find also "soft"
solutions through a process of "reverse annealing". This raises the possibility of using an
agglomerative procedure over "soft" clustering solutions, which we leave for future work.
We could describe here only a few relatively simple examples. These examples show
promising results on non-trivial real life data. Moreover, other choices of G_in and G_out
can yield additional novel algorithms with applications over a variety of data types.
Acknowledgements
This work was supported in part by the Israel Science Foundation (ISF), the Israeli Ministry
of Science, and by the US-Israel Bi-national Science Foundation (BSF). N. Slonim was also
supported by an Eshkol fellowship. N. Friedman was also supported by an Alon fellowship
and the Harry & Abe Sherman Senior Lectureship in Computer Science.
References
[1] L. D. Baker and A. K. McCallum. Distributional clustering of words for text classification. In ACM SIGIR '98.
[2] T. M. Cover and J. A. Thomas. Elements of Information Theory. 1991.
[3] R. El-Yaniv, S. Fine, and N. Tishby. Agnostic classification of Markovian sequences. In NIPS '97.
[4] N. Friedman, O. Mosenzon, N. Slonim, and N. Tishby. Multivariate Information Bottleneck. UAI, 2001.
[5] T. Golub, D. Slonim, P. Tamayo, C. Huard, M. Gaasenbeek, H. Coller, M. Loh, J. Downing, M. Caligiuri, C. Bloomfield, and E. Lander. Molecular classification of cancer: class discovery and class prediction by gene expression monitoring. Science 286, 531-537, 1999.
[6] K. Lang. Learning to filter netnews. In ICML '95.
[7] J. Lin. Divergence measures based on the Shannon entropy. IEEE Trans. Info. Theory, 37(1):145-151, 1991.
[8] J. Pearl. Probabilistic Reasoning in Intelligent Systems. 1988.
[9] K. Rose. Deterministic annealing for clustering, compression, classification, regression, and related optimization problems. Proc. IEEE, 86:2210-2239, 1998.
[10] N. Slonim, R. Somerville, N. Tishby, and O. Lahav. Objective spectral classification of galaxies using the information bottleneck method. Monthly Notices of the Royal Astronomical Society, MNRAS, 323, 270, 2001.
[11] N. Slonim and N. Tishby. Agglomerative Information Bottleneck. In NIPS '99.
[12] N. Slonim and N. Tishby. Document clustering using word clusters via the information bottleneck method. In ACM SIGIR 2000.
[13] N. Slonim and N. Tishby. The power of word clusters for text classification. In ECIR, 2001.
[14] N. Tishby, F. Pereira, and W. Bialek. The Information Bottleneck method. In Proc. 37th Allerton Conference on Communication and Computation, 1999.
[15] N. Tishby and N. Slonim. Data clustering by markovian relaxation and the information bottleneck method. In NIPS '00.
Reinforcement Learning with Long Short-Term Memory
Bram Bakker
Dept. of Psychology, Leiden University / IDSIA
P.O. Box 9555; 2300 RB, Leiden; The Netherlands
bbakker@fsw.leidenuniv.nl
Abstract
This paper presents reinforcement learning with a Long Short-Term Memory recurrent neural network: RL-LSTM. Model-free
RL-LSTM using Advantage(λ) learning and directed exploration
can solve non-Markovian tasks with long-term dependencies between relevant events. This is demonstrated in a T-maze task, as
well as in a difficult variation of the pole balancing task.
1
Introduction
Reinforcement learning (RL) is a way of learning how to behave based on delayed
reward signals [12]. Among the more important challenges for RL are tasks where
part of the state of the environment is hidden from the agent. Such tasks are called
non-Markovian tasks or Partially Observable Markov Decision Processes. Many real
world tasks have this problem of hidden state. For instance, in a navigation task
different positions in the environment may look the same, but one and the same
action may lead to different next states or rewards. Thus, hidden state makes RL
more realistic. However, it also makes it more difficult, because now the agent not
only needs to learn the mapping from environmental states to actions, for optimal
performance it usually needs to determine which environmental state it is in as well.
Long-term dependencies. Most approaches to solving non-Markovian RL tasks
have problems if there are long-term dependencies between relevant events. An
example of a long-term dependency problem is a maze navigation task where the
only way to distinguish between two T -junctions that look identical is to remember
an observation or action a long time before either T-junction. Such a case presents
obvious problems for fixed size history window approaches [6], which attempt to resolve the hidden state by making the chosen action depend not only on the current observation, but also on a fixed number of the most recent observations and
actions. If the relevant piece of information to be remembered falls outside the history window, the agent cannot use it. McCallum's variable history window [8] has,
actions. If the relevant piece of information to be remembered falls outside the history window, the agent cannot use it. McCallum's variable history window [8] has,
in principle, the capacity to represent long-term dependencies. However, the system
starts with zero history and increases the depth of the history window step by step.
This makes learning long-term dependencies difficult, especially when there are no
short-term dependencies to build on.
Other approaches to non-Markovian tasks are based on learning Finite State Automata [2], recurrent neural networks (RNNs) [10, 11, 6], or on learning to set
memory bits [9]. Unlike history window approaches, they do not have to represent
(possibly long) entire histories, but can in principle extract and represent just the
relevant information for an arbitrary amount of time. However, learning to do that
has proven difficult. The difficulty lies in discovering the correlation between a
piece of information and the moment at which this information becomes relevant
at a later time, given the distracting observations and actions between them. This
difficulty can be viewed as an instance of the general problem of learning long-term
dependencies in timeseries data. This paper uses one particular solution to this
problem that has worked well in supervised timeseries learning tasks: Long ShortTerm Memory (LSTM) [5, 3]. In this paper an LSTM recurrent neural network is
used in conjunction with model-free RL, in the same spirit as the model-free RNN
approaches of [10,6]. The next section describes LSTM. Section 3 presents LSTM's
combination with reinforcement learning in a system called RL-LSTM. Section 4
contains simulation results on non-Markovian RL tasks with long-term dependencies. Section 5, finally, presents the general conclusions.
2
LSTM
LSTM is a recently proposed recurrent neural network architecture, originally designed for supervised timeseries learning [5, 3]. It is based on an analysis of the
problems that conventional recurrent neural network learning algorithms, e.g. backpropagation through time (BPTT) and real-time recurrent learning (RTRL), have
when learning timeseries with long-term dependencies. These problems boil down
to the problem that errors propagated back in time tend to either vanish or blow
up (see [5]).
Memory cells. LSTM's solution to this problem is to enforce constant error flow
in a number of specialized units, called Constant Error Carrousels (CECs). This
actually corresponds to these CECs having linear activation functions which do
not decay over time. In order to prevent the CECs from filling up with useless
information from the timeseries, access to them is regulated using other specialized,
multiplicative units, called input gates. Like the CECs, the input gates receive input
from the timeseries and the other units in the network, and they learn to open and
close access to the CECs at appropriate moments. Access from the activations of
the CECs to the output units (and possibly other units) of the network is regulated
using multiplicative output gates. Similar to the input gates, the output gates learn
when the time is right to send the information stored in the CECs to the output
side of the network. A recent addition is forget gates [3], which learn to reset
the activation of the CECs when the information stored in the CECs is no longer
useful. The combination of a CEC with its associated input, output, and forget
gate is called a memory cell. See figure 1b for a schematic of a memory cell. It is
also possible for multiple CECs to be combined with only one input, output, and
forget gate, in a so-called memory block.
Activation updates. More formally, the network's activations at each timestep
t are computed as follows. A standard hidden unit's activation y^h, output unit
activation y^k, input gate activation y^in, output gate activation y^out, and forget
gate activation y^φ is computed in the following standard way:

y_i(t) = f_i( Σ_m w_im y_m(t−1) )    (1)
where w_im is the weight of the connection from unit m to unit i. In this paper, f_i
is the standard logistic sigmoid function for all units except output units, for which
it is the identity function. The CEC activation s_{c_j^v}, or the "state" of memory cell v
Figure 1: a. The general LSTM architecture used in this paper. Arrows indicate
unidirectional, fully connected weights. The network's output units directly code
for the Advantages of different actions. b. One memory cell.
in memory block j, is computed as follows:

s_{c_j^v}(t) = y^{φ_j}(t) s_{c_j^v}(t−1) + y^{in_j}(t) g( Σ_m w_{c_j^v m} y_m(t−1) )    (2)

where g is a logistic sigmoid function scaled to the range [−2, 2], and s_{c_j^v}(0) = 0.
The memory cell's output y^{c_j^v} is calculated by

y^{c_j^v}(t) = y^{out_j}(t) h( s_{c_j^v}(t) )    (3)

where h is a logistic sigmoid function scaled to the range [−1, 1].
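As a minimal illustration, the following NumPy sketch (ours, not the authors' code; the weight-vector shapes and function names are assumptions) performs one forward step of a single-cell memory block according to equations (1)-(3).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_block_step(y_prev, s_prev, w_in, w_out, w_phi, w_c):
    """One forward step of a single-cell LSTM memory block.
    y_prev : activations of all units at time t-1 (the block's inputs)
    s_prev : CEC state s(t-1) of this block
    w_*    : weight vectors into the input, output and forget gates
             and into the cell input, respectively."""
    y_in = sigmoid(w_in @ y_prev)            # input gate
    y_out = sigmoid(w_out @ y_prev)          # output gate
    y_phi = sigmoid(w_phi @ y_prev)          # forget gate
    g = 4.0 * sigmoid(w_c @ y_prev) - 2.0    # cell input, scaled to [-2, 2]
    s = y_phi * s_prev + y_in * g            # eq. (2): the CEC's linear state
    h = 2.0 * sigmoid(s) - 1.0               # scaled to [-1, 1]
    y_cell = y_out * h                       # eq. (3): cell output
    return y_cell, s
```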
Learning. At some or all timesteps of the timeseries, the output units of the
network may make prediction errors. Errors are propagated just one step back in
time through all units other than the CECs, including the gates. However, errors
are backpropagated through the CECs for an indefinite amount of time, using an
efficient variation of RTRL [5, 3]. Weight updates are done at every timestep, which
fits in nicely with the philosophy of online RL. The learning algorithm is adapted
slightly for RL, as explained in the next section.
3
RL-LSTM
RNNs, such as LSTM, can be applied to RL tasks in various ways. One way is
to let the RNN learn a model of the environment, which learns to predict observations and rewards, and in this way learns to infer the environmental state at
each point [6, 11]. LSTM's architecture would allow the predictions to depend on
information from long ago. The model-based system could then learn the mapping
from (inferred) environmental states to actions as in the Markovian case, using
standard techniques such as Q-learning [6, 2], or by backpropagating through the
frozen model to the controller [11]. An alternative, model-free approach, and the
one used here, is to use the RNN to directly approximate the value function of a
reinforcement learning algorithm [10, 6]. The state of the environment is approximated by the current observation, which is the input to the network, together with
the recurrent activations in the network, which represent the agent's history. One
possible advantage of such a model-free approach over a model-based approach is
that the system may learn to only resolve hidden state insofar as that is useful for
obtaining higher rewards, rather than waste time and resources in trying to predict
features of the environment that are irrelevant for obtaining rewards [6, 8].
Advantage learning. In this paper, the RL-LSTM network approximates the
value function of Advantage learning [4], which was designed as an improvement on
Q-learning for continuous-time RL. In continuous-time RL, values of adjacent states,
and therefore optimal Q-values of different actions in a given state, typically differ by
only small amounts, which can easily get lost in noise. Advantage learning remedies
this problem by artificially decreasing the values of suboptimal actions in each state.
Here Advantage learning is used for both continuous-time and discrete-time RL.
Note that the same problem of small differences between values of adjacent states
applies to any RL problem with long paths to rewards. And to demonstrate RLLSTM's potential to bridge long time lags, we need to consider such RL problems.
In general, Advantage learning may be more suitable for non-Markovian tasks than
Q-learning, because it seems less sensitive to getting the value estimations exactly
right.
The LSTM network's output units directly code for the Advantage values of different
actions. Figure 1a shows the general network architecture used in this paper. As
in Q-learning with a function approximator, the temporal difference error E^TD(t),
derived from the equivalent of the Bellman equation for Advantage learning [4], is
taken as the function approximator's prediction error at timestep t:

E^TD(t) = V(s(t)) + ( r(t) + γ V(s(t+1)) − V(s(t)) ) / κ − A(s(t), a(t))    (4)

where A(s, a) is the Advantage value of action a in state s, r is the immediate
reward, and V(s) = max_a A(s, a) is the value of the state s. γ is a discount factor
in the range [0, 1], and κ is a constant scaling the difference between values of
optimal and suboptimal actions. Output units associated with other actions than
the executed one do not receive error signals.
Eligibility traces. In this work, Advantage learning is extended with eligibility traces, which have often been found to improve learning in RL, especially in
non-Markovian domains [7]. This yields Advantage(λ) learning, and the necessary
computations turn out virtually the same as in Q(λ)-learning [1]. It requires the
storage of one eligibility trace e_im per weight w_im. A weight update corresponds to

w_im(t+1) = w_im(t) + α E^TD(t) e_im(t),  where  e_im(t) = γλ e_im(t−1) + ∂y^K(t)/∂w_im    (5)

K indicates the output unit associated with the executed action, α is a learning rate
parameter, and λ is a parameter determining how fast the eligibility trace decays.
e_im(0) = 0, and e_im(t−1) is set to 0 if an exploratory action is taken.
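A compact sketch of the resulting update follows, under the assumption of a generic differentiable function approximator; the gradient ∂y^K/∂w is taken as given (for LSTM it comes from the truncated RTRL pass described in section 2), and all names are ours.

```python
import numpy as np

def advantage_td_error(V_t, V_next, r, A_sa, gamma, kappa):
    """E_TD(t) of eq. (4): V(s) + (r + gamma*V(s') - V(s))/kappa - A(s,a)."""
    return V_t + (r + gamma * V_next - V_t) / kappa - A_sa

def advantage_lambda_update(w, e, grad_yK, e_td, alpha, gamma, lam,
                            exploratory):
    """One Advantage(lambda) weight update, following eq. (5).
    w, e    : weights and their eligibility traces (same shape)
    grad_yK : gradient of the executed action's output w.r.t. w
    e_td    : the Advantage-learning TD error E_TD(t)"""
    e = np.zeros_like(e) if exploratory else gamma * lam * e
    e = e + grad_yK                 # decay the trace, then add the gradient
    w = w + alpha * e_td * e        # update every weight at this timestep
    return w, e
```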
Exploration. Non-Markovian RL requires extra attention to the issue of exploration [2, 8]. Undirected exploration attempts to try out actions in the same way in
each environmental state. However, in non-Markovian tasks, the agent initially does
not know which environmental state it is in. Part of the exploration must be aimed
at discovering the environmental state structure. Furthermore, in many cases, the
non-Markovian environment will provide unambiguous observations indicating the
state in some parts, while providing ambiguous observations (hidden state) in other
parts. In general, we want more exploration in the ambiguous parts.
This paper employs a directed exploration technique based on these ideas. A separate multilayer feedforward neural network, with the same input as the LSTM
network (representing the current observation) and one output unit y^v, is trained
concurrently with the LSTM network. It is trained, using standard backpropagation, to predict the absolute value of the current temporal difference error E^TD(t)
as defined by eq. 4, plus its own discounted prediction at the next timestep:

y_d^v(t) = |E^TD(t)| + β y^v(t+1)    (6)

where y_d^v(t) is the desired value for output y^v(t), and β is a discount parameter
in the range [0, 1]. This amounts to attempting to identify which observations are
Figure 2: Long-term dependency T-maze with length of corridor N = 10. At the
starting position S the agent's observation indicates where the goal position G is in
this episode.
"problematic" , in the sense that they are associated with large errors in the current
value estimation (the first term), or precede situations with large such errors (the
second term). yV(t) is linearly scaled and used as the temperature of a Boltzmann
action selection rule [12]. The net result is much exploration when, for the current
observation, differences between estimated Advantage values are small (the standard
effect of Boltzmann exploration), or when there is much "uncertainty" about current
Advantage values or Advantage values in the near future (the effect of this directed
exploration scheme). This exploration technique has obvious similarities with the
statistically more rigorous technique of Interval Estimation (see [12]), as well as
with certain model-based approaches where exploration is greater when there is
more uncertainty in the predictions of a model [11].
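The following sketch (ours; the paper does not spell out the constants of the linear scaling, so the temperature is passed in directly) shows the two ingredients: the training target of eq. (6) and Boltzmann selection with a state-dependent temperature.

```python
import numpy as np

def exploration_target(e_td, y_v_next, beta):
    """Training target for the exploration network, eq. (6):
    |E_TD(t)| + beta * y_v(t+1)."""
    return abs(e_td) + beta * y_v_next

def boltzmann_action(advantages, temperature, rng):
    """Boltzmann action selection with a state-dependent temperature:
    high temperature (large predicted error) means more exploration."""
    prefs = np.asarray(advantages) / max(temperature, 1e-6)
    prefs -= prefs.max()            # for numerical stability
    p = np.exp(prefs)
    p /= p.sum()
    return rng.choice(len(p), p=p)
```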
4
Test problems
Long-term dependency T-maze. The first test problem is a non-Markovian
grid-based T-maze (see figure 2). It was designed to test RL-LSTM's capability to
bridge long time lags, without confounding the results by making the control task
difficult in other ways. The agent has four possible actions: move North, East,
South, or West. The agent must learn to move from the starting position at the
beginning of the corridor to the T-junction. There it must move either North or
South to a changing goal position, which it cannot see. However, the location of
the goal depends on a "road sign" the agent has seen at the starting position. If
the agent takes the correct action at the T-junction, it receives a reward of 4. If it
takes the wrong action, it receives a reward of -.1. In both cases, the episode ends
and a new episode starts, with the new goal position set randomly either North or
South. During the episode, the agent receives a reward of -.1 when it stands still.
At the starting position, the observation is either 011 or 110, in the corridor the
observation is 101, and at the T-junction the observation is 010. The length of the
corridor N was systematically varied from 5 to 70. In each condition, 10 runs were
performed.
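For concreteness, here is a minimal environment sketch of this task. It is our reconstruction from the description above; details such as re-showing the road sign when the agent walks back to the starting position are our guesses.

```python
import numpy as np

class TMaze:
    """Minimal sketch of the long-term dependency T-maze."""
    def __init__(self, corridor_length, rng):
        self.n = corridor_length
        self.rng = rng

    def _obs(self):
        if self.pos == 0:   # road sign: 011 = goal North, 110 = goal South
            return np.array([0, 1, 1]) if self.goal_north else np.array([1, 1, 0])
        if self.pos == self.n:
            return np.array([0, 1, 0])   # T-junction
        return np.array([1, 0, 1])       # corridor

    def reset(self):
        self.pos = 0
        self.goal_north = self.rng.random() < 0.5
        return self._obs()

    def step(self, action):  # 0 = North, 1 = East, 2 = South, 3 = West
        if self.pos == self.n and action in (0, 2):
            correct = (action == 0) == self.goal_north
            return None, 4.0 if correct else -0.1, True
        old = self.pos
        if action == 1 and self.pos < self.n:
            self.pos += 1
        elif action == 3 and self.pos > 0:
            self.pos -= 1
        # the paper specifies a -0.1 reward when the agent stands still
        return self._obs(), -0.1 if self.pos == old else 0.0, False
```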
If the agent takes only optimal actions to the T-junction, it must remember the
observation from the starting position for N timesteps to determine the optimal
action at the T-junction. Note that the agent is not aided by experiences in which
there are shorter time lag dependencies. In fact, the opposite is true. Initially, it
takes many more actions until even the T -junction is reached, and the experienced
history is very variable from episode to episode. The agent must first learn to
reliably move to the T-junction. Once this is accomplished, the agent will begin to
experience more or less consistent and shortest possible histories of observations and
actions, from which it can learn to extract the relevant piece of information. The
directed exploration mechanism is crucial in this regard: it learns to set exploration
low in the corridor and high at the T-junction.
The LSTM network had 3 input units, 12 standard hidden units, 3 memory cells, and
α = 0.0002. The following parameter values were used in all conditions: γ = 0.98,
λ = 0.8, κ = 0.1. An empirical comparison was made with two alternative systems that
have been used in non-Markovian tasks. The long-term dependency nature of the
[Figure 3 appears here: two panels with legend entries LSTM, Elman-BPTT and Memory bits; x-axis: N, length of corridor (5-70).]
Figure 3: Results in noise-free T-maze task. Left: Number of successful runs (out of
10) as a function of N, length of the corridor. Right: Average number of timesteps
until success as a function of N.
[Figure 4 appears here: two panels with legend entries LSTM, Elman-BPTT and Memory bits; x-axis: N, length of corridor (5-70).]
Figure 4: Results in noisy T-maze task. Left: Number of successful runs (out of
10) as a function of N, length of the corridor. Right: Average number of timesteps
until success as a function of N.
task virtually rules out history window approaches. Instead, two alternative systems
were used that, like LSTM, are capable in principle of representing information for
arbitrarily long time lags. In the first alternative, the LSTM network was replaced by
an Elman-style Simple Recurrent Network, trained using BPTT [6]. Note that the
unfolding of the RNN necessary for BPTT means that this is no longer truly online
RL. The Elman network had 16 hidden units and 16 context units, and α = 0.001.
The second alternative is a table-based system extended with memory bits that are
part of the observation, and that the controller can switch on and off [9]. Because
the task requires the agent to remember just one bit of information, this system
had one memory bit, and α = 0.01. In order to determine the specific contribution of
LSTM to performance, in both alternatives all elements of the overall system except
LSTM (i.e. Advantage(A) learning, directed exploration) were left unchanged.
A run was considered a success if the agent learned to take the correct action at the
T-junction in over 80% of cases, using its stochastic action selection mechanism. In
practice, this corresponds to 100% correct action choices at the T-junction using
greedy action selection, as well as optimal or near-optimal action choices leading
to the T-junction. Figure 3 shows the number of successful runs (out of 10) as a
function of the length of the corridor N, for each of the three methods. It also
shows the average number of timesteps needed to reach success. It is apparent that
RL-LSTM is able to deal with much longer time lags than the two alternatives. RL-LSTM has perfect performance up to N = 50, after which performance gradually
decreases. In those cases where the alternatives also reach success, RL-LSTM also
learns faster. The reason why the memory bits system performs worst is probably
that, in contrast with the other two, it does not explicitly compute the gradient
of performance with respect to past events. This should make credit assignment
less directed and therefore less effective. The Elman-BPTT system does compute
such a gradient, but in contrast to LSTM, the gradient information tends to vanish
quickly with longer time lags (as explained in section 2).
T-maze with noise. It is one thing to learn long-term dependencies in a noise-free
task, it is quite another thing to do so in the presence of severe noise. To investigate
this, a very noisy variation of the T-maze task described above was designed. Now
the observation in the corridor is a0b, where a and b are independent, uniformly
distributed random values in the range [0, 1], generated online. All other aspects of
the task remain the same as above. Both the LSTM and the Elman-BPTT system
were also left unchanged. To allow for a fair comparison, the table-based memory
bit system's observation was computed using Michie and Chambers's BOXES state
aggregation mechanism (see [12]), partitioning each input dimension into three equal
regions.
Figure 4 shows the results. The memory bit system suffers most from the noise.
This is not very surprising because a table-based system, even if augmented with
BOXES state aggregation, does not give very sophisticated generalization. The two
RNN approaches are hardly affected by the severe noise in the observations. Most
importantly, RL-LSTM again significantly outperforms the others, both in terms of
the maximum time lag it can deal with, and in terms of the number of timesteps
needed to learn the task.
Multi-mode pole balancing. The third test problem is less artificial than the
T-mazes and has more complicated dynamics. It consists of a difficult variation of
the classical pole balancing task. In the pole balancing task, an agent must balance
an inherently unstable pole, hinged to the top of a wheeled cart that travels along a
track, by applying left and right forces to the cart. Even in the Markovian version,
the task requires fairly precise control to solve it.
The version used in this experiment is made more difficult by two sources of hidden
state. First, as in [6], the agent cannot observe the state information corresponding
to the cart velocity and pole angular velocity. It has to learn to approximate this
(continuous) information using its recurrent connections in order to solve the task.
Second, the agent must learn to operate in two different modes. In mode A, action
1 is left push and action 2 is right push. In mode B, this is reversed: action 1 is right
push and action 2 is left push. Modes are randomly set at the beginning of each
episode. The information which mode the agent is operating in is provided to the
agent only for the first second of the episode. After that, the corresponding input
unit is set to zero and the agent must remember which mode it is in. Obviously,
failing to remember the mode leads to very poor performance. The only reward
signal is -1 if the pole falls past ±12° or if the cart hits either end of the track.
Note that the agent must learn to remember the (discrete) mode information for an
infinite amount of time if it is to learn to balance the pole indefinitely. This rules
out history window approaches altogether. However, in contrast with the T-mazes,
the system now has the benefit of starting with relatively short time lags.
The LSTM network had 2 output units, 14 standard hidden units, and 6 memory
cells. It has 3 input units: one each for cart position and pole angle; and one for the
mode of operation, set to zero after one second of simulated time (50 timesteps).
γ = 0.95, λ = 0.6, κ = 0.2, α = 0.002. In this problem, directed exploration was
not necessary, because in contrast to the T-mazes, imperfect policies lead to many
different experiences with reward signals, and there is hidden state everywhere in
the environment. For a continuous problem like this, a table-based memory bit
system is not suited very well, so a comparison was only made with the Elman-BPTT system, which had 16 hidden and context units and α = 0.002.
The Elman-BPTT system never reached satisfactory solutions in 10 runs. It only
learned to balance the pole for the first 50 timesteps, when the mode information
is available, thus failing to learn the long-term dependency. However, RL-LSTM
learned optimal performance in 2 out of 10 runs (after an average of 6,250,000
timesteps of learning). After learning, these two agents were able to balance the pole
indefinitely in both modes of operation. In the other 8 runs, the agents still learned
to balance the pole in both modes for hundreds or even thousands of timesteps
(after an average of 8,095,000 timesteps of learning), thus showing that the mode
information was remembered for long time lags. In most cases, such an agent
learns optimal performance for one mode, while achieving good but suboptimal
performance in the other.
5
Conclusions
The results presented in this paper suggest that reinforcement learning with Long
Short-Term Memory (RL-LSTM) is a promising approach to solving non-Markovian
RL tasks with long-term dependencies. This was demonstrated in a T-maze task
with minimal time lag dependencies of up to 70 timesteps, as well as in a non-Markovian version of pole balancing where optimal performance requires remembering information indefinitely. RL-LSTM's main power is derived from LSTM's
property of constant error flow, but for good performance in RL tasks, the combination with Advantage(λ) learning and directed exploration was crucial.
Acknowledgments
The author wishes to thank Edwin de Jong, Michiel de Jong, Gwendid van der Voort
van der Kleij, Patrick Hudson, Felix Gers, and Jiirgen Schmidhuber for valuable
comments.
References
[1] B. Bakker. Reinforcement learning with LSTM in non-Markovian tasks with long-term dependencies. Technical report, Dept. of Psychology, Leiden University, 2001.
[2] L. Chrisman. Reinforcement learning with perceptual aliasing: The perceptual distinctions approach. In Proc. of the 10th National Conf. on AI AAAI Press, 1992.
[3] F. Gers, J. Schmidhuber, and F. Cummins. Learning to forget: Continual prediction
with LSTM. Neural Computation, 12 (10):2451-2471, 2000.
[4] M. E. Harmon and L. C. Baird. Multi-player residual advantage learning with general
function approximation. Technical report, Wright-Patterson Air Force Base, 1996.
[5] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9
(8):1735-1780, 1997.
[6] L.-J. Lin and T. Mitchell. Reinforcement learning with hidden states. In Proc. of the
2nd Int. Conf. on Simulation of Adaptive Behavior. MIT Press, 1993.
[7] J. Loch and S. Singh. Using eligibility traces to find the best memoryless policy in
Partially Observable Markov Decision Processes. In Proc. of ICML'98, 1998.
[8] R. A. McCallum. Learning to use selective attention and short-term memory in
sequential tasks. In Proc. 4th Int. Conf. on Simulation of Adaptive Behavior, 1996.
[9] L. Peshkin, N. Meuleau, and L. P. Kaelbling. Learning policies with external memory.
In Proc. of the 16th Int. Conf. on Machine Learning, 1999.
[10] J. Schmidhuber. Networks adjusting networks. In Proc. of Distributed Adaptive Neural
Information Processing, St. Augustin, 1990.
[11] J. Schmidhuber. Curious model-building control systems. In Proc. of IJCNN'91,
volume 2, pages 1458-1463, Singapore, 1991.
[12] R. S. Sutton and A. G. Barto. Reinforcement learning: An introduction. MIT Press,
Cambridge, MA, 1998.
Active Information Retrieval
Tommi Jaakkola
MIT AI Lab
Cambridge, MA
tommi@ai.mit.edu
Hava Siegelmann
MIT LIDS
Cambridge, MA
hava@mit.edu
Abstract
In classical large information retrieval systems, the system responds
to a user initiated query with a list of results ranked by relevance.
The users may further refine their query as needed. This process
may result in a lengthy correspondence without conclusion. We
propose an alternative active learning approach, where the system responds to the initial user's query by successively probing the
user for distinctions at multiple levels of abstraction. The system's
initiated queries are optimized for speedy recovery and the user
is permitted to respond with multiple selections or may reject the
query. The information is in each case unambiguously incorporated
by the system and the subsequent queries are adjusted to minimize
the need for further exchange. The system's initiated queries are
subject to resource constraints pertaining to the amount of information that can be presented to the user per iteration.
1
Introduction
An IR system consists of a collection of documents and an engine that retrieves
documents described by users' queries. In large systems, such as the Web, queries
are typically too vague, and hence, an iterative process in which the users refine their
queries gradually has to take place. Since much dissatisfaction of IR users stems
from long, tedious repetitive search sessions, our research is targeted at shortening
the search session. We propose a new search paradigm of active information retrieval
in which the user initiates only one query, and the subsequent iterative process is
led by the engine/system. The active process exploits optimum experiment design
to permit minimal effort on the part of the user.
Our approach is related but not identical to the interactive search processes called
relevance feedback. The primary differences pertain to the way in which the feedback
is incorporated and queried from the user. In relevance feedback, the system has to
deduce a set of "features" (words, phrases, etc. ) that characterize the set of selected
relevant documents, and use these features in formulating a new query (e.g., [5,6]) .
In contrast, we cast the problem as a problem of estimation and the goal is to
recover the unknown docum ent weights or relevance assessments.
Our system also relates to the Scatter/Gather algorithm of browsing information
systems [2], where the system initially scatters the document collection into a fixed
number k of clusters whose summaries are presented to the user. The user selects
clusters from a new sub-collection, to be scattered again into k clusters, and so
forth, until enumerating single documents. In our approach, documents are not
discarded but rather their weighting is updated appropriately. Like many other
clustering methods, the scatter/gather is based on hierarchical orderings. Overlapping clusters were recently proposed to better match real-life grouping and allow
natural summarizing and viewing [4].
This short paper focuses on the underlying methodology of the active learning
approach.
2
Active search
Let X be the set of documents (elements) in the database and C = {C_1, ..., C_m}
a set of available clusters of documents for which appropriate summaries can be
generated. The set of clusters typically includes individual documents and may
come from a flat, hierarchical, or overlapping clustering method. The clustering
need not be static, however, and could be easily defined dynamically in the search
process.
Given the set of available clusters, we may choose a query set, a limited set of clusters
that are presented to the user for selection at each iteration of the search process.
The user is expected to choose the best matching cluster in this set or, alternatively,
annotate the clusters with relevant/irrelevant labels (select the relevant ones). We
will address both modes of operation.
The active retrieval algorithm proceeds as follows: (1) it finds a small subset S
of clusters to present, along with their summaries, to the user; (2) waits until
the user selects none, one or more of the presented clusters; (3) uses the evidence
from the user's selections to update the distribution over documents or relevance
assessments; (4) outputs the top documents so far , ranked by their weights, and
the iteration continues until terminated by the user or the system (based on any
remaining uncertainty about the relevant documents or the implied ranking).
The following sections address three primary issues: the user model, how to incorporate the information from user selections, and how to optimize the query set
presented to the user. All the algorithms should scale linearly with the database
size (and the size of the query set).
3
Contrastive selection model
We start with a contrastive selection model where the user is expected to choose
only the best matching cluster in the query set. In case of multiple selections, we
will interpret the marked clusters as a redefined cluster of the query set. While
this interpretation will result in sub-optimal choices for the query set assuming the
user consistently selects multiple clusters, the interpretation nevertheless obviates
the need for modeling user's selection biases in this regard. An empty selection, on
the other hand, suggests that the clusters outside the query set are deemed more
likely.
[Figure 1 appears here: panel a) diagrams the three-level hierarchical transform; panels b) and c) plot retrieval time statistics against database size on a log scale.]
Figure 1: a) A three level hierarchical transform of a flat Dirichlet; b) dependence
of mean retrieval time on the database size (log-scale); c) median ratio of retrieval
times corresponding to doubling the query set size.
To capture the ranking implied by the user selections, we define weights (distribution) {θ_x}, Σ_{x∈X} θ_x = 1, over the underlying documents. We assume that the user
behavior is (probabilistically) consistent with one such weighting θ*. The goal of a
retrieval algorithm is therefore to recover this underlying weighting through interactions with the user. The resulting (approximation to) θ* can be used to correctly
rank the documents or, for example, to display all the documents with sufficiently
large weight (cf. coverage). Naturally, θ* changes from one retrieval task to another
and has to be inferred separately in each task. We might estimate a user specific
prior (model) over the document weights to reflect consistent biases that different
users have across retrieval tasks.

We express our prior belief about the document weights in terms of a Dirichlet
distribution: P(θ) = (1/Z) Π_{x∈X} θ_x^{α_x − 1}, where Z = [Π_{x∈X} Γ(α_x)] / Γ(Σ_{x∈X} α_x).
3.1
Inference
Suppose a flat Dirichlet distribution P(θ) over the document weights and a fixed
query set S = {C_{s1}, ..., C_{sk}}. We evaluate here the posterior distribution P(θ|y)
given the user response y. The key is to transform P(θ) into a hierarchical form
so as to explicate the portion of the distribution potentially affected by the user
response. The hierarchy, illustrated in Figure 1a), contains three levels: selection
of S or X \ S; choices within the query set S (of most interest to us) and those
under X \ S; selections within the clusters C_{sj} in S. For simplicity, the clusters are
assumed to be either nested or disjoint, i.e., can be organized hierarchically.

We use θ^(1)_i, i = 1, 2 to denote the top level parameters, θ^(2)_{sj}, j = 1, ..., k for
the cluster choices within the query set, whereas θ^(2)_x, x ∉ S gives the document
choices outside S. Finally, θ^(3)_{x|j} for x ∈ C_{sj} indicate the parameters associated
with the cluster C_{sj} ∈ S. The original flat Dirichlet P(θ) can be written as a
product P(θ^(1)) P(θ^(2,S)) P(θ^(2,X\S)) [Π_{j=1}^k P(θ^(3,j))] with the appropriate normalization
constraints. If clusters in S overlap, the expansion is carried out in terms of the
disjoint subsets. The parameters governing the Dirichlet component distributions
are readily obtained by gathering the appropriate parameters α_x of the original
Dirichlet (cf. [3]). For example, α^(1)_1 = Σ_{x∈S} α_x; α^(2)_{sj} = Σ_{x∈C_{sj}} α_x, for j =
1, ..., k; α^(2)_x = α_x, for x ∉ S; α^(3)_x = α_x, whenever x ∈ C_{sj}, j = 1, ..., k.
If the user selects cluster C_{sy}, we will update P(θ^(2,S)), which reduces to adjusting the
counts α^(2)_{sy} ← α^(2)_{sy} + 1. The resulting new parameters give rise to the posterior distribution P(θ^(2,S)|y) and, by including the other components, to the overall posterior
P(θ|y). If the user selects "none of these items," only the first level parameters θ^(1)
will be updated.
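A small sketch (ours) of the count bookkeeping for disjoint clusters follows: the flat counts are gathered into the three levels, and a selection increments only the level-two count of the chosen cluster, as described above.

```python
import numpy as np

def hierarchical_counts(alpha, clusters, query_set):
    """Gather flat Dirichlet counts alpha[x] into the three-level
    hierarchy induced by a query set of disjoint clusters
    (cf. the alpha^(1), alpha^(2), alpha^(3) definitions above)."""
    in_s = set().union(*(set(clusters[j]) for j in query_set))
    mass_s = sum(alpha[x] for x in in_s)
    a1 = np.array([mass_s, sum(alpha) - mass_s])
    a2_s = np.array([sum(alpha[x] for x in clusters[j]) for j in query_set],
                    dtype=float)
    a2_rest = {x: alpha[x] for x in range(len(alpha)) if x not in in_s}
    a3 = {j: {x: alpha[x] for x in clusters[j]} for j in query_set}
    return a1, a2_s, a2_rest, a3

def observe_selection(a2_s, y):
    """Posterior update when the user picks cluster index y of the
    query set: only the level-two count of the chosen cluster moves."""
    a2_s = a2_s.copy()
    a2_s[y] += 1.0
    return a2_s
```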
3.2
Query set optimization
Our optimization criterion for choosing the query set S is the information that we
stand to gain from querying the user with it. Let y indicate the user choice; the
mutual information between y and the parameters θ is given by (derivation omitted)

I(y; θ) = H(y) − E_θ{ H(y | θ) }    (1)
        = H(y) + Σ_{y=1}^k P(y) Ψ(α^(2)_{sy} + 1) − Ψ( Σ_{j=1}^k α^(2)_{sj} + 1 )    (2)

where P(y) = α^(2)_{sy} / (Σ_{j=1}^k α^(2)_{sj}) defines our current expectation about the user selection
from S; H(y) = −Σ_{y=1}^k P(y) log P(y) is the entropy of the selections y, and Ψ(·)
is the di-gamma function, defined as Ψ(z) = d/dz log Γ(z). Extending the criterion
to "no selection" is trivial.
To simplify, we expand the counts α^(2)_{sj} in terms of the original (flat) counts α_x,
and define for all clusters (whether or not they appear in the query set) the weights
a_i = Σ_{x∈C_i} α_x, b_i = a_i Ψ(a_i + 1) − a_i log a_i. The mutual information criterion now
depends only on a_S = Σ_{j=1}^k a_{sj} = Σ_{x∈S} α_x, the overall weight of the query set, and
b_S = Σ_{j=1}^k b_{sj}, which provides an overall measure of how informative the individual
clusters in S are. With these changes, we obtain

I(y; θ) = b_S / a_S + log(a_S) − Ψ(a_S + 1)    (3)
We can optimize the choice of S with a simple greedy method that successively
finds the next best cluster index i to include in the information set. This algorithm
scales as O(km), where m is the number of clusters in our database and k is the
maximal query set size in terms of the number of clusters.
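The greedy construction can be sketched directly from eq. (3); the code below (ours, using SciPy's digamma) grows the query set one cluster at a time and needs O(km) criterion evaluations in total.

```python
import numpy as np
from scipy.special import digamma

def criterion(a_s, b_s):
    """I(y; theta) of eq. (3) for a candidate query set with total
    weight a_s and informativeness b_s."""
    return b_s / a_s + np.log(a_s) - digamma(a_s + 1.0)

def greedy_query_set(a, k):
    """Greedily grow a query set of at most k clusters, maximizing
    eq. (3). `a` holds the (positive) cluster weights a_i."""
    a = np.asarray(a, dtype=float)
    b = a * digamma(a + 1.0) - a * np.log(a)
    chosen, a_s, b_s = [], 0.0, 0.0
    for _ in range(k):
        best, best_val = None, -np.inf
        for i in range(len(a)):
            if i in chosen:
                continue
            val = criterion(a_s + a[i], b_s + b[i])
            if val > best_val:
                best, best_val = i, val
        chosen.append(best)
        a_s += a[best]
        b_s += b[best]
    return chosen
```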
Note that this simple criterion excludes nested or overlapping clusters from S. In
a more general context, the bookkeeping problem associated with the overlapping
clusters is analogous to that of the Kikuchi expansion in statistical physics (cf. [7]) .
3.3
Projection back to a flat Dirichlet
The hierarchical posterior is not a flat Dirichlet anymore. To maintain simplicity, we project it back into a flat Dirichlet in the KL-divergence sense: P*_{θ|y} =
argmin_{Q_θ} KL(P_{θ|y} || Q_θ), where P(θ|y) is the hierarchical posterior expressed in
terms of the original flat variables θ_x, x ∈ X (but no longer a flat Dirichlet). The
transformation from hierarchical to flat variables is given by: θ_x = θ^(1)_1 θ^(2)_{sj} θ^(3)_{x|j} for
x ∈ C_{sj}, j = 1, ..., k, and θ_x = θ^(1)_2 θ^(2)_x for x ∈ X \ S. As a result, when x ∈ C_{sj}
for some j = 1, ..., k we get (derivation omitted)

E_{θ|y} log θ_x = [Ψ(α^(1)_1) − Ψ(Σ_{z∈X} α_z)] + [Ψ(α^(2)_{sj} + δ_{jy}) − Ψ(α^(1)_1 + 1)] + [Ψ(α_x) − Ψ(α^(2)_{sj})]    (4)

where y denotes the user selection and δ_{jy} equals 1 if j = y and 0 otherwise. For x ∈ X \ S, E_{θ|y} log θ_x = Ψ(α_x) − Ψ(Σ_{z∈X} α_z).
If we define rx = Ee ly log Ox for all x E X, then the counts f3x corresponding to the
flat approximation Qo can be found by minimizing
(5)
xEX
xEX
where we have omitted any terms not depending on f3x . This is a strictly convex
optimization problem over the convex set f3x ~ 0, x E X and therefore admits a
unique solution. Furthermore, we can efficiently apply second order methods such
as Newton-Raphson in this context due to the specific structure of the Hessian:
1i = D - el1 T, where D is a diagonal matrix containing the derivatives of the
di-gamma function l w'( f3x) = d/df3x w( f3x) and e = W'(LXEX f3x ). Each NewtonRaphson iteration requires only O(m) space/time.
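A minimal sketch of this Newton-Raphson projection follows, assuming SciPy's digamma and trigamma functions; the Sherman-Morrison identity exploits the diagonal-plus-rank-one Hessian so each step costs O(m). The function name, starting point, and positivity clamp are our assumptions, not the paper's implementation.

```python
# A sketch of the m-projection of Eq. 5 via Newton-Raphson with a
# Sherman-Morrison solve of (D - e*11^T) step = grad.
import numpy as np
from scipy.special import digamma, polygamma

def project_to_flat_dirichlet(r, beta0, iters=50):
    """Minimize Eq. 5 given targets r_x = E[log theta_x]; beta0 > 0."""
    beta = beta0.copy()
    for _ in range(iters):
        total = beta.sum()
        grad = digamma(beta) - digamma(total) - r      # gradient of Eq. 5
        d = polygamma(1, beta)                         # trigamma: diagonal of D
        e = polygamma(1, total)
        u = grad / d                                   # D^{-1} grad
        step = u + (e * u.sum() / (1.0 - e * (1.0 / d).sum())) / d
        beta = np.maximum(beta - step, 1e-10)          # stay in beta > 0
    return beta
```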
3.4
Decreasing entropy
Since the query set was chosen to maximize the mutual information between the
user selection and the parameters θ, we get the maximal reduction in the expected
entropy of the parameters: I(y; θ) = H(P_θ) − E_y H(P_{θ|y}).
As discussed in the previous section, we cannot maintain the true posterior but have
to settle for a projection. It is therefore no longer obvious that the expected entropy
of the projected posterior possesses any analogous guarantees; indeed, projections
of this type typically increase the entropy. We can easily show, however, that the
expected entropy is non-increasing:

E_y{H(P̃_{θ|y})} = E_y{H(P_{θ|y}) + KL(P_{θ|y} ‖ P̃_{θ|y})} ≤ E_y{H(P_{θ|y}) + KL(P_{θ|y} ‖ P_θ)} = H(P_θ)

since P̃_{θ|y} is the minimizing argument (the first equality uses the moment-matching
property of the projection). It is possible to make a stronger statement indicating that the expected entropy of the projected distribution decreases
monotonically after each iteration.

Theorem 1 For any ε > 0, E_y{H(P̃_{θ|y})} ≤ H(P_θ) − ε(k − 1)/a_S + O(ε²), where
k is the size of the query set and a_S = Σ_{z∈S} α_z.

While this result is not tight, it does demonstrate that the projection back into a
flat Dirichlet still permits a semi-linear decrease in the entropy². The denominator
of the first order term, i.e., a_S, can increase only by 1 at each iteration.
¹These derivatives can be evaluated efficiently on the basis of the highly accurate approximation to the di-gamma function.
²Note that the entropy of a Dirichlet distribution is not bounded from below (it is
bounded from above). The manner in which the Dirichlet updates are carried out (how
the α_x change) still keeps the entropy a meaningful quantity.
4
Annotation model
The contrastive selection approach discussed above operates a priori in a single
topic mode³. The expectation that the user should select the best matching cluster
in the query set also makes an inefficient use of the query set. We provide here an
analogous development of the active learning approach under the assumption that
the user classifies rather than contrasts the clusters.
The user responses are now assumed to be consistent with a noisy-OR model

P(y_c = 1 | r*) = 1 − (1 − q₀) ∏_{x∈c} (1 − q r*_x)   (7)

where y_c is the binary relevance annotation (outcome) for a cluster c in the query,
r*_x ∈ {0, 1}, x ∈ X, are the underlying task-specific relevance assignments to the
elements in the database, q is the probability that a relevant element in the cluster
is caught by the user, and q₀ is the probability that a cluster is deemed "relevant"
in the absence of any relevant elements. While the parameters q₀ and q could easily
be inferred from past searches, we assume here for simplicity that they are known
to the search algorithm. The user annotations of different clusters in the query set
are independent of each other, even for overlapping clusters.
To ensure that we can infer the unknown relevance assignments from the observables (cluster annotations), we require identifiability: the annotation probabilities
P(y_c = 1 | r*), for all c ∈ C, should uniquely determine {r*_x}. Equivalently, knowing
the number of relevant documents in each cluster should enable us to recover the
underlying relevance assignments. This is a property of the cluster structure and
holds trivially for any clustering with access to individual elements.
The search algorithm maintains a simple independent Bernoulli model over the
unknown relevance assignments: P(r|θ) = ∏_{x∈X} θ_x^{r_x} (1 − θ_x)^{1−r_x}. This gives rise
to a marginal noisy-OR model over cluster annotations:

P(y_c = 1 | θ) = Σ_r P(y_c = 1 | r) P(r|θ) = 1 − (1 − q₀) ∏_{x∈c} (1 − θ_x q)   (8)
The uncertainty about the relevance assignments {r_x} makes the system beliefs
about the cluster annotations dependent on each other. The parameters (relevance
probabilities) {θ_x} are, of course, specific to each search task.
4.1
Inference and projection
Given ỹ_c ∈ {0, 1} for a single cluster c, we can evaluate the posterior P(r | ỹ_c, θ) over
the relevance assignments. Similarly to noisy-OR graphical models, this posterior
can be (exponentially) costly to maintain and we instead sequentially project the
posterior back into the set of independent Bernoulli distributions. The projection
here is in the moments sense (m-projection): P̃_{r|y_c} = argmin_{Q_r} KL(P_{r|y_c} ‖ Q_r),
where Q_r is an independent Bernoulli model. The m-projection preserves the posterior expectations θ_{x;y_c} = E_{r|y_c}{r_x} used for ranking the documents.
3Dynamic redefinition of clusters partially avoids this problem.
The projection yields simple element-wise updates for the parameters⁴: for x ∈ c,

θ_{x;0} = θ_x (1 − q) / (1 − θ_x q),   θ_{x;1} = θ_x [1 − (1 − q) p₀ / (1 − θ_x q)] / (1 − p₀)   (9)

where p₀ = P(y_c = 0 | θ) = (1 − q₀) ∏_{x∈c} (1 − θ_x q) is the only parameter that depends
on the cluster as a whole.
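The update of Eq. 9 is cheap enough to apply per annotation. The sketch below is our own illustration of it, not the paper's code: the array layout, argument names, and in-place update are assumptions.

```python
# A sketch of the element-wise m-projection update of Eq. 9 after observing
# one binary cluster annotation y for the elements indexed by `cluster`.
import numpy as np

def update_cluster(theta, cluster, y, q, q0):
    th = theta[cluster]                        # Bernoulli relevance probs
    p0 = (1.0 - q0) * np.prod(1.0 - th * q)    # P(y_c = 0 | theta)
    if y == 0:
        theta[cluster] = th * (1.0 - q) / (1.0 - th * q)
    else:
        theta[cluster] = th * (1.0 - (1.0 - q) * p0 / (1.0 - th * q)) / (1.0 - p0)
    return theta
```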
4.2
Query set optimization
The best single cluster c ∈ C to query has the highest mutual information between
the expected user response y_c ∈ {0, 1} and the underlying relevance assignments
r = {r_x}_{x∈X}, maximizing I(y_c; r|θ) = E_{y_c}{KL(P_{r|θ,y_c} ‖ P_{r|θ})}. This mutual information cannot be evaluated in closed form, however. We use a lower bound:

I(y_c; r|θ) ≥ E_{y_c}{ Σ_{x∈c} KL(θ_{x;y_c} ‖ θ_x) } ≝ I_p(y_c; r|θ)   (10)

where θ_{x;y_c}, x ∈ X, are the parameters of the m-projected posterior and
KL(θ_{x;y_c} ‖ θ_x) is the KL-divergence between two Bernoulli distributions with mean
parameters θ_{x;y_c} and θ_x, respectively.
To alleviate the concern that the lower bound would prematurely terminate the
search, we note that if I_p(y_c; r|θ) = 0 for all c ∈ C, then θ_x ∈ {0, 1} for all x ∈ X.
In other words, the search terminates only if we are already fully certain about the
underlying relevance assignments.
The best k clusters to query are those maximizing the analogous lower bound
I_p(y_{c₁}, …, y_{c_k}; r|θ) on the joint mutual information.
Finding the optimal query set under this criterion (even with the m-projections)
involves O(n k 2^k) operations. We select the clusters sequentially while maintaining an explicit dependence on the hypothetical outcome (classification) of only
the previous cluster choice. More precisely, we combine the cluster selection with
conditional projections: for k > 1, c_k = argmax_c I_p(y_c, y_{c_{k−1}}; r|θ^{k−1}), with
θ^k_{x;y_{c_k}} = E{θ^{k−1}_{x;y_{c_{k−1}},y_{c_k}} | y_{c_k}}. The mutual information terms do not, however, decompose
additively with the elements in the clusters. The desired O(kn) scaling of the selection algorithm requires a cached spline reconstruction⁵.
4.3
Sanity check results
Figure 1b) gives the mean number of iterations of the query process as function of
the database size. Each point represents an average over 20 runs with parameters
⁴The parameters θ_{x;ỹ_{c₁},ỹ_{c₂},…,ỹ_{c_k}} resulting from k successive projections define a martingale process E_{y_{c₁},y_{c₂},…,y_{c_k}}{θ_{x;y_{c₁},y_{c₂},…,y_{c_k}}} = θ_x, x ∈ X, where the expectation is taken
w.r.t. the posterior approximation.
⁵The mutual information terms for select fixed values of p₀ can be cached additively
relative to the cluster structure. The actual p₀ dependence is reconstructed (quadratically)
from the cached values (I_p is convex in p₀).
k = 5, q₀ = 0.05, and q = 0.95. The user responses were selected on the basis of the
same parameters and a randomly chosen (single) underlying element of interest.
The search is terminated when the sought-after element in the database has the
highest rank according to {θ_x}, x ∈ X. The randomized cluster structures were
relatively balanced and hierarchical. Similarly to the theoretically optimal system,
the performance scales linearly with the log-database size. Results for random
choice of the clusters in the query are far outside the figure.

Figure 1c), on the other hand, demonstrates that increasing the query set size
appropriately reduces the interaction time. Note that since all the clusters in the
query set have to be chosen prior to getting feedback from any of the clusters,
doubling the query set size cannot theoretically reduce the retrieval time to a half.
5
Discussion
The active learning approach proposed here provides the basic methodology for
optimally querying the user at multiple levels of abstraction. There are a number
of extensions to the approach presented in this short paper. For example, we can
encourage the user to provide confidence rated selections/annotations among the
presented clusters. Both user models can be adapted to handle such selections.
Analyzing the fundamental trade-offs between the size of the query set (resource
constraints) and the expected completion time of the retrieval process will also be
addressed in later work.
References
[1] A. C. Atkinson and A. N. Donev, Optimum experimental designs, Clarendon
Press, 1992.
[2] D. R. Cutting, D. R. Karger, J. O. Pedersen, J. W. Tukey, Scatter/Gather:
A Cluster-Based Approach to Browsing Large Document Collections, In Proceedings of
the Fifteenth Annual International ACM SIGIR Conference, Denmark, June
1992.
[3] D. Heckerman, D. Geiger, and D. M. Chickering, Learning Bayesian Networks:
The Combination of Knowledge and Statistical Data, Machine Learning, Vol
20, 1995.
[4] H. Lipson and H.T. Siegelmann, Geometric Neurons for Clustering, Neural
Computation 12(10), August 2000
[5] J. J. Rocchio Jr., Relevance Feedback in Information Retrieval, In The SMART
System - Experiments in Automatic Document Processing, 313-323, Englewood
Cliffs, NJ: Prentice Hall Inc.
[6] G. Salton and C. Buckley, Improving Retrieval Performance by Relevance Feedback, Journal of the American Society for Information Science, 41(4): 288-297,
1990.
[7] J.S. Yedidia, W.T. Freeman, Y. Weiss, Generalized Belief Propagation, Neural
Information Processing Systems 13, 2001.
1,045 | 1,955 | Switch Packet Arbitration via Queue-Learning
Timothy X Brown
Electrical and Computer Engineering
Interdisciplinary Telecommunications
University of Colorado
Boulder, CO 80309-0530
timxb@colorado.edu
Abstract
In packet switches, packets queue at switch inputs and contend for outputs. The contention arbitration policy directly affects switch performance. The best policy depends on the current state of the switch and
current traffic patterns. This problem is hard because the state space,
possible transitions, and set of actions all grow exponentially with the
size of the switch. We present a reinforcement learning formulation of
the problem that decomposes the value function into many small independent value functions and enables an efficient action selection.
1 Introduction
Reinforcement learning (RL) has been applied to resource allocation problems in telecommunications. e.g., channel allocation in wireless systems, network routing, and admission control in telecommunication networks [1, 3, 7, 11]. These have demonstrated reinforcement learning can find good policies that significantly increase the application reward
within the dynamics of the telecommunications problems. However, a key issue is how to
scale these problems when the state space grows quickly with problem size.
This paper focuses on packet arbitration for data packet switches. Packet switches are unlike telephone circuit switches in that packet transmissions are uncoordinated and clusters
of traffic can simultaneously contend for switch resources. A packet arbitrator decides the
order packets are sent through the switch in order to minimize packet queueing delays and
the switch resources needed. Switch performance depends on the arbitration policy and the
pattern of traffic entering the switch.
A number of packet arbitration strategies have been developed for switches. Many have
fixed policies for sending packets that do not depend on the actual patterns of traffic in the
network [10]. Under the worse case traffic, these arbitrators can perform quite poorly [8].
Theoretical work has shown consideration of future packet arrivals can have significant
impact on the switch performance but is computationally intractable (NP-Hard) to use [4].
As we will show, a dynamic arbitration policy is difficult since the state space, possible
transitions, and set of actions all grow exponentially with the size of the switch.
In this paper, we consider the problem of finding an arbitration policy that dynamically
and efficiently adapts to traffic conditions. We present queue-learning, a formulation that
effectively decomposes the problem into many small RL sub-problems. The independent
Figure 1: The packet arbitration model. (a) In each time slot, packet sources generate
packets at average rate λ_ij at input i for output j. (b) Packets arrive at an input-queued
switch and are stored in queues. The number label on each packet indicates to which
output the packet is destined. (c) The corresponding queue states, where q_ij indicates
the number of packets waiting at input i destined for output j.
RL problems are coupled via an efficient algorithm that trades off actions in the different
sub-problems. Results show significant performance improvements.
2 Problem Description
The problem is comprised of N traffic sources generating traffic at each of the N inputs to
a packet data switch, as shown in Figure 1. Time is divided into discrete time slots and
in each time slot each source generates 0 or 1 packets. Each packet that arrives at the
input is labeled with which of the N outputs the packet is headed. In every time slot, the
switch takes packets from inputs and delivers them at their intended output. We describe
the specific models for each used in this paper and then state the packet arbitration problem.
2.1 The Traffic Sources
At input i, a traffic source generates a packet destined for output j with probability λ_ij at
the beginning of each time slot. If λ_i = Σ_j λ_ij is the load on input i and λ^j = Σ_i λ_ij is the
load on output j, then for stability we require λ_i < 1 and λ^j < 1.

The matrix Λ = [λ_ij] only represents long-term average loads between input i and output j.
We treat the case where packet arrivals are uncorrelated over time and between sources,
so that in each time slot a packet arrives at input i with probability λ_i and, given that we
have an arrival, it is destined for output j with probability λ_ij/λ_i. Let the set of packet
arrivals be A.
2.2 The Switch
The switch alternates between accepting newly arriving packets and sending packets in
every time slot. At the start of the time slot the switch sends packets waiting in the input
queues and delivers them to the correct output where they are sent on. Let S = [s_ij]
represent the set of packets sent, where s_ij = 1 if a packet is sent from input i to output j
and s_ij = 0 otherwise. The packets it can send are limited by the input and output
constraints: the switch can send at most one packet per input and can deliver at most one
packet to each output. After sending packets, the new arrivals are added at the input and
the switch moves to the next time slot. Other switches are possible, but this is the simplest
and a common architecture in high-speed switches.
2.3 The Input Queues
Because the traffic sources are un-coordinated, it is possible for multiple packets to arrive
in one time slot at different inputs, but destined for the same output. Because of the output
constraint, only one such packet may be sent and the others buffered in queues, one queue
per input. Thus packet queueing is unavoidable and the goal is to limit the delays due to
queueing.

The queues are random access, which means packets can be sent in any order from a queue.
For the purposes of this paper, all packets waiting at an input and destined for the same
output are considered equivalent. Let Q = [q_ij] be a matrix where q_ij is the number of
packets waiting at input i for output j, as shown in Figure 1c.
2.4 Packet Arbitration
The packet arbitration problem is: given the state of the input queues, Q, choose a set of
packets to send, S, so that at most one packet is sent from each input and at most one packet is
delivered to each output. We want a packet arbitration policy that minimizes the expected
packet wait time.

When S is sent, the remaining packets must wait at least one more time slot before they can
be sent. Let |Q| be the total number of packets in all the input queues, let |A| be the number
of new arrivals, and let |S| be the number of packets sent. Thus, the total wait of all packets
is increased by the number of packets that remain: |Q| + |A| − |S|. By Little's theorem, the
expected wait time is proportional to the expected number of packets waiting in each time
slot [10]. Thus, we want a policy that minimizes the expected value of |Q| + |A| − |S|.
The complexity of this problem is high. Consider an N-input and N-output switch. The input
and output constraints are met with equality if S is a subset of a permutation matrix (zeros
everywhere except that every row has at most one one and every column has at most one one). This
implies there are as many as N! possible S to choose from. In each time slot at each input,
a packet can arrive for one of N outputs or not at all. This implies as many as (N + 1)^N
possible transitions after each send. If each q_ij ranges from 0 to V packets, then the number
of states in the system is (V + 1)^(N²). A minimal representation would only indicate whether each
sub-queue is empty or not, resulting in 2^(N²) states. Thus, every aspect of the problem grows
exponentially in the size of the switch.

Traditionally switching solves these problems by not considering the possible next arrivals,
and using a search algorithm with time-complexity polynomial in N that considers only
the current state Q. For instance, the problem can be formulated as a so-called matching
problem and polynomial algorithms exist that will send the largest possible S [2, 6, 8].

While maximizing the packets sent in every time slot may seem like a solution, the problem
is more interesting than this. In general, many possible S will maximize the number of
packets that are sent. Which one can we send now so that we will be in the best possible
state for future time slots? Some heuristics can guide this choice, but these are insensitive
to the traffic pattern [9]. Further, it can be shown that to minimize the total wait it may be
necessary to send less than the maximum number of packets in the current time slot [4]. So,
we look to a solution that efficiently finds policies that minimize the total wait by adapting
to the current traffic pattern.

The problem is especially amenable to RL for two reasons. (1) Packet rates are fast, up to
millions of packets per second, so that many training examples are available. (2) Occasional
bad decisions are not catastrophic. They only increase packet delays somewhat, and so it
is possible to freely learn in an online system. The next section describes our solution.
3 Queue-Learning Solution
At any given time slot, t, the system is in a particular state, Q. New packets, A, arrive
and the arbitrator can choose to send any valid S. The cost, c = |Q| + |A| − |S|, is the
packets that remain. The task of the learner is to determine a packet arbitration
policy that minimizes the total average cost. We use the Tauberian approximation, that is,
we assume the discount factor is close enough to 1 so that the discounted reward policy is
equivalent to the average reward policy [5]. Since minimizing the expected value of this
cost is equivalent to minimizing the expected wait time, this formulation provides an exact
match between RL and the problem task.

As shown already, every aspect of this problem scales badly. The solution to this problem is
three fold. First we use online learning and afterstates [12] to eliminate the need to average
over the (N + 1)^N possible next states. Second, we show how the value function can yield
a set of inputs into a polynomial algorithm for choosing actions. Third, we decompose the
value function so the effective number of states is much smaller than 2^(N²). We describe
each in turn.
3.1 Afterstates
RL methods solve MDP problems by learning good approximations to the optimal value
function, V*. A single time slot consists of two stages: new arrivals are added to the
queues and then packets are sent (see Figure 2). The value function could be computed
after either of these stages. We compute it after packets are sent since we can use the notion
of afterstates to choose the action. Since the packet sending process is deterministic, we
know the state following the send action. In this case, the Bellman equation is:

V(Q) = E_A [ min_{S ∈ 𝒮(Q,A)} { c(Q, A, S) + γ V(Q + A − S) } ]

where 𝒮(Q, A) is the set of actions available in the current state after arrival event A,
c(Q, A, S) = |Q| + |A| − |S| is the effective immediate cost, γ is the discount factor, and E_A
is the expectation over possible events; the resulting next state is Q′ = Q + A − S.

We learn an approximation to V using TD(0) learning. At time-step t, on a transition from
state Q to Q′ on action S after event A, we update an estimate to V via

V(Q) ← V(Q) + α [ c + γ V(Q′) − V(Q) ]

where α is the learning step size.
With afterstates, the action (which set of packets to send) depends on both the current state
and the event. The best action is the one that results in the lowest value function in the
next state (which is known deterministically given Q, A, and S). In this way, afterstates
eliminate the need to average over a large number of non-zero transitions to find the best
action.
3.2 Choosing the Action
We compare every action with the action of not sending any packets. The best action is the
set of packets meeting the input and output constraints that will reduce the value function
the most compared to not sending any packets.

Each input-output pair (i, j) has an associated queue at the input, q_ij. Packets in q_ij contend
with other packets at input i and other packets destined for output j. If we send a packet
from q_ij, then no packet at the same input or output will be sent. In other words, packets at
Figure 2: Timing of packet arrivals and sends relative to decisions and the value function.
interact primarily with packets in the same row and column. Packets in other rows and
columns only have an indirect effect on the value of sending a packet from q_ij.

This suggests the following approximation. Let Q⁺ = Q + A be the number of packets in every
subqueue after arrivals in state Q and before the decision. Let w_ij be the reduction
in the value function if one packet is sent from subqueue q_ij (w_ij = −∞ if the
subqueue is empty). We can reformulate the best action as:

S* = argmax_S Σ_ij w_ij s_ij

subject to the constraints: Σ_j s_ij ≤ 1 for every input i, Σ_i s_ij ≤ 1 for every output j,
and s_ij = 0 whenever q⁺_ij = 0.

This problem can be solved as a linear program and is also known as the weighted matching
or the assignment problem, which has a polynomial time solution [13]. In this way, we
reduce the search over the N! possible actions to a polynomial time solution.
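For concreteness, the assignment step can be handed to an off-the-shelf solver. The sketch below is ours, not part of the paper: the use of SciPy's linear_sum_assignment, the large negative padding for empty subqueues, and the post-hoc drop of non-positive pairings (which corresponds to comparing against the do-nothing action) are all assumptions.

```python
# A sketch of the action selection of Section 3.2: given weights w[i, j]
# (with -inf marking empty subqueues), find the constrained send set S.
import numpy as np
from scipy.optimize import linear_sum_assignment

def choose_sends(w):
    cost = np.where(np.isfinite(w), w, -1e12)     # solver needs finite costs
    rows, cols = linear_sum_assignment(cost, maximize=True)
    S = np.zeros_like(w, dtype=int)
    for i, j in zip(rows, cols):
        # Keep only pairings that beat "send nothing from this queue".
        if np.isfinite(w[i, j]) and w[i, j] > 0.0:
            S[i, j] = 1                           # send one packet from (i, j)
    return S
```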
3.3 Decomposing the Value Function
The interaction between queues in the same row or the same column is captured primarily
by the input and output constraints. This suggests a further simplifying approximation with
the following decomposition.

We compute a separate value function for each (i, j), denoted V_ij. In principle, this can
depend on the entire state Q, but it can be reduced to consider only elements of the state
relevant to q_ij. Every V_ij estimates its associated value function based on the packets
at input i and the packets destined for output j. Many forms of V_ij could be considered, but we consider
a linear approximation. Let r_i = Σ_j q_ij be the total number of packets waiting at input i. Let
c_j = Σ_i q_ij be the total number of packets waiting for output j.

With these variables we define a linear approximation V_ij with parameters (θ⁰_ij, θ¹_ij, θ²_ij, θ³_ij):

V_ij(Q) = θ⁰_ij + θ¹_ij r_i + θ²_ij c_j + θ³_ij q_ij   (1)

It follows that the value of sending a packet (compared to not sending a packet) from q_ij is

w_ij = θ¹_ij + θ²_ij + θ³_ij

since the send reduces r_i, c_j, and q_ij each by one. This is computed for each (i, j)
and used in the weighted matching of Section 3.2 to compute which packets to send. Learning for this problem is standard TD(0) for linear approximations [12]. The combination of decomposition and linear value function approximation
reduces the problem to estimating a handful of parameters per input-output pair, O(N²) in all.
No explicit exploration is used since, from the perspective of V_ij, enough stochasticity already exists in the packet arrival and send processes. To assist the switch early in the
learning, the switch sends the packets from a maximum matching in each time slot (instead
of the packets selected by queue-learning). This initial assist period during the training was
found to bring the switch into a good operating regime from which it could learn a better
policy.

In summary, we simplify the exponential computation for this problem by decomposing
the state into N² substates. Each substate computes the value of sending a packet versus
not sending a packet, and a polynomial algorithm computes the action that maximizes the
total value across substates subject to the input and output constraints.
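A schematic of the learning step is sketched below. How the global cost is credited to each substate is not spelled out in the excerpted text, so sharing the cost c across all V_ij is our assumption, as are the names and the feature construction.

```python
# A sketch of TD(0) with the linear per-pair value functions of Eq. 1.
# theta[i, j] holds (bias, coef_r, coef_c, coef_q); gamma and alpha are the
# discount and learn rate of Table 1.
import numpy as np

def features(Q, i, j):
    return np.array([1.0, Q[i, :].sum(), Q[:, j].sum(), Q[i, j]])

def td0_update(theta, Q, Q_next, cost, gamma, alpha):
    """One TD(0) step applied independently to every V_ij."""
    n = Q.shape[0]
    for i in range(n):
        for j in range(n):
            phi, phi_next = features(Q, i, j), features(Q_next, i, j)
            v, v_next = theta[i, j] @ phi, theta[i, j] @ phi_next
            theta[i, j] += alpha * (cost + gamma * v_next - v) * phi
    return theta
```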
4 Implementation Issues
A typical high speed link rate is the OC-3 rate (155 Mbps). In ATM at this rate, the packet
rate is 366k time slots/s, so even many millions of time slots take less than 30 sec. For
learning, the number of floating point operations per time slot is approximately proportional
to the number of parameters in the linear approximation. At the above packet rate, for the
switch sizes considered here, this translates into 650 MFLOPS, which is within existing
high-end microprocessor capacity. For computation of the packets to send, the cost is
approximately O(N²) to compute the weights. To compute the maximum weight matching
an O(N³) algorithm exists [13].

New optical transport technologies are pushing data rates one and two orders of magnitude
greater than OC-3 rates. In this case, if computing is limited then the queue-learning can
learn on a subsample of time slots. To compute the packets to send, the decomposition has
a natural parallel implementation that can divide it among processors. Massively parallel
neural networks can also be used to compute the maximum weighted matching [2, 9].
5 Simulation Results
We applied our procedure to switches under different loads. The parameters used in
the experiment are shown in Table 1. In each experiment, the queue-learning was trained
for an initial period, and then the mean wait time, W_QL, was measured over a test period. We
compared performance to two alternatives. One alternative sends the largest number of
packets in every time slot. If multiple sets are equally large it chooses randomly between
them. We simulate this arbitrator and measure the mean packet wait time, W_max. The
best possible switch is a so-called output-queued switch [10]. Such a switch is difficult to
build at high speeds, but we can compute its mean packet wait time, W_out, via simulation.
The results are specified in normalized form as G = (W_max − W_QL) / (W_max − W_out).
Thus if our queue-learning solution is no better than a max-send arbitrator, the gain will be
0, and if we achieve the performance of the output-queued switch, the gain will be 1.

We experimented on five different traffic loads. Λ₁ is a uniform load of 0.6 packets per
input per time slot with each packet uniformly destined for one of the outputs. Similarly,
Λ₂ is a uniform load of 0.9. The uniform load is a common baseline scenario for evaluating
switches.

Λ₃ and Λ₄ are random matrices where the sums of loads per row and column are 0.6 and
0.9 (as in Λ₁ and Λ₂) but the distribution is not uniform. This is generated by summing
permutation matrices and then scaling the entries to yield the desired row and column sums
Table 1: RL parameters.

Parameter        Value
Discount, γ      0.99
Learn rate, α    -
Assist period    - time slots
Train period     - time slots
Test period      - time slots
Table 2: Simulation Results.

Switch Loading             Normalized Wait Reduction (G)
Λ₁ (uniform 0.6 load)      10%
Λ₂ (uniform 0.9 load)      50%
Λ₃ (random 0.6 load)       14%
Λ₄ (random 0.9 load)       70%
Λ₅ (truncated 0.9 load)    84%
(e.g. Figure 1a). The random load is more realistic in that loads tend to vary among the
different input/output pairs. Λ₅ is Λ₄, except that all λ_ij for the last group of outputs are
set to zero. This simulates the more typical case of traffic being concentrated on a few
outputs.

We emphasize that a different policy is learned for each of these loads. The different loads
suggest the kinds of improvements that we might expect if queue-learning is implemented.
The results for the five loads are given in Table 2.
6 Conclusion
This paper showed that queue learning is able to learn a policy that significantly reduces
the wait times of packets in a high-speed switch. It uses a novel decomposition of the
value function combined with efficient computation of the action to overcome the problems a traditional RL approach would have with the large number of states, actions, and
transitions. This is able to gain 10% to 84% of the possible reductions in wait times. The
largest gains are when the network is more heavily loaded and delays are largest. The gains
are also largest when the switch load is least uniform which is what is most likely to be
encountered in practice.
Traditional thinking in switching is that input-queued switches are much worse than the optimal output-queued switches, and that improving performance would require increasing switching speeds (the electronic switching is already the slowest part of the otherwise optical networking) or using information about future arrivals (which may not exist and in any case is
NP-Hard to use optimally). The queue-learning approach is able to use its estimates of the
future impact of its packet send decisions in a consistent framework that is able to bridge
the majority of the gap between current input queueing and optimal output queueing.
Acknowledgment
This work was supported by CAREER Award: NCR-9624791.
References
[1] Boyan, J.A., Littman, M.L., "Packet routing in dynamically changing networks: a
reinforcement learning approach," in Cowan, J.D., et al., ed., Advances in NIPS 6,
Morgan Kaufmann, SF, 1994, pp. 671-678.
[2] Brown, T.X, Lui, K.H., "Neural Network Design of a Banyan Network Controller,"
IEEE JSAC, v. 8, n. 8, pp. 1428-1438, Oct. 1990.
[3] Brown, T.X, Tong, H., Singh, S., "Optimizing admission control while ensuring quality of service in multimedia networks via reinforcement learning," in Advances in NIPS
11, ed. M. Kearns et al., MIT Press, 1999.
[4] Brown, T.X, Gabow, H.N., "Future Information in Input Queueing," submitted to
Computer Networks, April 2001.
[5] Gabor, Z., Kalmar, Z., Szepesvari, C., "Multi-criteria Reinforcement Learning," International Conference on Machine Learning, Madison, WI, July 1998.
[6] Hopcroft, J., Karp, R., "An n^(5/2) algorithm for maximum matchings in bipartite
graphs," SIAM J. Computing, v. 2, n. 4, 1973, pp. 225-231.
[7] Marbach, P., Mihatsch, M., Tsitsiklis, J.N., "Call admission control and routing in
integrated service networks using neuro-dynamic programming," IEEE J. Selected
Areas in Comm., v. 18, n. 2, pp. 197-208, Feb. 2000.
[8] McKeown, N., Anantharam, V., Walrand, J., "Achieving 100% Throughput in an
Input-Queued Switch," Proc. of IEEE INFOCOM '96, San Francisco, March 1996.
[9] Park, Y.-K., Lee, G., "NN Based ATM Scheduling with Queue Length Based Priority
Scheme," IEEE J. Selected Areas in Comm., v. 15, n. 2, pp. 261-270, Feb. 1997.
[10] Pattavina, A., Switching Theory: Architecture and Performance in Broadband ATM
Networks, John Wiley and Sons, New York, 1998.
[11] Singh, S.P., Bertsekas, D.P., "Reinforcement learning for dynamic channel allocation
in cellular telephone systems," in Advances in NIPS 9, ed. Mozer, M., et al., MIT
Press, 1997, pp. 974-980.
[12] Sutton, R.S., Barto, A.G., Reinforcement Learning: An Introduction, MIT Press,
1998.
[13] Tarjan, R.E., Data Structures and Network Algorithms, Soc. for Industrial and Applied Mathematics, Philadelphia, 1983.
1,046 | 1,956 | The Infinite Hidden Markov Model
Matthew J. Beal
Zoubin Ghahramani
Carl Edward Rasmussen
Gatsby Computational Neuroscience Unit
University College London
17 Queen Square, London WC1N 3AR, England
http://www.gatsby.ucl.ac.uk
m.beal,zoubin,edward @gatsby.ucl.ac.uk
Abstract
We show that it is possible to extend hidden Markov models to have
a countably infinite number of hidden states. By using the theory of
Dirichlet processes we can implicitly integrate out the infinitely many
transition parameters, leaving only three hyperparameters which can be
learned from data. These three hyperparameters define a hierarchical
Dirichlet process capable of capturing a rich set of transition dynamics.
The three hyperparameters control the time scale of the dynamics, the
sparsity of the underlying state-transition matrix, and the expected number of distinct hidden states in a finite sequence. In this framework it
is also natural to allow the alphabet of emitted symbols to be infinite?
consider, for example, symbols being possible words appearing in English text.
1 Introduction
Hidden Markov models (HMMs) are one of the most popular methods in machine
learning and statistics for modelling sequences such as speech and proteins. An
HMM defines a probability distribution over sequences of observations (symbols)
by invoking another sequence of unobserved, or hidden, discrete state variables
s = {s₁, …, s_T}. The basic idea in an HMM is that the sequence of hidden states has
Markov dynamics, i.e. given s_t, s_ρ is independent of s_τ for all ρ < t < τ, and that the
observations y_t are independent of all other variables given s_t. The model is defined in
terms of two sets of parameters, the transition matrix whose (i, j) element is
P(s_{t+1} = j | s_t = i) and the emission matrix whose (i, q) element is P(y_t = q | s_t = i).
The usual procedure for estimating the parameters of an HMM is
the Baum-Welch algorithm, a special case of EM, which estimates expected values of two
matrices n and m corresponding to counts of transitions and emissions respectively, where
the expectation is taken over the posterior probability of hidden state sequences [6].
Both the standard estimation procedure and the model definition for HMMs suffer from
important limitations. First, maximum likelihood estimation procedures do not consider
the complexity of the model, making it hard to avoid over or underfitting. Second, the
model structure has to be specified in advance. Motivated in part by these problems there
have been attempts to approximate a full Bayesian analysis of HMMs which integrates over,
rather than optimises, the parameters. It has been proposed to approximate such Bayesian
integration both using variational methods [3] and by conditioning on a single most likely
hidden state sequence [8].
In this paper we start from the point of view that the basic modelling assumption of
HMMs (that the data was generated by some discrete state variable which can take on
one of several values) is unreasonable for most real-world problems. Instead we formulate the idea of HMMs with a countably infinite number of hidden states. In principle,
such models have infinitely many parameters in the state transition matrix. Obviously it
would not be sensible to optimise these parameters; instead we use the theory of Dirichlet
processes (DPs) [2, 1] to implicitly integrate them out, leaving just three hyperparameters
defining the prior over transition dynamics.
The idea of using DPs to define mixture models with an infinite number of components has
been previously explored in [5] and [7]. This simple form of the DP turns out to be inadequate for HMMs.¹ Because of this we have extended the notion of a DP to a two-stage hierarchical process which couples transitions between different states. It should be stressed
that Dirichlet distributions have been used extensively both as priors for mixing proportions and to smooth n-gram models over finite alphabets [4], which differs considerably
from the model presented here. To our knowledge no one has studied inference in discrete
infinite-state HMMs.
We begin with a review of Dirichlet processes in section 2 which we will use as the basis
for the notion of a hierarchical Dirichlet process (HDP) described in section 3. We explore
properties of the HDP prior, showing that it can generate interesting hidden state sequences
and that it can also be used as an emission model for an infinite alphabet of symbols. This
infinite emission model is controlled by two additional hyperparameters. In section 4 we
describe the procedures for inference (Gibbs sampling the hidden states), learning (optimising the hyperparameters), and likelihood evaluation (infinite-state particle filtering).
We present experimental results in section 5 and conclude in section 6.
(
2 Properties of the Dirichlet Process
Let us examine in detail the statistics of hidden state transitions from a particular state i
to j, with the number of hidden states finite and equal to K. The transition probabilities
given in the i-th row of the transition matrix can be interpreted as mixing proportions for j
that we call π = {π₁, …, π_K}.

Imagine drawing N samples {c₁, …, c_N} from a discrete indicator variable which can take
on values 1, …, K with proportions given by π. The joint distribution of these indicators
is multinomial

P(c₁, …, c_N | π) = ∏_{j=1}^K π_j^{n_j},   n_j = Σ_{t=1}^N δ(c_t, j)   (1)

where we have used the Kronecker-delta function (δ(a, b) = 1 iff a = b, and 0 otherwise)
to count the number of times n_j that j has been drawn. Let us see what happens to
the distribution of these indicators when we integrate out the mixing proportions under a
conjugate prior. We give the mixing proportions a symmetric Dirichlet prior with positive
concentration hyperparameter β:

P(π | β) = [Γ(β) / Γ(β/K)^K] ∏_{j=1}^K π_j^{β/K − 1}   (2)

where π is restricted to be on the simplex of mixing proportions that sum to 1. We can
analytically integrate out π under this prior to yield:
¹That is, if we only applied the mechanism described in section 2, then state trajectories under the
prior would never visit the same state twice; since each new state will have no previous transitions
from it, the DP would choose randomly between all infinitely many states, therefore transitioning to
another new state with probability 1.
P(c₁, …, c_N | β) = [Γ(β) / Γ(N + β)] ∏_{j=1}^K [Γ(n_j + β/K) / Γ(β/K)]   (3)

Thus the probability of a particular sequence of indicators is only a function of the counts
{n_j}. The conditional probability of an indicator c_t given the setting of all other
indicators (denoted c_{−t}) is given by

P(c_t = j | c_{−t}, β) = (n_{−t,j} + β/K) / (N − 1 + β)   (4)

where n_{−t,j} is the counts as in (1) with the t-th indicator removed. Note the self-reinforcing
property of (4): c_t is more likely to choose an already popular state. A key property of DPs,
which is at the very heart of the model in this paper, is the expression for (4) when we take
the limit as the number of hidden states K tends to infinity:

P(c_t = j | c_{−t}, β) = n_{−t,j} / (N − 1 + β)   for represented j (i.e. n_{−t,j} > 0),
                         β / (N − 1 + β)          for all unrepresented j, combined   (5)

where the number of represented states (i.e. those for which n_{−t,j} > 0) cannot
be infinite since N is finite. β/K can be interpreted as the number of pseudo-observations of
each j, i.e. the strength of belief in the symmetric prior.² In the infinite limit β
acts as an 'innovation' parameter, controlling the tendency for the model to populate a
previously unrepresented state.
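The conditional in Eq. 5 is exactly a Chinese-restaurant-process step and is simple to sample. The sketch below is our own illustration; the function name and the rng convention are assumptions.

```python
# A sketch of sampling one indicator from the infinite-limit conditional of
# Eq. 5. counts[j] holds n_{-t,j} for the represented states; beta is the
# innovation parameter.
import numpy as np

def sample_indicator(counts, beta, rng):
    weights = np.append(counts, beta)      # represented states, then "new"
    probs = weights / weights.sum()        # denominator is N - 1 + beta
    return rng.choice(len(weights), p=probs)   # len(counts) means new state

rng = np.random.default_rng(0)
counts = np.array([3.0, 1.0])              # two represented states
print(sample_indicator(counts, beta=1.0, rng=rng))
```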
3 Hierarchical Dirichlet Process (HDP)
We now consider modelling each row of the transition and emission matrices of an HMM as
a DP. Two key results from the previous section form the basis of the HDP model for infinite
HMMs. The first is that we can integrate out the infinite number of transition parameters,
and represent the process with a finite number of indicator variables. The second is that
under a DP there is a natural tendency to use existing transitions in proportion to their
previous usage, which gives rise to typical trajectories. In sections 3.1 and 3.2 we describe
in detail the HDP model for transitions and emissions for an infinite-state HMM.
3.1 Hidden state transition mechanism
" >!
" 8 > 1
(
)
(
*12
/ 8( 7 / ) 7
( +-,
>
(6)
.0/ *12 )54 ( > 27 $" # > %> )
Note
do not sum to 1?under the DP there is a finite probability
</ " that> the above7 ofprobabilities
not selecting
transitions. In this case, the model defaults
*12one
withof these
to a second different DP (5) on
parameter & whose counts are given by a vector
counts as the oracle. Given that we have
> ' . We refer to the default DP and its associated
defaulted to the oracle DP, the probabilities
now become
*) 1 of1 transitioning
'
i.e. ) represented
+-, /.0 1 ) )
*
2
1
.0/ )54 ( > ' &
7 ( +-, /.0 ) 1 1 3) 2 ' i.e. ) is a new state (7)
Imagine we have generated a hidden state sequence up to and including time , building
6
a table
of counts for transitions that have occured so far from state to , i.e.
. Given that we are in state
, we impose on state
a DP
(5) with parameter whose counts are those entries in the
row of , i.e. we prefer to
reuse transitions we have used before and follow typical trajectories (see Figure 1):
0/
4
2
Under the infinite model, at any time, there are an infinite number of (indistinguishable) unrepresented states available, each of which have infinitesimal mass proportional to .
Figure 1: (left) State transition generative mechanism. (right a-d) Sampled state trajectories
(time along horizontal axis) from the HDP, giving examples of four modes of behaviour
under different hyperparameter settings: (a) explores many states with a sparse transition
matrix; (b) retraces multiple interacting trajectory segments; (c) switches between a few
different states; (d) has strict left-to-right transition dynamics with long linger time.
&
Under the oracle, with probability proportional to an entirely new state is transitioned
to. This is the only mechanism for visiting new
states from the infinitely many available to
us. After each transition we set
and, if we transitioned
to the state via the
. If we transitioned to a new
oracle DP just described then in addition we set
state then the size of and
will increase.
>
>'
> )( >%
)
>!' ( > '
Self-transitions are special because their probability defines a time scale over which the
dynamics of the hidden state evolves. We assign a finite prior mass to self transitions for
each state; this is the third hyperparameter in our model. Therefore, when first visited (via
in the HDP), its self-transition count is initialised to .
&
*
*
The full hidden state transition mechanism is a two-level DP hierarchy shown in decision
tree form in Figure 1. Alongside are shown typical state trajectories under the prior with
different hyperparameters. We can see that, with just three hyperparameters, there are a
wealth of types of possible trajectories. Note that controls the expected number of represented hidden states, and influences the tendency to explore new transitions, corresponding to the size and density respectively of the resulting transition count matrix. Finally
controls the prior tendency to linger in a state.
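Putting Eqs. 6 and 7 together with the self-transition mass α gives the following sampling sketch. It is our own schematic reading: the array layout and names are assumptions, and the caller must grow n and n^o when a brand-new state (returned as index K) is created.

```python
# A sketch of one draw from the two-level transition mechanism of Eqs. 6-7.
# n is the K x K transition count matrix, n_o the length-K oracle counts.
import numpy as np

def sample_transition(i, n, n_o, alpha, beta, gamma, rng):
    K = n.shape[0]
    mass = n[i].astype(float)
    mass[i] += alpha                      # extra prior mass on self (n_ii + alpha)
    probs = np.append(mass, beta) / (mass.sum() + beta)
    j = rng.choice(K + 1, p=probs)
    if j < K:
        return j, False                   # reused an existing transition
    # Defaulted to the oracle DP (Eq. 7):
    oracle = np.append(n_o.astype(float), gamma)
    j = rng.choice(K + 1, p=oracle / oracle.sum())
    return j, True                        # j == K signals a brand-new state
```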
&
*
The role of the oracle is two-fold. First it serves to couple the transition DPs from different
hidden states. Since a newly visited state has no previous transitions to existing states,
without an oracle (which necessarily has knowledge of all represented states as it created
them) it would transition to itself or yet another new state with probability 1. By consulting
the oracle, new states can have finite probability of transitioning to represented states. The
second role of the oracle is to allow some states to be more influential (more commonly
transitioned to) than others.
,+
3.2 Emission mechanism
The emission process s_t → y_t is identical to the transition process s_t → s_{t+1} in every
respect except that there is no concept analogous to a self-transition. Therefore we need
only introduce two further hyperparameters β_e and γ_e for the emission HDP. Like for state
transitions we keep a table of counts m_iq = Σ_{t'≤t} δ(s_{t'}, i) δ(y_{t'}, q), which is the number
of times before that state i has emitted symbol q, and m^o_q, which is the number of times
symbol q has been emitted using the emission oracle.
Figure 2: (left) State emission generative mechanism. (middle) Word occurrence for the entire Alice
novel: each word is assigned a unique integer identity as it appears. Word identity (vertical) is plotted
against the word position (horizontal) in the text. (right) (Exp 1) Evolution of the number of represented
hidden states (vertical), plotted against iterations of Gibbs sweeps (horizontal) during learning of the
ascending-descending sequence, which requires exactly 10 states to model the data perfectly. Each
line represents initialising the hidden state to a random sequence containing a different number of
distinct represented states. (Hyperparameters are not optimised.)
"
!
9
has been emitted using the emission oracle.
For some applications the training sequence is not expected to contain all possible observation symbols. Consider the occurrence of words in natural text, e.g. as shown in Figure 2
(middle) for the Alice novel. The upper envelope demonstrates that new words continue to
appear in the novel. A property of the DP is that the expected number of distinct symbols
(i.e. words here) increases as the logarithm of the sequence length. The combination of
an HDP for both hidden states and emissions may well be able to capture the somewhat
super-logarithmic word generation found in Alice.
4 Inference, learning and likelihoods
Given a sequence of observations, there are two sets of unknowns in the infinite HMM:
the hidden state sequence {s₁, …, s_T}, and the five hyperparameters {α, β, γ, β_e, γ_e}
defining the transition and emission HDPs. Note that by using HDPs for both states and
observations, we have implicitly integrated out the infinitely many transition and emission
parameters. Making an analogy with non-parametric models such as Gaussian Processes,
we define a learned model as a set of counts {n, n^o, m, m^o} and optimised hyperparameters
{α, β, γ, β_e, γ_e}.
* & . & .
We first describe an approximate Gibbs sampling procedure for inferring the posterior over
the hidden state sequence. We then describe hyperparameter optimisation. Lastly, for calculating the likelihood we introduce an infinite-state particle filter. The following algorithm
summarises the learning procedure:
1. Instantiate a random hidden state sequence {s_1, ..., s_T}.
2. For t = 1, ..., T:
   - Gibbs sample s_t given the hyperparameter settings, count matrices, and observations.
   - Update the count matrices to reflect the new s_t; this may change K, the number of represented hidden states.
3. End
4. Update the hyperparameters {alpha, beta, gamma, beta_e, gamma_e} given the hidden state statistics.
5. Goto step 2.
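A minimal Python skeleton of this loop (our own sketch; gibbs_sample_state, remove_counts, add_counts and update_hyperparameters are hypothetical helpers standing in for the steps above):

def learn(y, model, n_sweeps, rng):
    """Approximate Gibbs learning loop for the infinite HMM."""
    T = len(y)
    s = [int(rng.integers(model.K)) for _ in range(T)]   # step 1: random init
    for sweep in range(n_sweeps):                        # steps 2-5
        for t in range(T):
            remove_counts(model, s, t)                   # form n^{-t}, m^{-t}
            s[t] = gibbs_sample_state(model, s, y, t, rng)
            add_counts(model, s, t)                      # may grow model.K
        update_hyperparameters(model)                    # MAP update of alpha, beta, ...
    return model, s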
4.1 Gibbs sampling the hidden state sequence
Define n^{-t} and m^{-t} as the results of removing from n and m the transition and emission counts
contributed by s_t. Define similar items n^{o,-t} and m^{o,-t} related to the transition and emission
oracle vectors. In order to facilitate hyperparameter learning and improve the mixing
time of the Gibbs sampler, we also sample a set of auxiliary indicator variables alongside s; each
is a binary variable denoting whether the oracle was used to generate the corresponding
s_t or y_t respectively. An exact Gibbs sweep of the hidden state from t = 1 to T
takes O(T^2) operations, since under the HDP generative process changing s_t affects the probability of
all subsequent hidden state transitions and emissions.3 However this computation can be
reasonably approximated in O(T), by basing the Gibbs update for s_t only on the state of
its neighbours s_{t-1} and s_{t+1} and the total counts n, n^o, m, m^o.4
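A sketch of the O(1)-per-position approximate update (our own illustration, for interior positions only; trans_prob and emit_prob are hypothetical helpers built from the count tables, with candidate K denoting a brand-new state):

import numpy as np

def gibbs_update(t, s, y, model, rng):
    """Resample s_t from p(s_t | s_{t-1}, s_{t+1}, y_t), counts held fixed."""
    K = model.K
    logp = np.zeros(K + 1)                  # candidate K is a brand-new state
    for j in range(K + 1):
        logp[j] = (np.log(trans_prob(model, s[t - 1], j))
                   + np.log(trans_prob(model, j, s[t + 1]))
                   + np.log(emit_prob(model, j, y[t])))
    p = np.exp(logp - logp.max())
    return rng.choice(K + 1, p=p / p.sum())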
4.2 Hyperparameter optimisation
We place vague Gamma priors5 on the hyperparameters {alpha, beta, gamma, beta_e, gamma_e}. We derive an
approximate form for the hyperparameter posteriors from (3) by treating each level of the
HDPs separately. Some of the resulting posterior expressions are accurate only for large K, while the others
are exact:

[posterior expressions unrecoverable from the source: each is the corresponding Gamma prior density multiplied by likelihood terms built from the count tables n, n^o, m, m^o]
where the first count is the number of represented states that are transitioned to from
state i (including itself); similarly the second is the number of possible emissions from state i.
The numbers of times the oracle has been used
for the transition and emission processes are
calculated from the indicator variables. We solve for the maximum a posteriori
(MAP) setting for each hyperparameter; for example, the MAP setting of a given hyperparameter is obtained as the solution to the
following equation using gradient-following techniques such as Newton-Raphson:
[MAP stationarity equation unrecoverable from the source]
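As an illustration of the gradient-following step (a generic sketch of ours; d_log_post and dd_log_post are hypothetical functions returning the first and second derivatives of one hyperparameter's log posterior):

def map_newton(theta0, d_log_post, dd_log_post, tol=1e-8, max_iter=50):
    """Find a stationary point of the log posterior by Newton-Raphson."""
    theta = theta0
    for _ in range(max_iter):
        step = d_log_post(theta) / dd_log_post(theta)
        theta = max(theta - step, 1e-12)   # hyperparameters stay positive
        if abs(step) < tol:
            break
    return theta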
4.3 Infinite-state particle filter
The likelihood for a particular observable sequence of symbols involves intractable sums
over the possible hidden state trajectories. Integrating out the parameters in any HMM
induces long-range dependencies between states. In particular, in the DP, making a transition
makes that transition more likely later on in the sequence, so we cannot use
standard tricks like dynamic programming. Furthermore, the number of distinct states can
grow with the sequence length as new states are generated. If the chain starts with K distinct states, at time t there could be K + t possible distinct states, making the total number
of trajectories over the entire length T of the sequence (K + T)!/K!.
3 Although the hidden states in an HMM satisfy the Markov condition, integrating out the parameters induces these long-range dependencies.
4 This approximation can be motivated in the following way. Consider sampling parameters Theta
from the posterior distribution p(Theta | n, m) of parameter matrices, which will depend on the count
matrices. By the Markov property, for a given Theta, the probability of s_t only depends on s_{t-1}, y_t and
s_{t+1}, and can therefore be computed without considering its effect on future states.
5 Gamma(a, b) priors, G(theta; a, b) proportional to theta^{a-1} exp(-b theta), with a and b the shape and inverse-scale parameters.
We propose estimating the likelihood of a test sequence given a learned model using particle
filtering. The idea is to start with some number R of particles distributed on the represented
hidden states according to the final state marginal from the training sequence
(some of the particles may fall onto new states).6 Starting from the set of particles and the tables
{n, n^o, m, m^o} from the training sequence, the recursive procedure is as specified
below, where l_t denotes the per-step predictive likelihood at time t:

1. Compute p(y_t | s_t^{(r)}) for each particle r.
2. Calculate l_t = (1/R) * sum_r p(y_t | s_t^{(r)}).
3. Resample the particles in proportion to p(y_t | s_t^{(r)}).
4. Update the transition and emission tables for each particle.
5. For each particle sample the forward dynamics: s_{t+1}^{(r)} ~ p(s_{t+1} | s_t^{(r)}); this may
   cause particles to land on novel states. Update n and m.
6. If t < T, Goto 1 with t := t + 1.

The log likelihood of the test sequence is computed as sum_t log l_t. Since it is a discrete
state space, with much of the probability mass concentrated on the represented states, it is
feasible to use a moderate number of particles.
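A compact sketch of this filter (our own illustration; emit_prob is the same hypothetical count-based helper as above, sample_transition is the helper sketched in Section 3, and the per-particle count updates of steps 4-5 are elided for brevity):

import numpy as np

def log_likelihood(y_test, model, particles, rng):
    """Infinite-state particle-filter estimate of log p(y_test | model)."""
    R = len(particles)
    loglik = 0.0
    for y_t in y_test:
        w = np.array([emit_prob(model, s, y_t) for s in particles])
        loglik += np.log(w.mean())                    # l_t, step 2
        idx = rng.choice(R, size=R, p=w / w.sum())    # resample, step 3
        particles = [particles[i] for i in idx]
        particles = [sample_transition(s, model.n, model.n_oracle,
                                       model.alpha, model.beta,
                                       model.gamma, rng)
                     for s in particles]              # forward dynamics, step 5
    return loglik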
5 Synthetic experiments
Exp 1: Discovering the number of hidden states We applied the infinite HMM inference algorithm to an ascending-descending observation sequence consisting of 30 concatenated copies of a fixed rising-and-falling pattern. The most parsimonious HMM which models this
data perfectly has exactly 10 hidden states. The infinite HMM was initialised with a random hidden state sequence containing K distinct represented states. In Figure 2 (right) we
show how the number of represented states evolves with successive Gibbs sweeps, starting
from a variety of initial K. In all cases K converges to 10, while occasionally exploring 9
and 11.
Exp 2: Expansive A long sequence was generated from a 4-state 8-symbol
HMM with the transition and emission probabilities as shown in Figure 3 (top left).
Exp 3: Compressive A long sequence was generated from a 4-state 3-symbol
HMM with the transition and emission probabilities as shown in Figure 3 (bottom left).
In both Exp 2 and Exp 3 the infinite HMM was initialised with a hidden state sequence
with K distinct states. Figure 3 shows that, over successive Gibbs sweeps and hyperparameter learning, the count matrices for the infinite HMM converge to resemble the true
probability matrices as shown on the far left.
6 Discussion
We have shown how a two-level Hierarchical Dirichlet Process can be used to define a nonparametric Bayesian HMM. The HDP implicitly integrates out the transition and emission
parameters of the HMM. An advantage of this is that it is no longer necessary to constrain
the HMM to have finitely many states and observation symbols. The prior over hidden state
transitions defined by the HDP is capable of producing a wealth of interesting trajectories
by varying the three hyperparameters that control it.
We have presented the necessary tools for using the infinite HMM, namely a linear-time
approximate Gibbs sampler for inference, equations for hyperparameter learning, and a
particle filter for likelihood evaluation.
6
Different particle initialisations apply if we do not assume that the test sequence immediately
follows the training sequence.
Figure 3: The far left pair of Hinton diagrams represent the true transition and emission probabilities used to generate the data for each of experiments 2 and 3 (up to a permutation of the hidden
states; lighter boxes correspond to higher values). (top row) Exp 2: Expansive HMM. Count matrix
pairs are displayed after increasing numbers of sweeps of Gibbs sampling. (bottom row) Exp 3:
Compressive HMM. Similar to top row, displaying count matrices after increasing numbers of sweeps of
Gibbs sampling. In both rows the display after a single Gibbs sweep has been reduced in size for
clarity.
On synthetic data we have shown that the infinite HMM discovers both the appropriate
number of states required to model the data and the structure of the emission and transition
matrices. It is important to emphasise that although the count matrices found by the infinite
HMM resemble point estimates of HMM parameters (e.g. Figure 3), they are better thought
of as the sufficient statistics for the HDP posterior distribution over parameters.
We believe that for many problems the infinite HMM's flexible nature and its ability to
automatically determine the required number of hidden states make it superior to the conventional treatment of HMMs with its associated difficult model selection problem. While
the results in this paper are promising, they are limited to synthetic data; in future we hope
to explore the potential of this model on real-world problems.
Acknowledgements
The authors would like to thank David Mackay for suggesting the use of an oracle, and
Quaid Morris for his Perl expertise.
References
[1] C. E. Antoniak. Mixtures of Dirichlet processes with applications to Bayesian nonparametric problems. Annals of Statistics, 2(6):1152-1174, 1974.
[2] T. S. Ferguson. A Bayesian analysis of some nonparametric problems. Annals of Statistics, 1(2):209-230, March 1973.
[3] D. J. C. MacKay. Ensemble learning for hidden Markov models. Technical report, Cavendish Laboratory, University of Cambridge, 1997.
[4] D. J. C. MacKay and L. C. Peto. A hierarchical Dirichlet language model. Natural Language Engineering, 1(3):1-19, 1995.
[5] R. M. Neal. Markov chain sampling methods for Dirichlet process mixture models. Technical Report 9815, Dept. of Statistics, University of Toronto, 1998.
[6] L. R. Rabiner and B. H. Juang. An introduction to hidden Markov models. IEEE Acoustics, Speech & Signal Processing Magazine, 3:4-16, 1986.
[7] C. E. Rasmussen. The infinite Gaussian mixture model. In Advances in Neural Information Processing Systems 12, Cambridge, MA, 2000. MIT Press.
[8] A. Stolcke and S. Omohundro. Hidden Markov model induction by Bayesian model merging. In S. J. Hanson, J. D. Cowan, and C. L. Giles, editors, Advances in Neural Information Processing Systems 5, pages 11-18, San Francisco, CA, 1993. Morgan Kaufmann.
1,047 | 1,957 | Why neuronal dynamics should control
synaptic learning rules
Jesper Tegner
Stockholm Bioinformatics Center
Dept. of Numerical Analysis
& Computing Science
Royal Institute for Technology
S-10044 Stockholm, Sweden
jespert@nada.kth.se
Adam Kepecs
Volen Center for Complex Systems
Brandeis University
Waltham, MA 02454
kepecs@brandeis.edu
Abstract
Hebbian learning rules are generally formulated as static rules. Under changing conditions (e.g. neuromodulation, input statistics)
most rules are sensitive to parameters. In particular, recent work
has focused on two different formulations of spike-timing-dependent
plasticity rules. Additive STDP [1] is remarkably versatile but
also very fragile, whereas multiplicative STDP [2, 3] is more robust but lacks attractive features such as synaptic competition and
rate stabilization. Here we address the problem of robustness in
the additive STDP rule. We derive an adaptive control scheme,
where the learning function is under fast dynamic control by postsynaptic activity to stabilize learning under a variety of conditions.
Such a control scheme can be implemented using known biophysical
mechanisms of synapses. We show that this adaptive rule makes
the additive STDP more robust. Finally, we give an example of how
meta-plasticity of the adaptive rule can be used to guide STDP
into different types of learning regimes.
1
Introduction
Hebbian learning rules are widely used to model synaptic modification shaping the
functional connectivity of neural networks [4, 5]. To ensure competition between
synapses and stability of learning, constraints have to be added to correlational Hebbian learning rules [6]. Recent experiments revealed a mode of synaptic plasticity
that provides new possibilities and constraints for synaptic learning rules [7, 8, 9].
It has been found that synapses are strengthened if a presynaptic spike precedes a
postsynaptic spike within a short (~20 ms) time window, while the reverse spike
order leads to synaptic weakening. This rule has been termed spike-timing dependent plasticity (STDP) [1]. Computational models highlighted how STDP combines
synaptic strengthening and weakening so that learning gives rise to synaptic competition in a way that neuronal firing rates are stabilized.
Recent modeling studies have, however, demonstrated that whether an STDP type
rule results in competition or rate stabilization depends on exact formulation of the
weight update scheme [3, 2]. Sompolinsky and colleagues [2] introduced a distinction between additive and multiplicative weight updating in STDP. In the additive
version of an STDP update rule studied by Abbott and coworkers [1, 10], the magnitude of synaptic change is independent of synaptic strength. Here, it is necessary to
add hard weight bounds to stabilize learning. For this version of the rule (aSTDP),
the steady-state synaptic weight distribution is bimodal. In sharp contrast to this,
using a multiplicative STDP rule where the amount of weight increase scales inversely with present weight size produces neither synaptic competition nor rate
normalization [3, 2]. In this multiplicative scenario the synaptic weight distribution
is unimodal. Activity-dependent synaptic scaling has recently been proposed as
a separate mechanism to ensure synaptic competition operating on a slow (days)
time scale [3]. Experimental data as of today is not yet sufficient to determine the
circumstances under which the STDP rule is additive or multiplicative.
In this study we examine the stabilization properties of the additive STDP rule. In
the first section we show that the aSTDP rule normalizes postsynaptic firing rates
only in a limited parameter range. The critical parameter of aSTDP becomes the
ratio (alpha) between the amount of synaptic depression and potentiation. We show
that different input statistics necessitate different alpha ratios for aSTDP to remain
stable. This leads us to consider an adaptive version of aSTDP in order to create a
rule that is both competitive as well as rate stabilizing under different circumstances.
Next, we use a Fokker-Planck formalism to clarify what determines when an additive STDP rule fails to stabilize the postsynaptic firing rate. Here we derive
the requirement for how the potentiation to depression ratio should change with
neuronal activity. In the last section we provide a biologically realistic implementation of the adaptive rule and perform numerical simulations to show how
different parameterizations of the adaptive rule can guide STDP into differentially
rate-sensitive regimes.
2
Additive STDP does not always stabilize learning
First, we numerically simulated an integrate-and-fire model receiving 1000 excitatory and 250 inhibitory afferents. The weights of the excitatory synapses were updated according to the additive STDP rule. We used the model developed by Song et
al., 2000 [1]. The learning kernel is L(tau) = A_+ exp(tau/tau_+) if tau < 0, and L(tau) = -A_- exp(-tau/tau_-)
if tau > 0, where A_-/A_+ denote the amplitudes of depression/potentiation respectively. Following [1] we use tau_+ = tau_- = 20 ms for the time window of learning. The
integral over the temporal window of the synaptic learning function (L) is always
negative. Synaptic weights change according to

dW_i/dt = Integral L(tau) S_pre(t + tau) S_post(t) dtau,   W_i in [0, W_max]   (1)
where s(t) denotes a delta function representing a spike at time t. Correlations
between input rates were generated by adding a common bias rate in a graded
manner across synapses so that the first afferent has zero correlation while the last afferent
has the maximal correlation, Cmax .
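As an illustration of update rule (1), here is a minimal nearest-pair sketch in Python (our own, not the simulation code used in the paper; the parameter values follow the text, and the pairing scheme is a simplification):

import numpy as np

TAU = 20.0            # ms, time window tau_+ = tau_- of the learning kernel
A_PLUS = 0.005        # potentiation amplitude A_+
ALPHA = 1.05          # depression/potentiation ratio, A_- = ALPHA * A_PLUS
W_MAX = 1.0

def stdp_update(w, t_pre, t_post):
    """Additive STDP for one pre/post spike pair; hard bounds [0, W_MAX]."""
    dt = t_pre - t_post          # tau in the learning kernel L(tau)
    if dt < 0:                   # pre before post: potentiate
        w += A_PLUS * np.exp(dt / TAU)
    else:                        # post before pre: depress
        w -= ALPHA * A_PLUS * np.exp(-dt / TAU)
    return float(np.clip(w, 0.0, W_MAX))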
We first examine how the depression/potentiation ratio (alpha = LTD/LTP) [2] controls the dependence of the output firing rate on the synaptic input rate, here
referred to as the effective neuronal gain. Provided that alpha is sufficiently large, the
STDP rule controls the postsynaptic firing rate (Fig. 1A). The stabilizing effect of
the STDP rule is therefore equivalent to having a weak neuronal gain.
Figure 1: A STDP controls neuronal gain. The slope of the dependence of the postsynaptic output rate on the presynaptic input rate is referred to as the effective neuronal gain. The initial firing rate is shown by the upper curve while the lower line displays the final
postsynaptic firing rate. The gain is reduced provided that the depression/potentiation
ratio (alpha = 1.05 here) is large enough. The input is uncorrelated. B Increasing input
correlations increases neuronal gain. When the synaptic input is strongly correlated
the postsynaptic neuron operates in a high-gain mode characterized by a larger slope
and larger baseline rate. Input correlations were uniformly distributed between 0 and a
maximal value, C_max. The maximal correlation increases in the direction of the arrow:
0.0; 0.2; 0.3; 0.4; 0.5; 0.6; 0.7. The alpha ratio is 1.05. Note that for further increases in
the presynaptic rates, postsynaptic firing can increase to over 1000 Hz. C The depression/potentiation ratio sets the neuronal gain. The alpha ratios increase in the direction of the
arrow: 1.025; 1.05; 1.075; 1.1025; 1.155; 1.2075. C_max is 0.5.
arrow:1.025;1.05;1.075;1.1025;1.155;1.2075. Cm a x is 0.5.
We find that the neuronal gain is extremely sensitive to the value of 0: as well
as to the amount of afferent input correlations. Figure IB shows that increasing
the amount of input correlations for a given 0: value increases the overall firing
rate and the slope of the input-output curve, thus leading to larger effective gain.
Increasing the amount of correlations between the synaptic afferents could therefore
be interpreted as increasing the effective neuronal gain. Note that the baseline firing
at a presynaptic drive of 20Hz is also increased. Next, we examined how neuronal
gain depends on the value of 0: in the STDP rule (Figure IC). The high gain and
high rate mode induced by strong input correlations was reduced to a lower gain
and lower rate mode by increasing 0: (see arrow in Figure IC). Note, however, that
there is no correct 0: value as it depends on both the input statistics as well as the
desired input/output relationship.
3
Conditions for an adaptive additive STDP rule
Here we address how the learning ratio, 0:, should depend on the input rate in order to produce a given neuronal input-output relationship. Using this functional
form we will be able to formulate constraints for an adaptive additive STDP rule.
This will guide us in the derivation of a biophysical implementation of the adaptive control scheme. The problem in its generality is to find (i) how the learning
ratio should depend on the postsynaptic rate and (ii) how the postsynaptic rate
depends on the input rate and the synaptic weights. By performing self-consistent
calculations using a Fokker-Planck formulation, the problem is reduced to finding
conditions for how the learning ratio should depend on the input rates only.
Let
0:
denote depression/potentiation ratio
0:
= LTD/LTP as before. Now we
A
30
ouput rate
,-------------~
B
meanw
0.6,-------------~
0.5
25
0.4
20
0.3
~
. . .. . .
.
0.2
15
--
...
-
. .. .
o. 1
0.05
..
.
. ..?. . . . . . . .?.. . . . . . . . . . . . ...?. ...
.
.
0.1
D
:
.
C
WTOT
40,-------------~
35
30
25
,-----,----==:=::=--,---~
T????
? ? ??.?? ? ???? .? ? ?????.? ? ? ???? .???
?0L--2~0---4~0--6~0---8~0
input rate
0.5
w
0.5
w
Figure 2: Self-consistent Fokker-Planck calculations. Conditions for zero neuronal gain.
A The output rate does not depend on the input rate: zero neuronal gain. B Dependence
of the mean synaptic weight on input rates. C W_tot proportional to r_pre<w>, see text. D The
dependence of beta = alpha - 1 on input rate. E, F A(w) and P(w) are functions of the synaptic
strength and depend on the input rate. Note that eight different input rates are used but
only traces 1, 3, 5, 7 are shown for A(w) and P(w), in which the dashed line corresponds
to the case with the lowest presynaptic rate.
determine how the parameter beta = alpha - 1 should scale with presynaptic rates in
order to control the neuronal gain. The Fokker-Planck formulation permits an
analytic calculation of the steady-state distribution of synaptic weights [3]. The
competition parameter for N excitatory afferents is given by W_tot = t_w r_pre N <w>,
where the time window t_w defines the probability for depression (P_d = t_w/t_isi)
that a synaptic event occurs within the time window (t_w < t_isi). The amount
of potentiation and depression for the additive STDP yields in the steady state,
neglecting the exponential timing dependence, the following expression for the drift
term A(w):

A(w) = P_d A_- [w/W_tot - (1 - 1/alpha)]   (2)

A(w) represents the net weight "force field" experienced by an individual synapse.
Thus, A(w) determines whether a given synapse (w) will increase or decrease as
a function of its synaptic weight. The steepness of the A(w) function determines
the degree of synaptic competition. The w/W_tot is a competition term whereas the
(1 - 1/alpha) provides a destabilizing force. When W_max > (1 - 1/alpha) W_tot the synaptic
weight distribution is bimodal. The steady-state distribution reads

P(w) = K exp[(-w(1 - 1/alpha) + w^2/(2 W_tot)) / A_-]   (3)

where K normalizes the P(w) distribution [3].
Now, equations (2-3), with appropriate definitions of the terms, constitute a self-consistent system. Using these equations one can calculate how the parameter beta
should scale with the presynaptic input rate in order to produce a given postsynaptic
firing rate. For a given presynaptic rate, equations (2-3) can be iterated until a
self-consistent solution is found. At that point, the postsynaptic firing rate can be
calculated. Here, instead, we impose a fixed postsynaptic output rate for a given
input rate and search for a self-consistent solution using beta as a free parameter.
Performing this calculation for a range of input rates provides us with the desired
dependency of beta on the presynaptic firing rate. Once a solution is reached we
also examine the resulting steady-state synaptic weight distribution P(w) and the
corresponding drift term A(w) as a function of the presynaptic input rate.
The results of such a calculation are illustrated in Figure 2. The neuronal gain,
the ratio between the postsynaptic firing rate and the input rate, is set to be zero
(Fig 2A). To normalize postsynaptic firing rates the average synaptic weight has
to decrease in order to compensate for the increasing presynaptic firing rate. This
can be seen in Fig 2B. The condition for a zero neuronal gain is that the average
synaptic weight should decrease as 1/r_pre. This makes W_tot constant as shown
in Fig 2C. For these values, beta has to increase with input rate as shown in Fig
2D. Note that this curve is approximately linear. The dependence of A(w) and the
synaptic weight distribution P(w) on different presynaptic rates is illustrated in Fig
2E and F. As the presynaptic rates increase, the A(w) function is lowered (dashed
line indicates the smallest presynaptic rate), thus pushing more synapses to smaller
values since they experience a net negative "force field". This is also reflected in the
synaptic weight distribution, which is pushed to the lower boundary as the input
rates increase. When enforcing a different neuronal gain, the dependence of the
beta term on the presynaptic rates remains approximately linear but with a different
slope (not shown).
4
Derivation of an adaptive learning rule with biophysical
components
The key insight from the above calculations is the observed linear dependence of beta on
presynaptic rates. However, when implementing an adaptive rule with biophysical
elements it is very likely that individual components will have a non-linear dependence on each other. The Fokker-Planck analysis suggests that the non-linearities
should effectively cancel. Why should the system be linear? Another way to see
from where the linearity requirement comes is that the (w/W_tot - beta) term in the expression for A(w) (valid for small beta) has to be appropriately balanced when the input
rates increase. The linearity of beta(r_pre) follows from W_tot being linear in r_pre.
Now, how could beta depend on presynaptic rates? A natural solution would be to use
postsynaptic calcium to measure the postsynaptic firing and therefore indirectly the
presynaptic firing rate. Moreover, the asymmetry (beta) of the learning ratio could
depend on the level of postsynaptic calcium. It is known that increased resting
calcium levels inhibit NMDA channels and thus calcium influx due to synaptic input.
Additionally, the calcium levels required for depression are easier to reach. Both of
these effects in turn increase the probability of LTD induction. Incorporating these
intermediate steps gives the following scheme:
r_pre --f1--> r_post --p--> [Ca] --q--> beta
This scheme introduces parameters (p and q) and a function f1 to control for
the linearity/non-linearity between the variables. The global constraint from the
Fokker-Planck analysis is that the effective relation between beta and r_pre should be linear. A
biophysical formulation of the above scheme is the following:
Figure 3: Left Steady-state response with (squares) or without (circles) the adaptive
tracking scheme. When the STDP rule is extended with an adaptive control loop, the
output rates are normalized in the presence of correlated input. Right Fast adaptive
tracking. Since beta tracks changes in intracellular calcium on a rapid time-scale, every spike
experiences a different learning ratio, alpha. Note that the adaptive scheme approximates the
learning ratio (alpha = 1.05) used in [1].
tau_Ca d[Ca]/dt = -[Ca] + gamma S_post(t)^p,   S_post(t) = Sum_k delta(t - t_k)   (4)

tau_beta dbeta/dt = -beta + [Ca]^q   (5)

The parameter p determines how the calcium concentration scales with the postsynaptic firing rate (the delta spikes above) and q controls the learning sensitivity. gamma
controls the rise of steady-state calcium with increasing postsynaptic rates (r_post).
The time constants tau_Ca and tau_beta determine the calcium dynamics and the time course
of the adaptive rule respectively. Note that we have not specified the neuronal
transfer function f1.
To ensure a linear relation between beta and r_pre it follows from the Fokker-Planck
analysis that [f1(r_pre)]^{pq} should be approximately linear in r_pre. The neuronal gain can
now be independently controlled by the parameter gamma. Moreover, the drift term
A(w) becomes

A(w) = P_d A_- [w/W_tot - [tau_Ca gamma r_post^p]^q]   (6)

for beta << 1. A(w) can be written in this form since we use that W_d = -A_- =
-A_+ alpha = -A_+(1 + [tau_Ca gamma r_post^p]^q). The w/W_tot is a competition term whereas
the [tau_Ca gamma r_post^p]^q provides a destabilizing force. Note also that when W_max >
[tau_Ca gamma r_post^p]^q W_tot there is a bimodal synaptic weight distribution and synaptic competition is preserved. A complete stability analysis is beyond the scope of the
present study.
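As an illustration of the dynamics (4)-(5), a minimal Euler-integration sketch (our own; the per-step spike increments stand in for the delta functions, and the parameter values are the illustrative ones from Figure 4):

import numpy as np

def adaptive_beta(spike_times, T, dt=0.1, tau_ca=10.0, tau_beta=100.0,
                  gamma=1.25, p=1.0, q=1.0):
    """Integrate tau_Ca dCa/dt = -Ca + gamma*S_post^p and
    tau_beta dbeta/dt = -beta + Ca**q on a millisecond time grid."""
    n = int(T / dt)
    spikes = np.zeros(n)
    for t in spike_times:
        spikes[int(t / dt)] += 1.0 / dt        # discretized delta function
    ca, beta = 0.0, 0.0
    trace = np.empty(n)
    for k in range(n):
        ca += dt / tau_ca * (-ca + gamma * spikes[k] ** p)
        beta += dt / tau_beta * (-beta + ca ** q)
        trace[k] = 1.0 + beta                  # learning ratio alpha = 1 + beta
    return trace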
Figure 4: Full numerical simulation of the adaptive additive STDP rule. Parameters:
tau_Ca = 10 ms, tau_beta = 100 ms. A gamma = 1.25. B gamma = 0.25. C Input correlations are
C_max = 0, 0.3, 0.6; p = q = 1.
5
Numerical simulations
Next, we examine whether the theory of adaptive normalization carries over to a
full-scale simulation of the integrate-and-fire model with the STDP rule and the
biophysical adaptive scheme as described above. First, we studied the neuronal
gain (cf. Figure 1) when the inputs were strongly correlated. Driving a neuron
with increasing input rates increases the output rate significantly when there is
no adaptive scheme (squares, Figure 3 Left), as observed previously (cf. Figure
1B). Adding the adaptive loop normalizes the output rates (circles, Figure 3 Left).
This simulation shows that the average postsynaptic firing rate is regulated by
the adaptive tracking scheme. This is expected since the Fokker-Planck analysis
is based on the steady-state synaptic weight distribution. To further gain insight
into the operation of the adaptive loop we examined the spike-to-spike dependence
of the tracking scheme. Figure 3 (Right) displays the evolution of the membrane
potential (top) and the learning ratio alpha = 1 + beta (bottom). The adaptive rule
tracks fast changes in firing by adjusting the learning ratio for each spike. Thus,
the strength of plasticity is different for every spike. Interestingly, the learning ratio
(alpha) fluctuates around the value 1.05 which was used in previous studies [1]. Our
fast, spike-to-spike tracking scheme is in contrast to other homeostatic mechanisms
operating on the time-scale of hours to days [11, 12, 13, 14]. In our formulation, the
learning ratio, via beta, tracks changes in intracellular calcium, which in turn reflects
the instantaneous firing rate. Slower homeostatic mechanisms are unable to detect
these rapid changes in firing statistics. Because this fast adaptive scheme depends
on recent neuronal firing, pairing several spikes on a time-scale comparable to the
calcium dynamics introduces non-linear summation effects.
Neurons with this adaptive STDP control loop can detect changes in the input
correlation while being only weakly dependent on the presynaptic firing rate. Figures
4a and 4b show two different regimes corresponding to two different values of the
parameter gamma. In the high-gamma regime (Fig. 4a) the neuronal gain is zero. The neuronal
gain increased when gamma decreased (Fig. 4b), as expected from the theory. In a
different regime where we introduce increasing correlations between the synaptic
inputs [1] we find that the neuronal gain changes little with increasing input
rates but increases substantially with increasing input correlations (Fig 4c). Thus,
the adaptive aSTDP rule can normalize the mean postsynaptic rate even when the
input statistics change. With other adaptive parameters we also found learning
regimes where the responses to input correlations were affected differentially (not
shown).
6
Discussion
Synaptic learning rules have to operate under widely changing conditions such as
different input statistics or neuromodulation. How can a learning rule dynamically guide a network into a functionally similar operating regime under different
conditions? We have addressed this issue in the context of spike-timing-dependent
plasticity (STDP) [1, 10]. We found that STDP is very sensitive to the ratio of
synaptic strengthening to weakening, alpha, and requires different values for different
input statistics. To correct for this, we proposed an adaptive control scheme to
adjust the plasticity rule. This adaptive mechanism makes the learning rule more
robust to changing input conditions while preserving its interesting properties, such
as synaptic competition. We suggested a biophysically plausible mechanism that
can implement the adaptive changes consistent with the requirements derived using
the Fokker-Planck analysis.
Our adaptive STDP rule adjusts the learning ratio on a millisecond time-scale.
This is in contrast to other, slow homeostatic controllers considered previously
[11, 12, 13, 14, 3]. Because the learning rule changes rapidly, it is very sensitive
to the input statistics. Furthermore, the synaptic weight changes add non-linearly
due to the rapid self-regulation. In recent experiments similar non-linearities have
been detected (Y. Dan, personal communication), which might have roles in making synaptic plasticity adaptive. Finally, the new set of adaptive parameters could
be independently controlled by meta-plasticity to bring the neuron into different
operating regimes.
Acknowledgments
We thank Larry Abbott , Mark van Rossum, and Sen Song for helpful discussions.
J.T. was supported by the Wennergren Foundation, and grants from Swedish Medical Research Foundation, and The Royal Academy for Science. A.K. was supported
by the NIH Grants 2 R01 NS27337-12 and 5 R01 NS27337-13. Both A.K. and J.T.
thank the Sloan Foundation for support.
References
[1] Song, S., Miller, K., & Abbott, L. Nature Neuroscience, 3:919-926, 2000.
[2] Rubin, J., Lee, D., & Sompolinsky, H. Physical Review Letters, 86:364-367, 2001.
[3] van Rossum, M., Bi, G.-Q., & Turrigiano, G. J Neurosci, 20:8812-8821, 2000.
[4] Sejnowski, T. J Theoretical Biology, 69:385-389, 1997.
[5] Abbott, L. & Nelson, S. Nature Neuroscience, 3:1178-1183, 2000.
[6] Miller, K. & MacKay, D. Neural Computation, 6:100-126, 1994.
[7] Markram, H., Lubke, J., Frotscher, M., & Sakmann, B. Science, 275:213-215, 1997.
[8] Bell, C., Han, V., Sugawara, Y., & Grant, K. Nature, 387:278-81, 1997.
[9] Bi, G.-Q. & Poo, M. J Neuroscience, 18:10464-10472, 1998.
[10] Kempter, R., Gerstner, W., & van Hemmen, J. Neural Computation, 13:2709-2742, 2001.
[11] Bell, A. In Moody, J., Hanson, S., & Lippmann, R., editors, Advances in Neural Information Processing Systems, volume 4. Morgan-Kaufmann, 1992.
[12] LeMasson, G., Marder, E., & Abbott, L. Science, 259:1915-7, 1993.
[13] Turrigiano, G., Leslie, K., Desai, N., Rutherford, L., & Nelson, S. Nature, 391:892-6, 1998.
[14] Turrigiano, G. & Nelson, S. Curr Opin Neurobiol, 10:358-64, 2000.
1,048 | 1,958 | Kernel Machines and Boolean Functions
Adam Kowalczyk
Telstra Research Laboratories
Telstra, Clayton, VIC 3168
a.kowalczyk@trl.oz.au
Alex J. Smola, Robert C. Williamson
RSISE, MLG and TelEng
ANU, Canberra, ACT, 0200
Alex.Smola, Bob.Williamson @anu.edu.au
Abstract
We give results about the learnability and required complexity of logical
formulae to solve classification problems. These results are obtained by
linking propositional logic with kernel machines. In particular we show
that decision trees and disjunctive normal forms (DNF) can be represented by the help of a special kernel, linking regularized risk to separation margin. Subsequently we derive a number of lower bounds on the
required complexity of logic formulae using properties of algorithms for
generation of linear estimators, such as perceptron and maximal perceptron learning.
1 Introduction
The question of how many Boolean primitives are needed to learn a logical formula is
typically an NP-hard problem, especially when learning from noisy data. Likewise, when
dealing with decision trees, the question what depth and complexity of a tree is required to
learn a certain mapping has proven to be a difficult task.
We address this issue in the present paper and give lower bounds on the number of Boolean
functions required to learn a mapping. This is achieved by a constructive algorithm which
can be carried out in polynomial time. Our tools for this purpose are a Support Vector
learning algorithm and a special polynomial kernel.
In Section 2 we define the classes of functions to be studied. We show that we can treat
propositional logic and decision trees within the same framework. Furthermore we will
argue that in the limit boosted decision trees correspond to polynomial classifiers built
directly on the data. Section 3 contains our main result linking the margin of separation
to a simple complexity measure on the class of logical formulae (number of terms and
depth). Subsequently we apply this connection to devise test procedures concerning the
complexity of logical formulae capable of learning a certain dataset. More specifically,
this will involve the training of a perceptron to minimize the regularized risk functional.
Experimental results and a discussion conclude the paper. Some proofs have been omitted
due to space constraints. They can be found in an extended version of this paper (available
at http://www.kernel-machines.org).
2 Polynomial Representation of Boolean Formulae
We use the standard assumptions of supervised learning: we have a training set
{(x_1, y_1), ..., (x_m, y_m)}. Based on these observations we attempt to find a
function f: X -> Y which incorporates the information given by the training set. Here
goodness of fit is measured relative to some predefined loss function or a probabilistic
model.
What makes the situation in this paper special is that we assume that X is a subset of B^n, where
B = {0, 1}, and moreover Y = {-1, 1}. In other words, we attempt to learn a binary
function on Boolean variables. A few definitions will be useful.
The set of all polynomials of degree <= d <= n on B^n will be denoted1 by P_d^n. They are
given by the expansion

f(x) = Sum_{I in G} c_I x_I,   (1)

where G is a set of multi-indices I = (i_1, ..., i_j) with 1 <= i_1 < ... < i_j <= n and j <= d,
the c_I are real coefficients, and we use a compact notation x_I := x_{i_1} ... x_{i_j}
for monomials on B^n, with the usual convention x_{empty} := 1 for every x. In order to avoid
further notation we always assume in such expansions that c_I is nonzero for all I in G.
The subset DNF_d^n of P_d^n of all polynomials of the form

f(x) = -1 + 2 Sum_{I in G} x_I,   (2)

where G is a set of multi-indices of order <= d, will be called disjunctive normal forms. It is linked by the following lemma to the set of
disjunctive normal forms DNF_d^{n,prop} commonly used in propositional logic. The latter
consist of all clauses w: B^n -> {true, false} which can be expressed by disjunctions
of terms, each being a conjunction of up to d logical primitives of the form x_i = 1
and x_i = 0.
Lemma 1 Assume for each i in {1, ..., n} there exists an i' in {1, ..., n} such that

x_{i'} = 1 - x_i for all x = (x_1, ..., x_n) in X.   (3)

Then for every f in DNF_d^n there exists w in DNF_d^{n,prop} such that for every x in X

f(x) >= 0 if and only if w(x) = true.

And vice versa, for every such w there exists f satisfying the above relation.
Standing Assumption: Unless stated otherwise, we will in the rest of the paper assume
that (3) of the above lemma holds. This is not a major restriction, since we can always
satisfy (3) by artificially enlarging X, a subset of B^n, into {(x, 1-x); x in X}, a subset of B^{2n}.
Now we consider another special subclass of polynomials, DT_d^n of P_d^n, called decision trees. These are polynomials which have expansions of type (1) where all coefficients c_I in {-1, 1} and for every x in X exactly one monomial x_I, I in G, 'fires', i.e.
exactly one of the numbers x_I(x) equals 1 and all the others are 0.
Eq. (1) shows that each decision tree can be expressed as half of a difference between two
disjunctive normal forms such that for any given input, one and only one of the conjunctions
comprising them will be true. There exists also an obvious link to popular decision trees (on
Boolean variables) used for classification in machine learning, cf. [4, 12]. Here the depth
of a leaf equals the degree of the corresponding monomial, and the coefficient c_I in {-1, 1}
corresponds to the class associated with the leaf.
1 Such binary polynomials are widely used under the name of score tables, e.g. typically loan
applications are assessed by financial institutions by an evaluation of such score tables.
3 Reproducing Kernel Hilbert Space and Risk
Kernel The next step is to map the complexity measure applied to decision trees, such as
depth or number of leaves, to a Reproducing Kernel Hilbert Space (RKHS), as used in Support Vector machines. This is defined as H = P_d^n with the scalar product corresponding
to the norm defined via the quadratic form

||f||^2 = Sum_{I in G} omega_{|I|} c_I^2,   (4)

where the omega_j > 0 are complexity weights for each degree of the polynomials and the coefficients c_I are the coefficients of expansion (1).
Lemma 2 (Existence of Kernel) The RKHS kernel realizing the dot product corresponding to the quadratic form (4) on P_d^n has the following efficient functional
form:

k(x, x') = Sum_{j=1}^{d} omega_j^{-1} C(x.x', j),   (5)

where C(s, j) denotes the binomial coefficient 's choose j'.
Proof The norm is well defined by (4) for all f in P_d^n and the space P_d^n is
complete. Furthermore it is easy to check that (4) defines a homogeneous quadratic form
on P_d^n. Via the polarization identity we can reconstruct a bilinear form (dot product) from
(4). This gives us the desired Hilbert space. From [1] we obtain that there exists a unique
kernel corresponding to this dot product. The key observation for derivation of its form (5)
is that, given x, x' in X, there are exactly C(x.x', j) non-vanishing monomials of
the form x_{i_1} ... x_{i_j}(x) x_{i_1} ... x_{i_j}(x'), where 1 <= i_1 < ... < i_j <= n are positions
of 1's in the sequence (x_1 x'_1, ..., x_n x'_n).
Note that for the special case where omega_j = l^{-j} for some l > 0 and d = n, (5) simply leads
to a binomial expansion and we obtain

k(x, x') = (1 + l)^{x.x'} - 1.   (6)

The larger l, the less severely we will penalize higher order polynomials, which provides
us with an effective means of controlling the complexity of the estimates. Note that
the closed form (6) holds for d = n; for d < n one uses the truncated sum (5).
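As an illustration, a small sketch (ours, not from the paper) evaluating the truncated kernel (5) and the closed form (6); omega is a callable returning the weight for each degree:

from math import comb

def boolean_kernel(x, z, d, omega):
    """k(x, z) = sum_{j=1}^{d} omega(j)**-1 * C(x.z, j) for binary tuples x, z."""
    s = sum(xi & zi for xi, zi in zip(x, z))   # x.z = number of shared 1s
    return sum(comb(s, j) / omega(j) for j in range(1, d + 1))

def dnf_kernel(x, z, l):
    """Closed form for omega_j = l**-j and d = n: (1 + l)**(x.z) - 1."""
    s = sum(xi & zi for xi, zi in zip(x, z))
    return (1.0 + l) ** s - 1.0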
Due to the choice of the c_I in DNF_d^n and DT_d^n we obtain

||f||^2 = omega_0 + 4 Sum_{I in G} omega_{|I|} for f in DNF_d^n   and   ||f||^2 = Sum_{I in G} omega_{|I|} for f in DT_d^n.
Next we introduce regularized risk functionals. They follow the standard assumptions made
in soft-margin SVM and regularization networks.
For our training set (x_i, y_i), i = 1, ..., m, of size m and a regularization constant lambda > 0 we define

R(f, lambda) := lambda ||f||^2 + Sum_{i=1}^{m} (1 - y_i f(x_i))^2,
R_1(f, lambda) := lambda ||f||^2 + Sum_{i=1}^{m} (1 - y_i f(x_i))_+,

for every f in P_d^n, where (xi)_+ := max(0, xi) for every real xi.
The first risk is typically used by regularization networks [8], the other by support vector
machines [5]. Note that for all f in P_d^n we have R(f, lambda) >= R_1(f, lambda). Furthermore, if
f in DNF_d^n or DT_d^n, then |f(x_i)| >= 1 and hence

R(f, lambda) >= R_1(f, lambda) >= lambda ||f||^2 + 2 Err(f),   (7)

where

Err(f) := #{i : y_i is not sgn f(x_i)}   (8)

denotes the number of classification errors (on the training set).
Note that in (7) equalities hold throughout for f in DT_d^n, and in such a case the risks
are fully determined by the depths of the leaves of the decision tree and the number of
classification errors. Furthermore, in the particular case of decision trees with all coefficients omega_j = 1, i.e. when ||f||^2 equals the number of leaves of the decision tree
f in DT_d^n, the regularized risks R(f, lambda) = R_1(f, lambda) are exactly equal to the 'cost complexity' employed to prune decision trees by the CART algorithm [4]. In other words, the basis
of the pruning algorithm in CART is the minimisation of the regularised risk in the class
of subtrees of the maximal tree, with the regularisation constant lambda selected by a heuristic
applied to a validation set.
Our reasoning in the following relies on the idea that if we can find some function f in P_d^n
which minimizes R(f, lambda) or R_1(f, lambda), then the
minimizer of the risk functionals, when chosen from the more restrictive set DT_d^n or DNF_d^n, must have a risk functional
at least as large as the one found by optimizing over
P_d^n. This can then be translated into
a lower bound on the complexity of f since DT_d^n and DNF_d^n are subsets of P_d^n.
4 Complexity Bounds
The last part missing to establish a polynomial-time device to lower-bound the required
complexity of a logical formula is to present actual algorithms for minimizing R(f, lambda) or
R_1(f, lambda). In this section we will study two such methods: the kernel perceptron and the
maximum margin perceptron, and establish bounds on execution time and regularized risk.
Kernel Perceptron Test The lambda-perceptron learning algorithm is a direct modification
of the ordinary linear perceptron learning rule. In the particular case of lambda = 0 it becomes
the ordinary perceptron learning rule in the feature space. For lambda > 0 it implements the
perceptron learning rule in the extended feature space; cf. [7, 6] for details.
Algorithm 1 Regularized kernel perceptron (lambda-perceptron)
Given: a Mercer kernel k and a constant lambda >= 0.
Initialize: alpha_i = 0 for i = 1, ..., m.
while an update is possible do
    find i such that y_i (Sum_j alpha_j y_j k(x_j, x_i) + lambda alpha_i y_i) <= 0, then update:
        alpha_i <- alpha_i + 1
end while
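A compact sketch of Algorithm 1 (our own illustration; the lam term on the diagonal implements the extended-feature-space trick mentioned above, and the update condition is our reading of the garbled pseudo-code):

import numpy as np

def lambda_perceptron(K, y, lam, max_updates=10_000):
    """Kernel perceptron on kernel matrix K with regularization lam >= 0.

    Returns update counts alpha; the classifier is
    f(x) = sum_i alpha[i] * y[i] * k(x_i, x).
    """
    m = len(y)
    alpha = np.zeros(m)
    Kt = K + lam * np.eye(m)          # extended feature space: k + lam*delta_ij
    for _ in range(max_updates):
        margins = y * (Kt @ (alpha * y))
        i = int(np.argmin(margins))
        if margins[i] > 0:            # no update possible: all points correct
            break
        alpha[i] += 1.0
    return alpha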
We introduce the special notation

f_alpha(x) := Sum_{i=1}^{m} alpha_i y_i k(x_i, x) for every x in X,   ||alpha||^2 := Sum_{i=1}^{m} alpha_i^2,

for coefficient vectors alpha = (alpha_1, ..., alpha_m). Note that f_alpha is in P_d^n.
A modification of the standard proof of convergence of the linear perceptron [11] combined
with the extended feature space trick [13] gives the following result.
Theorem 3 Assume that the coefficients alpha were generated after the t-th update
of the lambda-perceptron, and let rho := max_i k(x_i, x_i) + lambda. Then the number of updates satisfies
t <= rho / Delta^2, and the quantities accumulated by the algorithm yield a computable lower bound
of the form (9) on the regularized risk R_1(f, lambda) of every f in P_d^n. [The explicit inequality (9) is unrecoverable from the source text.]

Note that Delta defined above is the maximal margin of separation of the training data by
polynomials from P_d^n (treated as elements of the RKHS).
Maximum Margin Perceptron Test Below we state formally the soft-margin version of
the maximal margin perceptron algorithm. This is a simplified (homogeneous) version of the
algorithm introduced in [9].
Algorithm 2 Greedy Maximal Margin Perceptron (λ-MMP)
Given: ε > 0, λ ≥ 0 and a Mercer kernel k.
Initialize: α_i = 1/m for i = 1, ..., m, and w = Σ_i α_i y_i φ̃(x_i), where φ̃ is the feature map of the extended kernel k̃(x_i, x_j) = k(x_i, x_j) + λ δ_ij.
while max_i ( ‖w‖² − y_i ⟨w, φ̃(x_i)⟩ ) > ε ‖w‖² do
    for every i = 1, ..., m do
        compute the exact line-search step τ_i ∈ [0, 1] and the decrease G_i of ‖w‖²
        for the candidate update w ← (1 − τ_i) w + τ_i y_i φ̃(x_i);
    end for
    find i* = argmax_i G_i, then set w ← (1 − τ_{i*}) w + τ_{i*} y_{i*} φ̃(x_{i*})
    and rescale α accordingly;
end while
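For illustration only, here is one standard Gilbert-style realization of the greedy step in Python. It is consistent with the description above but is our simplification, not the exact procedure of [9]: w is kept as a convex combination Σ_i α_i y_i φ̃(x_i) and ‖w‖² is greedily reduced, its minimum over the simplex giving the (soft) maximum margin direction.

import numpy as np

def greedy_mmp(K, y, lam, eps=1e-3, max_iter=10_000):
    """Greedy maximal margin perceptron sketch (homogeneous, 2-norm soft margin).

    Operates on the extended Gram matrix of the signed points y_i * phi~(x_i),
    i.e. G[i, j] = y_i * y_j * k(x_i, x_j) + lam * delta_ij.
    """
    m = len(y)
    G = (y[:, None] * y[None, :]) * K + lam * np.eye(m)
    alpha = np.full(m, 1.0 / m)          # start at the centroid
    for _ in range(max_iter):
        g = G @ alpha                    # g[i] = <w, y_i phi~(x_i)>
        wn = float(alpha @ g)            # ||w||^2
        i = int(np.argmin(g))            # vertex promising the biggest decrease
        if wn - g[i] <= eps * wn:        # near-optimality certificate: stop
            break
        # exact line search for w <- (1 - tau) * w + tau * y_i phi~(x_i)
        denom = wn - 2.0 * g[i] + G[i, i]
        tau = min(1.0, max(0.0, (wn - g[i]) / max(denom, 1e-12)))
        alpha *= (1.0 - tau)
        alpha[i] += tau
    return alpha

The resulting classifier is x ↦ sign(Σ_i α_i y_i k(x_i, x)), and ‖w‖ approaches from above the margin quantity that enters the bounds of Theorem 4.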
The proof of the following theorem uses the extended feature space [13].
Theorem 4 Given ε > 0 and λ ≥ 0. Assume that the vector w = Σ_i α_i y_i φ̃(x_i) was generated after the t-th iteration of the "while loop" of the λ-MMP learning rule. Then
    t ≤ 2 (R² + λ) / (ε² ‖w‖²),                                          (10)
    R^1_λ[f] ≥ R^2_λ[f] ≥ (1 − ε) λ / ‖w‖²                               (11)
for every f ∈ pol_Θ. If the algorithm halts after the t-th update, then
    (1 − ε) ‖w‖ ≤ ρ ≤ ‖w‖.                                               (12)
Note that condition (10) ensures the convergence of the algorithm in finite time. For λ = 0 the above theorem ensures that the solution generated by Algorithm 2 converges to the (hard) maximum margin classifier. Further, it can be shown that the bound (11) holds for every α = (α_i) such that each α_i ≥ 0 and Σ_i α_i = 1.
Bounds on classification error The task of finding a linear perceptron minimizing the
number of classification errors on the training set is known to be NP-hard. On this basis
it is reasonable to expect that finding a decision tree or disjunctive normal form of upper
bounded complexity and minimizing the number of errors is also hard. In this section we
provide a lower bound on the number of errors for such classifiers.
The following estimates on E(f), i.e. the number of classification errors (8), can be derived from Theorems 3 and 4:
Theorem 5 Let λ > 0 and f ∈ DT_Θ. If the coefficients α = (α_1, ..., α_m) have been generated after the t-th iteration of the "while loop" of the λ-perceptron learning rule, then
    E(f) ≥ λ t² / (‖f_α‖² + λ ‖α‖²) − λ Ω[f].                            (13)
On the other hand, if w = Σ_i α_i y_i φ̃(x_i) has been generated after the t-th iteration of the "while loop" of the λ-MMP learning rule, then
    E(f) ≥ (1 − ε) λ / ‖w‖² − λ Ω[f],                                    (14)
and if the algorithm has halted, then, using (12),
    E(f) ≥ (1 − ε)² λ / ρ² − λ Ω[f].                                     (15)
Additionally, the estimate (14) holds for every α = (α_i) such that each α_i ≥ 0 and Σ_i α_i = 1. Note that Σ_i α_i equals t in (13), while it is 1 in (14). The following result is derived from some recent results of Ben-David and Simon [2] on efficient learning of perceptrons.
Theorem 6 Given ε > 0 and an integer Θ ≥ 1. There exists an algorithm which runs in time polynomial in both the input dimension and the number of training samples m that, given the labelled training sample (x_i, y_i), i = 1, ..., m, outputs a polynomial f ∈ pol_Θ such that E(f) ≤ 2 E(f′) for every f′ ∈ DT_Θ ∪ DNF_Θ.
Following [2] we give an explicit formulation of the algorithm: for each subset of ⌈2/ε²⌉ elements of the training set, find the maximal margin hyperplane, if one exists. Using the standard quadratic programming approach this can be done in time polynomial in both the input dimension and m [3]. Next, define w ∈ Q as the vector of the hyperplane with the lowest error rate on the whole training set. Finally, set f := ⟨w, φ(·)⟩ ∈ pol_Θ.
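A direct Python transcription of this enumeration (a sketch: the subset size s is left as a parameter, and scikit-learn's SVC with a large C stands in for the hard-margin quadratic program):

import itertools
import numpy as np
from sklearn.svm import SVC

def best_subset_hyperplane(X, y, s):
    """For every subset of s training points, fit a maximal margin hyperplane
    and keep the one with the lowest error rate on the whole training set."""
    best_err, best_clf = np.inf, None
    for idx in itertools.combinations(range(len(y)), s):
        idx = list(idx)
        if len(set(y[idx])) < 2:          # a margin needs both classes present
            continue
        clf = SVC(kernel="linear", C=1e6).fit(X[idx], y[idx])
        err = float(np.mean(clf.predict(X) != y))
        if err < best_err:
            best_err, best_clf = err, clf
    return best_clf, best_err

Since there are O(m^s) subsets, the loop is polynomial in m for any constant s, which is what Theorem 6 needs.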
5 Experimental Results and Discussion
We have used a standard machine learning benchmark, the noisy 7 bit LED display for the 10 digits, 0 through 9, originally introduced in [4]. We generated 500 examples for training and 5000 for independent testing, under the assumption of a 10% probability of a bit being reversed. The task was to discriminate between two classes, digits 0-4 and digits 5-9. Each 7-bit "noisy digit" data vector (x_1, ..., x_7) was complemented by an additional 7-bit vector (1 − x_1, ..., 1 − x_7) to ensure that our Standing Assumption of Section 2 holds true.
For the sake of simplicity we used fixed complexity weights c_i = 1 and λ = 1, which for a decision tree f ∈ DT_Θ gives a simple formula for the risk:
    R^1_λ[f] = R^2_λ[f] = [number of leaves] + [number of errors].
Four different algorithms have been applied to this data: decision trees, version C4.5 [12] (available from www.cse.unsw.edu.au/~quinlan/); the regularized kernel perceptron (Algorithm 1), with the generated coefficients scaled using the number of updates to convergence; the greedy maximal margin classifier (Algorithm 2); and the mask perceptron [10], which for this data generates a polynomial f ∈ pol_Θ using some greedy search heuristics. Table 1 gives the experimental results.
Table 1: Results for recognition of two groups of digits on the faulty LED display.

Algorithm        Risk (no. of leaves/SV/terms)          Error rate %: train/test
                 Θ = 1            Θ = 3                 Θ = 1         Θ = 3
Decision tree    110 (4 leaves)   80 (17 leaves)        21.3 / 22.9   12.0 / 15.8
Kernel SVM       44.4 (413 SV)    40.8 (382 SV)         12.2 / 15.1   11.2 / 14.8
Kernel percep.   53.1 (294 SV)    54.9 (286 SV)         11.8 / 16.3   13.8 / 17.1
Mask percep.     53.2 (10 terms)  49.1 (26 terms)       12.8 / 15.7   11.8 / 15.6
The lower bounds on risk from the maximal margin criterion (Eq. 11) are 44.3 and 40.7 for Θ = 1 and Θ = 3, respectively. Similarly, the lower bounds on risk from the kernel perceptron criterion (Eq. 9) were 39.7 and 36.2, respectively. Risks for the SVM solutions approach this bound, and for the kernel perceptron they are reasonably close. Comparison with the risks obtained for decision trees shows that our lower bounds are meaningful (for the "un-pruned" decision trees the risks were only slightly worse). The mask perceptron results show that simple (low number of terms) polynomial solutions with risks approaching our lower bounds can be practically found.
The Bayes-optimal classifier can be evaluated on this data set, since we know explicitly the distribution from which the data is drawn. Its error rates are 11.2% and 13.8% on the training and test sets, respectively. SVM solutions have error rates closest to the Bayesian classifier (the test error rate for Θ = 3 exceeds the one of the Bayes-optimal classifier by only 7%).
Boosted Decision Trees An obvious question to ask is what happens if we take a large enough linear combination of decision trees. This is the case, for instance, in boosting. We can show that pol_Θ is spanned by DT_Θ. In a nutshell, the proof relies on the partition of the identity into a sum of indicator functions of disjoint cells of the Boolean cube, each of which is implemented by a decision tree, and on solving this expansion for f, where the remainder turns out to be a decision tree. This means that in the limit, boosting decision trees finds a maximum margin solution in pol_Θ, a goal more directly achievable via a maximum margin perceptron on pol_Θ.
Conclusion We have shown that kernel methods with their analytical tools are applicable well outside of their traditional domain, namely in the area of propositional logic, which traditionally has been an area of discrete, combinatorial rather than continuous analytical methods. The constructive lower bounds we proved offer a fresh approach to some seemingly intractable problems. For instance, such bounds can be used as points of reference for practical applications of inductive techniques such as decision trees.
The use of Boolean kernels introduced here allows a more insightful comparison of the performance of logic-based and analytical, linear machine learning algorithms.
This contributes to the research in the theory of learning systems, as illustrated by the result on the existence of a polynomial time algorithm for estimating the minimal number of training errors for decision trees and disjunctive normal forms.
A potentially more practical link, to boosted decision trees and their convergence to maximum margin solutions, has to be investigated further. The current paper sets the foundations for such research.
Boolean kernels can potentially stimulate more accurate (kernel) support vector machines by providing a more intuitive construction of kernels. This is the subject of ongoing research.
Acknowledgments A.K. acknowledges permission of the Chief Technology Officer, Telstra, to publish this paper. A.S. was supported by a grant of the DFG Sm 62/1-1. Parts of this work were supported by the ARC and an R&D grant from Telstra. Thanks to P. Sember and H. Ferra for help in the preparation of this paper.
References
[1] N. Aronszajn. Theory of reproducing kernels. Transactions of the American Mathematical Society, 68:337-404, 1950.
[2] S. Ben-David and H. U. Simon. Efficient learning of linear perceptrons. In T. K. Leen, T. G. Dietterich, and V. Tresp, editors, Advances in Neural Information Processing Systems 13, pages 189-195, Cambridge, MA, 2001. MIT Press.
[3] D. P. Bertsekas. Nonlinear Programming. Athena Scientific, Belmont, MA, 1995.
[4] L. Breiman, J. H. Friedman, R. A. Olshen, and C. J. Stone. Classification and Regression Trees. Wadsworth Int., Belmont, CA, 1984.
[5] C. Cortes and V. Vapnik. Support vector networks. Machine Learning, 20:273-297, 1995.
[6] N. Cristianini and J. Shawe-Taylor. An Introduction to Support Vector Machines and other kernel-based learning methods. Cambridge University Press, Cambridge, 2000.
[7] Y. Freund and R. E. Schapire. Large margin classification using the perceptron algorithm. In J. Shavlik, editor, Machine Learning: Proceedings of the Fifteenth International Conference, San Francisco, CA, 1998. Morgan Kaufmann.
[8] F. Girosi, M. Jones, and T. Poggio. Regularization theory and neural networks architectures. Neural Computation, 7(2):219-269, 1995.
[9] A. Kowalczyk. Maximal margin perceptron. In A. Smola, P. Bartlett, B. Schölkopf, and D. Schuurmans, editors, Advances in Large Margin Classifiers, pages 61-100, Cambridge, MA, 2000. MIT Press.
[10] A. Kowalczyk and H. Ferrà. Developing higher-order networks with empirically selected units. IEEE Transactions on Neural Networks, 5:698-711, 1994.
[11] A. B. Novikoff. On convergence proofs on perceptrons. Symposium on the Mathematical Theory of Automata, 12:615-622, 1962.
[12] J. R. Quinlan. Simplifying decision trees. Int. J. Man-Machine Studies, 27:221-234, 1987.
[13] J. Shawe-Taylor and N. Cristianini. Margin distribution and soft margin. In A. J. Smola, P. L. Bartlett, B. Schölkopf, and D. Schuurmans, editors, Advances in Large Margin Classifiers, pages 349-358, Cambridge, MA, 2000. MIT Press.
1,049 | 1,959 | EM-DD: An Improved Multiple-Instance
Learning Technique
Qi Zhang
Department of Computer Science
Washington University
St. Louis, MO 63130-4899
Sally A. Goldman
Department of Computer Science
Washington University
St. Louis, MO 63130-4899
qz@cs.wustl.edu
sg@cs.wustl.edu
Abstract
We present a new multiple-instance (MI) learning technique (EM-DD) that combines EM with the diverse density (DD) algorithm.
EM-DD is a general-purpose MI algorithm that can be applied with
boolean or real-value labels and makes real-value predictions. On
the boolean Musk benchmarks, the EM-DD algorithm without any
tuning significantly outperforms all previous algorithms. EM-DD
is relatively insensitive to the number of relevant attributes in the
data set and scales up well to large bag sizes. Furthermore, EM-DD provides a new framework for MI learning, in which the MI
problem is converted to a single-instance setting by using EM to
estimate the instance responsible for the label of the bag.
1
Introduction
The multiple-instance (MI) learning model has received much attention. In this
model, each training example is a set (or bag) of instances along with a single
label equal to the maximum label among all instances in the bag. The individual
instances within the bag are not given labels. The goal is to learn to accurately
predict the label of previously unseen bags. Standard supervised learning can be
viewed as a special case of MI learning where each bag holds a single instance. The
MI learning model was originally motivated by the drug activity prediction problem
where each instance is a possible conformation (or shape) of a molecule and each
bag contains all likely low-energy conformations for the molecule. A molecule is
active if it binds strongly to the target protein in at least one of its conformations
and is inactive if no conformation binds to the protein. The problem is to predict
the label (active or inactive) of molecules based on their conformations.
The MI learning model was first formalized by Dietterich et al. in their seminal paper [4], in which they developed MI algorithms for learning axis-parallel rectangles (APRs), and they also provided two benchmark "Musk" data sets. Following this work, there has been a significant amount of research directed towards the development of MI algorithms using different learning models [2,5,6,9,12]. Maron and Ratan [7] applied the multiple-instance model to the task of recognizing a person from a series of images that are labeled positive if they contain the person and negative otherwise. The same technique was used to learn descriptions of natural scene images (such as a waterfall) and to retrieve similar images from a large image database using the learned concept [7]. More recently, Ruffo [11] has used this model for data mining applications.
While the musk data sets have boolean labels, algorithms that can handle real-value labels are often desirable in real-world applications. For example, the binding affinity between a molecule and receptor is quantitative, and hence a real-value classification of binding strength is preferable to a binary one. Most prior research on MI learning is restricted to concept learning (i.e. boolean labels). Recently, MI learning with real-value labels has been performed using extensions of the diverse density (DD) and k-NN algorithms [1] and using MI regression [10].
In this paper , we present a general-purpose MI learning technique (EM-DD) that
combines EM [3] with the extended DD [1] algorithm. The algorithm is applied
to both boolean and real-value labeled data and the results are compared with
corresponding MI learning algorithms from previous work. In addition, the effects
of the number of instances per bag and the number of relevant features on the
performance of EM-DD algorithm are also evaluated using artificial data sets . A
second contribution of this work is a new general framework for MI learning of
converting the MI problem to a single-instance setting using EM. A very similar
approach was also used by Ray and Page [10].
2
Background
Dietterich et al. [4], presented three algorithms for learning APRs in the MI model.
Their best performing algorithm (iterated-discrim) , starts with a point in the feature
space and "grows" a box with the goal of finding the smallest box that covers at
least one instance from each positive bag and no instances from any negative bag.
The resulting box was then expanded (via a statistical technique) to get better
results. However, the test data from Musk1 was used to tune the parameters of the algorithm. These parameters are then used for Musk1 and Musk2.
Auer [2] presented an algorithm, MULTINST, that learns using simple statistics to
find the halfspaces defining the boundaries of the target APR and hence avoids some
potentially hard computational problems that were required by the heuristics used
in the iterated-discrim algorithm. More recently, Wang and Zucker [11] proposed a
lazy learning approach by applying two variant of the k nearest neighbor algorithm
(k-NN) which they refer to as citation-kNN and Bayesian k-NN. Ramon and De
Raedt [9] developed a MI neural network algorithm.
Our work builds heavily upon the Diverse Density (DD) algorithm of Maron and
Lozano-Perez [5,6]. When describing the shape of a molecule by n features, one can view each conformation of the molecule as a point in an n-dimensional feature space. The diverse density at a point p in the feature space is a probabilistic measure of both how many different positive bags have an instance near p, and how far the negative instances are from p. Intuitively, the diverse density of a hypothesis h is just the likelihood (with respect to the data) that h is the target. A high diverse density indicates a good candidate for a "true" concept.
We now formally define the general MI problem (with boolean or real-value labels) and the DD likelihood measurement originally defined in [6] and extended to real-value labels in [1]. Let D be the labeled data, which consists of a set of m bags B = {B_1, ..., B_m} and labels L = {ℓ_1, ..., ℓ_m}, i.e., D = {<B_1, ℓ_1>, ..., <B_m, ℓ_m>}. Let bag B_i = {B_i1, ..., B_ij, ..., B_in}, where B_ij denotes the j-th instance in bag i. Assume the labels of the instances in B_i are ℓ_i1, ..., ℓ_ij, ..., ℓ_in. For boolean labels, ℓ_i = ℓ_i1 ∨ ℓ_i2 ∨ ... ∨ ℓ_in, and for real-value labels, ℓ_i = max{ℓ_i1, ℓ_i2, ..., ℓ_in}. The diverse density of a hypothesized target point h is defined as
    DD(h) = Pr(h | D) = Pr(D | h) Pr(h) / Pr(D) = Pr(B, L | h) Pr(h) / Pr(B, L).
Assuming a uniform prior on the hypothesis space and independence of <B_i, ℓ_i> pairs given h, using Bayes' rule, the maximum likelihood hypothesis, h_DD, is defined as:
    h_DD = argmax_{h ∈ H} Pr(D | h) = argmax_{h ∈ H} ∏_{i=1}^m Pr(B_i, ℓ_i | h) = argmin_{h ∈ H} Σ_{i=1}^m (−log Pr(ℓ_i | h, B_i)),
where Label(B_i | h) is the label that would be given to B_i if h were the correct hypothesis. As in the extended DD algorithm [1], Pr(ℓ_i | h, B_i) is estimated as 1 − |ℓ_i − Label(B_i | h)|. When the labels are boolean (0 or 1), this formulation is exactly the most-likely-cause estimator used in the original DD algorithm [5].
most applications t he influence each feature has on t he label varies greatly. This
variation is modeled in the DD algorithm by associating with each attribute an
(unknown) scale factor . Hence the target concept really consists of two values per
dimension , the ideal attribute value and the scale value. Using the assumption that
binding strength drops exponentially as the similarity between the conform ation
to the ideal shape increases , the following generative model was introduced by
Maron and Lozano-Perez [6] for estimating the label of bag B i for hypothesis h =
{h 1 , ... , h n , Sl , ... , sn} :
Label(Bi I h) =max{ ex P [- t (Sd(Bijd - hd)) 2]}
J
d=l
(1)
where Sd is a scale factor indicating the importance of feature d, h d is the feature
value for dimension d, and B ijd is the feature value of instance B ij on dimension d.
Let NLDD(h , D) = 2::7=1(-log Pr(?i I h , B i )) , where NLDD denote the negative
logarit hm of DD. The DD algorithm [6] uses a two-step gradient descent search to
find a value of h that minimizes NLDD (and hence maximizes DD).
Ray and Page [10] developed multiple-instance regression algorithm which can also
handle real-value labeled data. They assumed an underlying linear model for the
hypothesis and applied the algorithm to some artificial data. Similar to the current
work, they also used EM to select one instance fro m each bag so multiple regression
can be applied to MI learning.
3
Our algorithm: EM-DD
We now describe EM-DD and compare it with the original DD algorithm. One
reason why MI learning is so difficult is the ambiguity caused by not knowing
which instan ce is the important one. The basic idea behind EM-DD is to view
the knowledge of which instance corresponds to the label of th e bag as a missing
attribute which can be estimated using EM approach in a way similar to how EM
is used in the MI regression [10]. EM-DD starts with some initial guess of a target
point h obtained in the standard way by trying points from positive bags, then
repeat edly performs the following two steps that combines EM with DD to search
for the maximum likelihood hypothesis. In the first step (E-step) , the current
hypothesis h is used to pick one instance from each bag which is most likely (given
our generative model) to be the one responsible for the label given to the bag. In
the second step (M -step), we use the two-step gradient ascent search (quasi-newton
search dfpmin in [8]) of the standard DD algorithm to find a new hi that maximizes
DD(h). Once this maximization step is completed , we reset the proposed target
h to hi and return to the first step until the algorithm converges. Pseudo-code for
EM-DD is given in Figure 1.
We now briefly provide intuition as to why EM-DD improves both the accuracy and
computation time of the DD algorithm. Again, the basic approach of DD is to use
a gradient search to find a value of h that maximizes DD(h). In every search step ,
the DD algorithm uses all points in each bag and hence the maximum that occurs
in Equation (1) must be computed. The prior diverse density algorithms [1,5,6,7]
used a softmax approximation for the maximum (so that it will b e differentiable),
which dramatically increases the computation complexity and introduces additional
error based on the parameter selected in softmax. In comparison, EM-DD converts
the multiple-instance data to single-instance data by removing all but one point per
bag in the E -step, which greatly simplifies the search step since the maximum that
occurs in Equation (1) is removed in the E -step. The removal of softmax in EMDD greatly decreases the computation time. In addition, we believe that EM-DD
helps avoid getting caught in local minimum since it makes major changes in the
hypothesis when it switches which point is selected from a bag.
We now provide a sketch of the proof of convergence of EM-DD. Note that at
each iteration t , given a set of instances selected in the E-step, the M-step will
find a unique hypothesis (h t ) and corresponding DD (ddt). At iteration t + 1, if
dd t +1 ::; ddt , the algorithm will terminate. Otherwise, dd t +1 > ddt , which means
that a different set of instances are selected. For the iteration to continue, the DD
will decrease monotonically and the set of instances selected can not repeat. Since
there are only finite number of sets to instances that can be selected at the E-step ,
the algorithm will terminate after a finite number of iterations.
However, there is no guarantee on the convergence rate of EM algorithms. We
found that the NLDD(h , D) usually decreases dramatically after the first several
iterations and then begins to flatten out. From empirical tests we found that it is
often beneficial to allow NLDD to increase slightly to escape a local minima and thus
we used the less restrictive termination condition: Idd 1 - dd oI < 0.01 . dd o or the
number of iterations is greater than 10. This modification reduces the training time
while gaining comparable results. However, for this modification no convergence
proof can be given without restricting the number of iterations.
4
Experimental results
In this section we summarize our experimental results. We begin by reporting our results for the two musk benchmark data sets provided by Dietterich et al. [4]. These data sets contain 166-feature vectors describing the surface of low-energy conformations of 92 molecules for Musk1 and 102 molecules for Musk2, where roughly half of the molecules are known to smell musky and the remainder are not. The Musk1 data set is smaller both in having fewer bags (i.e., molecules) and in having many fewer instances per bag (an average of 6.0 for Musk1 versus 64.7 for Musk2). Prior to this work, the highly-tuned iterated-discrim algorithm of Dietterich et al. still gave the best performance on both Musk1 and Musk2. Maron and Lozano-Perez [6]
Main(k, D)
    partition D = {D_1, D_2, ..., D_10};        // 10-fold cross-validation
    for (i = 1; i <= 10; i++)
        D_t = D − D_i;                          // D_t training data, D_i validation data
        pick k random positive bags B_1, ..., B_k from D_t;
        let H_0 be the union of all instances from the selected bags;
        for every instance I_j ∈ H_0
            h_j = EM-DD(I_j, D_t);
        e_i = min_{0 ≤ j ≤ |H_0|} { error(h_j, D_i) };
    return avg(e_1, e_2, ..., e_10);

EM-DD(I, D_t)
    Let h = {h_1, ..., h_n, s_1, ..., s_n};     // initial hypothesis
    For each dimension d = 1, ..., n
        h_d = I_d;
        s_d = 0.1;
    nldd_0 = +∞;
    nldd_1 = NLDD(h, D_t);
    while (nldd_1 < nldd_0)
        for each bag B_i ∈ D_t                  // E-step
            p_i = argmax_{B_ij ∈ B_i} Pr(B_ij ∈ h);
        h′ = argmax_{h′ ∈ H} ∏_i Pr(ℓ_i | h′, p_i);   // M-step
        nldd_0 = nldd_1;
        nldd_1 = NLDD(h′, D_t);
        h = h′;
    return h;

Figure 1: Pseudo-code for EM-DD, where k indicates the number of different starting bags used and Pr(B_ij ∈ h) = exp[−Σ_{d=1}^n (s_d(B_ijd − h_d))²]. Pr(ℓ_i | h, p_i) is calculated as either 1 − |ℓ_i − Pr(p_i ∈ h)| (linear model) or exp[−(ℓ_i − Pr(p_i ∈ h))²] (Gaussian-like model), where Pr(p_i ∈ h) = max_{B_ij ∈ B_i} Pr(B_ij ∈ h).
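A compact executable sketch of the loop in Figure 1, with SciPy's quasi-Newton minimizer standing in for the dfpmin routine of [8]; packing h and the scale factors into a single parameter vector, and the choice of BFGS, are our assumptions.

import numpy as np
from scipy.optimize import minimize

def em_dd(start, bags, labels, n_feat, max_rounds=10):
    """EM-DD sketch: start is an instance (n_feat,) used as the initial h."""
    h, s = np.asarray(start, float).copy(), np.full(n_feat, 0.1)
    labels = np.asarray(labels, float)

    def nll(theta, picked):
        h_, s_ = theta[:n_feat], theta[n_feat:]
        p = np.array([np.exp(-((s_ * (x - h_)) ** 2).sum()) for x in picked])
        pr = np.clip(1.0 - np.abs(labels - p), 1e-12, 1.0)
        return float(-np.log(pr).sum())

    prev = np.inf
    for _ in range(max_rounds):
        # E-step: the most likely instance of each bag under the current (h, s)
        picked = [bag[np.argmin(((s * (bag - h)) ** 2).sum(axis=1))]
                  for bag in bags]
        # M-step: quasi-Newton search over (h, s) on the selected instances
        res = minimize(nll, np.concatenate([h, s]), args=(picked,),
                       method="BFGS")
        if res.fun >= prev:               # NLDD no longer decreasing: stop
            break
        prev = res.fun
        h, s = res.x[:n_feat], res.x[n_feat:]
    return h, s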
summarize the generally held belief that "The performance reported for iterated-discrim APR involves choosing parameters to maximize the test set performance and so probably represents an upper bound for accuracy on this (Musk1) data set." EM-DD without tuning outperforms all previous algorithms. To be consistent with the way in which past results have been reported for the musk benchmarks, we report the average accuracy of 10-fold cross-validation (which is the value returned by Main in Figure 1). EM-DD obtains an average accuracy of 96.8% on Musk1 and 96.0% on Musk2. A summary of the performance of different algorithms on the Musk1 and Musk2 data sets is given in Table 1. In addition, for both data sets there are no false negative errors using EM-DD, which is important for the drug discovery application, since the final hypothesis would be used to filter potential drugs, and a false negative error means that a potentially good drug molecule would not be tested; thus it is good to minimize such errors. As compared to the standard DD algorithm, EM-DD used only three random bags for Musk1 and two random bags for Musk2 (versus all positive bags used in DD) as the starting point of the algorithm. Also, unlike the results reported in [6], in which the threshold is tuned based on leave-one-out cross-validation, for our reported results the threshold value (of 0.5) is not tuned. More importantly, EM-DD runs over 10 times faster than DD on Musk1 and over 100 times faster when applied to Musk2.
Table 1: Comparison of performance on the Musk1 and Musk2 data sets, measured as the average accuracy across 10 runs using 10-fold cross-validation.

Algorithm                            Musk1 accuracy   Musk2 accuracy
EM-DD                                96.8%            96.0%
Iterated-discrim [4]                 92.4%            89.2%
Citation-kNN [11]                    92.4%            86.3%
Bayesian-kNN [11]                    90.2%            82.4%
Diverse density [6]                  88.9%            82.5%
Multi-instance neural network [9]    88.0%            82.0%
Multinst [2]                         76.7%            84.0%
In addition to its superior performance on the musk data sets, EM-DD can handle
real-value labeled data and produces real-value predictions. We present results
using one real data set (Affinity) 1 that has real-value labels and several artificial
data sets generated using the technique of our earlier work [1]. For these data sets,
we used as our starting points the points from the bag with the highest DD value.
The results are shown in Table 2. The Affinity data set has 283 features and 139 bags, with an average of 32.5 points per bag. Only 29 bags have labels that were high enough to be considered as "positive." Using the Gaussian-like version of our generative model we obtained a squared loss of 0.0185, and with the linear model we performed slightly better, with a loss of 0.0164. In contrast, using the standard diverse density algorithm the loss was 0.0421. EM-DD also gave much better performance than DD on two artificial data sets (160.166.1a-S and 80.166.1a-S) where both algorithms were used. The best result on the Affinity data was obtained using a version of citation-kNN [1] that works with real-value data, with a loss of 0.0124.
We think that the affinity data set is well-suited for a nearest neighbor approach in
that all of the negative bags have labels between 0.34 and 0.42 and so the actual
predictions for the negative bags are better with citation-kNN.
To study the sensitivity of EM-DD to the number of relevant attributes and the size
of the bags, tests were performed on artificial data sets with different number of
relevant features and bag sizes. As shown in Table 2, similar to the DD algorithm [1],
the performance of EM-DD degrades as the number of relevant features decreases.
This behavior is expected since all scale factors are initialized to the same value
and when most of the features are relevant less adjustment is needed and hence the
algorithm is more likely to succeed. In comparison to DD , EM-DD is more robust
against the change of the number of relevant features. For example, as shown in
Figure 2, when the number of relevant features is 160 out of 166, both EM-DD and
DD algorithms perform well with good correlation between the actual labels and
predicted labels. However, when the number of relevant features decreases to 80 ,
almost no correlation between the actual and predicted labels is found using DD ,
while EM-DD can still provide good predictions on the labels.
Intuitively, as the size of bags increases, more ambiguity is introduced to the data
and the p erformance of algorithms is expected to go down. However , somewhat
] Jonathan Greene from CombiChem provided us with the Affinity data set. However,
due to the proprietary nature of it we cannot make it publicly available.
2See Amar et al. [1] for a description of these two data sets.
Table 2: Performance on data with real-value labels, measured as squared loss.

Data set       # rel. features   # pts per bag   EM-DD    DD [1]
Affinity       -                 32.5            .0164    .0421
160.166.1a-S   160               4               .0014    .0052
160.166.1b-S   160               15              .0013
160.166.1c-S   160               25              .0012
80.166.1a-S    80                4               .0029    .1116
80.166.1b-S    80                15              .0023
80.166.1c-S    80                25              .0022
40.166.1a-S    40                4               .0038
40.166.1b-S    40                15              .0026
40.166.1c-S    40                25              .0037
surprisingly, the performance of EM-DD actually improves as the number of examples per bag increases. We believe that this is partly due to the fact that with few points per bag, the chance that a bad starting point has the highest diverse density is much higher than when the bags are large. In addition, in contrast to the standard diverse density algorithm, the overall time complexity of EM-DD does not go up as the size of the bags increases, since after the instance selection (E-step) the time complexity of the dominant M-step is essentially the same for data sets with different bag sizes. The fact that EM-DD scales up well to large bag sizes in both performance and running time is very important for real drug-discovery applications, in which the bags can be quite large.
5
Future directions
There are many avenues for future work. We believe that EM-DD can be refined to
obtain better performance by finding alternate ways to select the initial hypothesis
and scale factors. One option would be to use the result from a different learning
algorithm as the starting point then use EM-DD to refine the hypothesis. We are
currently studying the application of the EM-DD algorithm to other domains such
as content-based image retrieval. Since our algorithm is based on the diverse density
likelihood measurement we believe that it will perform well on all applications in
which the standard diverse density algorithm has worked well. In addition , EM-DD
and MI regression [10] presented a framework to convert multiple-instance data to single-instance data, where supervised learning algorithms can be applied. We are currently working on using this general methodology to develop new MI learning techniques based on supervised learning algorithms and EM.
Acknowledgments
The authors gratefully acknowledge the support of NSF grant CCR-9988314. We thank Dan Dooly for many useful discussions. We also thank Jonathan Greene, who provided us with the Affinity data set.
References
[1] Amar, R.A., Dooly, D.R., Goldman, S.A. & Zhang, Q. (2001). Multiple-Instance Learning of Real-Valued Data. Proceedings 18th International Conference on Machine Learning, pp. 3-10. San Francisco, CA: Morgan Kaufmann.
[2] Auer, P. (1997). On learning from multi-instance examples: Empirical evaluation of a theoretical approach. Proceedings 14th International Conference on Machine Learning,
[Figure 2 shows four scatter plots of predicted versus actual labels: 160.166.1a-S (DD), 80.166.1a-S (DD), 160.166.1a-S (EM-DD), and 80.166.1a-S (EM-DD); each panel plots Actual on the x-axis and Predicted on the y-axis over [0, 1].]
Figure 2: Comparison of EM-DD and DD on real-value labeled artificial data with different numbers of relevant features. The x-axis corresponds to the actual label and the y-axis gives the predicted label.
pp. 21-29. San Francisco, CA: Morgan Kaufmann.
[3] Dempster, A.P., Laird, N.M., & Rubin, D.B. (1977). Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B, 39(1): 1-38.
[4] Dietterich, T.G., Lathrop, R.H., & Lozano-Perez, T. (1997). Solving the multiple-instance problem with axis-parallel rectangles. Artificial Intelligence, 89(1-2): 31-71.
[5] Maron, O. (1998). Learning from Ambiguity. Doctoral dissertation, MIT, AI Technical Report 1639.
[6] Maron, O. & Lozano-Perez, T. (1998). A framework for multiple-instance learning. Neural Information Processing Systems 10. Cambridge, MA: MIT Press.
[7] Maron, O. & Ratan, A. (1998). Multiple-instance learning for natural scene classification. Proceedings 15th International Conference on Machine Learning, pp. 341-349. San Francisco, CA: Morgan Kaufmann.
[8] Press, W.H., Teukolsky, S.A., Vetterling, W.T., and Flannery, B.P. (1992). Numerical Recipes in C: the art of scientific computing. Cambridge University Press, New York, second edition.
[9] Ramon, J. & De Raedt, L. (2000). Multi instance neural networks. Proceedings of the ICML-2000 workshop on Attribute-Value and Relational Learning.
[10] Ray, S. & Page, D. (2001). Multiple-Instance Regression. Proceedings 18th International Conference on Machine Learning, pp. 425-432. San Francisco, CA: Morgan Kaufmann.
[11] Ruffo, G. (2000). Learning single and multiple instance decision trees for computer security applications. Doctoral dissertation, Department of Computer Science, University of Turin, Torino, Italy.
[12] Wang, J. & Zucker, J.-D. (2000). Solving the Multiple-Instance Learning Problem: A Lazy Learning Approach. Proceedings 17th International Conference on Machine Learning, pp. 1119-1125. San Francisco, CA: Morgan Kaufmann.
1,050 | 196 | A Computer Modeling Approach to Understanding
A computer modeling approach to understanding the
inferior olive and its relationship to the cerebellar
cortex in rats
Maurice Lee and James M. Bower
Computation and Neural Systems Program
California Institute of Technology
Pasadena, CA 91125
ABSTRACT
This paper presents the results of a simulation of the spatial relationship
between the inferior olivary nucleus and folium crus IIA of the lateral
hemisphere of the rat cerebellum. The principal objective of this
modeling effort was to resolve an apparent conflict between a proposed
zonal organization of olivary projections to cerebellar cortex suggested
by anatomical tract-tracing experiments (Brodal & Kawamura 1980;
Campbell & Armstrong 1983) and a more patchy organization apparent
with physiological mapping (Robertson 1987). The results suggest that
several unique features of the olivocerebellar circuit may contribute to
the appearance of zonal organization using anatomical techniques, but
that the detailed patterns of patchy tactile projections seen with
physiological techniques are a more accurate representation of the
afferent organization of this region of cortex.
1 INTRODUCTION
Determining the detailed anatomical structure of the nervous system has been a major
focus of neurobiology ever since anatomical techniques for looking at the fine structure
of individual neurons were developed more than 100 years ago (Ramón y Cajal 1911). In
more recent times, new techniques that allow labeling of the distant targets of groups of
neurons have extended this investigation to include studies of the topographic
relationships between different brain regions. In general, these so-called "tract-tracing"
techniques have greatly extended our knowledge of the interrelationships between neural
structures, often guiding and reinforcing the results of physiological investigations
(DeYoe & Van Essen 1988). However, in some cases, anatomical and physiological
techniques have been interpreted as producing conflicting results. One case, considered
here, involves the pattern of neuronal projections from the inferior olivary nucleus to the
cerebellar cortex. In this paper we describe the results of a computer modeling effort,
based on the structure of the olivocerebellar projection, intended to resolve this conflict.
Figure 1. a: Profile of the rat brain, showing three areas (Cx, cerebral cortex;
Po, pons; Tr, spinal trigeminal nucleus) that project to the cerebellum (Cb) via
both climbing fiber (CF) pathways through the inferior olive (10) and mossy
fiber (MF) pathways. b: Magnified. highly simplified view of the cerebellar
cortex, showing a Purkinje cell (P) being supplied with climbing fiber input,
directly, and mossy fiber input. through the granule cells (G). c: Zonal
organization of the olivocerebellar projection. Different shading patterns
represent input from different areas of the inferior olive. Adapted from
Campbell & Armstrong 1983. Circled area (crus llNcrus UB) is enlarged in
Figure 1d; bracketed area (anterior lobe) is enlarged in Figure Ie. d: Detail of
zonal organization. Dark areas represent bands of Purkinje cells that stain
positive for monoclonal antibody Zehrin I. According to Gravel et al. 1987,
these bands have boundaries similar to those resulting from partial tracer
injections in the inferior olive. Adapted from Gundappa-Sulur et al. 1989. e:
Patchy organization of the olivocerebellar projection (partial map). Different
shading patterns represent input through the olive from different body surfaces.
The horizontal and vertical scales are different. Adapted from Logan &
Robertson 1986.
A Computer Modeling Approach to Understanding
2 THE OLIVO CEREBELLAR SYSTEM
Purkinje cells, the principal neurons of the cerebellar cortex, are influenced by two major excitatory afferent projections to the cerebellum, the mossy fiber system and the climbing fiber system (Palay & Chan-Palay 1973). As shown in Figures 1a and 1b, mossy fibers
arise from many different nuclei and influence Purkinje cells through granule cells within
the cortex. Within the cortex the mossy fiber-granule cell-Purkinje cell circuit is
characterized by enormous divergence (a single mossy fiber may influence several
thousand Purkinje cells) and convergence (a single Purkinje cell may be influenced by
several hundred thousand mossy fibers). In contrast, as also shown in Figures la and Ib,
climbing fibers arise from a single source, the inferior olive, and exhibit severely limited
divergence (10-15 Purkinje cells) and convergence (I Purkinje cell).
Because the inferior olive is the sole source of the climbing fiber projection to the entire
cerebellar cortex, and each Purkinje cell receives only one climbing fiber, the spatial
organization of the olivocerebellar circuit has been the subject of a large research effort
(Brodal & Kawamura 1980). Much of this effort has involved anatomical tract-tracing
techniques in which injections of neuron ally absorbed substances are traced from the
inferior olive to the cerebellum or vice versa. Based on this work it has been proposed
that the entire cerebellum is organized as a series of strips or zones, oriented in a
parasagittal plane (Figures Ic, Id: Campbell & Armstrong 1983; Gravel et al. 1987). This
principle of organization has served as the basis for several functional speculations on the
role of the cerebellum in coordinating movements (Ito 1984; Oscarsson 1980).
Unfortunately, as suggested in the introduction, these anatomical results are somewhat at
odds with the pattern of organization revealed by detailed electrophysiological mapping
studies of olivary projections (Robertson 1987). Physiological results, summarized in
Figure Ie, suggest that rather than being strictly zone-like, the olivocerebellar projection
is organized more as a mosaic of parasagittally elongated patches.
3 THE MODEL
Our specific interests are with the tactilely responsive regions of the lateral hemispheres
of the rat cerebellum (Bower et al. 1981; Welker 1987), and the modeling effort
described here is a first step in using structural models to explore the functional
organization of this region. As with previous modeling efforts in the olfactory system
(Bower 1990), the current model is based on features of the anatomy and physiology of
the real system. In the following section we will briefly describe these features.
3.1 ANATOMICAL ORGANIZATION
Structure of the inferior olive. The inferior olive has a complex, highly folded
conformation (Gwyn et al. 1977). The portion of the olive simulated in the model
consists of a folded slab of 2520 olivary neurons with a volume of approximately 0.05 mm^3 (Figure 2a).
Afferent projections to the olive. While inputs of various kinds and origins converge
on this nucleus, we have limited those simulated here to tactile afferents from those
perioral regions known to influence the lateral cerebellar hemispheres (Shambes et al.
1978). These have been mapped to the olive following the somatotopically organized
pattern suggested by several previous experiments (Gellman et al. 1983).
Structure or the cerebellum. The cerebellum is represented in the model by a flat sheet
of 2520 Purkinje cells with an area of approximately 2 mm1 (Figure 2a). Within this
region. each Purkinje cell receives input from one. and only one. olivary neuron. Details
of Purlcinje cells at the cellular level have not been included in the current model.
Figure 2. a: Basic structure of the model. Folia crus IIA and crus IIB of the cerebellum and a cross section of the inferior olive are shown, roughly to scale.
The regions simulated in the model are outlined. Clusters of neighboring
olivary neurons project to parasagittal strips of Purkinje cells as indicated. This
figure also shows simulated correlation results similar to those in Figure 2b. b: Spatial structure of correlations among records of climbing fiber activity in crus
IIA. Sizes of filled circles represent cross-correlation coefficients with respect
to the "master" site (open circle). Sample cross-correlograms are shown for two
sites as indicated. The autocorrelogram for the "master" site is also shown.
Adapted from Sasaki et al. 1989.
3.2 PHYSIOLOGICAL ORGANIZATION
Spatially correlated patterns or activity. When the activities of multiple climbing
fibers are recorded from within cerebellar cortex, there is a strong tendency for climbing
fibers supplying Purkinje cells oriented parasagittally with respect to each other to be
correlated in their firing activity (Sasaki et al. 1989: Figure 2b). It has been suggested that
these correlations reflect the fact that direct electrotonic couplings exist between olivary
neurons (Llinas & Yarom 1981a, b; Benardo & Foster 1986). These physiological results
are simulated in two ways in the current model. First. neighboring olivary neurons are
electrotonically coupled, thus firing in a correlated manner. Second. small clusters of
olivary neurons have been made to project to parasagittally oriented strips of Purkinje
A Computer Modeling Approach to Understanding
cells. Under these constraints. the model replicates the parasagittal pattern of climbing
fiber activity found in certain regions of cerebellar cortex (compare Figures 2a and 2b).
Topography or cerebeUar afferents. As discussed above. this model is intended to
explore spatial and functional relationships between the inferior olive and the lateral
hemispheres of the rat cerebellum. Unfortunately. a physiological map of the climbing
fiber projections to this cerebellar region does not yet exist for the rat. However. a
detailed map of mossy fiber tactile projections to this region is available (Welker 1987).
As in the climbing fiber map in the anterior lobe (Robertson 1987; Figure Ie) and mossy
fiber maps in various areas in the cat (Kassel et al. 1984). representations of different
parts of the body surface are grouped into patches with adjacent patches receiving input
from nonadjacent peripheral regions. On the assumption that the mossy fiber and
climbing fiber maps coincide. we have based the modeled topography of the olivary
projection to the cerebellum on the well-described mossy fiber map (Figure 3a). In the
model, the smoothly varying topography of the olive is transformed to the patchy
organization of the cerebellar cortex through the projection pathways taken to the
cerebellum by different climbing fibers.
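The mapping logic can be illustrated with a toy script (entirely our construction, and far simpler than the GENESIS-based model described here): olivary cells carry a smoothly varying receptive-field map, small clusters of neighbors project to parasagittal strips, and a simulated tracer injection spanning adjacent folds of the folded olive therefore labels strips carrying several different receptive fields.

import numpy as np

n_oliv = 2520                                   # olivary cells along the slab
# 7 perioral receptive-field patches, mapped smoothly along the olive
fields = np.repeat(np.arange(7), n_oliv // 7)
# each cluster of 10 neighboring olivary cells drives one parasagittal strip
cluster = np.arange(n_oliv) // 10

# a tracer injection at one physical site; folding makes it span two
# distant stretches of the unrolled index space
injected = np.zeros(n_oliv, dtype=bool)
injected[300:420] = True                        # one fold
injected[1800:1920] = True                      # the fold lying against it

labeled_strips = np.unique(cluster[injected])
labeled_fields = np.unique(fields[injected])
print(f"{labeled_strips.size} parasagittal strips labeled, "
      f"covering receptive fields {labeled_fields.tolist()}")

Strips with different receptive fields end up co-labeled, which is the toy version of why tract tracing can suggest broad zones while physiological mapping resolves patches.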
Figure 3. a: Organization of receptive field map in simulated region of crus
IIA. Different shading patterns represent input from different perioral surfaces.
b: Simulated tract-tracing experiment. Left, tracer visualization (dark areas) in
the cerebellum. Right. tracer uptake (dark areas) in the inferior olive.
4 RESULTS: SIMULATION OF ZONAL ORGANIZATION
Having constructed the model to include each of the physiological features described
above. we proceeded to replicate anatomical tract-tracing experiments. This was done by
simulating the chemical labeling of neurons within restricted areas of inferior olive and
following their connections to the cerebellum. As in the biological experiments. in many
cases simulated injections included several folds of the olivary nucleus (Figure 3b). The
results (Figure 3b) demonstrate patterns of labeling remarkably similar to those seen with
real olivary injections in the rat (compare Figures Id and 3b).
5 CONCLUSIONS AND FURTHER WORK
These simulation results have demonstrated that a broadly parasagittal organization can
be generated in a model system which is actually based on a fine-grained patchy pattern
of afferent projections. Further, the simulations allow us to propose that the appearance
of parasagittal zonation may result from several unusual features of the olivary nucleus.
First, the folding characteristic of the inferior olive likely places neurons with different
receptive fields within a common area of tracer uptake in any given anatomical
experiment, resulting in co-labeling of functionally different regions. Second, the
tendency for local clusters of olivary neurons to project to parasagittal strips of Purkinje
cells could serve to extend tracer injection in the parasagittal direction, enhancing the
impression of parasagittal zones. This is further reinforced by the tendency of the
patches themselves to be somewhat elongated in the parasagittal plane. Finally, the
restricted resolution of the anatomical techniques could very well contribute to the overall
impression of parasagittal zonation by obscuring small, unlabeled regions more apparent
using physiological procedures. Modeling efforts currently under way will extend these
results to more than one cerebellar folium in an attempt to account for the appearance of
transfolial zones in some preparations.
In addition to these interpretations of previous data, this model also provides both
directions for further physiological experiments and predictions concerning the results.
First, the model assumes that mossy fiber and climbing fiber projections representing the
same regions of the rat's body surface overlap in the cerebellum. We take the similarity
in modeled and real tract-tracing results (Figures 1d and 3b) as suggesting strongly that
this is, in fact, the case; however, physiological experiments are currently underway to
test this hypothesis. Second, the model predicts that the parasagittal pattern of climbing
fiber correlations found in a particular cerebellar region will be dependent on the pattern
of tactile patches found in that region. Those regions containing large patches (e.g. the
center of crus IIA) should clearly show parasagittal strips of correlated climbing fiber
activity. However, in cortical regions containing smaller, more diverse sets of patches
(e.g. more medial regions of crus IIA), this correlation structure should not be as clear.
Experiments are also under way to test this prediction of the model.
Acknowledgements
This model has been constructed using GENESIS, the Caltech neural simulation system.
Simulation code for the model presented here can be accessed by registered GENESIS
users.
Information on the simulator or this model can be obtained from
genesiS@caltech.bitnet. This work was supported by NIH grant BNS 22205.
References
Benardo, L. S., and R. E. Foster 1986. Oscillatory behavior in inferior olive neurons:
Mechanism, modulation, cell aggregates. Brain Res. Bull. 17:773-784.
Bower, J. M. 1990. Reverse engineering the nervous system: An anatomical,
physiological, and computer based approach. In An introduction to neural and
electronic networks, ed. S. Zornetzer, J. Davis, and C. Lau, pp. 3-24. Academic
Press.
Bower, J. M., and J. Kassel 1989. Variability in tactile projection patterns to crus IIA of
the Norway rat. J. Neurosci. (submitted for publication).
Bower, J. M., D. H. Beermann, J. M. Gibson, G. M. Shambes, and W. Welker 1981.
Principles of organization of a cerebro-cerebellar circuit. Micromapping the
projections from cerebral (SI) to cerebellar (granule cell layer) tactile areas of rats.
Brain Behav. Evol. 18:1-18.
Brodal, A., and K. Kawamura 1980. Olivocerebellar projection: A review. Adv. Anat.
Embryol. Cell Biol. 64:1-140.
Campbell, N. C., and D. M. Armstrong 1983. Topographical localization in the
olivocerebellar projection in the rat: An autoradiographic study. Brain Res.
275:235-249.
DeYoe, E. A., and D. C. Van Essen 1988. Concurrent processing streams in monkey
visual cortex. Trends Neurosci. 11:219-226.
Gellman, R., J. C. Hook, and A. R. Gibson 1983. Somatosensory properties of the
inferior olive of the cat. J. Compo Neurol. 215:228-243.
Gravel, C., L. M. Eisenman, R. Sasseville, and R. Hawkes 1987. Parasagittal
organization of the rat cerebellar cortex: Direct correlation between antigenic
Purkinje cell bands revealed by mabQ 113 and the organization of the olivocerebellar
projection. J. Compo Neurol. 265:294-310.
Gundappa-Sulur, G., H. Shojaeian, M. Paulin, L. Posakony, R. Hawkes, and J. M. Bower
1989. Variability in and comparisons of: 1) tactile projections to the granule cell
layers of cerebellar cortex; and 2) the spatial distribution of Zebrin I-labeled Purkinje
cells. Soc. Neurosci. Abstr. 15:612.
Gwyn, D. G., G. P. Nicholson, and B. A. Flumerfelt 1977. The inferior olivary nucleus
of the rat: A light and electron microscopic study. J. Compo Neurol. 174:489-520.
Ito, M. 1984. The cerebellum and neural control. Raven Press.
Kassel, J., G. M. Shambes, and W. Welker 1984. Fractured cutaneous projections to the
granule cell layer of the posterior cerebellar hemispheres of the domestic cat. J.
Compo Neurol. 225:458-468.
Llinas, R., and Y. Yarom 1981a. Electrophysiology of mammalian inferior olivary
neurones in vitro. Different types of voltage-dependent ionic conductances. J.
Physiol. (Lond.) 315:549-567.
Llinas, R., and Y. Yarom 1981b. Properties and distribution of ionic conductances
generating electroresponsiveness of mammalian inferior olivary neurones in vitro. J.
Physiol. (Lond.) 315:568-584.
Logan, K., and L. T. Robertson 1986. Somatosensory representation of the cerebellar
climbing fiber system in the rat. Brain Res. 372:290-300.
Oscarsson, O. 1980. Functional organization of olivary projection to the cerebellar
anterior lobe. In The inferior olivary nucleus: Anatomy and physiology, ed. J.
Courville, C. de Montigny, and Y. Lammare, pp. 279-289. Raven Press.
Palay, S. L., and V. Chan-Palay 1973. Cerebellar cortex: Cytology and organization.
Springer-Verlag.
Ramón y Cajal, S. 1911. Histologie du système nerveux de l'homme et des vertébrés.
Maloine.
Robertson, L. T. 1987. Organization of climbing fiber representation in the anterior lobe.
In New concepts in cerebellar neurobiology, ed. J. S. King, pp. 281-320. Alan R.
Liss.
Sasaki, K., J. M. Bower, and R. Llinas 1989. Multiple Purkinje cell recording in rodent
cerebellar cortex. Eur. J. Neurosci. (submitted for publication).
Shambes, G. M., J. M. Gibson, and W. Welker 1978. Fractured somatotopy in granule
cell tactile areas of rat cerebellar hemispheres revealed by micromapping. Brain
Behav. Evol. 15:94-140.
Welker, W. 1987. Spatial organization of somatosensory projections to granule cell
cerebellar cortex: Functional and connectional implications of fractured somatotopy
(summary of Wisconsin studies). In New concepts in cerebellar neurobiology, ed. J.
S. King, pp. 239-280. Alan R. Liss.
Speech Recognition with Missing Data using
Recurrent Neural Nets
S. Parveen
Speech and Hearing Research Group
Department of Computer Science
University of Sheffield
Sheffield S14DP, UK
s.parveen@dcs.shef.ac.uk
P.D. Green
Speech and Hearing Research Group
Department of Computer Science
University of Sheffield
Sheffield S14DP, UK
p.green@dcs.shef.ac.uk
Abstract
In the ?missing data? approach to improving the robustness of automatic
speech recognition to added noise, an initial process identifies spectraltemporal regions which are dominated by the speech source. The
remaining regions are considered to be ?missing?. In this paper we
develop a connectionist approach to the problem of adapting speech
recognition to the missing data case, using Recurrent Neural Networks.
In contrast to methods based on Hidden Markov Models, RNNs allow
us to make use of long-term time constraints and to make the problems
of classification with incomplete data and imputing missing values
interact. We report encouraging results on an isolated digit recognition
task.
1. Introduction
Automatic Speech Recognition systems perform reasonably well in controlled and
matched training and recognition conditions. However, performance deteriorates when
there is a mismatch between training and testing conditions, caused for instance by
additive noise (Lippmann, 1997). Conventional techniques for improving recognition
robustness (reviewed by Furui 1997) seek to eliminate or reduce the mismatch, for
instance by enhancement of the noisy speech, by adapting statistical models for speech
units to the noise condition or simply by training in different noise conditions.
Missing data techniques provide an alternative solution for speech corrupted by additive
noise which make minimal assumptions about the nature of the noise. They are based on
identifying uncorrupted, reliable regions in the frequency domain and adapting
recognition algorithms so that classification is based on these regions.
Present missing data techniques developed at Sheffield (Barker et al. 2000a, Barker et al.
2000b, Cooke et al., 2001) and elsewhere (Drygaglo et al., 1998, Raj et al., 2000) adapt the
prevailing technique for ASR based on Continuous Density Hidden Markov Models.
CDHMMs are generative models which do not give direct estimates of posterior
probabilities of the classes given the acoustics. Neural Networks, unlike HMMs, are
discriminative models which do give direct estimates of posterior probabilities and have
been used with success in hybrid ANN/HMM speech recognition systems (Bourlard et al.,
1998).
In this paper, we adapt a recurrent neural network architecture introduced by (Gingras &
Bengio, 1998) for robust ASR with missing data.
2. Missing data techniques for Robust ASR
2.1 Missing data masks
Speech recognition with missing data is based on the assumption that some regions in
time/frequency remain uncorrupted for speech with added noise. See (Cooke et al., 2001)
for arguments to support this assumption. Initial processes, based on local signal-to-noise
estimates, on auditory grouping cues, or a combination (Barker et al., 2001) define a
binary "missing data mask": ones in the mask indicate reliable (or "present") features and
zeros indicate unreliable (or "missing") features.
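As a minimal sketch of how such a mask might be computed (the 0 dB threshold and the assumption of a known noise-floor estimate are ours, not the cited systems' exact criteria):

import numpy as np

def missing_data_mask(noisy_spec, noise_spec, snr_threshold_db=0.0):
    # noisy_spec, noise_spec: (frames, channels) spectral energies.
    snr_db = 10.0 * np.log10(noisy_spec / np.maximum(noise_spec, 1e-10))
    return (snr_db > snr_threshold_db).astype(int)   # 1 = reliable, 0 = missing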
2.2 Classification with missing data
Techniques for classification with incomplete data can be divided into imputation and
marginalisation. Imputation is a technique in which missing features are replaced by
estimated values to allow the recognition process proceed in normal way. If the missing
values are replaced by either zeros, random values or their means based on training data,
the approach is called unconditional imputation. On the other hand in conditional
imputation conditional statistics are used to estimate the missing values given the present
values. In the marginalisation approach missing values are ignored (by integrating over
their possible ranges) and recognition is performed with the reduced data vector which is
considered reliable. For the multivariate mixture Gaussian distributions used in
CDHMMs, marginalisation and conditional imputation can be formulated analytically
(Cooke et al., 2001). For missing data ASR further improvements in both techniques
follow from using the knowledge that for spectral energy features the unreliable data is
bounded between zero and the energy in speech+noise mixture (Vizinho et al., 1999),
(Josifovski et al., 1999). These techniques are referred to as bounded marginalisation and
bounded imputation. Coupled with a ?softening? of the reliable/unreliable decision,
missing data techniques produce good results on a standard connected-digits-in-noise
recognition task: performance using models trained on clean data is comparable, and in
severe noise superior, to conventional systems trained across different noise conditions
(Barker et al., 2001).
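For a single Gaussian-distributed spectral energy feature, the bounded idea can be sketched as follows (an illustration of the principle, not the cited implementations): a missing energy x is known to lie between 0 and the observed speech+noise energy, so the likelihood integrates over that range rather than the whole real line.

from scipy.stats import norm

def bounded_marginal(mu, sigma, y_obs):
    # Missing spectral energy x is bounded in [0, y_obs]: integrate the
    # Gaussian over that range instead of over the whole real line.
    return norm.cdf(y_obs, mu, sigma) - norm.cdf(0.0, mu, sigma)

def mean_imputed(mu, sigma, x_train_mean):
    # Unconditional imputation: evaluate the density at the training mean.
    return norm.pdf(x_train_mean, mu, sigma)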
2.3 Why recurrent neural nets for missing data robust ASR?
Several neural net architectures have been proposed to deal with the missing data problem
in general (Ahmed & Tresp, 1993), (Ghahramani & Jordan, 1994). The problem in using
neural networks with missing data is to compute the output of a node/unit when some of
its input values are unavailable.
For marginalisation, this involves finding a way of integrating over the range of the
missing values. A robust ASR system to deal with missing data using neural networks has
recently been proposed by (Morris et al., 2000). This is basically a radial basis function
neural network with the hidden units associated with a diagonal covariance gaussian. The
marginal over the missing values can be computed in this case and hence the resulting
system is equivalent to the HMM based missing data speech recognition system using
marginalisation. Reported performance is also comparable to that of the HMM based
speech recognition system.
In this paper missing data is dealt with by imputation. We use recurrent neural networks to
estimate missing values in the input vector. RNNs have the potential to capture long-term
contextual effects over time, and hence to use temporal context to compensate for missing
data which CDHMM based missing data techniques do not do. The only contextual
information available in CDHMM decoding come from the addition of temporal
derivatives to the feature vector. RNNs also allow a single net to perform both imputation
and classification, with the potential of combining these processes to mutual benefit.
The RNN architecture proposed by Gingras et al. (1998) is based on a fully-connected
feedforward network with input, hidden and output layers using hyperbolic tangent
activation functions. The output layer has one unit for each class and the network is
trained with the correct classification as target. Recurrent links are added to the
feedforward net with unit delay from output to the hidden units as in Jordan networks
(Jordan, 1988). There are also recurrent links with unit delay from hidden units to missing
input units to impute missing features. In addition, there are self delayed terms with a
fixed weight for each unit which basically serve to stabilise RNN behaviour over time and
help in imputation as well. Gingras et al. used this RNN both for a pattern classification
task with static data (one input vector for each example) and sequential data (a sequence
of input values for each example).
Our aim is to adapt this architecture for robust ASR with missing data. Some preliminary
static classification experiments were performed on vowel spectra (individual spectral
slices excised from the TIMIT database). RNN performance on this task with missing data
was better than standard MLP and gaussian classifiers. In the next section we show how
the net can be adapted for dynamic classification of the spectral sequences constituting
words.
3. RNN architecture for robust ASR with missing data
Figure 1 illustrates our modified version of the Gingras and Bengio architecture. Instead of
taking feedback from the output to the hidden layer we have chosen a fully connected or
Elman RNN (Elman, 1990) where there are full recurrent links from the past hidden layer
to the present hidden layer (figure 1). We have observed that these links produce faster
convergence, in agreement with (Pedersen, 1997). The number of input units depends on
the size of feature vector, i.e. the number of spectral channels. The number of hidden units
is determined by experimentation. There is one output unit for each pattern class. In our
case the classes are taken to be whole words, so in the isolated digit recognition
experiments we report, there are eleven output units, for "1"-"9", "zero" and "oh".
In training, missing inputs are initialised with their unconditional means. The RNN is then
allowed to impute missing values for the next frame through the recurrent links, after a
feedforward pass.
$$X(m,t) = (1 - \eta)\, X(m,t-1) + \eta \sum_{j=1}^{H} v_{jm}\, f(hid(j,t-1))$$
where $X(m,t)$ is the missing feature at time $t$, $\eta$ is the learning rate, $v_{jm}$ indicates
recurrent links from a hidden unit to the missing input and $hid(j,t-1)$ is the activation of
hidden unit $j$ at time $t-1$.
The average of the RNN output over all the frames of an example is taken after these
frames have gone through a forward pass. The sum squared error between the correct
targets and the RNN output for each frame is back-propagated through time and RNN
weights are updated until a stopping criterion is reached.
The recognition phase consists of a forward pass to produce RNN output for unseen data
and imputation of missing features at each time step. The highest value in the averaged
output vector is taken as the correct class.
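A minimal sketch of this forward pass with imputation (our own illustration; the weight shapes, the use of tanh for f, and the initialisation details are assumptions):

import numpy as np

def forward_with_imputation(X, mask, W_in, W_rec, V, eta=0.1):
    # X: (T, n_in) inputs with missing entries pre-filled by their means;
    # mask: (T, n_in), 1 = reliable, 0 = missing.
    T, n_in = X.shape
    h = np.zeros(W_rec.shape[0])
    X = X.copy()
    for t in range(T):
        if t > 0:
            # X(m,t) = (1 - eta) X(m,t-1) + eta * sum_j v_jm f(hid(j,t-1))
            imputed = (1.0 - eta) * X[t - 1] + eta * h @ V   # V: (n_hid, n_in)
            X[t] = np.where(mask[t] == 1, X[t], imputed)     # overwrite missing only
        h = np.tanh(X[t] @ W_in + h @ W_rec)                 # Elman recurrence
    return X, h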
Figure 1: RNN architecture for robust ASR with missing data technique. Solid arrows
show full forward and recurrent connections between two layers. Shaded blocks in the
input layer indicate missing inputs which keep changing at every time step. Missing
inputs are fully connected (solid arrows) with the hidden layer with a unit delay in
addition to delayed self-connection (thin arrows) with a fixed weight.
4. Isolated word recognition experiments
Continuous pattern classification experiments were performed using data from 30 male
speakers in the isolated digits section of the TIDIGIT database (Leonard, 1984). There
were two examples per speaker of each of the 11 words (i.e. 1-9, zero, oh). 220 examples
were chosen from a subset of 10 speakers for training. Recognition was performed on 110
examples from the speakers not included in training. A validation set of 110 examples was
used for early stopping.
Features were extracted from hamming windowed speech with a window size of 25 msec
and 50% overlap. Two types of feature vectors used for the experiments were total
energies in the four frequency bands (115-629 Hz, 565-1370 Hz, 1262-2292 Hz and 2212-3769 Hz) and 20 mel scaled FFT filter bank energies.
In the initial experiments we report, the missing data masks were formed by deleting
spectral energy features at random. This allows comparison with early results with HMM-based missing data recognition (Cooke et al. 1996) and close experimental control. For
training 1/3rd of the training examples were clean, 1/3rd had 25% deletions and 1/3rd had
50% deletions. Recognition performance was evaluated with 0% to 80% missing features
with an increment of 10%.
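The deletion protocol is easy to reproduce; a sketch (the array sizes reflect the 20-channel feature vectors used here, and the frame count is an illustrative assumption):

import numpy as np

rng = np.random.default_rng(1)

def random_mask(shape, frac_missing):
    return (rng.random(shape) >= frac_missing).astype(int)   # 1 = present

def training_masks(n_examples, n_frames=40, n_feats=20):
    fracs = [0.0, 0.25, 0.50]                # 1/3 clean, 1/3 at 25%, 1/3 at 50%
    return [random_mask((n_frames, n_feats), fracs[i % 3])
            for i in range(n_examples)]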
5. Results
5.1 RNN performance as a classifier
An RNN with 20 inputs, 65 hidden and 11 output units was chosen for recognition and
imputation with 20 features per time frame. Its performance on various amounts of
missing features from 0% to 80%, shown in Figure 2 (the ?RNN imputation? curve), is
much better than the standard Elman RNN trained on clean speech only for classification
task and tested with the mean imputation. Use of the self delayed term in addition to the
recurrent links for imputation of missing features contributes positively in case of
sequential data. Results resemble those reported for HMMs in (Cooke et al. 1996). We
also show that results are superior to ?last reliable imputation? in which the imputed value
of a feature is the last reliable value for that feature.
(Plot: classification error % against % missing features, comparing imputation by unconditional means, by the "last reliable" value, and by the RNN.)
Figure 2: Comparison of RNN classification performance for different imputation
methods.
5.2 RNN performance on pattern completion
Imputation, or pattern completion, performance was observed for an RNN trained with 4
features per frame of the speech and is shown in Figure 3. The RNN for this task had 4
input, 45 hidden and 11 output units. In figure 3(a), solid curves show the true values of
the feature in each frequency band at every frame for an example of a spoken ?9?, the
horizontal lines are mean feature values, and the circles are the missing values imputed by
the RNN. Imputed values are encouragingly close to the true values. For this network,
classification error for recognition was 10.7% at 0% missing and 46.4% at 80% missing.
The top curve in figure 3(b) shows the average pattern completion error when an RNN
with 20 input channels was trained on clean speech and during recognition missing
features were imputed from their unconditional means. The bottom curve is the average
pattern completion error with missing features imputed by the network. This demonstrates
the clear advantage of using the RNN for both imputation and classification.
Figure 3: (a) Missing values for digit 9 imputed by an RNN (b) Average imputation errors
for mean imputation and RNN imputation
6. Conclusion & future work
The experiments reported in section 5 constitute no more than a proof of concept. Our next
step will be to extend this recognition system for the connected digits recognition task
with missing data, following the Aurora standard for robust ASR (Pearce et al. 2000). This
will provide a direct comparison with HMM-based missing data recognition (Barker et al.,
2001). In this case we will need to introduce ?silence? as an additional recognition class,
and the training targets will be obtained by forced-alignment on clean speech with an
existing recogniser. We will use realistic missing data masks, rather than random
deletions. This is known to be a more demanding condition (Cooke et al. 1996).
When we are training using clean speech with added noise, another possibility is to use the
true values of the corrupted features as training targets for imputation. Use of actual
targets for missing values has been reported by (Seung, 1997) but the RNN architecture in
the latter work supports only pattern completion.
Acknowledgement
This work is being supported by Nokia Mobile Phones, Denmark and the UK Overseas
Research Studentship scheme.
References
Ahmed, S. & Tresp, V. (1993). Some solutions to the missing feature problem in vision. Advances in
Neural Information Processing Systems 5 (S.J.Hanson, J.D.Cowan & C.L.Giles, eds.), Morgan
Kaufmann, San Mateo, CA, 393-400.
Barker, J., Green, P.D. and Cooke, M.P. (2001). Linking auditory scene analysis and robust ASR by
missing data techniques. Workshop on Innovation in Speech Processing 2001, Stratford-upon-Avon,
UK.
Barker, J., Josifovski, L., Cooke, M.P. and Green, P.D. (2000a). Soft decisions in missing data
techniques for robust automatic speech recognition. Accepted for ICSLP-2000, Beijing.
Barker, J., Cooke, M.P. and Ellis, D.P.W. (2000b). Decoding speech in the presence of other sound
sources. Accepted for ICSLP-2000, Beijing
Bourlard, H. and N. Morgan (1998). Hybrid HMM/ANN systems for speech recognition: Overview
and new research directions. In C. L.Giles and M. Gori (Eds.), Adaptive Processing of Sequences
and Data Structures, Volume 1387 of Lecture Notes in Artificial Intelligence, pp. 389--417.
Springer.
Cooke, M., Green, P., Josifovski, L. and Vizinho, A. (2001). Robust automatic speech recognition
with missing and unreliable acoustic data. submitted to Speech Communication, 24th June 1999.
Cooke, M.P., Morris, A. & Green, P.D. (1996). Recognising occluded speech. ESCA Tutorial and
Workshop on the Auditory Basis of Speech Perception, Keele University, July 15-19.
Drygajlo, A. & El-Maliki, M. (1998). Speaker verification in noisy environment with combined
spectral subtraction and missing data theory. Proc ICASSP-98, vol. I, pp121-124.
Elman, J.L. (1990). Finding structure in time. Cognitive Science, 14, 179-211.
Furui, S. (1997). Recent advances in robust speech recognition. Proc. ESCA-NATO Tutorial and
Research Workshop on Robust Speech Recognition for Unknown Communication Channels,
France, pp.11-20.
Gingras, F. and Bengio, Y. (1998). Handling Asynchronous or Missing Data with Recurrent
Networks. International Journal of Computational Intelligence and Organizations, vol. 1, no. 3, pp.
154-163.
Ghahramani, Z. & Jordan, M.I. (1994). Supervised learning from incomplete data via an EM
approach. Advances in Neural Information Processing Systems 6 (J.D. Cowan, G. Tesauro & J.
Alspector, eds.), Morgan Kaufmann, San Mateo, CA, pp.120-129.
Jordan, M. I. (1988). Supervised learning and systems with excess degrees of freedom. Technical
Report COINS TR 88-27, Massachusetts Institute of Technology, 1988.
Josifovski, L., Cooke, M., Green, P. and Vizinho, A. (1999). State based imputation of missing data
for robust speech recognition and speech enhancement. Proc. Eurospeech'99, Budapest, Vol. 6, pp.
2837-2840.
Leonard, R. G., (1984). A Database for Speaker-Independent Digit Recognition. Proc. ICASSP 84,
Vol. 3, p. 42.11, 1984.
Lippmann, R. P. (1997). Speech recognition by machines and humans. Speech Communication vol.
22 no. 1 pp. 1-15.
Morris, A., Josifovski, L., Bourlard, H., Cooke, M.P. and Green, P.D. (2000). A neural network for
classification with incomplete data: application to robust ASR. ICSLP 2000, Beijing China.
Pearce, D. and Hirsch, H.--G. (2000). The aurora experimental framework for the performance
evaluation of speech recognition systems under noisy conditions. In Proc. ICSLP 2000, IV, 29--32,
Beijing, China.
Pedersen, M. W. (1997). Optimization of Recurrent Neural Networks for Time Series Modeling.
PhD thesis. Technical University of Denmark.
Raj, B., Seltzer, M., & Stern, R. (2000). Reconstruction of damaged spectrographic features for
robust speech recognition. ICSLP 2000.
Seung, H. S. (1997). Learning continuous attractors in Recurrent Networks. Proc. NIPS'97, pp. 654-660.
Vizinho, A., Green, P., Cooke, M. and Josifovski, L. (1999). Missing data theory, spectral
subtraction and signal-to-noise estimation for robust ASR: An integrated study. Proc.
Eurospeech'99, Budapest, Sep. 1999, Vol. 5, pp. 2407-2410.
Laplacian Eigenmaps and Spectral
Techniques for Embedding and Clustering
Mikhail Belkin and Partha Niyogi
Depts. of Mathematics and Computer Science
The University of Chicago
Hyde Park , Chicago, IL 60637.
(misha@math.uchicago.edu,niyogi@cs.uchicago.edu)
Abstract
Drawing on the correspondence between the graph Laplacian, the
Laplace-Beltrami operator on a manifold, and the connections to
the heat equation , we propose a geometrically motivated algorithm
for constructing a representation for data sampled from a low dimensional manifold embedded in a higher dimensional space. The
algorithm provides a computationally efficient approach to nonlinear dimensionality reduction that has locality preserving properties and a natural connection to clustering. Several applications
are considered.
In many areas of artificial intelligence, information retrieval and data mining, one
is often confronted with intrinsically low dimensional data lying in a very high dimensional space. For example, gray scale $n \times n$ images of a fixed object taken with
a moving camera yield data points in $\mathbb{R}^{n^2}$. However, the intrinsic dimensionality of
the space of all images of the same object is the number of degrees of freedom of
the camera - in fact the space has the natural structure of a manifold embedded in
$\mathbb{R}^{n^2}$. While there is a large body of work on dimensionality reduction in general,
most existing approaches do not explicitly take into account the structure of the
manifold on which the data may possibly reside. Recently, there has been some
interest (Tenenbaum et aI, 2000 ; Roweis and Saul, 2000) in the problem of developing low dimensional representations of data in this particular context. In this
paper , we present a new algorithm and an accompanying framework of analysis for
geometrically motivated dimensionality reduction.
The core algorithm is very simple, has a few local computations and one sparse
eigenvalue problem. The solution reflects the intrinsic geometric structure of the
manifold. The justification comes from the role of the Laplacian operator in providing an optimal embedding. The Laplacian of the graph obtained from the data
points may be viewed as an approximation to the Laplace-Beltrami operator defined
on the manifold. The embedding maps for the data come from approximations to
a natural map that is defined on the entire manifold. The framework of analysis
presented here makes this connection explicit. While this connection is known to
geometers and specialists in spectral graph theory (for example, see [1, 2]) to the
best of our knowledge we do not know of any application to data representation
yet. The connection of the Laplacian to the heat kernel enables us to choose the
weights of the graph in a principled manner.
The locality preserving character of the Laplacian Eigenmap algorithm makes it relatively insensitive to outliers and noise. A byproduct of this is that the algorithm
implicitly emphasizes the natural clusters in the data. Connections to spectral clustering algorithms developed in learning and computer vision (see Shi and Malik ,
1997) become very clear. Following the discussion of Roweis and Saul (2000) , and
Tenenbaum et al (2000), we note that the biological perceptual apparatus is confronted with high dimensional stimuli from which it must recover low dimensional
structure. One might argue that if the approach to recovering such low-dimensional
structure is inherently local , then a natural clustering will emerge and thus might
serve as the basis for the development of categories in biological perception.
1 The Algorithm
Given $k$ points $x_1, \ldots, x_k$ in $\mathbb{R}^l$, we construct a weighted graph with $k$ nodes, one
for each point , and the set of edges connecting neighboring points to each other.
1. Step 1. [Constructing the Graph] We put an edge between nodes $i$ and $j$ if
$x_i$ and $x_j$ are "close". There are two variations:
(a) $\epsilon$-neighborhoods. [parameter $\epsilon \in \mathbb{R}$] Nodes $i$ and $j$ are connected by an
edge if $\|x_i - x_j\|^2 < \epsilon$.
Advantages: geometrically motivated , the relationship is naturally
symmetric.
Disadvantages: often leads to graphs with several connected components, difficult to choose $\epsilon$.
(b) $n$ nearest neighbors. [parameter $n \in \mathbb{N}$] Nodes $i$ and $j$ are connected by
an edge if i is among n nearest neighbors of j or j is among n nearest
neighbors of i.
Advantages: simpler to choose, tends to lead to connected graphs.
Disadvantages : less geometrically intuitive.
2. Step 2. [Choosing the weights] Here as well we have two variations for
weighting the edges:
(a) Heat kernel. [parameter $t \in \mathbb{R}$]. If nodes $i$ and $j$ are connected, put
$$W_{ij} = e^{-\frac{\|x_i - x_j\|^2}{t}}$$
The justification for this choice of weights will be provided later.
(b) Simple-minded. [No parameters]. $W_{ij} = 1$ if and only if vertices $i$ and
$j$ are connected by an edge.
A simplification which avoids the necessity of choosing $t$.
3. Step 3. [Eigenmaps] Assume the graph G, constructed above, is connected ,
otherwise proceed with Step 3 for each connected component .
Figure 1: The left panel shows a horizontal and a vertical bar. The middle panel
is a two dimensional representation of the set of all images using the Laplacian
eigenmaps. The right panel shows the result of a principal components analysis
using the first two principal directions to represent the data. Dots correspond to
vertical bars and '+' signs correspond to horizontal bars.
Compute eigenvalues and eigenvectors for the generalized eigenvector problem:
$$Ly = \lambda D y \qquad (1)$$
where $D$ is the diagonal weight matrix; its entries are column (or row, since
$W$ is symmetric) sums of $W$: $D_{ii} = \sum_j W_{ji}$. $L = D - W$ is the Laplacian
matrix. Laplacian is a symmetric , positive semidefinite matrix which can
be thought of as an operator on functions defined on vertices of G.
Let $y_0, \ldots, y_{k-1}$ be the solutions of equation 1, ordered according to their
eigenvalues, with $y_0$ having the smallest eigenvalue (in fact 0). The image
of $x_i$ under the embedding into the lower dimensional space $\mathbb{R}^m$ is given by
$(y_1(i), \ldots, y_m(i))$.
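The whole procedure is compact enough to sketch in a few lines. The following is an illustrative implementation of steps 1-3 (our own sketch, not the authors' code), using the n-nearest-neighbour graph, the simple-minded 0/1 weights, dense linear algebra for clarity, and assuming the resulting graph is connected:

import numpy as np
from scipy.linalg import eigh

def laplacian_eigenmaps(X, n_neighbors=5, m=2):
    # X: (k, l) data matrix; returns the (k, m) embedding coordinates.
    k = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # squared distances
    W = np.zeros((k, k))
    nn = np.argsort(d2, axis=1)[:, 1:n_neighbors + 1]     # skip self (column 0)
    for i in range(k):
        W[i, nn[i]] = 1.0                                 # simple-minded weights
    W = np.maximum(W, W.T)                # edge if i is near j OR j is near i
    D = np.diag(W.sum(axis=1))
    L = D - W
    vals, vecs = eigh(L, D)   # generalized problem L y = lambda D y, ascending
    return vecs[:, 1:m + 1]   # drop y0 (constant eigenvector, lambda = 0)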
2 Justification
Recall that given a data set we construct a weighted graph G = (V, E) with edges
connecting nearby points to each other . Consider the problem of mapping the
weighted connected graph G to a line so that connected points stay as close together
as possible. We wish to choose Yi E :Il{ to minimize
2)Yi - Yj )2Wij
i ,j
under appropriate constraints. Let y = (Y1, Y2 , ... ,Yn)T be the map from the graph
to the real line. First, note that for any y , we have
$$\sum_{i,j} (y_i - y_j)^2 W_{ij} = 2\, y^T L y \qquad (2)$$
where, as before, $L = D - W$. To see this, notice that $W_{ij}$ is symmetric and
$D_{ii} = \sum_j W_{ij}$. Thus $\sum_{i,j} (y_i - y_j)^2 W_{ij}$ can be written as
$$\sum_{i,j} (y_i^2 + y_j^2 - 2 y_i y_j) W_{ij} = \sum_i y_i^2 D_{ii} + \sum_j y_j^2 D_{jj} - 2 \sum_{i,j} y_i y_j W_{ij} = 2\, y^T L y$$
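A quick numerical check of this identity (an illustrative snippet, not part of the paper) for a random symmetric weight matrix:

import numpy as np

rng = np.random.default_rng(0)
k = 6
W = rng.random((k, k))
W = (W + W.T) / 2.0            # symmetrise
np.fill_diagonal(W, 0.0)       # no self-loops
D = np.diag(W.sum(axis=1))
L = D - W
y = rng.standard_normal(k)
lhs = sum((y[i] - y[j]) ** 2 * W[i, j] for i in range(k) for j in range(k))
assert np.isclose(lhs, 2.0 * y @ L @ y)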
Figure 3: Fragments labeled by arrows in figure 2, from left to right. The first
contains infinitives of verbs , the second contains prepositions and the third mostly
modal and auxiliary verbs. We see that syntactic structure is well-preserved.
Therefore, the minimization problem reduces to finding $\arg\min_{y^T D y = 1} y^T L y$.
The constraint yT Dy = 1
removes an arbitrary scaling
factor in the embedding. Matrix D provides a natural
measure on the vertices of the
graph. From eq. 2, we see
that L is a positive semidefinite matrix and the vector
y that minimizes the objective function is given by the
minimum eigenvalue solution
to the generalized eigenvalue
problem $Ly = \lambda D y$.
Figure 2: 300 most frequent words of the Brown
corpus represented in the spectral domain.
Let 1 be the constant function taking value 1 at each
vertex. It is easy to see that 1 is an eigenvector with eigenvalue 0. If the graph
is connected, 1 is the only eigenvector for $\lambda = 0$. To eliminate this trivial solution which collapses all vertices of G onto the real number 1, we put an additional
constraint of orthogonality to obtain
$$y_{opt} = \arg\min_{\substack{y^T D y = 1 \\ y^T D \mathbf{1} = 0}} y^T L y$$
Thus, the solution $y_{opt}$ is now given by the eigenvector with the smallest non-zero
eigenvalue. More generally, the embedding of the graph into $\mathbb{R}^m$ ($m > 1$) is given
by the $n \times m$ matrix $Y = [y_1 y_2 \ldots y_m]$ where the $i$th row, denoted by $Y_i^T$, provides
the embedding coordinates of the $i$th vertex. Thus we need to minimize
$$\sum_{i,j} \|Y_i - Y_j\|^2 W_{ij} = \mathrm{tr}(Y^T L Y)$$
This reduces now to
$$Y_{opt} = \arg\min_{Y^T D Y = I} \mathrm{tr}(Y^T L Y)$$
For the one-dimensional embedding problem , the constraint prevents collapse onto
a point. For the m-dimensional embedding problem , the constraint presented above
prevents collapse onto a subspace of dimension less than m.
2.1 The Laplace-Beltrami Operator
The Laplacian of a graph is analogous to the Laplace-Beltrami operator on manifolds.
Consider a smooth $m$-dimensional manifold $M$ embedded in $\mathbb{R}^k$. The Riemannian
structure (metric tensor) on the manifold is induced by the standard Riemannian
structure on $\mathbb{R}^k$. Suppose we have a map $f : M \to \mathbb{R}$. The gradient $\nabla f(x)$
(which in local coordinates can be written as $\nabla f(x) = \sum_{i=1}^{m} \frac{\partial f}{\partial x_i} \frac{\partial}{\partial x_i}$)
is a vector field on the manifold, such that for small $\delta x$ (in a local coordinate chart)
$$|f(x + \delta x) - f(x)| \approx |\langle \nabla f(x), \delta x \rangle| \le \|\nabla f(x)\|\, \|\delta x\|$$

Figure 4: 685 speech datapoints plotted in the two dimensional Laplacian spectral representation.
Thus we see that if $\|\nabla f\|$ is small, points near $x$ will be mapped to points near
$f(x)$. We therefore look for a map that best preserves locality on average by trying
to find
$$\arg\min_{\|f\|_{L^2(M)} = 1} \int_M \|\nabla f(x)\|^2$$
Minimizing $\int_M \|\nabla f(x)\|^2$ corresponds directly to minimizing $L_f = \frac{1}{2} \sum_{i,j} (f_i - f_j)^2 W_{ij}$ on a graph. Minimizing the squared gradient reduces to finding eigenfunctions of the Laplace-Beltrami operator $\mathcal{L}$. Recall that $\mathcal{L} f \stackrel{\mathrm{def}}{=} -\mathrm{div}\, \nabla(f)$, where
div is the divergence. It follows from Stokes' theorem that $-\mathrm{div}$ and $\nabla$
are formally adjoint operators, i.e. if $f$ is a function and $X$ is a vector field
then $\int_M \langle X, \nabla f \rangle = -\int_M \mathrm{div}(X) f$. Thus
$$\int_M \|\nabla f\|^2 = \int_M \mathcal{L}(f) f$$
We see that $\mathcal{L}$ is positive semidefinite and the $f$ that minimizes $\int_M \|\nabla f\|^2$ has to
be an eigenfunction of $\mathcal{L}$.

2.2 Heat Kernels and the Choice of Weight Matrix
The Laplace-Beltrami operator on differentiable functions on a manifold $M$ is intimately related to the heat flow. Let $f : M \to \mathbb{R}$ be the initial heat distribution and $u(x, t)$ be the heat distribution at time $t$ ($u(x, 0) = f(x)$). The heat
equation is the partial differential equation $\frac{\partial u}{\partial t} = -\mathcal{L} u$. The solution is given by
$u(x, t) = \int_M H_t(x, y) f(y)$ where $H_t$ is the heat kernel - the Green's function for
this PDE. Therefore,
$$\mathcal{L} f(x) = \mathcal{L} u(x, 0) = -\left[ \frac{\partial}{\partial t} \int_M H_t(x, y) f(y) \right]_{t=0}$$
Locally, the heat kernel is approximately equal to the Gaussian, $H_t(x, y) \approx (4\pi t)^{-n/2} e^{-\frac{\|x - y\|^2}{4t}}$, where $\|x - y\|$ ($x$ and $y$ in local coordinates) and $t$ are
both sufficiently small and $n = \dim M$.
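As a quick sanity check of this claim (a standard one-dimensional computation, not taken from the paper), the Gaussian solves the Euclidean heat equation:
$$u(x, t) = (4\pi t)^{-1/2} e^{-\frac{x^2}{4t}}, \qquad \frac{\partial u}{\partial t} = \left( \frac{x^2}{4t^2} - \frac{1}{2t} \right) u = \frac{\partial^2 u}{\partial x^2},$$
since $\frac{\partial u}{\partial x} = -\frac{x}{2t}\, u$ and differentiating once more gives $\frac{\partial^2 u}{\partial x^2} = \left( \frac{x^2}{4t^2} - \frac{1}{2t} \right) u$, which matches $\frac{\partial u}{\partial t} = -\mathcal{L} u$ with $\mathcal{L} = -\frac{\partial^2}{\partial x^2}$.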
Notice that as $t$ tends to 0, the heat kernel $H_t(x, y)$ becomes increasingly localized and tends to Dirac's $\delta$-function, i.e.,
$\lim_{t \to 0} \int_M H_t(x, y) f(y) = f(x)$. Therefore, for small $t$, from the definition of the derivative we have
$$\mathcal{L} f(x) \approx \frac{1}{t} \left[ f(x) - (4\pi t)^{-n/2} \int_M e^{-\frac{\|x - y\|^2}{4t}} f(y)\, dy \right]$$
If $x_1, \ldots, x_k$ are data points on $M$, the last expression can be approximated by
$$\mathcal{L} f(x_i) \approx \frac{1}{t} \left[ f(x_i) - \frac{1}{k} (4\pi t)^{-n/2} \sum_{x_j:\, 0 < \|x_j - x_i\| < \epsilon} e^{-\frac{\|x_i - x_j\|^2}{4t}} f(x_j) \right]$$
The coefficient $\frac{1}{t}$ is global and will not affect the eigenvectors of the discrete
Laplacian. Since the inherent dimensionality of $M$ may be unknown, we put
$\alpha = t (4\pi t)^{n/2}$. Noticing that the Laplacian of the constant function is zero, we
immediately have
$$\alpha = \sum_{x_j:\, 0 < \|x_j - x_i\| < \epsilon} e^{-\frac{\|x_i - x_j\|^2}{4t}}$$
Notice, however, that we do not have to worry about $\alpha$, since the graph Laplacian $L$ will choose the correct multiplier for us. Finally we see how to choose the edge weights for the adjacency matrix $W$:
$$W_{ij} = \begin{cases} e^{-\frac{\|x_i - x_j\|^2}{4t}} & \text{if } \|x_i - x_j\| < \epsilon \\ 0 & \text{otherwise} \end{cases}$$

3 Examples
Example 1 - A Toy Vision Example: Consider binary images of vertical and
horizontal bars located at arbitrary points in the 40 x 40 visual field. We choose
1000 images, each containing either a vertical or a horizontal bar (500 containing
vertical bars and 500 horizontal bars) at random. Fig. 1 shows the result of applying
the Laplacian Eigenmaps compared to PCA.
Example 2 - Words in the Brown Corpus: Fig. 2 shows the results of an
experiment conducted with the 300 most frequent words in the Brown corpus - a
collection of texts containing about a million words available in electronic format.
Each word is represented as a vector in a 600 dimensional space using information
about the frequency of its left and right neighbors (computed from the bigram
statistics of the corpus).
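A sketch of building this representation (our illustration; the exact pairing of the 300 target words with 300 left- and 300 right-context counts is an assumption consistent with the stated 600 dimensions):

import numpy as np

def word_vectors(tokens, vocab):
    # vocab: the 300 most frequent words; each gets a 600-dim vector of
    # left-neighbour counts (first 300 dims) then right-neighbour counts.
    index = {w: i for i, w in enumerate(vocab)}
    V = len(vocab)
    vecs = np.zeros((V, 2 * V))
    for left, word, right in zip(tokens, tokens[1:], tokens[2:]):
        if word in index:
            if left in index:
                vecs[index[word], index[left]] += 1
            if right in index:
                vecs[index[word], V + index[right]] += 1
    return vecs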
Example 3 - Speech: In Fig. 4 we consider the low dimensional representations
arising from applying the Laplacian Eigenmap algorithm to a sentence of speech
Figure 5: A blowup of the three selected regions in figure 4, from left to right.
Notice the phonetic homogeneity of the chosen regions. Note that points marked
with the same symbol may arise from occurrences of the same phoneme at different
points in the utterance. The symbol "sh" stands for the fricative in the word she;
"aa" ," ao" stand for vowels in the words dark and all respectively; "kcl" ," dcl" ," gcl"
stand for closures preceding the stop consonants "k" ," d" ," g" respectively. "h#"
stands for silence.
sampled at 1kHz. Short-time Fourier spectra were computed at 5 ms intervals
yielding 685 vectors of 256 Fourier coefficients for every 30 ms chunk of the speech
signal. Each vector is labeled according to the identity of the phonetic segment it
belonged to. Fig. 4 shows the speech data points plotted in the two dimensional
Laplacian representation. The two "spokes" correspond predominantly to fricatives
and closures respectively. The central portion corresponds mostly to periodic sounds
like vowels, nasals , and semivowels. Fig. 5 shows three different regions of the
representation space.
References
[1] Fan R. K. Chung, Spectral Graph Theory , Regional Conference Series in Mathematics , number 92, 1997
[2] Fan R. K. Chung, A. Grigor'yan, S.-T. Yau, Higher eigenvalues and isoperimetric inequalities on Riemannian manifolds and graphs, Communications on
[3] S. Rosenberg, The Laplacian on a Riemannian Manifold, Cambridge University Press, 1997,
[4] Sam T. Roweis, Lawrence K. Saul, Nonlinear Dimensionality Reduction by
Locally Linear Embedding, Science, vol 290 , 22 Dec. 2000 ,
[5] Jianbo Shi, Jitendra Malik, Normalized Cuts and Image Segmentation, IEEE
Transactions on PAMI, vol 22, no 8, August 2000
[6] J. B. Tenenbaum, V. de Silva, J. C. Langford, A Global Geometric Framework
for Nonlinear Dimensionality Reduction, Science, Vol 290, 22 Dec . 2000
Fast, large-scale transformation-invariant
clustering
Brendan J. Frey
Machine Learning Group
University of Toronto
www.psi.toronto.edu/~frey
Nebojsa Jojic
Vision Technology Group
Microsoft Research
www.ifp.uiuc.edu/~jojic
Abstract
In previous work on "transformed mixtures of Gaussians" and
"transformed hidden Markov models", we showed how the EM algorithm in a discrete latent variable model can be used to jointly
normalize data (e.g., center images, pitch-normalize spectrograms)
and learn a mixture model of the normalized data. The only input
to the algorithm is the data, a list of possible transformations, and
the number of clusters to find. The main criticism of this work
was that the exhaustive computation of the posterior probabilities over transformations would make scaling up to large feature
vectors and large sets of transformations intractable. Here, we describe how a tremendous speed-up is achieved through the use of
a variational technique for decoupling transformations, and a fast
Fourier transform method for computing posterior probabilities.
For $N \times N$ images, learning $C$ clusters under $N$ rotations, $N$ scales,
$N$ x-translations and $N$ y-translations takes only $(C + 2 \log N) N^2$
scalar operations per iteration. In contrast, the original algorithm
takes $C N^6$ operations to account for these transformations. We
give results on learning a 4-component mixture model from a video
sequence with frames of size $320 \times 240$. The model accounts for 360
rotations and 76,800 translations. Each iteration of EM takes only
10 seconds per frame in MATLAB, which is over 5 million times
faster than the original algorithm.
1
Introduction
The task of clustering raw data such as video frames and speech spectrograms is
often obfuscated by the presence of random, but well-understood transformations
in the data. Examples of these transformations include object motion and camera
motion in video sequences and pitch modulation in spectrograms.
The machine learning community has proposed a variety of sophisticated techniques
for pattern analysis and pattern classification, but these techniques have mostly assumed the data is already normalized (e.g., the patterns are centered in the images)
or nearly normalized. Linear approximations to the transformation manifold have
been used to significantly improve the performance of feedforward discriminative
classifiers such as nearest neighbors and multilayer perceptrons (Simard, LeCun
and Denker 1993). Linear generative models (factor analyzers, mixtures of factor
analyzers) have also been modified using linear approximations to the transformation manifold to build in some degree of transformation invariance (Hinton, Dayan
and Revow 1997). A multi-resolution approach can be used to extend the usefulness of linear approximations (Vasconcelos and Lippman 1998), but this approach is
susceptible to local minima; e.g., a pie may be confused for a face at low resolution.
For significant levels of transformation, linear approximations are far from exact
and better results can be obtained by explicitly considering transformed versions of
the input. This approach has been used to design ?convolutional neural networks?
that are invariant to translations of parts of the input (LeCun et al. 1998).
In previous work on 'transformed mixtures of Gaussians' (Frey and Jojic 2001) and 'transformed hidden Markov models' (Jojic et al. 2000), we showed how the
EM algorithm in a discrete latent variable model can be used to jointly normalize
data (e.g., center video frames, pitch-normalize spectrograms) and learn a mixture
model of the normalized data. We found 'that the algorithm is reasonably fast (it learns in minutes or hours) and very effective at transformation-invariant density modeling.' Those results were for 44 × 28 images, but realistic applications such as home video summarization require near-real-time processing of medium-quality video at resolutions near 320 × 240. In this paper, we show how a variational
technique and a fast Fourier method for computing posterior probabilities can be
used to achieve this goal.
2
Background
In (Frey and Jojic 2001), we introduced a single discrete variable that enumerates
a discrete set of possible transformations that can occur in the input. Here, we
break the transformation into a sequence of transformations. Tk is the random
variable for the transformation matrix at step k. So, if 𝒯_k is the set of possible transformation matrices corresponding to the type of transformation at step k (e.g., image rotation), then T_k ∈ 𝒯_k.
The generative model is shown in Fig. 1a and consists of picking a class c, drawing a
vector of image pixel intensities z0 from a Gaussian, picking the first transformation
matrix T_1 from 𝒯_1, applying this transformation to z_0 and adding Gaussian noise to obtain z_1, and repeating this process until the last transformation matrix T_K is drawn from 𝒯_K and is applied to z_{K−1} to obtain the observed data z_K. The joint
distribution is
$$p(c, z_0, T_1, z_1, \ldots, T_K, z_K) = p(c)\,p(z_0|c)\prod_{k=1}^{K} p(T_k)\,p(z_k|z_{k-1}, T_k). \qquad (1)$$
The probability of class c ∈ {1, ..., C} is parameterized by p(c) = π_c and the untransformed latent image has conditional density

$$p(z_0|c) = \mathcal{N}(z_0;\, \mu_c,\, \Phi_c), \qquad (2)$$

where N() is the normal distribution, μ_c is the mean image for class c and Φ_c is the diagonal noise covariance matrix for class c. Notice that the noise modeled by Φ_c gets transformed, so Φ_c can model noise sources that depend on the transformations, such as background clutter and object deformations in images.
Figure 1: (a) The Bayesian network for a generative model that draws an image z0
from class c, applies a randomly drawn transformation matrix T1 of type 1 (e.g.,
image rotation) to obtain z1 , and so on, until a randomly drawn transformation
matrix TK of type K (e.g., image translation) is applied to obtain the observed
image zK . (b) The Bayesian network for a factorized variational approximation to
the posterior distribution, given zK . (c) When an image is measured on a discrete,
radial 2-D grid, a scale and rotation correspond to a shift in the radial and angular
coordinates.
The probability of transformation matrix T_k at step k is p(T_k) = π_{k,T_k}. (In our experiments, we often fix this to be uniform.) At each step, we assume a small amount of noise with diagonal covariance matrix Ψ is added to the image, so

$$p(z_k | z_{k-1}, T_k) = \mathcal{N}(z_k;\, T_k z_{k-1},\, \Psi). \qquad (3)$$

T_k operates on z_{k−1} to produce a transformed image. In fact, T_k can be viewed as a permutation matrix that rearranges the pixels in z_{k−1}. Usually, we assume Ψ = ψI and in our experiments we often set ψ to a constant, small value, such as 0.01.
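As a concrete reading of (1)-(3), here is a minimal numpy sketch of the generative process, assuming each transformation is represented as a permutation of pixel indices; all function and variable names are ours, not the paper's.

```python
import numpy as np

def sample_image(pi, mu, Phi, perms, psi, rng):
    """Draw z_K from the model: pick class c, draw z_0 ~ N(mu_c, Phi_c)
    (diagonal covariance), then for each step apply a random permutation
    T_k and add isotropic noise with variance psi."""
    c = rng.choice(len(pi), p=pi)
    z = rng.normal(mu[c], np.sqrt(Phi[c]))              # z_0
    for perm_set in perms:                              # one entry per step k
        T = perm_set[rng.integers(len(perm_set))]       # uniform p(T_k)
        z = z[T] + rng.normal(0.0, np.sqrt(psi), z.shape)  # T_k z_{k-1} + noise
    return c, z

# usage sketch: perms[k] is a list of index arrays, each permuting the pixels
rng = np.random.default_rng(0)
```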
In (Frey and Jojic 2001), an exact EM algorithm for learning this model is described. The sufficient statistics for π_c, μ_c and Φ_c are computed by averaging the derivatives of ln(π_c N(z_0; μ_c, Φ_c)) over the posterior distribution,

$$p(c, z_0 | z_K) = \sum_{T_1} \cdots \sum_{T_K} p(z_0 | c, T_1, \ldots, T_K, z_K)\; p(c, T_1, \ldots, T_K | z_K). \qquad (4)$$
Since z0 , . . . , zK
are jointly Gaussian given c and T1 , . . . , TK ,
p(z0 |c, T1 , . . . , TK , zK ) is Gaussian and its mean and covariance are computed using linear algebra. Also, p(c, T1 , . . . , TK |zK ) is computed using linear
algebra.
The problem with this direct approach is that the number of scalar operations in (4) is very large for large feature vectors and large sets of transformations. For N × N images, learning C clusters under N rotations, N scales, N x-translations and N y-translations leads to N⁴ terms in the summation. Since there are N² pixels, each term is computed using N² scalar operations. So, each iteration of EM takes CN⁶ scalar operations per training case. For 10 classes and images of size 256 × 256, the direct approach takes 2.8 × 10¹⁵ scalar operations per image for each iteration of EM.
We now describe how a variational technique for decoupling transformations, and
a fast Fourier transform method for computing posterior probabilities can reduce
the above number to (C + 2 log N)N² scalar operations. For 10 classes and images of size 256 × 256, the new method takes 2,752,512 scalar operations per image for
each iteration of EM.
3
Factorized variational technique
To simplify the computation of the required posterior in (4), we use a variational
approximation (Jordan et al. 1998). As shown in Fig. 1b, our variational approximation is a completely factorized approximation to the true posterior:
$$p(c, z_0, T_1, z_1, \ldots, T_K | z_K) \approx q(c, z_0, T_1, z_1, \ldots, T_K) = q(c)\,q(z_0)\Big[\prod_{k=1}^{K-1} q(T_k)\,q(z_k)\Big]\,q(T_K). \qquad (5)$$

The q-distributions are parameterized and these variational parameters are varied to make the approximation a good one. p(c, z_0 | z_K) ≈ q(c)q(z_0), so the sufficient statistics can be readily determined from q(c) and q(z_0). The variational parameters are q(c) = ρ_c, q(T_k) = ρ_{k,T_k}, q(z_k) = N(z_k; μ_k, Φ_k).
The generalized EM algorithm (Neal and Hinton 1998) maximizes a lower bound
on the log-likelihood of the observed image zK :
$$B = \sum \int q(c, z_0, T_1, z_1, \ldots, T_K)\, \ln \frac{p(c, z_0, T_1, z_1, \ldots, T_K, z_K)}{q(c, z_0, T_1, z_1, \ldots, T_K)} \;\le\; \ln p(z_K). \qquad (6)$$
In the E step, the variational parameters are adjusted to maximize B and in the M
step, the model parameters are adjusted to maximize B.
Assuming constant noise, Ψ = ψI, the derivatives of B with respect to the variational parameters produce the following E-step updates:

$$\Phi_0 \leftarrow \Big(\sum_c \rho_c \Phi_c^{-1} + \psi^{-1} I\Big)^{-1}, \qquad \mu_0 \leftarrow \Phi_0\Big(\sum_c \rho_c \Phi_c^{-1}\mu_c + \psi^{-1}\sum_{T_1}\rho_{1,T_1} T_1^{-1}\mu_1\Big) \qquad (7)$$

$$\rho_c \leftarrow \pi_c \exp\Big(-\tfrac{1}{2}\operatorname{tr}(\Phi_0\Phi_c^{-1}) - \tfrac{1}{2}(\mu_0-\mu_c)'\,\Phi_c^{-1}(\mu_0-\mu_c)\Big)$$

$$\Phi_k \leftarrow \tfrac{1}{2}\psi I, \qquad \mu_k \leftarrow \tfrac{1}{2}\Big(\sum_{T_k}\rho_{k,T_k}\, T_k\mu_{k-1} + \sum_{T_{k+1}}\rho_{k+1,T_{k+1}}\, T_{k+1}^{-1}\mu_{k+1}\Big) \qquad (8)$$

$$\rho_{k,T_k} \leftarrow \pi_{k,T_k}\exp\Big(-\tfrac{1}{2}\psi^{-1}\operatorname{tr}(\Phi_k) - \tfrac{1}{2}\psi^{-1}(\mu_k - T_k\mu_{k-1})'(\mu_k - T_k\mu_{k-1})\Big). \qquad (9)$$
Each time the ρ_c's are updated, they should be normalized, and similarly for the ρ_{k,T_k}'s. One or more iterations of the above updates are applied for each training
case and the variational parameters are stored for use in the M-step, and as the
initial conditions for the next E-step.
The derivatives of B with respect to the model parameters produce the following
M-step updates:
$$\pi_c \leftarrow \langle\rho_c\rangle, \qquad \mu_c \leftarrow \langle\rho_c\,\mu_0\rangle, \qquad \Phi_c \leftarrow \big\langle\rho_c\big(\Phi_0 + \operatorname{diag}((\mu_0-\mu_c)(\mu_0-\mu_c)')\big)\big\rangle, \qquad (10)$$

where ⟨·⟩ indicates an average over the training set.
This factorized variational inference technique is quite greedy, since at each step,
the method approximates the posterior with one Gaussian. So, the method works
best for a small number of steps (2 in our experiments).
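To make update (9) concrete, the following numpy sketch computes the posterior over one step's transformations; the log-space normalization and the permutation-array representation of T_k are our own choices, not the paper's.

```python
import numpy as np

def update_transform_posterior(mu_k, mu_prev, Phi_k_trace, perms, prior, psi):
    """Eq. (9): rho_{k,T} ~ pi_{k,T} exp(-(tr(Phi_k) + ||mu_k - T mu_{k-1}||^2)
    / (2 psi)). `perms` is a list of permutation index arrays standing in for
    the transformation matrices T."""
    log_rho = np.array([
        np.log(prior[t])
        - 0.5 / psi * (Phi_k_trace + np.sum((mu_k - mu_prev[T]) ** 2))
        for t, T in enumerate(perms)
    ])
    log_rho -= log_rho.max()          # normalize in log space for stability
    rho = np.exp(log_rho)
    return rho / rho.sum()
```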
4
Inference using fast Fourier transforms
The M-step updates described above take very few computations, but the E-step
updates can be computationally burdensome. The dominant culprits are the computation of the distance of the form
$$d_T = (g - Th)'(g - Th) \qquad (11)$$
in (9), for all possible transformations T, and the computation of the form
$$\sum_T \rho_T\, T h \qquad (12)$$
in (7) and (8).
Since the variational approximation is more accurate when the transformations are
broken into fewer steps, it is a good idea to pack as many transformations into
each step as possible. In our experiments, x-y translations are applied in one step,
and rotations are applied in another step. However, the number of possible x-y
translations in a 320 × 240 image is 76,800. So, 76,800 d_T's must be computed and the computation of each d_T uses a vector norm of size 76,800.
It turns out that if the data is defined on a coordinate system where the effect of a
transformation is a shift, the above quantities can be computed very quickly using
fast Fourier transforms (FFTs). For images measured on rectangular grids, an x-y
translation corresponds to a shift in the coordinate system. For images measured
on a radial grid, such as the one shown in Fig. 1c, a scale and rotation corresponds
to a shift in the coordinate system (Wolberg and Zokai 2000).
When updating the variational parameters, it is straightforward to convert them to
the appropriate coordinate system, apply the FFT method and convert them back.
We now use a very different notation to describe the FFT method. The image is
measured on a discrete grid and x is the x-y coordinate of a pixel in the image (x
is a 2-vector). The images g and h in (11) and (12) are written as functions of x:
g(x), h(x). In this representation, T is an integer 2-vector, corresponding to a shift
in x. So, (11) becomes
$$d(T) = \sum_x \big(g(x) - h(x+T)\big)^2 = \sum_x \big(g(x)^2 - 2\,g(x)\,h(x+T) + h(x+T)^2\big) \qquad (13)$$
and (12) becomes
$$\sum_T \rho(T)\,h(x+T). \qquad (14)$$
The common form is the correlation

$$f(T) = \sum_x g(x)\,h(x+T), \qquad (15)$$
For an N × N grid, computing the correlation directly for all T takes N⁴ scalar operations. The FFT can be used to compute the correlation in N² log N time. The FFTs G(ω) and H(ω) of g and h are computed in N² log N time. Then, the FFT F(ω) of f is computed in N² time as follows,

$$F(\omega) = G(\omega)^*\, H(\omega), \qquad (16)$$

where * denotes complex conjugation. Then the inverse FFT f(T) of F(ω) is computed in N² log N time.
Using this method, the posterior and sufficient statistics for all N² shifts in an N × N grid can be computed in N² log N time. Using this method along with the variational technique, C classes, N scales, N rotations, N x-translations and N y-translations can be accounted for using (C + 2 log N)N² scalar operations.
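A small numpy sketch of this trick (function names are ours), using the FFT cross-correlation (15)-(16) to evaluate d(T) of (13) for every circular shift at once:

```python
import numpy as np

def all_shift_distances(g, h):
    """d(T) = sum_x (g(x) - h(x+T))^2 for every 2-D shift T, computed via
    FFTs in O(N^2 log N) instead of O(N^4). Shifts are circular."""
    G = np.fft.fft2(g)
    H = np.fft.fft2(h)
    corr = np.fft.ifft2(np.conj(G) * H).real        # f(T) = sum_x g(x) h(x+T)
    # expand the square in (13); sum of h^2 is shift-invariant
    return (g ** 2).sum() + (h ** 2).sum() - 2.0 * corr
```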
5
Results
In order to compare our new learning algorithm with the previously published result,
we repeated the experiment on clustering head poses in 200 44x28 frames. We
achieved essentially the same result, but in only 10 seconds as opposed to 40 minutes
that the original algorithm needed to compete the task. Both algorithms were
implemented in Matlab. It should be noted that the original algorithm tested only
for 9 vertical and 9 horizontal shifts (81 combinations), while the new algorithm
dealt with all 1232 possible discrete shifts. This makes the new algorithm 600
times faster on low resolution data. The speed-up is even more drastic at higher
resolutions, and when rotations and scales are added, since the complexity of the
original algorithm is CN⁶, where C is the number of classes and N is the number
of pixels.
The speed-up promised in the abstract is based on our computations, but obviously
we were not able to run the original algorithm on full 320x240 resolution data.
To illustrate that the fast variational technique presented here can be efficiently
used to learn data means in the presence of scale change, significant rotations and
translations in the data, we captured 10 seconds of a video at 320x240 resolution and
trained a two-stage transformation-invariant model where the first stage modeled rotations
and scales as shifts in the log-polar coordinate system and the second stage modeled
all possible shifts as described above. In Fig. 2 we show the results of training an
ordinary Gaussian model, shift-invariant model and finally the scale, rotation and
shift invariant model on the sequence. We also show three frames from the sequence
stabilized using the variational inference.
6
Conclusions
We have described how a tremendous speed-up in training transformation-invariant generative models can be achieved through the use of a variational technique for decoupling transformations and a fast Fourier transform method for computing posterior probabilities. For N × N images, learning C clusters under N rotations, N scales, N x-translations and N y-translations takes only (C + 2 log N)N² scalar operations per iteration. In contrast, the original algorithm takes CN⁶ operations to account
for these transformations. In this way we were able to reduce the computation to
only seconds per frame for the images of 320x240 resolution using a simple Matlab
implementation.
This opens the door for generative models of pixel intensities in video to be efficiently used for transformation-invariant video summary and search. As opposed to
most techniques used in computer vision today, the generative modeling approach
provides the likelihood model useful for search or retrieval, automatic clustering of
the data and the extensibility through adding new hidden variables.
The model described here could potentially be useful for other high-dimensional
data, such as audio.
References
Dempster, A. P., Laird, N. M., and Rubin, D. B. 1977. Maximum likelihood from incomplete data via the EM algorithm. Proceedings of the Royal Statistical Society,
B-39:1-38.
Frey, B. J. and Jojic, N. 2001. Transformation invariant clustering and dimensionality
reduction. IEEE Transactions on Pattern Analysis and Machine Intelligence. To
appear. Available at http://www.cs.utoronto.ca/~frey.
Figure 2: Learning a rotation, scale and translation invariant model on 320x240
video
Hinton, G. E., Dayan, P., and Revow, M. 1997. Modeling the manifolds of images of
handwritten digits. IEEE Transactions on Pattern Analysis and Machine Intelligence, 8:65-74.
Jojic, N., Petrovic, N., Frey, B. J., and Huang, T. S. 2000. Transformed hidden markov
models: Estimating mixture models of images and inferring spatial transformations
in video sequences. In Proceedings of the IEEE Conference on Computer Vision and
Pattern Recognition.
Jordan, M. I., Ghahramani, Z., Jaakkola, T. S., and Saul, L. K. 1998. An introduction
to variational methods for graphical models. In Jordan, M. I., editor, Learning in
Graphical Models. Kluwer Academic Publishers, Norwell MA.
LeCun, Y., Bottou, L., Bengio, Y., and Haffner, P. 1998. Gradient-based learning applied
to document recognition. Proceedings of the IEEE, 86(11):2278-2324.
Neal, R. M. and Hinton, G. E. 1998. A view of the EM algorithm that justifies incremental,
sparse, and other variants. In Jordan, M. I., editor, Learning in Graphical Models,
pages 355-368. Kluwer Academic Publishers, Norwell MA.
Simard, P. Y., LeCun, Y., and Denker, J. 1993. Efficient pattern recognition using a new
transformation distance. In Hanson, S. J., Cowan, J. D., and Giles, C. L., editors,
Advances in Neural Information Processing Systems 5. Morgan Kaufmann, San Mateo
CA.
Vasconcelos, N. and Lippman, A. 1998. Multiresolution tangent distance for affine-invariant classification. In Jordan, M. I., Kearns, M. I., and Solla, S. A., editors,
Advances in Neural Information Processing Systems 10. MIT Press, Cambridge MA.
Wolberg, G. and Zokai, S. 2000. Robust image registration using log-polar transform. In
Proceedings IEEE Intl. Conference on Image Processing, Vancouver, Canada.
1,054 | 1,963 | On the Convergence of Leveraging
Gunnar Rätsch, Sebastian Mika and Manfred K. Warmuth
RSISE, Australian National University, Canberra, ACT 0200 Australia
Fraunhofer FIRST, Kekuléstr. 7, 12489 Berlin, Germany
University of California at Santa Cruz, CA 95060, USA
raetsch@csl.anu.edu.au, mika@first.fhg.de, manfred@cse.ucsc.edu
Abstract
We give a unified convergence analysis of ensemble learning methods including e.g. AdaBoost, Logistic Regression and the Least-Square-Boost algorithm for regression. These methods have in common that they iteratively call a base learning algorithm which returns hypotheses that are then linearly combined. We show that these methods are related to the Gauss-Southwell method known from numerical optimization and state non-asymptotical convergence results for all these methods. Our analysis includes ℓ₁-norm regularized cost functions leading to a clean and general way to regularize ensemble learning.
1 Introduction
We show convergence rates of ensemble learning methods such as AdaBoost [10], Logistic
Regression (LR) [11, 5] and the Least-Square (LS) regression algorithm called LS-Boost
[12]. These algorithms have in common that they iteratively call a base learning algorithm
(also called weak learner) on a weighted training sample. The base learner is expected to return in each iteration t a hypothesis h_t from some hypothesis set H of weak hypotheses that has small weighted training error. This is the weighted number of false predictions in classification and the weighted estimation error in regression. These hypotheses are then linearly combined to form the final hypothesis f(x) = Σ_t α_t h_t(x); in classification one uses the sign of f(x) for prediction. The hypothesis coefficient α_t is determined at iteration t, such that a certain objective is minimized or approximately minimized, and is fixed for later iterations. Here we will work out sufficient conditions on the base learning algorithm to achieve linear convergence to the minimum of an associated loss function G. This means that for any starting condition the minimum can be reached with precision ε in only O(log(1/ε)) iterations.
Relation to Previous Work In the original work on AdaBoost it has been shown that
the optimization objective (which is an upper bound on the training error) converges exponentially fast to zero, if the base learner is consistently better than random guessing, i.e.
its weighted training error is always smaller than some constant c with c < 1/2. In this case the convergence is known to be linear (i.e. exponentially decreasing) [10]. One can easily show that this is the case when the data is separable:1 If the data is not separable, the
Supported by DFG grants MU 987/1-1, JA 379/9-1 and NSF grant CCR 9821087; we gratefully
acknowledge help from B. Borchers, P. Spellucci, R. Israel and S. Lemm. This work has been done,
while G. Rätsch was at Fraunhofer FIRST, Berlin.
1. We call the data separable, if there exists a combined hypothesis f = Σ_t α_t h_t such that f separates the training examples.
weighted training error cannot be upper bounded by a constant smaller than 1/2, otherwise one
could use AdaBoost to find a separation using the aforementioned convergence result. 2
For AdaBoost and Logistic Regression it has been shown [5] that they generate a combined
hypothesis asymptotically minimizing a loss functional only depending on the output of
the combined hypothesis f. This holds for the non-separable case; however, the assumed
conditions in [5] on the performance of the base learner are rather strict and can usually
not be satisfied in practice. Although the analysis in [5] holds in principle for any strictly
convex cost function of Legendre-type (e.g. [24], p. 258, and [1]), one needs to show the
existence of a so-called auxiliary function [7, 5] for each cost function other than the exponential or the logistic loss. This can indeed be done [cf. 19, Section 4.2], but in any case
only leads to asymptotical results. In the present work we can also show rates of convergence.
In an earlier attempt to show the convergence of such methods for arbitrary loss functions [17], one needed to assume that the hypothesis coefficients α_t are upper bounded by a rather small constant. For this case it has been shown that the algorithm asymptotically converges to a combined hypothesis minimizing G. However, since the α_t's need to be small, the algorithm requires many iterations to achieve this goal.
In [9] it has been shown that for loss functions which are (essentially) exponentially decreasing (including the loss functions of AdaBoost and Logistic Regression), the loss decreases at a fixed rate in the first iterations and exponentially fast afterwards. This implies linear convergence.
However, this only holds, if the loss reaches zero, i.e. if the data is separable. In our work
we do not need to assume separability.
An equivalent optimization problem for AdaBoost has also been considered in a paper that
predates the formulation of AdaBoost [4]. This optimization problem concerns the likelihood maximization for some exponential family of distributions. In this work convergence
is proven for the general non-separable case, however, only for the exponential loss, i.e. for
the case of AdaBoost. 3 The framework set up in this paper is more general and we are able
to treat any strictly convex loss function.
In this paper we propose a family of algorithms that are able to generate a combined hypothesis f converging to the minimum of G[f] (if it exists), which is a functional depending on the outputs of the function f evaluated on the training set. Special cases are AdaBoost, Logistic Regression and LS-Boost. While assuming mild conditions on the base learning algorithm and the loss function G, we can show linear convergence rates [15] (beginning in the first iteration) of the type G[f_{t+1}] − min_f G[f] ≤ η (G[f_t] − min_f G[f]) for some fixed η ∈ [0, 1). This means that the difference to the minimum loss converges exponentially fast to zero (in the number of iterations). A similar convergence has been proven for AdaBoost in the special case of separable data [10], although the constant shown in [10] can be considerably smaller [see also 9]. To prove the convergence of leveraging, we exploit results of Luo & Tseng [16] for a variant of the Gauss-Southwell method known from numerical optimization.
Since in practice the hypothesis set H can be quite large, ensemble learning algorithms
without any regularization often suffer from overfitting [22, 12, 2, 19]. Here, the complexity can only be controlled by the size of the base hypothesis set or by early stopping
after a few iterations. However, it has been shown that shrinkage regularization implied
by penalizing some norm of the hypothesis coefficients is the favorable strategy [6, 12].
We therefore extend our analysis to the case of ℓ₁-norm regularized loss functions. With
a slight modification this leads to a family of converging algorithms that e.g. includes the
Leveraged Vector Machine [25] and a variant of LASSO [26].
In the following section we briefly review AdaBoost, Logistic Regression, and LS-Boost
and cast them in a common framework. In Sec. 3 we present our main results. After relating these results to leveraging algorithms, we present an extension to regularized cost functions in Sec. 4 and finally conclude.
2. This can also be seen when analyzing a certain linear program in the dual domain (cf. [23]).
3. We will expand on this connection in the full paper (see also [14, 19]).
2 Leveraging algorithms revisited
We first briefly review some of the most well known leveraging algorithms for classification
and regression. For more details see e.g. [10, 11, 12, 8]. We work with Alg. 1 as a template
for a generic leveraging algorithm, since these algorithms have the same algorithmic
structure. Finally, we will generalize the problem and extend the notation.
AdaBoost & Logistic Regression are designed for classification tasks. In each iteration t they call a base learning algorithm on the training set S = {(x_1, y_1), ..., (x_N, y_N)} (cf. step 3a in Alg. 1). Here a weighting d^(t) = (d_1^(t), ..., d_N^(t)) on the sample is used that is recomputed in each iteration t. The base learner is expected to return a hypothesis ĥ_t from some hypothesis space H^4 that has small weighted classification error^5 ε_t = Σ_{n=1}^N d_n^(t) I(y_n ≠ ĥ_t(x_n)) [10, 11], where y_n ∈ {−1, +1} and I(·) denotes the indicator function. It is more convenient to work with the edge of ĥ_t, which is defined as γ_t = Σ_{n=1}^N d_n^(t) y_n ĥ_t(x_n). After selecting the hypothesis, its weight α̂_t is computed such that it minimizes a certain functional (cf. step 3b). For AdaBoost this is

$$G^{AB}(\alpha) = \sum_{n=1}^N \exp\big(-y_n (f_{t-1}(x_n) + \alpha\, \hat h_t(x_n))\big) \qquad (1)$$

and for Logistic Regression it is

$$G^{LR}(\alpha) = \sum_{n=1}^N \log\big(1 + \exp(-y_n (f_{t-1}(x_n) + \alpha\, \hat h_t(x_n)))\big) \qquad (2)$$

where f_{t−1} is the combined hypothesis of the previous iteration given by f_{t−1}(x) = Σ_{r=1}^{t−1} α̂_r ĥ_r(x). For AdaBoost it has been shown that the α̂_t minimizing (1) can be computed analytically [3]. This is true because we assumed that the hypotheses are binary valued. Similarly, for LR there exists an analytic solution of (2). The weighting on the sample is updated based on the new combined hypothesis f_t: d_n^(t+1) = exp(−y_n f_t(x_n)) and d_n^(t+1) = 1/(1 + exp(y_n f_t(x_n))) for AdaBoost and Logistic Regression, respectively.
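For hypotheses with outputs in {−1, +1} and normalized weights, the minimizer of (1) has the familiar closed form; a small sketch under those assumptions (names are ours):

```python
import numpy as np

def adaboost_alpha(d, y, h_x):
    """Closed-form minimizer of the exponential loss (1) for h in {-1,+1}:
    alpha = 0.5 * ln((1 + edge) / (1 - edge)), with edge = sum_n d_n y_n h(x_n).
    Assumes d sums to one and |edge| < 1."""
    edge = np.dot(d, y * h_x)
    return 0.5 * np.log((1.0 + edge) / (1.0 - edge))
```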
Least-Square-Boost is an algorithm to solve regression tasks. In this case y_n ∈ R, f(x) ∈ R and g(y, f) = (y − f)². It works in a similar way as AdaBoost and LR. It first selects a hypothesis solving

$$\hat h_t = \operatorname{argmin}_{h \in H} \sum_{n=1}^N \big(d_n^{(t)} - h(x_n)\big)^2, \text{ where } d_n^{(t)} = y_n - f_{t-1}(x_n), \qquad (3)$$

and then finds the hypothesis weight α̂_t by minimizing the squared error of the new combined hypothesis:

$$\hat\alpha_t = \operatorname{argmin}_{\alpha} \sum_{n=1}^N \big(y_n - f_{t-1}(x_n) - \alpha\, \hat h_t(x_n)\big)^2. \qquad (4)$$

The 'weighting' of the sample is computed as d_n^(t+1) = y_n − f_t(x_n), which is the residual of f_t [12]. In a second version of LS-Boost, the base hypothesis and its weight are found simultaneously by solving [12]:

$$(\hat\alpha_t, \hat h_t) = \operatorname{argmin}_{\alpha,\, h \in H} \sum_{n=1}^N \big(y_n - f_{t-1}(x_n) - \alpha\, h(x_n)\big)^2. \qquad (5)$$

Since in (5) one reaches a lower loss function value than with (3) and (4), it might be the favorable strategy.
4. Notice that H always contains only a finite number of different hypotheses when evaluated on the training set and is effectively finite [2].
5. Different from common convention, we include the labels in the definition of the weighted error to make the presentation simpler.
Algorithm 1: A Leveraging algorithm for the loss function G.
1. Input: Sample S, No. of Iterations T, Loss function G
2. Initialize: f_0 = 0, d_n^(1) = g'(y_n, f_0(x_n)) for all n = 1, ..., N
3. Do for t = 1, ..., T:
(a) Train classifier on {S, d^(t)} and obtain hypothesis ĥ_t ∈ H
(b) Set α̂_t = argmin_α G(f_{t−1} + α ĥ_t)
(c) Update f_t = f_{t−1} + α̂_t ĥ_t and d_n^(t+1) = g'(y_n, f_t(x_n))
4. Output: f_T
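A rough Python rendering of Alg. 1 follows; the scipy line search and the calling conventions are our simplifications, not part of the algorithm's specification.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def leverage(X, y, base_learner, loss, dloss, T):
    """Sketch of Alg. 1. loss(y, f) returns per-sample losses g(y_n, f_n);
    dloss(y, f) returns g'(y_n, f_n), used as the weighting d^(t).
    base_learner is assumed to accept the (possibly signed) weights."""
    f = np.zeros(len(y))
    hyps, alphas = [], []
    for _ in range(T):
        d = dloss(y, f)                                   # step 3: weights
        h = base_learner(X, y, d)                         # step (a)
        hx = h(X)
        a = minimize_scalar(lambda a: loss(y, f + a * hx).sum()).x  # (b)
        f = f + a * hx                                    # step (c)
        hyps.append(h)
        alphas.append(a)
    return hyps, alphas, f
```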
The General Case. These algorithms can be summarized in Alg. 1 (where case (5) is slightly degenerated, cf. Sec. 3.2) for some appropriately defined functions G and g: plugging in G(f) = Σ_n g(y_n, f(x_n)) and choosing g(y, f) = exp(−y f) for AdaBoost (cf. (1)), g(y, f) = log(1 + exp(−y f)) for Logistic Regression (cf. (2)) and g(y, f) = (y − f)² for LS-Boost (cf. (4)). It can easily be verified that the function g', used for computing the weights d_n^(t), is the derivative of g with respect to the second argument [3, 12].
The Optimization Problem. It has been argued in [3, 18, 11, 17] and finally shown in [5] that AdaBoost and Logistic Regression under certain conditions asymptotically converge to a combined hypothesis minimizing the respective loss G on the training sample, where f is a linear combination of hypotheses from H, i.e. f = Σ_j α_j h_j. Thus, they solve the optimization problem:

$$\min_{\alpha}\; G(\alpha), \quad \text{where } G(\alpha) = \sum_{n=1}^N g\big(y_n, [H\alpha]_n\big), \qquad (6)$$

where we defined a matrix H with H_{nj} = h_j(x_n). To avoid confusion, note that hypotheses and coefficients generated during the iterative algorithm are marked by a hat. In the algorithms discussed so far, the optimization takes place by employing the leveraging scheme outlined in Alg. 1. The output of such an algorithm is a sequence of pairs (α̂_t, ĥ_t) and a combined hypothesis f_T = Σ_{t=1}^T α̂_t ĥ_t. With α_j = Σ_{t: ĥ_t = h_j} α̂_t, it is easy to verify that f_T = Σ_j α_j h_j, which is in span(H) (note the missing hat).
Other Preliminaries. Throughout the paper we assume the loss function G is of the form G(f) = Σ_{n=1}^N g(y_n, f(x_n)). Although this assumption is not necessary, the presentation becomes easier. In [7, 5, 19] a more general case of Legendre-type cost functions is considered. However, note that additive loss functions are commonly used, if one considers i.i.d.-drawn examples. We assume that each element of H and each example is finite and that H does not contain a zero column. Furthermore, the function g is assumed to be strictly convex in its second argument for all y. For simplicity we assume for the rest of the paper that H is finite and complementation closed, i.e. for every h ∈ H there exists also −h ∈ H. The assumption on the finiteness is not crucial for classification (cf. footnote 4). For regression problems the hypothesis space might be infinite. This case has explicitly been analyzed in [20, 19] and goes beyond the scope of this paper (see also [27]).
3 Main Result
We now state a result known from the field of numerical optimization. Then we show how
the reviewed leveraging algorithms fit into this optimization framework.
3.1 Coordinate Descent
The idea of coordinate descent is to iteratively select a coordinate, say the j-th, and find α_j such that some functional G(α) is minimized with respect to α_j. There exist several different strategies for selecting the coordinates [e.g. 15]; however, we are in particular interested in the Gauss-Southwell-type (GS) selection scheme: it selects the coordinate that has the largest absolute value in the gradient vector ∇G(α), i.e. j = argmax_n |∇G(α)_n|. Instead of doing steps in the direction of the negative gradient as in standard gradient descent methods, one only changes the variable that has the largest gradient component. This can be efficient, if there are many variables and most of them are zero at the minimum.
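A compact sketch of this selection scheme (our code, not from [15] or [16]):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def gauss_southwell(G, grad_G, alpha0, steps):
    """Coordinate descent with Gauss-Southwell selection: pick the coordinate
    with the largest absolute gradient, minimize G along it, repeat."""
    alpha = np.asarray(alpha0, dtype=float).copy()
    for _ in range(steps):
        g = grad_G(alpha)
        j = int(np.argmax(np.abs(g)))        # GS selection rule
        def line(a, j=j):
            v = alpha.copy()
            v[j] = a
            return G(v)
        alpha[j] = minimize_scalar(line).x   # exact step along coordinate j
    return alpha
```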
We start with the following general convergence result, which seems to have fallen into
oblivion even in the optimization community. It will be very useful in the analysis of
leveraging algorithms. Due to a lack of space we omit proofs (see [21, 19]).
Theorem 1 (Convergence of Coordinate Descent [16]). Suppose g is twice continuously differentiable and strictly convex on its domain. Assume that the domain is open and that the set of solutions to

$$\min_{\alpha \in B}\; G(\alpha), \quad G(\alpha) = g(H\alpha) + \langle c, \alpha\rangle, \qquad (7)$$

is not empty, where H is a fixed matrix having no zero column, c is a fixed vector and B is a (possibly unbounded) box-constrained set. Furthermore assume that the Hessian ∇²g is a positive matrix for all points of the domain. Let {α^t} be the sequence generated by coordinate descent, where the coordinate selection j_t satisfies

$$G(\alpha^t) - G(\alpha^{t+1}) \;\ge\; \sigma\,\big(G(\alpha^t) - \min_j G_j(\alpha^t)\big) \qquad (8)$$

for some σ > 0, where G_j(α^t) is the optimal value of G if coordinate j would be selected, i.e.

$$G_j(\alpha^t) = \min_{\beta \in B:\ \beta_i = \alpha^t_i \text{ for } i \ne j} G(\beta). \qquad (9)$$

Then {α^t} converges to an element in the solution set.
The coordinate selection in Thm. 1 is slightly different from the Gauss-Southwell selection
rule described before. We therefore need the following:
Proposition 2 (Convergence of GS on B). Assume the conditions on g and H as in Thm. 1. Let B be a convex subset of the domain such that the iterates stay in B, and assume condition (10) holds for some fixed constant. Then a coordinate selection j_t such that

$$|\nabla G(\alpha^t)_{j_t}| \;\ge\; \nu \max_j |\nabla G(\alpha^t)_j| \qquad (11)$$

for some fixed ν ∈ (0, 1] satisfies (8) of Thm. 1. Thus the approximate Gauss-Southwell method on B as described above converges. To show the convergence of the second variant of LS-Boost (cf. (5)) we need the following
Proposition 3 (Convergence of the maximal improvement scheme on B). Let g and H be as in Proposition 2 and assume (10) holds. Then a coordinate selection j_t satisfies (8), if there exists a fixed ν ∈ (0, 1] with

$$G(\alpha^t) - G_{j_t}(\alpha^t) \;\ge\; \nu \max_j \big(G(\alpha^t) - G_j(\alpha^t)\big). \qquad (12)$$

Thus the maximal improvement scheme on B as above converges in the sense of Thm. 1.
Finally we can also state a rate of convergence, which is surprisingly not worse than the
rates for standard gradient descent methods:
Theorem 4 (Rate of Convergence of Coordinate Descent, [16]). Assume the conditions of Thm. 1 hold. Let B be as in Prop. 2 and assume (10) holds for some fixed constant. Then we have

$$G(\alpha^{t+1}) - G(\alpha^*) \;\le\; \eta\,\big(G(\alpha^t) - G(\alpha^*)\big), \qquad (13)$$

where α^t is the estimate after the t-th coordinate descent step, α* denotes an optimal solution, and η ∈ [0, 1). Especially at iteration t: G(α^t) − G(α*) ≤ η^t (G(α^0) − G(α*)).
Following [16] one can show how large the constant η is: it depends on the Lipschitz constant of ∇g and on a constant that depends on H and therefore on the geometry of the hypothesis set (cf. [16, 13] for details). While the upper bound on η can be rather large, making the convergence slow, it is important to note (i) that this is only a rough estimate of the true constant and (ii) that it still guarantees an exponential decrease in the error functional with the number of iterations.
3.2 Leveraging and Coordinate Descent
We now return from the abstract convergence results in Sec. 3.1 to our examples of leveraging algorithms, i.e. we show how to retrieve the Gauss-Southwell algorithm on B as a part of Alg. 1. For now we set c = 0. The gradient of G with respect to α_j is given by

$$\nabla G(\alpha)_j = \sum_{n=1}^N g'\big(y_n, f(x_n)\big)\, h_j(x_n), \qquad (14)$$

where f is given as in step 3c of Alg. 1. Thus, the coordinate with maximal absolute gradient corresponds to the hypothesis with largest absolute edge (see definition). However, according to Propositions 2 and 3 we need to assume less on the base learner. It either has to return a hypothesis that (approximately) maximizes the edge, or alternatively (approximately) minimizes the loss function.
Definition 5 (ε-Optimality). A base learning algorithm is called ε-optimal, if it always returns hypotheses that either satisfy condition (11) or (12) for some fixed ε > 0.
Since we have assumed H is closed under complementation, there always exist two hypotheses having the same absolute gradient (h and −h). We therefore only need to consider the hypothesis with maximum edge as opposed to the maximum absolute edge.
For classification it means: if the base learner returns the hypothesis with approximately
smallest weighted training error, this condition is satisfied. It is left to show that we can
apply Thm. 1 to the loss functions reviewed in Sec. 2:
Lemma 6. The loss functions of AdaBoost, Logistic Regression and LS-Boost are bounded, strongly convex and fulfill the conditions in Thm. 1 on any bounded subset of their domain.
We can finally state the convergence result for leveraging algorithms:
Theorem 7. Let G be a loss function satisfying the conditions in Thm. 1. Suppose Alg. 1 generates a sequence of hypotheses ĥ_1, ĥ_2, ... and weights α̂_1, α̂_2, ... using an ε-optimal base learner. Assume the sequence of coefficient vectors is bounded. Then any limit point of the combined hypotheses f_t is a solution of (6), and G(f_t) converges linearly in the sense of Thm. 4.
Note that this result in particular applies to AdaBoost, Logistic Regression and the second version of LS-Boost. For the selection scheme of LS-Boost given by (3) and (4), both conditions in Definition 5 cannot be satisfied in general, unless the norm Σ_n h(x_n)² is constant for all hypotheses. Since minimizing (3) penalizes this norm, the base learner prefers hypotheses with small outputs and could therefore stop improving the objective while being not optimal (see [20, Section 4.3] and [19, Section 5] for more details).
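Since by (14) the gradient coordinates are exactly the weighted edges, the GS choice reduces to a single matrix-vector product; a sketch with our own names, where the ε-optimality test is only a paraphrase of condition (11):

```python
import numpy as np

def best_edge_hypothesis(d, H):
    """H[n, j] = h_j(x_n). With weights d_n = g'(y_n, f(x_n)) the gradient of
    G w.r.t. alpha_j is (d @ H)[j], so the coordinate with the largest
    absolute gradient is the hypothesis with the largest absolute edge."""
    edges = d @ H
    j = int(np.argmax(np.abs(edges)))
    return j, edges[j]

def is_eps_optimal(selected_edge, all_edges, eps):
    """Rough reading of (11): the returned hypothesis achieves at least an
    eps fraction of the best achievable absolute edge."""
    return abs(selected_edge) >= eps * np.max(np.abs(all_edges))
```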
4 Regularized Leveraging approaches
We have not yet exploited all features of Thm. 1. It additionally allows for box constraints
and a linear function in terms of the hypothesis coefficients. Here, we are in particular
interested in ℓ₁-norm penalized loss functions of the type G(α) + λ‖α‖₁, which are frequently used in machine learning. The LASSO algorithm for regression [26] and the PBVM algorithm for classification [25] are examples. Since we assumed complementation closeness of H, we can assume without loss of generality that a solution α* satisfies α* ≥ 0. We can therefore implement the ℓ₁-norm regularization using the linear term ⟨c, α⟩, where c = λ1 and λ ≥ 0 is the regularization constant. Clearly, the regularization defines a structure of nested subsets of span(H), where the hypothesis set is restricted to a smaller set for larger values of λ.
The constraint of non-negative coefficients causes some minor complications with the assumptions on the base learning algorithm. However, these can easily be resolved (cf. [21]), while not assuming more on the base learning algorithm. The first step in solving the problem is to add this non-negativity constraint to the minimization with respect to α̂_t in step 3b of Alg. 1. Roughly speaking, this induces the problem that a hypothesis coefficient chosen too large in a previous iteration cannot be reduced again. To solve this problem one can check for each coefficient of a previously selected hypothesis whether not selecting it would violate the ε-optimality condition (11) or (12). If so, the algorithm selects such a coordinate for the next iteration instead of calling the base learning algorithm. This idea leads to Alg. 2 (see [21] for a detailed discussion).

Algorithm 2: A Leveraging algorithm for the ℓ₁-norm regularized loss G.
1. Input: Sample S, No. of Iterations T, Loss function G, Reg. const. λ
2. Initialize: f_0 = 0, d_n^(1) = g'(y_n, f_0(x_n)) for all n = 1, ..., N
3. Do for t = 1, ..., T:
(a) Train classifier on {S, d^(t)} and obtain hypothesis ĥ_t
(b) Let ĵ be the coordinate of a previously selected hypothesis with the largest violation of the penalized ε-optimality condition (11) or (12)
(c) if such a violating coordinate ĵ exists
(d) then replace ĥ_t by the hypothesis h_ĵ, else keep ĥ_t
(e) Set α̂_t = argmin_α G(f_{t−1} + α ĥ_t) + λ|α|, keeping all coefficients non-negative
(f) Update f_t = f_{t−1} + α̂_t ĥ_t and d_n^(t+1) = g'(y_n, f_t(x_n))
4. Output: f_T

For this algorithm we can show the following:
Theorem 8 (Convergence of ℓ₁-norm penalized Leveraging). Assume g and H are as in Thm. 1, g is strictly convex, λ ≥ 0, and the base learner satisfies the ε-optimality condition (15) for all iterations. Then Alg. 2 converges linearly to a minimum of the regularized loss function.
This can also be shown for a maximum-improvement like condition on the base learner,
which we have to omit due to space limitation.
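To make the ℓ₁-modified selection concrete, here is a rough sketch of the check described above; the exact form of the regularized optimality test is our own guess at conditions like (15), not a transcription of them.

```python
import numpy as np

def next_coordinate(d, H, alpha, lam, eps):
    """Sketch: score each coordinate by its penalized gradient magnitude;
    revisit a previously selected coordinate when its violation is still an
    eps-fraction of the best one (our paraphrase, not the paper's test)."""
    score = np.abs(d @ H) - lam                 # penalized gradient magnitude
    selected = np.flatnonzero(alpha > 0)
    best = score.max()
    if selected.size and score[selected].max() >= eps * best:
        return int(selected[np.argmax(score[selected])])
    return int(np.argmax(score))                # else: new best-edge hypothesis
```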
In [27] a similar algorithm has been suggested that solves a similar optimization problem (keeping the ℓ₁-norm of the hypothesis coefficients fixed). For this algorithm one can show order one convergence (which is weaker than linear convergence), which also holds if the hypothesis set is infinite.
5 Conclusion
We gave a unifying convergence analysis for a fairly general family of leveraging methods.
These convergence results were obtained under rather mild assumptions on the base learner
and, additionally, led to linear convergence rates. This was achieved by relating leveraging
algorithms to the Gauss-Southwell method known from numerical optimization.
While the main theorem used here was already proven in [16], its application closes a
central gap between existing algorithms and their theoretical understanding in terms of
convergence. Future investigations include the generalization to infinite hypothesis spaces and an improvement of the convergence rate η. Furthermore, we conjecture that our results
can be extended to many other variants of boosting type algorithms proposed recently in
the literature (cf. http://www.boosting.org).
References
[1] H.H. Bauschke and J.M. Borwein. Legendre functions and the method of random bregman
projections. Journal of Convex Analysis, 4:27-67, 1997.
[2] K.P. Bennett, A. Demiriz, and J. Shawe-Taylor. A column generation algorithm for boosting.
In P. Langley, editor, Proceedings, 17th ICML, pages 65-72. Morgan Kaufmann, 2000.
[3] L. Breiman. Prediction games and arcing algorithms. Neural Comp., 11(7):1493-1518, 1999.
[4] N. Cesa-Bianchi, A. Krogh, and M. Warmuth. Bounds on approximate steepest descent for
likelihood maximization in exponential families. IEEE Trans. Inf. Th., 40(4):1215-1220, 1994.
[5] M. Collins, R.E. Schapire, and Y. Singer. Logistic Regression, Adaboost and Bregman distances. In Proc. COLT, pages 158-169, San Francisco, 2000. Morgan Kaufmann.
[6] J. Copas. Regression, prediction and shrinkage. J.R. Statist. Soc. B, 45:311-354, 1983.
[7] S. Della Pietra, V. Della Pietra, and J. Lafferty. Duality and auxiliary functions for bregman
distances. TR CMU-CS-01-109, Carnegie Mellon University, 2001.
[8] N. Duffy and D.P. Helmbold. A geometric approach to leveraging weak learners. In P. Fischer
and H. U. Simon, editors, Proc. EuroCOLT '99, pages 18-33, 1999.
[9] N. Duffy and D.P. Helmbold. Potential boosters? In S.A. Solla, T.K. Leen, and K.-R. Müller,
editors, NIPS, volume 12, pages 258-264. MIT Press, 2000.
[10] Y. Freund and R.E. Schapire. A decision-theoretic generalization of on-line learning and an
application to boosting. Journal of Computer and System Sciences, 55(1):119-139, 1997.
[11] J. Friedman, T. Hastie, and R.J. Tibshirani. Additive Logistic Regression: a statistical view of
boosting. Annals of Statistics, 2:337-374, 2000.
[12] J.H. Friedman. Greedy function approximation. Tech. rep., Stanford University, 1999.
[13] A.J. Hoffmann. On approximate solutions of systems of linear inequalities. Journal of Research
of the National Bureau of Standards, 49(4):263-265, October 1952.
[14] J. Kivinen and M. Warmuth. Boosting as entropy projection. In Proc. 12th Annu. Conference
on Comput. Learning Theory, pages 134-144. ACM Press, New York, NY, 1999.
[15] D.G. Luenberger. Linear and Nonlinear Programming. Addison-Wesley Publishing Co., Reading, second edition, May 1984. Reprinted with corrections in May, 1989.
[16] Z.-Q. Luo and P. Tseng. On the convergence of coordinate descent method for convex differentiable minimization. Journal of Optimization Theory and Applications, 72(1):7-35, 1992.
[17] L. Mason, J. Baxter, P.L. Bartlett, and M. Frean. Functional gradient techniques for combining
hypotheses. In Adv. Large Margin Class., pages 221-247. MIT Press, 2000.
[18] T. Onoda, G. Rätsch, and K.-R. Müller. An asymptotic analysis of AdaBoost in the binary
classification case. In L. Niklasson, M. Bodén, and T. Ziemke, editors, Proc. of the Int. Conf. on
Artificial Neural Networks (ICANN'98), pages 195-200, March 1998.
[19] G. Rätsch. Robust Boosting via Convex Optimization. PhD thesis, University of Potsdam,
October 2001. http://mlg.anu.edu.au/~raetsch/thesis.ps.gz.
[20] G. Rätsch, A. Demiriz, and K. Bennett. Sparse regression ensembles in infinite and finite
hypothesis spaces. Machine Learning, 48(1-3):193-221, 2002.
[21] G. Rätsch, S. Mika, and M.K. Warmuth. On the convergence of leveraging. NeuroCOLT2
Technical Report 98, Royal Holloway College, London, 2001.
[22] G. Rätsch, T. Onoda, and K.-R. Müller. Soft margins for AdaBoost. Machine Learning,
42(3):287-320, March 2001. Also NeuroCOLT Technical Report NC-TR-1998-021.
[23] G. Rätsch and M.K. Warmuth. Marginal boosting. NeuroCOLT2 Tech. Rep. 97, 2001.
[24] R.T. Rockafellar. Convex Analysis. Princeton University Press, 1970.
[25] Y. Singer. Leveraged vector machines. In S.A. Solla, T.K. Leen, and K.-R. Müller, editors,
NIPS, volume 12, pages 610-616. MIT Press, 2000.
[26] R.J. Tibshirani. Regression selection and shrinkage via the LASSO. Technical report, Department of Statistics, University of Toronto, June 1994. ftp://utstat.toronto.edu/pub/tibs/lasso.ps.
[27] T. Zhang. A general greedy approximation algorithm with applications. In Advances in Neural
Information Processing Systems, volume 14. MIT Press, 2002. in press.
1,055 | 1,964 | A kernel method for multi-labelled classification
André Elisseeff and Jason Weston
BIOwulf Technologies, 305 Broadway, New York, NY 10007
{andre,jason}@barhilltechnologies.com
Abstract
This article presents a Support Vector Machine (SVM) like learning system to handle multi-label problems. Such problems are usually decomposed into many two-class problems but the expressive power of such a
system can be weak [5, 7]. We explore a new direct approach. It is based
on a large margin ranking system that shares a lot of common properties with SVMs. We tested it on a Yeast gene functional classification
problem with positive results.
1 Introduction
Many problems in Text Mining or Bioinformatics are multi-labelled. That is, each point
in a learning set is associated to a set of labels. Consider for instance the classification
task of determining the subjects of a document, or of relating one protein to its many
effects on a cell. In either case, the learning task would be to output a set of labels whose
size is not known in advance: one document can for instance be about food, meat and
finance, although another one would concern only food and fat. Two-class and multi-class
classification or ordinal regression problems can all be cast into multi-label ones. This
makes the latter quite attractive but at the same time it gives a warning: their generality
hides their difficulty to solve them. The number of publications is not going to contradict
this statement: we are aware of only a few works about the subject [4, 5, 7] and they all
concern text mining applications.
In Schapire and Singer's work on Boostexter, one of the only general-purpose multi-label ranking systems [7], they observe that overfitting occurs on learning sets of relatively small size. They conclude that controlling the complexity of the overall learning system is an important research goal. The aim of the current paper is to provide a way
of controlling this complexity while having a small empirical error. For that purpose, we
consider only architectures based on linear models and follow the same reasoning as for the
definition of Support Vector Machines [1]. Defining a cost function (section 2) and margin
for multi-label models, we focus our attention mainly on an approach based on a ranking
method combined with a predictor of the size of the sets (section 3 and 4). Sections 5 and
6 present experiments on a toy problem and on a real dataset.
2 Cost functions
Let X ⊆ R^d be a d-dimensional input space. We consider as an output space the space Y formed by all the sets of integers between 1 and Q, identified here as the labels of the learning problem. Such an output space contains 2^Q elements, and one output corresponds to one set of labels. The learning problem we are interested in is to find, from a learning set S = {(x_1, Y_1), ..., (x_m, Y_m)} drawn identically and independently from an unknown distribution D, a function f such that the following generalization error is as low as possible:

$$R(f) = \mathbb{E}_{(x,Y)\sim D}\big[c(f, x, Y)\big] \qquad (1)$$
The function c is a real-valued loss and can take different forms depending on how f is computed. Here, we consider only linear models. Given Q weight vectors w_1, ..., w_Q and biases b_1, ..., b_Q, we follow two schemes:
With the binary approach: f(x) = sign(⟨w_1, x⟩ + b_1, ..., ⟨w_Q, x⟩ + b_Q), where the sign function applies component-wise. The value of f(x) is a binary vector from {−1, +1}^Q from which the set of labels can be retrieved easily by stating that label k is in the set iff sign(⟨w_k, x⟩ + b_k) = +1. For example this can be achieved by using a SVM for each binary problem and applying the latter rule [4].
With the ranking approach: assume that s(x), the size of the label set for the input x, is known. We define r_k(x) = ⟨w_k, x⟩ + b_k and consider that a label k is in the label set of x iff r_k(x) is among the largest s(x) elements r_1(x), ..., r_Q(x). The algorithm Boostexter [7] is an example of such a system. The ranking approach is analyzed more precisely in section 3.
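The two decision rules, written out as a short numpy sketch in our own notation:

```python
import numpy as np

def predict_binary(W, b, x):
    """Binary (one-vs-rest) scheme: label k is in the set iff <w_k, x> + b_k > 0."""
    scores = W @ x + b
    return set(np.flatnonzero(scores > 0))

def predict_ranking(W, b, x, s):
    """Ranking scheme: output the s top-ranked labels, with s = s(x) given
    by a set-size predictor."""
    scores = W @ x + b
    return set(np.argsort(-scores)[:s])
```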
We consider the same loss functions as in [7] for any multi-label system built from real functions f_1, ..., f_Q. It includes the so-called Hamming Loss defined as

$$HL(f, x, Y) = \frac{1}{Q}\,\big|f(x)\, \Delta\, Y\big|$$

where Δ stands for the symmetric difference of sets. When the label sets all have size one, a multi-label system is in fact a multi-class one and the Hamming Loss is 2/Q times the usual classification loss. We also consider the one-error:

$$\text{1-err}(f, x, Y) = \begin{cases} 0 & \text{if } \operatorname{argmax}_k f_k(x) \in Y \\ 1 & \text{otherwise} \end{cases}$$

which is exactly the same as the classification error for multi-class problems (it ignores the rankings apart from the highest ranked one and so does not address the quality of the other labels).
Other losses concern only ranking systems (a system that specifies a ranking of the labels but no set-size predictor s(x)). Let us denote by Ȳ the complementary set of Y in {1, ..., Q}. We define the Ranking Loss [7] to be:

$$RL(f, x, Y) = \frac{1}{|Y|\,|\bar{Y}|}\;\Big|\big\{(k, l) \in Y \times \bar{Y} \text{ s.t. } f_k(x) \le f_l(x)\big\}\Big| \qquad (2)$$
It represents the average fraction of pairs that are not correctly ordered. For ranking systems, this loss is natural and is related to the precision which is a common error measure
in Information Retrieval:
E 9^_L
Ma
E
+ [
s.t. ?`
?9;
E
H
/
E
\
E
^
_
L
#
B
M
a
E
precision
s.t. ?`
?
;
;9] &
from which a loss can be directly deduced. All these loss functions have been discussed
in [7]. Good systems should have a high precision and a low Hamming or Ranking Loss.
We do not consider the one-error to be a good loss for multi-label systems but we retain it
because it was measured in [7].
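To make the four measures concrete, here is a small sketch computing them for one example; the helper names are ours:

```python
# Losses for one example: `pred` is a predicted label set, `r` an array of
# ranking values r_k(x), `Y` the true label set, Q the number of labels.
import numpy as np

def hamming_loss(pred, Y, Q):
    return len(pred.symmetric_difference(Y)) / Q

def one_error(r, Y):
    return 0.0 if int(np.argmax(r)) in Y else 1.0

def ranking_loss(r, Y, Q):
    Ybar = set(range(Q)) - Y
    bad = sum(1 for i in Y for j in Ybar if r[i] <= r[j])
    return bad / (len(Y) * len(Ybar))

def precision(r, Y):
    total = 0.0
    for i in Y:
        above = [j for j in range(len(r)) if r[j] >= r[i]]
        total += sum(1 for j in above if j in Y) / len(above)
    return total / len(Y)
```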
For multi-label linear models, we need to define a way of minimizing the empirical error
measured by the appropriate loss and at the same time to control the complexity of the
resulting model. A direct method would be to use the binary approach and thus take the
benefit of good two-class systems. However, as has been pointed out in [5, 7], the binary
approach does not take into account the correlation between labels and therefore does not
capture the structure of some learning problems. We propose here to instead focus on the
ranking approach. This will be done by introducing notions of margin and regularization
as has been done for the two-class case in the definition of SVMs.
3 Ranking based system
Our goal is to define a linear model that minimizes the Ranking Loss while having a large
margin. For systems that rank the values of $\langle w_k, x\rangle + b_k$, the decision boundaries for $x$
are defined by the hyperplanes whose equations are $\langle w_i - w_j, x\rangle + b_i - b_j = 0$, where $i$
belongs to the label set of $x$ and $j$ does not. So, the margin of $(x, Y)$ can be expressed as:

$$\min_{i \in Y,\; j \in \bar{Y}} \; \frac{\langle w_i - w_j, x\rangle + b_i - b_j}{\|w_i - w_j\|}$$

It represents the signed distance of $x$ to the decision boundary. Considering that all the
data in the learning set are well ranked, we can normalize the parameters $w_k$ such that:

$$\langle w_i - w_j, x\rangle + b_i - b_j \ge 1$$

with equality for some $x$ in the learning set and some $(i, j) \in Y \times \bar{Y}$. Maximizing the margin on the whole
learning set can then be done via the following problem:

$$\max \; \min_{(x, Y) \in S} \; \min_{i \in Y,\; j \in \bar{Y}} \; \frac{1}{\|w_i - w_j\|^2} \qquad (3)$$

$$\text{subject to: } \langle w_i - w_j, x_k\rangle + b_i - b_j \ge 1, \quad (i, j) \in Y_k \times \bar{Y}_k \qquad (4)$$
In the case where the problem is not ill-conditioned (a problem is ill-conditioned when two labels
are always co-occurring), the objective function can be replaced by $\max_{i,j} \|w_i - w_j\|^2$. In order to get a simpler optimization procedure we approximate this maximum by
the sum and, after some calculations (see [3] for details), we obtain:

$$\min \; \sum_{k=1}^{Q} \|w_k\|^2 \qquad (5)$$

$$\text{subject to: } \langle w_i - w_j, x\rangle + b_i - b_j \ge 1, \quad (i, j) \in Y \times \bar{Y} \qquad (6)$$
To generalize this problem to the case where the learning set cannot be ranked exactly, we
follow the same reasoning as for the binary case: the ultimate goal would be to maximize
the margin and at the same time to minimize the Ranking Loss. The latter can be expressed
quite directly by extending the constraints of the previous problems. Indeed, if we have
$\langle w_i - w_j, x_k\rangle + b_i - b_j \ge 1 - \xi_{kij}$ for $(i, j) \in Y_k \times \bar{Y}_k$, then the Ranking Loss on the
learning set is:

$$\frac{1}{m} \sum_{k=1}^{m} \frac{1}{|Y_k|\,|\bar{Y}_k|} \sum_{(i,j) \in Y_k \times \bar{Y}_k} \theta(\xi_{kij} - 1)$$

where $\theta$ is the Heaviside function. As for SVMs, we approximate the functions $\theta(\xi_{kij} - 1)$
by $\xi_{kij}$ alone, and this gives the final quadratic optimization problem:

$$\min \; \sum_{k=1}^{Q} \|w_k\|^2 \;+\; C \sum_{k=1}^{m} \frac{1}{|Y_k|\,|\bar{Y}_k|} \sum_{(i,j) \in Y_k \times \bar{Y}_k} \xi_{kij} \qquad (7)$$

$$\text{subject to: } \langle w_i - w_j, x_k\rangle + b_i - b_j \ge 1 - \xi_{kij}, \quad (i, j) \in Y_k \times \bar{Y}_k \qquad (8)$$

$$\xi_{kij} \ge 0 \qquad (9)$$
In the case where the label sets $Y_k$ all have a size of one, we find the same optimization problem as has been derived for multi-class Support Vector Machines [8]. For this reason, we
call the solution of this problem a ranking Support Vector Machine (Rank-SVM). Another
common property with SVM is the possibility to use kernels rather than linear dot products. This can be achieved by computing the dual of the former optimization problem. We
refer the reader to [3] for the dual formulation and to [2] and references therein for more
information about kernels and SVMs.
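The paper solves (7)-(9) with an interior-point method (below); as a rough, non-equivalent illustration of the same primal objective, here is a plain subgradient-descent sketch. The learning rate, epoch count, and all names are our own choices, not the paper's:

```python
# Subgradient descent on sum_k ||w_k||^2 + C * sum over examples of the
# normalized pairwise hinge losses from problem (7)-(9). Linear kernel only.
import numpy as np

def rank_svm_sgd(X, Ysets, Q, C=1.0, lr=1e-2, epochs=100):
    m, d = X.shape
    W = np.zeros((Q, d)); b = np.zeros(Q)
    for _ in range(epochs):
        gW = 2 * W.copy(); gb = np.zeros(Q)   # gradient of the norm term
        for x, Y in zip(X, Ysets):
            Ybar = [j for j in range(Q) if j not in Y]
            coef = C / (len(Y) * len(Ybar))
            for i in Y:
                for j in Ybar:
                    margin = (W[i] - W[j]) @ x + b[i] - b[j]
                    if margin < 1:            # active hinge on pair (i, j)
                        gW[i] -= coef * x; gW[j] += coef * x
                        gb[i] -= coef;     gb[j] += coef
        W -= lr * gW; b -= lr * gb
    return W, b
```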
Solving a constrained quadratic problem like those we just introduced requires an amount
of memory that is quadratic in terms of the learning set size, and it is generally solved in a
number of computational steps that is polynomial (typically cubic, for generic QP solvers) in
the product of the learning set size and the number of labels. Such a
complexity is too high to apply these methods in many real datasets. To circumvent this limitation, we propose to use a linearization method in conjunction with a predictor-corrector
logarithmic barrier procedure. Details are described in [3] with all the calculations relative
to the implementation. The memory cost of the method then becomes linear in the learning
set size times the maximum number of labels per example; in many applications the total
number of labels $Q$ is much larger than this maximum. The time cost of each iteration is of
the same order.
4 Set size prediction
So far we have only developed ranking systems. To obtain a complete multi-label system
we need to design a set size predictor $s(x)$. A natural way of doing this is to look for
inspiration from the binary approach. The latter can indeed be interpreted as a ranking
system whose ranks are derived from the real values $f_k(x) = \langle w_k, x\rangle + b_k$. The predictor of the set
size is then quite simple: $s(x)$ is the number of $f_k(x)$ that are greater than 0. The function $s(x)$
is thus computed from a threshold value that differentiates labels in the target
set from others. For the ranking system introduced in the previous section we generalize
this idea by designing a function $s(x) = |\{ k \ \text{s.t.}\ r_k(x) \ge t(x) \}|$. The remaining problem now
is to choose $t(x)$, which is done by solving a learning problem. The training data are
composed of the ranking values $r(x_k)$ given by the ranking system, and of the target values $t(x_k)$
defined by:

$$t(x_k) = \arg\min_t \Big( \big|\{ i \in Y_k \ \text{s.t.}\ r_i(x_k) \le t \}\big| + \big|\{ j \in \bar{Y}_k \ \text{s.t.}\ r_j(x_k) \ge t \}\big| \Big)$$
When the minimum is not unique and the optimal values are a segment, we choose the
middle of this segment. We refer to this method of predicting the set size as the threshold
based method. In the following, we have used linear least squares, and we applied it not
only to Rank-SVM but also to Boostexter in order to transform these algorithms from
ranking methods to multi-label ones.
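A minimal sketch of the threshold-based predictor, assuming a linear least-squares model from the ranking values to the threshold; the helper functions and the way ties are broken are our own choices:

```python
# Fit t(x) by least squares on (ranking values, optimal threshold) pairs,
# then predict the set size as the number of labels ranked above t(x).
import numpy as np

def optimal_threshold(r, Y):
    """Threshold minimizing misranked labels; middle of the best segment."""
    cand = sorted(r)
    mids = [(cand[i] + cand[i + 1]) / 2 for i in range(len(cand) - 1)]
    mids = [cand[0] - 1] + mids + [cand[-1] + 1]
    errs = [sum(1 for i in Y if r[i] <= t)
            + sum(1 for j in range(len(r)) if j not in Y and r[j] >= t)
            for t in mids]
    best = [t for t, e in zip(mids, errs) if e == min(errs)]
    return (best[0] + best[-1]) / 2

def fit_size_predictor(R, Ysets):
    """R: (m, Q) ranking values on the training set -> affine model for t(x)."""
    T = np.array([optimal_threshold(r, Y) for r, Y in zip(R, Ysets)])
    A = np.hstack([R, np.ones((len(R), 1))])
    coef, *_ = np.linalg.lstsq(A, T, rcond=None)
    return coef
```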
Note that we could have followed a much simpler scheme to build the function $s(x)$. A
naive method would be to consider the set size prediction as a regression problem on the
original training data with the targets $s(x_k) = |Y_k|$, $k = 1, \dots, m$, and to use any regression learning
system. This however does not provide a satisfactory solution mainly because it does not
take into account how the ranking is performed. In particular, when there are some errors in
the ranking, it does not learn how to compensate these errors although the threshold based
approach tries to learn the best threshold with respect to these errors.
5 Toy problem
As previously noticed, the binary approach is not appropriate for problems where correlations between labels exist. To illustrate this point, consider figure 2. There are only three labels. One of them (label 1) is present for all points in the learning set. The binary approach
leads to a system that will fail to separate, for instance, points with label 2 from points of
label sets not containing 2, that is, points with label sets {1} and {1, 3}. We see then that the expressive power of a binary system can be quite low when such simple configurations occur.
If we consider the ranking approach, one can imagine the following solution: rank label 1
highest everywhere (for instance $w_1 = 0$ with a large bias $b_1$), and let $w_2 = -w_3$, $b_2 = -b_3$ define the hyperplane separating class 2 from class 3. By
taking the number of labels at point $x$ to be $s(x) = 1$ in the upper-left region and $s(x) = 2$ in the
bottom-right region (a choice that is itself expressible as a thresholded linear function of $x$),
we have a simple multi-label system that separates all the regions exactly.
Figure 2: Three labels and three regions in the input space. The upper left region is labelled {1}. The bottom right region is partitioned into two sub-regions with labels {1, 2} or {1, 3}.
To make this point more concrete, we sampled points uniformly on $[0, 1]^2$ and solved all the
optimization problems above. On the learning set, the Hamming Loss for the binary
approach was strictly positive, although for the direct approach it was zero, as expected.
6 Experiments on real data
Figure 3: First level of the hierarchy of the gene functional classes for Yeast (Saccharomyces cerevisiae). There are 14 groups: Metabolism; Energy; Transcription; Protein Synthesis; Protein Destination; Cell Growth, Cell Division, DNA Synthesis; Ionic Homeostasis; Cellular Rescue, Defense, Cell Death and Aging; Cellular Transport, Transport Mechanisms; Cellular Organization; Cellular Communication, Signal Transduction; Cellular Biogenesis; Transposable Elements, Viral and Plasmid Proteins; Transport Facilitation. One gene, for instance the gene YAL041W, can belong to different groups (shaded in grey on the figure).
The Yeast dataset is formed by micro-array expression data and phylogenetic profiles, with 1500 genes in the learning set and 917 in the test set. The input dimension is
103. Each gene is associated with a set of functional classes whose maximum size can potentially be large. This dataset has already been analyzed with a two-class approach [6] and is known to be difficult. In order to make
it easier, we used the known structure of the functional classes. The whole set of
classes is indeed structured in a tree whose leaves are the functional categories (see
http://mips.gsf.de/proj/yeast/catalogues/funcat/ for more details). Given
a gene, knowing which edge to take from one level to another leads directly to a leaf and
thus to a functional class. Here we try to predict which edge to take from the root to the
first level of the tree (see figure 3).
Since one gene can have many functional classes, this is a multi-label problem: one gene is
associated to different edges. We then have $Q = 14$, and the average number of labels for
all genes in the learning set is around 4.2. We assessed the quality of our method from two
perspectives. First as a ranking system with the Ranking Loss and the precision. In that
case, for the binary approach, the real outputs of the two-class SVMs were used as ranking
values. Second, the methods were compared as multi-label systems using the Hamming
Loss. We computed the latter for the binary approach used in conjunction with SVMs, for
the Rank-SVM, and for Boostexter. To measure the Hamming Loss with Boostexter we used
a threshold-based $s(x)$ function in combination with the ranking given by the algorithm.
[Table: Precision, Ranking Loss, Hamming Loss and one-error for Rank-SVM and Binary-SVM with polynomial kernels of degree 2-5; the numeric entries are not recoverable from this copy.]
Figure 4: Polynomials of degree 2-5. Loss functions for the rank-SVM and the binary
approach based on two-class SVMs. Considering the size of the problem, two values that
differ by only a small amount are not significantly different. Bold values represent superior
performance comparing classifiers with the same kernel.
For rank-SVMs and for two-class SVMs in the binary approach we choose polynomial
kernels of degrees two to nine (experiments on two-class problems using the Yeast data in
[6] already showed that polynomial kernels were appropriate for this task). Boostexter was
used with the standard stump weak learner and was stopped after 1000 iterations. Results
are reported in tables 4, 5 and 6.
[Table: Precision, Ranking Loss, Hamming Loss and one-error for Rank-SVM and Binary-SVM with polynomial kernels of degree 6-9; the numeric entries are not recoverable from this copy.]
Figure 5: Polynomials of degree 6-9. Loss functions for the rank-SVM and the binary
approach based on two-class SVMs. Considering the size of the problem, two values that
differ by only a small amount are not significantly different. Bold values represent superior
performance comparing classifiers with the same kernel.
[Table: Precision, Ranking Loss, Hamming Loss and one-error for Boostexter (1000 iterations); the numeric entries are not recoverable from this copy.]
Figure 6: Loss functions for Boostexter. Note that these results are worse than with the
binary approach or with rank-SVM.
Note that Boostexter performs quite poorly on this dataset compared to the SVM-based approaches. This may be due to the simple decision function realized by Boostexter. One
of the main advantages of the SVM-based approaches is the ability to incorporate a priori
knowledge into the kernel and to control complexity via the kernel and regularization. We
believe this may also be possible with Boostexter, but we are not aware of any work in this
area.
To compare the binary approach and the rank-SVM, we put in bold the best results for each kernel.
For all kernels and for almost all losses, the ranking-based SVM approach is
better than the binary one. In terms of the Ranking Loss, the difference is significantly in
favor of the rank-SVM. This is consistent with the fact that this system tends to minimize this
particular loss function. It is worth noticing that as the kernel becomes more and more
complex, the difference between the rank-SVM and the binary method disappears.
7 Discussion and conclusion
In this paper we have defined a whole system to deal with multi-label problems. The main
contribution is the definition of a ranking based SVM that extends the use of the latter to
many problems in the area of Bioinformatics and Text Mining.
We have seen on complex, real data that rank-SVMs lead to better performance than Boostexter and the binary approach. On its own this could be interpreted as a sufficient argument
to motivate the use of such a system. However, we can also extend the rank-SVM system to perform feature selection on ranking problems [3]. This application can be very
useful in the field of bioinformatics, as one is often interested in the interpretability of a multi-label decision rule. For example, one could be interested in a small set of genes which is
discriminative in a multi-condition physical disorder.
We have presented only first experiments using multi-labelled systems applied to Bioinformatics. Our future work is to conduct more investigations in this area.
References
[1] B. Boser, I. Guyon, and V. Vapnik. A training algorithm for optimal margin classifiers. In Fifth
Annual Workshop on Computational Learning Theory, pages 144-152, Pittsburgh, 1992. ACM.
[2] N. Cristianini and J. Shawe-Taylor. Introduction to Support Vector Machines. Cambridge University Press, 2000.
[3] André Elisseeff and Jason Weston. Kernel methods for multi-labelled classification and categorical regression problems. Technical report, BIOwulf Technologies, 2001. http://www.bhtlabs.com/public/.
[4] T. Joachims. Text categorization with support vector machines: learning with many relevant
features. In Claire Nédellec and Céline Rouveirol, editors, Proceedings of ECML-98, 10th European Conference on Machine Learning, number 1398, pages 137-142, Chemnitz, DE, 1998.
Springer Verlag, Heidelberg, DE.
[5] A. McCallum. Multi-label text classification with a mixture model trained by EM. AAAI'99
Workshop on Text Learning, 1999.
[6] P. Pavlidis, J. Weston, J. Cai, and W.N. Grundy. Combining microarray expression data and
phylogenetic profiles to learn functional categories using support vector machines. In RECOMB,
pages 242-248, 2001.
[7] R.E. Schapire and Y. Singer. Boostexter: A boosting-based system for text categorization. Machine Learning, 39(2/3):135-168, 2000.
[8] J. Weston and C. Watkins. Multi-class support vector machines. Technical Report 98-04, Royal
Holloway, University of London, 1998.
1,056 | 1,965 | Entropy and Inference, Revisited
Ilya Nemenman,1,2 Fariel Shafee,3 and William Bialek1,3
1 NEC Research Institute, 4 Independence Way, Princeton, New Jersey 08540
2 Institute for Theoretical Physics, University of California, Santa Barbara, CA 93106
3 Department of Physics, Princeton University, Princeton, New Jersey 08544
nemenman@itp.ucsb.edu, {fshafee/wbialek}@princeton.edu
Abstract
We study properties of popular near-uniform (Dirichlet) priors for learning undersampled probability distributions on discrete nonmetric spaces
and show that they lead to disastrous results. However, an Occam-style
phase space argument expands the priors into their infinite mixture and
resolves most of the observed problems. This leads to a surprisingly good
estimator of entropies of discrete distributions.
Learning a probability distribution from examples is one of the basic problems in data
analysis. Common practical approaches introduce a family of parametric models, leading to
questions about model selection. In Bayesian inference, computing the total probability of
the data arising from a model involves an integration over parameter space, and the resulting
'phase space volume' automatically discriminates against models with larger numbers of
parameters; hence the description of these volume terms as Occam factors [1, 2]. As we
move from finite parameterizations to models that are described by smooth functions, the
integrals over parameter space become functional integrals and methods from quantum
field theory allow us to do these integrals asymptotically; again the volume in model space
consistent with the data is larger for models that are smoother and hence less complex [3].
Further, at least under some conditions the relevant degree of smoothness can be determined
self-consistently from the data, so that we approach something like a model independent
method for learning a distribution [4].
The results emphasizing the importance of phase space factors in learning prompt us to
look back at a seemingly much simpler problem, namely learning a distribution on a discrete, nonmetric space. Here the probability distribution is just a list of numbers $\{q_i\}$,
$i = 1, 2, \dots, K$, where $K$ is the number of bins or possibilities. We do not assume any
metric on the space, so that a priori there is no reason to believe that any $q_i$ and $q_j$ should
be similar. The task is to learn this distribution from a set of examples, which we can
describe as the number of times $n_i$ each possibility is observed in a set of $N = \sum_{i=1}^{K} n_i$
samples. This problem arises in the context of language, where the index i might label
words or phrases, so that there is no natural way to place a metric on the space, nor is it
even clear that our intuitions about similarity are consistent with the constraints of a metric space. Similarly, in bioinformatics the index $i$ might label n-mers of the DNA or
amino acid sequence, and although most work in the field is based on metrics for sequence
comparison one might like an alternative approach that does not rest on such assumptions.
In the analysis of neural responses, once we fix our time resolution the response becomes
a set of discrete 'words,' and estimates of the information content in the response are determined by the probability distribution on this discrete space. What all of these examples
have in common is that we often need to draw some conclusions with data sets that are not
in the asymptotic limit $N \gg K$. Thus, while we might use a large corpus to sample the
distribution of words in English by brute force (reaching $N \gg K$ with $K$ the size of the
vocabulary), we can hardly do the same for three or four word phrases.
In models described by continuous functions, the infinite number of 'possibilities' can
never be overwhelmed by examples; one is saved by the notion of smoothness. Is there
some nonmetric analog of this notion that we can apply in the discrete case? Our intuition
is that information-theoretic quantities may play this role. If we have a joint distribution of
two variables, the analog of a smooth distribution would be one which does not have too
much mutual information between these variables. Even more simply, we might say that
smooth distributions have large entropy. While the idea of 'maximum entropy inference'
is common [5], the interplay between constraints on the entropy and the volume in the
space of models seems not to have been considered. As we shall explain, phase space
factors alone imply that seemingly sensible, more or less uniform priors on the space of
discrete probability distributions correspond to disastrously singular prior hypotheses about
the entropy of the underlying distribution. We argue that reliable inference outside the
asymptotic regime $N \gg K$ requires a more uniform prior on the entropy, and we offer one
way of doing this. While many distributions are consistent with the data when $N \ll K$,
we provide empirical evidence that this flattening of the entropic prior allows us to make
surprisingly reliable statements about the entropy itself in this regime.
At the risk of being pedantic, we state very explicitly what we mean by uniform or nearly
uniform priors on the space of distributions. The natural 'uniform' prior is given by

$$P_u(\{q_i\}) = \frac{1}{Z_u}\,\delta\!\left(1 - \sum_{i=1}^{K} q_i\right), \qquad Z_u = \int_{A} dq_1\, dq_2 \cdots dq_K\; \delta\!\left(1 - \sum_{i=1}^{K} q_i\right) \qquad (1)$$

where the delta function imposes the normalization, $Z_u$ is the total volume in the space of
models, and the integration domain $A$ is such that each $q_i$ varies in the range $[0, 1]$. Note
that, because of the normalization constraint, an individual $q_i$ chosen from this distribution
in fact is not uniformly distributed; this is also an example of phase space effects, since in
choosing one $q_i$ we constrain all the other $\{q_{j \ne i}\}$. What we mean by uniformity is that all
distributions that obey the normalization constraint are equally likely a priori.
Inference with this uniform prior is straightforward. If our examples come independently
from $\{q_i\}$, then we calculate the probability of the model $\{q_i\}$ with the usual Bayes rule:¹

$$P(\{q_i\}|\{n_i\}) = \frac{P(\{n_i\}|\{q_i\})\, P_u(\{q_i\})}{P_u(\{n_i\})}, \qquad P(\{n_i\}|\{q_i\}) = \prod_{i=1}^{K} (q_i)^{n_i}. \qquad (2)$$

If we want the best estimate of the probability $q_i$ in the least squares sense, then we should
compute the conditional mean, and this can be done exactly, so that [6, 7]

$$\langle q_i \rangle = \frac{n_i + 1}{N + K}. \qquad (3)$$
Thus we can think of inference with this uniform prior as setting probabilities equal to the
observed frequencies, but with an 'extra count' in every bin. This sensible procedure was
first introduced by Laplace [8]. It has the desirable property that events which have not
been observed are not automatically assigned probability zero.
¹ If the data are unordered, extra combinatorial factors have to be included in $P(\{n_i\}|\{q_i\})$. However, these cancel immediately in later expressions.
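Laplace's rule, Eq. (3), is easy to state in code; a two-line sketch (our phrasing):

```python
# Add-one (Laplace) smoothing: one extra count in every bin.
import numpy as np

def laplace_estimate(counts):
    n = np.asarray(counts, dtype=float)
    return (n + 1.0) / (n.sum() + n.size)

# laplace_estimate([3, 0, 1]) -> [0.571..., 0.142..., 0.285...]
```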
A natural generalization of these ideas is to consider priors that have a power-law dependence on the probabilities, the so-called Dirichlet family of priors:

$$P_\beta(\{q_i\}) = \frac{1}{Z(\beta)}\,\delta\!\left(1 - \sum_{i=1}^{K} q_i\right) \prod_{i=1}^{K} q_i^{\beta - 1}. \qquad (4)$$
It is interesting to see what typical distributions from these priors look like. Even though
different $q_i$'s are not independent random variables due to the normalizing $\delta$-function,
generation of random distributions is still easy: one can show that if the $q_i$'s are generated
successively (starting from $i = 1$ and proceeding up to $i = K$) from the Beta-distribution

$$P(q_i) = B\!\left(\frac{q_i}{1 - \sum_{j<i} q_j};\; \beta,\; (K - i)\beta\right), \qquad B(x; a, b) = \frac{x^{a-1}(1 - x)^{b-1}}{B(a, b)}, \qquad (5)$$
then the probability of the whole sequence $\{q_i\}$ is $P_\beta(\{q_i\})$. Fig. 1 shows some typical distributions generated this way. They
represent different regions of the range of possible entropies: low entropy ($\sim 1$ bit,
where only a few bins have observable probabilities), entropy in the middle of the
possible range, and entropy in the vicinity of the maximum, $\log_2 K$. When learning
an unknown distribution, we usually have no a priori reason to expect it to look like
only one of these possibilities, but choosing $\beta$ pretty much fixes the allowed 'shapes.'
This will be a focal point of our discussion.
Figure 1: Typical distributions, $K = 1000$; the three panels show $\beta = 0.0007$ ($S = 1.05$ bits), $\beta = 0.02$ ($S = 5.16$ bits), and $\beta = 1$ ($S = 9.35$ bits), plotted as $q_i$ versus bin number.
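The stick-breaking recipe of Eq. (5) is easy to sample from; a minimal sketch (equivalent to a Dirichlet draw, which NumPy also provides directly):

```python
# Sample one "typical" distribution from the Dirichlet prior: each q_i is a
# Beta(beta, (K - i) * beta) fraction of the probability mass not yet assigned.
import numpy as np

def sample_dirichlet_stick(K, beta, rng=np.random.default_rng()):
    q = np.zeros(K)
    remaining = 1.0
    for i in range(K - 1):
        frac = rng.beta(beta, (K - i - 1) * beta)  # second argument: (K - i)*beta, 1-based i
        q[i] = remaining * frac
        remaining -= q[i]
    q[K - 1] = remaining
    return q

# Entropy (bits) of one draw for K = 1000, beta = 0.02:
# q = sample_dirichlet_stick(1000, 0.02)
# S = -np.sum(q[q > 0] * np.log2(q[q > 0]))
```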
Even though the distributions look different, inference with all priors of Eq. (4) is similar [6, 7]:

$$\langle q_i \rangle_\beta = \frac{n_i + \beta}{N + \kappa}, \qquad \kappa = K\beta. \qquad (6)$$
This simple modification of Laplace's rule, Eq. (3), which allows us to vary the probability assigned to outcomes not yet seen, was first examined by Hardy and Lidstone
[9, 10]. Together with Laplace's formula, $\beta = 1$, this family includes the usual maximum likelihood estimator (MLE), $\beta \to 0$, that identifies probabilities with frequencies, as
well as the Jeffreys' or Krichevsky-Trofimov (KT) estimator, $\beta = 1/2$ [11, 12, 13], the
Schurmann-Grassberger (SG) estimator, $\beta = 1/K$ [14], and other popular choices.
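The whole family of Eq. (6) fits in one function; a short sketch covering the named special cases:

```python
# Dirichlet-family posterior mean, Eq. (6); beta -> 0 recovers the MLE.
import numpy as np

def dirichlet_estimate(counts, beta):
    n = np.asarray(counts, dtype=float)
    return (n + beta) / (n.sum() + n.size * beta)  # kappa = K * beta

# dirichlet_estimate(c, 1.0)          # Laplace
# dirichlet_estimate(c, 0.5)          # Jeffreys / Krichevsky-Trofimov
# dirichlet_estimate(c, 1.0 / len(c)) # Schurmann-Grassberger
```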
To understand why inference in the family of priors defined by Eq. (4) is unreliable, consider the entropy of a distribution drawn at random from this ensemble. Ideally we would
like to compute the whole a priori distribution of entropies,

$$P_\beta(S) = \int dq_1\, dq_2 \cdots dq_K\; P_\beta(\{q_i\})\; \delta\!\left(S + \sum_{i=1}^{K} q_i \log_2 q_i\right), \qquad (7)$$
but this is quite difficult. However, as noted by Wolpert and Wolf [6], one can compute
the moments of $P_\beta(S)$ rather easily. Transcribing their results to the present notation (and
correcting some small errors), we find:

$$\xi(\beta) \equiv \langle S[n_i = 0] \rangle_\beta = \psi_0(\kappa + 1) - \psi_0(\beta + 1), \qquad (8)$$

$$\sigma^2(\beta) \equiv \langle (\delta S)^2 [n_i = 0] \rangle_\beta = \frac{\beta + 1}{\kappa + 1}\,\psi_1(\beta + 1) - \psi_1(\kappa + 1), \qquad (9)$$

where $\psi_m(x) = (d/dx)^{m+1} \log_2 \Gamma(x)$ are the polygamma functions.
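Eqs. (8, 9) are two SciPy calls; a sketch, converting from SciPy's natural-log polygamma convention to the bits used here:

```python
# Prior mean and standard deviation of the entropy, Eqs. (8, 9).
# scipy's psi/polygamma use ln, so divide by ln 2 (and (ln 2)^2 for variance).
import numpy as np
from scipy.special import psi, polygamma

def prior_entropy_moments(beta, K):
    kappa = K * beta
    xi = (psi(kappa + 1) - psi(beta + 1)) / np.log(2)             # bits
    var = ((beta + 1) / (kappa + 1) * polygamma(1, beta + 1)
           - polygamma(1, kappa + 1)) / np.log(2) ** 2            # bits^2
    return xi, np.sqrt(max(var, 0.0))
```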
This behavior of the moments is shown in Fig. 2. We are faced with a striking observation: a priori distributions of entropies in the power-law priors are extremely peaked
for even moderately large $K$. Indeed, as a simple analysis shows, their maximum
standard deviation of approximately 0.61 bits is attained at $\beta \approx 1/K$, where $\xi(\beta) \approx 1/\ln 2$ bits. This has to be compared with the possible range of entropies, $[0, \log_2 K]$,
which is asymptotically large with $K$. Even worse, for any fixed $\beta$ and sufficiently large
$K$, $\xi(\beta) = \log_2 K - O(K^0)$, and $\sigma(\beta) \sim 1/\sqrt{\beta}$. Similarly, if $K$ is large, but $\kappa$ is
small, then $\xi(\beta) \sim \kappa$, and $\sigma(\beta) \sim \sqrt{\kappa}$. This paints a lively picture: varying $\beta$ between 0 and $\infty$ results in a smooth variation
of $\xi$, the a priori expectation of the entropy, from 0 to $S_{\max} = \log_2 K$. Moreover, for
large $K$, the standard deviation of $P_\beta(S)$ is always negligible relative to the possible range
of entropies, and it is negligible even absolutely for $\kappa \gg 1$ ($\beta \gg 1/K$). Thus a seemingly
innocent choice of the prior, Eq. (4), leads to a disaster: fixing $\beta$ specifies the entropy almost uniquely. Furthermore, the situation persists even after we observe some data: until
the distribution is well sampled, our estimate of the entropy is dominated by the prior!

Figure 2: $\xi(\beta)/\log_2 K$ and $\sigma(\beta)$ as functions of $\beta$ and $K$ (curves for $K = 10$, 100, 1000); gray bands are the region of $\pm\sigma(\beta)$ around the mean. Note the transition from the logarithmic to the linear scale at $\beta = 0.25$ in the insert.
Thus it is clear that all commonly used estimators mentioned above have a problem. While
they may or may not provide a reliable estimate of the distribution $\{q_i\}$,² they are definitely a poor tool to learn entropies. Unfortunately, often we are interested precisely in
these entropies or similar information-theoretic quantities, as in the examples (neural code,
language, and bioinformatics) we briefly mentioned earlier.
Are the usual estimators really this bad? Consider this: for the MLE ($\beta = 0$), Eqs. (8, 9) are
formally wrong since it is impossible to normalize $P_0(\{q_i\})$. However, the prediction that
$P_0(S) = \delta(S)$ still holds. Indeed, $S^{\mathrm{ML}}$, the entropy of the ML distribution, is zero even for
$N = 1$, let alone for $N = 0$. In general, it is well known that $S^{\mathrm{ML}}$ always underestimates
the actual value of the entropy, and the correction

$$S \approx S^{\mathrm{ML}} + \frac{K^*}{2N} + O\!\left(\frac{1}{N^2}\right) \qquad (10)$$
is usually used (cf. [14]). Here we must set $K^* = K - 1$ to have an asymptotically correct
result. Unfortunately, in an undersampled regime, $N \ll K$, this is a disaster. To alleviate
the problem, different authors have suggested determining the dependence $K^* = K^*(K)$ by
various (rather ad hoc) empirical [15] or pseudo-Bayesian techniques [16]. However, then
there is no principled way to estimate both the residual bias and the error of the estimator.
The situation is even worse for Laplace's rule, $\beta = 1$. We were unable to find any
results in the literature that would show a clear understanding of the effects of the prior
on the entropy estimate, $S^{\mathrm{L}}$. And these effects are enormous: the a priori distribution of
the entropy has $\sigma(1) \sim 1/\sqrt{K}$ and is almost $\delta$-like. This translates into a very certain,
but nonetheless possibly wrong, estimate of the entropy. We believe that this type of error
² In any case, the answer to this question depends mostly on the 'metric' chosen to measure
reliability. Minimization of bias, variance, or information cost (Kullback-Leibler divergence between
the target distribution and the estimate) leads to very different 'best' estimators.
(cf. Fig. 3) has been overlooked in some previous literature.
The Schurmann-Grassberger estimator, $\beta = 1/K$, deserves special attention. The variance of $P_\beta(S)$ is maximized near this value of $\beta$ (cf. Fig. 2). Thus the SG estimator results
in the most uniform a priori expectation of $S$ possible for the power-law priors, and consequently in the least bias. We suspect that this feature is responsible for a remark in Ref. [14]
that this $\beta$ was empirically the best for studying printed texts. But even the SG estimator is
flawed: it is biased towards (roughly) $1/\ln 2$, and it is still a priori rather narrow.
Summarizing, we conclude that simple power-law priors, Eq. (4), must not be used
to learn entropies when there is no strong a priori knowledge to back them up. On
the other hand, they are the only priors we know of that allow us to calculate $\langle q_i \rangle$,
$\langle S \rangle$, $\langle (\delta S)^2 \rangle$, ... exactly [6]. Is there a way to resolve the problem of peakedness of
$P_\beta(S)$ without throwing away their analytical ease? One approach would be to use

$$P_\beta^{\mathrm{flat}}(\{q_i\}) = \frac{P_\beta(\{q_i\})}{P_\beta(S[q_i])}\; P^{\mathrm{actual}}(S[q_i])$$

as a prior on $\{q_i\}$. This has the feature that the a priori distribution of $S$ deviates from uniformity only due to our actual knowledge
$P^{\mathrm{actual}}(S[q_i])$, but not in the way $P_\beta(S)$ does. However, as we already mentioned,
$P_\beta(S[q_i])$ is yet to be calculated.

Figure 3: Learning the $\beta = 0.02$ distribution from Fig. 1 with $\beta = 0.001$, 0.02, 1 ($\langle S \rangle_\beta - S$ plotted versus $N$, 10 to 10000). The actual error of the estimators is plotted; the error bars are the standard deviations of the posteriors. The 'wrong' estimators are very certain but nonetheless incorrect.
Another way to a flat prior is to write $P(S) = 1 = \int \delta(S - \xi)\, d\xi$. If we find a family of priors $P(\{q_i\};\, \text{parameters})$ that result in
a $\delta$-function over $S$, and if changing the parameters moves the peak across the whole range
of entropies uniformly, we may be able to use this. Luckily, $P_\beta(S)$ is almost a $\delta$-function!³
In addition, changing $\beta$ results in changing $\xi(\beta) = \langle S[n_i = 0] \rangle_\beta$ across the whole range
$[0, \log_2 K]$. So we may hope that the prior⁴

$$P(\{q_i\}; \beta) = \frac{1}{Z}\,\delta\!\left(1 - \sum_{i=1}^{K} q_i\right) \prod_{i=1}^{K} q_i^{\beta - 1}\; \frac{d\xi(\beta)}{d\beta}\; P(\beta) \qquad (11)$$

may do the trick and estimate entropy reliably even for small $N$, and even for distributions
that are atypical for any one $\beta$. We have less reason, however, to expect that this will give
an equally reliable estimator of the atypical distributions themselves.² Note the term $d\xi/d\beta$
in Eq. (11). It is there because $\xi$, not $\beta$, measures the position of the entropy density peak.
³ The approximation becomes not so good as $\beta \to 0$ since $\sigma(\beta)$ becomes $O(1)$ before dropping
to zero. Even worse, $P_\beta(S)$ is skewed at small $\beta$. This accumulates an extra weight at $S = 0$. Our
approach to dealing with these problems is to ignore them while the posterior integrals are dominated
by $\beta$'s that are far away from zero. This was always the case in our simulations, but is an open
question for the analysis of real data.
⁴ Priors that are formed as weighted sums of the different members of the Dirichlet family are
usually called Dirichlet mixture priors. They have been used to estimate probability distributions of,
for example, protein sequences [17]. Equation (11), an infinite mixture, is a further generalization,
and, to our knowledge, it has not been studied before.
Inference with the prior, Eq. (11), involves additional averaging over $\beta$ (or, equivalently, $\xi$), but is nevertheless straightforward. The a posteriori moments of the entropy are
$$\widehat{S^m} = \frac{\displaystyle\int d\xi\; \rho(\xi, \{n_i\})\; \langle S^m[n_i] \rangle_{\beta(\xi)}}{\displaystyle\int d\xi\; \rho(\xi, \{n_i\})}, \qquad \text{where} \qquad (12)$$

$$\rho(\xi, \{n_i\}) = P(\beta(\xi))\; \frac{\Gamma(\kappa(\xi))}{\Gamma(N + \kappa(\xi))} \prod_{i=1}^{K} \frac{\Gamma(n_i + \beta(\xi))}{\Gamma(\beta(\xi))}. \qquad (13)$$
Here the moments $\langle S^m[n_i] \rangle_{\beta(\xi)}$ are calculated at fixed $\beta$ according to the (corrected)
formulas of Wolpert and Wolf [6]. We can view this inference scheme as follows: first, one
sets the value of $\beta$ and calculates the expectation value (or other moments) of the entropy
at this $\beta$. For small $N$, the expectations will be very close to their a priori values due to the
peakedness of $P_\beta(S)$. Afterwards, one integrates over $\xi(\beta)$ with the density $\rho(\xi)$, which
includes our a priori expectations about the entropy of the distribution we are studying
[$P(\beta(\xi))$], as well as the evidence for a particular value of $\beta$ [the $\Gamma$-terms in Eq. (13)].
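As a rough numerical sketch of Eqs. (11-13) with a flat $P(\beta(\xi))$: weight each $\beta$ on a grid by the $\Gamma$-term evidence times $d\xi/d\beta$, and average $\xi(\beta)$ over the posterior. The grid choice is ours, and a serious implementation would also propagate the second moment; this only illustrates the mechanics:

```python
import numpy as np
from scipy.special import gammaln, psi, polygamma

def posterior_mean_entropy_bits(counts, K, n_grid=400):
    """counts: histogram over the K bins (empty bins contribute nothing)."""
    n = np.asarray(counts, dtype=float)
    N = n.sum()
    betas = np.logspace(-5, 2, n_grid)   # integration grid (an assumption)
    kappas = K * betas
    # log of the evidence terms in Eq. (13), dropping beta-independent factors
    log_rho = (gammaln(kappas) - gammaln(N + kappas)
               + np.array([np.sum(gammaln(n + b) - gammaln(b)) for b in betas]))
    xi = (psi(kappas + 1) - psi(betas + 1)) / np.log(2)      # Eq. (8), bits
    dxi = K * polygamma(1, kappas + 1) - polygamma(1, betas + 1)  # xi'(beta)
    # on a log-spaced grid, d(beta) is proportional to beta
    w = np.exp(log_rho - log_rho.max()) * dxi * betas
    return float(np.sum(w * xi) / np.sum(w))
```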
The crucial point is the behavior of the evidence. If it has a pronounced peak at some $\beta_{\mathrm{cl}}$,
then the integrals over $\beta$ are dominated by the vicinity of the peak, $\widehat{S}$ is close to $\xi(\beta_{\mathrm{cl}})$, and
the variance of the estimator is small. In other words, the data 'select' some value of $\beta$, much
in the spirit of Refs. [1] - [4]. However, this scenario may fail in two ways. First, there
may be no peak in the evidence; this will result in a very wide posterior and poor inference.
Second, the posterior density may be dominated by $\beta \to 0$, which corresponds to the MLE,
the best possible fit to the data, and is a discrete analog of overfitting. While all these
situations are possible, we claim that generically the evidence is well-behaved. Indeed,
while small $\beta$ increases the fit to the data, it also increases the phase space volume of all
allowed distributions and thus decreases the probability of each particular one [remember that
$\langle q_i \rangle_\beta$ has an extra $\beta$ counts in each bin, thus distributions with $q_i < \beta/(N + \kappa)$ are strongly
suppressed]. The fight between the 'goodness of fit' and the phase space volume should
then result in some non-trivial $\beta_{\mathrm{cl}}$, set by terms $\sim N$ in the exponent of the integrand.
Figure 4 shows how the prior, Eq. (11), performs on some of the many distributions
we tested. The left panel describes learning of distributions that are typical in the prior
$P_\beta(\{q_i\})$ and, therefore, are also likely in $P(\{q_i\}; \beta)$. Thus we may expect a reasonable
performance, but the real results exceed all expectations: for all three cases, the actual relative error drops to the 10% level at $N$ as low as 30 (recall that $K = 1000$, so we only
have $\sim 0.03$ data points per bin on average)! To put this in perspective, simple estimates
like the fixed-$\beta$ ones, the MLE, and the MLE corrected as in Eq. (10) with $K^*$ equal to the number of
nonzero $n_i$'s produce an error so big that it puts them off the axes until $N > 100$.⁵ Our
results have two more nice features: the estimator seems to know its error pretty well, and
it is almost completely unbiased.
One might be puzzled at how it is possible to estimate anything in a 1000-bin distribution
with just a few samples: the distribution is completely unspecified for low $N$! The point is
that we are not trying to learn the distribution (in the absence of additional prior information this would, indeed, take $N \gg K$) but to estimate just one of its characteristics. It is
less surprising that one number can be learned well with only a handful of measurements.
In practice the algorithm builds its estimate based on the number of coinciding samples
(multiple coincidences are likely only for small $\beta$), as in Ma's approach to entropy
estimation from simulations of physical systems [18].
What will happen if the algorithm is fed with data from a distribution $\{\tilde{q}_i\}$ that is strongly
atypical in $P(\{q_i\}; \beta)$? Since there is no $\{\tilde{q}_i\}$ in our prior, its estimate may suffer. Nonetheless, for any $\{\tilde{q}_i\}$, there is some $\beta$ which produces distributions with the same mean entropy
as $S[\tilde{q}_i]$. Such $\beta$ should be determined in the usual fight between the 'goodness of fit' and
⁵ More work is needed to compare our estimator to more complex techniques, like those in Refs. [15, 16].
[Figure 4 panels, $(\widehat{S} - S)/S$ versus $N$ (10 to 10000): (a) $\beta = 0.0007$, $S = 1.05$ bits; $\beta = 0.02$, $S = 5.16$ bits; $\beta = 1.0$, $S = 9.35$ bits. (b) $\beta = 0.02$, $K = 2000$ (half empty), $S = 5.16$ bits; Zipf's law $q_i \sim 1/i$, $K = 1000$, $S = 7.49$ bits; $q_i \sim 50 - 4(\ln i)^2$, $K = 1000$, $S = 4.68$ bits.]
Figure 4: Learning entropies with the prior Eq. (11) and $P(\beta) = 1$. The actual relative
errors of the estimator are plotted; the error bars are the relative widths of the posteriors.
(a) Distributions from Fig. 1. (b) Distributions atypical in the prior. Note that while $\widehat{S}$ may
be safely calculated as just $\langle S \rangle_{\beta_{\mathrm{cl}}}$, one has to do an honest integration over $\xi$ to get $\widehat{S^2}$ and
the error bars. Indeed, since $P_\beta(S)$ is almost a $\delta$-function, the uncertainty at any fixed $\beta$ is
very small (see Fig. 3).
the Occam factors, and the correct value of entropy will follow. However, there will be an
important distinction from the 'correct prior' cases. The value of $\beta$ indexes the available phase
space volumes, and thus the smoothness (complexity) of the model class [19]. In the case
of discrete distributions, smoothness is the absence of high peaks. Thus data with faster
decaying Zipf plots (plots of bins' occupancy versus occupancy rank $i$) are rougher. The priors
$P_\beta(\{q_i\})$ cannot account for all possible roughnesses. Indeed, they only generate distributions for which the expected number of bins with probability mass less than some $q$
is given by $\nu(q) = K B(q;\, \beta,\, \kappa - \beta)$, where $B$ is the familiar incomplete Beta function, as
in Eq. (5). This means that the expected rank ordering for small and large ranks is
$$q_i \approx 1 - \left[\frac{\beta B(\beta, \kappa - \beta)\,(K - 1)\, i}{K}\right]^{1/(\kappa - \beta)}, \qquad i \ll K, \qquad (14)$$

$$q_i \approx \left[\frac{\beta B(\beta, \kappa - \beta)\,(K - i + 1)}{K}\right]^{1/\beta}, \qquad K - i + 1 \ll K. \qquad (15)$$
In an undersampled regime we can observe only the first of these behaviors. Therefore,
any distribution with $q_i$ decaying faster (rougher) or slower (smoother) than Eq. (14) for
some $\beta$ cannot be explained well with a fixed $\beta_{\mathrm{cl}}$ for different $N$. So, unlike in the cases of
learning data that are typical in $P_\beta(\{q_i\})$, we should expect to see $\beta_{\mathrm{cl}}$ growing (falling) for
qualitatively smoother (rougher) cases as $N$ grows.
Table 1: $\beta_{\mathrm{cl}}$ for the solutions shown in Fig. 4(b).

      N     1/2 full    Zipf     rough
    units   x 10^-2   x 10^-1   x 10^-3
       10      1.7      1907      16.8
       30      2.2      0.99      11.5
      100      2.4      0.86      12.9
      300      2.2      1.36       8.3
     1000      2.1      2.24       6.4
     3000      1.9      3.36       5.4
    10000      2.0      4.89       4.5

Figure 4(b) and Tbl. 1 illustrate these points. First, we study the $\beta = 0.02$ distribution
from Fig. 1. However, we added 1000 extra bins, each with $q_i = 0$. Our estimator performs
remarkably well, and $\beta_{\mathrm{cl}}$ does not drift because the ranking law remains the same. Then
we turn to the famous Zipf's distribution, so common in Nature. It has $n_i \sim 1/i$, which
is qualitatively smoother than our prior allows. Correspondingly, we get an upwards drift
in $\beta_{\mathrm{cl}}$. Finally, we analyze a 'rough' distribution, which has $q_i \sim 50 - 4(\ln i)^2$, and
$\beta_{\mathrm{cl}}$ drifts downwards. Clearly, one would want to predict the dependence $\beta_{\mathrm{cl}}(N)$
analytically, but this requires calculation of the predictive information (complexity) for the
involved distributions [19] and is work for the future. Notice that the entropy estimator
for atypical cases is almost as good as for typical ones. A possible exception is the 100-1000
points for the Zipf distribution; they are about two standard deviations off. We saw
similar effects in some other 'smooth' cases also. This may be another manifestation of
an observation made in Ref. [4]: smooth priors can easily adapt to rough distributions, but
there is a limit to the smoothness beyond which rough priors become inaccurate.
To summarize, an analysis of the a priori entropy statistics in common power-law Bayesian
estimators revealed some very undesirable features. We are fortunate, however, that these
minuses can easily be turned into pluses, and the resulting estimator of entropy is precise,
knows its own error, and gives amazing results for a very large class of distributions.
Acknowledgements
We thank Vijay Balasubramanian, Curtis Callan, Adrienne Fairhall, Tim Holy, Jonathan
Miller, Vipul Periwal, Steve Strong, and Naftali Tishby for useful discussions. I. N. was
supported in part by NSF Grant No. PHY99-07949 to the Institute for Theoretical Physics.
References
[1] D. MacKay, Neural Comp. 4, 415-448 (1992).
[2] V. Balasubramanian, Neural Comp. 9, 349-368 (1997).
[3] W. Bialek, C. Callan, and S. Strong, Phys. Rev. Lett. 77, 4693-4697 (1996).
[4] I. Nemenman and W. Bialek, Advances in Neural Inf. Processing Systems 13, 287-293 (2001).
[5] J. Skilling, in Maximum entropy and Bayesian methods, J. Skilling ed. (Kluwer Academic Publ., Amsterdam, 1989), pp. 45-52.
[6] D. Wolpert and D. Wolf, Phys. Rev. E 52, 6841-6854 (1995).
[7] I. Nemenman, Ph.D. Thesis, Princeton (2000), ch. 3, http://arXiv.org/abs/physics/0009032.
[8] P. de Laplace, marquis de, Essai philosophique sur les probabilités (Courcier, Paris, 1814), trans. by F. Truscott and F. Emory, A philosophical essay on probabilities (Dover, New York, 1951).
[9] G. Hardy, Insurance Record (1889), reprinted in Trans. Fac. Actuaries 8 (1920).
[10] G. Lidstone, Trans. Fac. Actuaries 8, 182-192 (1920).
[11] H. Jeffreys, Proc. Roy. Soc. (London) A 186, 453-461 (1946).
[12] R. Krichevskii and V. Trofimov, IEEE Trans. Inf. Thy. 27, 199-207 (1981).
[13] F. Willems, Y. Shtarkov, and T. Tjalkens, IEEE Trans. Inf. Thy. 41, 653-664 (1995).
[14] T. Schurmann and P. Grassberger, Chaos 6, 414-427 (1996).
[15] S. Strong, R. Koberle, R. de Ruyter van Steveninck, and W. Bialek, Phys. Rev. Lett. 80, 197-200 (1998).
[16] S. Panzeri and A. Treves, Network: Comput. in Neural Syst. 7, 87-107 (1996).
[17] K. Sjölander, K. Karplus, M. Brown, R. Hughey, A. Krogh, I. S. Mian, and D. Haussler, Computer Applications in the Biosciences (CABIOS) 12, 327-345 (1996).
[18] S. Ma, J. Stat. Phys. 26, 221 (1981).
[19] W. Bialek, I. Nemenman, and N. Tishby, Neural Comp. 13, 2409-2463 (2001).
1,057 | 1,966 | Linking motor learning to function approximation: Learning in an unlearnable force field
Opher Donchin and Reza Shadmehr
Dept. of Biomedical Engineering
Johns Hopkins University, Baltimore, MD 21205
Email: opher@bme.jhu.edu, reza@bme.jhu.edu
Abstract
Reaching movements require the brain to generate motor commands that rely on an internal model of the task's dynamics. Here
we consider the errors that subjects make early in their reaching
trajectories to various targets as they learn an internal model. Using a framework from function approximation, we argue that the
sequence of errors should reflect the process of gradient descent. If
so, then the sequence of errors should obey hidden state transitions
of a simple dynamical system. Fitting the system to human data,
we find a surprisingly good fit accounting for 98% of the variance.
This allows us to draw tentative conclusions about the basis elements used by the brain in transforming sensory space to motor
commands. To test the robustness of the results, we estimate the
shape of the basis elements under two conditions: in a traditional
learning paradigm with a consistent force field, and in a random
sequence of force fields where learning is not possible. Remarkably,
we find that the basis remains invariant.
1 Introduction
It appears that in constructing the motor commands to guide the arm toward a
target, the brain relies on an internal model (IM) of the dynamics of the task that
it learns through practice [1]. The IM is presumably a system that transforms
a desired limb trajectory in sensory coordinates to motor commands. The motor
commands in turn create the complex activation of muscles necessary to cause
action. A major issue in motor control is to infer characteristics of the IM from the
actions of subjects.
Recently, we took a first step toward mathematically characterizing the IM's representation in the brain [2]. We analyzed the sequence of errors made by subjects
on successive movements as they reached to targets while holding a robotic arm.
The robot produced a force field and subjects learned to compensate for the field
(presumably by constructing an IM) and eventually produced straight movements
within the field. Our analysis sought to draw conclusions about the structure of
the IM from the sequence of errors generated by the subjects. For instance, in a
velocity-dependent force field (such as the fields we use), the IM must be able to
encode velocity in order to anticipate the upcoming force. We hoped that the effect
of errors in one direction on subsequent movements in other directions would give
information about the width of the elements which the IM used in encoding velocity.
For example, if the basis elements were narrow, then movements in a given direction
would result in little or no change in performance in neighboring directions. Wide
basis elements would mean appropriately larger effects.
We hypothesized that an estimate of the width of the basis elements could be calculated by fitting the time sequence of errors to a set of equations representing
a dynamical system. The dynamical system assumed that error in a movement
resulted from a difference between the IM's approximation and the actual environment, an assumption that has recently been corroborated [3]. The error in turn
changed the IM, affecting subsequent movements:
$$y^{(n)} = D_{k^{(n)}}\, F^{(n)} - z^{(n)}_{k^{(n)}}, \qquad z^{(n+1)}_l = z^{(n)}_l + B_{l,k^{(n)}}\, y^{(n)}, \quad l = 1, \dots, 8 \qquad (1)$$
Here $y^{(n)}$ is the error on the $n$th movement, made in direction $k^{(n)}$ (8 possible directions); $F^{(n)}$ is the actual force experienced in the movement, and it is scaled by an arm compliance $D$ which is direction dependent; and $z^{(n)}_k$ is the current output of the IM in the direction $k$. The difference between this output and reality results in movement errors. $B$ is a matrix characterizing the effect of errors in one direction on other directions. That is, $B$ can provide the generalization function we sought. By comparing the $B$ produced by a fit to human data to the $B$s produced from simulated data (generated using a dynamical simulation of arm movements), we found that the time sequence of the subjects' errors was similar to that generated by a simulation that represented the IM with gaussian basis elements that encoded velocity with a $\sigma$ = 0.08 m/sec.
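To make the state update concrete, here is a minimal NumPy sketch of the dynamical system in Eq. 1. The compliance values, the cosine-shaped generalization matrix, and the movement schedule are illustrative assumptions, not the values fitted to the data.

```python
import numpy as np

rng = np.random.default_rng(0)
n_dirs, n_moves = 8, 500

D = np.full(n_dirs, 0.5)                 # arm compliance per direction (illustrative)
angles = np.arange(n_dirs) * 2 * np.pi / n_dirs
# a generalization matrix that depends only on the angle between directions
B = 0.2 * np.cos(angles[:, None] - angles[None, :])

z = np.zeros(n_dirs)                     # state: expected (scaled) force per direction
errors = np.zeros(n_moves)
for n in range(n_moves):
    k = rng.integers(n_dirs)             # target direction of the n-th movement
    F = 1.0                              # force imposed on this movement (constant field)
    y = D[k] * F - z[k]                  # Eq. 1: error = scaled force minus expectation
    z += B[:, k] * y                     # Eq. 1: update all 8 states from this one error
    errors[n] = y
```

Run forward, the error sequence decays roughly exponentially toward zero, which is the qualitative behavior the model is meant to capture.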
But why might this dynamical system be a good model of trial-to-trial behavior in a
learning paradigm? Here we demonstrate that, under reasonable assumptions, behavior in accordance with Eq. 1 can be derived within the framework of functional
approximation, and that B is closely related to the basis functions in the approximation process. We find that this model gives accurate fits to human data, even
when the number of parameters in the model is drastically reduced. Finally, we
test the prediction of Eq. 1 that learning involves simple movement-by-movement
corrections to the IM, and that these variations depend only on the shape of the
basis which the IM uses for representation. Remarkably, when subjects perform
movements in a force field that changes randomly from one movement to the next,
the pattern of errors predicts a generalization function, and therefore a set of basis
elements, indistinguishable from the condition where the force field does not change.
That is, "an unlearnable task is learned in exactly the same way as a learnable task."
2 Approach
2.1 The Learning Process
In the current task, subjects grip the handle of a robot and make 10 cm reaching movements to targets presented visually. The robot produces a force field $F(\dot x)$ proportional and perpendicular to the velocity of the hand, such as $F = [\,0\;\;13;\ -13\;\;0\,]\,\dot x$ (with $F$ in Newtons and $\dot x$ in m/s). To simulate the process of learning an IM, we assume that the IM uses scalar valued basis functions that encode velocity, $g = [g_1(\dot x), \dots, g_n(\dot x)]^T$, so that the IM's expectation of force at a desired velocity is $\hat F(\dot x) = W g(\dot x)$, where $W$ is a $2 \times n$ matrix [4]. To move the hand to a target at direction $k$, a desired trajectory $\dot x_k(t)$ is given as input to the IM, which in turn produces as output $\hat F(\dot x_k)$ [5, 6]. As a result, forces are experienced $F(t)$ so that a force error can be calculated as $\tilde F(t) = F(t) - \hat F(\dot x_k(t))$. We adjust $W$ in the direction that minimizes a cost function $e$ which is simply the magnitude of the force error integrated over the entire movement:

$$e = \frac{1}{2}\int_0^T \tilde F(t)^T \tilde F(t)\, dt = \frac{1}{2}\int_0^T \big(F(t) - W g(t)\big)^T \big(F(t) - W g(t)\big)\, dt$$
Changing $W$ to minimize this value requires that we calculate the gradient of $e$ with respect to the weights and move $W$ in the direction opposite to the gradient:

$$(\nabla e)_{W_{ij}} = \frac{\partial e}{\partial W_{ij}} = -\int_0^T g_j(t)\, \tilde F_i(t)\, dt$$
$$W^{(n+1)} = W^{(n)} + \eta \int_{t=0}^{T} \tilde F^{(n)}(t)\, g\big(\dot x_{k^{(n)}}(t)\big)^T dt \qquad (2)$$

where $W^{(n)}$ means the $W$ matrix on the $n$th movement.
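A direct translation of the update in Eq. 2 is sketched below. The gaussian basis, its centers, and the trajectory interface are assumptions made for illustration; only the learning rate matches the value quoted for the simulations (η = 0.002).

```python
import numpy as np

def gaussian_basis(centers, sigma):
    """Return g(.) mapping a velocity (2-vector) to n basis activations."""
    def g(xdot):
        d2 = np.sum((centers - xdot) ** 2, axis=1)
        return np.exp(-d2 / (2 * sigma ** 2))
    return g

centers = np.random.randn(30, 2) * 0.3   # basis centers in velocity space (illustrative)
g = gaussian_basis(centers, sigma=0.08)
W = np.zeros((2, 30))                    # IM weights: expected force = W g(xdot)
eta = 0.002                              # learning rate used in the simulations

def update_W(W, xdot_traj, F_traj, dt):
    """One movement: accumulate the gradient step of Eq. 2 along the trajectory."""
    dW = np.zeros_like(W)
    for xdot, F in zip(xdot_traj, F_traj):
        F_err = F - W @ g(xdot)          # force error at this time step
        dW += np.outer(F_err, g(xdot)) * dt
    return W + eta * dW
```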
2.2 Deriving the Dynamical System
Our next step is to represent learning not in terms of weight changes, but in terms of changes in IM output, $\hat F$. We do this for an arbitrary point in velocity space $\dot x_0$ by multiplying both sides of Eq. 2 by $g(\dot x_0)$ with the result that:

$$\hat F^{(n+1)}(\dot x_0) = \hat F^{(n)}(\dot x_0) + \eta \int_{t=0}^{T} g(\dot x_{k^{(n)}})^T g(\dot x_0)\, \tilde F^{(n)}\, dt \qquad (3)$$

Further simplification will require approximation. Because we are considering a case where the actual force, $F(\dot x)$, is directly proportional to velocity, it is reasonable to make the approximation that, along a reasonably straight desired trajectory, the force error, $\tilde F(t)$, is simply proportional to the velocity, $\tilde F(\dot x_{k^{(n)}}) = \tilde F \cdot \dot x_{k^{(n)}}$. This means that the integral of Eq. 3 is actually of the form

$$\int_{t=0}^{T} \tilde F\, \dot x_{k^{(n)}}(t)\, g(\dot x_{k^{(n)}})^T g(\dot x_0)\, dt \qquad (4)$$

One more assumption is required to make this tractable. If we approximate the desired trajectory with a triangular function of time, and integrate only over the rising phase of the velocity curve (because the values are the same going up and going down), we can simplify the integral to an integral over speed, drawing out a constant: $2K \int_{\dot x = 0}^{\dot x_k(250\,\mathrm{ms})} G(\dot x, \dot x_0)\, d\dot x$. The integral has become a function of the values of $\dot x_{k^{(n)}}(250\,\mathrm{ms})$ and $\dot x_0$. Calling this function $B$, Eq. 4 becomes

$$\hat F^{(n+1)}(\dot x_0) = \hat F^{(n)}(\dot x_0) + B(\dot x_{k^{(n)}}, \dot x_0)\, \tilde F^{(n)} \qquad (5)$$

$\dot x_0$ is arbitrary. We restrict our attention to only those $\dot x_0$ that equal the peak velocity of the desired trajectory associated with a movement direction $l$. Since we have only eight different points in velocity space to consider, $\hat F$ can be considered an eight-valued vector, $\hat F_l$, rather than a function $\hat F(\dot x)$. Similarly, $B(\dot x_l, \dot x_k)$ becomes an $8 \times 8$ matrix, $B_{l,k}$. The simpler notation allows us to write Eq. 5 as

$$\hat F^{(n+1)}_l = \hat F^{(n)}_l + B_{l,k^{(n)}}\, \tilde F^{(n)}, \quad l = 1, \dots, 8 \qquad (6)$$
Figure 1: We performed simulations to test the approximation that displacement in arm motion at 250 msec toward a target at 10 cm is proportional to error in the force estimate made by the IM. A system of equations describing a controller, dynamics of a typical human arm, and robot dynamics [7] were simulated for a 500 msec minimum-jerk motion to 8 targets. The simulated robot produced one of 8 force fields scaled to 3 different magnitudes, while the controller remained naïve to the field. The errors in hand motion at 250 msec were fitted to the robot forces using a single compliance matrix. Lighter dashed lines are the displacement predictions of the model, darker solid lines are the actual displacement in the simulations' movements. [Panel labels: 12 N, 9 N, 6 N; scale bar: 3 cm.]
One more approximation is to assume that the force error $\tilde F$ in a given movement will be proportional to the position error in that movement when both are evaluated at 250 ms. This approximation is justified by the data presented in Fig. 1, which shows that the linear relationship holds for a wide range of movements and force errors. Finally, because the forces are perpendicular to the movement, we will disregard the error parallel to the direction of movement, reducing Eq. 6 to a scalar equation. We are now in a position to write our system of equations in its final form:
$$y^{(n)} = D_{k^{(n)}}\big(F^{(n)} - \hat F^{(n)}_{k^{(n)}}\big), \qquad \hat F^{(n+1)}_l = \hat F^{(n)}_l + B_{l,k^{(n)}}\, \tilde F^{(n)}, \quad l = 1, \dots, 8 \qquad (7)$$
Note that this is a system of nine equations: a single movement causes a change in all 8 directions for which the IM has an expectation. Let us now introduce a new variable $z^{(n)}_{k^{(n)}} \equiv D_{k^{(n)}} \hat F^{(n)}_{k^{(n)}}$, which represents the error (perpendicular displacement) that would have been experienced during this movement if we had not compensated for the expected field. With this substitution, Eq. 7 reduces to Eq. 1.
2.3 The shape of the generalization function B
Our task now is to give subjects a sequence of targets, observe the errors in their movements, and ask whether there are parameters for which the system of Eq. 7 gives a good fit. Given a sequence of $N$ movement directions, the forces imposed on each movement, and the resulting errors ($\{k, F, y\}^{(n)}$, $n = 1, \dots, N$), we search for values of $B_{l,k}$, $D_k$ and initial conditions ($\hat F^{(0)}_m$, $m = 1, \dots, 8$) that minimize the squared difference, summed over the movements, between the $y$ calculated in Eq. 7 and the measured errors. One concern is that, in fitting a model with 80 parameters (64 from the $B$ matrix, 8 from $D$, and 8 from $\hat F^{(0)}$), we are likely to be overfitting our data. We address this concern by making the assumption that the $B$ matrix has a special shape: $B_{l,k} = b(\angle \dot x_l \dot x_k)$. That is, each entry in the $B$ matrix is determined according to the difference in angle between the two directions represented. This assumption implies that $g(\dot x_k)^T g(\dot x_l)$ depends only on $\angle \dot x_k \dot x_l$. This reduces the $B$ matrix to 8 parameters, and reduces the number of parameters in the model to 24.
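A least-squares fit of this 24-parameter model can be sketched as follows; the optimizer, the parameterization of b by a signed angle index, and the initial guesses are our assumptions, not the procedure reported here.

```python
import numpy as np
from scipy.optimize import least_squares

def simulate_errors(params, k_seq, F_seq, n_dirs=8):
    """Run Eq. 7 forward and return the predicted error sequence y."""
    D = params[:n_dirs]
    b = params[n_dirs:2 * n_dirs]           # b(angle) at the 8 possible differences
    F_hat0 = params[2 * n_dirs:]
    # B_{l,k} = b(signed angle difference); a simplifying assumption
    idx = (np.arange(n_dirs)[:, None] - np.arange(n_dirs)[None, :]) % n_dirs
    B = b[idx]
    F_hat = F_hat0.copy()
    y_pred = np.zeros(len(k_seq))
    for n, (k, F) in enumerate(zip(k_seq, F_seq)):
        y_pred[n] = D[k] * (F - F_hat[k])
        F_hat += B[:, k] * (F - F_hat[k])
    return y_pred

def fit(k_seq, F_seq, y_obs):
    x0 = np.concatenate([np.ones(8), np.zeros(8), np.zeros(8)])  # 24 parameters
    res = least_squares(lambda p: simulate_errors(p, k_seq, F_seq) - y_obs, x0)
    return res.x
```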
[Figure 2 panels: error (mm) vs. movement number for simulated controllers with σ = 0.04, 0.08, 0.12, and 0.20 m/s and for subjects; normalized generalization functions vs. difference in angle, from −180° to 180°.]
Figure 2: We simulated a system of equations representing dynamics of robot, human arm, and adaptive controller for movements to a total of 192 targets spanning 8 directions of movement. The adaptive controller learned by applying gradient descent (η = 0.002) to learn a gaussian basis encoding arm velocity with a σ of 0.04, 0.08, 0.12, or 0.20 m/s. Errors, computed as displacement perpendicular to the direction of the target, were measured at 250 msec and are plotted for one direction of movement (45 deg) (a–d). Simulated data is the solid line and the fit is shown as a dashed line. Circles indicate error on no-field trials and triangles indicate error on fielded trials. The data for all 192 targets were then fit to Eq. 7 and the generalization matrix B was estimated (f). Data was also collected from 76 subjects, and fit with the model (e), and it gave a generalization function that is nearly identical to the generalization function of a controller using gaussians with a width of 0.08 m/s (g).
3 Results
We first tested the validity of our approach in an artificial learning system that
used a simulation of human arm and robot dynamics to learn an IM of the imposed
force field with gaussian basis elements. The result was a sequence of errors to a
series of targets. We fit Eq. 7 to the sequence of errors and found an estimate for
the generalization function (Fig. 2). As expected, when narrow basis elements are
used, the generalization function is narrow. We next fit the same model to data
that had been collected from 76 subjects and again found an excellent fit.
Plots f and g in Fig. 2 show the generalization function, B, as a function of the angle between $\dot x_k$ and $\dot x_l$. They demonstrate that errors in one direction affect movements in other directions, both in simulation errors and in the subjects' errors. The greatest effect of error is in the direction in which the movement was made. The immediately neighboring directions are also significantly affected, but the effect drops off with increasing distance. The generalization function which matched the human data was nearly identical to the one matching data produced by the simulation whose gaussians had σ = 0.08 m/sec.
[Figure 3 panels: error (mm) vs. movement number for the consistent and random fields, the corresponding model fits, and the estimated B matrix vs. difference in angle for Learn 1, Learn 2, Rand 1, and Rand 2.]
Figure 3: Fitting the model in Eq. 7 to a learning situation (a and c, 76 subjects) or a situation where subjects are presented with a random sequence of fields (b and d, 6 subjects) produces nearly identical models. a and b show errors (binned to 5 movements per data point), measured as perpendicular distance from a straight-line trajectory at 250 ms into the movement. Triangles are field A ($F = [\,0\;\;13;\ -13\;\;0\,]\,\dot x$) movements, wedges are field B ($F = [\,0\;-13;\ 13\;\;0\,]\,\dot x$), and filled circles are no field. The data is split into three sets of 192 movements. It can be seen that subjects in the learning paradigm learn to counteract the field, and show after-effects. Subjects in the random field do not improve on either field, and do not show after-effects. c and d show that the model fit both the learning paradigm and the random field paradigm. The fit is plotted for movements made to 90° during the first 192 movements following first exposure to the field (movements 193 through 384 in a and b). r² for the fits is 0.96 and 0.97, respectively. Fits to the last 192 movements in each paradigm gave r² of 0.96 and 0.98. Finally, in the bottom plot, we compare the generalization function, B, given by each fit. The normalized generalization function is nearly identical for all four sets. The size of the central peak is 0.21 for both sets of the consistent field and 0.19 and 0.14, respectively, for the two sets of the random field.
The most interesting aspect of the success we had using the simple system in Eq. 7 to explain human behavior is that the global learning process is characterized as the accretion of small changes in the state of the controller accumulated over a large number of movements. In order to challenge this surprising aspect of the model, we decided to apply it to data in which human subjects performed movements in fields that varied randomly from trial to trial. In this case, no cumulative learning is possible. The important question is whether the model will still be able to fit the data. If it does fit the data, then the question is whether the parameters of the fit are similar to those derived from the learning paradigm.
Fig. 3 is a comparison of fitting a model to a consistent field and a random field.
As seen in a and b of the figure, subjects are able to improve their performance
through learning in a consistent field but they do not improve in the random field.
However, as shown in in c and d, the model is able to fit the performance in both
fields. Although the fits of each type of field were performed independently, we can
see in e that the B matrices are nearly identical, which indicates that trial-by-trial learning was the same for both types of fields. In the second set of the random paradigm, it seems as though the adjustment of state may be slower. This raises
the possibility that the process of movement-by-movement adjustment of state is
gradually abandoned when it consistently fails to produce improvement. It is likely
that in this case subjects come to rely on a feedback driven controller which would
be unable to compensate for the errors generated early in the movement but would
allow them to more quickly adjust to those errors as information about the field
they are moving through is processed.
4 Conclusions
We hypothesized that the process of learning an internal model of the arm's dynamics may be similar to mechanisms of gradient descent in the framework of approximation theory. If so, then errors experienced in a given movement should
affect subsequent movements in a meaningful way, and perhaps as simply as those
predicted by the dynamical system in Eq. 7. These equations appear to fit both
simulations and actual human data exceedingly well, making strong predictions
about the shape of the basis with which the IM is apparently learned. Here we find
that the shape of the basis remains invariant despite radical changes in pattern of
errors, as exhibited when subjects were exposed to a random field as compared to
a stationary field. We conclude that even when the task is unlearnable and errors
approximate a flat line, the brain is attempting to learn with the same characteristic
basis which is used when the task is simple and errors exponentially approach zero.
References
[1] R. Shadmehr and F. A. Mussa-Ivaldi. Adaptive representation of dynamics during learning of a motor task. J. Neurosci., 14(5 Pt 2):3208–3224, 1994.
[2] K. Thoroughman and R. Shadmehr. Learning of action through adaptive combination of motor primitives. Nature, 407(6805):742–747, 2000.
[3] R. A. Scheidt, J. B. Dingwell, and F. A. Mussa-Ivaldi. Learning to move amid uncertainty. The Journal of Neurophysiology, 86(2):971–985, 2001.
[4] R. M. Sanner and M. Kosha. A mathematical model of the adaptive control of human arm motions. Biol. Cybern., 80(5):369–382, 1999.
[5] C. G. Atkeson. Learning arm kinematics and dynamics. Annu. Rev. Neurosci., 12:157–183, 1989.
[6] Y. Uno, M. Kawato, and R. Suzuki. Formation and control of optimal trajectory in human multijoint arm movement: minimum torque-change model. Biol. Cybern., 61(2):89–101, 1989.
[7] R. Shadmehr and H. H. Holcomb. Neural correlates of motor memory consolidation. Science, 277(5327):821–825, 1997.
Partially labeled classification with Markov random walks
Tommi Jaakkola
MIT AI Lab
Cambridge, MA 02139
tommi@ai.mit.edu
Martin Szummer
MIT AI Lab & CBCL
Cambridge, MA 02139
szummer@ai.mit.edu
Abstract
To classify a large number of unlabeled examples we combine a limited number of labeled examples with a Markov random walk representation over the unlabeled examples. The random walk representation exploits any low dimensional structure in the data in a robust, probabilistic
manner. We develop and compare several estimation criteria/algorithms
suited to this representation. This includes in particular multi-way classification with an average margin criterion which permits a closed form
solution. The time scale of the random walk regularizes the representation and can be set through a margin-based criterion favoring unambiguous classification. We also extend this basic regularization by adapting
time scales for individual examples. We demonstrate the approach on
synthetic examples and on text classification problems.
1 Introduction
Classification with partially labeled examples involves a limited dataset of labeled examples as well as a large unlabeled dataset. The unlabeled examples to be classified provide
information about the structure of the domain while the few labeled examples identify the
classification task expressed in this structure. A common albeit tacit assumption in this
context associates continuous high-density clusters in the data with pure classes. When
this assumption is appropriate, we only require one labeled point for each cluster to properly classify the whole dataset.
Data points are typically given relative to a global coordinate system with an associated
metric. While the metric may provide a reasonable local similarity measure, it is frequently
inadequate as a measure of global similarity. For example, the data may lie on a submanifold of the space, revealed by the density, and any global comparisons should preferably
be made along the manifold structure. Moreover, we often wish to assign higher similarity
values to examples contained in the same high-density regions or clusters implying that
comparisons ought to incorporate the density in addition to the manifold structure.
A representation of examples that satisfies these and other desiderata can be constructed
through a Markov random walk similarly to [3]. The resulting global comparisons of examples integrate a ?volume? of paths connecting the examples as opposed to shortest paths
that are susceptible to noise. The time scale of the Markov process (the number of transitions) will permit us to incorporate the cluster structure in the data at different levels
of granularity. We start by defining the representation and subsequently develop several
classification methods naturally operating on such representations.
2 Representation based on Markov random walks
We define a Markov random walk based on a locally appropriate metric [3]. The metric is
the basis for the neighborhood graph, associated weights on the edges, and consequently
the transition probabilities for the random walk. The new representation for the examples
can be obtained naturally from the random walk.
More formally, consider a set of points $\{x_1, \dots, x_N\}$ with a metric $d(x_i, x_k)$. We first construct a symmetrized $K$ nearest neighbor graph over the points and assign a weight $W_{ik} = \exp(-d(x_i, x_k)/\sigma)$ to each undirected edge $(i, k)$ in the graph. The weights are symmetric and $W_{ii} = 1$ as we include self-loops; $W_{ik} = 0$ for all non-neighbors. Note that the product of weights along a path in the graph relates to the total length of the path in the same way as the edge weights relate to the distances between the corresponding points. The one-step transition probabilities $p_{ik}$ from $i$ to $k$ are obtained directly from these weights:

$$p_{ik} = \frac{W_{ik}}{\sum_j W_{ij}} \qquad (1)$$

($p_{ik} = 0$ for any non-neighbor $k$). While the weights $W_{ik}$ are symmetric, the transition probabilities $p_{ik}$ generally are not, because the normalization varies from node to node. We use $P_{t|0}(k|i)$ to denote the $t$ step transition probabilities ($t$ here should be interpreted as a parameter, not as a random variable). If we organize the one step transition probabilities as a matrix $A$ whose $(i,k)$-th entry is $p_{ik}$, we can simply use a matrix power to calculate

$$P_{t|0}(k|i) = [A^t]_{ik} \qquad (2)$$

The matrix $A$ is row stochastic so that rows sum to 1.
We assume that the starting point for the Markov random walk is chosen uniformly at random, i.e., $P(i) = 1/N$. We can now evaluate the probability that the Markov process started from point $i$ given that it ended up in $k$ after $t$ steps. These conditional probabilities $P_{0|t}(i|k)$ define our new representation for the examples. In other words, each point $k$ is associated with a vector of conditional probabilities $P_{0|t}(i|k)$, $i = 1, \dots, N$. The points in this representation are close whenever they have nearly the same distribution over the starting states. This representation is crucially affected by the time scale parameter $t$. When $t \to \infty$, all the points become indistinguishable provided that the original neighborhood graph is connected. Small values of $t$, on the other hand, merge points in small clusters. In this representation $t$ controls the resolution at which we look at the data points (cf [3]).

The representation is also influenced by $K$, $\sigma$, and the local distance metric $d$, which together define the one-step transition probabilities (see section 4).
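The representation can be computed in a few lines of NumPy. This is a minimal sketch under the definitions above; the dense distance matrix and the column normalization for a uniform start are implementation choices of the sketch, suitable only for moderate N.

```python
import numpy as np

def walk_representation(X, K=5, sigma=0.6, t=10):
    """Return an N x N matrix whose column k is P_{0|t}(i|k): the distribution
    over starting points i given that the walk ends at k after t steps."""
    N = len(X)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)  # pairwise distances
    # symmetrized K-nearest-neighbor graph; the K+1 smallest include the self-loop
    nn = np.argsort(d, axis=1)[:, :K + 1]
    mask = np.zeros((N, N), dtype=bool)
    mask[np.repeat(np.arange(N), K + 1), nn.ravel()] = True
    mask |= mask.T                                             # symmetrize
    W = np.where(mask, np.exp(-d / sigma), 0.0)                # edge weights
    A = W / W.sum(axis=1, keepdims=True)                       # one-step transitions (Eq. 1)
    At = np.linalg.matrix_power(A, t)                          # t-step transitions (Eq. 2)
    # with a uniform start, P_{0|t}(i|k) is column k of A^t, normalized
    return At / At.sum(axis=0, keepdims=True)
```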
3 Parameter estimation for classification
Given a partially labeled data set $\{(x_1, \tilde y_1), \dots, (x_L, \tilde y_L), x_{L+1}, \dots, x_N\}$, we wish to classify the unlabeled points. The labels may come from two or more classes, and typically, the number of labeled points $L$ is a small fraction of the total points $N$.

Our classification model assumes that each data point has a label or a distribution $P(y|i)$ over the class labels. These distributions are unknown and represent the parameters to be estimated. Now given a point $k$, which may be labeled or unlabeled, we interpret the point as a sample from the $t$ step Markov random walk. Since labels are associated with the original (starting) points, the posterior probability of the label for point $k$ is given by

$$P(y|k) = \sum_i P(y|i)\, P_{0|t}(i|k) \qquad (3)$$

To classify the $k$-th point, we choose the class that maximizes the posterior: $\hat c_k = \arg\max_y P(y|k)$.
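Given the representation and the parameters $P(y|i)$, the posterior of Eq. 3 and the resulting classification rule amount to one matrix product; a minimal sketch:

```python
import numpy as np

def classify(P0t, P_y_given_i):
    """Posterior class probabilities for every point (Eq. 3).
    P0t[i, k] = P_{0|t}(i|k); P_y_given_i[i, c] = P(y=c | i)."""
    P_y_given_k = P0t.T @ P_y_given_i      # sum_i P(y|i) P_{0|t}(i|k)
    return P_y_given_k.argmax(axis=1), P_y_given_k
```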
We will now discuss two techniques for estimating the unknown parameters $P(y|i)$: maximum likelihood with EM, and maximum margin subject to constraints.
3.1 EM estimation
The estimation criterion here is the conditional log-likelihood of the labeled points

$$\sum_{k=1}^{L} \log P(\tilde y_k \mid k) \;=\; \sum_{k=1}^{L} \log \sum_{i=1}^{N} P(\tilde y_k \mid i)\, P_{0|t}(i|k) \qquad (4)$$

Since $P_{0|t}(i|k)$ are fixed for any specific $t$, this objective function is jointly concave in the free parameters and has a unique maximum value. The concavity also guarantees that this optimization is easily performed via the EM algorithm.

Let $P(i \mid k, \tilde y_k)$ be the soft assignment for component $i$ given $(k, \tilde y_k)$, i.e., $P(i \mid k, \tilde y_k) \propto P(\tilde y_k|i)\, P_{0|t}(i|k)$. The EM algorithm iterates between the E-step, where $P(i \mid k, \tilde y_k)$ are recomputed from the current estimates of $P(y|i)$, and the M-step, where we update $P(y|i) \propto \sum_{k:\, \tilde y_k = y} P(i \mid k, \tilde y_k)$ (see [1]).
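A sketch of this EM loop in NumPy follows. The initialization, iteration count, and the small constant guarding empty rows are assumptions added for the sketch.

```python
import numpy as np

def em_fit(P0t, labels, n_classes, n_iter=100):
    """Maximize the conditional log-likelihood of Eq. 4 by EM.
    labels: dict mapping labeled point index k -> class index."""
    N = P0t.shape[0]
    P_y = np.full((N, n_classes), 1.0 / n_classes)  # P(y|i), initialized uniform
    ks = np.array(list(labels.keys()))
    ys = np.array(list(labels.values()))
    R = P0t[:, ks]                                  # P_{0|t}(i|k) for the labeled k
    for _ in range(n_iter):
        # E-step: responsibilities P(i | k, y_k) ~ P(y_k|i) P_{0|t}(i|k)
        q = P_y[:, ys] * R
        q /= q.sum(axis=0, keepdims=True)
        # M-step: P(y|i) proportional to responsibility mass from points labeled y
        P_y = np.zeros_like(P_y)
        for c in range(n_classes):
            P_y[:, c] = q[:, ys == c].sum(axis=1)
        P_y /= P_y.sum(axis=1, keepdims=True) + 1e-12  # guard rows with no mass
    return P_y
```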
3.2 Margin based estimation
An alternative discriminative formulation is also possible, one that is more sensitive to individual classification decisions rather than the product of their likelihoods. Define the margin of the classifier on labeled point $k$ and class $c$ to be $\gamma_{kc} = P(y = \tilde y_k \mid k) - P(y = c \mid k)$. For correct classification, the margin should be nonnegative for all classes $c$ other than $\tilde y_k$, and be zero for the correct class: $\gamma_{k \tilde y_k} = 0$.
During training, find the parameters $P(y|i)$ that maximize the average margin on the labeled points, thereby forcing most of them to be correctly classified. Unbalanced classes are handled by the per class margin, and we obtain the linear program

$$\max_{P(y|i),\; \gamma_{kc}} \;\; \sum_{k=1}^{L} \sum_{c=1}^{C} \frac{\gamma_{kc}}{C\, n_{c(k)}} \qquad (5)$$

subject to

$$\gamma_{kc} = P(y = \tilde y_k \mid k) - P(y = c \mid k), \qquad P(y|k) = \sum_i P(y|i)\, P_{0|t}(i|k) \qquad (6)$$
$$\sum_y P(y|i) = 1, \qquad P(y|i) \ge 0 \qquad \forall\, i, y, c,\; k \le L \qquad (7)$$

Here $C$ denotes the number of classes and $n_{c(k)}$ gives the number of labeled points in the same class as $k$. The solution is achieved at extremal points of the parameter set and thus it is not surprising that the optimal parameters $P(y|i)$ reduce to hard values (0 or 1). The solution to this linear program can be found in closed form:

$$P(y = \hat c \mid i) = \begin{cases} 1 & \text{if } \hat c = \arg\max_c \frac{1}{n_c} \sum_{k:\, \tilde y_k = c} P_{0|t}(i|k) \\ 0 & \text{otherwise.} \end{cases} \qquad (8)$$
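The closed form of Eq. 8 is easy to compute directly; a sketch, where `labels` maps labeled point indices to class indices (a naming assumption of the sketch):

```python
import numpy as np

def margin_solution(P0t, labels, n_classes):
    """Closed-form maximizer of the average per-class margin (Eq. 8)."""
    N = P0t.shape[0]
    score = np.zeros((N, n_classes))
    for c in range(n_classes):
        ks = [k for k, y in labels.items() if y == c]
        score[:, c] = P0t[:, ks].mean(axis=1)  # (1/n_c) sum_{k: y_k=c} P_{0|t}(i|k)
    P_y = np.zeros((N, n_classes))
    P_y[np.arange(N), score.argmax(axis=1)] = 1.0  # hard 0/1 parameters
    return P_y
```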
[Figure 1 panels; legend: unlabeled, labeled +1, labeled −1.]
Figure 1: Top left: local connectivity for K = 5 neighbors. Below are classifications using Markov random walks for t = 3, 10, and 30 (top to bottom, left to right), estimated with average margin. There are two labeled points (large cross, triangle) and 148 unlabeled points, classified (small crosses, triangles) or unclassified (small dots).
The resulting posterior probabilities can also be written compactly as $P(y|k) = \sum_{i:\, P(y|i) = 1} P_{0|t}(i|k)$. The closed form solution for the label distributions facilitates an easy cross-validated setting of the various parameters involved in the example representations.
The large margin restricts the $V_\gamma$ dimension of the classifier (section 3.4) and encourages
generalization to correct classification of the unlabeled points as well. Note that the margins
are bounded and have magnitude less than 1, reducing the risk that any single point would
dominate the average margin. Moreover, this criterion maximizes a sum of probabilities,
whereas likelihood maximizes a product of probabilities, which is easily dominated by low
probability outliers.
Other margin-based formulations are also possible. For separable problems, we can maximize the minimum margin instead of the average margin. In the case of only two classes,
we then have only one global margin parameter $\gamma$ for all labeled points. The algorithm
focuses all its attention at the site of the minimum margin, which unfortunately could be
an outlier. If we tackled noisy or non-separable problems by adding a linear slack variable
to each constraint, we would arrive at the average margin criterion given above (because of
linearity).
Average- and min-margin training yields hard parameters 0 or 1. The risk of overfitting
is controlled by the smooth representation and can be regularized by increasing the time
parameter $t$. If further regularization is desired, we have also applied the maximum entropy
discrimination framework [2, 1] to bias the solution towards more uniform values. This
additional regularization has resulted in similar classification performance but adds to the
computational cost.
3.3 Examples
Consider an example (figure 1) of classification with Markov random walks. We are given 2 labeled and 148 unlabeled points in an intertwined two moons pattern. This pattern has a manifold structure where distances are locally but not globally Euclidean, due to the curved arms. Therefore, the pattern is difficult to classify for traditional algorithms using global metrics, such as SVM. We use a Euclidean local metric, $K = 5$ and $\sigma = 0.6$ (the box has the extent shown in the figure), and show three different timescales. At $t = 3$ the random walk has not connected all unlabeled points to some labeled point. The parameters for unconnected points do not affect likelihood or margin, so we assign them uniformly to both classes. The other points have a path to only one of the classes, and are therefore fully assigned to that class. At $t = 10$ all points have paths to labeled points but the Markov process has not mixed well. Some paths do not follow the curved high-density structure, and instead cross between the two clusters. When the Markov process is well-mixed at $t = 30$, the points are appropriately labeled. The parameter assignments are hard, but the class posteriors are weighted averages and remain soft.
3.4 Sample size requirements
Here we quantify the sample size that is needed for accurate estimation of the labels for the unlabeled examples. Since we are considering a transduction problem, i.e., finding labels for already observed examples, the sample size requirements can be assessed directly in terms of the representation matrix. As before, the probabilities $P_{0|t}(i|k)$ and $P_{0|t}(i|l)$ denote the conditional probabilities of having started the random walk in $i$ given that the process ends up in $k$, $l$, respectively. For simplicity, we consider a binary problem with classes 1 and $-1$, and let $w_i = P(y = 1 \mid i) - P(y = -1 \mid i)$. Classification decisions are then directly based on the sign of $f(k) = \sum_i w_i\, P_{0|t}(i|k)$.
Lemma 1 Consider the absolute distance between the representations of two points $i$ and $j$, $d_{ij} = \sum_k |P_{0|t}(k|i) - P_{0|t}(k|j)|$. The $V_\gamma$ dimension [5] of the binary transductive classifier $f$ is upper bounded by the number of connected components of a graph with $N$ nodes and adjacency matrix $A$, where $A_{ij} = 1$ if $d_{ij} < 2\gamma$ and zero otherwise.

Proof: To evaluate $V_\gamma$, a measure of the capacity of the classifier, we count the number of complete labelings $\{y_k\}$ consistent with the margin constraints $y_k f(k) \ge \gamma$ for all $k$ (labeled and unlabeled points). First, we establish that all examples $i$ and $j$ for which $d_{ij} < 2\gamma$ must have the same label. This follows directly from

$$|f(i) - f(j)| = \Big|\sum_k w_k \big(P_{0|t}(k|i) - P_{0|t}(k|j)\big)\Big| \qquad (9)$$
$$\le \sum_k \big|P_{0|t}(k|i) - P_{0|t}(k|j)\big| = d_{ij} \qquad (10)$$

(the inequality uses $|w_k| \le 1$), as this difference must be larger than $2\gamma$ for the discriminant functions to have different signs. Since any pair of examples for which $d_{ij} < 2\gamma$ share the same label, different labels can be assigned only to examples not connected by the $d_{ij} < 2\gamma$ relation, i.e., examples in distinct connected components.

This theorem applies more generally to any transductive classifier based on a weighted representation of examples so long as the weights are bounded in $[-1, 1]$.
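Lemma 1 suggests a simple computation of the capacity bound; a sketch using SciPy's connected-components routine (the dense pairwise L1 computation is an assumption suitable only for small N):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def v_gamma_bound(P0t, gamma):
    """Upper bound of Lemma 1: count connected components of the graph linking
    points whose representations differ by less than 2*gamma in L1 distance."""
    R = P0t.T                                   # row i = representation of point i
    d = np.abs(R[:, None, :] - R[None, :, :]).sum(axis=2)
    adj = csr_matrix(d < 2 * gamma)
    n_comp, _ = connected_components(adj, directed=False)
    return n_comp
```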
To determine the sample size needed for a given dataset and a desired classification margin $\gamma$, let $d_\gamma$ be the $V_\gamma$ dimension. With high probability we can correctly classify the unlabeled points given a number of labeled examples that grows with $d_\gamma$ [4]. This can also be helpful to determine the timescale $t$, since it is reflected in the $d_\gamma$: for example, $d_\gamma = N$ for $t = 0$ and $d_\gamma = 1$ for $t = \infty$ (for a connected graph), for the full range of $\gamma \in (0, 1]$.
[Figure 2 panels: left, average margin per class vs. t (Class Mac, Class Win); right, error vs. number of labeled examples (2 to 128) for Markov avg margin, Markov min margin, Markov max ent, and SVM on labeled data only.]
Figure 2: Windows vs. Mac text data. Left: Average per class margins for different t, 16 labeled documents. Right: Classification accuracy, between 2 and 128 labeled documents, for Markov random walks and best SVM.
4 Choices for K, σ, d, and t

The classifier is robust to rough heuristic choices of $K$, $\sigma$, and $d$, as follows. The local similarity measure $d$ is typically given (Euclidean distance). The local neighborhood size $K$ should be on the order of the manifold dimensionality, sufficiently small to avoid introducing edges in the neighborhood graph that span outside the manifold. However, $K$ must be large enough to preserve local topology, and ideally large enough to create a singly connected graph, yielding an ergodic Markov process. The local scale parameter $\sigma$ trades off the emphasis on shortest paths (low $\sigma$ effectively ignores distant points), versus volume of paths (high $\sigma$).

The smoothness of the random walk representation depends on $t$, the number of transitions. This is a regularization parameter akin to the kernel width of a density estimator. In the limiting case $t = 1$, we employ only the local neighborhood graph. As a special case, we obtain the kernel expansion representation [1] by $t = 1$, $K = N$, and squared Euclidean distance. If all points are labeled, we obtain the $K$-nearest neighbors classifier by $t = 1$, $\sigma \to \infty$. In the limiting case $t = \infty$ the representation for each node becomes a flat distribution over the points in the same connected component.
We can choose $t$ based on a few unsupervised heuristics, such as the mixing time to reach the stationary distribution, or dissipation of mutual information [3].

However, the appropriate $t$ depends on the classification task. For example, if classes change quickly over small distances, we want a sharper representation given by smaller $t$. Cross-validation could provide a supervised choice of $t$ but requires too many labeled points for good accuracy. Instead, we propose to choose the $t$ that maximizes the average margin per class, on both labeled and unlabeled data. Plot the average margin $\frac{1}{n_c} \sum_{k:\, c(k) = c} \gamma_{kc}$ for each class $c$, separately for labeled and unlabeled points to avoid issues of their relative weights. For labeled points, $c(k) = \tilde y_k$; for unlabeled points, $c(k)$ is the class assigned by the classifier. Figure 2 shows the average margin as a function of $t$, for a large text dataset (section 5). We want large margins for both classes simultaneously, so the $t$ at which both curves are large is a good choice, and it also gave the best cross-validation accuracy.
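A sketch of this margin-based selection of t, reusing the closed-form classifier sketched earlier; how the margins of labeled and unlabeled points are pooled here is our assumption, made to keep the sketch short.

```python
import numpy as np

def average_margin_per_class(P0t_t, labels, n_classes):
    """Average margin for each class at one time scale; unlabeled points
    use their own predicted class as c(k)."""
    P_y = margin_solution(P0t_t, labels, n_classes)   # from the earlier sketch
    post = P0t_t.T @ P_y                              # P(y | k) for all points
    c_k = post.argmax(axis=1)
    for k, y in labels.items():
        c_k[k] = y                                    # labeled points keep their labels
    margins = post[np.arange(len(c_k)), c_k][:, None] - post  # gamma_{kc}
    return np.array([margins[c_k == c].mean() for c in range(n_classes)])

# sweep t and keep a value at which both classes have large average margins:
# scores = {t: average_margin_per_class(walk_representation(X, t=t), labels, 2)
#           for t in range(1, 31)}
```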
4.1 Adaptive time scales
So far, we have employed a single global value of $t$. However, the desired smoothness may be different at different locations (akin to adaptive kernel widths in kernel density estimation). At the simplest, if the graph has multiple connected components, we can set an individual $t$ for each component. Ideally, each point has its own time scale, and the choice of time scale is optimized jointly with the classifier parameters. Here we propose a restricted version of this criterion where we find individual time scales $t_k$ for each unlabeled point but estimate a single timescale for labeled points as before.
The principle by which we select the time scales for the unlabeled points encourages the node identities to become the only common correlates for the labels. More precisely, define $P(y|k)$ for any unlabeled point $k$ as

$$P(y|k) = \frac{\sum_i P(y|i)\, P_{0|t_k}(i|k)}{\sum_i P_{0|t_k}(i|k)} \qquad (11)$$

where both summations are only over the labeled points. Moreover, let $P(y)$ be the overall probability over the labels across the unlabeled points, or

$$P(y) = \sum_k P(k)\, P(y|k) \qquad (12)$$

where $P(k)$ is uniform over the unlabeled points, corresponding to the start distribution. Note that $P(y)$ remains a function of all the individual time scales for the unlabeled points.
With these definitions, the principle for setting the time scales reduces to maximizing the mutual information between the label and the node identity:

$$\{t_k\} = \arg\max_{\{t_k\}} I(y; k) = \arg\max_{\{t_k\}} \big[ H(y) - H(y|k) \big] \qquad (13)$$

where $H(y)$ and $H(y|k)$ are the marginal and conditional entropies over the labels and are computed on the basis of $P(y)$ and $P(y|k)$, respectively. Note that the ideal setting of the time scales would be one that determines the labels for the unlabeled points uniquely on the basis of only the labeled examples while at the same time preserving the overall variability of the labels across the nodes. This would happen, for example, if the labeled examples fall on distinct connected components.
We optimize the criterion by an axis-parallel search, trying only discrete values of $t_k$ large enough that at least one labeled point is reached from each unlabeled point. We initialize $t_k$ to the smallest number of transitions needed to reach a labeled point. Empirically we have found that this initialization is close to the refined solution given by the objective. The objective is not concave, but separate random initializations generally yield the same answer, and convergence is rapid, requiring about 5 iterations.
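The objective of Eq. 13 for one candidate assignment of per-point time scales can be sketched as below; the dictionary interface `P0t_by_t` (representations precomputed for each t) is an assumption of the sketch, and the axis-parallel search would loop over candidate values of each t_k using this function as the score.

```python
import numpy as np

def mutual_information_objective(t_k, P0t_by_t, P_y_given_i, labeled, unlabeled):
    """I(y;k) of Eq. 13 for per-point time scales t_k (dict: point -> t)."""
    post = []
    for k in unlabeled:
        col = P0t_by_t[t_k[k]][labeled, k]       # restrict to labeled start points
        w = col / col.sum()
        post.append(w @ P_y_given_i[labeled])    # Eq. 11
    post = np.array(post)
    p_y = post.mean(axis=0)                      # Eq. 12: uniform over unlabeled points
    H = lambda p: -(p * np.log(p + 1e-12)).sum()
    H_cond = np.mean([H(p) for p in post])       # H(y|k)
    return H(p_y) - H_cond                       # I(y;k) = H(y) - H(y|k)
```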
5 Experimental results
We applied the Markov random walk approach to partially labeled text classification, with few labeled documents but many unlabeled ones. Text documents are represented by high-dimensional vectors but only occupy low-dimensional manifolds, so we expect Markov random walk to be beneficial. We used the mac and windows subsets from the 20 newsgroups dataset¹. There were 958 and 961 examples in the two classes, with 7511 dimensions. We estimated the manifold dimensionality to exceed 7, and a histogram of the distances to the 10 nearest neighbor is peaked at 1.3. We chose a Euclidean local metric, $K = 10$, which leads to a single connected component, and $\sigma = 0.6$ for a reasonable falloff.
The average margin criterion indicated a suitable $t$, and we also cross-validated and plotted the decay of mutual information over $t$. We trained both the EM and the margin-based formulations, using between 2 and 128 labeled points, treating all remaining points as unlabeled.
We trained on 20 random splits balanced for class labels, and tested on a fixed separate set
of 987 points. Results in figure 2 show that Markov random walk based algorithms have
¹ Processed as 20news-18827, http://www.ai.mit.edu/~jrennie/20Newsgroups/, removing rare words, duplicate documents, and performing tf-idf mapping.
a clear advantage over the best SVM using only labeled data (which had a linear kernel and $C = 3$), out of linear and Gaussian kernels, different kernel widths and values of $C$. The
advantage is especially noticeable for few labeled points, but decreases thereafter. The average margin classifier performs best overall. It can handle outliers and mislabeled points,
unlike the maximum min margin classifier that stops improving once 8 or more labeled
points are supplied.
The adaptive timescale criterion favors relatively small timescales for this dataset. For
90% of the unlabeled points, it picks the smallest timescale that reaches a labeled point,
which is at most 8 for any point. As the number of labeled points increases, shorter times
are chosen. For a few points, the criterion picks a maximally smooth representation (the highest timescale considered here, $t = 12$), possibly to increase the $H(y)$ criterion. However,
our preliminary experiments suggest that the adaptive time scales do not have a special
classification advantage for this dataset.
6 Discussion
The Markov random walk representation of examples provides a robust variable resolution
approach to classifying data sets with significant manifold structure and very few labels.
The average margin estimation criterion proposed in this context leads to a closed form
solution and strong empirical performance. When the manifold structure is absent or unrelated to the classification task, however, our method cannot be expected to derive any
particular advantage.
There are a number of possible extensions of this work. For example, instead of choosing a single overall resolution or time scale $t$, we may combine multiple choices. This can be done either by maintaining a few choices explicitly or by including all time scales in a parametric form, e.g., weighting the $t$-step transitions by an exponentially decaying factor as in diffusion kernels [7], but it is unclear whether the exponential decay is desirable. To facilitate continuum limit analysis (and establish better correspondence with the underlying density), we can construct the neighborhood graph on the basis of $\epsilon$-balls rather than $K$ nearest neighbors.
Acknowledgements
The authors gratefully acknowledge support from Nippon Telegraph & Telephone (NTT)
and NSF ITR grant IIS-0085836.
References
[1] Szummer, M.; Jaakkola, T. (2000) Kernel expansions with unlabeled examples. NIPS 13.
[2] Jaakkola, T.; Meila, M.; Jebara, T. (1999) Maximum entropy discrimination. NIPS 12.
[3] Tishby, N.; Slonim, N. (2000) Data clustering by Markovian relaxation and the Information Bottleneck Method. NIPS 13.
[4] Blum, A.; Chawla, S. (2001) Learning from Labeled and Unlabeled Data using Graph Mincuts. ICML.
[5] Alon, N. et al. (1997) Scale-sensitive Dimensions, Uniform Convergence, and Learnability. J. ACM, 44(4):615–631.
[6] Tenenbaum, J.; de Silva, V.; Langford, J. (2000) A Global Geometric Framework for Nonlinear Dimensionality Reduction. Science 290(5500):2319–2323.
[7] Kondor, I.; Lafferty, J. (2001) Diffusion kernels in continuous spaces. Tech report, CMU, to appear.
(Not) Bounding the True Error
John Langford
Department of Computer Science
Carnegie-Mellon University
Pittsburgh, PA 15213
jcl+@cs.cmu.edu
Rich Caruana
Department of Computer Science
Cornell University
Ithaca, NY 14853
caruana@cs.cornell.edu
Abstract
We present a new approach to bounding the true error rate of a continuous
valued classifier based upon PAC-Bayes bounds. The method first constructs a distribution over classifiers by determining how sensitive each
parameter in the model is to noise. The true error rate of the stochastic
classifier found with the sensitivity analysis can then be tightly bounded
using a PAC-Bayes bound. In this paper we demonstrate the method on artificial neural networks with results of an order of magnitude improvement vs. the best deterministic neural net bounds.
1 Introduction
In machine learning it is important to know the true error rate a classifier will achieve on
future test cases. Estimating this error rate can be surprisingly difficult. For example, all
known bounds on the true error rate of artificial neural networks tend to be extremely loose
and often result in the meaningless bound of "always err" (error rate = 1.0).
In this paper, we do not bound the true error rate of a neural network. Instead, we bound
the true error rate of a distribution over neural networks which we create by analysing one
neural network. (Hence, the title.) This approach proves to be much more fruitful than
trying to bound the true error rate of an individual network. The best current approaches [1][2] often require very large training sets before producing a nontrivial bound on the true error rate. We produce nontrivial bounds on the true error rate of a stochastic neural network with far fewer examples. A stochastic neural network is a neural network where each weight $w_i$ is perturbed by a gaussian with variance $\sigma_i^2$ every time it is evaluated.
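As a sketch of this definition, the following evaluates a stochastic two-layer network by redrawing every weight from a gaussian around its trained value on each forward pass; the architecture, the tanh hidden layer, and the per-weight standard deviations are illustrative assumptions, not the networks used in the experiments.

```python
import numpy as np

def stochastic_forward(x, weights, sigmas, n_samples=100, rng=None):
    """Estimate the output rate of a stochastic 2-layer net: every evaluation
    redraws each weight from a gaussian centered on the trained value."""
    rng = rng or np.random.default_rng()
    W1, W2 = weights          # trained weight matrices
    S1, S2 = sigmas           # per-weight noise std devs from the sensitivity analysis
    votes = np.zeros(n_samples)
    for s in range(n_samples):
        W1n = W1 + rng.normal(0.0, S1)   # perturb every weight independently
        W2n = W2 + rng.normal(0.0, S2)
        h = np.tanh(W1n @ x)
        votes[s] = float((W2n @ h) > 0)
    return votes.mean()
```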
Our approach uses the PAC-Bayes bound [5]. The approach can be thought of as a
redivision of the work between the experimenter and the theoretician: we make the experimenter work harder so that the theoretician's true error bound becomes much tighter. This "extra work" on the part of the experimenter is significant, but tractable, and the resulting
bounds are much tighter.
An alternative viewpoint is that the classification problem is finding a hypothesis with
a low upper bound on the future error rate. We present a post-processing phase for neural
networks which results in a classifier with a much lower upper bound on the future error
rate. The post-processing can be used with any artificial neural net trained with any optimization method; it does not require the learning procedure be modified, re-run, or even
that the threshold function be differentiable. In fact, this post-processing step can easily be
adapted to other learning algorithms.
David MacKay [4] has done significant work to make approximate Bayesian learning
tractable with a neural network. Our work here is complementary rather than competitive.
We exhibit a technique which will likely give nontrivial true error rate bounds for Bayesian
neural networks regardless of approximation or prior modeling errors. Verification of this
statement is work in progress.
The post-processing step finds a "large" distribution over classifiers, which has a small
average empirical error rate. Given the average empirical error rate, it is straightforward
to apply the PAC-Bayes bound in order to find a bound on the average true error rate. We
find this large distribution over classifiers by performing a simple noise sensitivity analysis
on the learned model. The noise model allows us to generate a distribution of classifiers
with a known, small, average empirical error rate. In this paper we refer to the distribution
of neural nets that results from this noise analysis as a stochastic neural net model.
Why do we expect the PAC-Bayes bound to be a significant improvement over standard
covering number and VC bound approaches? There exist learning problems for which
the difference between the lower bound and the PAC-Bayes upper bound is tight up to $O\big(\sqrt{\ln m / m}\big)$, where $m$ is the number of training examples. This is superior to the guarantees
which can be made for typical covering number bounds where the gap is, at best, known
up to an (asymptotic) constant. The guarantee that PAC-Bayes bounds are sometimes quite
tight encourages us to apply them here.
The next sections will:
1. Describe the bounds we will compare.
2. Describe our algorithm for constructing a distribution over neural networks.
3. Present experimental results.
2 Theoretical setup
We will work in the standard supervised batch learning setting. This setting starts with the
assumption that all examples are (input, output) pairs (x, y) drawn from some fixed (unknown)
distribution D. The output y is drawn from the space Y = {0, 1}, and the input
space X is arbitrary. The goal of machine learning is to use a sample set S of m pairs to find
a classifier c, which maps the input space to the output space and has a small true error,
$e_D(c) = \Pr_{(x,y)\sim D}[c(x) \ne y]$. Since the distribution is unknown, the true error rate is not
observable. However, we can observe the empirical error rate,
$\hat{e}_S(c) = \frac{1}{m}\sum_{i=1}^{m} I[c(x_i) \ne y_i]$.
Now that the basic quantities of interest are defined, we will first present a modern neural network bound, then specialize the PAC-Bayes bound to a stochastic neural network. A
stochastic neural network is simply a neural network where each weight in the neural network is drawn from some distribution whenever it is used. We will describe our technique
for constructing the distribution of the stochastic neural network.
2.1 Neural Network bound
We will compare a specialization of the best current neural network true error rate bound
[2] with our approach. The neural network bound is described in terms of the following
parameters:
1. A margin, gamma > 0.
2. An arbitrary function phi (unrelated to the neural network sigmoid function) defined by
phi(x) = 1 if x <= 0, phi(x) = 0 if x >= gamma, and linear in between.
3. A_l, an upper bound on the sum of the magnitudes of the weights in the l-th layer of
the neural network.
4. L_l, a Lipschitz constant which holds for the l-th layer of the neural network. A
Lipschitz constant is a bound on the magnitude of the derivative.
5. n, the size of the input space.
With these parameters defined, we get the following bound.
Theorem 2.1 (2 layer feed-forward Neural Network true error bound) There exists a
universal constant K such that, with probability 1 - delta over the draw of the m training
examples, every 2-layer feed-forward network c satisfies

$$e_D(c) \;\le\; \hat{e}_\gamma(c) \;+\; K\left(\frac{A_1 A_2 L_1 L_2 \sqrt{n}}{\gamma\sqrt{m}} \;+\; \sqrt{\frac{\ln(1/\delta)}{m}}\right)$$

where $\hat{e}_\gamma(c) = \frac{1}{m}\sum_{i=1}^{m}\phi(\text{margin of } c \text{ on example } i)$ is the empirical margin error.
Proof: Given in [2].
The theorem is actually only given up to a universal constant; the value we use for that
constant is just an educated guess. The neural network true error bound above is
(perhaps) the tightest known bound for general feed-forward neural networks, and so it is
the natural bound to compare with.
This 2 layer feed-forward bound is not easily applied in a tight manner because we cannot
calculate a priori what our weight bound should be. This can be patched up using the
principle of structural risk minimization. In particular, we can state the bound for
$A_l = c^k$, where k is some non-negative integer and c is a constant. If the k-th bound is
assigned confidence parameter $\delta_k = \delta \, 2^{-(k+1)}$, then all bounds will hold simultaneously with
probability at least $1 - \delta$, since $\sum_{k \ge 0} \delta \, 2^{-(k+1)} = \delta$.
Applying this approach to the values of both $A_1$ and $A_2$, we get the following theorem:
Theorem 2.2 (2 layer feed-forward Neural Network true error bound) With probability
$1 - \delta$, the bound of Theorem 2.1 holds simultaneously for all pairs of weight bounds
$A_1 = c^{k_1}$, $A_2 = c^{k_2}$ ($k_1, k_2$ non-negative integers), with $\delta$ replaced by
$\delta \, 2^{-(k_1+1)} \, 2^{-(k_2+1)}$ in each instance.
Proof: Apply the union bound to all possible values of $A_1$ and $A_2$ as discussed above.
In practice, we will report the value of the tightest applicable bound over all choices of
$k_1$, $k_2$ and the margin $\gamma$.
2.2 Stochastic Neural Network bound
Our approach will start with a simple refinement [3] of the original PAC-Bayes bound [5].
We will first specialize this bound to stochastic neural networks and then show that the use
of this bound in conjunction with a post-processing algorithm results in a much tighter true
error rate upper bound.
First, we will need to define some parameters of the theorem.
1. Q is a distribution over the hypotheses which can be found in an example-dependent manner.
2. P is a distribution over the hypotheses which is chosen a priori, without dependence on the examples.
3. $e_Q = E_{c \sim Q}\, e_D(c)$ is the true error rate of the stochastic hypothesis which, in
any evaluation, draws a hypothesis c from Q and outputs c(x).
4. $\hat{e}_Q = E_{c \sim Q}\, \hat{e}_S(c)$ is the average empirical error rate of the same stochastic
hypothesis.
Now, we are ready to state the theorem.
Theorem 2.3 (PAC-Bayes Relative Entropy Bound) For all priors P, with probability
$1 - \delta$ over the draw of the m training examples, for all posteriors Q simultaneously:

$$\mathrm{KL}(\hat{e}_Q \,\|\, e_Q) \;\le\; \frac{\mathrm{KL}(Q \,\|\, P) + \ln\frac{m+1}{\delta}}{m}$$

where $\mathrm{KL}(Q\|P)$ is the Kullback-Leibler divergence between the distributions Q and P,
and $\mathrm{KL}(\hat{e}_Q \| e_Q)$ is the KL divergence between a coin of bias $\hat{e}_Q$ and a coin of bias $e_Q$.
Proof: Given in [3].
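As a concrete illustration of how Theorem 2.3 is used numerically, the following is a minimal sketch (not the authors' code; the example numbers are hypothetical): given the empirical stochastic error, the KL divergence between posterior and prior, the sample size m, and the confidence delta, we invert the binomial KL divergence by binary search to obtain the largest true error rate consistent with the bound.

```python
import math

def kl_bernoulli(q, p):
    """KL divergence between coins of bias q and p, in nats."""
    eps = 1e-12
    q = min(max(q, eps), 1 - eps)
    p = min(max(p, eps), 1 - eps)
    return q * math.log(q / p) + (1 - q) * math.log((1 - q) / (1 - p))

def pac_bayes_true_error(e_hat, kl_qp, m, delta):
    """Largest e with kl_bernoulli(e_hat, e) <= (kl_qp + ln((m+1)/delta)) / m."""
    budget = (kl_qp + math.log((m + 1) / delta)) / m
    lo, hi = e_hat, 1.0
    for _ in range(100):               # binary search on the upper tail
        mid = 0.5 * (lo + hi)
        if kl_bernoulli(e_hat, mid) > budget:
            hi = mid
        else:
            lo = mid
    return hi

# Hypothetical example: e_hat = 0.2, KL(Q||P) = 10 nats, m = 100 examples.
print(pac_bayes_true_error(0.2, 10.0, 100, 0.05))
```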
We need to specialize this theorem for application to a stochastic neural network with a
choice of the "prior". Our "prior" will be zero on all neural net structures other than the
one we train and a multidimensional isotropic Gaussian on the values of the weights in our
neural network. The multidimensional Gaussian will have a mean of zero and a variance in
each dimension of $\sigma^2$. This choice is made for convenience and happens to work.
The optimal value of $\sigma$ is unknown and dependent on the learning problem, so we will
wish to parameterize it in an example-dependent manner. We can do this using the same
trick as for the original neural net bound: use a sequence of bounds, one for each variance
$\sigma_k^2$ indexed by a non-negative integer k, and for the k-th bound set $\delta_k = \delta \, 2^{-(k+1)}$. Now,
the union bound will imply that all bounds hold simultaneously with probability at least
$1 - \delta$.
Now, assuming that our "posterior" Q is also defined by a multidimensional Gaussian
with the mean and variance in each dimension defined by the weights $w_i$ and variances $\sigma_i^2$,
we can specialize to the following corollary:
Corollary 2.4 (Stochastic Neural Network bound) Let N be the number of weights in a
neural net, $w_i$ be the i-th weight and $\sigma_i^2$ be the variance of the i-th weight. Then, with
probability at least $1 - \delta$, for all k simultaneously we have

$$\mathrm{KL}(\hat{e}_Q \,\|\, e_Q) \;\le\; \frac{\frac{1}{2}\sum_{i=1}^{N}\left[\frac{\sigma_i^2 + w_i^2}{\sigma_k^2} - 1 + \ln\frac{\sigma_k^2}{\sigma_i^2}\right] \;+\; \ln\frac{m+1}{\delta_k}}{m} \qquad (1)$$
Proof: Analytic calculation of the KL divergence between two multidimensional Gaussians and the union bound applied for each value of k.
We will choose reasonable default values for the constants that define the sequence $\sigma_k$.
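The complexity term in the numerator of Eq. 1 is just the analytic KL divergence between two Gaussians. A minimal sketch (notation as above; not the authors' code) of its computation:

```python
import numpy as np

def gaussian_kl(w, sigma2, sigma2_prior):
    """KL( N(w, diag(sigma2)) || N(0, sigma2_prior * I) ) in nats.

    w:            array of posterior means (the trained weights)
    sigma2:       array of per-weight posterior variances
    sigma2_prior: scalar variance of the isotropic prior
    """
    ratio = sigma2 / sigma2_prior
    return 0.5 * np.sum(ratio + w**2 / sigma2_prior - 1.0 - np.log(ratio))

# Hypothetical example: 50 weights, unit posterior variances, prior variance 4.
w = np.random.randn(50)
print(gaussian_kl(w, np.ones(50), 4.0))
```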
One more step is necessary in order to apply this bound. The essential difficulty is
evaluating $\hat{e}_Q$. This quantity is observable, although calculating it to high precision is
difficult. We will avoid the need for a direct evaluation by a Monte Carlo evaluation and
a bound on the tail of the Monte Carlo evaluation. Let $\bar{e}_Q$ be the
observed rate of failure of a random hypothesis drawn according to Q and applied to a
random training example. Then, the following simple bound holds:
Theorem 2.5 (Sample Convergence Bound) For all distributions Q, for all sample sets S,
with probability $1 - \delta$ over the n evaluations,

$$\mathrm{KL}(\bar{e}_Q \,\|\, \hat{e}_Q) \;\le\; \frac{\ln(1/\delta)}{n}$$

where n is the number of evaluations of the stochastic hypothesis.
Proof: This is simply an application of the Chernoff bound for the tail of a Binomial
where a "head" occurs when an error is observed and the bias is $\hat{e}_Q$.
In order to calculate a bound on the expected true error rate, we will first bound the expected
empirical error rate $\hat{e}_Q$ with confidence $\delta/2$, and then bound the expected true error rate $e_Q$
with confidence $\delta/2$, using our bound on $\hat{e}_Q$. Since the total probability of failure is only $\delta$,
our bound will hold with probability $1 - \delta$. In practice, we use repeated evaluations
of the empirical error rate of the stochastic neural network.
2.3 Distribution Construction algorithm
One critical step is missing in the description: how do we calculate the multidimensional
Gaussian Q? The variance of the posterior Gaussian needs to be dependent on each weight
in order to achieve a tight bound, since we want any "meaningless" weights to not contribute
significantly to the overall sample complexity. We use a simple greedy algorithm to find
the appropriate variance in each dimension.
1. Train a neural net on the examples.
2. For every weight $w_i$, search for the variance $\sigma_i^2$ which reduces the empirical
accuracy of the stochastic neural network by some fixed target percentage (we use
1% or 5%, depending on the problem; see section 3) while holding all other weights
fixed. (A code sketch of this search and of the rescaling in Step 3 follows Step 4 below.)
[Figure 1 plots: error (log scale) and error bounds versus pattern presentations; curves for SNN bound, NN bound, SNN/NN train error, and SNN/NN test error.]
Figure 1: Plot of measured errors and error bounds for the neural network (NN) and the
stochastic neural network (SNN) on the synthetic problem. The training set has 100 cases
and the reduction in empirical error is 5%. Note that a true error bound of "100" (visible
in the graph on the left) implies that substantially more examples are required in order to
make a nonvacuous bound. The graph on the right expands the vertical scale by excluding
the poor true error bound that has error above 100. The curves for NN and SNN are qualitatively similar on the train and test sets. As expected, the SNN consistently performs 5%
worse than the NN on the train set (easier to see in the graph on the right). Surprisingly,
the SNN performs worse than the NN by less than 5% on the test sets. Both NN and SNN
exhibit overfitting after about 6000-12000 pattern presentations (600-1200 epochs). The
shape of the SNN bound roughly mimics the shape of the empirically measured true error
(this is more visible in the graph on the right) and thus might be useful for indicating where
the net begins overfitting.
3. The stochastic neural network defined by the $\sigma_i$ found in Step 2 will generally have
a too-large empirical error. Therefore, we calculate a global multiplier $\lambda$ such that the
stochastic neural network defined by $\lambda\sigma_i$ decreases the empirical accuracy
by only the same amount (absolute error rate) used in Step 2.
4. Then, we evaluate the empirical error rate of the resulting stochastic neural net
by repeatedly drawing samples from the stochastic neural network.
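A minimal sketch of the greedy search in Step 2 and the cap on it; eval_error is a hypothetical helper (not part of the paper) returning the Monte Carlo empirical error of the network under the given per-weight noise scales:

```python
import numpy as np

def search_sigmas(weights, eval_error, target_drop=0.05, factor=1.5):
    """For each weight in isolation, grow its noise scale until the
    stochastic net's empirical error rises by target_drop."""
    base_err = eval_error(weights, np.zeros_like(weights))
    sigmas = np.zeros_like(weights)
    for i in range(len(weights)):
        sigma = 1e-3
        trial = np.zeros_like(weights)
        for _ in range(40):                  # cap the geometric search
            trial[i] = sigma
            if eval_error(weights, trial) > base_err + target_drop:
                break
            sigma *= factor
        sigmas[i] = sigma
    return sigmas
```

Step 3's global multiplier can then be found by a one-dimensional search over lambda applied to the whole sigma vector, using the same eval_error helper.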
3 Experimental Results
How well can we bound the true error rate of a stochastic neural network? The answer is
much better than we can bound the true error rate of a neural network.
We use two datasets to empirically evaluate the quality of the new bound. The first is a
synthetic dataset which has 25 input dimensions and one output dimension. Most of these
dimensions are useless: simply random numbers drawn from a standard Gaussian. One of
the 25 input dimensions is dependent on the label. First, the label y is drawn uniformly
at random, then the special dimension is drawn from a unit-variance Gaussian whose mean
is determined by y. Note that this learning problem can not be solved perfectly because
some examples will be drawn from the tails where the Gaussians overlap. The "ideal"
neural net to use in solving this synthetic problem is a single node perceptron. We will
instead use a 2 layer neural net with 2 hidden nodes using the sigmoid transfer function.
This overly complex neural net will result in the potential for significant overfitting, which
makes the bound prediction problem interesting. It is also somewhat more "realistic" if the
neural net structure does not exactly match the learning problem.
The second dataset is the ADULT problem from the UCI Machine Learning Repository. We use a 2 layer neural net with 2 hidden units for this problem as well because
preliminary experiments showed that nets this small can overfit the ADULT dataset if the
training sample is small.
To keep things challenging, we use just 100-200 examples in our experiments. As
[Figure 2 plots: error (log scale) and error bounds versus pattern presentations on the ADULT dataset; curves for SNN bound, NN bound, SNN/NN train error, and SNN/NN test error.]
Figure 2: Plot of measured errors and error bounds for the neural network (NN) and the
stochastic neural network (SNN) on the UCI ADULT dataset. These graphs show the
results obtained using a 1% reduction in empirical error instead of the 5% reduction used
in Figure 1. The training sample size for this problem is 200 cases. NN and SNN exhibit
overfitting after approximately 12000 pattern presentations (600 epochs). As in Figure 1, a
true error bound of "100" implies that substantially more examples are required in order to
make a nonvacuous bound. The graph on the right expands the vertical scale by excluding
the poor true error bound.
we will see in Figures 1 and 2, constructing a nonvacuous bound for a continuous hypothesis space with only 100-200 examples is quite difficult. The conventional bounds are
hopelessly loose.
Figure 1 shows the results for the synthetic problem. For this problem we use 100
training cases and a 5% reduction in empirical error. The results for the ADULT problem
are presented in Figure 2. For this problem we use 200 training cases and a 1% reduction
in empirical error. Experiments performed on these problems using somewhat smaller and
larger training samples yielded similar results. The choice of reduction in empirical error
is somewhat arbitrary. We see qualitatively similar results if we switch to a 1% reduction
for the synthetic problem and a 5% reduction for the ADULT problem.
There are several things worth noting about the results in the two figures.
1. The SNN upper bounds are 2-3 orders of magnitude lower than the NN upper
bounds. While not as tight as might be desired, the SNN upper bounds are orders
of magnitude better and are not vacuous.
2. The SNNs perform somewhat better than expected. In particular, on the synthetic
problem the SNN true error rate is at most 5% worse than the true error rate of
the NN (true error rates are estimated using large test sets). This is surprising
considering that we fixed the difference in empirical error rates at 5% for the
synthetic problem. Similarly, on the ADULT problem we observe that the difference in true
error rates between the SNN and NN typically is only about 0.5%, about half of
the target difference of 1%. This is good because it suggests that we do not lose
as much accuracy as might be expected when creating the SNN.
3. On both test problems, the shape of the SNN bound is somewhat similar to the
shape of the true error rate. In particular, the local minima in the SNN bound
occur roughly where the local minima in the true error rates occur. The SNN
bound may weakly predict the overfitting points of the SNN and NN nets.
The comparison between the neural network bound and the stochastic neural network
bound is not quite "fair" due to the form of the bound. In particular, the stochastic neural
network bound can never return a value greater than "always err". This implies that when
the bound is near the value of "always err", it is difficult to judge how rapidly extra examples will
improve the stochastic neural network bound. We can judge the sample complexity of
the stochastic bound by plotting the value of the numerator in equation 1. Figure 3 plots
the complexity versus the number of pattern presentations in training. In this figure, we
[Figure 3 plot: complexity (log scale) versus pattern presentations.]
Figure 3: We plot the "complexity" of the stochastic network model (numerator of equation
1) vs. training epoch. Note that the complexity increases with more training, as expected,
and stays below 100, implying nonvacuous bounds on a training set of size 100.
observe the expected result: the "complexity" (numerator of equation 1) increases with
more training and is significantly less than the number of examples (100).
The stochastic bound is a radical improvement on the neural network bound but it is not
yet a perfectly tight bound. Given that we do not have a perfectly tight bound, one important consideration arises: does the minimum of the stochastic bound predict the minimum
of the true error rate (as predicted by a large holdout dataset). In particular, can we use
the stochastic bound to determine when we should cease training? The stochastic bound
depends upon (1) the complexity, which increases with training time, and (2) the training error, which decreases with training time. This dependence results in a minimum which occurs
at approximately 12000 pattern presentations for both of our test problems. The point of
minimal true error (for the stochastic and deterministic neural networks) occurs at approximately 6000 pattern presentations for the synthetic problem, and at about 18000 pattern
presentations for the ADULT problem, indicating that the stochastic bound weakly predicts
the point of minimum error. The neural network bound has no such minimum.
Is the choice of 1-5% increased empirical error optimal? In general, the "optimal"
choice of the extra error rate depends upon the learning problem. Since the stochastic
neural network bound (corollary 2.4) holds for all multidimensional Gaussian distributions,
we are free to optimize the choice of distribution in any way we desire. Figure 4 shows the
resulting bound for different choices of posterior Q. The bound has a minimum at a small
amount of extra error, indicating that our initial choices of 1% and 5% are in the right ballpark, and
5% may be unnecessarily large. Larger differences in empirical error rate such as 5% are
easier to obtain reliably with fewer samples from the stochastic neural net, but we have not
had difficulty using as few as 100 samples from the SNN with as small as a 1% increase in
empirical error. Also note that the complexity always decreases with increasing entropy in
the distribution of our stochastic neural net. The existence of a minimum in Figure 4 is the
"right" behaviour: the increased empirical error rate is significant in the calculation of the
true error bound.
4 Conclusion
We have applied a PAC-Bayes bound to the true error rate of a stochastic neural network.
The stochastic neural network bound results in a radically tighter (2-3 orders of magnitude)
bound on the true error rate of a classifier while increasing the empirical and true
error rates only a small amount.
[Figure 4 plot: true error bound or complexity (log scale) versus extra training error, 0 to 0.1; curves for stochastic NN bound and complexity.]
Figure 4: Plot of the stochastic neural net (SNN) bound for "posterior" distributions chosen
according to the extra empirical error they introduce.
Although the stochastic neural net bound is not completely tight, it is not vacuous with
just 100-200 examples, and the minima of the bound weakly predict the point where
overtraining occurs.
The results with two datasets (one synthetic and one from UCI) are extremely
promising: the bounds are orders of magnitude better. Our next step will be to test the
method on more datasets using a greater variety of net architectures to insure that the
bounds remain tight. In addition, there remain many opportunities for improving the application of the bound. For example, it is possible that shifting the weights when finding a
maximum acceptable variance will result in a tighter bound. Also, we have not taken into
account symmetries within the network which would allow for a tighter bound calculation.
References
[1] Peter Bartlett, "The Sample Complexity of Pattern Classification with Neural Networks: The Size of the Weights is More Important than the Size of the Network",
IEEE Transactions on Information Theory, Vol. 44, No. 2, March 1998.
[2] V. Koltchinskii and D. Panchenko, "Empirical Margin Distributions and
Bounding the Generalization Error of Combined Classifiers", preprint,
http://citeseer.nj.nec.com/386416.html
[3] John Langford and Matthias Seeger, "Bounds for Averaging Classifiers", CMU tech
report, 2001.
[4] David MacKay, "Probable Networks and Plausible Predictions - A Review of Practical
Bayesian Methods for Supervised Neural Networks".
[5] David McAllester, "Some PAC-Bayes bounds", COLT 1999.
Keywords: portfolio management, financial forecasting, recurrent neural networks.
Active Portfolio-Management
based on Error Correction Neural Networks
Hans Georg Zimmermann, Ralph Neuneier and Ralph Grothmann
Siemens AG
Corporate Technology
D-81730 München, Germany
Abstract
This paper deals with a neural network architecture which establishes a
portfolio management system similar to the Black / Litterman approach.
This allocation scheme distributes funds across various securities or financial markets while simultaneously complying with specific allocation
constraints which meet the requirements of an investor.
The portfolio optimization algorithm is modeled by a feedforward neural
network. The underlying expected return forecasts are based on error
correction neural networks (ECNN), which utilize the last model error as
an auxiliary input to evaluate their own misspecification.
The portfolio optimization is implemented such that (i.) the allocations
comply with the investor's constraints and that (ii.) the risk of the portfolio can be controlled. We demonstrate the profitability of our approach
by constructing internationally diversified portfolios across different
financial markets of the G7 countries. It turns out that our approach is
superior to a preset benchmark portfolio.
1 Introduction: Portfolio-Management
We integrate the portfolio optimization algorithm suggested by Black / Litterman [1] into a
neural network architecture. Combining the mean-variance theory [5] with the capital asset
pricing model (CAPM) [7], this approach utilizes excess returns of the CAPM equilibrium
to define a neutral, well balanced benchmark portfolio. Deviations from the benchmark
allocation are only allowed within preset boundaries. Hence, as an advantage, there are no
unrealistic solutions (e. g. large short positions, huge portfolio changes). Moreover, there
is no need of formulating return expectations for all assets.
In contrast to Black / Litterman, excess return forecasts are estimated by time-delay recurrent error correction neural networks [8]. Investment decisions which comply with given
allocation constraints are derived from these predictions. The risk exposure of the portfolio
is implicitly controlled by a parameter-optimizing task over time (sec. 3 and 5).
Our approach consists of the following three steps: (i.) Construction of forecast models
on the basis of error correction neural networks (ECNN) for all assets (sec. 2).
To whom correspondence should be addressed: Georg.Zimmermann@mchp.siemens.de.
(ii.) Computation of excess returns by a higher-level feedforward network
(sec. 3 and 4). By this, the profitability of an asset with respect to all others is measured.
(iii.) Optimization of the investment proportions on the basis of the excess returns.
Allocation constraints ensure that the investment proportions may deviate from a given
benchmark only within predefined intervals (sec. 3 and 4).
Finally, we apply our neural network based portfolio management system to an asset allocation problem concerning the G7 countries (sec. 6).
2 Forecasting by Error Correction Neural Networks
Most dynamical systems are driven by a superposition of autonomous development and
external influences [8]. For discrete time grids, such a dynamics can be described by a
recurrent state transition and an output equation (Eq. 1):

$$s_t = f(s_{t-1}, u_t, y_{t-1} - \hat{y}_{t-1}) \quad \text{(state transition eq.)} \qquad \hat{y}_t = g(s_t) \quad \text{(output eq.)} \qquad (1)$$

The state transition is a mapping from the previous state $s_{t-1}$, external influences $u_t$, and
a comparison between the last model output $\hat{y}_{t-1}$ and the observed data $y_{t-1}$. If the last model error
$(y_{t-1} - \hat{y}_{t-1})$ is zero, we have a perfect description of the dynamics. However, due to unknown
external influences or noise, our knowledge about the dynamics is often incomplete.
Under such conditions, the model error quantifies the model's misfit and serves
as an indicator of short-term effects or external shocks [8].
"! #
Using
weight matrices
of appropriate dimensions corresponding to
$ , $ and
$%&$ , a neural network approach of Eq. 1 can be formulated as
*
/
#
.'"! (*)+ ,
.
1
'"(*)+
0!
(2)
In Eq. 2, the output is recomputed by $C s_{t-1}$ and compared to the observation $y^d_{t-1}$. Different
dimensions in $s_t$ and $y_t$ are adjusted by D. The system identification (Eq. 3) is a parameter
optimization task over appropriately sized weight matrices A, B, C, D [8]:

$$\min_{A,B,C,D} \; \frac{1}{T} \sum_{t=1}^{T} \big(y_t - \hat{y}_t\big)^2 \qquad (3)$$
For an overview of algorithmic solution techniques see [6]. We solve the system identification task of Eq. 3 by finite unfolding in time using shared weights. For details see [3, 8].
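A minimal sketch of one ECNN step, assuming the notation of Eq. 2 (this is an illustrative reimplementation, not the authors' code): the last model error enters the state transition as an extra, error-correcting input.

```python
import numpy as np

def ecnn_step(s, u, y_d, A, B, C, D):
    """One error-correction step: s is the state, u the external input,
    y_d the observed target; A, B, C, D are the weight matrices of Eq. 2."""
    z = np.tanh(C @ s - y_d)             # error correction term
    s_next = np.tanh(A @ s + B @ u + D @ z)
    y_next = C @ s_next                  # model output / forecast
    return s_next, y_next
```

Unfolding in time simply applies this step repeatedly with shared matrices; in the overshooting part, where no targets are available, the error term is omitted.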
Fig. 1 depicts the resulting neural network solution of Eq. 3.
[Figure 1 diagram: ECNN unfolded in time; at each past step the target y^d enters through -Id to form the error-correction cluster z, which feeds the state s via D; u enters via B, the state recurs via A, and outputs leave via C, with overshooting forecasts at t+1 to t+3.]
Figure 1. Error correction neural network (ECNN) using unfolding in time and overshooting. Note
that -Id is the fixed negative of an appropriately sized identity matrix, while the $z_\tau$ are output clusters
with target values of zero in order to optimize the error correction mechanism.
The ECNN (Fig. 1) is best understood by analyzing the dependencies of $s_t$, $u_t$,
$z_{t-1} = C s_{t-1} - y^d_{t-1}$ and $\hat{y}_t$. The ECNN has two different inputs: the externals $u_t$, directly
influencing the state transition, and the targets $y^d_t$. Only the difference between $C s_t$ and $y^d_t$
has an impact on $s_{t+1}$ [8]. At all future time steps $t + \tau$, we have no compensation
of the internal expectations, and thus the system offers forecasts $\hat{y}_{t+\tau} = C s_{t+\tau}$. A
forecast of the ECNN is based on a modeling of the recursive structure of a dynamical
system (coded in A), external influences (coded in B) and the error correction mechanism,
which is also acting as an external input (coded in C, D).
which is also acting as an external input (coded in , ).
Using finite unfolding in time, we have by definition an incomplete formulation of accumulated memory in the leftmost part of the network and thus, the autoregressive modeling
is handicapped [9]. Due to the error correction, the ECNN has an explicit mechanism to
handle the initialization shock of the unfolding [8].
The ! autonomous part of the ECNN is extended into the future by the iteration of matrices
and . This is called overshooting [8]. Overshooting provides additional information about
the system dynamics and regularizes the learning. Hence, the learning of false causalities
might be reduced and the generalization ability of the model should be improved [8]. Of
course, we have to support the additional output clusters
$ by target values. However,
due to shared weights, we have the same number of parameters [8].
3 The Asset Allocation Strategy
Now, we explain how the forecasts are transformed into an asset allocation vector $a_t$
with investment proportions $a_{t,i}$ ($\sum_i a_{t,i} = 1$). For simplicity, short sales
(i.e. $a_{t,i} < 0$) are not allowed. We have to consider that the allocation (i.) pays attention to the uncertainty of the forecasts and (ii.) complies with given investment constraints.
In order to handle the uncertainty of the asset forecasts $\hat{y}_i$, we utilize the concept of excess
return. An excess return is defined as the difference between the expected returns $\hat{y}_i$ and
$\hat{y}_j$ of two assets i and j, i.e. $\hat{y}_i - \hat{y}_j$. The investment proportions of assets which
have a superior excess return should be enlarged, because they seem to be more valuable.
Further on, let us define the cumulated excess return $\varepsilon_{t,i}$ as a weighted sum of the excess returns
of one asset i over all other assets j, $j \ne i$:

$$\varepsilon_{t,i} = \sum_{j \ne i} w_{ij}\,(\hat{y}_{t,i} - \hat{y}_{t,j}), \qquad w_{ij} \ge 0 \qquad (4)$$

The forbiddance of short sales ($a_{t,i} \ge 0$) and the constraint that investment proportions
sum up to one ($\sum_i a_{t,i} = 1$) can be easily satisfied by the softmax transformation

$$a_{t,i} = \frac{\exp(\varepsilon_{t,i})}{\sum_k \exp(\varepsilon_{t,k})} \qquad (5)$$
The market share constraints are given by the asset manager in the form of intervals which
have a mean value of $m_i$. The vector $m = (m_1, \ldots, m_k)$ is the benchmark allocation. The admissible spread
$\delta_i$ defines how much the allocation may deviate from $m_i$:

$$a_{t,i} \in [\,m_i - \delta_i,\; m_i + \delta_i\,] \qquad (6)$$

Since we have to level the excess returns around the mean of the intervals, Eq. 4 is adjusted
by a bias vector $b = (b_1, \ldots, b_k)$ corresponding to the benchmark allocation:

$$\varepsilon_{t,i} = b_i + \sum_{j \ne i} w_{ij}\,(\hat{y}_{t,i} - \hat{y}_{t,j}) \qquad (7)$$
The bias $b_i$ forces the system to put funds into asset i, even if the cumulated excess return
does not propose an investment. The vector b can be computed beforehand by solving the
system of nonlinear equations which results from setting the excess returns (Eq. 7) to zero
and demanding that the softmax of b reproduces the benchmark:

$$\frac{\exp(b_i)}{\sum_k \exp(b_k)} = m_i, \qquad i = 1, \ldots, k \qquad (8)$$

Since the allocation m represents the benchmark portfolio, the precondition $\sum_i m_i = 1$
leads to a non-unique solution (Eq. 9) of the latter system (Eq. 8):

$$b_i = \ln(m_i) + c \qquad (9)$$

for any real number c. In the following, we choose c = 0.
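A small sketch of the benchmark bias of Eq. 9 with c = 0 (assumed notation, not the authors' code): with zero excess returns, a softmax over $b_i = \ln(m_i)$ exactly reproduces the benchmark allocation, while nonzero weighted excess returns tilt the allocation away from it.

```python
import numpy as np

def allocate(excess, m):
    """Softmax allocation of Eq. 5 with the benchmark bias of Eq. 9."""
    b = np.log(m)                        # benchmark bias, c = 0
    e = np.exp(b + excess)
    return e / e.sum()

m = np.array([0.5, 0.3, 0.2])            # benchmark market shares
print(allocate(np.zeros(3), m))          # -> [0.5, 0.3, 0.2], the benchmark
print(allocate(np.array([0.2, 0.0, -0.2]), m))  # tilted allocation
```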
The interval $[m_i - \delta_i, m_i + \delta_i]$ defines constraints for the parameters $w_{ij}$, because
the latter quantify the deviation of $a_t$ from the benchmark m. Thus, the return
maximization task can be stated as a constrained optimization problem, with $R_{t,i}$ as the actual
return of asset i at time t:

$$\max_{w} \; \sum_{t} \sum_{i} a_{t,i}\, R_{t,i} \qquad \text{subject to} \qquad a_{t,i} \in [\,m_i - \delta_i,\; m_i + \delta_i\,] \qquad (10)$$
This problem can be solved as a penalized maximization task

$$\max_{w} \; \sum_{t} \sum_{i} \big[\, a_{t,i}\, R_{t,i} - \phi(a_{t,i}) \,\big] \qquad (11)$$

where $\phi$ is defined as a type of $\varepsilon$-insensitive error function:

$$\phi(a_{t,i}) = \begin{cases} \big(|a_{t,i} - m_i| - \delta_i\big)^2 & \text{if } |a_{t,i} - m_i| > \delta_i \\ 0 & \text{otherwise} \end{cases} \qquad (12)$$
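A minimal sketch of the interval penalty of Eq. 12 (the squared hinge is one natural reading of the garbled original, so treat the exact power as an assumption): allocations are only penalized once they leave the admissible band around the benchmark share.

```python
import numpy as np

def interval_penalty(a, m, delta):
    """Zero inside [m - delta, m + delta]; grows quadratically outside."""
    excess = np.maximum(np.abs(a - m) - delta, 0.0)
    return np.sum(excess**2)

a = np.array([0.55, 0.25, 0.20])
m = np.array([0.50, 0.30, 0.20])
print(interval_penalty(a, m, 0.04))   # only the first two entries violate
```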
Summarizing, the construction of the allocation scheme consists of the following two
steps: (i.) train the error correction sub-networks and compute the excess returns
$\hat{y}_{t,i} - \hat{y}_{t,j}$; (ii.) optimize the allocation parameters $w_{ij}$ using the forecast models
with respect to the market share constraints:

$$\max_{w} \; \sum_{t} \sum_{i} \big[\, a_{t,i}\, R_{t,i} - \phi(a_{t,i}) \,\big] \qquad (13)$$
As we will explain in sec. 5, Eq. 13 also controls the portfolio risk.
4 Modeling the Asset Allocation Strategy by Neural Networks
A neural network approach of the allocation scheme (sec. 3) is shown in Fig. 2.
The first layer of the portfolio optimization neural network (Fig. 2) collects the predictions
$\hat{y}_{t,i}$ from the underlying ECNNs. The matrix entitled "unfolding" computes the excess
returns for the k assets; it is shown as a contour plot in which white
spaces indicate weights with a value of zero, while grey stands for +1 and black for -1.
The layer entitled "excess returns" is designed as an output cluster, i.e. it is associated
with an error function which computes error signals for each training pattern. By this, we
can identify inter-market dependencies, since the neural network is forced to learn
cross-market relationships.
[Figure 2 diagram: ECNN forecasts of the k assets feed a fixed "unfolding" matrix producing the k(k-1) excess returns; a diagonal matrix of trainable weights w_ij yields the weighted excess returns; a fixed "folding" matrix and a fixed benchmark bias feed the softmax "asset allocation" layer, which is compared against the k market shares.]
Figure 2. Arranged on top of the ECNN sub-networks, a higher-level neural network models the
portfolio optimization algorithm on the basis of the excess returns. The diagonal matrix which
computes the weighted excess returns includes the only tunable parameters $w_{ij}$; all others are fixed.
Next, each excess return is weighted by a particular $w_{ij}$ via a diagonal connection to
the layer entitled "weighted excess returns". Afterwards, the weighted excess returns are
folded using the transpose of the sparse matrix called "unfolding" (see Fig. 2). By this, we
calculate the sum of the weighted excess returns for each asset i.
According to the predictions of the excess returns, the layer "asset allocation" computes
profitable investment decisions. In case all excess returns are zero, the benchmark
portfolio is reproduced by the offset $b_i = \ln(m_i)$.
Otherwise, funds are allocated within the preset investment boundaries $[m_i - \delta_i, m_i + \delta_i]$,
while simultaneously complying with the constraints $\sum_i a_{t,i} = 1$ and $a_{t,i} \ge 0$.
In order to prevent short selling, we assume $a_{t,i} \ge 0$ for each investment proportion. Further
on, we have to guarantee that the sum of the proportions invested in the k securities equals
one, i.e. $\sum_i a_{t,i} = 1$. Both constraints are satisfied by the activation function of the "asset
allocation" layer, which implements the non-linear transformation of Eq. 5 using softmax. The return maximization task of Eq. 10 is also solved by this cluster by generating
error signals utilizing the prof-max error function (Eq. 14).
$$\max_{w} \; \sum_{t} \sum_{i} a_{t,i}\, R_{t,i} \qquad (14)$$
The layer "market shares" takes care of the allocation constraints. The error function of
Eq. 15 is implemented to ensure that the investments do not violate the preset constraints:

$$\min_{w} \; \sum_{t} \sum_{i} \phi(a_{t,i}) \qquad (15)$$
The "market shares" cluster generates error signals for the penalized optimization problem
stated in Eq. 11. By this, we implement a penalty for exceeding the allocation intervals
$[m_i - \delta_i, m_i + \delta_i]$. The error signals of Eq. 10 and Eq. 11 are subsequently used for
computing the gradients in order to adapt the parameters $w_{ij}$.
5 Risk Analysis of the Neural Portfolio-Management
In Tab. 1 we compare the mean-variance framework of Markowitz with our neural network
based portfolio optimization algorithm.
Markowitz:
  input (for each decision): return forecasts and the risk-covariance matrix
  optimization: quadratic program trading expected return against an accepted risk exposure
  output (for each decision): one allocation vector

Neural network:
  input: prediction models and the deviation intervals $[m_i - \delta_i, m_i + \delta_i]$
  optimization: return maximization $\max_w \sum_t \sum_i a_{t,i} R_{t,i}$ with implicit risk control
  output: k decision schemes
Table 1. Comparison of the portfolio optimization algorithm of Markowitz with our approach.
The most crucial difference between the mean-variance framework and our approach is the
handling of the risk exposure (see Tab. 1). The Markowitz algorithm optimizes the expected
risk explicitly by quadratic programming. Assuming that it is not possible to forecast the
expected returns of the assets (often referred to as the random walk hypothesis), the forecasts
are determined by an average of the most recent observed returns, while the risk-covariance
matrix is estimated from the historical volatility of the assets. Hence, the risk
of the portfolio is determined by the volatility of the time series of the assets.
However, insisting on the existence of useful forecast models, we propose to derive the
covariance matrix from the forecast model residuals, i.e. the risk matrix is determined
by the covariances of the model errors. Now, the risk of the portfolio is due to the
non-forecastability of the assets only. Since our allocation scheme is based on the model
uncertainty, we refer to this approach as causal risk.
Using the covariances of the model errors as a measurement of risk still allows us to apply the
Markowitz optimization scheme. Here, we propose to substitute the quadratic optimization
problem of the Markowitz approach by the objective function of Eq. 16:

$$\max_{w} \; \sum_{t=1}^{T} \sum_{i} a_{t,i}\, R_{t,i} \qquad (16)$$

The error function of Eq. 16 is optimized over time $t = 1, \ldots, T$ with respect to the parameters $w_{ij}$, which are used to evaluate the certainty of the excess return forecasts. By this,
it is possible to construct an asset allocation strategy which implicitly controls the risk
exposure of the portfolio according to the certainty of the forecasts. Note that Eq. 16
can be extended by a time delay parameter $\lambda^{T-t}$ in order to focus on more recent events.
If the predicted excess returns are reliable, then the weights $w_{ij}$ are greater than
zero, because the optimization algorithm emphasizes the particular asset in comparison to
other assets with less reliable forecasts. In contrast, unreliable predictions are ruled out by
pushing the associated weights $w_{ij}$ towards zero. Therefore, Eq. 16 implicitly controls the
risk exposure of the portfolio, although it is formulated as a return maximization task.
Eq. 16 has to be optimized with respect to the allocation constraints of Eq. 6. This allows
the definition of an active risk parameter $\gamma \in [0, 1]$ quantifying the readiness to deviate
from the benchmark portfolio within the allocation constraints:

$$a_{t,i} \in [\,m_i - \gamma\delta_i,\; m_i + \gamma\delta_i\,] \qquad (17)$$

The weights $w_{ij}$ and the allocations $a_{t,i}$ are now dependent on the risk level $\gamma$. If $\gamma = 0$,
then the benchmark is recovered, while $\gamma > 0$ allows deviations from the benchmark
within the bounds $\gamma\delta_i$. Thus, the active risk parameter analyses the risk sensitivity of the
portfolios with respect to the quality of the forecast models.
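A minimal sketch of an active-risk sweep over Eq. 17 (the helper optimize_weights is a hypothetical stand-in for the constrained optimization of Eq. 16, not part of the paper): shrinking the allowed deviation intervals by gamma interpolates between the pure benchmark (gamma = 0) and the fully active portfolio (gamma = 1).

```python
import numpy as np

def risk_profile(m, delta, optimize_weights, gammas=np.linspace(0, 1, 11)):
    """Re-optimize the allocation weights for each active risk level gamma."""
    profiles = []
    for gamma in gammas:
        w = optimize_weights(m, gamma * delta)   # tighter bounds -> smaller w
        profiles.append((float(gamma), w))
    return profiles
```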
6 Empirical Study
Now, we apply our approach to the financial markets of the G7 countries. We work on
the basis of monthly data in order to forecast the semi-annual development of the stock,
cash and bond markets of the G7 countries Spain, France, Germany, Italy, Japan, UK and
USA. A separate ECNN is constructed for each market on the basis of country specific
economic data. Due to the recurrent modeling, we only calculated the relative change of
each input. The transformed inputs are scaled such that they have a mean of zero and a
variance of one [8]. The complete data set (Sept. 1979 to May 1995) is divided into three
subsets: (i.) Training set (Sept. 1979 to Jan. 1992). (ii.) Validation set (Feb. 1992 to
June 1993), which is used to learn the allocation parameters . (iii.) Generalization set
(July 1993 to May 1995). Each ECNN was trained until convergence by using stochastical
vario-eta learning, which includes re-normalization of the gradients in each step of the
backpropagation algorithm [9].
We evaluate the performance of our approach by a comparison with the benchmark portfolio,
which is calculated with respect to the market shares $m_i$. The comparison of our strategy
and the benchmark portfolio is drawn on the basis of the accumulated return of investment
(Fig. 3). Our strategy is able to clearly outperform the benchmark portfolio on the
generalization set. A further enhancement of the portfolio performance can only be achieved
if one relaxes the market share constraints. This indicates that the tight allocation boundaries,
which prevent huge capital transactions from non-profitable to booming markets, narrow
additional gains.
In Fig. 4 we compare the risk of our portfolio to the risk of the benchmark portfolio. Here,
the portfolio risk is defined analogously to the mean-variance framework. However, in contrast
to this approach, the expected (co-)variances are replaced by the residuals $y - \hat{y}$ of the
underlying forecast models. The risk level which is induced by our strategy is comparable
to the benchmark (Fig. 4), while simultaneously increasing the portfolio return (Fig. 3).
Fig. 5 compares the allocations of German bonds and stocks across the generalization set: A
typical reciprocal investment behavior is depicted, e. g. enlarged positions in stocks often
occur in parallel with smaller investments in bonds. This effect is slightly disturbed by
international diversification. Not all countries show such a coherent investment behavior.
7 Conclusions and Future Work
We described a neural network approach which adapts the Black / Litterman portfolio optimization algorithm. Here, funds are allocated across various securities while simultaneously complying with allocation constraints. In contrast to the mean-variance theory, the
risk exposure of our approach focuses on the uncertainty of the underlying forecast models.
[Figures 3-5 plots, each over the generalization period July 1993 to May 1995: accumulated return for the ECNN strategy vs. the benchmark; portfolio risk vs. benchmark risk; and German bond and stock allocations.]
Figure 3. Comparison of accumulated return of investment (generalization set).
Figure 4. Comparison of portfolio risk (generalization set).
Figure 5. Investments in German bonds and stocks (generalization set).
The underlying forecasts are generated by ECNNs, since our empirical results indicate that
this is a very promising framework for financial modeling. Extending the ECNN by using
techniques like overshooting, variants-invariants separation or unfolding in space and time,
one is able to include additional prior knowledge of the dynamics into the model [8, 9].
Future work will include the handling of a larger universe of assets. In this case, one may
extend the neural network by a bottleneck which selects the most promising assets.
References
[1] Black, F., Litterman, R.: Global Portfolio Optimization, Financial Analysts Journal, Sep. 1992.
[2] Elton, E. J., Gruber, M. J.: Modern Portfolio Theory and Investment Analysis, J. Wiley & Sons.
[3] Haykin, S.: Neural Networks. A Comprehensive Foundation, Macmillan, N.Y. 1998.
[4] Lintner, J.: The Valuation of Risk Assets and the Selection of Risky Investments in Stock Portfolios and Capital Budgets, in: Review of Economics and Statistics, Feb. 1965.
[5] Markowitz, H. M.: Portfolio Selection, in: Journal of Finance, Vol. 7, 1952, p. 77-91.
[6] Pearlmutter, B.: Gradient Calculations for Dynamic Recurrent Neural Networks: A Survey, in: IEEE Transactions on Neural Networks, Vol. 6, 1995.
[7] Sharpe, F.: A Simplified Model for Portfolio Analysis, Management Science, Vol. 9, 1963.
[8] Zimmermann, H. G., Neuneier, R., Grothmann, R.: Modeling of Dynamical Systems by Error Correction Neural Networks, in: Modeling and Forecasting Financial Data, Techniques of Nonlinear Dynamics, Eds. Soofi, A. and Cao, L., Kluwer 2001.
[9] Zimmermann, H. G., Neuneier, R.: Neural Network Architectures for the Modeling of Dynamical Systems, in: A Field Guide to Dynamical Recurrent Networks, Eds. Kremer, St. et al., IEEE.
Dynamic Behavior of Constrained
Back-Propagation Networks
Yves Chauvin¹
Thomson-CSF, Inc.
630 Hansen Way, Suite 250
Palo Alto, CA. 94304
ABSTRACT
The learning dynamics of the back-propagation algorithm are investigated when complexity constraints are added to the standard
Least Mean Square (LMS) cost function. It is shown that loss of
generalization performance due to overtraining can be avoided
when using such complexity constraints. Furthermore, "energy,"
hidden representations and weight distributions are observed and
compared during learning. An attempt is made at explaining the
results in terms of linear and non-linear effects in relation to the
gradient descent learning algorithm.
1 INTRODUCTION
It is generally admitted that the generalization performance of back-propagation networks (Rumelhart, Hinton & Williams, 1986) will depend on the relative size of the
training data and of the trained network. By analogy to curve-fitting and for theoretical considerations, the generalization performance of the network should decrease as the size of the network and the associated number of degrees of freedom
increase (Rumelhart, 1987; Denker et al., 1987; Hanson & Pratt, 1989).
This paper examines the dynamics of the standard back-propagation algorithm
(BP) and of a constrained back-propagation variation (CBP), designed to adapt
the size of the network to the training data base. The performance, learning
dynamics and the representations resulting from the two algorithms are compared.
¹ Also in the Psychology Department, Stanford University, Stanford, CA 94305.
2 GENERALIZATION PERFORMANCE
2.1 STANDARD BACK-PROPAGATION
In Chauvin (In Press), the generalization performance of a back-propagation network was observed for a classification task from spectrograms into phonemic categories (single speaker, 9 phonemes, 10 ms x 16 frequencies spectrograms, 63 training
patterns, 27 test patterns). This performance was examined as a function of the
number of training cycles and of the number of (logistic) hidden units (see also
Morgan & Bourlard, 1989). During early learning, the performance of the network
appeared to be basically independent of the number of hidden units (provided a
minimal size). However, after prolonged training, performance started to decrease
with training at a rate that was a function of the size of the hidden layer. More
precisely, from 500 to 10,000 cycles, the generalization performance (in terms of
percentage of correctly classified spectrograms) decreased from about 93% to 74%
for a 5 hidden unit network and from about 95% to 62% for a 10 hidden unit
network. These results confirmed the basic hypothesis proposed in the Introduction, but only with a sufficient number of training cycles (overtraining).
2.2 CONSTRAINED BACK-PROPAGATION
Several constraints have been proposed to "adapt" the size of the trained network
to the training data. These constraints can act directly on the weights, or on the net
input or activation of the hidden units (Rumelhart, 1987; Chauvin, 1987, 1989, In
Press; Hanson & Pratt, 1989; Ji, Snapp & Psaltis, 1989; Ishikawa, 1989; Golden
and Rumelhart, 1989). The complete cost function adopted in Chauvin (In Press)
for the speech labeling task was the following:
$$C = \alpha E_r + \beta E_n + \gamma W = \alpha \sum_{p}\sum_{o}\big(t_{op} - o_{op}\big)^2 + \beta \sum_{p}\sum_{h}\frac{o_{hp}^2}{1 + o_{hp}^2} + \gamma \sum_{ij}\frac{w_{ij}^2}{1 + w_{ij}^2} \qquad [1]$$
$E_r$ is the usual LMS error computed at the output layer, $E_n$ is a function of the
squared activations of the hidden units, and W is a function of the squared weights
throughout the network. This constrained back-propagation (CBP) algorithm basically eliminated the overtraining effect: the resulting generalization performance
remained constant (about 95%) throughout the complete training period, independently of the original network size.
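A minimal sketch of the cost in Equation 1 (shapes and variable names are assumptions for illustration, not the paper's code); the two penalty terms share the same saturating form x^2 / (1 + x^2):

```python
import numpy as np

def saturating(x):
    """Penalty term of Eq. 1: grows like x^2 near zero, saturates at 1."""
    return x**2 / (1.0 + x**2)

def constrained_cost(targets, outputs, hidden, weights, a, b, g):
    """targets/outputs: (patterns, outputs); hidden: (patterns, hidden);
    weights: list of weight matrices; a, b, g are alpha, beta, gamma."""
    err = a * np.sum((targets - outputs)**2)        # E_r, LMS at the output
    energy = b * np.sum(saturating(hidden))         # E_n over hidden units
    decay = g * sum(np.sum(saturating(W)) for W in weights)  # W term
    return err + energy + decay
```

Because the saturating penalty flattens out for large arguments, it mainly pressures small activations and weights toward zero while leaving large, useful ones essentially untouched.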
3 ERROR AND ENERGY DYNAMICS
Using the same speech labeling task as in Chauvin (In Press), the dynamics of the
global variables of the network defined in Equation 1 ($E_r$, $E_n$, and W) were
observed during training of a network with 5 hidden units. Figure 1 represents the
error and energy dynamics for the standard (BP) and the constrained back-propagation algorithm (CBP). For BP and CBP, the error on the training patterns kept
[Figure 1 plots: hidden-unit energy E_n (left) and generalization error on the test set (right) versus number of cycles (x1000), for BP and CBP.]
Figure 1. "Energy" (left) and generalization error, LMS averaged over the test patterns and output units (right), when using the standard (BP) or the constrained (CBP) back-propagation algorithm during a typical run.
decreasing during the entire training period (more slowly for CBP). The W dynamics over the entire network were similar for BP and CBP (but the distributions were
different, see below).
3.1 STANDARD BACK-PROPAGATION
As shown in Figure 1, the "energy" Ell (Equation 1) of the hidden layer slightly
increases during the entire learning period, long after the minimum was reached for
the test error (around 200 cycles). This "energy" reaches a plateau after long
overtraining, around 10,000 cycles. The generalization error reaches a minimum
and later increases as training continues, also slowly reaching a plateau around
10,000 cycles.
3.2 CONSTRAINED BACK-PROPAGATION
With CBP, the "energy" decreases to a much lower level during early learning and
remains about constant throughout the complete training period. The error quickly
decreases during early learning and remains about constant during the rest of the
training period, apparently stabilized by the energy and weight constraints given in
Equation 1.
4 REPRESENTATION
The hidden unit activations and weights of the networks were examined after learning,
using BP or CBP. A hidden unit was considered "dead" when its contribution
to any output unit (computed as the product of its activation times the corresponding outgoing weight) was at least 50 times smaller than the total contribution from
all hidden units, over the entire set of input patterns.
4.1 STANDARD BACK-PROPAGATION
As also observed by Hanson et al. (1989), standard back-propagation usually
makes use of most or all hidden units: the representation of the input patterns is
well distributed over the entire set of hidden units, even if the network is oversized
for the task. The exact representation depends on the initial weights.
4.2 CONSTRAINED BACK-PROPAGATION
Using the constraints described in Equation 1, the hidden layer was reduced to 2 or
3 hidden units for all the observed runs (2 hidden units corresponds to the minimal
size network necessary to solve the task). All the other units were actually "killed"
during learning, independently of the size of the original network (from 4 to 11
units in the simulations). Both the constraints on the hidden unit activations ($E_n$)
and on the weights (W) contribute to this reduction.
Figure 2 represents an example of the resulting weights from the input layer to a
remaining hidden unit. As we can see. a few weights ended up dominating the
entire set: they actually "picked up" a characteristic of the input spectrograms that
allow the disctinction between two phoneme categories (this phenomenon was also
predicted and observed by Rumelhart. 1989). In this case. the weights "picked
up" the 10th and 14th frequency components of the spectrograms. both present
during the 5th time interval. The characteristics of the spectrum make the corresponding hidden unit especially responsive to the [0] phoneme. The specific nonlinear W constraint on the input-to-hidden weights used by CBP forced that hidden unit to acquire a very local receptor field. Note that this was not always observed in the simulations. Some hidden units acquired broad receptor fields with
weights distributed over the entire spectrogram (as it is always the case with standard BP). No statistical comparison was made to compute the relative ratio of local
to distributed units. which probably depends on the exact form of the reduction
constraint used in CB P.
5 INTERPRETATION OF RESULTS
We observed that the occurrence of overfitting effects depends both on the size of
the network and on the number of training cycles. At this point, a better theoretical understanding of the back-propagation learning dynamics would be useful to
explain this dependency (Chauvin, In Preparation). This section presents an informal interpretation of the results in terms of linear and non-linear phenomena.
[Figure 2 display: the fan-in weight matrix of a hidden unit; horizontal axis "From Input Layer".]
Figure 2. Typical fan-in weights after learning from the input layer
to a hidden unit using the constrained back-propagation algorithm.
5.1 LINEAR PHENOMENA
These linear phenomena might be due to probable correlations between sample
plus observation noise at the input level and the desired classification at the output
level. The gradient descent learning rule should eventually make use of these correlations to decrease the LMS error. However, these correlations are specific to
the training data set used and should have a negative impact on the performance of
the network on a testing data set. Figure 3 represents the generalization performance of linear networks with 1 and 7 hidden units (averaged over 5 runs) for the
speech labeling task described above. As predicted, we can see that overtraining
effects are actually generated by linear networks (as they would with a one-step
algorithm; e.g., Vallet et al., 1989). Interestingly, they occur even when the size of
the network is minimal. These effects should obviously decrease by increasing the
size of the training data set (therefore reducing the effect of sample and observation noise).
5.2 NON-LINEAR PHENOMENA
The second type of effect is non-linear. This is illustrated in Chauvin (In Press)
with a curve-fitting problem. In this problem, a non-linear back-propagation
network (1 input unit, 1 output unit, 2 layers of 20 hidden units) is trained to fit a
function composed of two linear segments separated by a discontinuity. The mapping realized by the network over the entire interval is observed as a function of the
number of training cycles. It appears that the interpolated fit reaches a minimum
[Figure 3 curves: training and testing LMS error for the one-hidden-unit (H1) and seven-hidden-unit (H7) networks, versus the number of cycles (x1000).]
Figure 3. LMS error for the training and test data sets of a speech
labeling task as a function of the number of training cycles. One-hidden-unit and 7-hidden-unit linear networks are considered.
and gets worse with the number of training cycles and with the size of the sample
training set around the discontinuity.
This phenomenon is evocative of an effect in interpolation theory known as the
Runge effect (Steffenssen, 1950). In this case, a "well-behaved" bell-like function, f(x) = 1/(1 + x²), uniformly sampled n+1 times over a [-D, +D] interval, is
fitted with a polynomial of degree n. Runge showed that over the considered interval, the maximum distance between the fitted function and the fitting polynomial
goes to infinity as n increases. Note that in theory, there is no overfitting since the
number of degrees of freedom associated with the polynomial matches the number
of data points. However, the interpolation "overfitting effect" actually increases
with the sampling data set, that is, with the increased accuracy in the description of
the fitted function. (Runge also showed that the effect may disappear by changing
the size of the sampled interval or the distribution of the sampling data points.)
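The effect is easy to reproduce numerically; the following sketch (function name hypothetical) fits Runge's function with an interpolating polynomial and prints the growing maximum error:

```python
import numpy as np

def runge_max_error(n, D=5.0):
    """Maximum distance between f(x) = 1/(1 + x^2) and its degree-n
    interpolating polynomial through n+1 uniform samples on [-D, D]."""
    f = lambda x: 1.0 / (1.0 + x ** 2)
    xs = np.linspace(-D, D, n + 1)
    coeffs = np.polyfit(xs, f(xs), n)       # interpolating polynomial
    grid = np.linspace(-D, D, 2001)
    return np.max(np.abs(f(grid) - np.polyval(coeffs, grid)))

for n in (4, 8, 16):
    print(n, runge_max_error(n))            # the error grows with n
```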
We can notice that in the piecewise linear example, a linear network would have
computed a linear mapping using only two degrees of freedom (the problem is then
equivalent to one-dimensional linear regression). With a non-linear network, simulations show that the network actually computes the desired mapping by slowly
fitting higher and higher "frequency components" present in the desired mapping
(reminiscent of the Gibbs phenomenon observed with successive Fourier series
approximations of a square wave; e.g., Sommerfeld, 1949). The discontinuity,
considered as a singular point with high frequency components, is fitted during later
stages of learning. Increasing the number of sampling points around the discontinuity generates an effect similar to the Runge effect with overtraining. In this
sense, the notion of degrees of freedom in non-linear neural networks is not only a
function of the network architecture - the "capacity" of the network - and of the
non-linearities of the fitted function but also of the learning algorithm (gradient
descent), which gradually "adjusts" the "capacity" of the network to fit the non-linearities required by the desired function.
A practical classification task might generate not only linear overtraining effects
due to sample and observation noise but also non-linear effects if a continuous
input variable (such as a frequency component in the speech example) has to be
classified in two different bins. It is also easy to imagine that noise may generate
non-linear effects. At this stage, the non-linear effects involved in back-propagation networks composed of logistic hidden units are poorly understood. In general,
both effects will probably occur in non-linear networks and might be difficult to
assess. However, because of the gradient descent procedure, both effects seem to
depend on the amount of training relative to the capacity of the network. The use
of constraints acting on the complexity of the network seems to constitute a promising solution to the overtraining problem in both the linear and non-linear cases.
Acknowledgements
I am grateful to Pierre Baldi, Fred Fisher, Matt Franklin, Richard Golden, Julie
Holmes, Erik Marcade, Yoshiro Miyata, David Rumelhart and Charlie Schley for
helpful comments.
References
Chauvin, Y. (1987). Generalization as a function of the number of hidden units in
back-propagation networks. Unpublished Manuscript. University of California,
San Diego, CA.
Chauvin, Y. (1989). A back-propagation algorithm with optimal use of the
hidden units. In D. Touretzky (Ed.), Advances in Neural Information Processing
Systems 1. Palo Alto, CA: Morgan Kaufman.
Chauvin, Y. (In Press). Generalization performance of back-propagation
networks. Proceedings of the 1990 European conference on Signal Processing
(Eurasip) . Springer-Verlag.
Chauvin, Y. (In Preparation). Generalization performance of LMS trained linear
networks.
Denker, J. S., Schwartz, D. B., Wittner, B. S., Solla, S. A., Howard, R. E., Jackel,
L. D., & Hopfield, J. J. (1987). Automatic learning, rule extraction, and
generalization. Complex systems, 1, 877-922.
Golden, R.M., & Rumelhart, D.E.
(1989).
Improving generalization in
multi-layer networks through weight decay and derivative minimization.
Unpublished Manuscript. Stanford University, Palo Alto, CA.
Hanson, S. J. & Pratt, L. P. (1989). Comparing biases for minimal network
construction with back-propagation. In D. Touretzky (Ed.), Advances in Neural
Information Processing Systems 1. Palo Alto, CA: Morgan Kaufman.
Ishikawa, M. (1989). A structural learning algorithm with forgetting of link
weights. Proceedings of the IJCNN International Joint Conference on Neural
Networks, II, 626. Washington D.C., June 18-22, 1989.
Ji, C., Snapp R. & Psaltis D. (1989). Generalizing smoothness constraints from
discrete samples. Unpublished Manuscript. Department of Electrical Engineering.
California Institute of Technology, CA.
Morgan, N. & Bourlard, H. (1989). Generalization and parameter estimation in
feedforward nets: some experiments. Paper presented at the Snowbird Conference
on Neural Networks, Utah.
Rumelhart, D. E., Hinton, G. E., & Williams, R. J. (1986). Learning internal
representations by error propagation. In D. E. Rumelhart & J. L. McClelland
(Eds.), Parallel Distributed Processing: Explorations in the Microstructure of
Cognition (Vol. I). Cambridge, MA: MIT Press.
Rumelhart, D. E. (1987). Talk given at Stanford University, CA.
Rumelhart, D. E. (1989). Personal Communication.
Sommerfeld, A. (1949). Partial differential equations in physics (Vol. VI).
Academic Press: New York, NY.
Steffenssen, J. F. (1950). Interpolation. Chelsea: New York, NY.
Vallet, F., Cailton, J.-G., & Refregier, P. (1989). Solving the problem of overfitting
of the pseudo-inverse solution for classification learning. Proceedings of the
IJCNN Conference, II, 443-450. Washington D.C., June 18-22, 1989.
Means, Correlations and Bounds
M.A.R. Leisink and H.J. Kappen
Department of Biophysics
University of Nijmegen , Geert Grooteplein 21
NL 6525 EZ Nijmegen, The Netherlands
{martijn,bert}@mbfys.kun.nl
Abstract
The partition function for a Boltzmann machine can be bounded
from above and below. We can use this to bound the means and
the correlations. For networks with small weights, the values of
these statistics can be restricted to non-trivial regions (i.e. a subset
of [-1 , 1]). Experimental results show that reasonable bounding
occurs for weight sizes where mean field expansions generally give
good results.
1
Introduction
Over the last decade, bounding techniques have become a popular tool to deal with
graphical models that are too complex for exact computation. A nice property of
bounds is that they give at least some information you can rely on. For instance,
one may find that a correlation is definitely between 0.4 and 0.6. An ordinary approximation might be more accurate, but in practical situations there is absolutely
no guarantee of that.
The best known bound is probably the mean field bound , which has been described
for Boltzmann machines in [1] and later for sigmoid belief networks in [2]. Apart
from its bounding properties, mean field theory is a commonly used approximation
technique as well. Recently this first order bound was extended to a third order
approximation for Boltzmann machines and sigmoid belief networks in [3] and [4],
where it was shown that this particular third order expansion is still a bound.
In 1996 an upper bound for Boltzmann machines was described in [5] and [6]. In the
same articles the authors derive an upper bound for a special case of sigmoid belief
networks: the two-layered networks. In this article we will focus solely on Boltzmann
machines, but an extension to sigmoid belief networks is quite straightforward.
This article is organized as follows: In section 2 we start with the general theory
about bounding techniques. Later in that section the upper and lower bound are
briefly described. For a full explanation we refer to the articles mentioned before.
The section is concluded by explaining how these bounds on the partition function
can be used to bound means and correlations. In section 3 results are shown for
fully connected Boltzmann machines, where size of weights and thresholds as well as
network size are varied. In section 4 we present our conclusions and outline possible
extensions.
2
Theory
There exists a general method to create a class of polynomials of a certain order,
which all bound a function of interest, f_0(x). Such a class of order 2n can be
found if the 2n-th order derivative of f_0(x), written as f_2n(x), can be bounded by
a constant. When this constant is zero, the class is actually of order 2n-1. It turns
out that this class is parameterized by n free parameters.
Suppose we have a function b_2k for some integer k which bounds the function f_2k
from below (an upper bound can be written as a lower bound by using the negative
of both functions). Thus

    ∀x   f_2k(x) ≥ b_2k(x)                                    (1)

Now construct the primitive functions f_2k-1 and b_2k-1 such that f_2k-1(p) =
b_2k-1(p) for a freely chosen value of p. This constraint can always be achieved by
adding an appropriate constant to the primitive function b_2k-1. It is easy to prove
that

    f_2k-1(x) ≤ b_2k-1(x)  for x < p
    f_2k-1(x) ≥ b_2k-1(x)  for x ≥ p                          (2)

or in shorthand notation f_2k-1(x) ≶ b_2k-1(x).
If we repeat this procedure and construct the primitive functions f_2k-2 and b_2k-2
such that f_2k-2(p) = b_2k-2(p) for the same p, one can verify that

    ∀x   f_2k-2(x) ≥ b_2k-2(x)                                (3)

Thus given a bound f_2k(x) ≥ b_2k(x) we can construct a class of bounding functions
for f_2k-2 parameterized by p.
Since we assumed f_2n(x) can be bounded from below by a constant, we can apply the
procedure n times and we finally find f_0(x) ≥ b_0(x), where b_0(x) is parameterized
by n free parameters. This procedure can be found in more detail in [4].
2.1
A third order lower bound for Boltzmann machines
Boltzmann machines are stochastic neural networks with N binary valued neurons,
s_i, which are connected by symmetric weights w_ij. Due to this symmetry the
probability distribution is a Boltzmann-Gibbs distribution which is given by (see
also [7])

    p(s|θ,w) = (1/Z) exp( ½ Σ_ij w_ij s_i s_j + Σ_i θ_i s_i ) = (1/Z) exp( −E(s,θ,w) )    (4)

where the θ_i are threshold values and

    Z(θ,w) = Σ_{all s} exp( −E(s,θ,w) )                                                   (5)
is the normalization known as the partition function.
This partition function is especially important, since statistical quantities such as
means and correlations can be directly derived from it. For instance, the means can
be computed as

    ⟨s_n⟩ = Σ_{all s} p(s|θ,w) s_n = Σ_{all s\s_n} [ p(s, s_n=+1|θ,w) − p(s, s_n=−1|θ,w) ]
          = [ Z_+(θ,w) − Z_-(θ,w) ] / Z(θ,w)                                              (6)

where Z_+ and Z_- are partition functions over a network with s_n clamped to +1
and −1, respectively.
This explains why the objective of almost any approximation method is the partition
function given by equation 5. In [3] and [4] it is shown that the standard mean field
lower bound can be obtained by applying the linear bound

    e^x ≥ e^μ (1 + x − μ)                                                                 (7)

to all exponentially many terms in the sum. Since μ may depend on s, one can
choose μ(s) = Σ_i μ_i s_i + μ_0, which leads to the standard mean field equations, where
the μ_i turn out to be the local fields.
Moreover, the authors show that one can apply the procedure of 'upgrading bounds'
(which is described briefly at the beginning of this section) to equation 7, which
leads to the class of third order bounds for e^x. This is achieved in the following
way:

    ∀x,ν     f_2(x) = e^x ≥ e^ν (1 + x − ν) = b_2(x)
    ∀x,μ,ν   f_1(x) = e^x ≶ e^μ + e^ν ( (1 + μ − ν)(x − μ) + ½ (x − μ)² ) = b_1(x)
    ∀x,μ,λ   f_0(x) = e^x ≥ e^μ { 1 + x − μ + e^λ ( ½ (1 − λ)(x − μ)² + ⅙ (x − μ)³ ) } = b_0(x)    (8)

with λ = ν − μ.
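As a quick numerical check of Equation 8, the sketch below evaluates the third order bound b_0 and verifies that it stays below e^x on a grid; the particular parameter values are arbitrary:

```python
import numpy as np

def b0(x, mu, lam):
    """Third order lower bound on exp(x) of Equation 8, with lam = nu - mu."""
    d = x - mu
    return np.exp(mu) * (1.0 + d + np.exp(lam) * (0.5 * (1.0 - lam) * d ** 2
                                                  + d ** 3 / 6.0))

x = np.linspace(-4.0, 4.0, 401)
assert np.all(np.exp(x) >= b0(x, mu=0.3, lam=-0.5) - 1e-12)  # holds for any mu, lam
```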
In principle, this third order bound could be maximized with respect to all the free
parameters, but here we follow the suggestion made in [4] to use a mean field optimization , which is much faster and generally almost as good as a full optimization.
For more details we refer to [4].
2.2
An upper bound
An upper bound for Boltzmann machines has been described in [5] and [6]¹. Basically, this method uses a quadratic upper bound on log cosh x, which can easily be
obtained in the following way:

    f_2(x) = 1 − tanh² x ≤ 1 = b_2(x)
    f_1(x) = tanh x ≶ x − μ + tanh μ = b_1(x)
    f_0(x) = log cosh x ≤ ½ (x − μ)² + (x − μ) tanh μ + log cosh μ = b_0(x)                (9)
Using this bound, one can derive

    Z(θ,w) = Σ_{all s} exp( ½ Σ_ij w_ij s_i s_j + Σ_i θ_i s_i )
           = Σ_{all s\s_n} 2 exp( log cosh( Σ_i w_ni s_i + θ_n ) ) exp( ½ Σ_{ij≠n} w_ij s_i s_j + Σ_{i≠n} θ_i s_i )
           ≤ Σ_{all s\s_n} exp( ½ Σ_{ij≠n} w'_ij s_i s_j + Σ_{i≠n} θ'_i s_i + k ) = e^k · Z(θ',w')    (10)

¹Note: The articles referred to use s_i ∈ {0,1} instead of the +1/−1 coding used here.
where k is a constant and θ' and w' are thresholds and weights in a reduced network
given by

    w'_ij = w_ij + w_ni w_nj
    θ'_i  = θ_i + w_ni ( θ_n − μ_n + tanh μ_n )
    k     = ½ ( θ_n − μ_n + tanh μ_n )² − ½ tanh² μ_n + log 2 cosh μ_n                     (11)
Hence, equation 10 defines a recursive relation, where each step reduces the network
by one neuron. Finally, after N steps, an upper bound on the partition function is
found². We did a crude minimization with respect to the free parameters μ. A more
sophisticated method can probably be found, but this is not the main objective of
this article.
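A sketch of one reduction step of Equations 10 and 11, assuming a symmetric weight matrix; the names are hypothetical:

```python
import numpy as np

def reduce_one_neuron(w, theta, n, mu):
    """One step of the recursion in Equations 10-11: sum out neuron n and
    return (w', theta', k) for the reduced network. A symmetric weight
    matrix w is assumed; mu is the free parameter of the quadratic bound."""
    keep = np.arange(len(theta)) != n
    w_n, c = w[n, keep], theta[n] - mu + np.tanh(mu)
    w_new = w[np.ix_(keep, keep)] + np.outer(w_n, w_n)   # w'_ij = w_ij + w_ni w_nj
    theta_new = theta[keep] + w_n * c                    # theta'_i = theta_i + w_ni c
    k = 0.5 * c ** 2 - 0.5 * np.tanh(mu) ** 2 + np.log(2.0 * np.cosh(mu))
    return w_new, theta_new, k
```

Iterating this N times, minimizing over each μ and accumulating the constants k, yields the upper bound on log Z.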
2.3
Bounding means and correlations
The previous subsections showed very briefly how we can obtain a lower bound, Z^L,
and an upper bound, Z^U, for any partition function. We can use this in combination
with equation 6 to obtain a bound on the means:

    ⟨s_n⟩^L = ( Z_+^L − Z_-^U ) / X  ≤  ⟨s_n⟩  ≤  ( Z_+^U − Z_-^L ) / Y = ⟨s_n⟩^U           (12)

where X = Z^U if the numerator is positive and X = Z^L otherwise. For Y it is the
opposite. The difference, ⟨s_n⟩^U − ⟨s_n⟩^L, is called the bandwidth.
Naively, we can compute the correlations similarly to the means using

    ⟨s_n s_m⟩ = ( Z_++ − Z_+- − Z_-+ + Z_-- ) / Z                                           (13)

where the partition function is computed for all combinations of s_n and s_m. Generally,
however, this gives poor results, since we have to add four bounds together, which
leads to a bandwidth which is about twice as large as for the means. We can
circumvent this by computing the correlations using

    ⟨s_n s_m⟩ = ( Z_{s_m = s_n} − Z_{s_m = −s_n} ) / Z                                      (14)

where we allow the sum in the partition functions to be taken over s_n, but fix s_m
either to s_n or its negative. Finally, the computation of the bounds ⟨s_n s_m⟩^L and
⟨s_n s_m⟩^U is analogous to equation 12.
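Given bounds on the clamped partition functions, Equation 12 translates directly into code; a minimal sketch:

```python
def mean_bounds(zp_lo, zp_hi, zm_lo, zm_hi, z_lo, z_hi):
    """Bounds on <s_n> from Equation 12, given lower/upper bounds on the
    clamped partition functions Z+ and Z- and on Z (a sketch). The
    denominator is chosen pessimistically from the sign of the numerator."""
    num_lo = zp_lo - zm_hi
    num_hi = zp_hi - zm_lo
    lower = num_lo / (z_hi if num_lo > 0 else z_lo)
    upper = num_hi / (z_lo if num_hi > 0 else z_hi)
    return lower, upper
```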
There exists an alternative way to bound the means and correlations. One can write

    ⟨s_n⟩ = ( Z_+ − Z_- ) / ( Z_+ + Z_- ) = ( Z_+/Z_- − 1 ) / ( Z_+/Z_- + 1 ) = ( z − 1 ) / ( z + 1 ) = f(z)    (15)

with z = Z_+/Z_-, which can be bounded by

    Z_+^L / Z_-^U  ≤  z  ≤  Z_+^U / Z_-^L                                                   (16)

Since f(z) is a monotonically increasing function of z, the bounds on ⟨s_n⟩ are given
by applying this function to the left and right side of equation 16. The correlations
can be bounded similarly. It is still unknown whether this algorithm would yield
better results than the first one, which is explored in this article.
²The original articles show that it is not necessary to do all the N steps. However,
since this is based on mixing approximation techniques with exact calculations, it is not
used here as it would hide the real error the approximation makes.
[Figure 1 plot: the exact log partition function, the mean field lower bound, the upper bound, and the third order lower bound, versus σ_w.]
Figure 1: Comparison of 1) the mean field lower bound, 2) the upper bound and
3) the third order lower bound with the exact log partition function. The network
was a fully connected Boltzmann machine with 14 neurons and σ_θ = 0.2. The size
of the weights is varied on the x-axis. Each point was averaged over 100 networks.
3
Results
In all experiments we used fully connected Boltzmann machines of which the thresholds and weights both were drawn from a Gaussian with zero mean and standard
deviation σ_θ and σ_w/√N, respectively, where N is the network size. This is the so
called SK-model (see also [8]). Generally speaking, the mean field approximation
breaks down for σ_θ = 0 and σ_w > 0.5, whereas it can be proven that any expansion
based approximation is inaccurate when σ_w > 1 (which is the radius of convergence
as in [9]). If σ_θ ≠ 0 these maximum values are somewhat larger.
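For reference, this setup can be reproduced by brute force for N = 14; a sketch with hypothetical function names:

```python
import numpy as np
from itertools import product

def sample_sk(N, sigma_theta, sigma_w, rng):
    """Thresholds and symmetric zero-diagonal weights of the SK-model setup."""
    theta = rng.normal(0.0, sigma_theta, N)
    w = np.triu(rng.normal(0.0, sigma_w / np.sqrt(N), (N, N)), 1)
    return theta, w + w.T

def log_z_exact(theta, w):
    """Brute-force log partition function; feasible for N around 14."""
    states = (np.array(c) for c in product((-1, 1), repeat=len(theta)))
    logs = np.array([0.5 * s @ w @ s + theta @ s for s in states])
    m = logs.max()
    return m + np.log(np.exp(logs - m).sum())
```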
In figure 1 we show the logarithm of the exact partition function , the first order
or mean field bound, the upper bound (which is roughly quadratic) and the third
order lower bound. The weight size is varied along the horizontal axis. One can
see clearly that the mean field bound is not able to capture the quadratic form of
the exact partition function for small weights due to its linear behaviour. The error
made by the upper and third order lower bound is small enough to make non-trivial
bounds on the means and correlations.
An example of this bound is shown in figure 2 for the specific choice σ_θ = σ_w = 0.4.
For both the means and the correlations a histogram is plotted for the upper and
lower bounds computed with equation 12. Both have an average bandwidth of
0.132, which is a clear subset of the whole possible interval of [−1, 1].
In figure 3 the average bandwidth is shown for several values of σ_θ and σ_w. For
bandwidths of 0.01, 0.1 and 1 a line is drawn. We conclude that almost everywhere
the bandwidth is non-trivially reduced and reaches practically useful values for σ_w
less than 0.5. This is more or less equivalent to the region where the mean field
approximation performs well. That approximation, however, gives no information
on how close it actually is to the exact value, whereas the bounding method limits
it to a definite region.
[Figure 2 panels: "Means" (left) and "Correlations" (right); histograms of the distance to the exact value.]
Figure 2: For the specific choice σ_θ = σ_w = 0.4 thirty fully connected Boltzmann
machines with 14 neurons were initialized and the bounds were computed. The two
left panels show the distance between the lower bound and the exact means (left)
and similarly for the upper bound (right). The right two panels show the distances
of both bounds for the correlations.
Figure 3: In the left panel the average bandwidth is colour coded for the means,
where σ_θ and σ_w are varied in ten steps along the axes. The right panel shows
the same for the correlations. For each σ_θ, σ_w thirty fully connected Boltzmann
machines were initialized and the bounds on all the means and correlations were
computed. For three specific bandwidths a line is drawn.
[Figure 4 panels: bandwidth versus network size for σ_w = 0.1, 0.3 and 0.5, each with σ_θ = 0.3.]
Figure 4: For σ_w = 0.1, 0.3 and 0.5 the bandwidth for the correlations is shown
versus the network size. σ_θ = 0.3 in all cases, but the plots are nearly the same for
other values. Please note the different scales for the y-axis. A similar graph for the
means is not shown here, but it is roughly the same. The solid line is the average
bandwidth over all correlations, whereas the dashed lines indicate the minimum and
maximum bandwidth found.
Unfortunately, the bounds have the unwanted property that the error scales badly
with the size of the network. Although this makes the bounds unsuitable for very
large networks , there is still a wide range of networks small enough to take advantage of the proposed method and still much too large to be treated exactly. The
bandwidth versus network size is shown in figure 4 for three values of σ_w. Obviously,
the threshold of practical usefulness is reached earlier for larger weights.
Finally, we remark that the computation time for the upper bound is O(N⁴) and
O(N³) for the mean field and third order lower bound. This is not shown here.
4
Conclusions
In this article we combined two already existing bounds in such a way that not only
the partition function of a Boltzmann machine is bounded from both sides , but also
the means and correlations . This may seem superfluous, since there exist already
several powerful approximation methods. Our method, however, can be used apart
from any approximation technique and gives at least some information you can rely
on. Although approximation techniques might do a good job on your data, you
can never be sure about that. The method outlined in this paper ensures that the
quantities of interest, the means and correlations, are restricted to a certain region.
We have seen that , generally speaking, the results are useful for weight sizes where
an ordinary mean field approximation performs well. This makes the method applicable to a large class of problems . Moreover, since many architectures are not
fully connected, one can take advantage of that structure. At least for the upper
bound it is shown already that this can improve computation speed and tightness .
This would partially cancel the unwanted scaling with the network size.
Finally, we would like to give some directions for further research. First of all, an
extension to sigmoid belief networks can easily be done, since both a lower and an
upper bound are already described. The upper bound, however, is only applicable to
two layer networks. A more general upper bound can probably be found. Secondly
one can obtain even better bounds (especially for larger weights) if the general
constraint (17) is taken into account. This might even be extended to similar
constraints, where three or more neurons are involved.
Acknowledgements
This research is supported by the Technology Foundation STW, applied science
division of NWO and the technology programme of the Ministry of Economic Affairs.
References
[1] C. Peterson and J. Anderson. A mean field theory learning algorithm for neural networks. Complex Systems, 1:995-1019, 1987.
[2] L.K. Saul, T.S. Jaakkola, and M.I. Jordan. Mean field theory for sigmoid belief networks. Journal of Artificial Intelligence Research, 4:61-76, 1996.
[3] Martijn A.R. Leisink and Hilbert J. Kappen. A tighter bound for graphical models. In
Todd K. Leen, Thomas G. Dietterich, and Volker Tresp, editors, Advances in Neural
Information Processing Systems 13, pages 266-272. MIT Press, 2001.
[4] Martijn A.R. Leisink and Hilbert J. Kappen. A tighter bound for graphical models.
Neural Computation, 13(9), 2001. To appear.
[5] T. Jaakkola and M.I. Jordan. Recursive algorithms for approximating probabilities in
graphical models. MIT Comp. Cogn. Science Technical Report 9604, 1996.
[6] Tommi S. Jaakkola and Michael I. Jordan. Computing upper and lower bounds on
likelihoods in intractable networks. In Proceedings of the Twelfth Annual Conference
on Uncertainty in Artificial Intelligence (UAI-96), pages 340-348, San Francisco, CA,
1996. Morgan Kaufmann Publishers.
[7] D. Ackley, G. Hinton, and T. Sejnowski. A learning algorithm for Boltzmann machines.
Cognitive Science, 9:147-169, 1985.
[8] D. Sherrington and S. Kirkpatrick. Solvable model of a spin-glass. Physical Review
Letters, 35(26):1793-1796, 1975.
[9] T. Plefka. Convergence condition of the TAP equation for the infinite-ranged Ising spin
glass model. J. Phys. A: Math. Gen., 15:1971-1978, 1981.
Grammar Transfer in a Second Order Recurrent Neural Network
Michiro Negishi
Department of Psychology
Rutgers University
101 Warren St. Smith Hall #301
Newark, NJ 07102
negishi@psychology.rutgers.edu
Stephen Jose Hanson
Psychology Department
Rutgers University
101 Warren St. Smith Hall #301
Newark, NJ 07102
jose@psychology.rutgers.edu
Abstract
It has been known that people, after being exposed to sentences
generated by an artificial grammar, acquire implicit grammatical
knowledge and are able to transfer the knowledge to inputs that are
generated by a modified grammar. We show that a second order
recurrent neural network is able to transfer grammatical knowledge
from one language (generated by a Finite State Machine) to another
language which differ both in vocabularies and syntax. Representation of the grammatical knowledge in the network is analyzed using
linear discriminant analysis.
1
Introduction
In the field of artificial grammar learning, people are known to be able to transfer
grammatical knowledge to a new language which consists of a new vocabulary [6].
Furthermore, this effect persists even when the new strings violate the syntactic
rule slightly as long as they are similar to the old strings [1]. It has been shown in
the past studies that recurrent neural networks also have the ability to generalize
previously acquired knowledge to novel inputs. For instance, Dienes et al. ([2])
showed that a neural network can generalize abstract knowledge acquired in one
domain to a new domain. They trained the network to predict the next input
symbol in grammatical sequences in the first domain, and showed that the network
was able to learn to predict grammatical sequences in the second domain more
effectively than it would have learned them without the prior learning. During
the training in the second domain, they had to freeze the weights of a part of the
network to prevent catastrophic forgetting. They used this simulation paradigm to
emulate and analyze domain transfer, effect of similarity between training and test
sequences, and the effect of n-gram information in human data. Hanson et al. ([5])
also showed that a prior learning of a grammar facilitates the learning of a new
grammar in the cases where either the syntax or the vocabulary was kept constant.
In this study we investigate grammar transfer by a neural network, where both syntax and vocabularies are different from the source grammar to the target grammar.
Unlike Dienes et al.'s network, all weights in the network are allowed to change dur-
ing the learning of the target grammar, which allows us to investigate interference
as well as transfer from the source grammar to the target grammar.
2
2.1
Simulation Design
The Grammar Transfer Task
In the following simulations, a neural network is trained with sentences that are
generated by a Finite State Machine (FSM) and is tested whether the learning of
sentences generated by another FSM is facilitated. Four pairs of FSMs used for the
grammar transfer task are shown in Fig. 2. In each FSM diagram, symbols (e.g. A,
B, C, ... ) denote words, numbers represent states, a state number with an incoming
arrow with no state numbers at the arrow foot (e.g. state 1 in the left FSM in
Fig. 2A) signifies the initial state, and numbers in circles (e.g. state 3 in the left
FSM in Fig. 2A) signify the accepting states. In each pair of diagrams, transfer
was tested in both directions: from the left FSM to the right FSM, and to the
opposite direction. Words in a sentence are generated by an FSM and presented to
the network one word at a time. At each time, the next word is selected randomly
from next possible words (or end of sentence where possible) at the current FSM
state with the equal probability, and the FSM state is updated to the next state.
The sentence length is limited to 20 words , excluding START.
The task for the network is to predict the correct termination of sentences. If the
network is to predict that the sentence ends with the current input, the activity
of the output node of the network has to be above a threshold value, otherwise
the output has to be below another threshold value. Note that if a FSM is at
an accepting state but can further transit to another state, the sentence may or
may not end. Therefore, the prediction may succeed or fail. However, the network
will eventually learn to yield higher values when the FSM is at an accepting state
than when it is not. After the network learns each training sentence, it is tested
with randomly generated 1000 sentences and the training session is completed only
when the network makes correct end point judgments for all sentences. Then the
network is trained with sentences generated by another FSM. The extent of transfer
is measured by the reduction of the number of sentences required to train the
network on an FSM after a prior learning of another FSM, compared to the number
of sentences required to train the network on the current FSM from scratch.
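A sketch of this sentence generator, assuming a hypothetical FSM encoding as a transition table:

```python
import random

# Hypothetical FSM encoding: fsm[state] is a list of (word, next_state)
# transitions; `accepting` is the set of accepting states.
def generate_sentence(fsm, start_state, accepting, max_len=20, rng=random):
    state, words = start_state, []
    while len(words) < max_len:
        options = list(fsm[state])
        if state in accepting:
            options.append(None)        # "end of sentence" is one equal option
        choice = rng.choice(options)
        if choice is None:
            return words                # sentence terminates at an accepting state
        word, state = choice
        words.append(word)
    return words                        # length capped at 20 words (START excluded)
```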
2.2
The Network Architecture and the Learning Algorithm
The network is a second order recurrent neural network, with an added hidden layer
that receives first order connections from the input layer (Fig. 1). The network has
an input layer with seven nodes (A, B, C, ... F, and START), an output layer
with one node, an input hidden layer with four nodes, a state hidden layer with
four nodes, and a feedback layer with four nodes. Recurrent neural networks are
often used for modeling syntactic processing [3]. Second order networks are suited
for processing languages generated by FSMs [4] . Learning is carried out by the
weight update rule for recurrent networks developed by Williams and Zipser ([7]),
extended to second order connections ([4]) where necessary. The learning rate and
the momentum are 0.2 and 0.8, respectively. High and low thresholds are initialized
to 0.20 and 0.17 respectively and are adapted after the network has processed the
test sentences as follows. The high threshold is modified to the minimum value
yielded for all end points in the test sentences minus a margin (0.01). The low
threshold is modified to the high threshold minus another margin (0.02). These
thresholds are used in the next training and test.
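A minimal sketch of the forward pass and of the threshold adaptation follows; the exact wiring, biases, and weight initialization are not specified in the text, so those details (and all names) are assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class SecondOrderNet:
    """Forward pass of the network in Figure 1 (a sketch). The state
    hidden layer receives second-order (product) connections from the
    feedback layer and the input hidden layer."""

    def __init__(self, n_in=7, n_hid=4, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(0.0, 0.5, (n_hid, n_in))         # input -> input hidden
        self.W2 = rng.normal(0.0, 0.5, (n_hid, n_hid, n_hid))   # second order weights
        self.W_out = rng.normal(0.0, 0.5, n_hid)                # state hidden -> output
        self.state = np.full(n_hid, 0.5)                        # feedback layer content

    def step(self, x):
        h_in = sigmoid(self.W_in @ x)                           # first order layer
        # second order: products of feedback and input hidden activities
        self.state = sigmoid(np.einsum('kij,i,j->k', self.W2, self.state, h_in))
        return sigmoid(self.W_out @ self.state)                 # end-of-sentence output

def adapt_thresholds(end_point_outputs, margin_high=0.01, margin_low=0.02):
    """Threshold adaptation described above: high threshold = minimum output
    over all true end points minus a margin; low threshold sits another
    margin below."""
    high = min(end_point_outputs) - margin_high
    return high, high - margin_low
```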
[Figure 1 diagram: Input Layer, Input Hidden Layer, Feedback Layer, State Hidden Layer, and Output Layer with their connections.]
Figure 1: A second order recurrent network used in simulations. The network
consists of an input layer that receives words, an output layer that predicts sentence
ends, two hidden layers (an input hidden layer and a state hidden layer) , and a
feedback layer that receives a copy of the state hidden layer activities.
3
3.1
The Simulation Results
The Transfer Effects
Numbers of required trainings and changes in number of trainings averaged over 20
networks with different initial weights are shown in Fig. 2. Numbers in parentheses
are standard errors of number of trainings. Changes are shown with either a "+"
sign (increase) or a "-" sign (reduction). For instance, Fig. 2A shows that it
required 14559 sentence presentations for the network to learn the left FSM after the
network was trained on the right FSM. On the other hand, it required 20995 sentence
presentations for the network to learn the left FSM from scratch. Therefore
there was a 30.7% reduction in the transfer direction from right to left. Note that
the network was trained only once on sentences from the source grammar to the
criteria and then only once on the sentences from the target grammar. Thus after
the completion of the target grammar learning, the knowledge about the source
grammar is disrupted to some extent. To show that the network eventually learns
both grammars , number of required training was examined for more than one cycle.
After ten cycles, number of required trainings was reduced to 0.13% (not shown).
3.2
Representation of Grammatical Knowledge
To analyze the representation of grammatical knowledge in the network, Linear
Discriminant Analysis (LDA) was applied to hidden layer activities. LDA is a
technique which finds sets of coefficients that defines a linear combination of input
variables that can be used to discriminate among sets of input data that belong
to different categories . Linear combinations of hidden layer node activities using
these coefficients provide low-dimensional views of hidden layer activities that best
separate specified categories (e.g. grammatical functions). In this respect, LDA is
similar to Principal Component Analysis (PCA) except that PCA finds dimensions
along which the data have large variances, whereas LDA finds dimensions which
differentiate the specified categories.
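In practice such a projection could be obtained with an off-the-shelf LDA implementation, e.g. as in the following sketch; the arrays H and fsm_states are hypothetical stand-ins for data logged from the trained network:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def project_by_fsm_state(H, fsm_states):
    """Project state hidden layer activities onto the two linear
    discriminants that best separate the three FSM states. H is an
    (n_samples, 4) array of logged activities and fsm_states holds the
    FSM state (1, 2 or 3) at each step."""
    lda = LinearDiscriminantAnalysis(n_components=2)
    return lda.fit_transform(H, fsm_states)
```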
[Figure 2 diagrams: the four FSM pairs (A-D), each annotated with the required number of trainings (standard errors in parentheses) and the percentage change due to transfer in each direction; see the text for the individual values.]
Figure 2: Initial savings observed in various grammar transfer tasks. Numbers are
the required number of trainings averaged over 20 networks with different initial weights.
Numbers in parentheses are standard errors. Numbers shown with "%" are the change
in the number of trainings due to transfer. A negative change means reduction (positive
transfer) and a positive change means increase (negative transfer, or interference).
[Figure 3 scatter plot: state hidden layer activities projected onto linear discriminants 1 and 2.]
Figure 3: State space organization for a grammar transfer task (a case of Fig. 2B).
State space activities corresponding to FSM states 1, 2, and 3 are plotted with
squares, diamonds, and circles, respectively. State space activities that belong to
the target FSM have dots in the plots, whereas those that belong to the source FSM
do not have fill patterns.
[Figure 4 plot: trajectories over the state space of Figure 3, with the loop regions and the hypothetical shared boundary annotated; axes are linear discriminants 1 and 2.]
Figure 4: Trajectories corresponding to loops in Fig. 2B in the state hidden layer
state space. The broken line corresponds to a hypothetical shared discrimination
boundary described in Section 4. It lies between white diamonds and white circles
(i.e. states 2 and 3 in the source grammar), and it can also be one of
the discrimination boundaries between diamonds with dots and squares with dots
(i.e. states 2 and 1 in the target grammar). The triangular shape shows the three
FSM state trajectory corresponding to inputs BCCBCC.... Ellipses show the state
space activities involved in one-state loops (at states 1 and 3) and two-state
4
Discussion
In the first grammar transfer task (Fig. 2A), only the initial and the accepting
states in the FSMs were different, so the frequency distributions of subsequences
of words were very similar except for short sentences. In this case, a 31% saving
was observed in one transfer direction although there was little change in required
training in the other direction. In the second grammar transfer task, directions of all
arcs in the FSMs were reversed. Therefore the mirror images of sentences accepted
in one grammar were accepted in the other grammar. Although the grammars
were very different, there were significant amount of overlaps in the permissible
short subsequences. In this case, there were 31% and 41 % savings in training. In
the third and fourth grammar transfer tasks, the source and the target grammars
shared less subsequences. In the third case (Fig. 2C) for instance, the subsequences
were very different because the source grammar had two one-state loops (at states
1 and 3) with the same word A, whereas two one-state loops in the target grammar
consisted of different words (D and E). In this case, there was little change in the
number of learnings required in one transfer direction but there was 67% increase
in the other direction. In the fourth case (Fig 2. D), there was 26% reduction in
one direction but there was 12% increase in the other direction in the number of
learnings required. From these observations we hypothesize that , as in the case
of syntax transfer ([5]) , if the acquired grammar allows frequent subsequence of
words that appears in the target grammar (after the equivalent symbol sets are
substituted) the transfer is easier and thus there are more savings.
What is the source of savings in grammar transfer? It is tempting to say that, as
in the vocabulary transfer task ([5]), the source of savings is the organization of the
state hidden layer activity which directly reflects the FSM states. Fig. 3 shows the
state space organization after the grammar transfer shown in Fig. 2B. Fig. 4 shows
the change in the state hidden layer activities drawn over the state space organization. The triangular lines are the trajectories as the network receives BCCBCC,
which creates the 3-state loops (231)(231) in the FSM. Regions of trajectories corresponding to the 2-state loop (23) and two I-state loops (1) and (3) are also shown
in Fig. 4, although the trajectory lines are not shown to avoid a cluttered figure.
It can be seen that state space activities that belong to different FSM state loops
tend to be distinct even when they belong to the same FSM state, although there
seem to be some tendencies that they are allocated in vicinities. Unlike in the vocabulary transfer, regions belonging to different FSM loops tend to be interspersed
by regions that belong to the other grammar, causing state space structure to be
more fragmented. Furthermore, we found that there was no significant correlation
between the correct rate of the linear discrimination with respect to FSM states
(which reflects the extent to which the state space organization reflects the FSM
states) and savings (not shown).
One could reasonably argue that the saving is not due to transfer of grammatical
knowledge but is due to some more low-level processing specific to neural networks.
For instance, the network may have to move weight values to an appropriate range
at the first stage of the source grammar learning, which might become unnecessary
for the leaning of the target grammar. We conducted a simulation to examine the
effect of altering the initial random weights using the source and target grammars.
The space limitation does not permit us to present the details, but we did not
observe the effect of initializing the bias and the weights to appropriate ranges.
If neither the state space organization nor the lower-level statistics was the
source of savings, what was transferred? As already mentioned, the state space organization observed in the grammar transfer task is more fragmented than that observed
in vocabulary transfer task (Fig. 3). These fragmented regions have to be discriminated as far as each region (which represents a combination of the current
network state and the current vocabulary) has to yield a different network state.
State hidden nodes provide clues for the discrimination by placing boundaries in the
network state space. Boundary lines collectively define regions in the state space
which correspond to sets of state-vocabulary combinations that should be treated
equivalently in terms of the given task. These boundaries can be shared: for instance, a hypothetical boundary shown by a broken line in the Fig. 4 can be the
discrimination boundary between white diamonds and white circles (i. e. states 2
and 3 in the source grammar), as well as it can be one of the discrimination boundaries between diamonds with dots and squares with dots (i. e. states 2 and 1 in
the target grammar). We speculate that shared boundaries may be the source of
savings. That is, boundaries created for the source grammar learning can be used,
possibly with some modifications, as one of the boundaries for the target grammar.
In other words, the source of savings may not be as high level as FSM state space
but some lower level features at the syntactic processing level.
5
Conclusion
We investigated the ability of a recurrent neural network to transfer grammatical
knowledge of a previously acquired language to another. We found that the network
was able to transfer the grammatical knowledge to a new grammar with a slightly
different syntax defined over a new vocabulary (grammar transfer). The extent of
transfer seemed to depend on the subsequences of symbols generated by the two
grammars, after the equivalence sets are translated, although the results presented
in this paper are admittedly very restricted in the type of syntax covered and the size
of syntactic rules and vocabularies. We hypothesize that the ability of the network
to transfer grammatical knowledge comes from sharing discrimination boundaries
of input and vocabulary combinations. In sum, we hope to have demonstrated that
neural networks do not simply learn associations among input symbols but they
acquire structural knowledge from inputs.
References
[1] Brooks, L. R., and Vokey, J. R. (1991) Abstract analogies and abstracted grammars:
Comments on Reber (1989) and Mathews et al. (1990). Journal of Experimental Psychology: General, 120, 316-323.
[2] Dienes, Z., Altmann, G., and Gao, S-J. (1999) Mapping across domains without feedback: A neural network model of transfer of implicit knowledge, Cognitive Science 23,
53-82.
[3] Elman, J. L. (1991) Distributed representations, simple recurrent networks, and
grammatical structure. Machine Learning, 7, 195-225.
[4] Giles, C. L., Miller, C. B., Chen, D., Chen, H. H., Sun, G. Z., and Lee, Y. C. (1992)
Learning and Extracting Finite State Automata with Second-Order Recurrent Neural Networks, Neural Computation, 4, 393-405.
[5] Hanson, S. J., and Negishi, M. (2001) The emergence of explicit knowledge (symbols &
rules) in (associationist) neural networks, Submitted.
[6] Reber, A. (1969) Transfer of syntactic structure in synthetic languages. Journal of
Experimental Psychology, 81 , 115-119.
[7] Williams, R . J. and Zipser, D. (1989) A learning algorithm for continually running fully
recurrent neural networks, Neural Computation, 1 (2) , 270.
1,064 | 1,972 | Self-regulation Mechanism of Temporally
Asymmetric Hebbian Plasticity
Narihisa Matsumoto
Graduate School of Science and Engineering
Saitama University:
RIKEN Brain Science Institute
Saitama 351-0198, Japan
xmatumo@brain.riken.go.jp
Masato Okada
RIKEN Brain Science Institute
Saitama 351-0198, Japan
okada@brain.riken.go.jp
Abstract
Recent biological experimental findings have shown that the synaptic plasticity depends on the relative timing of the pre- and postsynaptic spikes which determines whether Long Term Potentiation
(LTP) occurs or Long Term Depression (LTD) does. The synaptic
plasticity has been called ?Temporally Asymmetric Hebbian plasticity (TAH)?. Many authors have numerically shown that spatiotemporal patterns can be stored in neural networks. However, the
mathematical mechanism for storage of the spatio-temporal patterns is still unknown, especially the effects of LTD. In this paper,
we employ a simple neural network model and show that interference of LTP and LTD disappears in a sparse coding scheme.
On the other hand, it is known that the covariance learning is indispensable for storing sparse patterns. We also show that TAH
qualitatively has the same effect as the covariance learning when
spatio-temporal patterns are embedded in the network.
1 Introduction
Recent biological experimental findings have indicated that the synaptic plasticity
depends on the relative timing of the pre- and post- synaptic spikes which determines whether Long Term Potentiation (LTP) occurs or Long Term Depression
(LTD) does [1, 2, 3]. LTP occurs when a presynaptic firing precedes a postsynaptic
one by no more than about 20ms. In contrast, LTD occurs when a presynaptic
firing follows a postsynaptic one. A rapid transition occurs between LTP and LTD
within a time difference of a few ms. Such a learning rule is called "Temporally Asymmetric Hebbian learning (TAH)" [4, 5] or "Spike Timing Dependent synaptic Plasticity (STDP)" [6]. Many authors have numerically shown that spatio-temporal
patterns can be stored in neural networks [6, 7, 8, 9, 10, 11]. Song et al. discussed
the variability of spike generation in a network consisting of spiking neurons using TAH [6]. They found that the condition that the area of LTD was slightly larger than that of LTP was indispensable for stability. Namely, the balance of
LTP and LTD is crucial. Yoshioka also discussed the associative memory network
consisting of spiking neurons using TAH [11]. He found that the area of LTP was
needed to be equal to that of LTD for stable retrieval. Munro and Hernandez numerically showed that a network can retrieve spatio-temporal patterns even in a
noisy environment owing to LTD [9]. However, they did not discuss the reason why
TAH was effective in terms of the storage and retrieval of the spatio-temporal patterns. Since TAH has not only the effect of LTP but that of LTD, the interference
of LTP and LTD may prevent retrieval of the patterns. To investigate this unknown
mathematical mechanism for retrieval, we employ an associative memory network
consisting of binary neurons. Simplifying the dynamics of the internal potential enables us to analyze the details of the retrieval process. We use a learning rule with a formulation similar to those of the previous works. We show the mechanism by which the
spatio-temporal patterns can be retrieved in this network.
There are many works concerned with associative memory networks that store
spatio-temporal patterns by the covariance learning [12, 13]. Many biological findings imply that sparse coding schemes may be used in the brain [14]. It is well-known that the covariance learning is indispensable when the sparse patterns are
embedded in a network as attractors [15, 16]. The information on the firing rate
for the stored patterns is not indispensable for TAH, although it is indispensable
for the covariance learning. We theoretically show that TAH qualitatively has the
same effect as the covariance learning when the spatio-temporal patterns are embedded in the network. This means that the difference in spike times induces LTP
or LTD, and the effect of the firing rate information can be canceled out by this
spike time difference. We conclude that this is the reason why TAH doesn?t require
the information on the firing rate for the stored patterns.
2 Model
We investigate a network consisting of N binary neurons that are connected mutually. In this paper, we consider the case of $N \to \infty$. We use a neuronal model with
binary state, {0, 1}. We also use discrete time steps and the following synchronous
updating rule,
$$u_i(t) = \sum_{j=1}^{N} J_{ij}\, x_j(t), \qquad (1)$$
$$x_i(t+1) = \Theta(u_i(t) - \theta), \qquad (2)$$
$$\Theta(u) = \begin{cases} 1, & u \ge 0 \\ 0, & u < 0, \end{cases} \qquad (3)$$
where $x_i(t)$ is the state of the i-th neuron at time t, $u_i(t)$ its internal potential, and $\theta$ a uniform threshold. If the i-th neuron fires at time t, its state is $x_i(t) = 1$; otherwise, $x_i(t) = 0$. The specific value of the threshold is discussed later. $J_{ij}$ is the synaptic weight from the j-th neuron to the i-th neuron. Each element $\xi_i^\mu$ of the $\mu$-th memory pattern $\boldsymbol{\xi}^\mu = (\xi_1^\mu, \xi_2^\mu, \cdots, \xi_N^\mu)$ is generated independently by,
$$\mathrm{Prob}[\xi_i^\mu = 1] = 1 - \mathrm{Prob}[\xi_i^\mu = 0] = f. \qquad (4)$$
The expectation of $\xi_i^\mu$ is $E[\xi_i^\mu] = f$, and thus, f can be considered as the mean firing rate of the memory pattern. The memory pattern is "sparse" when $f \to 0$, and this coding scheme is called "sparse coding". The synaptic weight $J_{ij}$ follows the synaptic plasticity that depends on the difference in spike times between the i-th (post-) and j-th (pre-) neurons. The difference determines whether LTP occurs or LTD does. Such a learning rule is called "Temporally Asymmetric Hebbian learning (TAH)" or "Spike Timing Dependent synaptic Plasticity (STDP)". This biological
experimental finding indicates that LTP or LTD is induced when the difference in the pre- and post-synaptic spike times falls within about 20ms [3] (Figure 1(a)). We define that one time step in equations (1)-(3) corresponds to 20ms in Figure 1(a), and a time duration within 20ms is ignored (Figure 1(b)). Figure 1(b) shows that LTP occurs when the j-th neuron fires one time step before the i-th neuron does, $\xi_i^{\mu+1} = \xi_j^\mu = 1$, and that LTD occurs when the j-th neuron fires one time step after the i-th neuron does, $\xi_i^{\mu-1} = \xi_j^\mu = 1$. The previous work indicates the balance of LTP and LTD is significant [6]. Therefore, we define that the area of LTP is the same as that of LTD, and that the amplitude of LTP is also the same as that of LTD.

[Figure 1: (a) change in EPSP amplitude (%) as a function of $t_{pre} - t_{post}$ (ms), showing LTP and LTD windows; (b) $\Delta J_{ij}$ as a function of $t_j - t_i$ in the model.]

Figure 1: Temporally Asymmetric Hebbian plasticity. (a): The result of a biological finding [3] and (b): the learning rule in our model. LTP occurs when the j-th neuron fires one time step before the i-th one. On the contrary, LTD occurs when the j-th neuron fires one time step after the i-th one. Synaptic weight $J_{ij}$ follows this rule.

On the basis of these definitions, we employ the following learning rule,
$$J_{ij} = \frac{1}{Nf(1-f)} \sum_{\mu=1}^{p} \left(\xi_i^{\mu+1}\xi_j^\mu - \xi_i^{\mu-1}\xi_j^\mu\right). \qquad (5)$$
The number of memory patterns is $p = \alpha N$ where $\alpha$ is defined as the "loading rate". There is a critical value $\alpha_C$ of loading rate. If the loading rate is larger than $\alpha_C$, the pattern sequence becomes unstable. $\alpha_C$ is called the "storage capacity". The previous works have shown that the learning method of equation (5) can store spatio-temporal patterns, that is, pattern sequences [9, 10]. We show that p memory patterns are retrieved periodically like $\boldsymbol{\xi}^1 \to \boldsymbol{\xi}^2 \to \cdots \to \boldsymbol{\xi}^p \to \boldsymbol{\xi}^1 \to \cdots$. In other words, $\boldsymbol{\xi}^1$ is retrieved at $t = 1$, $\boldsymbol{\xi}^2$ at $t = 2$, and $\boldsymbol{\xi}^1$ at $t = p + 1$.
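As a concrete illustration of equation (5), the weight matrix can be built directly from a stored pattern sequence. The following NumPy sketch is ours, not the authors' code; it assumes cyclic pattern indices (so that $\boldsymbol{\xi}^{p+1} = \boldsymbol{\xi}^1$, matching the periodic retrieval above) and, as an extra assumption, zeroes the self-couplings.

import numpy as np

def tah_weights(xi, f):
    """Synaptic weights of eq. (5) for a cyclic pattern sequence.

    xi : (p, N) binary array; xi[mu] is the (mu+1)-th pattern in the
         paper's 1-based notation, and indices mu+1, mu-1 wrap around.
    f  : mean firing rate used to generate the patterns.
    """
    p, N = xi.shape
    xi_next = np.roll(xi, -1, axis=0)   # xi^{mu+1}
    xi_prev = np.roll(xi, +1, axis=0)   # xi^{mu-1}
    # LTP term xi_i^{mu+1} xi_j^{mu} minus LTD term xi_i^{mu-1} xi_j^{mu}
    J = (xi_next - xi_prev).T @ xi / (N * f * (1.0 - f))
    np.fill_diagonal(J, 0.0)            # no self-coupling (assumption)
    return J

rng = np.random.default_rng(0)
f, N, p = 0.1, 1000, 20
xi = (rng.random((p, N)) < f).astype(float)  # patterns drawn as in eq. (4)
J = tah_weights(xi, f)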
Here, we discuss the value of threshold $\theta$. It is well-known that the threshold value should be controlled time-dependently according to the progress of the retrieval process [15, 16]. One candidate algorithm for controlling the threshold value is to maintain the mean firing rate of the network at that of the memory pattern, f, as follows,
$$f = \frac{1}{N}\sum_{i=1}^{N} x_i(t) = \frac{1}{N}\sum_{i=1}^{N} \Theta(u_i(t) - \theta(t)). \qquad (6)$$
It is known that the obtained threshold value is nearly optimal, since it approximately gives a maximal storage capacity value [16].
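In a simulation, the threshold control of equation (6) can be realized by choosing $\theta(t)$ as the $(1-f)$ quantile of the internal potentials, so that a fraction f of the units fires at each step. This discretization is our own choice, not a rule stated in the paper:

import numpy as np

def update_with_threshold_control(J, x, f):
    """One synchronous update, eqs. (1)-(3), with theta(t) picked so
    that the mean firing rate of the network stays near f, eq. (6)."""
    u = J @ x
    theta = np.quantile(u, 1.0 - f)   # leaves ~f*N potentials above threshold
    return (u >= theta).astype(float)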
3 Theory
Many neural network models that store and retrieve sequential patterns by TAH
have been discussed by many authors [7, 8, 9, 10]. They have numerically shown that
TAH is effective for storing pattern sequences. For example, Munro and Hernandez
showed that their model could retrieve a stored pattern sequence even in a noisy
environment [9]. However, the previous works have not mentioned the reason why
TAH is effective. Exploring such a mechanism is the main purpose of our paper.
Here, we discuss the mechanism that the network learned by TAH can store and
retrieve sequential patterns. Before providing details of the retrieval process, we
discuss a simple situation where the number of memory patterns is very small
relative to the number of neurons, i.e., $p \sim O(1)$. Let the state at time t be the same as the t-th memory pattern: $\mathbf{x}(t) = \boldsymbol{\xi}^t$. Then, the internal potential $u_i(t)$ of the equation (1) is given by,
$$u_i(t) = \xi_i^{t+1} - \xi_i^{t-1}. \qquad (7)$$
$u_i(t)$ depends on two independent random variables, $\xi_i^{t+1}$ and $\xi_i^{t-1}$, according to the equation (4). The first term $\xi_i^{t+1}$ of the equation (7) is a signal term for the recall of the pattern $\boldsymbol{\xi}^{t+1}$, which is designed to be retrieved at time t+1, and the second term $\xi_i^{t-1}$ can interfere in retrieval of $\boldsymbol{\xi}^{t+1}$. According to the equation (7), $u_i(t)$ takes a value of 0, $-1$ or $+1$. $\xi_i^{t-1} = 1$ means that the interference of LTD exists. If the threshold $\theta(t)$ is set between 0 and $+1$, $\xi_i^{t+1} = 0$ isn't influenced by the interference of $\xi_i^{t-1} = 1$. When $\xi_i^{t+1} = 1$ and $\xi_i^{t-1} = 1$, the interference does influence the retrieval of $\boldsymbol{\xi}^{t+1}$. We consider the probability distribution of the internal potential $u_i(t)$ to examine how the interference of LTD influences the retrieval of $\boldsymbol{\xi}^{t+1}$. The probability of $\xi_i^{t+1} = 1$ and $\xi_i^{t-1} = 1$ is $f^2$, that of $\xi_i^{t+1} = 1$ and $\xi_i^{t-1} = 0$ is $f - f^2$, that of $\xi_i^{t+1} = 0$ and $\xi_i^{t-1} = 1$ is $f - f^2$, and that of $\xi_i^{t+1} = 0$ and $\xi_i^{t-1} = 0$ is $(1-f)^2$. Then the probability distribution of $u_i(t)$ is given by this equation
$$\mathrm{Prob}(u_i(t)) = (f - f^2)\,\delta(u_i(t) - 1) + (1 - 2f + 2f^2)\,\delta(u_i(t)) + (f - f^2)\,\delta(u_i(t) + 1). \qquad (8)$$
Since the threshold $\theta(t)$ is set between 0 and $+1$, the state $x_i(t+1)$ is 1 with probability $f - f^2$ and 0 with $1 - f + f^2$. The overlap between the state $\mathbf{x}(t+1)$ and the memory pattern $\boldsymbol{\xi}^{t+1}$ is given by,
$$m^{t+1}(t+1) = \frac{1}{Nf(1-f)}\sum_{i=1}^{N} (\xi_i^{t+1} - f)\, x_i(t+1) = 1 - f. \qquad (9)$$
In a sparse limit, $f \to 0$, the probability of $\xi_i^{t+1} = 1$ and $\xi_i^{t-1} = 1$ approaches 0. This means that the interference of LTD disappears in a sparse limit, and the model can retrieve the next pattern $\boldsymbol{\xi}^{t+1}$. Then the overlap $m^{t+1}(t+1)$ approaches 1.
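The finite-loading argument of equations (7)-(9) can be checked numerically with the sketches above: starting from $\mathbf{x}(t) = \boldsymbol{\xi}^1$ and a fixed threshold between 0 and 1, the overlap after one step comes out close to $1 - f$. The snippet below reuses xi, J, N and f from the earlier sketch:

# One retrieval step at finite loading (p of order 1), fixed theta = 0.5
x = xi[0].copy()                      # start on the first pattern
u = J @ x
x_next = (u >= 0.5).astype(float)
target = xi[1]                        # pattern scheduled for the next step
m = (target - f) @ x_next / (N * f * (1 - f))   # overlap, eq. (9)
print(m)                              # close to 1 - f for small f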
Next, we discuss whether the information on the firing rate is indispensable for TAH or not. To investigate this, we consider the case that the number of memory patterns is extensively large, i.e., $p \sim O(N)$. Using the equation (9), the internal potential $u_i(t)$ of the i-th neuron at time t is represented as,
$$u_i(t) = (\xi_i^{t+1} - \xi_i^{t-1})\, m^t(t) + z_i(t), \qquad (10)$$
$$z_i(t) = \sum_{\mu \ne t}^{p} (\xi_i^{\mu+1} - \xi_i^{\mu-1})\, m^\mu(t). \qquad (11)$$
$z_i(t)$ is called the "cross-talk noise", which represents contributions from non-target patterns excluding $\boldsymbol{\xi}^{t-1}$ and prevents the target pattern $\boldsymbol{\xi}^{t+1}$ from being retrieved. This disappeared in the finite loading case, $p \sim O(1)$.
It is well-known that the covariance learning is indispensable when the sparse patterns are embedded in a network as attractors [15, 16]. Under sparse coding schemes,
unless the covariance learning is employed, the cross-talk noise does diverge in the
large N limit. Consequently, the patterns can not be stored. The information on
the firing rate for the stored patterns is not indispensable for TAH, although it is
indispensable for the covariance learning. We use the method of the "statistical neurodynamics" [17, 18] to examine whether the variance of cross-talk noise diverges or not. If a pattern sequence can be stored, the cross-talk noise obeys a Gaussian distribution with mean 0 and time-dependent variance $\sigma^2(t)$. Otherwise, $\sigma^2(t)$ diverges. Since $\sigma^2(t)$ is changing over time, it is necessary to control the threshold at an appropriate value at each time step [15, 16]. According to the statistical neurodynamics, we obtain the recursive equations for the overlap $m^t(t)$ between the network state $\mathbf{x}(t)$ and the target pattern $\boldsymbol{\xi}^t$ and the variance $\sigma^2(t)$. The details of the derivation will be shown elsewhere. Here, we show the recursive
equations for $m^t(t)$ and $\sigma^2(t)$,
$$m^t(t) = \frac{1-2f}{2}\,\mathrm{erf}(\phi_0) - \frac{1-f}{2}\,\mathrm{erf}(\phi_1) + \frac{f}{2}\,\mathrm{erf}(\phi_2), \qquad (12)$$
$$\sigma^2(t) = \sum_{a=0}^{t} {}_{2(a+1)}C_{(a+1)}\, \alpha\, q(t-a) \prod_{b=1}^{a} U^2(t-b+1), \qquad (13)$$
$$U(t) = \frac{1}{\sqrt{2\pi}\,\sigma(t-1)}\left\{(1-2f+2f^2)\,e^{-\phi_0^2} + f(1-f)\left(e^{-\phi_1^2} + e^{-\phi_2^2}\right)\right\}, \qquad (14)$$
$$q(t) = \frac{1}{2}\left\{1 - (1-2f+2f^2)\,\mathrm{erf}(\phi_0) - f(1-f)\left(\mathrm{erf}(\phi_1) + \mathrm{erf}(\phi_2)\right)\right\}, \qquad (15)$$
$$\mathrm{erf}(y) = \frac{2}{\sqrt{\pi}}\int_0^y \exp(-u^2)\,du, \qquad {}_bC_a = \frac{b!}{a!\,(b-a)!}, \qquad a! = a\cdot(a-1)\cdots 1,$$
$$\phi_0 = \frac{\theta(t-1)}{\sqrt{2}\,\sigma(t-1)}, \qquad \phi_1 = \frac{-m^{t-1}(t-1) + \theta(t-1)}{\sqrt{2}\,\sigma(t-1)}, \qquad \phi_2 = \frac{m^{t-1}(t-1) + \theta(t-1)}{\sqrt{2}\,\sigma(t-1)}.$$
These equations reveal that the variance $\sigma^2(t)$ of cross-talk noise does not diverge
as long as a pattern sequence can be retrieved. This result means that TAH qualitatively has the same effect as the covariance learning.
Next, we discuss the mechanism that the variance of cross-talk noise does not diverge. Let us consider the equation (5). Synaptic weight Jij from j-th neuron to
i-th neuron is also derived as follows,
$$J_{ij} = \frac{1}{Nf(1-f)}\sum_{\mu=1}^{p} \left(\xi_i^{\mu+1}\xi_j^\mu - \xi_i^{\mu-1}\xi_j^\mu\right) = \frac{1}{Nf(1-f)}\sum_{\mu=1}^{p} \left(\xi_i^\mu \xi_j^{\mu-1} - \xi_i^\mu \xi_j^{\mu+1}\right)$$
$$= \frac{1}{Nf(1-f)}\sum_{\mu=1}^{p} \xi_i^\mu \left\{(\xi_j^{\mu-1} - f) - (\xi_j^{\mu+1} - f)\right\} \qquad (16)$$
This equation implies that TAH has the information on the firing rate of the memory
patterns when spatio-temporal patterns are embedded in a network. Therefore,
the variance of cross-talk noise doesn't diverge, and this is another factor for the network learned by TAH to store and retrieve a pattern sequence. We conclude that the difference in spike times induces LTP or LTD, and the effect of the firing rate information can be canceled out by this spike time difference.
4 Results
We investigate the property of our model and examine the following two conditions:
a fixed threshold and a time-dependent threshold, using the statistical neurodynamics and computer simulations.
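The computer simulations can be reproduced along the following lines, combining the earlier sketches; the run length and starting condition here are our own choices:

# Retrieval simulation: start from x(1) = xi^1 and track, at each step,
# the overlap with the pattern scheduled for that step.
overlaps = []
x = xi[0].copy()
for t in range(1, 3 * p):
    x = update_with_threshold_control(J, x, f)
    target = xi[t % p]                            # cyclic schedule
    overlaps.append((target - f) @ x / (N * f * (1 - f)))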
[Figure 2: overlap (solid) and activity/f (dashed) as functions of the loading rate, 0 to 0.3.]

Figure 2: The critical overlap (the lower line) and the overlap at the stationary state (the upper line). The dashed line shows the mean firing rate of the network divided by the firing rate, which is 0.1. The threshold is 0.52 and the number of neurons is 5000. The data points and error bars show the means and variances, respectively, in 10 trials of computer simulations. The storage capacity is 0.27.

Figure 2 shows how the overlap $m^t(t)$ and the mean firing rate of the network, $\bar{x}(t) = \frac{1}{N}\sum_i x_i(t)$, depend on the loading rate $\alpha$ when the mean firing rate of the memory pattern is $f = 0.1$ and the threshold is $\theta = 0.52$, where the storage capacity is maximum with respect to the threshold $\theta$. The stored pattern sequence can be retrieved when the initial overlap $m^1(1)$ is greater than the critical value $m_C$. The lower line indicates how the critical initial overlap $m_C$ depends on the loading rate $\alpha$. In other words, the lower line represents the basin of attraction for the retrieved sequence. The upper line denotes a steady value of overlap $m^t(t)$ when the pattern sequence is retrieved. $m^t(t)$ is obtained by setting the initial state to the first memory pattern: $\mathbf{x}(1) = \boldsymbol{\xi}^1$. In this case, the storage capacity is $\alpha_C = 0.27$. The dashed line shows a steady value of the normalized mean firing rate of the network, $\bar{x}(t)/f$, for the pattern sequence. The data points and error bars indicate the results of the computer simulations with 5000 neurons: $N = 5000$. The former indicates mean values and the latter does variances in 10 trials. Since the results
of the computer simulations coincide with those of the statistical neurodynamics,
hereafter, we show the results only of the statistical neurodynamics.
Next, we examine the threshold control scheme in the equation (6), where the threshold is controlled to maintain the mean firing rate of the network at f. $q(t)$ in equation (15) is equal to the mean firing rate because $q(t) = \frac{1}{N}\sum_{i=1}^{N} (x_i(t))^2 = \frac{1}{N}\sum_{i=1}^{N} x_i(t)$ under the condition $x_i(t) \in \{0, 1\}$. Thus, the threshold is adjusted to satisfy the following equation,
$$f = q(t) = \frac{1}{2}\left\{1 - (1-2f+2f^2)\,\mathrm{erf}(\phi_0) - f(1-f)\left(\mathrm{erf}(\phi_1) + \mathrm{erf}(\phi_2)\right)\right\}. \qquad (17)$$
Figure 3 shows the overlap $m^t(t)$ as a function of loading rate $\alpha$ with $f = 0.1$. The storage capacity is $\alpha_C = 0.234$. The basin of attraction becomes larger than that of the fixed threshold condition, $\theta = 0.52$ (Figure 2). Thus, the network becomes robust against noise. This means that even if the initial state $\mathbf{x}(1)$ is different from the first memory pattern $\boldsymbol{\xi}^1$, that is, the state includes a lot of noise, the pattern sequence can be retrieved.

[Figure 3: overlap (solid) and activity/f (dashed) as functions of the loading rate, 0 to 0.3.]

Figure 3: The critical overlap (the lower line) and the overlap at the stationary state (the upper line) when the threshold is changing over time to maintain the mean firing rate of the network at f. The dashed line shows the mean firing rate of the network divided by the firing rate, which is 0.1. The basin of attraction becomes larger than that of the fixed threshold condition: Figure 2.
Finally, we discuss how the storage capacity depends on the firing rate f of the memory pattern. It is known that the storage capacity diverges as $\frac{1}{f|\log f|}$ in a sparse limit, $f \to 0$ [19, 20]. Therefore, we investigate the asymptotic property of the storage capacity in a sparse limit. Figure 4 shows how the storage capacity depends on the firing rate where the threshold is controlled to maintain the network activity at f. The storage capacity diverges as $\frac{1}{f|\log f|}$ in a sparse limit.

[Figure 4: $\alpha_C f$ as a function of $1/|\log f|$.]

Figure 4: The storage capacity as a function of f in the case of maintaining activity at f. The storage capacity diverges as $\frac{1}{f|\log f|}$ in a sparse limit.
5 Discussion
Using a simple neural network model, we have discussed the mechanism that TAH
enables the network to store and retrieve a pattern sequence. First, we showed that
the interference of LTP and LTD disappeared in a sparse coding scheme. This is
a factor to enable the network to store and retrieve a pattern sequence. Next, we
showed the mechanism that TAH qualitatively had the same effect as the covariance
learning by analyzing the stability of the stored pattern sequence and the retrieval
process by means of the statistical neurodynamics. Consequently, the variance of
cross-talk noise didn't diverge, and this is another factor for the network learned by
TAH to store and retrieve a pattern sequence. We conclude that the difference in
spike times induces LTP or LTD, and the effect of the firing rate information can
be canceled out by this spike time difference. We investigated the property of our
model. To improve the retrieval property of the basin of attraction, we introduced
a threshold control algorithm where a threshold value was adjusted to maintain the
mean firing rate of the network at that of a memory pattern. As a result, we found
that this scheme enlarged the basin of attraction, and that the network became
1
robust against noise. We also found that the loading rate diverged as f | log
f | in a
sparse limit, f ? 0.
Here, we compare the storage capacity of our model with that of the model using
the covariance learning (Figure 5). The dynamical equations of the model using the
covariance learning is derived by Kitano and Aoyagi [13]. We calculate the storage
capacity $\alpha_C^{\mathrm{COV}}$ from their dynamical equations and compare it with that of our model, $\alpha_C^{\mathrm{TAH}}$, by the ratio $\alpha_C^{\mathrm{TAH}}/\alpha_C^{\mathrm{COV}}$. The threshold control method is the same as
in this paper. As f decreases, the ratio of storage capacities approaches 0.5. The
contribution of LTD reduces the storage capacity of our model to half. Therefore,
in terms of the storage capacity, the covariance learning is better than TAH. But, as
we discussed previously, the information of the firing rate is indispensable in TAH.
In biological systems, to get the information of the firing rate is difficult.
[Figure 5: the ratio $\alpha_C^{\mathrm{TAH}}/\alpha_C^{\mathrm{COV}}$ plotted against $\log_{10} f$.]

Figure 5: The comparison of the storage capacity of our model with that of the model using the covariance learning. As f decreases, the ratio of storage capacities approaches 0.5.
References
[1] G. Q. Bi and M. M. Poo. Synaptic modifications in cultured hippocampal neurons: Dependence on spike timing, synaptic strength, and postsynaptic cell type. The Journal of Neuroscience, 18:10464-10472, 1998.
[2] H. Markram, J. Lübke, M. Frotscher, and B. Sakmann. Regulation of synaptic efficacy by coincidence of postsynaptic APs and EPSPs. Science, 275:213-215, 1997.
[3] L. I. Zhang, H. W. Tao, C. E. Holt, W. A. Harris, and M. M. Poo. A critical window for cooperation and competition among developing retinotectal synapses. Nature, 395:37-44, 1998.
[4] L. F. Abbott and S. Song. Temporally asymmetric Hebbian learning, spike timing and neuronal response variability. In Advances in Neural Information Processing Systems 11, pages 69-75. MIT Press, 1999.
[5] J. Rubin, D. D. Lee, and H. Sompolinsky. Equilibrium properties of temporally asymmetric Hebbian plasticity. Physical Review Letters, 86:364-367, 2001.
[6] S. Song, K. D. Miller, and L. F. Abbott. Competitive Hebbian learning through spike-timing-dependent synaptic plasticity. Nature Neuroscience, 3:919-926, 2000.
[7] W. Gerstner, R. Kempter, J. L. van Hemmen, and H. Wagner. A neuronal learning rule for sub-millisecond temporal coding. Nature, 383:76-78, 1996.
[8] R. Kempter, W. Gerstner, and J. L. van Hemmen. Hebbian learning and spiking neurons. Physical Review E, 59:4498-4514, 1999.
[9] P. Munro and G. Hernandez. LTD facilitates learning in a noisy environment. In Advances in Neural Information Processing Systems 12, pages 150-156. MIT Press, 2000.
[10] R. P. N. Rao and T. J. Sejnowski. Predictive sequence learning in recurrent neocortical circuits. In Advances in Neural Information Processing Systems 12, pages 164-170. MIT Press, 2000.
[11] M. Yoshioka. To be published in Physical Review E, 2001.
[12] G. Chechik, I. Meilijson, and E. Ruppin. Effective learning requires neuronal remodeling of Hebbian synapses. In Advances in Neural Information Processing Systems 11, pages 96-102. MIT Press, 1999.
[13] K. Kitano and T. Aoyagi. Retrieval dynamics of neural networks for sparsely coded sequential patterns. Journal of Physics A: Mathematical and General, 31:L613-L620, 1998.
[14] M. Miyashita. Neuronal correlate of visual associative long-term memory in the primate temporal cortex. Nature, 335:817-820, 1988.
[15] S. Amari. Characteristics of sparsely encoded associative memory. Neural Networks, 2:1007-1018, 1989.
[16] M. Okada. Notions of associative memory and sparse coding. Neural Networks, 9:1429-1458, 1996.
[17] S. Amari and K. Maginu. Statistical neurodynamics of various versions of correlation associative memory. Neural Networks, 1:63-73, 1988.
[18] M. Okada. A hierarchy of macrodynamical equations for associative memory. Neural Networks, 8:833-838, 1995.
[19] M. V. Tsodyks and M. V. Feigel'man. The enhanced storage capacity in neural networks with low activity level. Europhysics Letters, 6:101-105, 1988.
[20] C. J. Perez-Vicente and D. J. Amit. Optimized network for sparsely coded patterns. Journal of Physics A: Mathematical and General, 22:559-569, 1989.
1,065 | 1,973 | Product Analysis:
Learning to model observations as
products of hidden variables
Brendan J. Frey¹, Anitha Kannan¹, Nebojsa Jojic²
¹Machine Learning Group, University of Toronto, www.psi.toronto.edu
²Vision Technology Group, Microsoft Research
Abstract
Factor analysis and principal components analysis can be used to
model linear relationships between observed variables and linearly
map high-dimensional data to a lower-dimensional hidden space.
In factor analysis, the observations are modeled as a linear combination of normally distributed hidden variables. We describe a
nonlinear generalization of factor analysis , called "product analysis", that models the observed variables as a linear combination
of products of normally distributed hidden variables. Just as factor analysis can be viewed as unsupervised linear regression on
unobserved, normally distributed hidden variables, product analysis can be viewed as unsupervised linear regression on products
of unobserved, normally distributed hidden variables. The mapping between the data and the hidden space is nonlinear, so we
use an approximate variational technique for inference and learning. Since product analysis is a generalization of factor analysis,
product analysis always finds a higher data likelihood than factor
analysis. We give results on pattern recognition and illuminationinvariant image clustering.
1 Introduction
Continuous-valued latent representations of observed feature vectors can be useful
for pattern classification via Bayes rule, summarizing data sets, and producing low-dimensional representations of data for later processing.
Linear techniques, including principal components analysis (Jolliffe 1986), factor
analysis (Rubin and Thayer 1982) and probabilistic principal components analysis
(Tipping and Bishop 1999) , model the input as a linear combination of hidden
variables, plus sensor noise. The noise models are quite different in all 3 cases (see
Tipping and Bishop (1999) for a discussion). For example, whereas factor analysis
can account for different noise variances in the coordinates of the input, principal
components analysis assumes that the noise variances are the same in different
input coordinates. Also, whereas factor analysis accounts for the sensor noise when
estimating the combination weights, principal components analysis does not.
Often, the input coordinates are not linearly related, but instead the input vector
is the result of a nonlinear generative process. In particular, data often can be
accurately described as the product of unknown random variables. Examples include
the combination of "style" and "content" (Tenenbaum and Freeman 1997), and the
combination of a scalar light intensity and a reflectance image.
We introduce a generalization of factor analysis, called "product analysis", that performs maximum likelihood estimation to model the input as a linear combination
of products of hidden variables. Although exact EM is not tractable because the
hidden variables are nonlinearly related to the input, the form of the product analysis model makes it well-suited to a variational inference technique and a variational
EM algorithm.
Other approaches to learning nonlinear representations include principal surface
analysis (Hastie 1984) and nonlinear autoencoders (Baldi and Hornik 1989; Diamantaras
and Kung 1996), which minimize a reconstruction error when the data is mapped
to the latent space and back; mixtures of linear models (Kambhatla and Leen
1994; Ghahramani and Hinton 1997; Tipping and Bishop 1999), which approximate
nonlinear relationships using piece-wise linear patches; density networks (MacKay
1995), which use Markov chain Monte Carlo methods to learn potentially very complex density functions; generative topographic maps (Bishop, Svensén and Williams 1998), which use a finite set of fixed samples in the latent space for efficient inference and learning; and kernel principal components analysis (Schölkopf, Smola and Müller 1998), which finds principal directions in nonlinear functions of the input.
Our goal in developing product analysis is to introduce a technique that
• produces a density estimator of the data
• separates sensor noise from the latent structure
• learns a smooth, nonlinear map from the input to the latent space
• works for high-dimensional data and high-dimensional latent spaces
• is particularly well-suited to products of latent variables
• is computationally efficient
While none of the other approaches described above directly addresses all of these
goals, product analysis does.
2 Factor analysis model
Of the three linear techniques described above, factor analysis has the simplest
description as a generative model of the data. The input vector x is modeled using
a vector of hidden variables z. The hidden variables are independent and normally
distributed with zero mean and unit variance:
$$p(\mathbf{z}) = N(\mathbf{z}; 0, I). \qquad (1)$$
The input is modeled as a linear combination of the hidden variables, plus independent Gaussian noise:
$$p(\mathbf{x}|\mathbf{z}) = N(\mathbf{x}; A\mathbf{z}, \Psi). \qquad (2)$$
The model parameters are the factor loading matrix $A$ and the diagonal matrix of sensor noise variances, $\Psi$.
Factor analysis (cf. Rubin and Thayer 1982) is the procedure for estimating $A$ and $\Psi$ using a training set. The marginal distribution over the input is $p(\mathbf{x}) = N(\mathbf{x}; 0, AA^T + \Psi)$, so factor analysis can be viewed as estimating a low-rank parameterization of the covariance matrix of the data.
3 Product analysis model
In the "product analyzer", the input vector x is modeled using a vector of hidden
variables z, which are independent and normally distributed with zero mean and
unit variance:
p(z) = N(z; 0, I).
(3)
In factor analysis, the input is modeled as a linear combination of the hidden variables. In product analysis, the input is modeled as a linear combination of monomials in the hidden variables. The power of variable Zk in monomial i is Sik. So,
the ith monomial is
(4)
Ji(z) =
ZZik.
II
k
Denoting the vector of Ji(z) 's by f(z) , the density of the input given z is
p(xlz)
= N(x; Af(z) , lJI).
(5)
The model parameters are A and the diagonal covariance matrix lJI . Here, we
learn A , maintaining the distribution over z constant. Alternatively, if A is known
apriori, we can learn the distribution over z, maintaining A to be fixed.
The matrix S = {Sik} can be specified beforehand, estimated from the data using
cross-validation, or averaged over in a Bayesian fashion. When S = I, J(z) = z and
the product analyzer simplifies to the factor analyzer. If, for some i, Sik = 0, for all
k, Ji(z) = 1 and this monomial will account for a constant offset in the input.
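To make the generative process of equations (3)-(5) concrete, here is a sketch of ancestral sampling from the product analyzer; the array shapes and function name are our own:

import numpy as np

def sample_product_analyzer(A, S, psi_diag, rng):
    """Draw one input vector x from the product analyzer.

    A        : (D, I) loading matrix over the I monomials
    S        : (I, K) integer powers s_ik
    psi_diag : (D,) sensor noise variances (diagonal of Psi)
    """
    z = rng.standard_normal(S.shape[1])      # z ~ N(0, I), eq. (3)
    fz = np.prod(z[None, :] ** S, axis=1)    # f_i(z) = prod_k z_k^{s_ik}, eq. (4)
    noise = np.sqrt(psi_diag) * rng.standard_normal(len(psi_diag))
    return A @ fz + noise                    # x ~ N(A f(z), Psi), eq. (5)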
4 Product analysis
Exact EM in the product analyzer is intractable, since the sufficient statistics require averaging over the posterior $p(\mathbf{z}|\mathbf{x})$, for which we do not have a tractable expression. Instead, we use a variational approximation (Jordan et al. 1998), where for each training case, the posterior $p(\mathbf{z}|\mathbf{x})$ is approximated by a factorized Gaussian distribution $q(\mathbf{z})$ and the parameters of $q(\mathbf{z})$ are adjusted to make the approximation accurate. Then, the approximation $q(\mathbf{z})$ is used to compute the sufficient statistics for each training case in a generalized EM algorithm (Neal and Hinton 1993).
The q-distribution is specified by the variational parameters $\boldsymbol{\eta}$ and $\Phi$:
$$q(\mathbf{z}) = N(\mathbf{z}; \boldsymbol{\eta}, \Phi), \qquad (6)$$
where $\Phi$ is a diagonal covariance matrix.
q is optimized by minimizing the relative entropy (Kullback-Leibler divergence),
$$K = \int_{\mathbf{z}} q(\mathbf{z}) \ln \frac{q(\mathbf{z})}{p(\mathbf{z}|\mathbf{x})}. \qquad (7)$$
In fact, minimizing this entropy is equivalent to maximizing the following lower bound on the log-probability of the observation:
$$B = \int_{\mathbf{z}} q(\mathbf{z}) \ln \frac{p(\mathbf{x}, \mathbf{z})}{q(\mathbf{z})} \le \ln p(\mathbf{x}). \qquad (8)$$
Pulling $\ln p(\mathbf{x})$ out of the integral, the bound can be expressed as
$$B = \ln p(\mathbf{x}) - \int_{\mathbf{z}} q(\mathbf{z}) \ln \frac{q(\mathbf{z})}{p(\mathbf{z}|\mathbf{x})} = \ln p(\mathbf{x}) - K. \qquad (9)$$
Since $\ln p(\mathbf{x})$ does not directly depend on the variational parameters, maximizing B is equivalent to minimizing K. Note that since $K \ge 0$, $B \le \ln p(\mathbf{x})$. Using Lagrange multipliers, it is easy to show that the bound is maximized when $q(\mathbf{z}) = p(\mathbf{z}|\mathbf{x})$, in which case $K = 0$ and $B = \ln p(\mathbf{x})$.
Substituting the expressions for $p(\mathbf{z})$, $p(\mathbf{x}|\mathbf{z})$ and $q(\mathbf{z})$ into (8), and using the fact that $\mathbf{f}(\mathbf{z})^T A^T \Psi^{-1} A \mathbf{f}(\mathbf{z}) = \mathrm{tr}(\mathbf{f}(\mathbf{z})^T A^T \Psi^{-1} A \mathbf{f}(\mathbf{z})) = \mathrm{tr}(A^T \Psi^{-1} A\, \mathbf{f}(\mathbf{z})\mathbf{f}(\mathbf{z})^T)$, we have
$$B = \frac{1}{2}\Big(\ln|2\pi e \Phi| - \ln|2\pi \Psi| - \ln|2\pi I| - \boldsymbol{\eta}^T\boldsymbol{\eta} - \mathbf{x}^T\Psi^{-1}\mathbf{x} + 2\mathbf{x}^T\Psi^{-1}A\,E[\mathbf{f}(\mathbf{z})] - \mathrm{tr}(A^T\Psi^{-1}A\,E[\mathbf{f}(\mathbf{z})\mathbf{f}(\mathbf{z})^T])\Big), \qquad (10)$$
where $E[\cdot]$ denotes an expectation with respect to $q(\mathbf{z})$.
The expectations are simplified as follows:
$$E[f_i(\mathbf{z})] = E\Big[\prod_k z_k^{s_{ik}}\Big] = \prod_k E[z_k^{s_{ik}}] = \prod_k m_{s_{ik}}(\eta_k, \phi_k),$$
$$E[f_i(\mathbf{z})f_j(\mathbf{z})] = E\Big[\prod_k z_k^{s_{ik}+s_{jk}}\Big] = \prod_k E[z_k^{s_{ik}+s_{jk}}] = \prod_k m_{s_{ik}+s_{jk}}(\eta_k, \phi_k), \qquad (11)$$
where $m_n(\eta, \phi)$ is the n-th moment under a Gaussian with mean $\eta$ and variance $\phi$. Closed forms for the $m_n(\eta, \phi)$ are found from derivatives of the Gaussian moment generating function evaluated at zero:
$$m_n(\eta, \phi) = \frac{\partial^n}{\partial t^n}\, e^{\eta t + \phi t^2/2}\Big|_{t=0}. \qquad (12)$$
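The moments $m_n(\eta, \phi)$ can also be generated without symbolic differentiation by the standard Gaussian recursion $E[z^{n+1}] = \eta\, E[z^n] + n\phi\, E[z^{n-1}]$, which is equivalent to differentiating the moment generating function. A sketch:

def gaussian_moments(eta, phi, n_max):
    """m_n(eta, phi) = E[z^n] for z ~ N(eta, phi), n = 0 .. n_max."""
    m = [1.0, eta]                       # m_0 = 1, m_1 = eta
    for n in range(1, n_max):
        m.append(eta * m[n] + n * phi * m[n - 1])
    return m[: n_max + 1]

# e.g. gaussian_moments(eta, phi, 2) -> [1, eta, eta**2 + phi]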
After substituting the closed forms for the moments, B is a polynomial in the $\eta_k$'s and the $\phi_k$'s. For each training case, B is maximized with respect to the $\eta_k$'s and the $\phi_k$'s using, e.g., conjugate gradients. The model parameters $A$ and $\Psi$ that maximize the sum of the bounds for the training cases can be computed directly, since $\Psi$ does not affect the solution for $A$, B is quadratic in $A$, and the optimal $\Psi$ can be written in terms of $A$ and the variational parameters.
If the power of each latent variable is restricted to be 0 or 1 in each monomial, $0 \le s_{ik} \le 1$, the above expressions simplify: since $m_0 = 1$, $m_1(\eta_k, \phi_k) = \eta_k$ and $m_2(\eta_k, \phi_k) = \eta_k^2 + \phi_k$, we have $E[f_i(\mathbf{z})] = \prod_k \eta_k^{s_{ik}}$ and $E[f_i(\mathbf{z})f_j(\mathbf{z})] = \prod_k m_{s_{ik}+s_{jk}}(\eta_k, \phi_k)$. In this case, we can directly maximize B with respect to each $\eta_k$ in turn, since B is quadratic in each $\eta_k$.
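Putting the pieces together, the fitting procedure has the shape of a generalized EM loop. The skeleton below is our paraphrase: the per-case optimizer of B and the closed-form solver for A and Psi are passed in as abstract callables, and the initialization is an assumption.

import numpy as np

def fit_product_analyzer(X, S, n_iters, optimize_q, update_A_Psi, rng):
    """Variational EM skeleton for product analysis (a sketch).

    X : (T, D) training cases; S : (I, K) monomial powers.
    optimize_q(x, A, psi, S, q) maximizes B w.r.t. one case's
    variational parameters (eta, phi), e.g. by conjugate gradients;
    update_A_Psi(X, S, Q) returns the directly computed A and Psi.
    """
    T, D = X.shape
    A = rng.random((D, S.shape[0]))            # random positive init (assumption)
    psi = np.var(X, axis=0) + 1e-6             # crude noise init (assumption)
    Q = [(np.zeros(S.shape[1]), np.ones(S.shape[1])) for _ in range(T)]
    for _ in range(n_iters):
        Q = [optimize_q(x, A, psi, S, q) for x, q in zip(X, Q)]  # E-like step
        A, psi = update_A_Psi(X, S, Q)                           # M-like step
    return A, psi, Q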
5 Experimental results:
5.1 Classification results on the Wisconsin breast cancer database: We
obtained results on using product analysis for classification of malignant and benign
cancer using the breast cancer database provided by Dr. Wolberg from the Univ.
of Wisconsin. Each observation in the database is characterized by nine cytological
features, namely, lump thickness, uniformity of cell and shape, marginal adhesion, single epithelial cell size, bare nuclei, bland chromatin, normal nucleoli and mitoses. Each feature is assigned an integer between 1 and 10.

[Figure 1 appears here.]

Figure 1: a) Data from training set. Mean images learned using b) product analysis, c) mixture of Gaussians.
In their earlier work (Wolberg and Mangasarian 1990), the authors used linear programming for classification. The objective was to find a hyperplane that separates
the classes of malignant and benign cancer. In the absence of a separating plane,
average sum of misclassifications of each class is minimized.
Our approach is to learn one density model for the benign feature vectors and a second
density model for the malignant feature vectors and then use Bayes rule to classify an
input vector. With separate models, classification involves assigning the observation to the model that provides the largest probability for occurrence of that observation, as given by
$$P(\mathrm{class}\,|\,\mathbf{x}) = \frac{P(\mathbf{x}\,|\,\mathrm{class})\,P(\mathrm{class})}{P(\mathbf{x}\,|\,\mathrm{benign})\,P(\mathrm{benign}) + P(\mathbf{x}\,|\,\mathrm{malignant})\,P(\mathrm{malignant})}$$
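The decision rule is then a comparison of the two posteriors; a minimal sketch, assuming each learned model exposes a log-density function (in practice the variational bound B would stand in for the intractable $\ln p(\mathbf{x})$):

import numpy as np

def classify(x, logp_benign, logp_malignant, prior_benign=0.5):
    """Assign x to the class with the larger posterior probability."""
    log_post_b = logp_benign(x) + np.log(prior_benign)
    log_post_m = logp_malignant(x) + np.log(1.0 - prior_benign)
    return "benign" if log_post_b >= log_post_m else "malignant"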
To compare with the result reported in (Wolberg and Mangasarian 1990), 4.1 %
error rate on 369 instances, we used the same set for our learning scheme and found
that the product analysis produced 4% misclassification.
In addition, to compare the recognition rate of product analysis with the recognition
rate of factor analysis, we divided the data set into 3 sets for training, validation and
testing. The parameters of the model are learned using the training set, and tested
on the validation set. This is repeated 20 times, remembering the parameters
that provided the best classification rate on the validation set. Finally, the parameters that provided the best performance on the validation set is used to classify
the test set, only once. Since the data is limited, we perform this experimentation
on 4 different random breakups of data into training, validation and test set. For
product analysis model, we chose 3 hidden variables without optimization but for
factor analysis , we chose the optimum number of factors. The average error rate on
the 4 breakups was 5% using product analysis and 5.59% using factor analysis.
Figure 2: Images generated from the learned mixture of product analyzers
Figure 3: First row: Observation. Second row: corresponding image normalized for
translation and lighting after lighting & transformation invariant model is learned
5.2 Mixture of lighting invariant appearance models: Often, objects are
imaged under different illuminants. To learn an appearance model, we want to
automatically remove the lighting effects and infer lighting-normalized images.
Since ambient light intensity and reflectances of patches on the object multiply to
produce a lighting-affected image, we can model lighting-invariance using a product analyzer: $P(\mathbf{x}, \mathbf{z}) = P(\mathbf{x}|\mathbf{z})P(\mathbf{z})$, where $\mathbf{x}$ is the vector of pixel intensities of the observation, $z_1$ is the random variable describing the light intensity, and the remaining $z_i$ are the pixel intensities in the lighting-normalized image. We learn the distribution over $\mathbf{z}$, where $\mathbf{f}(\mathbf{z}) = [z_1 z_2, z_1 z_3, \ldots, z_1 z_{N+1}]^T$ and $A$ is identity. By inferring $z_1$, we can remove its effect on the observation. The mixture model of product analyzers has joint distribution $\pi_c P(\mathbf{x}|\mathbf{z})P(\mathbf{z})$, where $\pi_c$ is the probability of each class. It can be used to infer various kinds of images (e.g. faces of different people)
under different lighting conditions.
We trained this model on images with 2 different poses of the same person (Fig. 1a). The variation in the images is governed by change in pose, light, and background clutter. Fig. 1b and Fig. 1c compare the components learned using a mixture of
product analyzers and a mixture of Gaussians. Due to limited variation in the
pose and large variation in lighting, the mixture of Gaussians is unable to extract
the mean images. However, mixture of product analyzers is able to capture the
distributions well. (Fig. 3).
5.3 Transformation and lighting invariant appearance models: Geometric transformations like shift and shearing can occur when scenes are imaged. Transformation invariant mixtures of Gaussians and factor analyzers (Frey and Jojic 2002; Jojic et al. 2001) enable inferring a transformation-neutral image. Here, we
add lighting-invariance to this framework enabling clustering based on interesting
features such as pose, without concern for transformation and lighting effects.
6 Conclusions
We introduced a density model that explains observations as products of hidden
variables and we presented a variational technique for inference and learning in
this model. On the Wisconsin breast cancer data, we found that product analysis
outperforms factor analysis, when used with Bayes rule for pattern classification.
We also found that product analysis was able to separate the two hidden causes,
lighting and image noise in noisy images with varying illumination and varying pose.
References
Baldi, P. and Hornik, K. 1989. Neural networks and principal components analysis: Learning from examples without local minima. Neural Networks, 2:53-58.
Bishop, C. M. , Svensen , M., and Williams, C. K 1. 1998. Gtm: the generative topographic
mapping. Neural Computation, 10(1):215- 235 .
Diamantaras, K. I. and Kung, S. Y. 1996. Principal Component Neural Networks. Wiley,
New York NY.
Frey, B. J. and Jojic, N. 2002. Transformation invariant clustering and linear component analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence. To
appear. Available at http://www.cs.utoronto.ca/~frey.
Ghahramani, Z. and Hinton, G . E . 1997. The EM algorithm for mixtures of factor analyzers. University of Toronto Technical Report CRG-TR-96-1.
Hastie, T. 1984. Principal Curves and Surfaces. Stanford University, Stanford CA. Doctoral dissertation.
Jojic, N. , Simard, P., Frey, B. J. , and Heckerman, D. 2001. Separating appearance from
deformation. To appear in Proceedings of the IEEE International Conference on
Comput er Vision.
Jolliffe, 1. T. 1986. Principal Component Analysis. Springer-Verlag, New York NY.
Jordan, M. I., Ghahramani, Z., Jaakkola, T. S., and Saul, L. K. 1998. An introduction to variational methods for graphical models. In Jordan, M. I., editor, Learning in
Graphical Models. Kluwer Academic Publishers, Norwell MA.
Kambhatla, N. and Leen, T. K. 1994. Fast non-linear dimension reduction. In Cowan, J. D., Tesauro, G., and Alspector, J., editors, Advances in Neural Information Processing Systems 6, pages 152-159. Morgan Kaufmann, San Francisco CA.
MacKay, D. J. C. 1995. Bayesian neural networks and density networks. Nuclear Instruments and Methods in Physics Research, 354:73- 80.
Neal, R. M. and Hinton, G. E. 1993. A new view of the EM algorithm that justifies
incremental and other variants. Unpublished manuscript available over the internet
by ftp at ftp://ftp.cs.utoronto.ca/pub/radford/em.ps.Z.
Rubin, D. and Thayer, D. 1982. EM algorithms for ML factor analysis. Psychometrika,
47(1):69- 76.
Schölkopf, B., Smola, A., and Müller, K.-R. 1998. Nonlinear component analysis as a
kernel eigenvalue problem. Neural Computation, 10:1299- 1319.
Tenenbaum, J. B. and Freeman, W. T. 1997. Separating style and content. In Mozer, M. C., Jordan, M. I., and Petsche, T., editors, Advances in Neural Information Processing Systems 9. MIT Press, Cambridge MA.
Tipping, M. E. and Bishop, C. M. 1999. Mixtures of probabilistic principal component
analyzers. Neural Computation, 11(2):443-482.
Wolberg, W. H. and Mangasarian, O. L. 1990. Multisurface method of pattern separation for medical diagnosis applied to breast cytology. In Proceedings of the National
Academy of Sciences.
1,066 | 1,974 | Modularity in the motor system: decomposition
of muscle patterns as combinations of
time-varying synergies
Andrea d?Avella and Matthew C. Tresch
Department of Brain and Cognitive Sciences
Massachusetts Institute of Technology, E25-526
Cambridge, MA 02139
davel, mtresch @ai.mit.edu
Abstract
The question of whether the nervous system produces movement through
the combination of a few discrete elements has long been central to the
study of motor control. Muscle synergies, i.e. coordinated patterns of
muscle activity, have been proposed as possible building blocks. Here we
propose a model based on combinations of muscle synergies with a specific amplitude and temporal structure. Time-varying synergies provide
a realistic basis for the decomposition of the complex patterns observed
in natural behaviors. To extract time-varying synergies from simultaneous recording of EMG activity we developed an algorithm which extends
existing non-negative matrix factorization techniques.
1 Introduction
In order to produce movement, every vertebrate has to coordinate the large number of degrees of freedom in the musculoskeletal apparatus. How this coordination is accomplished
by the central nervous system is a long standing question in the study of motor control.
According to one common proposal, this task might be simplified by a modular organization of the neural systems controlling movement [1, 2, 3, 4]. In this scheme, specific output
modules would control different but overlapping sets of degrees of freedom, thereby decreasing the number of variables controlled by the nervous system. By activating different
output modules simultaneously but independently, the system may achieve the flexibility
necessary to control a variety of behaviors.
Several studies have sought evidence for such a modular controller by examining the patterns of muscle activity during movement, in particular looking for the presence of muscle
synergies. A muscle synergy is a functional unit coordinating the activity of a number of
muscles. The simplest model for such a unit would be the synchronous activation of a set
of muscles with a specific activity balance, i.e. a vector in the muscle activity space. Using
techniques such as the correlation between pairs of muscles, these studies have generally
failed to provide strong evidence in support of such units. However, using a new analysis
that allows for simultaneous combinations of more than one synergy, our group has recently
provided evidence in support of this basic hypothesis of the neural control of movement.
We used a non-negative matrix factorization algorithm to examine the composition of muscle activation patterns in spinalized frogs [5, 6]. This algorithm, similarly to that developed
independently by others [7], extracts a small number of non-negative¹ factors which can be
combined to reconstruct a set of high-dimensional data.
However, this analysis assumed that the muscle synergies consisted of a set of muscles
which were activated synchronously. In examinations of complex behaviors produced by
intact animals, it became clear that muscles within a putative synergy were often activated
asynchronously. In these cases, although the temporal delay between muscles was nonzero,
the dispersion around this delay was very small. These observations suggested that the basic units of motor production might involve not only a fixed coordination of relative muscle
activation amplitudes, but also a coordination of relative muscle activation timings. We
therefore have developed a new algorithm to factorize muscle activation patterns produced
during movement into combinations of such time-varying muscle synergies.
2 Combinations of time-varying muscle synergies
We model the output of the neural controller as a linear combination of muscle patterns with a specific time course in the activity of each muscle. In discrete time, we can represent each pattern, or time-varying synergy, as a sequence of vectors $\mathbf{w}_i(t)$ in muscle activity space. The data set which we consider here consists of episodes of a given behavior, e.g. a set of jumps in different directions and distances, or a set of walking or swimming cycles. In a particular episode s, each synergy is scaled by an amplitude coefficient $c_{si}$ and time-shifted by a delay $t_{si}$. The sequence of muscle activity for that episode is then given by:
$$\mathbf{m}_s(t) = \sum_i c_{si}\, \mathbf{w}_i(t - t_{si}) \qquad (1)$$
Fig. 1 illustrates the model with an example of the construction of a muscle pattern by combinations of three synergies. Compared to the model based on combinations of synchronous muscle synergies, this model has more parameters describing each synergy ($D \cdot T$ vs. $D$, with $D$ muscles and $T$ the maximum number of time steps in a synergy) but fewer overall parameters. In fact, with synchronous synergies there is a combination coefficient for each time step and each synergy, whereas with time-varying synergies there are only two parameters ($c_{si}$ and $t_{si}$) for each episode and each synergy.
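Equation (1) is straightforward to implement; the sketch below builds one episode from a synergy set, mirroring the construction in Fig. 1. Names and array shapes are ours:

import numpy as np

def combine_synergies(W, c, delays, n_steps):
    """Muscle pattern of eq. (1) for one episode.

    W      : (n_syn, D, T) synergies, D muscles x T time steps
    c      : (n_syn,) non-negative amplitude coefficients
    delays : (n_syn,) integer onset delays, assumed >= 0
    """
    n_syn, D, T = W.shape
    m = np.zeros((D, n_steps))
    for i in range(n_syn):
        t0 = delays[i]
        t1 = min(t0 + T, n_steps)     # synergy is zero outside its support
        if t1 > t0:
            m[:, t0:t1] += c[i] * W[i, :, : t1 - t0]
    return m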
3 Iterative minimization of the reconstruction error
For a given set of episodes, we search for the set of non-negative time-varying synergies $\mathbf{w}_i(t)$, of maximum duration $T$ time steps, and the set of coefficients $c_{si}$ and delays $t_{si}$ that minimize the reconstruction error
$$E^2 = \sum_s \sum_t \Big\| \mathbf{m}_s(t) - \sum_i c_{si}\, \mathbf{w}_i(t - t_{si}) \Big\|^2$$
with $\mathbf{w}_i(t) = 0$ for $t < 1$ or $t > T$.
¹The non-negativity constraint arises naturally in the context of motor control from the fact that
firing rates of motoneurons, and consequently muscle activities, cannot be negative. While it is conceivable that a negative contribution on a motoneuronal pool from one factor would always be cancelled by a larger positive contribution from other factors, we chose a model based on non-negative
factors to ensure that each factor could be independently activated.
[Figure 1: three time-varying synergies (left), each a 5-muscle by time grid of activation levels; the resulting muscle pattern over 100 time steps (top right); amplitude coefficients C1, C2, C3 and delays T1, T2, T3 shown as bars (bottom right).]

Figure 1: An example of construction of a muscle pattern by the combinations of three time-varying synergies. In this example, each time-varying synergy (left) is constituted by a sequence of 50 activation levels in 5 muscles chosen as samples from Gaussian functions with different centers, widths, and amplitudes. To construct the muscle pattern (top right, shaded area), the activity levels of each synergy are first scaled by an amplitude coefficient ($c_i$, represented in the bottom right by the height of a horizontal bar) and shifted in time by a delay ($t_i$, represented by the position of the same bar). Then, at each time step, the scaled and shifted components (top right, broken lines) are summed together.
with w_i(t) = 0 for t < 1 or t > T, so that the time-shifted synergies in the reconstruction error above are zero outside their support.
After initializing synergies and coefficients to random positive values, we minimize the error by iterating the following steps:
1. For each episode, given the synergies w_i and the scaling coefficients c_si, find the delays t_si using a nested matching procedure based on the cross-correlation of the synergies with the data (see 3.1 below).
2. For each episode, given the synergies and the delays t_si, update the scaling coefficients c_si by gradient descent. Here and below, we enforce non-negativity by setting to zero any negative value.
3. Given delays and scaling coefficients, update the synergy elements w_i(t) by gradient descent.
A sketch of one pass of these updates is given after this list.
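The following is a minimal Python/NumPy sketch of one pass of steps 1-3 for a single episode. It is a reconstruction under stated assumptions, not the authors' code: the function names, the learning rate lr, and the restriction that each synergy is placed fully inside the episode window (0 <= t_si <= T - T_syn) are illustrative choices; match_delays is sketched in section 3.1 below.

    import numpy as np

    def place(w, t0, T):
        # Zero-padded placement of synergy w (n_muscles, T_syn) at integer
        # onset t0 in a window of length T; assumes 0 <= t0 <= T - T_syn.
        out = np.zeros((w.shape[0], T))
        out[:, t0:t0 + w.shape[1]] = w
        return out

    def update_pass(data, synergies, coeffs, delays, lr=1e-3):
        # data: (n_muscles, T); synergies: (n_syn, n_muscles, T_syn);
        # coeffs: (n_syn,); delays: (n_syn,) integer onsets.
        T = data.shape[1]
        # Step 1: re-estimate delays by nested cross-correlation matching.
        delays = match_delays(data, synergies, coeffs)
        shifted = np.stack([place(w, t, T) for w, t in zip(synergies, delays)])
        # Step 2: gradient-descent step on the amplitude coefficients.
        resid = data - np.tensordot(coeffs, shifted, axes=1)
        grad_c = -2 * np.tensordot(shifted, resid, axes=([1, 2], [0, 1]))
        coeffs = np.maximum(coeffs - lr * grad_c, 0.0)  # clip negatives to zero
        # Step 3: gradient-descent step on the synergy elements themselves.
        resid = data - np.tensordot(coeffs, shifted, axes=1)
        for i, t0 in enumerate(delays):
            synergies[i] += lr * 2 * coeffs[i] * resid[:, t0:t0 + synergies.shape[2]]
        synergies = np.maximum(synergies, 0.0)          # clip negatives to zero
        return synergies, coeffs, delays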
3.1 Matching the synergy delays
To find the best delay of each synergy in each episode we use the following procedure:
i. Compute the sum of the scalar products between the s-th data episode and the i-th synergy time-shifted by t,

    \phi_{is}(t) = \sum_\tau m_s(t + \tau) \cdot w_i(\tau),    (2)

i.e., the scalar-product cross-correlation at delay t, for all possible delays.
ii. Select the synergy and the delay with highest cross-correlation.
iii. Subtract from the data the selected synergy (after scaling and time-shifting by the
selected delay).
iv. Repeat the procedure for the remaining synergies.
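A minimal sketch of this greedy matching, using the same array conventions as the sketch above (the function names are illustrative assumptions, not the authors'):

    def xcorr(resid, w):
        # phi(t) = sum_tau resid(:, t + tau) . w(:, tau) for all valid onsets t (eq. 2).
        T, T_syn = resid.shape[1], w.shape[1]
        return np.array([np.sum(resid[:, t:t + T_syn] * w)
                         for t in range(T - T_syn + 1)])

    def match_delays(data, synergies, coeffs):
        n_syn, _, T_syn = synergies.shape
        resid = data.copy()
        delays = np.zeros(n_syn, dtype=int)
        remaining = set(range(n_syn))
        while remaining:
            # ii. select the synergy and delay with the highest cross-correlation
            scores = {i: xcorr(resid, synergies[i]) for i in remaining}
            i_best = max(remaining, key=lambda i: scores[i].max())
            t_best = int(scores[i_best].argmax())
            delays[i_best] = t_best
            # iii. subtract the scaled, time-shifted synergy from the data
            resid[:, t_best:t_best + T_syn] -= coeffs[i_best] * synergies[i_best]
            remaining.remove(i_best)   # iv. repeat for the remaining synergies
        return delays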
4 Results
We tested the algorithm on simulated data in order to evaluate its performance and then
applied it to EMG recordings from 13 hindlimb muscles of intact bullfrogs during several
episodes of natural behaviors [8].
4.1 Simulated data
We first tested whether the algorithm could reconstruct known synergies and coefficients
from a dataset generated by those same synergies and coefficients. We used two different
types of simulated synergies. The first type was generated using a Gaussian function of
different center, width, and amplitude for each muscle. The second type consisted of synergies generated by uniformly distributed random activities. For each type, we generated
sets of three synergies involving five muscles with a duration of 15 time steps. Using these
synergies, 50 episodes of duration 30 time steps were generated by scaling each synergy
with random coefficients c_si and shifting it by random delays t_si.
In figure 2 the results of a run with Gaussian synergies are shown. Using as a convergence criterion a sufficiently small change in the reconstruction quality R² over 20 iterations, the solution converged after 474 iterations with a high final R². Generating and reconstructed synergy activations are shown side by side on the left, in gray scale. Scatter plots of generating vs. reconstructed scaling coefficients and temporal delays are shown in the center and on the right respectively. Both synergies and coefficients were accurately reconstructed by the algorithm.
In table 1, a summary of the results from 10 runs with Gaussian and random synergies is presented. We used the maximum of the scalar-product cross-correlation between two normalized synergies (see eq. 2) to characterize their similarity. We compared two sets of synergies by matching the pairs in each set with the highest similarity and computing the mean similarity between these pairs. All the synergy sets that we reconstructed had a high similarity with the generating set. We also compared the generating and reconstructed scaling coefficients using their correlation coefficient r_C, and delays by counting the fraction f_T of delay coefficients that were reconstructed correctly after compensating for possible lags in the synergies. The match in scaling coefficients and delays was in general very good. Only in a few runs with Gaussian synergies were the data correctly reconstructed (high R²) but with synergies slightly different from the generating ones (as indicated by the lower mean similarity) and consequently not perfectly matching coefficients (lower r_C and f_T).
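A sketch of this similarity measure and of the greedy pairing between two synergy sets (illustrative names again; equal-sized sets are assumed):

    def similarity(w1, w2):
        # Max scalar-product cross-correlation between two unit-norm synergies.
        a, b = w1 / np.linalg.norm(w1), w2 / np.linalg.norm(w2)
        T = a.shape[1]
        return max(np.sum(a[:, max(0, s):T + min(0, s)] * b[:, max(0, -s):T - max(0, s)])
                   for s in range(-T + 1, T))

    def mean_similarity(set1, set2):
        # Greedily pair synergies across sets by highest similarity; return the mean.
        sims = np.array([[similarity(a, b) for b in set2] for a in set1])
        total = 0.0
        for _ in range(len(set1)):
            i, j = np.unravel_index(np.argmax(sims), sims.shape)
            total += sims[i, j]
            sims[i, :] = sims[:, j] = -np.inf   # exclude the matched pair
        return total / len(set1)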
Figure 2: An example of reconstruction of known synergies and coefficients from simulated data. The first column (W_gen) shows three time-varying synergies, generated from Gaussian functions, as three matrices each representing, in gray scale, the activity of 5 muscles (rows) over 15 time steps (columns). The second column (W_rec) shows the three
synergies reconstructed by the algorithm: they accurately match the generating synergies
(except for a temporal shift compensated by an opposite shift in the reconstructed delays).
The third and fourth columns show scatter plots of generating vs. reconstructed scaling
coefficients and delays in 50 simulated episodes. Both sets of coefficients are accurately
reconstructed in almost all episodes.
4.2 Time-varying muscle synergies in frog's muscle patterns
We then applied the algorithm to EMG recordings of a large set of hindlimb
kicks, a defensive reflex that frogs use to remove noxious stimuli from the foot. Each kick
consists of a fast extension followed by a slower flexion to return the leg to a crouched
posture. The trajectory of the foot varies with the location of the stimulation on the skin
and, as a consequence, the set of kicks spans a wide range of the workspace of the frog.
Correspondingly, across different episodes the muscle activity patterns in the 13 muscles
that we recorded showed considerable amplitude and timing variations that we sought to
explain by combinations of time-varying synergies.
After rectifying and integrating the EMGs over 10 ms intervals, we performed the optimization procedure with sets of N synergies, for a range of values of N. We chose the maximum duration of each synergy to be 20 time steps, i.e. 200 ms, a duration larger than the duration of a typical muscle burst observed in this behavior. We repeated the procedure 10 times for each N.
Table 1: Comparison between generated and reconstructed synergies and coefficients for 10 runs with Gaussian and random synergies, reporting the max, median, and min over runs of the number of iterations to convergence, of R², of the mean similarity, and of the coefficient correlation r_C and delay-match fraction f_T. The iteration counts were 561/451/297 (max/median/min) for Gaussian synergies and 555/395/208 for random synergies; most similarity and correlation values were above 0.97, with minima as low as 0.26 in a few runs. See text for explanation.
In figure 3 the result of the extraction of four synergies with the highest R² is shown. The convergence criterion of a sufficiently small change in R² for 20 iterations was reached after 100 iterations with a high final R². The synergies extracted in the other nine runs were in general very similar to this set, as indicated by a high mean similarity and a high correlation between scaling coefficients across runs. In the case with the lowest similarity, only one synergy in the set shown in figure 3 was not properly matched.
The four synergies captured the basic features of the muscle patterns observed during different kicks. The first synergy, recruiting all the major knee extensor muscles (VI, RA, and
VE), is highly activated in laterally directed kicks, as seen in the first kick shown in figure 3, which involved a large knee extension. The second synergy, recruiting two large hip
extensor muscles (RI and SM) and an ankle extensor muscle (GA), is highly activated in
caudally and medially directed kicks, i.e. kicks involving hip extension. The third synergy
involves a specific temporal sequencing of several muscles: BI and VE first, followed by
RI, SM, and GA, and then by AD and VI at the end. The fourth synergy has long activation
profiles in many flexor muscles, i.e. those involved in the return phase of the kick, with a
specific temporal pattern (long activation of IP; BI and SA before TA).
When this set of EMGs was reconstructed using different numbers of muscle synergies,
we found that the synergies identified using N synergies were generally preserved in the
synergies identified using N+1 synergies. For instance, the first two synergies shown in figure 3 were seen in all sets of synergies across the range of N tested. Therefore, increasing the number of synergies allowed the data to be reconstructed more accurately (as seen by a higher R²) but without a complete reorganization of the synergies.
5 Discussion
The algorithm that we introduced here represents a new analytical tool for the investigation
of the organization of the motor system. This algorithm is an extension of previous non-negative matrix factorization procedures, providing a means of capturing structure in a set
of data not only in the amplitude domain but also in the temporal domain. Such temporal
structure is a natural description of motor systems where many behaviors are characterized by a particular temporal organization. The analysis applied to behaviors produced by
the frog, as described here, was able to capture significant physiologically relevant characteristics in the patterns of muscle activations. The motor system is not unique, however,
in having structure in both amplitude and temporal domains and the techniques used here
could easily be extended to other systems.
Figure 3: Reconstruction of rectified and integrated (10 ms) EMGs for two kicks by time-varying synergies. Left: four extracted synergies constituted by activity levels (in gray scale) for 20 time steps in 13 muscles: rectus internus major (RI), adductor magnus (AD), semimembranosus (SM), ventral head of semitendinosus (ST), ilio-psoas (IP), vastus internus (VI), rectus anterior (RA), gastrocnemius (GA), tibialis anterior (TA), peroneous (PE), biceps (BI), sartorius (SA), and vastus externus (VE) [8]. Top right: the observed EMGs (thin line and shaded area) and their reconstruction (thick line) by combinations of the four synergies, scaled in amplitude (c_i) and shifted in time (t_i).
Our model can be naturally extended to include temporal scaling of the synergies, i.e.
allowing different durations of a synergy in different episodes. Work is in progress to
implement an algorithm similar to the one presented here to extract time-varying and time-scalable synergies. We will also address the issue of how to identify time-varying muscle
synergies from continuous recordings of EMG patterns, without any manual segmentation
into different episodes. A possibility that we are investigating is to extend the approach
based on a sparse and overcomplete basis used by Lewicki and Sejnowski [9]. Finally,
future work will aim at the development of a probabilistic model to address the issue of the
dimensionality of the synergy set in terms of Bayesian model selection [10].
Acknowledgments
We thank Zoubin Ghahramani, Emanuel Todorov, Emilio Bizzi, Sebastian Seung, Simon
Overduin, and Maura Mezzetti for useful discussions and comments.
References
[1] E. Bizzi, P. Saltiel, and M. Tresch. Modular organization of motor behavior. Z. Naturforsch. [C], 53(7-8):510–7, 1998.
[2] F. A. Mussa-Ivaldi. Modular features of motor control and learning. Curr. Opin. Neurobiol., 9(6):713–7, 1999.
[3] W. J. Kargo and S. F. Giszter. Rapid correction of aimed movements by summation of force-field primitives. J. Neurosci., 20(1):409–26, 2000.
[4] Z. Ghahramani and D. M. Wolpert. Modular decomposition in visuomotor learning. Nature, 386(6623):392–5, 1997.
[5] M. C. Tresch, P. Saltiel, and E. Bizzi. The construction of movement by the spinal cord. Nature Neuroscience, 2(2):162–7, 1999.
[6] P. Saltiel, K. Wyler-Duda, A. d'Avella, M. C. Tresch, and E. Bizzi. Muscle synergies encoded within the spinal cord: evidence from focal intraspinal NMDA iontophoresis in the frog. Journal of Neurophysiology, 85(2):605–19, 2001.
[7] D. D. Lee and H. S. Seung. Learning the parts of objects by non-negative matrix factorization. Nature, 401(6755):788–91, 1999.
[8] A. d'Avella. Modular control of natural motor behavior. PhD thesis, MIT, 2000.
[9] M. S. Lewicki and T. J. Sejnowski. Coding time-varying signals using sparse, shift-invariant representations. In M. S. Kearns, S. A. Solla, and D. A. Cohn, editors, Advances in Neural Information Processing Systems 11. MIT Press, 1999.
[10] L. Wasserman. Bayesian model selection and model averaging. Journal of Mathematical Psychology, 44:92–107, 2000.
1,067 | 1,975 | Characterizing neural gain control using
spike-triggered covariance
Odelia Schwartz
Center for Neural Science
New York University
odelia@cns.nyu.edu
E. J. Chichilnisky
Systems Neurobiology
The Salk Institute
ej@salk.edu
Eero P. Simoncelli
Howard Hughes Medical Inst.
Center for Neural Science
New York University
eero.simoncelli@nyu.edu
Abstract
Spike-triggered averaging techniques are effective for linear characterization of neural responses. But neurons exhibit important nonlinear behaviors, such as gain control, that are not captured by such analyses.
We describe a spike-triggered covariance method for retrieving suppressive components of the gain control signal in a neuron. We demonstrate
the method in simulation and on retinal ganglion cell data. Analysis
of physiological data reveals significant suppressive axes and explains
neural nonlinearities. This method should be applicable to other sensory
areas and modalities.
White noise analysis has emerged as a powerful technique for characterizing response properties of spiking neurons. A sequence of stimuli is drawn randomly from an ensemble and presented in rapid succession, and one examines the subset that elicits action potentials. This 'spike-triggered' stimulus ensemble can provide information about the neuron's response
characteristics. In the most widely used form of this analysis, one estimates an excitatory
linear kernel by computing the spike-triggered average (STA); that is, the mean stimulus
that elicited a spike [e.g., 1, 2]. Under the assumption that spikes are generated by a
Poisson process with instantaneous rate determined by linear projection onto a kernel followed by a static nonlinearity, the STA provides an unbiased estimate of this kernel [3].
Recently, a number of authors have developed interesting extensions of white noise analysis. Some have examined spike-triggered averages in a reduced linear subspace of input
stimuli [e.g., 4]. Others have recovered excitatory subspaces, by computing the spike-triggered covariance (STC), followed by an eigenvector analysis to determine the subspace
axes [e.g., 5, 6].
Sensory neurons exhibit striking nonlinear behaviors that are not explained by fundamentally linear mechanisms. For example, the response of a neuron typically saturates for large
amplitude stimuli; the response to the optimal stimulus is often suppressed by the presence
of a non-optimal mask [e.g., 7]; and the kernel recovered from STA analysis may change
shape as a function of stimulus amplitude [e.g., 8, 9]. A variety of these nonlinear behaviors can be attributed to gain control [e.g., 8, 10, 11, 12, 13, 14], in which neural responses
are suppressively modulated by a gain signal derived from the stimulus. Although the underlying mechanisms and time scales associated with such gain control are current topics
of research, the basic functional properties appear to be ubiquitous, occurring throughout
the nervous system.
Figure 1: Geometric depiction of spike-triggered analyses. a, Spike-triggered averaging
with two-dimensional stimuli. Black points indicate raw stimuli. White points indicate stimuli eliciting a spike, and the STA (black vector), which provides an estimate of k, corresponds to their center of mass. b, Spike-triggered covariance analysis of suppressive axes. Shown are a set of stimuli lying on a plane perpendicular to the excitatory kernel, k. Within
the plane, stimuli eliciting a spike are concentrated in an elliptical region. The minor axis of
the ellipse corresponds to a suppressive stimulus direction: stimuli with a significant component along this axis are less likely to elicit spikes. The stimulus component along the major
axis of the ellipse has no influence on spiking.
Here we develop a white noise methodology for characterizing a neuron with gain control.
We show that a set of suppressive kernels may be recovered by finding the eigenvectors of
the spike-triggered covariance matrix associated with smallest variance. We apply the technique to electrophysiological data obtained from ganglion cells in salamander and macaque
retina, and recover a set of axes that are shown to reduce responses in the neuron. Moreover, when we fit a gain control model to the data using a maximum likelihood procedure
within this subspace, the model accounts for changes in the STA as a function of contrast.
1 Characterizing suppressive axes
As in all white noise approaches, we assume that stimuli correspond to vectors, s, in some finite-dimensional space (e.g., a neighborhood of pixels or an interval of time samples). We assume a gain control model in which the probability of a stimulus eliciting a spike grows monotonically with the halfwave-rectified projection onto an excitatory linear kernel, k, and is suppressively modulated by the fullwave-rectified projection onto a set of linear kernels, k_1, ..., k_M.
First, we recover the excitatory kernel, k. This is achieved by presenting spherically symmetric input stimuli (e.g., Gaussian white noise) to the neuron and computing the STA (Fig. 1a). The STA correctly recovers the excitatory kernel, under the assumption that each of the gain control kernels is orthogonal (or equal) to the excitatory kernel. The proof
is essentially the same as that given for recovering the kernel of a linear model followed
by a monotonic nonlinearity [3]. In particular, any stimulus can be decomposed into a
component in the direction of the excitatory kernel, and a component in a perpendicular
direction. This can be paired with another stimulus that is identical, except that its component in the perpendicular direction is negated. The two stimuli are equally likely to occur
in a spherically Gaussian stimulus set (since they are equidistant from the origin), and they
are equally likely to elicit a spike (since their excitatory components are equal, and their
rectified perpendicular components are equal). Their vector average lies in the direction of
the excitatory kernel. Thus, the STA (which is an average over all such stimuli, or all such
stimulus pairs) must also lie in that direction. In a subsequent section we explain how to
Figure 2: Estimation of kernels from a simulated model (equation 2). Left: Model kernels.
Right: Sorted eigenvalues of covariance matrix of stimuli eliciting spikes (STC). Five eigenvalues fall significantly below the others. Middle: STA (excitatory kernel) and eigenvectors
(suppressive kernels) associated with the lowest eigenvalues.
recover the excitatory kernel when it is not orthogonal to the suppressive kernels.
Next, we recover the suppressive subspace, assuming the excitatory kernel is known. Consider the stimuli lying on a plane perpendicular to this kernel. These stimuli all elicit the
same response in the excitatory kernel, but they may produce different amounts of suppression. Figure 1b illustrates the behavior in a three-dimensional stimulus space, in which one
axis is assumed to be suppressive. The distribution of raw stimuli on the plane is spherically symmetric about the origin. But the distribution of stimuli eliciting a spike is narrower
along the suppressive direction: these stimuli have a component along the suppressive axis
and are therefore less likely to elicit a spike. This behavior is easily generalized from this
plane to the entire stimulus space. If we assume that the suppressive axes are fixed, then
we expect to see reductions in variance in the same directions for any level of numerator
excitation.
Given this behavior of the spike-triggered stimulus ensemble, we can recover the suppressive subspace using principal component analysis. We construct the sample covariance matrix of the stimuli eliciting a spike:

    C = (1/N) \sum_n s_n s_n^T,    (1)

where N is the number of spikes and s_n is the stimulus preceding the n-th spike. To ensure the estimated suppressive subspace is orthogonal to the estimated k (as in Figure 1b), the stimuli s_n are first projected onto the subspace perpendicular to the estimated k. The principal axes (eigenvectors) of C that are associated with small variance (eigenvalues) correspond to directions in which the response of the neuron is modulated suppressively.
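The following minimal NumPy sketch implements this STA/STC recipe. It is our own reconstruction with illustrative names; the binning convention (one stimulus vector per time bin, with an integer spike count) is an assumption.

    import numpy as np

    def sta_stc(stimuli, spikes):
        # stimuli: (n_samples, d) stimulus vectors; spikes: (n_samples,) int counts.
        spiking = np.repeat(stimuli, spikes, axis=0)   # spike-triggered ensemble
        sta = spiking.mean(axis=0)
        k = sta / np.linalg.norm(sta)                  # estimated excitatory kernel
        proj = spiking - np.outer(spiking @ k, k)      # project out the STA direction
        C = proj.T @ proj / len(proj)                  # eq. (1)
        eigvals, eigvecs = np.linalg.eigh(C)           # ascending eigenvalues
        return k, eigvals, eigvecs                     # low-eigenvalue axes = suppressive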
We illustrate the technique on simulated data for a neuron with a spatio-temporal receptive field. The kernels are a set of orthogonal bandpass filters. The stimulus vectors of this input sequence are defined over an 18-sample spatial region and an 18-sample time window (i.e., a 324-dimensional space). Spikes are generated using a Poisson process with mean rate determined by a specific form of gain control [14]:

    rate = ⌊k·s⌋² / (σ² + \sum_i w_i (k_i·s)²),    (2)

where ⌊·⌋ denotes halfwave rectification. The goal of simulation is to recover the excitatory kernel k, the suppressive subspace spanned by the k_i, the weights w_i, and the constant σ.
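For concreteness, a sketch of spike generation under the divisive form of equation (2) given above; the bin width dt, the rate scale, and the random seed are free choices, not taken from the paper.

    import numpy as np

    def simulate_gain_control(stimuli, k, K_sup, w, sigma, dt=1.0, rng=None):
        # stimuli: (n, d); k: (d,); K_sup: (m, d) suppressive kernels; w: (m,) weights.
        rng = rng or np.random.default_rng(0)
        drive = np.maximum(stimuli @ k, 0.0) ** 2        # halfwave-rectified, squared
        pool = ((stimuli @ K_sup.T) ** 2) @ w            # fullwave-rectified gain pool
        rate = drive / (sigma ** 2 + pool)
        return rng.poisson(rate * dt)                    # spike counts per stimulus bin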
Figure 3: Left: Retrieved kernels from STA and STC analysis of ganglion cell data from a
salamander retina (cell 1999-11-12-B6A). Right: sorted eigenvalues of the spike-triggered
covariance matrix, with corresponding eigenvectors. Low eigenvalues correspond to suppressive directions, while other eigenvalues correspond to arbitrary (ignored) directions. Raw
stimulus ensemble was sphered (whitened) prior to analysis and low-variance axes underrepresented in stimulus set were discarded.
Figure 2 shows the original and estimated kernels for a model simulation with 600K input
samples and 36.98K spikes. First, we note that STA recovers an accurate estimate of the
excitatory kernel. Next, consider the sorted eigenvalues of C, as plotted in Figure 2. The
majority of the eigenvalues descend gradually (the covariance matrix of the white noise
source should have constant eigenvalues, but remember that those in Figure 2 are computed
from a finite set of samples). The last five eigenvalues are significantly below the values
one would obtain with randomly selected stimulus subsets. The eigenvectors associated
with these lowest eigenvalues span approximately the same subspace as the suppressive
kernels. Note that some eigenvectors correspond to mixtures of the original suppressive
kernels, due to non-uniqueness of the eigenvector decomposition. In contrast, eigenvectors
corresponding to eigenvalues in the gradually-descending region appear arbitrary in their
structure.
Finally, we can recover the scalar parameters of this specific model (the weights w_i and the constant σ) by selecting
them to maximize the likelihood of the spike data according to equation (2). Note that a
direct maximum likelihood solution on the raw data would have been impractical due to
the high dimensionality of the stimulus space.
2 Suppressive Axes in Retinal Ganglion Cells
Retinal ganglion cells exhibit rapid [8, 15] as well as slow [9, 16, 17] gain control. We now
demonstrate that we can recover a rapid gain control signal by applying the method to data
from salamander retina [9]. The input sequence consists of 80K time samples of full-field
33Hz flickering binary white noise (contrast 8.5%). The stimulus vectors of this sequence
are defined over a 60-segment time window. Since stimuli are finite in number and binary,
they are not spherically distributed. To correct for this, we discard low-variance axes and
whiten the stimuli within the remaining axes.
Figure 3 depicts the kernels estimated from the 623 stimulus vectors eliciting spikes. Similar to the model simulation, the eigenvalues gradually fall off, but four of the eigenvalues
appear to drop significantly below the rest. To make this more concrete, we test the hypothesis that the majority of the eigenvalues are consistent with those of randomly selected
stimulus vectors, but that the last few eigenvalues fall significantly below this range. Specifically, we perform a Monte Carlo simulation, drawing (with replacement) random subsets of 623 stimuli from the full set of raw stimuli. We also randomly select a set of (orthogonal)
Figure 4: Scatter plots from salamander ganglion cell data (cell 1999-11-12-B6A). Black
points indicate the raw stimulus set. White points indicate stimuli eliciting a spike. a, Projection of stimuli onto estimated excitatory kernel vs. arbitrary kernel. b, Projection of
stimuli onto an estimated suppressive kernel vs. arbitrary kernel.
axes, representing a suppressive subspace, and project this subspace out of the set of randomly chosen stimuli. We then compute the eigenvalues of the sample covariance matrix
of these stimuli. We repeat this resampling many times, and estimate a 95 percent confidence interval for each of the eigenvalues. The figure shows that all but the last four eigenvalues lie within the confidence interval. In practice, we repeat this process in a nested fashion, assuming initially no directions are significantly suppressive, then one direction, and so on up to four directions.
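A sketch of this Monte Carlo test follows; the number of resamplings and the percentile-band construction are illustrative assumptions.

    import numpy as np

    def eigenvalue_null_band(stimuli, n_spikes, n_suppressive, n_boot=1000, rng=None):
        # Null distribution of sorted STC eigenvalues for random stimulus subsets,
        # with a random subspace of the hypothesized dimension projected out.
        rng = rng or np.random.default_rng(0)
        d = stimuli.shape[1]
        samples = []
        for _ in range(n_boot):
            sub = stimuli[rng.integers(0, len(stimuli), n_spikes)]  # resample w/ replacement
            Q, _ = np.linalg.qr(rng.standard_normal((d, n_suppressive)))
            sub = sub - (sub @ Q) @ Q.T                  # project out random subspace
            C = sub.T @ sub / n_spikes
            samples.append(np.sort(np.linalg.eigvalsh(C)))
        lo, hi = np.percentile(samples, [2.5, 97.5], axis=0)  # 95% band per sorted rank
        return lo, hi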
These low eigenvalues correspond to eigenvectors that are concentrated in recent time (as is
the estimated excitatory kernel). The remaining eigenvectors appear to be arbitrary, spanning the full temporal window. We emphasize that these kernels should not be interpreted
to correspond to receptive fields of individual neurons underlying the suppressive signal,
but merely provide an orthogonal basis for a suppressive subspace.
We can now verify that the recovered STA axis is in fact excitatory, and the kernels corresponding to the lowest eigenvalues are suppressive. Figure 4a shows a scatter plot of the
stimuli projected onto the excitatory axis vs. an arbitrary axis. Spikes are seen to occur
only when the component along the excitatory axis is high, as expected. Figure 4b is a
scatter plot of the stimuli projected onto one of the suppressive axes vs. an arbitrary (ignored) axis. The spiking stimuli lie within an ellipse, with the minor axis corresponding to
the suppressive kernel. This is exactly what we would expect in a suppressive gain control
system (see Figure 1b).
Figure 5 illustrates recovery of a two-dimensional suppressive subspace for a macaque retinal ganglion cell. The subspace was computed from the 36.43K stimulus vectors eliciting
spikes out of a total of 284.74K vectors. The data are qualitatively similar to those of the
salamander cell, although both the strength of suppression and the specific shapes of the scatter plots differ. In addition to suppression, the method recovers facilitation (i.e., high-variance
axes) in some cells (not shown here).
3 Correcting for Bias in Kernel Estimates
The kernels in the previous section were all recovered from stimuli of a single contrast.
However, when the STA is computed in a ganglion cell for low and high contrast stimuli,
the low-contrast kernel shows a slower time course [9] (figure 7a). This would appear
inconsistent with the method we describe, in which the STA is meant to provide an estimate
of a single excitatory kernel. This behavior can be explained by assuming a model of the
form given in equation 2, and in addition dropping the constraint that the gain control
kernels are orthogonal (or identical) to the excitatory kernel.
Figure 5: a, Sorted eigenvalues of stimuli eliciting spikes from a macaque retina (cell 2001-09-29-E6A). b-c, Scatter plots of stimuli projected onto recovered axes.
Figure 6: Demonstration of estimator bias. When a gain control kernel is not orthogonal to the excitatory kernel, the responses to one side of the excitatory kernel are suppressed more than those on the other side. The resulting STA estimate is thus biased away from the true excitatory kernel, k.
First we show that when the orthogonality constraint is dropped, the STA estimate of the excitatory kernel is biased by the gain control signal. Consider a situation in which a suppressive kernel k₁ contains a component in the direction of the excitatory kernel, k. We write k₁ = αk + k⊥, where k⊥ is perpendicular to the excitatory kernel (with k and k⊥ taken to be unit norm). Then, for example, a stimulus s = ak + bk⊥, with a, b > 0, produces a suppressive component along k₁ equal to αa + b, but the corresponding paired stimulus vector s' = ak − bk⊥ produces a suppressive component of αa − b. Thus, the two stimuli are equally likely to occur but not equally likely to elicit a spike. As a result, the STA will be biased in the direction −k⊥. Figure 6 illustrates an example in which a non-orthogonal suppressive axis biases the estimate of the STA.
Now consider the model in equation 2 in the presence of a non-orthogonal suppressive subspace. Note that the bias is stronger for larger amplitude stimuli, because the constant term σ² dominates the gain control signal for weak stimuli. Indeed, we have previously hypothesized that changes in receptive field tuning can arise from divisive gain control models that include an additive constant [14].
Even when the STA estimate is biased by the gain control signal, we can still obtain an (asymptotically) unbiased estimate of the excitatory kernel. Specifically, the true excitatory kernel lies within the subspace spanned by the estimated (biased) excitatory and suppressive kernels. So, assuming a particular gain control model, we can again maximize the likelihood of the data, but now allowing both the excitatory and suppressive kernels to move within the subspace spanned by the initial estimated kernels. The resulting suppressive kernels need not be orthogonal to the excitatory kernel.
Figure 7: STA kernels estimated from low (8.5%) and high (34%) contrast salamander retinal ganglion cell data (cell 1999-11-12-B6A). Kernels are normalized to unit energy. a, STA kernels derived from ganglion cell spikes. b, STA kernels derived from simulated spikes using the ML-estimated model. c, Kernels and corresponding weights of the ML-estimated model (weights shown: 0.99, 0.97, 0.87, 0.52, 0.46).
We maximize the likelihood of the full two-contrast data set using a model that is a generalization of that given by equation (2), in which an exponent p is applied to the rectified excitatory drive:

    rate = ⌊k·s⌋^p / (σ² + \sum_i w_i (k_i·s)²)^{p/2}.    (3)

The exponent is incorporated to allow for more realistic contrast-response functions. The excitatory axis is initially set to the STA and the suppressive axes are set to the low-eigenvalue eigenvectors of the STC, along with the STA (e.g., to allow for self-suppression). The recovered axes and weights are shown in Figure 7c, and the remaining model parameters are the constant σ and the exponent p. Whereas the axes recovered from the STA/STC analysis are orthogonal, the axes determined during the maximum likelihood stage need not be (and in the data example are not) orthogonal. Figure 7b also demonstrates that the fitted model accounts for the change in STA observed at different contrast levels. Specifically, we simulate responses of the model (equation (3) with Poisson spike generation) on each of the two contrast stimulus sets, and then compute the STA based on these simulated spike trains. Although it is based on a single fixed excitatory kernel, the model exhibits a change in STA shape as a function of contrast very much like the salamander neuron.
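A rough sketch of such a constrained maximum-likelihood fit, using SciPy. This is our own reconstruction under stated assumptions: the parameterization (kernels as combinations of the columns of a basis B, log-space scalars for positivity), the Poisson log-likelihood, and the Nelder-Mead optimizer are all illustrative choices, and equation (3) is used in the reconstructed form given above.

    import numpy as np
    from scipy.optimize import minimize

    def fit_in_subspace(stimuli, spikes, B, n_sup, theta0):
        # B: (d, n_b) basis spanning the STA plus low-eigenvalue STC axes.
        n_b = B.shape[1]

        def unpack(theta):
            k = B @ theta[:n_b]                                        # excitatory kernel
            K = B @ theta[n_b:n_b + n_b * n_sup].reshape(n_b, n_sup)   # suppressive kernels
            w = np.exp(theta[-n_sup - 2:-2])                           # positive weights
            sigma, p = np.exp(theta[-2]), np.exp(theta[-1])            # positive scalars
            return k, K, w, sigma, p

        def neg_log_lik(theta):
            k, K, w, sigma, p = unpack(theta)
            drive = np.maximum(stimuli @ k, 0.0) ** p
            rate = drive / (sigma ** 2 + ((stimuli @ K) ** 2) @ w) ** (p / 2) + 1e-9
            return -(spikes * np.log(rate) - rate).sum()   # Poisson, up to a constant

        return unpack(minimize(neg_log_lik, theta0, method="Nelder-Mead").x)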
4 Discussion
We have described a spike-triggered covariance method for characterizing a neuron with
gain control, and demonstrated the plausibility of the technique through simulation and
analysis of neural data. The suppressive axes recovered from retinal ganglion cell data
appear to be significant because: (1) As in the model simulation, a small number of eigenvalues are significantly below the rest; (2) The eigenvectors associated with these axes are
concentrated in a temporal region immediately preceding the spike, unlike the remaining
axes; (3) Projection of the multi-dimensional stimulus vectors onto these axes reveal reductions of spike probability; (4) The full model, with parameters recovered through maximum
likelihood, explains changes in STA as a function of contrast.
Models of retinal processing often incorporate gain control [e.g., 8, 10, 15, 17, 18]. We
have shown for the first time how one can use white noise analysis to recover a gain control subspace. The kernels defining this subspace correspond to relatively short timescales.
Thus, it is interesting to compare the recovered subspace to models of rapid gain control.
In particular, Victor [15] proposed a retinal gain model in which the gain signal consists
of time-delayed copies of the excitatory kernel. In fact, for the cell shown in Figure 3,
the recovered suppressive subspace lies within the space spanned by shifted copies of the
excitatory kernel. The fact that we do not see evidence for slow gain control in the analysis
might indicate that these signals do not lie within a low-dimensional stimulus subspace. In
addition, the analysis is not capable of distinguishing between physiological mechanisms
that could underlie gain control behaviors. Potential candidates may include internal biochemical adjustments, non-Poisson spike generation mechanisms, synaptic depression, and
shunting inhibition due to other neurons.
This technique should be applicable to a far wider range of neural data than has been
shown here. Future work will incorporate analysis of data gathered using stimuli that vary
in both time and space (as in the simulated example of Figure 2). We are also exploring
applicability of the technique to other visual areas.
Acknowledgments We thank Liam Paninski and Jonathan Pillow for helpful discussions
and comments, and Divya Chander for data collection.
References
[1] E. deBoer and P. Kuyper. Triggered correlation. In IEEE Transact. Biomed. Eng., volume 15, pages 169–179, 1968.
[2] J. P. Jones and L. A. Palmer. The two-dimensional spatial structure of simple receptive fields in the cat striate cortex. J. Neurophysiology, 58:1187–1211, 1987.
[3] E. J. Chichilnisky. A simple white noise analysis of neuronal light responses. Network: Computation in Neural Systems, 12(2):199–213, 2001.
[4] D. L. Ringach, G. Sapiro, and R. Shapley. A subspace reverse-correlation technique for the study of visual neurons. Vision Research, 37:2455–2464, 1997.
[5] R. de Ruyter van Steveninck and W. Bialek. Coding and information transfer in short spike sequences. In Proc. Soc. Lond. B. Biol. Sci., volume 234, pages 379–414, 1988.
[6] B. A. Y. Arcas, A. L. Fairhall, and W. Bialek. What can a single neuron compute? In Advances in Neural Information Processing Systems, volume 13, pages 75–81, 2000.
[7] M. Carandini, D. J. Heeger, and J. A. Movshon. Linearity and normalization in simple cells of the macaque primary visual cortex. Journal of Neuroscience, 17:8621–8644, 1997.
[8] R. M. Shapley and J. D. Victor. The effect of contrast on the transfer properties of cat retinal ganglion cells. J. Physiol. (Lond), 285:275–298, 1978.
[9] D. Chander and E. J. Chichilnisky. Adaptation to temporal contrast in primate and salamander retina. J. Neurosci., 21(24):9904–9916, 2001.
[10] R. Shapley and C. Enroth-Cugell. Visual adaptation and retinal gain control. Progress in Retinal Research, 3:263–346, 1984.
[11] R. F. Lyon. Automatic gain control in cochlear mechanics. In P. Dallos et al., editor, The Mechanics and Biophysics of Hearing, pages 395–420. Springer-Verlag, 1990.
[12] W. S. Geisler and D. G. Albrecht. Cortical neurons: Isolation of contrast gain control. Vision Research, 8:1409–1410, 1992.
[13] D. J. Heeger. Normalization of cell responses in cat striate cortex. Vis. Neuro., 9:181–198, 1992.
[14] O. Schwartz and E. P. Simoncelli. Natural signal statistics and sensory gain control. Nature Neuroscience, 4(8):819–825, August 2001.
[15] J. D. Victor. The dynamics of the cat retinal X cell centre. J. Physiol., 386:219–246, 1987.
[16] S. M. Smirnakis, M. J. Berry, D. K. Warland, W. Bialek, and M. Meister. Adaptation of retinal processing to image contrast and spatial scale. Nature, 386:69–73, March 1997.
[17] K. J. Kim and F. Rieke. Temporal contrast adaptation in the input and output signals of salamander retinal ganglion cells. J. Neurosci., 21(1):287–299, 2001.
[18] M. Meister and M. J. Berry. The neural code of the retina. Neuron, 22:435–450, 1999.
1,068 | 1,976 | Adaptive Sparseness Using Jeffreys Prior
Mário A. T. Figueiredo
Institute of Telecommunications,
and Department of Electrical and Computer Engineering.
Instituto Superior Técnico
1049-001 Lisboa, Portugal
mtf@lx.it.pt
Abstract
In this paper we introduce a new sparseness inducing prior which does not involve any (hyper)parameters that need to be adjusted or estimated. Although other applications are possible, we focus here on supervised learning problems: regression and classification. Experiments with several publicly available benchmark data sets show that the proposed approach
yields state-of-the-art performance. In particular, our method outperforms support vector
machines and performs competitively with the best alternative techniques, both in terms
of error rates and sparseness, although it involves no tuning or adjusting of sparseness-controlling hyper-parameters.
1 Introduction
The goal of supervised learning is to infer a functional relation y = f(x), based on a set of (maybe noisy) training examples D = {(x₁, y₁), ..., (xₙ, yₙ)}. Usually, the inputs xᵢ are vectors. When y is continuous (typically y ∈ ℝ), we are in the context of regression, whereas in classification y is of categorical nature (e.g., y ∈ {−1, 1}). Usually, the structure of f is assumed fixed and the objective is to estimate a vector of parameters β defining it; accordingly we write y = f(x, β).
To achieve good generalization (i.e. to perform well on yet unseen data) it is necessary
to control the complexity of the learned function (see [1] - [4], and the many references
therein). In Bayesian approaches, complexity is controlled by placing a prior on the function to be learned, i.e., on β. This should not be confused with a generative (informative) Bayesian approach, since it involves no explicit modelling of the joint probability P(x, y).
A common choice is a zero-mean Gaussian prior, which appears under different names,
like ridge regression [5], or weight decay, in the neural learning literature [6]. Gaussian
priors are also used in non-parametric contexts, like the Gaussian processes (GP) approach
[2], [7], [8], [9], which has roots in earlier spline models [10] and regularized radial basis
functions [11]. Very good performance has been reported for methods based on Gaussian
priors [8], [9]. Their main disadvantage is that they do not control the structural complexity
of the resulting functions. That is, if one of the components of β (say, a weight in a neural network) happens to be irrelevant, a Gaussian prior will not set it exactly to zero, thus
This work was partially supported by the Portuguese Foundation for Science and Technology
(FCT), Ministry of Science and Technology, under project POSI/33143/SRI/2000.
pruning that parameter, but to some small value.
Sparse estimates (i.e., in which irrelevant parameters are set exactly to zero) are desirable
because (in addition to other learning-theoretic reasons [4]) they correspond to a structural
simplification of the estimated function. Using Laplacian priors (equivalently, ℓ₁-penalized regularization) is known to promote sparseness [12] - [15]. Support vector machines (SVM) take a non-Bayesian approach to the goal of sparseness [2], [4]. Interestingly, however, it can be shown that the SVM and ℓ₁-penalized regression are closely related [13].
Both in approaches based on Laplacian priors and in SVMs, there are hyper-parameters
which control the degree of sparseness of the obtained estimates. These are commonly
adjusted using cross-validation methods which do not optimally utilize the available data,
and are time consuming. We propose an alternative approach which involves no hyperparameters. The key steps of our proposal are: (i) a hierarchical Bayes interpretation
of the Laplacian prior as a normal/independent distribution (as used in robust regression
[16]); (ii) a Jeffreys' non-informative second-level hyper-prior (in the same spirit as [17]) which expresses scale-invariance and, more importantly, is parameter-free [18]; (iii) a simple expectation-maximization (EM) algorithm which yields a maximum a posteriori (MAP) estimate of β (and of the observation noise variance, in the case of regression).
Our method is related to the automatic relevance determination (ARD) concept [7], [19],
which underlies the recently proposed relevance vector machine (RVM) [20], [21]. The
RVM exhibits state-of-the-art performance, beating SVMs both in terms of accuracy and
sparseness [20], [21]. However, we do not resort to a type-II maximum likelihood approximation [18] (as in ARD and RVM); rather, our modelling assumptions lead to a marginal a
posteriori probability function on β whose mode is located by a very simple EM algorithm. Like the RVM, but unlike the SVM, our classifier produces probabilistic outputs.
Experimental evaluation of the proposed method, both with synthetic and real data, shows
that it performs competitively with (often better than) GP-based methods, RVM, and SVM.
2 Regression
We consider functions of the type f(x, β) = βᵀ h(x), i.e., that are linear with respect to β (whose dimensionality we will denote by k). This includes: (i) classical linear regression, where h(x) = [1, x₁, ..., x_d]ᵀ; (ii) nonlinear regression via a set of basis functions, where h(x) = [φ₁(x), ..., φ_k(x)]ᵀ; (iii) kernel regression, where h(x) = [1, K(x, x₁), ..., K(x, xₙ)]ᵀ and K(x, xᵢ) is some (symmetric) kernel function [2] (as in SVM and RVM regression), not necessarily verifying Mercer's condition.
With a zero-mean Gaussian prior with covariance A, p(β|A) = N(β|0, A), the posterior p(β|y) is still Gaussian, with mean and mode at β̂ = (σ² A⁻¹ + Hᵀ H)⁻¹ Hᵀ y. When A is proportional to the identity, say A = τI, this is called ridge regression [5].
With a Laplacian prior for β, p(β|α) = (α/2)^k exp(−α ‖β‖₁), the posterior p(β|y) is not Gaussian. The maximum a posteriori (MAP) estimate is given by

    β̂ = arg min_β { ‖y − Hβ‖² + 2σ²α ‖β‖₁ },    (1)
We follow the standard assumption that yᵢ = f(xᵢ, β) + nᵢ, for i = 1, ..., n, where the nᵢ are independent zero-mean Gaussian variables with variance σ². With y = [y₁, ..., yₙ]ᵀ, the likelihood function is then p(y|β) = N(y|Hβ, σ²I), where H is the n × k design matrix, which depends on the xᵢ's and on the adopted function representation, and N(z|μ, C) denotes a Gaussian density of mean μ and covariance C, evaluated at z.
In (1), ‖·‖ denotes the Euclidean (ℓ₂) norm and ‖·‖₁ the ℓ₁ norm. In linear regression this criterion is called the LASSO (least absolute shrinkage and selection operator) [14].
The main effect of the ℓ₁ penalty is that some of the components of β̂ may be exactly zero. If H is an orthogonal matrix, (1) can be solved separately for each βⱼ, leading to the soft-threshold estimation rule, widely used in wavelet-based signal/image denoising [22].
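For an orthonormal H, each component reduces to the familiar rule below (a two-line sketch; here the threshold lam plays the role of σ²α in (1), and u is the vector of least-squares coefficients Hᵀy):

    import numpy as np

    def soft_threshold(u, lam):
        # Componentwise LASSO solution for an orthonormal design:
        # shrink each least-squares coefficient toward zero and clip at zero.
        return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)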
Let us consider an alternative model: let each βⱼ have a zero-mean Gaussian prior p(βⱼ|τⱼ) = N(βⱼ|0, τⱼ), with its own variance τⱼ (as in ARD and RVM). Now, rather than adopting a type-II maximum likelihood criterion (as in ARD and RVM), let us consider hyper-priors for the τⱼ's and integrate them out. Assuming exponential hyper-priors, p(τⱼ|γ) = (γ/2) exp(−γτⱼ/2) for τⱼ ≥ 0 (because these are variances), we obtain

    p(βⱼ|γ) = ∫ p(βⱼ|τⱼ) p(τⱼ|γ) dτⱼ = (√γ/2) exp(−√γ |βⱼ|).

This shows that the Laplacian prior is equivalent to a 2-level hierarchical-Bayes model: zero-mean Gaussian priors with independent, exponentially distributed variances. This decomposition has been exploited in robust least absolute deviation (LAD) regression [16].
The hierarchical decomposition of the Laplacian prior allows using the EM algorithm to implement the LASSO criterion in (1) by simply regarding τ = [τ₁, ..., τ_k]ᵀ as hidden/missing data. In fact, the complete log-posterior (with a flat prior for σ², and where Υ = diag(τ₁⁻¹, ..., τ_k⁻¹)),

    log p(β, σ² | y, τ) ∝ −(n/2) log σ² − ‖y − Hβ‖²/(2σ²) − βᵀ Υ β / 2,    (2)

is easy to maximize with respect to β and σ². The E-step reduces to the computation of the conditional expectation of Υ, given the current (at iteration t) estimates σ̂²_(t) and β̂_(t); this leads to Υ_(t) = diag(√γ/|β̂₁,(t)|, ..., √γ/|β̂_k,(t)|). The M-step is then defined by the two following update equations:

    σ̂²_(t+1) = ‖y − H β̂_(t)‖² / n,    (3)

and

    β̂_(t+1) = (σ̂²_(t+1) Υ_(t) + Hᵀ H)⁻¹ Hᵀ y.    (4)
This EM algorithm is not the most efficient way to solve (1); see, e.g., the methods proposed
in [23], [14]. Our main goal is to open the way to the adoption of different hyper-priors.
A +" ./ "
=
( #'%& !( '# %& +( '# %&
One question remains: how to adjust γ, which controls the degree of sparseness of the estimates? Our proposal is to remove γ from the model by replacing the exponential hyper-prior by a non-informative Jeffreys hyper-prior: p(τ) ∝ 1/τ. This prior expresses ignorance with respect to scale (see [17], [18]) and, most importantly, it is parameter-free. Of course this is no longer equivalent to a Laplacian prior on β, but to some other prior. As will be shown experimentally, this prior strongly induces sparseness and yields state-of-the-art performance. Computationally, this choice leads to a minor modification of the EM algorithm described above: matrix V⁽ᵗ⁾ is now given by V⁽ᵗ⁾ = diag(|β̂_1⁽ᵗ⁾|⁻², ..., |β̂_k⁽ᵗ⁾|⁻²).
Since several of the β̂_i's may go to zero, it is not convenient to deal with V⁽ᵗ⁾. However, we can re-write the M-step as

    β̂⁽ᵗ⁺¹⁾ = U⁽ᵗ⁾ ( σ̂²⁽ᵗ⁺¹⁾ I + U⁽ᵗ⁾ HᵀH U⁽ᵗ⁾ )⁻¹ U⁽ᵗ⁾ Hᵀ y,

where U⁽ᵗ⁾ = diag(|β̂_1⁽ᵗ⁾|, ..., |β̂_k⁽ᵗ⁾|), thus avoiding the inversion of the elements of β̂⁽ᵗ⁾. Moreover, it is not necessary to invert the matrix, but simply to solve the corresponding linear system, whose dimension is only the number of non-zero elements in U⁽ᵗ⁾.
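A minimal sketch of the resulting EM iteration, assuming the reconstruction above (this is our illustration, not the authors' code; the names H, y, beta are ours):

```python
import numpy as np

def jeffreys_em(H, y, n_iter=100, tol=1e-8):
    # EM for sparse regression with the Jeffreys hyper-prior, Eqs. (3)-(4).
    n, k = H.shape
    beta = np.linalg.lstsq(H, y, rcond=None)[0]        # least-squares start
    sigma2 = np.sum((y - H @ beta) ** 2) / n
    for _ in range(n_iter):
        sigma2 = np.sum((y - H @ beta) ** 2) / n       # M-step for sigma^2, Eq. (3)
        u = np.abs(beta)                               # U = diag(|beta_i|)
        nz = u > tol                                   # restrict to non-zero coords
        Hu = H[:, nz] * u[nz]                          # columns of H scaled by U
        beta_new = np.zeros(k)
        # M-step in the U-form: beta = U (sigma2 I + U H'H U)^{-1} U H' y
        beta_new[nz] = u[nz] * np.linalg.solve(
            sigma2 * np.eye(int(nz.sum())) + Hu.T @ Hu, Hu.T @ y)
        if np.max(np.abs(beta_new - beta)) < tol:
            beta = beta_new
            break
        beta = beta_new
    return beta, sigma2
```

As the text notes, only a linear system of the size of the current support is solved at each iteration.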
3 Regression experiments
Our first example illustrates the use of the proposed method for variable selection in standard linear regression. Consider a sequence of 20 true β's, having from 1 to 20 non-zero components (out of 20). For each β, we obtain 100 random design matrices, following the procedure in [14], and for each of these, we obtain data points with unit noise variance. Fig. 1(a) shows the mean number of estimated non-zero components, as a function of the true number. Our method exhibits a very good ability to find the correct number of nonzero components in β, in an adaptive manner.
[Figure 1 omitted. Panel (a) axes: "Estim. # of nonzero parameters" versus "True # of nonzero parameters"; panel (b) shows a curve over roughly x in [-8, 8].]
Figure 1: (a) Mean number of nonzero components in β̂ versus the number of nonzero components in β (the dotted line is the identity). (b) Kernel regression. Dotted line: true function sinc(x) = sin(x)/x. Dots: 50 noisy observations. Solid line: the estimated function. Circles: data points corresponding to the non-zero parameters.
We now consider two of the experimental setups of [14] (the specific parameter vectors and noise levels follow [14]); in both cases, the design matrices are generated as in [14]. In Table 1, we compare the relative modelling error improvement (with respect to the least squares solution) of our method and of several methods studied in [14]. Our method performs comparably with the best method for each case, although it involves no tuning or adjustment of parameters, and is computationally faster.
Table 1: Relative (%) improvement in modeling error of several methods.
[Table 1 omitted: rows are Proposed method, LASSO (CV), LASSO (GCV), and Subset selection; the numeric entries did not survive extraction.]
We now study the performance of our method in kernel regression, using Gaussian kernels, i.e., K(x, x_i) = exp(−‖x − x_i‖² / (2h²)). We begin by considering the synthetic example studied in [20] and [21], where the true function is sinc(x) = sin(x)/x (see Fig. 1(b)). To compare our results to the RVM and the variational RVM (VRVM), we ran the algorithm on 25 generations of the noisy data. The results are summarized in Table 2 (which also includes the SVM results from [20]). Finally, we have also applied our method to the well-known Boston housing data-set (20 random partitions of the full data-set into 481 training samples and 25 test samples); Table 2 shows the results, again versus SVM, RVM, and VRVM regression (as reported in [20]). In these tests, our method performs better than RVM, VRVM, and SVM regression, although it doesn't require any tuning.
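A sketch of how the kernel-regression setup can be assembled (the 2h² kernel parametrization and all numeric values here are our assumptions for illustration; jeffreys_em is the sketch from Section 2):

```python
import numpy as np

def gaussian_design_matrix(X, centers, h):
    # H[i, j] = exp(-||x_i - c_j||^2 / (2 h^2))
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * h ** 2))

rng = np.random.default_rng(0)
x = rng.uniform(-10.0, 10.0, size=(50, 1))
y = np.sinc(x[:, 0] / np.pi) + 0.1 * rng.standard_normal(50)  # sin(x)/x + noise
H = gaussian_design_matrix(x, x, h=2.0)
beta, sigma2 = jeffreys_em(H, y)
print((np.abs(beta) > 1e-8).sum(), "kernels retained out of", len(beta))
```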
Table 2: Mean root squared errors and mean number of kernels for the "sinc" function and the Boston housing examples.
[Table 2 omitted: for each example the columns are Method, MSE, and No. kernels; the methods compared are New method, SVM, RVM, and VRVM. The numeric entries did not survive extraction.]
4 Classification
In classification the formulation is somewhat more complicated, with the standard approach being generalized linear models [24]. For a two-class problem (y ∈ {−1, 1}), the probability that an observation belongs to, say, class 1 is given by a nonlinear function ψ (called the link), P(y = 1 | x) = ψ(h(x)ᵀ β), where h(x) can have one of the forms referred to in the first paragraph of Section 2 (linear, nonlinear, kernel). Although the most common choice for ψ is the logistic function, ψ(z) = exp(z) / (1 + exp(z)), in this paper we adopt the probit model

    ψ(z) = Φ(z) ≡ ∫₋∞^z N(t | 0, 1) dt,    (5)

where Φ(z) is the standard Gaussian cumulative distribution function (cdf). The probit model has a simple interpretation in terms of hidden variables [25], which we will exploit. Consider a hidden variable z = h(x)ᵀ β + w, where w ~ N(0, 1). Then, if the classification rule is y = 1 if z ≥ 0, and y = −1 if z < 0, we obtain the probit model: P(y = 1 | x) = Φ(h(x)ᵀ β).
Given training data D = {(x_1, y_1), ..., (x_n, y_n)}, consider the corresponding vector of hidden/missing variables z = [z_1, ..., z_n]ᵀ. If we had z, we would have a simple linear regression likelihood p(z | β) = N(z | Hβ, I). This fact suggests using the EM algorithm to estimate β, by treating z as missing data.
To promote sparseness, we will adopt the same hierarchical prior on β that we have used for regression: p(β_i | τ_i) = N(β_i | 0, τ_i) and p(τ_i) ∝ 1/τ_i (the Jeffreys prior). The complete log-posterior (with the hidden vectors τ and z) is

    log p(β | y, τ, z) ∝ −‖z − Hβ‖² − βᵀ Υ⁻¹ β,    (6)
which is similar to (2), except for the noise variance, which is not needed here (the probit model fixes it at 1), and for the fact that now z is missing. The expected value of Υ⁻¹ is similar to the regression case; accordingly we define the same diagonal matrix U⁽ᵗ⁾ = diag(|β̂_1⁽ᵗ⁾|, ..., |β̂_k⁽ᵗ⁾|). In addition, we also need v⁽ᵗ⁾ ≡ E[z | β̂⁽ᵗ⁾, y] (notice that the complete log-posterior is linear with respect to z), which can be expressed in closed form, for each i, as

    v_i⁽ᵗ⁾ = h_iᵀ β̂⁽ᵗ⁾ + N(h_iᵀ β̂⁽ᵗ⁾ | 0, 1) / (1 − Φ(−h_iᵀ β̂⁽ᵗ⁾)),   if y_i = 1,
    v_i⁽ᵗ⁾ = h_iᵀ β̂⁽ᵗ⁾ − N(h_iᵀ β̂⁽ᵗ⁾ | 0, 1) / Φ(−h_iᵀ β̂⁽ᵗ⁾),          if y_i = −1.    (7)

These expressions are easily derived after noticing that z_i is (conditionally) Gaussian with mean h_iᵀ β̂⁽ᵗ⁾, but left-truncated at zero if y_i = 1, and right-truncated at zero if y_i = −1. With v⁽ᵗ⁾ = [v_1⁽ᵗ⁾, ..., v_n⁽ᵗ⁾]ᵀ playing the role of observed data, the M-step is similar to the regression case:

    β̂⁽ᵗ⁺¹⁾ = U⁽ᵗ⁾ ( I + U⁽ᵗ⁾ HᵀH U⁽ᵗ⁾ )⁻¹ U⁽ᵗ⁾ Hᵀ v⁽ᵗ⁾.
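A sketch of the resulting classifier training loop, assuming the reconstruction above (our illustration, not the authors' code):

```python
import numpy as np
from scipy.stats import norm

def probit_jeffreys_em(H, y, n_iter=100, tol=1e-8):
    # Sparse probit classification via EM; y must be in {-1, +1}.
    n, k = H.shape
    beta = np.linalg.lstsq(H, y.astype(float), rcond=None)[0]  # crude start
    for _ in range(n_iter):
        m = H @ beta
        phi = norm.pdf(m)
        Phi = np.clip(norm.cdf(m), 1e-12, 1.0 - 1e-12)
        # E-step for z, Eq. (7): means of truncated Gaussians
        v = np.where(y == 1, m + phi / Phi, m - phi / (1.0 - Phi))
        u = np.abs(beta)
        nz = u > tol
        Hu = H[:, nz] * u[nz]
        beta_new = np.zeros(k)
        # M-step: beta = U (I + U H'H U)^{-1} U H' v
        beta_new[nz] = u[nz] * np.linalg.solve(
            np.eye(int(nz.sum())) + Hu.T @ Hu, Hu.T @ v)
        if np.max(np.abs(beta_new - beta)) < tol:
            beta = beta_new
            break
        beta = beta_new
    return beta

# Prediction: P(y = 1 | x) = Phi(h(x)' beta).
```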
5 Classification experiments
In all the experiments we use kernel classifiers, with Gaussian kernels, i.e., K(x, x_i) = exp(−‖x − x_i‖² / (2h²)), where h is a parameter that controls the kernel width.
Our first experiment is mainly illustrative and uses Ripley's synthetic data¹; the optimal error rate for this problem is known [3]. Table 3 shows the average test set error (on 1000 test samples) and the final number of kernels, for 20 classifiers learned from 20 random subsets of size 100 from the original 250 training samples. For comparison, we also include results (from [20]) for RVM, variational RVM (VRVM), and SVM classifiers. On this data set, our method performs competitively with RVM and VRVM and much better than SVM (especially in terms of sparseness). To allow the comparisons, we chose the same kernel width as in [20]. Table 3 also reports the numbers of errors achieved by the proposed method and by several state-of-the-art techniques on three well-known benchmark problems: the Pima Indians diabetes², the Leptograpsus crabs², and the Wisconsin breast cancer³ (WBC). For the WBC, we report average results over 30 random partitions (300/269 training/testing, as in [26]). All the inputs are normalized to zero mean and unit variance; the kernel width was set to one value for the Pima and crabs problems, and to another for the WBC. On the Pima and crabs data sets, our algorithm outperforms all the other techniques. On the WBC data set, our method performs nearly as well as the best available alternative. The running time of our learning algorithm (in MATLAB, on a PIII-800MHz) is less than 1 second for crabs, and about 2 seconds for the Pima and WBC problems. Finally, notice that the classifiers obtained with our algorithm are much sparser than the SVM classifiers.
Table 3: Numbers of test set errors for the four data sets studied (see text for details). The numbers in square brackets in the "method" column indicate the bibliographic reference from which the results are quoted. The numbers in parentheses indicate the (mean) number of kernels used by the classifiers (when available).

    Method                        Ripley's    Pima       Crabs   WBC
    Proposed method               94  (4.8)   61 (6)     0 (5)   8.5 (5)
    SVM [20]                      106 (38)    64 (110)   N/A     N/A
    RVM [20]                      93  (4)     65 (4)     N/A     N/A
    VRVM [20]                     92  (4)     65 (4)     N/A     N/A
    SVM [26]                      N/A         64         4       9
    Neural network [9]            N/A         75         3       N/A
    Logistic regression [9]       N/A         66         4       N/A
    Linear discriminant [26]      N/A         67         3       19
    Gaussian process [9], [26]    N/A         68, 67     3       8
1 Available (divided into training/test sets) at: http://www.stats.ox.ac.uk/pub/PRNN/
2 Available at: http://www.stats.ox.ac.uk/pub/PRNN/
3 Available at: http://www.ics.uci.edu/~mlearn/MLSummary.html
6 Concluding remarks
We have introduced a new sparseness inducing prior related to the Laplacian prior. Its main
feature is the absence of any hyper-parameters to be adjusted or estimated. Experiments
with several publicly available benchmark data sets, both for regression and classification,
have shown state-of-the-art performance. In particular, our approach outperforms support
vector machines and Gaussian process classifiers both in terms of error rate and sparseness,
although it involves no tuning or adjusting of sparseness-controlling hyper-parameters.
Future research includes testing on large-scale problems, like handwritten digit classification. One of the weak points of our approach, when used with kernel-based methods, is the
need to solve a linear system in the M-step (of dimension equal to the number of training
points) whose computational requirements make it impractical to use with very large training data sets. This issue is of current interest to researchers in kernel-based methods (e.g.,
[27]), and we also intend to focus on it.
References
[1] V. Cherkassky and F. Mulier, Learning from Data: Concepts, Theory, and Methods. New York: Wiley, 1998.
[2] N. Cristianini and J. Shawe-Taylor, Support Vector Machines and Other Kernel-Based Learning Methods. Cambridge University Press, 2000.
[3] B. Ripley, Pattern Recognition and Neural Networks. Cambridge University Press, 1996.
[4] V. Vapnik, Statistical Learning Theory. New York: John Wiley, 1998.
[5] A. Hoerl and R. Kennard, "Ridge regression: biased estimation for nonorthogonal problems," Technometrics, vol. 12, pp. 55-67, 1970.
[6] C. Bishop, Neural Networks for Pattern Recognition. Oxford University Press, 1995.
[7] R. Neal, Bayesian Learning for Neural Networks. New York: Springer Verlag, 1996.
[8] C. Williams, "Prediction with Gaussian processes: from linear regression to linear prediction and beyond," in Learning and Inference in Graphical Models, Kluwer, 1998.
[9] C. Williams and D. Barber, "Bayesian classification with Gaussian priors," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 20, no. 12, pp. 1342-1351, 1998.
[10] G. Kimeldorf and G. Wahba, "A correspondence between Bayesian estimation of stochastic processes and smoothing by splines," Annals of Mathematical Statistics, vol. 41, pp. 495-502, 1970.
[11] T. Poggio and F. Girosi, "Networks for approximation and learning," Proceedings of the IEEE, vol. 78, pp. 1481-1497, 1990.
[12] S. Chen, D. Donoho, and M. Saunders, "Atomic decomposition by basis pursuit," SIAM Journal on Scientific Computing, vol. 20, no. 1, pp. 33-61, 1998.
[13] F. Girosi, "An equivalence between sparse approximation and support vector machines," Neural Computation, vol. 10, pp. 1445-1480, 1998.
[14] R. Tibshirani, "Regression shrinkage and selection via the lasso," Journal of the Royal Statistical Society (B), vol. 58, 1996.
[15] P. Williams, "Bayesian regularization and pruning using a Laplace prior," Neural Computation, vol. 7, pp. 117-143, 1995.
[16] K. Lange and J. Sinsheimer, "Normal/independent distributions and their applications in robust regression," Journal of Computational and Graphical Statistics, vol. 2, pp. 175-198, 1993.
[17] M. Figueiredo and R. Nowak, "Wavelet-based image estimation: an empirical Bayes approach using Jeffreys' noninformative prior," IEEE Transactions on Image Processing, vol. 10, pp. 1322-1331, 2001.
[18] J. Berger, Statistical Decision Theory and Bayesian Analysis. Springer-Verlag, 1980.
[19] D. MacKay, "Bayesian non-linear modelling for the 1993 energy prediction competition," in Maximum Entropy and Bayesian Methods, G. Heidbreder, ed., pp. 221-234, Kluwer, 1996.
[20] C. Bishop and M. Tipping, "Variational relevance vector machines," in Proceedings of the 16th Conference on Uncertainty in Artificial Intelligence, pp. 46-53, Morgan Kaufmann, 2000.
[21] M. Tipping, "The relevance vector machine," in Advances in Neural Information Processing Systems - NIPS 12 (S. Solla, T. Leen, and K.-R. Müller, eds.), pp. 652-658, MIT Press, 2000.
[22] D. L. Donoho and I. M. Johnstone, "Ideal adaptation via wavelet shrinkage," Biometrika, vol. 81, pp. 425-455, 1994.
[23] M. Osborne, B. Presnell, and B. Turlach, "A new approach to variable selection in least squares problems," IMA Journal of Numerical Analysis, vol. 20, pp. 389-404, 2000.
[24] P. McCullagh and J. Nelder, Generalized Linear Models. London: Chapman and Hall, 1989.
[25] J. Albert and S. Chib, "Bayesian analysis of binary and polychotomous response data," Journal of the American Statistical Association, vol. 88, pp. 669-679, 1993.
[26] M. Seeger, "Bayesian model selection for support vector machines, Gaussian processes and other kernel classifiers," in Advances in Neural Information Processing - NIPS 12 (S. Solla, T. Leen, and K.-R. Müller, eds.), pp. 603-609, MIT Press, 2000.
[27] C. Williams and M. Seeger, "Using the Nyström method to speed up kernel machines," in NIPS 13, MIT Press, 2001.
Learning spike-based correlations and
conditional probabilities in silicon
Aaron P. Shon
David Hsu
Chris Diorio
Department of Computer Science and Engineering
University of Washington
Seattle, WA 98195-2350 USA
{aaron, hsud, diorio}@cs.washington.edu
Abstract
We have designed and fabricated a VLSI synapse that can learn a
conditional probability or correlation between spike-based inputs
and feedback signals. The synapse is low power, compact, provides
nonvolatile weight storage, and can perform simultaneous multiplication and adaptation. We can calibrate arrays of synapses to ensure uniform adaptation characteristics. Finally, adaptation in our
synapse does not necessarily depend on the signals used for computation. Consequently, our synapse can implement learning rules
that correlate past and present synaptic activity. We provide analysis and experimental chip results demonstrating the operation in
learning and calibration mode, and show how to use our synapse to
implement various learning rules in silicon.
1 Introduction
Computation with conditional probabilities and correlations underlies many models of
neurally inspired information processing. For example, in the sequence-learning neural
network models proposed by Levy [1], synapses store the log conditional probability that
a presynaptic spike occurred given that the postsynaptic neuron spiked sometime later.
Boltzmann machine synapses learn the difference between the correlations of pairs of
neurons in the sleep and wake phase [2]. In most neural models, computation and adaptation occurs at the synaptic level. Hence, a silicon synapse that can learn conditional probabilities or correlations between pre- and post-synaptic signals can be a key part of many
silicon neural-learning architectures.
We have designed and implemented a silicon synapse, in a 0.35 µm CMOS process, that
learns a synaptic weight that corresponds to the conditional probability or correlation
between binary input and feedback signals. This circuit utilizes floating-gate transistors to
provide both nonvolatile storage and weight adaptation mechanisms [3]. In addition, the
circuit is compact, low power, and provides simultaneous adaptation and computation.
Our circuit improves upon previous implementations of floating-gate based learning synapses [3,4,5] in several ways.
First, our synapse appears to be the first spike-based floating-gate synapse that implements a general learning principle, rather than a particular learning rule [4,5]. We demon-
strate that our synapse can learn either the conditional probability or the correlation between input and feedback signals. Consequently, we can implement a wide range of synaptic learning networks with our circuit.
Second, unlike the general correlational learning synapse proposed by Hasler et. al. [3],
our synapse can implement learning rules that correlate pre- and postsynaptic activity that
occur at different times. Learning algorithms that employ time-separated correlations
include both temporal difference learning [6] and recently postulated temporally asymmetric Hebbian learning [7]. Hasler?s correlational floating-gate synapse can only perform updates based on the present input and feedback signals, and is therefore unsuitable
for learning rules that correlate signals that occur at different times. Because signals that
control adaptation and computation in our synapse are separate, our circuit can implement these time-dependent learning rules.
Finally, we can calibrate our synapses to remove mismatch between the adaptation
mechanisms of individual synapses. Mismatch between the same adaptation mechanisms
on different floating-gate transistors limits the accuracy of learning rules based on these
devices. This problem has been noted in previous circuits that use floating-gate adaptation [4,8]. In our circuit, different synapses can learn widely divergent weights from the
same inputs because of component mismatch. We provide a calibration mechanism that
enables identical adaptation across multiple synapses despite device mismatch. To our
knowledge, this circuit is the first instance of a floating-gate learning circuit that includes
this feature.
This paper is organized as follows. First, we provide a brief introduction to floating-gate
transistors. Next, we provide a description and analysis of our synapse, demonstrating
that it can learn the conditional probability or correlation between a pair of binary signals.
We then describe the calibration circuitry and show its effectiveness in compensating for
adaptation mismatches. Finally, we discuss how this synapse can be used for silicon implementations of various learning networks.
2 Floating-gate transistors
Because our circuit relies on floating-gate transistors to achieve adaptation, we begin by
briefly discussing these devices. A floating-gate transistor (e.g. transistor M3 of Fig.1(a))
comprises a MOSFET whose gate is isolated on all sides by SiO2. A control gate capacitively couples signals to the floating gate. Charge stored on the floating gate implements a nonvolatile analog weight; the transistor?s output current varies with both the
floating-gate voltage and the control-gate voltage. We use Fowler-Nordheim tunneling
[9] to increase the floating-gate charge, and impact-ionized hot-electron injection (IHEI)
[10] to decrease the floating-gate charge. We tunnel by placing a high voltage on a tunneling implant, denoted by the arrow in Fig.1(a). We inject by imposing more than about
3V across the drain and source of transistor M3. The circuit allows simultaneous adaptation and computation, because neither tunneling nor IHEI interfere with circuit operation.
Over a wide range of tunneling voltages Vtun, we can approximate the magnitude of the
tunneling current Itun as [4]:
    I_tun = I_tun0 exp( (V_tun − V_fg) / V_χ )    (1)

where V_tun is the tunneling-implant voltage, V_fg is the floating-gate voltage, and I_tun0 and V_χ are fit constants. Over a wide range of transistor drain and source voltages, we can approximate the magnitude of the injection current I_inj as [4]:

    I_inj = I_inj0 · I_s^(1 − U_t/V_γ) · exp( (V_s − V_d) / V_γ )    (2)

where V_s and V_d are the source and drain voltages, I_s is the source current, I_inj0 is a pre-exponential current, V_γ is a constant that depends on the VLSI process, and U_t is the thermal voltage kT/q.
3 The silicon synapse
We show our silicon synapse in Fig.1. The synapse stores an analog weight W, multiplies
W by a binary input Xin, and adapts W to either a conditional probability P(Xcor|Y) or a
correlation P(XcorY). Xin is analogous to a presynaptic input, while Y is analogous to a
postsynaptic signal or error feedback. Xcor is a presynaptic adaptation signal, and typically
has some relationship with Xin. We can implement different learning rules by altering the
relationship between Xcor and Xin. For some examples, see section 4.
We now describe the circuit in more detail. The drain current of floating-gate transistor
M4 represents the weight value W. Because the control gate of M4 is fixed, W depends
solely on the charge on floating-gate capacitor C1. We can switch the drain current on or
off using transistor M7; this switching action corresponds to a multiplication of the
weight value W by a binary input signal, Xin. We choose values for the drain voltage of
the M4 to prevent injection. A second floating-gate transistor M3, whose gate is also connected to C1, controls adaptation by injection and tunneling. Simultaneously high input
signals Xcor and Y cause injection, increasing the weight. A high Vtun causes tunneling,
decreasing the weight. We either choose to correlate a high Vtun with signal Y or provide
a fixed high Vtun throughout the adaptation process. The choice determines whether the
circuit learns a conditional probability or a correlation, respectively.
Because the drain current sourced by M4 provides is the weight W, we can express W in
terms of M4?s floating-gate voltage, Vfg. Vfg includes the effects of both the fixed controlgate voltage and the variable floating-gate charge. The expression differs depending on
whether the readout transistor is operating in the subthreshold or above-threshold regime.
We provide both expressions below:
    W = I_0 exp( −κ² V_fg / ((1+κ) U_t) )       below threshold
    W = ( κ V_fg / (1+κ) − V_0 )²               above threshold    (3)

Here V_0 is a constant that depends on the threshold voltage and on Vdd, U_t is the thermal voltage kT/q, κ is the floating-gate-to-channel coupling coefficient, and I_0 is a fixed bias current. Eq. 3 shows that W depends solely on V_fg (all the other factors are constants). These equations differ slightly from standard equations for the source current through a transistor due to source degeneration caused by M4. This degeneration smoothes the nonlinear relationship between V_fg and I_s; its addition to the circuit is optional.
3.1 Weight adaptation
Because W depends on Vfg, we can control W by tunneling or injecting transistor M3. In
this section, we show that these mechanisms enable our circuit to learn the correlation or
conditional probability between inputs Xcor (which we will refer to as X) and Y. Our
analysis assumes that these statistics are fixed over some period during which adaptation
occurs. The change in floating-gate voltage, and hence the weight, discussed below
should therefore be interpreted in terms of the expected weight change due to the statistics of the inputs. We discuss learning of conditional probabilities; a slight change in the
tunneling signal, described previously, allows us to learn correlations instead.
We first derive the injection equation for the floating-gate voltage in terms of the joint
probability P(X,Y) by considering the relationship between the input signals and Is, Vs,
[Figure 1 omitted. Panel (a): synapse schematic with transistors M1-M7, capacitor C1, and terminals Vb, Vtun, Xcor, Xin, Y, and the synaptic output. Panels (b) and (c): W_eq (nA and µA, respectively) versus Pr(X|Y), chip data with fits; the subthreshold fit follows P(X|Y)^0.78.]
Fig. 1. (a) Synapse schematic. (b) Plot of equilibrium weight in the subthreshold regime versus the conditional probability P(X|Y), showing both experimental chip data and a fit from Eq. 7. (c) Plot of equilibrium weight versus conditional probability in the above-threshold regime, again showing chip data and a fit from Eq. 7.
and Vd of M3. We assume that transistor M1 is in saturation, constraining Is at M3 to be
constant. Presentation of a joint binary event (X,Y) closes nFET switches M5 and M6,
pulling the drain voltage Vd of M3 to 0V and causing injection. Therefore the probability
that Vd is low enough to cause injection is the probability of the joint event Pr(X,Y). By
Eq. 2, the amount of the injection is also dependent on M3's source voltage V_s. Because M3 is constrained to a fixed channel current, a drop in the floating-gate voltage, ΔV_fg, causes a drop in V_s of magnitude κΔV_fg. Substituting these expressions into Eq. 2 results in a floating-gate voltage update of:

    (dV_fg / dt)_inj = −I_inj0 Pr(X, Y) exp( κ V_fg / V_γ )    (4)
where Iinj0 also includes the constant source current. Eq.4 shows that the floating-gate
voltage update due to injection is a function of the probability of the joint event (X,Y).
Next we analyze the effects of tunneling on the floating-gate voltage. The origin of the
tunneling signal determines whether the synapse is learning a conditional probability or a
correlation. If the circuit is learning a conditional probability, occurrence of the conditioning event Y gates a corresponding high-voltage (~9V) signal onto the tunneling implant. Consequently, we can express the change in floating-gate voltage due to tunneling
in terms of the probability of Y, and the floating-gate voltage.
    (dV_fg / dt)_tun = I_tun0 Pr(Y) exp( −V_fg / V_χ )    (5)
Eq.5 shows that the floating-gate voltage update due to tunneling is a function of the
probability of the event Y.
3.2 Weight equilibrium
To demonstrate that our circuit learns P(X|Y), we show that the equilibrium weight of the
synapse is solely a function of P(X|Y). The equilibrium weight of the synapse is the
weight value where the expected weight change over time equals zero. This weight value
corresponds to the floating-gate voltage where injection and tunneling currents are equal.
To find this voltage, we equate Eqs. 4 and 5 and solve:

    V_fg^eq = −(κ/V_γ + 1/V_χ)⁻¹ [ log Pr(X|Y) + log( I_inj0 / I_tun0 ) ]    (6)
To derive the equilibrium weight, we substitute Eq.6 into Eq.3 and solve:
    W_eq = I_0 [ (I_inj0 / I_tun0) Pr(X|Y) ]^η                          below threshold
    W_eq = ( −V_0 + β [ log( I_inj0 / I_tun0 ) + log Pr(X|Y) ] )²       above threshold    (7)

where η = κ² / [ (1+κ) U_t (κ/V_γ + 1/V_χ) ] and β = κ² / [ (1+κ)(κ/V_γ + 1/V_χ) ].
Consequently, the equilibrium weight is a function of the conditional probability below
threshold and a function of the log-squared conditional probability above threshold. Note
that the equilibrium weight is stable because of negative feedback in the tunneling and
injection processes. Therefore, the weight will always converge to the equilibrium value
shown in Eq.7. Figs. 1(b) and (c) show the equilibrium weight versus the conditional
P(X|Y) for both sub- and above-threshold circuits, along with fits to Eq.7.
Note that both the sub- and above-threshold relationship between P(X|Y) and the equilibrium weight enables us to compute the probability of a vector of synaptic inputs X given a post-synaptic response Y. In both cases, we can apply the output currents of an array of synapses through diodes, and then add the resulting voltages via a capacitive voltage divider, resulting in a voltage that is a linear function of log P(X|Y).
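A behavioral simulation of these dynamics is easy to write; the following sketch checks that the stochastic updates of Eqs. 4-5 settle at the power law of Eq. 7 (every device constant below is invented for illustration, not a measured chip value):

```python
import numpy as np

rng = np.random.default_rng(1)
kappa, V_gamma, V_chi, Ut = 0.7, 0.5, 1.0, 0.025   # invented constants
I_inj0 = I_tun0 = 1.0
I0 = 100.0

def weight(vfg):
    return I0 * np.exp(-kappa**2 * vfg / ((1 + kappa) * Ut))   # Eq. 3

def simulate(p_x_given_y, p_y=0.5, steps=40000, dt=1e-2):
    vfg, tail = 0.0, []
    for t in range(steps):
        y = rng.random() < p_y
        x = y and (rng.random() < p_x_given_y)
        if x:   # joint event (X, Y): injection lowers Vfg, Eq. 4
            vfg -= dt * I_inj0 * np.exp(kappa * vfg / V_gamma)
        if y:   # conditioning event Y: tunneling raises Vfg, Eq. 5
            vfg += dt * I_tun0 * np.exp(-vfg / V_chi)
        if t >= steps // 2:
            tail.append(vfg)
    return weight(np.mean(tail))

eta = kappa**2 / ((1 + kappa) * Ut * (kappa / V_gamma + 1 / V_chi))
for p in (0.2, 0.5, 0.9):
    print(p, simulate(p), I0 * p**eta)   # simulated value ~ Eq. 7 equilibrium
```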
3.3 Calibration circuitry
Mismatch between injection and tunneling in different floating-gate transistors can
greatly reduce the ability of our synapses to learn meaningful values. Experimental data
from floating-gate transistors fabricated in a 0.35 µm process show that injection varies by
as much as 2:1 across a chip, and tunneling by up to 1.2:1. The effect of this mismatch on
our synapses causes the weight equilibrium of different synapses to differ by a
multiplicative gain. Fig.2 (b) shows the equilibrium weights of an array of six synapses
exposed to identical input signals. The variation of the synaptic weights is of the same
order of magnitude as the weights themselves, making large arrays of synapses all but
useless for implementing many learning algorithms.
We alleviate this problem by calibrating our synapses to equalize the pre-exponential
tunneling and injection constants. Because the dependence of the equilibrium weight on
these constants is determined by the ratio of Iinj0/Itun0, our calibration process changes Iinj
to equalize the ratio of injection to tunneling across all synapses. We choose to calibrate
injection because we can easily change Iinj0 by altering the drain current through M1.
Our calibration procedure is a self-convergent memory write [11], that causes the equilibrium weight of every synapse to equal the current Ical. Calibration requires many operat-
[Figure 2 omitted. Panel (a): schematic of the calibrated synapse with transistors M1-M9 and signals Verase, Vb, Vtun, Vcal, and Ical. Panels (b) and (c): W_eq (nA) versus P(X|Y) for six synapses, before and after calibration.]
Fig. 2. (a) Schematic of calibrated synapse with signals used during the calibration procedure. (b) Equilibrium weights for the array of synapses shown in Fig. 1a. (c) Equilibrium weights for the array of calibrated synapses after calibration.
ing cycles, where, during each cycle, we first increase the equilibrium weight of the synapse, and second, we let the synapse adapt to the new equilibrium weight.
We create the calibrated synapse by modifying our original synapse according to Fig.
2(a). We convert M1 into a floating-gate transistor, whose floating-gate charge thereby
sets M3's channel current, providing control of I_inj0 of Eq. 7. Transistor M8 modifies M1's gate charge by means of injection when M9's gate is low and Vcal is low. M9's gate is only low when the equilibrium weight W is less than Ical. During calibration, injection and tunneling on M3 are continuously active. We apply a pulse train to Vcal; during each pulse period, Vcal is predominantly high. When Vcal is high, the synapse adapts towards its equilibrium weight. When Vcal pulses low, M8 injects, increasing the synapse's equilibrium weight W. We repeat this process until the equilibrium weight W matches Ical, causing M9's gate voltage to rise, disabling Vcal and with it injection. To ensure that a precalibrated synapse has an equilibrium weight below Ical, we use tunneling to erase all bias transistors prior to calibration. Fig. 2(c) shows the equilibrium weights of six synapses after calibration. The data show that calibration can reduce the effect of mismatched adaptation on the synapse's learned weight to a small fraction of the weight itself.
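The pulse-by-pulse logic of this procedure can be summarized in software (a toy model; the synapse model, power-law exponent, and pulse step are invented, not chip parameters):

```python
class Synapse:
    # Toy model of one synapse with a mismatched pre-exponential I_inj0.
    def __init__(self, i_inj0):
        self.i_inj0 = i_inj0
    def settle(self, p=0.5):
        return 100.0 * (self.i_inj0 * p) ** 0.78   # toy equilibrium, cf. Eq. 7
    def inject_bias(self, step=1.02):
        self.i_inj0 *= step                        # M8 injection raises I_inj0

def calibrate(syn, i_cal, max_pulses=1000):
    for _ in range(max_pulses):
        if syn.settle() >= i_cal:   # comparator (M9) disables Vcal: done
            break
        syn.inject_bias()           # Vcal pulses low: raise the equilibrium
    return syn

# Two mismatched synapses, erased below the target, converge to the same weight:
for s in (Synapse(0.3), Synapse(0.5)):
    print(calibrate(s, i_cal=40.0).settle())
```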
Because M1 is a floating-gate transistor, its parasitic gate-drain capacitance causes a mild dependence between M1's drain voltage and source current. Consequently, M3's floating-gate voltage now affects its source current (through M1's drain voltage), and we can model M3 as a source-degenerated pFET [3]. The new expression for the injection current in M3 is:
    (dV_fg / dt)_inj = −I_inj0 Pr(X, Y) exp( V_fg ( κ/V_γ − k_1/U_t ) )    (8)

where k_1 is close to zero. The new expression for injection slightly changes the η and β terms of the weight equilibrium in Eq. 7, although the qualitative relationship between the weight equilibrium and the conditional probability remains the same.

[Figure 3 omitted: two synapses (weights W+ and W−) connect a presynaptic and a postsynaptic neuron; each neuron emits a spike output (X, Y) and an activation window that gates injection.]
Fig. 3. A method for achieving spike-time dependent plasticity in silicon.
4 Implementing silicon synaptic learning rules
In this section we discuss how to implement a variety of learning rules from the computational-neurobiology and neural-network literature with our synapse circuit.
We can use our circuit to implement a Hebbian learning rule. Simultaneously activating
both M5 and M6 is analogous to heterosynaptic LTP based on synchronized pre- and postsynaptic signals, and activating tunneling with the postsynaptic Y is analogous to homosynaptic LTD. In our synapse, we tie Xin and Xcor together and correlate Vtun with Y.
Our synapse is also capable of emulating a Boltzmann weight-update rule [2]. This
weight-update rule derives from the difference between correlations among neurons when
the network receives external input, and when the network operates in a free running
phase (denoted as clamped and unclamped phases respectively). With weight decay, a
Boltzmann synapse learns the difference between correlations in the clamped and unclamped phase. We can create a Boltzmann synapse from a pair of our circuits, in which
the effective weight is the difference between the weights of the two synapses. To implement a weight update, we update one silicon synapse based on pre- and postsynaptic
signals in the clamped phase, and update the other synapse in the unclamped phase. We
do this by sending Xin to Xcor of one synapse in the clamped phase, and sending Xin to Xcor
of the other synapse in the negative phase. Vtun remains constant throughout adaptation.
Finally, we consider implementing a temporally asymmetric Hebbian learning rule [7]
using our synapse. In temporally asymmetric Hebbian learning, a synapse exhibits LTP
or LTD if the presynaptic input occurs before or after the postsynaptic response, respectively. We implement an asymmetric learning synapse using two of our circuits, where
the synaptic weight is the difference in the weights of the two circuit. We show the circuit
in Fig. 3. Each neuron sends two signals: a neuronal output, and an adaptation time window that is active for some time afterwards. Therefore, the combined synapse receives
two presynaptic signals and two postsynaptic signals. The relative timing of a postsynaptic response, Y, with the presynaptic input, X, determines whether the synapse undergoes
LTP or LTD. If Y occurs before X, Y?s time window correlates with X, causing injection
on the negative synapse, decreasing the weight. If Y occurs after X, Y correlates with X?s
time window, causing injection on the positive synapse, increasing the weight. Hence,
our circuit can use the relative timing between presynaptic and postsynaptic activity to
implement learning.
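The paired-synapse timing rule of Fig. 3 can be summarized behaviorally as follows (a software sketch; the window length and update size are invented, not chip parameters):

```python
WINDOW = 5     # steps the activation window stays open after a spike
DELTA = 0.05   # weight change per correlated event

def effective_weight(pre_spikes, post_spikes, T):
    w_plus = w_minus = 0.0
    last_pre = last_post = float("-inf")
    for t in range(T):
        if t in pre_spikes:
            last_pre = t
        if t in post_spikes:
            last_post = t
        if t in pre_spikes and 0 < t - last_post <= WINDOW:
            w_minus += DELTA   # X falls in Y's window: inject negative synapse
        if t in post_spikes and 0 < t - last_pre <= WINDOW:
            w_plus += DELTA    # Y falls in X's window: inject positive synapse
    return w_plus - w_minus    # effective weight W = W+ - W-

print(effective_weight({10}, {12}, 20))   # pre before post: weight increases
print(effective_weight({12}, {10}, 20))   # post before pre: weight decreases
```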
5 Conclusion
We have described a silicon synapse that implements a wide range of spike-based learning rules, and that does not suffer from device mismatch. We have also described how we
can implement various silicon-learning networks using this synapse. In addition, although
we have only analyzed the learning properties of the synapse for binary signals, we can
instead use pulse-coded analog signals. One possible avenue for future work is to analyze the implications of different pulse-coded schemes on the circuit?s adaptive behavior.
Acknowledgements
This work was supported by the National Science Foundation and by the Office of
Naval Research. Aaron Shon was also supported by a NDSEG fellowship. We thank
Anhai Doan and the anonymous reviewers for helpful comments.
References
[1] W. B. Levy, "A computational approach to hippocampal function," in R. D. Hawkins and G. H. Bower (eds.), Computational Models of Learning in Simple Neural Systems, The Psychology of Learning and Motivation vol. 23, pp. 243-305, San Diego, CA: Academic Press, 1989.
[2] D. H. Ackley, G. Hinton, and T. Sejnowski, "A learning algorithm for Boltzmann machines," Cognitive Science vol. 9, pp. 147-169, 1985.
[3] P. Hasler, B. A. Minch, J. Dugger, and C. Diorio, "Adaptive circuits and synapses using pFET floating-gate devices," in G. Cauwenberghs and M. Bayoumi (eds.), Learning in Silicon, pp. 33-65, Kluwer Academic, 1999.
[4] P. Hafliger, A spike-based learning rule and its implementation in analog hardware, Ph.D. thesis, ETH Zurich, 1999.
[5] C. Diorio, P. Hasler, B. A. Minch, and C. Mead, "A floating-gate MOS learning array with locally computed weight updates," IEEE Transactions on Electron Devices vol. 44(12), pp. 2281-2289, 1997.
[6] R. Sutton, "Learning to predict by the methods of temporal differences," Machine Learning, vol. 3, pp. 9-44, 1988.
[7] H. Markram, J. Lübke, M. Frotscher, and B. Sakmann, "Regulation of synaptic efficacy by coincidence of postsynaptic APs and EPSPs," Science vol. 275, pp. 213-215, 1997.
[8] A. Pesavento, T. Horiuchi, C. Diorio, and C. Koch, "Adaptation of current signals with floating-gate circuits," in Proceedings of the 7th International Conference on Microelectronics for Neural, Fuzzy, and Bio-Inspired Systems (Microneuro99), pp. 128-134, 1999.
[9] M. Lenzlinger and E. H. Snow, "Fowler-Nordheim tunneling into thermally grown SiO2," Journal of Applied Physics vol. 40(1), pp. 278-283, 1969.
[10] E. Takeda, C. Yang, and A. Miura-Hamada, Hot Carrier Effects in MOS Devices, San Diego, CA: Academic Press, 1995.
[11] C. Diorio, "A p-channel MOS synapse transistor with self-convergent memory writes," IEEE Journal of Solid-State Circuits vol. 36(5), pp. 816-822, 2001.
Incremental Learning and Selective
Sampling via Parametric Optimization
Framework for SVM
Shai Fine
IBM T. J. Watson Research Center
fshai@us.ibm.com
Katya Scheinberg
IBM T. J. Watson Research Center
katyas@us.ibm.com
Abstract
We propose a framework based on a parametric quadratic programming (QP) technique to solve the support vector machine (SVM)
training problem. This framework can be specialized to obtain two SVM optimization methods. The first solves the fixed bias problem, while the second starts with an optimal solution for a fixed bias problem and adjusts the bias until the optimal value is found. The latter method can be applied in conjunction with any other existing technique which obtains a fixed bias solution. Moreover, the second method can also be used independently to solve the complete SVM training problem. A combination of these two methods is more flexible than each individual method and, among other things, produces an incremental algorithm which exactly solves the 1-Norm Soft Margin SVM optimization problem. Applying Selective Sampling techniques may further boost convergence.
1 Introduction
SVM training is a convex optimization problem which scales with the training set
size rather than the input dimension. While this is usually considered to be a desired quality, in large scale problems it may cause training to be impractical. The common way to handle massive data applications is to turn to active set methods,
which gradually build the set of active constraints by feeding a generic optimizer
with small scale sub-problems. Active set methods guarantee to converge to the
global solution; however, convergence may be very slow, it may require too many
passes over the data set, and at each iteration there's an implicit computational
overhead of the actual active set selection. By using some heuristics and caching
mechanisms, one can, in practice, reduce this load significantly.
Another common practice is to modify the SVM optimization problem such that it won't handle the bias term directly. Instead, the bias is either fixed in advance¹ (e.g. [6]) or added as another dimension to the feature space (e.g. [4]). The
advantage is that the resulting dual optimization problem does not contain the
linear constraint, in which case one can suggest a procedure which updates only
¹ Throughout this sequel we will refer to such a solution as the fixed bias solution.
one Lagrange multiplier at a time. Thus, an incremental approach, which efficiently
updates an existing solution given a new training point, can be devised. Though
widely used, the solution resulting from this practice has inferior generalization
performance, and the number of SVs tends to be much higher [4].
To the best of our knowledge, the only incremental algorithm suggested so far to
exactly solve the 1-Norm Soft Margin² optimization problem has been described by Cauwenberghs and Poggio in [3]. This algorithm handles adiabatic increments
by solving a system of linear equations resulted from a parametric transcription of
the KKT conditions. This approach is somewhat close to the one independently
developed here and we offer a more thorough comparison in the discussion section.
In this paper³ we introduce two new methods derived from parametric QP techniques. The two methods are based on the same framework, which we call Parametric Optimization for Kernel methods (POKER), and are essentially the same
methodology applied to somewhat different problems. The first method solves the
fixed bias problem, while the second one starts with an optimal solution for a fixed
bias problem and adjusts the bias until the optimal value is found. Each of these
methods can be used independently to solve the SVM training problem. The
most interesting application, however, is alternating between the two methods to
obtain a unique incremental algorithm. We will show how by using this approach
we can adjust the optimal solution as more data becomes available, and by applying
Selective Sampling techniques we may further boost convergence rate.
Both our methods converge after a finite number of iterations. In principle, this
number may be exponential in the training set size, n. However, since parametric
QP methods are based on the well-known Simplex method for linear programming,
a similar behavior is expected: Though in theory the Simplex method is known
to have exponential complexity, in practice it hardly ever displays exponential
behavior. The per-iteration complexity is expected to be O(nl), where l is the
number of active points at that iteration, with the exception of some rare cases in
which the complexity is expected to be bounded by O(nl²).
2 Parametric QP for SVM
Any optimal solution to the 1-Norm Soft Margin SVM optimization problem must
satisfy the Karush-Kuhn-Tucker (KKT) necessary and sufficient conditions:

    1.  α_i s_i = 0,  i = 1, ..., n
    2.  (c − α_i) ξ_i = 0,  i = 1, ..., n
    3.  yᵀ α = 0,
    4.  −Q α + b y + s − ξ = −e,
    5.  0 ≤ α ≤ c,  s ≥ 0,  ξ ≥ 0.                                    (1)
² A different incremental approach stems from a geometric interpretation of the primal problem: Keerthi et al. [7] were the first to suggest a nearest point batch algorithm and Kowalczyk [8] provided the on-line version. They handled the inseparable case with the well-known transformation w → (w, √c ξ) and b → b, which establishes the equivalence between the Hard Margin and the 2-Norm Soft Margin optimization problems. Although the 1-Norm and the 2-Norm have been shown to yield equivalent generalization properties, it is often observed (cf. [7]) that the former method results in a smaller number of SVs. It is obvious by the above transformation that the 1-Norm Soft Margin is the most general SVM optimization problem.
³ The detailed statements of the algorithms and the supporting lemmas were omitted due to space limitations, and can be found at [5].
where α ∈ Rⁿ is the vector of Lagrange multipliers, b is the bias (scalar), and s and ξ are the n-dimensional vectors of slack and surplus variables, respectively. y is a vector of labels, ±1. Q is the label-encoded kernel matrix, i.e. Q_ij = y_i y_j K(x_i, x_j), e is the vector of all 1's of length n, and c is the penalty associated with errors.
If we assume that the value of the bias is fixed to some predefined value b, then
condition 3 disappears from the system (1) and condition 4 becomes
    −Qα + s − ξ = −e − b y    (2)
Consider the following modified parametric system of KKT conditions
    α_i s_i = 0,  i = 1, ..., n
    (c − α_i) ξ_i = 0,  i = 1, ..., n
    −Qα + s − ξ = p + u(−e − b y − p),
    0 ≤ α ≤ c,  s ≥ 0,  ξ ≥ 0,                                    (3)
for some vector p. It is easy to find p, α, s and ξ satisfying (3) for u = 0. For example, one may pick α = 0, s = e, ξ = 0 and p = −Qα + s. For u = 1 the system (3) reduces to the fixed bias system. Our fixed bias method starts at a solution to (3) for u = 0 and, by increasing u while updating α, s and ξ so that they satisfy (3), obtains the optimal solution for u = 1.
Similarly, we can obtain a solution to (1) by starting at a fixed bias solution and updating b, while maintaining α, s and ξ feasible for (2), until the optimal value for b is reached. The optimal value of the bias is recognized when the corresponding solution satisfies (1), namely αᵀ y = 0.
Both these methods are based on the same framework of adjusting a scalar parameter in the right hand side of a KKT system. In the next section we will present
the method for adjusting the bias (adjusting u in (3) is very similar, save for a few
technical differences). An advantage of this special case is that it solves the original problem and can, in principle, be applied "from scratch".
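As a sanity check on the notation, the KKT system (1) is easy to verify numerically for a candidate point (a rough sketch of ours, not the authors' code):

```python
import numpy as np

def kkt_violation(Q, y, alpha, b, s, xi, c):
    # Largest violation of conditions 1-5 of system (1).
    r = -Q @ alpha + b * y + s - xi + np.ones_like(alpha)  # condition 4 residual
    return max(
        np.max(np.abs(alpha * s)),                         # condition 1
        np.max(np.abs((c - alpha) * xi)),                  # condition 2
        abs(y @ alpha),                                    # condition 3
        np.max(np.abs(r)),                                 # condition 4
        max(0.0, np.max(-alpha), np.max(alpha - c),
            np.max(-s), np.max(-xi)),                      # condition 5
    )
```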
3 Correcting a "Fixed Bias" Solution
Let (α(b), s(b), ξ(b)) be a fixed bias solution for a given b. The algorithm that we present here is based on increasing (or decreasing) b monotonically, until the optimal b* is found, while updating and maintaining (α(b), s(b), ξ(b)).
Let us introduce some notation. For a given b and a fixed bias solution (α(b), s(b), ξ(b)), we partition the index set I = {1, ..., n} into three sets I₀(b), I_c(b) and I_s(b) in the following way: ∀i ∈ I₀(b), s_i(b) > 0 and α_i(b) = 0; ∀i ∈ I_c(b), ξ_i(b) > 0 and α_i(b) = c; and ∀i ∈ I_s(b), s_i(b) = ξ_i(b) = 0 and 0 ≤ α_i(b) ≤ c. It is easy to see that I₀(b) ∪ I_c(b) ∪ I_s(b) = I and I₀(b) ∩ I_c(b) = I_c(b) ∩ I_s(b) = I₀(b) ∩ I_s(b) = ∅. We will call the partition (I₀(b), I_c(b), I_s(b)) the optimal partition for a given b. We will refer to I_s as the active set. Based on the partition (I₀, I_c, I_s) we define Q_ss (Q_cs, Q_sc, Q_cc, Q_0s, Q_00) as the submatrix of Q whose columns are the columns of Q indexed by the set I_s (I_c, I_s, I_c, I₀, I₀) and whose rows are the rows of Q indexed by I_s (I_s, I_c, I_c, I_s, I₀). We also define y_s (y_c, y₀) and α_s (α_c, α₀) as the subvectors of y and α whose entries are indexed by I_s (I_c, I₀). By e_s (e_c) we denote a vector of all ones of the appropriate size.
Assume that we are given an initial guess⁴ b⁰ < b*. To initiate the algorithm we

⁴ Whether b⁰ < b* can be determined by evaluating −yᵀα(b⁰): if −yᵀα(b⁰) > 0 then b⁰ < b*; otherwise b⁰ > b*, in which case the algorithm is essentially the same, save for obvious changes.
assume that we know the optimal partition (I₀₀, I_c0, I_s0) = (I₀(b⁰), I_c(b⁰), I_s(b⁰)) that corresponds to α⁰ = α(b⁰). We know that ∀i ∈ I₀, α_i = 0, and ∀i ∈ I_c, α_i = c. We also know that −Q_i α + y_i b = −1, ∀i ∈ I_s (here Q_i is the i-th row of Q). We can write the set of active constraints as

    −Q_ss α_s − c Q_sc e_c + b y_s = −e_s.    (4)
If Qss is nonsingular (the nondegenerate case), then as depends linearly on the scalar
b. Similarly, we can express s0 and ξc as linear functions of b. If Qss is singular
(the degenerate case), then the set of all possible solutions as changes linearly with
b as long as the partition remains optimal. In either case, if 0 < as < c, s0 > 0
and ξc > 0, then sufficiently small changes in b preserve these constraints. At each
iteration b can increase until one of the four types of inequality constraints becomes
active. Then the optimal partition is updated, new linear expressions of the active
variables through b are computed, and the algorithm iterates. We terminate when
y^T a < 0, that is, when b > b*. The final iteration gives us the correct optimal active set
and optimal partition; from that we can easily compute b* and a*.
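The following Python sketch illustrates one ingredient of such an iteration (ours, not the authors' implementation; the toy matrix and right-hand side r are assumptions): with the active set fixed and Qss nonsingular, the equalities (4) take the form -Qss as + b ys = r, so as is an affine function of b, and a ratio test gives the largest admissible increase of b.

import numpy as np

def as_of_b(Qss, ys, r):
    """Solve -Qss @ as + b * ys = r, i.e. as(b) = base + b * slope."""
    base = np.linalg.solve(Qss, -r)        # as at b = 0
    slope = np.linalg.solve(Qss, ys)       # d(as)/db
    return base, slope

def max_step(as_now, slope, c):
    """Largest t >= 0 keeping 0 <= as_now + t*slope <= c componentwise."""
    t = np.inf
    for ai, di in zip(as_now, slope):
        if di > 1e-12:
            t = min(t, (c - ai) / di)
        elif di < -1e-12:
            t = min(t, -ai / di)
    return t

Qss = np.array([[2.0, 0.3], [0.3, 1.0]])   # toy active-set kernel block
ys = np.array([1.0, -1.0])
base, slope = as_of_b(Qss, ys, r=np.array([-1.0, -1.0]))
print(max_step(base, slope, c=1.0))        # step limit starting from b = 0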
A geometric interpretation of the algorithmic steps suggests that we are trying to
move the separating hyperplane by increasing its bias and at the same time adjusting
its orientation so it stays optimal for the current bias. At each iteration we move
the hyperplane until either a support vector is dropped from the support set, a
support vector becomes violated, a violated point becomes a support vector, or an
inactive point joins the support vector set.
The algorithm is guaranteed to terminate after finitely many iterations. At each
iteration the algorithm covers an interval that corresponds to an optimal partition.
The same partition cannot correspond to two different intervals and the number of
partitions is finite, hence so is the number of iterations (cf. [1, 9]). Per-iteration
complexity depends on whether an iteration is degenerate or not. A nondegenerate
iteration takes O(n|Is|) + O(|Is|^3) arithmetic operations, while a degenerate iteration should in theory take O(n^2 |Is|^2) operations, but in practice it only takes5
O(n|Is|^2). Note that degeneracy occurs when the active support vectors are
linearly dependent. The larger the rank of the kernel matrix, the less likely such
a situation. The storage requirement of the algorithm is O(n) + O(|Is|^2).
4
Incremental Algorithm
Incremental and on-line algorithms are aimed at training problems for which the
data becomes available in the course of training. Such an algorithm, when given
an optimal solution for a training set of size n, and additional m training points,
has to efficiently find the optimal solution to the extended n + m training set.
Assume we have an optimal solution (a, b, s, ξ) for a given data set X of size n.
For each new point that is added, we take the following actions: a new Lagrange
multiplier a_{n+1} = 0 is added to the set of multipliers, then the distance to the
margin is evaluated for this point. If the point is not violated, that is if s_{n+1} =
w^T x^{n+1} - y^{n+1} b - 1 > 0, then the new positive slack s_{n+1} is added to the set of slack
variables. If the point is violated then s_{n+1} = 1 is added to the set of slack variables.
(Notice that at this point the condition w^T x^{n+1} + y^{n+1} b + s^{n+1} = -1 is violated.)
A surplus variable ξ_{n+1} = 0 is also added to the set of surplus variables. The
optimal partition is adjusted accordingly. The process is repeated for all the points
that have to be added at the given step. If no violated points were encountered,
5 This assumes solving such a problem by an interior point method.
0   Given dataset <X, y>, a solution (a0, b0, s0, ξ0), and new points <x, y>^{n+m}_{n+1}
1   Set p = -e - by; a_{n+i} = ξ_{n+i} = 0, s_{n+i} = -1 - b y^{n+i} + (x^{n+i})^T w, i = 1, ..., m
2   If s_{n+i} ≤ 0, set p^{n+i} := -(x^{n+i})^T w + 1, s_{n+i} = 1; else p^{n+i} := -1 - b y^{n+i}
3   X := X ∪ {x^{n+1}, ..., x^{n+m}}, y := (y^1, ..., y^n, y^{n+1}, ..., y^{n+m})
4   If p ≠ -e - by, call POKERfixedbias(X, y, a, b, s, ξ, p)
5   Call POKERadjustbias(X, y, a, b, s, ξ)
    If there are more data points go to 0.

Figure 1: Outline of the incremental algorithm (AltPOKER)
then no further action is necessary. The current solution is optimal and the bias
is unchanged. If at least one point is violated, then the new set (a, b, s, ξ) is not
feasible for the KKT system (1) with the extended data set. However, it is easy to
find p such that (a, b, s, ξ) is optimal for (3). Thus we can first apply the fixed bias
algorithm to find a new solution and then apply the adjustable bias algorithm to
find the optimal solution to the new extended problem (see Figure 1).
In theory, adding even one point may force the algorithm to work as hard as if
it were solving the problem "from scratch". But in practice this virtually never
happens. In our experiments, just a few iterations of the fixed bias and adjustable
bias algorithms were sufficient to find the solution to the extended problem. Overall,
the computational complexity of the incremental algorithm is expected to be O(n^2).
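As a concrete illustration, here is a Python sketch of adding one new point (our reading of the text and Figure 1, not the authors' code): the new multiplier and surplus start at zero, the slack is kept if positive and otherwise reset to 1, and the new entry of p is then chosen so that the corresponding row of system (3) holds.

import numpy as np

def add_point(k_new, y, a, w, b, x_new, y_new):
    # k_new[j] = K(x_new, x_j) for the n existing points
    s_new = float(w @ x_new) - y_new * b - 1.0   # margin slack of the new point
    if s_new <= 0.0:                              # violated point
        s_new = 1.0
    q_row = y_new * y * k_new                     # row n+1 of the extended Q
    p_new = -float(q_row @ a) + s_new             # xi_new = 0
    return 0.0, 0.0, s_new, p_new                 # (a_new, xi_new, s_new, p_new)

# Tiny usage with a linear kernel (hypothetical numbers):
X = np.array([[1.0, 0.0], [0.0, 1.0]]); y = np.array([1.0, -1.0])
a = np.array([0.5, 0.5]); w = (a * y) @ X; b = 0.0
x_new, y_new = np.array([0.2, 0.8]), 1.0
print(add_point(X @ x_new, y, a, w, b, x_new, y_new))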
5
Experiments
Convergence in Batch Mode: The most straightforward way to activate
POKER in a batch mode is to construct the trivial partition6 and then apply the
adjustable bias algorithm to get the optimal solution. We term this method SelfInit POKER. Note that the initial value of the bias is most likely far away from the
global solution, and as such, the results presented here should be regarded as a lower
bound. We examined performance on a moderate size problem, the Abalone data
set from the UCI Repository [2]. We fed the training algorithm with increasing subsets up to the whole set (of size 4177). The gender encoding (male/female/infant)
was mapped into {(1,0,0), (0,1,0), (0,0,1)}. Then the data was scaled to lie in the
[-1, 1] interval. We demonstrate convergence for polynomial kernels with increasing
degree, which in this setting corresponds to the level of difficulty. However naive our
implementation is, one can observe (see Figure 2) a linear convergence rate in the
batch mode.
Convergence in Incremental Mode: AltPOKER is the incremental algorithm
described in Section 4. We examined its performance on the "diabetes" problem7
that has been used by Cauwenberghs and Poggio in [3] to test the performance of
their algorithm. We demonstrate convergence for the RBF kernel with increasing
penalty ("C"). Figure 3 demonstrates the advantage of the more flexible approach
6 Fixing the bias term to be large enough (positive or negative) and the Lagrange
multipliers to 0 or C based on their class (negative/positive) membership.
7 Available at http://bach.ece.jhu.edu/pub/gert/svm/incremental
[Figure 2 plot: SelfInit POKER, number of iterations vs. problem size, for polynomial kernels (<x, y> + 1)^d of increasing degree, with a linear reference line.]
[Figure 3 plot: AltPOKER, number of iterations vs. chunk size, for C = 0.1, 1, 10, 25, 50, 75, 100.]

Figure 2: SelfInit POKER - Convergence in Batch mode
Figure 3: AltPOKER - Convergence in Incremental mode
which allows various increment sizes: using increments of only one point resulted in
performance on a similar scale as that of Cauwenberghs and Poggio, but with the
increase of the chunk size we observe rapid improvement in the convergence rate.
Selective Sampling: We can use the incremental algorithm, even when all
the data is available in advance, to improve the overall efficiency. If one can select
a good, representative small subset of the data set, then one can use it for training,
hoping that the majority of the data points are classified correctly using the initially
sampled data8. We applied selective sampling as a preprocess in incremental mode:
at each meta-iteration, we ranked the points according to a predefined selection
criterion, and then picked just the top ones for the increment.
The following selection criteria have been used in our experiments: Cls2W picks the
point closest to the current hyperplane. This approach is inspired by active learning
schemes which strive to halve the version space. However, the notion of a version
space is more complex when the problem is inseparable. Thus, it is reasonable to
adopt a greedy approach which selects the point that will cause the larger change
in the value of the objective function.
While solving the optimization problem for all possible increments is impracticable,
it may still be worthwhile to approximate the potential change: MaxSlk picks the
most violating point. This corresponds to an upper bound estimate of the change
in the objective, since the value of the slack (times c) is an upper bound on the
feasibility gap. dObj performs only a few iterations of the adjustable bias algorithm
and examines the change in the objective value. This is similar to the Strong Branching
technique which is used in branch and bound methods for integer programming.
Here it provides a lower bound estimate of the change in the objective value.
Although performing only a few iterations is much cheaper than converging to the
optimal solution, this technique is still more demanding than the previous selection
methods. Hence we first ranked the points using Cls2W (MaxSlk) and then applied dObj only to the top few. Table 1 presents the application of the above
mentioned criteria to three different problems. The results clearly show the advantage of using the information obtained by the dObj estimate.
8 This is different from a full-fledged Active Learning scheme, in which the data is not
labeled but rather queried at selected points.
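A Python sketch of the two cheap ranking criteria (our naming and sign conventions, not the paper's code; dObj is omitted since it requires running the optimizer itself):

import numpy as np

def rank_points(X, y, w, b, criterion):
    margins = y * (X @ w + b)
    if criterion == "MaxSlk":
        scores = 1.0 - margins                            # largest violation first
    elif criterion == "Cls2W":
        scores = -np.abs(X @ w + b) / np.linalg.norm(w)   # closest to hyperplane first
    else:
        raise ValueError(criterion)
    return np.argsort(-scores)                            # best candidates first

rng = np.random.default_rng(1)
X = rng.normal(size=(8, 2)); y = np.where(X[:, 0] > 0, 1.0, -1.0)
print(rank_points(X, y, np.array([1.0, 0.0]), 0.0, "MaxSlk")[:3])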
[Table 1 data: for each selection criterion (No Selection, MaxSlk, MaxSlk+dObj, Cls2W, Cls2W+dObj) and each problem, the number of AltPOKER iterations together with the sizes of the final sets |Is|, |Ic| and |I0| (e.g. 4, 11 and 9985 for the synthetic data; 73, 1 and 277 for "ionosphere"); the criteria augmented with dObj consistently require fewer iterations than their plain counterparts.]

Table 1: The impact of Selective Sampling on the number of iterations of AltPOKER:
synthetic data (10K x 2), "ionosphere" [2] and "diabetes" (columns ordered respectively)
6
Conclusions and Discussion
We propose a new finitely convergent method that can be applied in both batch
and incremental modes to solve the 1-Norm Soft Margin SVM problem. Assuming
that the number of support vectors is small compared to the size of the data, the
method is expected to perform O(n^2) arithmetic operations, where n is the problem
size. Applying Selective Sampling techniques may further boost convergence and
reduce computational load.
Our method was developed independently, but is somewhat similar to that in [3]. Our
method, however, is more general - it can be applied to solve fixed bias problems
as well as obtain the optimal bias from a given fixed bias solution; it is not restricted
to increments of size one, but rather can handle increments of arbitrary size; and
it can be used to get an estimate of the drop in the value of the objective function,
which is a useful selective sampling criterion.
Finally, it is possible to extend this method to produce a true on-line algorithm, by
assuming certain properties of the data. This re-introduces some very important
applications of the on-line technology, such as active learning, and various forms
of adaptation. Pursuing this direction, with a special emphasis on massive data
applications (e.g. speech related applications), is left for further study.
References
[1] A. B. Berkelaar, B. Jansen, K. Roos, and T. Terlaky. Sensitivity analysis in (degenerate) quadratic programming. Technical Report 96-26, Delft University, 1996.
[2] C. L. Blake and C. J. Merz. UCI repository of machine learning databases, 1998.
[3] G. Cauwenberghs and T. Poggio. Incremental and decremental support vector machine
learning. In Advances in Neural Information Processing Systems 13, pages 409-415, 2001.
[4] N. Cristianini and J. Shawe-Taylor. An Introduction to Support Vector Machines and
Other Kernel-Based Learning Methods. Cambridge University Press, 2000.
[5] S. Fine and K. Scheinberg. POKER: Parametric optimization framework for kernel
methods. Technical report, IBM T. J. Watson Research Center, 2001. Submitted.
[6] T. T. Friess, N. Cristianini, and C. Campbell. The kernel-adatron algorithm: A fast
simple learning procedure for SVM. In Proc. of 15th ICML, pages 188-196, 1998.
[7] S. S. Keerthi, S. K. Shevade, C. Bhattacharyya, and K. R. K. Murthy. A fast iterative
nearest point algorithm for SVM classifier design. IEEE Trans. Neural Networks, 11:124-136, 2000.
[8] A. Kowalczyk. Maximal margin perceptron. In Advances in Large Margin Classifiers,
pages 75-113. MIT Press, 2000.
[9] R. T. Rockafellar. Conjugate Duality and Optimization. SIAM, Philadelphia, 1974.
Iterative Double Clustering for
Unsupervised and Semi-Supervised
Learning
Ran El-Yaniv
Oren Souroujon
Computer Science Department
Technion - Israel Institute of Technology
(rani,orenso)@cs.technion.ac.il
Abstract
We present a powerful meta-clustering technique called Iterative Double Clustering (IDC). The IDC method is a natural extension of the
recent Double Clustering (DC) method of Slonim and Tishby that exhibited impressive performance on text categorization tasks [12]. Using synthetically generated data we empirically find that whenever the
DC procedure is successful in recovering some of the structure hidden
in the data, the extended IDC procedure can incrementally compute
a significantly more accurate classification. IDC is especially advantageous when the data exhibits high attribute noise. Our simulation
results also show the effectiveness of IDC in text categorization problems. Surprisingly, this unsupervised procedure can be competitive
with a (supervised) SVM trained with a small training set. Finally,
we propose a simple and natural extension of IDC for semi-supervised
and transductive learning where we are given both labeled and unlabeled examples.
1
Introduction
Data clustering is a fundamental and challenging routine in information processing
and pattern recognition. Informally, when we cluster a set of elements we attempt
to partition it into subsets such that points in the same subset are more "similar" to
each other than to points in other subsets. Typical clustering algorithms depend on a
choice of a similarity measure between data points [6], and a "correct" clustering result
depends on an appropriate choice of a similarity measure. However, the choice of a
"correct" measure is an ill-defined task without a particular application at hand. For
instance, consider a hypothetical data set containing articles by each of two authors
such that half of the articles authored by each author discusses one topic, and the
other half discusses another topic. There are two possible dichotomies of the data
which could yield two different bi-partitions: one according to topic, and another,
according to writing style. When asked to cluster this set into two sub-clusters, one
cannot successfully achieve the task without knowing the goal: Are we interested in
clusters that reflect writing style or semantics? Therefore, without a suitable target at
hand and a principled method for choosing a similarity measure suitable for the target,
it can be meaningless to interpret clustering results.
The information bottleneck (IB) method of Tishby, Pereira and Bialek [8] is a recent
framework that can sometimes provide an elegant solution to this problematic "metric
selection" aspect of data clustering (see Section 2). The original IB method generates
soft clustering assignments for the data. In [10], Slonim and Tishby developed a simplified "hard" variant of the IB clustering, where there is a hard assignment of points
to their clusters. Employing this hard IB clustering, the same authors introduced an
effective two-stage clustering procedure called Double Clustering (DC) [12]. An experimental study of DC on text categorization tasks [12] showed a consistent advantage
of DC over other clustering methods. A striking finding in [12] is that DC sometimes
even attained results close to those of supervised learning.1
In this paper we present a powerful extension of the DC procedure which we term
Iterative Double Clustering (IDC). IDC performs iterations of DC and whenever the
first DC iteration succeeds in extracting a meaningful structure of the data, a number of the next consecutive iterations can continually improve the clustering quality.
This continual improvement achieved by IDC is due to generation of progressively less
noisy data representations which reduce variance. Using synthetically generated data,
we study some properties of IDC. Not only that IDC can dramatically outperform
DC whenever the data is noisy, our experiments indicate that IDC attains impressive
categorization results on text categorization tasks. In particular, we show that our
unsupervised IDC procedure is competitive with an SVM (and Naive Bayes) trained
over a small sized training set. We also propose a natural extension of IDC for semi-supervised transductive learning. Our preliminary empirical results indicate that
our transductive IDC can yield effective text categorization.
2
Information Bottleneck and Double Clustering
We consider a data set X of elements, each of which is a d-dimensional vector over
a set F of features. We focus on the case where feature values are non-negative real
numbers. For every element x = (f1, ..., fd) ∈ X we consider the empirical conditional
distribution {p(fi|x)} of features given x, where p(fi|x) = fi / Σ_{i=1}^{d} fi. For instance,
X can be a set of documents, each of which is represented as a vector of word-features
where fi is the frequency of the ith word (in some fixed word enumeration). Thus,
we represent each element as a distribution over its features, and are interested in a
partition of the data based on these feature conditional distributions. Given a predetermined number of clusters, a straightforward approach to cluster the data using the
above "distributional representation" would be to choose some (dis)similarity measure
for distributions (e.g. based on some Lp norm or some statistical measure such as the
KL-divergence) and employ some "plug-in" clustering algorithm based on this measure
(e.g. agglomerative algorithms). Perhaps due to feature noise, this simplistic approach
can result in mediocre results (see e.g. [12]).
Suppose that our data is given via observations of a random variable S. In the information bottleneck (IB) method of Tishby et al. [8] we attempt to extract the essence
of the data S using co-occurrence observations of S together with a target variable
T. The goal is to extract a compressed representation S̃ of S with minimum compromise of information content with respect to T. This way, T can direct us to extract
meaningful clustering from S, where the meaning is determined by the target T. Let

    I(S, T) = Σ_{s∈S, t∈T} p(s, t) log [p(s, t) / (p(s) p(t))]

denote the mutual information between S and T [3].
1
Specifically, the DC method obtained in some cases accuracy close to that obtained by a
naive Bayes classifier trained over a small sized sample [12].
The IB method attempts to compute p(s̃|s), a "soft" assignment of a data point s to
clusters s̃, so as to minimize I(S, S̃) - βI(S̃, T), given the Markov condition T - S - S̃
(i.e., T and S̃ are conditionally independent given S). Here, β is a Lagrange multiplier
that controls a constraint on I(S̃, T) and thus the tradeoff between the desired compression level and the predictive power of S̃ with respect to T. As shown in [8], this
minimization yields a system of coupled equations for the clustering mapping p(s̃|s)
in terms of the cluster representations p(t|s̃) and the cluster weights p(s̃). The paper
[8] also presents an algorithm similar to deterministic annealing [9] for recovering a
solution for the coupled equations.
Slonim and Tishby [10] proposed a simplified IB approach for the computation of
"hard" cluster assignments. In this hard IB variant, each data point s, represented by
{p(t|s)}_t, is associated with one centroid s̃. They also devised a greedy agglomerative
clustering algorithm that starts with the trivial clustering, where each data point s is a
single cluster; then, at each step, the algorithm merges the two clusters that minimize
the loss of mutual information I(S̃, T). The reduction in I(S̃, T) due to a merge of two
clusters s̃i and s̃j is shown to be

    (p(s̃i) + p(s̃j)) DJS[p(t|s̃i), p(t|s̃j)],                (1)
where, for any two distributions p(x) and q(x), with priors πp and πq, πp + πq = 1,
DJS[p(x), q(x)] is the Jensen-Shannon divergence (see [7, 4]),

    DJS[p(x), q(x)] = πp DKL(p || (p+q)/2) + πq DKL(q || (p+q)/2).

Here, (p+q)/2 denotes the distribution (p(x) + q(x))/2 and DKL(·||·) is the Kullback-Leibler
divergence [3]. This agglomerative algorithm is of course only locally optimal, since at
each step it greedily merges the two most similar clusters. Another disadvantage of
this algorithm is its time complexity of O(n^2) for a data set of n elements (see [12] for
details).
The IB method can be viewed as a meta-clustering procedure that, given observations
of the variables S and T (via their empirical co-occurrence samples p(s, t)), attempts to
cluster s-elements represented as distributions over t-elements. Using the merging cost
of equation (1) one can approximate IB clustering based on other "plug-in" vectorial
clustering routines applied within the simplex containing the s-elements' distributional
representations.
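A small Python sketch of the merge cost (1) (ours, not the authors' code): the midpoint (p+q)/2 follows the displayed JS formula, and the priors are taken here as the normalized cluster weights, a common choice that the text leaves unspecified.

import numpy as np

def d_kl(p, q, eps=1e-12):
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def merge_cost(w_i, w_j, p_i, p_j):
    m = 0.5 * (np.asarray(p_i, float) + np.asarray(p_j, float))
    pi_i, pi_j = w_i / (w_i + w_j), w_j / (w_i + w_j)
    d_js = pi_i * d_kl(p_i, m) + pi_j * d_kl(p_j, m)
    return (w_i + w_j) * d_js

# Cost of merging two clusters with weights 0.3 and 0.2 (toy numbers):
print(merge_cost(0.3, 0.2, [0.7, 0.2, 0.1], [0.1, 0.3, 0.6]))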
DC [12] is a two-stage procedure where during the first stage we IB-cluster features
represented as distributions over elements, thus generating feature clusters. During
the second stage we IB-cluster elements represented as distributions over the feature
clusters (a more formal description follows). For instance, considering a document
clustering domain, in the first stage we cluster words as distributions over documents
to obtain word clusters. Then in the second stage we cluster documents as distributions
over word clusters, to obtain document clusters.
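Schematically, the two stages can be sketched as follows in Python, with cluster_rows a hypothetical plug-in helper (e.g. k-means over the rows) standing in for the IB clustering step:

import numpy as np

def double_cluster(counts, n_feature_clusters, n_element_clusters, cluster_rows):
    # counts[x, f]: co-occurrence of element x with feature f
    col_sums = np.maximum(counts.sum(axis=0, keepdims=True), 1e-12)
    p_x_given_f = (counts / col_sums).T                  # rows: features as distributions
    f_labels = cluster_rows(p_x_given_f, n_feature_clusters)
    grouped = np.stack([counts[:, f_labels == k].sum(axis=1)
                        for k in range(n_feature_clusters)], axis=1)
    p_fc_given_x = grouped / np.maximum(grouped.sum(axis=1, keepdims=True), 1e-12)
    return cluster_rows(p_fc_given_x, n_element_clusters)  # element cluster labels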
Intuitively, the first stage in DC generates coarser pseudo-features (i.e. feature
centroids), which can reduce noise and sparseness that might be exhibited in the original feature values. Then, in the second stage, elements are clustered as distributions
over the "distilled" pseudo-features, and therefore more accurate element
clusters can be generated. As reported in [12], this two-stage DC procedure outperforms various other
clustering approaches, as well as DC variants applied with other dissimilarity measures
(such as the variational distance) different from the optimal JS-divergence of Equation (1). It is most striking that in some cases, the accuracy achieved by DC was close
to that achieved by a supervised Naive Bayes classifier.
3
Iterative Double Clustering (IDC)
Denote by IB_N(T|S) the clustering result, into N clusters, of the IB hard clustering procedure when the data is S and the target variable is T (see Section 2).
For instance, if T represents documents and S represents words, the application of
IB_N(T = documents | S = words) will cluster the words, represented as distributions
over the documents, into N clusters. Using the notation of our problem setup, with X
denoting the data and F denoting the features, Figure 1 provides pseudo-code of the
IDC meta-clustering algorithm, which clusters X into N_X̃ clusters. Note that the DC
procedure is simply an application of IDC with k = 1.
The code of Figure 1 requires specifying k, the number of IDC iterations to run, N_X̃,
the number of element clusters (e.g. the desired number of document clusters), and
N_F̃, the number of feature clusters to use during each iteration. In the experiments
reported below we always assumed that we know the correct N_X̃. Our experiments
show that the algorithm is not too sensitive to an overestimate of N_F̃. Note that the
choice of these parameters is the usual model order selection problem. Perhaps the first
question regarding k (the number of iterations) to ask is whether or not IDC converges to
a steady state (e.g. where two consecutive iterations generate identical partitions).
Unfortunately, a theoretical understanding of this convergence issue is left open in this
paper. In most of our experiments IDC converged after a small number of iterations.
In all the experiments reported below we used a fixed k = 15.
    Input:
        X (input data)
        N_X̃ (number of element clusters)
        N_F̃ (number of feature clusters to use)
        k (number of iterations)
    Initialize: S ← F, T ← X
    loop {k times}
        N ← N_F̃
        F̃ ← IB_N(T|S)
        N ← N_X̃, S ← X, T ← F̃
        X̃ ← IB_N(T|S)
        S ← F, T ← X̃
    end loop
    Output X̃

Figure 1: Pseudo-code for IDC

The "hard" IB-clustering originally presented by [12] uses an agglomerative procedure as its underlying clustering algorithm (see Section 2). The "soft" IB [8]
applies a deterministic annealing clustering [9] as its underlying procedure.
As already discussed, the IB method can be viewed as meta-clustering which
can employ many vectorial clustering routines. We implemented IDC using
several routines, including agglomerative clustering and deterministic annealing. Since both these algorithms
are computationally intensive, we also implemented IDC using a simple fast
algorithm called Add-C proposed by Guedalia et al. [5]. Add-C is an online
greedy clustering algorithm with linear
running time and can be viewed as a simple online approximation of k-means. For this
reason, all the results reported below were computed using Add-C (whose description
is omitted for lack of space; see [5] for details). For obtaining a better approximation
to the IB method we of course used the JS-divergence of (1) as our cost measure.
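The flavor of the online step can be sketched as follows (heavily simplified; the full Add-C algorithm also creates, merges and removes centroids on-line, which we omit here): each incoming distribution goes to the centroid of smallest divergence, e.g. a JS-based cost, and that centroid is updated as a running mean.

import numpy as np

def online_assign(stream, centroids, divergence):
    weights = np.ones(len(centroids))
    labels = []
    for p in stream:
        k = int(np.argmin([divergence(p, c) for c in centroids]))
        weights[k] += 1.0
        centroids[k] += (p - centroids[k]) / weights[k]   # running mean update
        labels.append(k)
    return labels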
Following [12] we chose to evaluate the performance of IDC with respect to a labeled
data set. Specifically, we count the number of classification errors made by IDC as
obtained from labeled data.
In order to better understand the properties of IDC, we first examined it within a
controlled setup of synthetically generated data points whose feature values were generated by d-dimensional Gaussian distributions (for d features) of the form N(μ, Σ),
where Σ = σ²I, with σ constant. In order to simulate different sources, we assigned
different μ values (from a given constant range) to each combination of source and
feature. Specifically, for data simulating m classes and |F| features, |F| × m different distributions were selected. We introduced feature noise by distorting each entry
with value v by adding a random sample from N(0, (α v)²), where α is the "noise
amplitude" (resulting negative values were rounded to zero). In Figure 2(a), we plot
the average accuracy of 10 runs of IDC. As can be seen, at low noise amplitudes
IDC attains perfect accuracy. When the noise amplitude increases, both IDC and DC
deteriorate but the multiple rounds of IDC can better resist the extra noise. After
observing the large accuracy gain between DC and IDC at a specific interval of noise
amplitude within the feature noise setup, we set the noise amplitude to values in that
interval and examined the behavior of the IDC run in more detail. Figure 2(b) shows
a typical trace of the accuracy obtained at each of the 20 iterations of an IDC run over
noisy data. This learning curve shows a quick improvement in accuracy during the first
few rounds, and then reaches a plateau.
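A Python sketch of this generator (our reading of the description; the range the means are drawn from is an assumption, and α is the noise amplitude):

import numpy as np

def make_data(n_per_class, n_classes, n_features, sigma, alpha, rng):
    mu = rng.uniform(1.0, 10.0, size=(n_classes, n_features))  # assumed mean range
    X, y = [], []
    for c in range(n_classes):
        v = rng.normal(mu[c], sigma, size=(n_per_class, n_features))
        v = v + rng.normal(0.0, np.abs(alpha * v) + 1e-12)      # proportional noise
        X.append(np.maximum(v, 0.0))                            # negatives -> zero
        y += [c] * n_per_class
    return np.vstack(X), np.array(y)

X, y = make_data(100, 4, 500, sigma=1.0, alpha=1.0, rng=np.random.default_rng(0))
print(X.shape, y.shape)   # (400, 500) (400,)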
Following [12] we used the 20 Newsgroups (NG20) [1] data set to evaluate IDC on real,
labeled data. We chose several subsets of NG20 with various degrees of difficulty. In the
first set of experiments we used the following four newsgroups (denoted as NG4), two
of which deal with sports subjects: "rec.sport.baseball", "rec.sport.hockey", "alt.atheism"
and "sci.med". In these experiments we tested some basic properties of IDC. In all the
experiments reported in this section we performed the following preprocessing: we
lowered the case of all letters, filtered out low frequency words which appeared up to
(and including) 3 times in the entire set and filtered out numerical and non-alphabetical
characters. Of course we also stripped off newsgroup headers which contain the class
labels.
In Figure 2(c) we display accuracy vs. the number of feature clusters (N_F̃). The accuracy
deteriorates when N_F̃ is too small, and we see a slight negative trend when it increases.
We performed an additional experiment which tested the performance using very large
numbers of feature clusters. Indeed, these results indicate that after a plateau in the
range of 10-20 there is a minor negative trend in the accuracy level. Thus, with respect
to this data set, the IDC algorithm is not too sensitive to an overestimation of the
number N_F̃ of feature clusters.
Other experiments over the NG4 data set confirmed the results of [12] that the JS-divergence dissimilarity measure of Equation (1) outperforms other measures, such as
the variational distance (L1 norm), the KL-divergence, the squared Euclidean distance
and the "cosine" distance. Details of all these experiments will be presented in the full
version of the paper.
In the next set of experiments we tested IDC's performance on the same newsgroup
subsets used in [12]. Table 1(a) compares the accuracy achieved by DC to that of the last
(15th) round of IDC with respect to all data sets described in [12]. Results of DC were
taken from [12], where DC is implemented using the agglomerative routine.
Table 1(b) displays a preliminary comparison of IDC with the results of a Naive Bayes
(NB) classifier (reported in [11]) and a support vector machine (SVM). In each of
the 5 experiments the supervised classifiers were trained using 25 documents per class
and tested on 475 documents per class. The input for the unsupervised IDC was 500
unlabeled documents per class. As can be seen, in this setting IDC outperforms both
the naive Bayes learner and the SVM.
4
Learning from Labeled and Unlabeled Examples
In this section, we present a natural extension of IDC for semi-supervised transductive
learning that can utilize both labeled and unlabeled data. In transductive learning, the
testing is done on the unlabeled examples in the training data, while in semi-supervised
inductive learning it is done on previously unseen data.
Newsgroup   DC     IDC-15
Binary1     0.70   0.85
Binary2     0.68   0.83
Binary3     0.75   0.80
Multi51     0.59   0.86
Multi52     0.58   0.88
Multi53     0.53   0.86
Multi101    0.35   0.56
Multi102    0.35   0.49
Multi103    0.35   0.55
Average     0.54   0.74

Data Set       NB     SVM    IDC-15   IDC-1
COMP (5)       0.50   0.51   0.50     0.34
SCIENCE (4)    0.73   0.68   0.79     0.44
POLITICS (3)   0.67   0.76   0.78     0.42
RELIGION (3)   0.55   0.78   0.60     0.38
SPORT (2)      0.75   0.78   0.89     0.76
Average        0.64   0.70   0.71     0.47

Table 1: Top: Accuracy of DC vs. IDC on most of the data sets described in [12]; DC
results are taken from [12]. Bottom: Accuracy of Naive Bayes (NB) and SVM classifiers vs.
IDC on some of the data sets described in [11]. The IDC-15 column shows the final accuracy
achieved at iteration 15 of IDC; the IDC-1 column shows the first iteration accuracy. The NB
results are taken from [11]. The SVM results were produced using the LibSVM package [2]
with its default parameters. In all cases the SVM was trained and tested using the same
training/test set sizes as described in [11] (25 documents per newsgroup for training and 475
for testing; the number of unlabeled documents fed to IDC was 500 per newsgroup). The
number of newsgroups in each hyper-category is specified in parentheses (e.g. COMP contains
5 newsgroups).
Here we only deal with
transductive case. In the full version of the paper we will present a semi-supervised
inductive learning version of IDC.
For motivating the transductive IDC, consider a data set X that has emerged from a
statistical mixture which includes several sources (classes). Let C be a random variable
indicating the class of a random point. During the first iteration of a standard IDC we
cluster the features F so as to preserve I(F, X). Typically, X contains predictive information about the classes C. In cases where I(X, C) is sufficiently large, we expect that
the feature clusters F̃ will preserve some information about C as well. Having available
some labeled data points, we may attempt to generate feature clusters F̃ which preserve more information about the class labels. This leads to the following straightforward
idea. During the first IB-stage of the first IDC iteration, we cluster the features F as
distributions over class labels (given by the labeled data). This phase results in feature
clusters F̃. Then we continue as usual; that is, in the second IB-phase of the first IDC
iteration we cluster X, represented as distributions over F̃. Subsequent IDC iterations
use all the unlabeled data.
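A Python sketch of this supervised first stage (our reading of the text, not the authors' code; the counts-matrix representation of the data is an assumption): each feature is represented as a distribution over the class labels of the labeled subset, and these rows are then clustered as usual to produce F̃.

import numpy as np

def feature_label_distributions(counts_labeled, labels, n_classes):
    # counts_labeled[x, f]: feature counts for the labeled elements only
    per_class = np.stack([counts_labeled[labels == c].sum(axis=0)
                          for c in range(n_classes)], axis=1)   # shape (F, C)
    return per_class / np.maximum(per_class.sum(axis=1, keepdims=True), 1e-12)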
In Figure 2(d) we show the accuracy obtained by DC and IDC in categorizing 5 newsgroups as a function of the training (labeled) set size. For instance, we see that when
the algorithm has 10 labeled documents available from each class, it can categorize the entire
unlabeled set, containing 90 unlabeled documents in each of the classes, with accuracy
of about 80%. The benchmark accuracy of IDC with no labeled examples was
about 73%.
In Figure 2(e) we see the accuracy obtained by DC and transductive IDC trained with
a constant set of 50 labeled documents, on different unlabeled (test) sample sizes. The
graph shows that the accuracy of DC significantly degrades, while IDC manages to
sustain an almost constant high accuracy.
5
Concluding Remarks
Our contribution is threefold. First, we present a natural extension of the successful
double clustering algorithm of [12]. Empirical evidence indicates that our new iterative
DC algorithm has distinct advantages over DC, especially in noisy settings. Second,
we applied the unsupervised IDC on text categorization problems which are typically
dealt with by supervised learning algorithms. Our results indicate that it is possible to
achieve performance competitive to supervised classifiers that were trained over small
samples. Finally, we present a natural extension of IDC that allows for transductive
learning. Our preliminary empirical evaluation of this scheme over text categorization
appears to be promising.
A number of interesting questions are left for future research. First, it would be of
interest to gain better theoretical understanding of several issues: the generalization
properties of DC and IDC, the convergence of IDC to a steady state and precise conditions on attribute noise settings within which IDC is advantageous. Second, it would
be important to test the empirical performance of IDC with respect to different problem domains. Finally, we believe it would be of great interest to better understand and
characterize the performance of transductive IDC in settings having both labeled and
unlabeled data.
Acknowledgements
We thank Naftali Tishby and Noam Slonim for helpful discussions and for providing us with
the detailed descriptions of the NG20 data sets used in their experiments. We also thank
Ron Meir, Yiftach Ravid and the anonymous referees for their constructive comments. This
research was supported by the Israeli Ministry of Science
References
[1] 20 newsgroups data set. http://www.ai.mit.edu/~jrennie/20 newsgroups/.
[2] LibSVM. http://www.csie.ntu.edu.tw/~cjlin/libsvm.
[3] T.M. Cover and J.A. Thomas. Elements of Information Theory. John Wiley & Sons,
Inc., 1991.
[4] R. El-Yaniv, S. Fine, and N. Tishby. Agnostic classification of markovian sequences. In
NIPS97, 1997.
[5] I.D. Guedalia, M. London, and M. Werman. A method for on-line clustering of nonstationary data. Neural Computation, 11:521-540, 1999.
[6] A.K. Jain and R.C. Dubes. Algorithms for Clustering Data. Prentice-Hall, New Jersey,
1988.
[7] J. Lin. Divergence measures based on the Shannon entropy. IEEE Transactions on
Information Theory, 37(1):145-151, 1991.
[8] N. Tishby, F.C. Pereira, and W. Bialek. The information bottleneck method. In 37th Allerton
Conference on Communication and Computation, 1999.
[9] K. Rose. Deterministic annealing for clustering, compression, classification, regression
and related optimization problems. Proceedings of the IEEE, 86(11):2210-2238, 1998.
[10] N. Slonim and N. Tishby. Agglomerative information bottleneck. In NIPS99, 1999.
[11] N. Slonim and N. Tishby. The power of word clustering for text classification. To appear
in the European Colloquium on IR Research, ECIR, 2001.
[12] N. Slonim and N. Tishby. Document clustering using word clusters via the information bottleneck method. In ACM SIGIR 2000, 2000.
[Figure 2 plots, panels (a)-(e): accuracy (y-axis, 0-100) vs., respectively, feature noise amplitude, IDC iteration, number of feature clusters, training set size, and test set size; panels (a) and (c)-(e) show first- and last-iteration accuracy curves.]
Figure 2: (a) Average accuracy over 10 trials for different amplitudes of proportional feature
noise. Data set: a synthetically generated sample of 200 500-dimensional elements in 4
classes. (b) A trace of a single IDC run. The x-axis is the number of IDC iterations and
the y-axis is the accuracy achieved at each iteration. Data set: synthetically generated sample of
500 400-dimensional elements in 5 classes; noise: proportional feature noise with α = 1.0;
(c) Average accuracy (10 trials) for different numbers of feature clusters. Data set: NG4. (d)
Average accuracy (10 trials) of transductive categorization of 5 newsgroups. Sample size:
80 documents per class; the x-axis is the training set size. The upper curve shows transductive IDC-15 and
the lower curve transductive IDC-1. (e) Average accuracy (10 trials) of transductive categorization
of 5 newsgroups. Sample size: a constant training set of 50 documents from each class.
The x-axis counts the number of unlabeled samples to be categorized. The upper curve is transductive
IDC-15 and the lower curve transductive IDC-1. Each error bar (in all graphs) specifies one std.
Neural networks: the early days
J.D. Cowan
Department of Mathematics, Committee on
Neurobiology, and Brain Research Institute,
The University of Chicago, 5734 S. Univ. Ave.,
Chicago, Illinois 60637
ABSTRACT
A short account is given of various investigations of neural network
properties, beginning with the classic work of McCulloch & Pitts.
Early work on neurodynamics and statistical mechanics, analogies with
magnetic materials, fault tolerance via parallel distributed processing,
memory, learning, and pattern recognition, is described.
1 INTRODUCTION
In this brief account of the early days in neural network research, it is not possible to be
comprehensive. This article then is a somewhat subjective survey of some, but not all, of
the developments in the theory of neural networks in the twenty-five year period, from
1943 to 1968, when many of the ideas and concepts were formulated which define the
field of neural network research. This comprises work on connections with automata
theory and computability; neurodynamics, both deterministic and statistical; analogies
with magnetic materials and spin systems; reliability via parallel and parallel distributed
processing; modifiable synapses and conditioning; associative memory; and supervised
and unsupervised learning.
2 McCULLOCH-PITTS NETWORKS
The modern era may be said to have begun with the work of McCulloch and Pitts (1943).
This is too well-known to need commenting on. Let me just make some historical remarks. McCulloch, who was by training a psychiatrist and neuroanatomist, spent some
twenty years thinking about the representation of events in the nervous system. From 1941
to 1951 he worked in Chicago. Chicago at that time was one of the centers of neural
network research, mainly through the work of the Rashevsky group in the Committee on
Mathematical Biology at the University of Chicago.

Figure 1: Warren McCulloch circa 1962

Rashevsky, Landahl, Rapaport and
Shimbel, among others, carried out many early investigations of the dynamics of neural
networks, using a mixture of calculus and algebra. In 1942 McCulloch was introduced to
Walter Pitts, then a 17 year old student of Rashevsky's. Pitts was a mathematical prodigy
who had joined the Committee sometime in 1941. There is an (apocryphal) story that
Pitts was led to the Rashevsky group after a chance meeting with the philosopher
Bertrand Russell, at that time a visitor to the University of Chicago. In any event Pitts
was already working on algebraic aspects of neural networks, and it did not take him long
to see the point behind McCulloch's quest for the embodiment of mind. In one of
McCulloch's later essays (McCulloch 1961) he describes the history of his efforts thus:
My object, as a psychologist, was to invent a least psychic event, or
"psychon", that would have the following properties: First, it was to be
so simple an event that it either happened or else it did not happen.
Second, it was to happen only if its bound cause had happened - shades
of Duns Scotus! - that is, it was to imply its temporal antecedent.
Third, it was to propose this to subsequent psychons. Fourth, these
were to be compounded to produce the equivalents of more
complicated propositions concerning their antecedents... In 1921 it
dawned on me that these events might be regarded as the all-or-nothing impulses of neurons, combined by convergence upon the next
neuron to yield complexes of propositional events.
Their subsequent 1943 paper was remarkable in many respects. It is best appreciated
within the zeitgeist of the era when it was written. As Papert has documented in his
introduction to a collection of McCulloch's papers (McCulloch 1967), 1943 was a seminal year for the development of the science of the mind. Craik's monograph The Nature
of Explanation and the paper "Behavior, Purpose and Teleology," by Rosenblueth,
Wiener and Bigelow, were also published in 1943. As Papert noted, "The common
feature [of these publications] is their recognition that the laws governing the
embodiment of mind should be sought among the laws governing information rather than
energy or matter". The paper by McCulloch and Pitts certainly lies within this
framework.
Figure 2: Walter Pitts circa 1952
McCulloch-Pitts networks (henceforth referred to as MP networks) are finite state
automata embodying the logic of propositions, with quantifiers, as McCulloch wished;
and permit the framing of sharp hypotheses about the nature of brain mechanisms, in a
form equivalent to computer programs. This was a remarkable achievement. It
established, once and for all, the validity of making formal models of brain mechanisms,
if not their veridicality. It also established the possibility of a rigorous theory of mind, in
that neural networks with feedback loops can exhibit purposive behavior, or as
McCulloch and Pitts put it:
both the formal and the final aspects of that activity which we are
wont to call mental are rigorously deducible from present
neurophysiology ... [and] that in [imaginable networks] ... "Mind" no
longer "goes more ghostly than a ghost".
2.1 FAULT TOLERANCE
MP networks were the first designed to perform specific logical tasks; and of course logic
can be mapped into arithmetic. Landahl, McCulloch and Pitts (1943), for example,
noted that the arithmetical operations +, 1−, and × can be obtained in MP networks via the
logical operations OR, NOT, and AND. Thus the arithmetical expression a − a·b = a·(1 − b)
corresponds to the logical expression a AND (NOT b).
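A tiny Python check of the logic-to-arithmetic mapping just described, with binary truth values (NOT a = 1 - a and a AND b = a * b):

def NOT(a):
    return 1 - a

def AND(a, b):
    return a * b

for a in (0, 1):
    for b in (0, 1):
        assert a - a * b == a * (1 - b) == AND(a, NOT(b))
print("a - a*b = a*(1 - b) = a AND (NOT b) for all binary a, b")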
BLIND SOURCE SEPARATION VIA
MULTINODE SPARSE REPRESENTATION
Michael Zibulevsky
Department of Electrical Engineering
Technion, Haifa 32000, Israel
mzib@ee.technion.ac.il

Pavel Kisilev
Department of Electrical Engineering
Technion, Haifa 32000, Israel
paufk@tx.technion.ac.il

Yehoshua Y. Zeevi
Department of Electrical Engineering
Technion, Haifa 32000, Israel
zeevi@ee.technion.ac.il

Barak Pearlmutter
Department of Computer Science
University of New Mexico
Albuquerque, NM 87131 USA
bap@cs.unm.edu
Abstract
We consider the problem of blind source separation from a set of instantaneous linear mixtures, where the mixing matrix is unknown. It was
recently discovered that exploiting the sparsity of sources in an appropriate representation according to some signal dictionary dramatically
improves the quality of separation. In this work we use the property of
multiscale transforms, such as wavelets or wavelet packets, to decompose
signals into sets of local features with various degrees of sparsity. We
use this intrinsic property for selecting the best (most sparse) subsets of
features for further separation. The performance of the algorithm is verified on noise-free and noisy data. Experiments with simulated signals,
musical sounds and images demonstrate significant improvement of separation quality over previously reported results.
1
Introduction
In the blind source separation problem an N-channel sensor signal x(ξ) is generated by
M unknown scalar source signals s_m(ξ), linearly mixed together by an unknown N x M
mixing, or crosstalk, matrix A, and possibly corrupted by additive noise n(ξ):

    x(ξ) = A s(ξ) + n(ξ).        (1)

The independent variable ξ is either time or spatial coordinates in the case of images. We
wish to estimate the mixing matrix A and the M-dimensional source signal s(ξ).
The assumption of statistical independence of the source components s_m(ξ), m = 1, ..., M,
leads to Independent Component Analysis (ICA) [1], [2]. A stronger assumption is the
*Supported in part by the Ollendorff Minerva Center, by the Israeli Ministry of Science, by NSF
CAREER award 97-02-311 and by the National Foundation for Functional Brain Imaging
sparsity of decomposition coefficients, when the sources are properly represented [3]. In
particular, let each s_m(ξ) have a sparse representation obtained by means of its decomposition coefficients c_mk according to a signal dictionary of functions φ_k(ξ):

    s_m(ξ) = Σ_k c_mk φ_k(ξ).        (2)

The functions φ_k(ξ) are called atoms or elements of the dictionary. These elements do
not have to be linearly independent, and instead may form an overcomplete dictionary,
e.g. wavelet-related dictionaries (wavelet packets, stationary wavelets, etc.; see for example [9]). Sparsity means that only a small number of coefficients c_mk differ significantly
from zero. Then, unmixing of the sources is performed in the transform domain, i.e. in the
domain of these coefficients c_mk. The property of sparsity often yields much better source
separation than standard ICA, and can work well even with more sources than mixtures. In
many cases there are distinct groups of coefficients, wherein sources have different sparsity
properties. The key idea in this study is to select only a subset of features (coefficients)
which is best suited for separation, with respect to the following criteria: (1) sparsity of
coefficients; (2) separability of sources' features. After this subset is formed, one uses it
in the separation process, which can be accomplished by standard ICA algorithms or by
clustering. The performance of our approach is verified on noise-free and noisy data. Our
experiments with 1D signals and images demonstrate that the proposed method further
improves separation quality, as compared with results obtained by using the sparsity of all decomposition coefficients.
2
Two approaches to sparse source separation: InfoMax and
Clustering
Sparse sources can be separated by each one of several techniques, e.g. the Bell-Sejnowski
Information Maximization (BS InfoMax) approach [1], or by approaches based on geometric considerations (see for example [8]). In the former case, the algorithm estimates the
unmixing matrix W = A - I, while in the later case the output is the estimated mixing
matrix. In both cases, these matrices can be estimated only up to a column permutation and
a scaling factor [4].
InfoMax. Under the assumption of a noiseless system and a square mixing matrix in (1),
the BS InfoMax is equivalent to the maximum likelihood (ML) formulation of the problem
[4], which is used in this section. For the sake of simplicity of the presentation, let us
consider the case where the dictionary of functions used in a source decomposition (2) is
an orthonormal basis. (In this case, the corresponding coefficients c_mk = ⟨s_m, φ_k⟩,
where ⟨·,·⟩ denotes the inner product.) From (1) and (2) the decomposition coefficients
of the noiseless mixtures, according to the same signal dictionary of functions φ_k(ξ), are:

    y_k = A c_k,        (3)

where the M-dimensional vector c_k forms the k-th column of the matrix C = {c_mk}.
Let Y be the features, or (new) data, matrix of dimension M x K, where K is the number of
features. Its rows are either the samples of sensor signals (mixtures), or their decomposition
coefficients. In the latter case, the coefficient vectors y_k form the columns of Y. (In the following
discussion we assume this setting for Y, unless stated otherwise.) We are interested in the
maximum likelihood estimate of A given the data Y.
Let the corresponding coefficients c_mk be independent random variables with a probability
density function (pdf) of an exponential type

    p(c_mk) ∝ exp{−ν(c_mk)},        (4)

where the scalar function ν(·) is a smooth approximation of an absolute value function.
Such a distribution is widely used for modeling sparsity [5]. In view of the independence of the c_mk, and (4), the prior pdf of C is

    p(C) ∝ ∏_{m,k} exp{−ν(c_mk)}.        (5)
Taking into account that Y = AC, the parametric model for the pdf of Y with respect to
parameters A is

    p_Y(Y; A) ∝ |det A|^{-K} ∏_{m,k} exp{−ν((A^{-1}Y)_mk)}.        (6)

Let W = A^{-1} be the unmixing matrix, to be estimated. Then, substituting C = WY,
combining (6) with (5) and taking the logarithm we arrive at the log-likelihood function:

    L_W(Y) = K log |det W| − Σ_{m=1}^{M} Σ_{k=1}^{K} ν((WY)_mk).        (7)
Maximization of L_W(Y) with respect to W is equivalent to the BS InfoMax, and can
be solved efficiently by the Natural Gradient algorithm [6]. We used this algorithm as
implemented in the ICA/EEG Matlab toolbox [7].
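For concreteness, (7) is straightforward to evaluate numerically; the sketch below is our own (the authors used the Natural Gradient implementation of [7]), and the smooth absolute-value surrogate ν(c) = sqrt(c² + ε) is our choice:

    import numpy as np

    def log_likelihood(W, Y, eps=1e-4):
        # L_W(Y) = K log|det W| - sum_{m,k} nu((WY)_{mk}), eq. (7);
        # nu(c) = sqrt(c^2 + eps) is a smooth surrogate for |c| (our choice).
        K = Y.shape[1]
        C = W @ Y                      # candidate source coefficients
        return K * np.log(abs(np.linalg.det(W))) - np.sqrt(C**2 + eps).sum()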
Clustering. In the case of geometry-based methods, separation of sparse sources can be
achieved by clustering along orientations of data concentration in the N-dimensional space
wherein each column y_k of the matrix Y represents a data point (N is the number of mixtures). Let us consider a two-dimensional noiseless case, wherein two source signals, s_1(t)
and s_2(t), are mixed by a 2x2 matrix A, arriving at two mixtures x_1(t) and x_2(t). (Here,
the data matrix is constructed from these mixtures x_1(t) and x_2(t).) Typically, a scatter
plot of two sparse mixtures x_1(t) versus x_2(t) looks like the rightmost plot in Figure 2. If
only one source, say s_1(t), were present, the sensor signals would be x_1(t) = a_11 s_1(t)
and x_2(t) = a_21 s_1(t), and the data points in the scatter diagram of x_1(t) versus x_2(t)
would belong to the straight line placed along the vector [a_11 a_21]^T. The same thing happens when two sparse sources are present. In this sparse case, at each particular index
where a sample of the first source is large, there is a high probability that the corresponding sample of the second source is small, and the point in the scatter diagram still lies close
to the mentioned straight line. The same arguments are valid for the second source. As a
result, data points are concentrated around two dominant orientations, which are directly
related to the columns of A. Source signals are rarely sparse in their original domain. In
contrast, their decomposition coefficients (2) usually show high sparsity. Therefore, we
construct the data matrix Y from the decomposition coefficients of mixtures (3), rather
than from the mixtures themselves.
In order to determine orientations of scattered data, we project the data points onto the
surface of a unit sphere by normalizing corresponding vectors, and then apply a standard
clustering algorithm. This clustering approach works efficiently even if the number of
sources is greater than the number of sensors. Our clustering procedure can be summarized
as follows:
1. Form the feature matrix Y , by putting samples of the sensor signals or (subset of) their
decomposition coefficients into the corresponding rows of the matrix;
2. Normalize feature vectors (columns of Y): ỹ_k = y_k / ||y_k||_2, in order to project data
points onto the surface of a unit sphere, where ||·||_2 denotes the l2 norm. Before normalization, it is reasonable to remove data points with a very small norm, since these are very likely to be
crosstalk-corrupted by small coefficients from other sources.
3. Move data points to a half-sphere, e.g. by forcing the sign of the first coordinate y_k1 to
be positive: IF y_k1 < 0 THEN y_k = −y_k. Without this operation each set of linearly (i.e., along
a line) clustered data points would yield two clusters on opposite sides of the sphere.
Figure 1: Random block signals (two upper) and their mixtures (two lower)
4. Estimate cluster centers by using a clustering algorithm. The coordinates of these centers
will form the columns of the estimated mixing matrix A. We used the Fuzzy C-Means (FCM)
clustering algorithm as implemented in the Matlab Fuzzy Logic Toolbox.
Sources recovery. The estimated unmixing matrix A^{-1} is obtained by either the BS
InfoMax or the above clustering procedure, applied either to the complete data set or to some
subsets of the data (to be explained in the next section). Then, the sources are recovered in their
original domain by s(t) = A^{-1} x(t). We should stress here that if the clustering approach
is used, the estimation of sources is not restricted to the case of square mixing matrices,
although the sources recovery is more complicated in the rectangular case (this topic is
outside the scope of this paper).
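A minimal sketch of steps 1-4 and the recovery step for the square case follows; it is our own illustration, with scikit-learn's KMeans substituted for the Matlab FCM routine used in the paper:

    import numpy as np
    from sklearn.cluster import KMeans  # stand-in for the paper's Matlab FCM

    def estimate_mixing(Y, n_sources, min_norm=1e-8):
        # Y is the feature matrix (mixtures or their coefficients in columns).
        norms = np.linalg.norm(Y, axis=0)
        Yn = Y[:, norms > min_norm] / norms[norms > min_norm]   # step 2
        Yn[:, Yn[0] < 0] *= -1                                  # step 3
        km = KMeans(n_clusters=n_sources, n_init=10).fit(Yn.T)  # step 4
        return km.cluster_centers_.T  # columns estimate the columns of A
                                      # (up to permutation and scale)

    # Square-case recovery: s_hat = np.linalg.inv(A_hat) @ x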
3
Multinode based source separation
Motivating example: sparsity of random blocks in the Haar basis. To provide intuitive
insight into the practical implications of our main idea, we first use 1D block functions,
that are piecewise constant, with random amplitude and duration of each constant piece
(Figure 1). It is known that the Haar wavelet basis provides a compact representation of such
functions. Let us take a close look at the Haar wavelet coefficients at different resolution
levels j = 0, 1, ..., J. Wavelet basis functions at the finest resolution level j = J are obtained
by translation of the Haar mother wavelet: φ(t) = {1, if t ∈ [0, 1); −1, if t ∈ [1, 2); 0
otherwise}. Taking the scalar product of a function s(t) with the wavelet φ_J(t − τ), we
produce a finite differentiation of the function s(t) at the point t = τ. This means that the
number of non-zero coefficients at the finest resolution for a block function will correspond
roughly to the number of jumps of this function. Proceeding to the next, coarser resolution
level, we have φ_{J−1}(t) = {1, if t ∈ [0, 2); −1, if t ∈ [2, 4); 0 otherwise}. At this level,
the number of non-zero coefficients still corresponds to the number of jumps, but the total
number of coefficients at this level is halved, and so is the sparsity. If we further proceed
to coarser resolutions, we will encounter levels where the support of the wavelet φ_j(t) is
comparable to the typical distance between jumps in the function s(t). In this case, most
of the coefficients are expected to be nonzero, and, therefore, sparsity will fade away.
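This level-dependent sparsity is easy to reproduce numerically; the snippet below is our own illustration (PyWavelets is our choice, not a tool used in the paper):

    import numpy as np
    import pywt  # PyWavelets, our choice for this illustration

    rng = np.random.default_rng(0)
    jumps = np.sort(rng.choice(np.arange(1, 1024), size=8, replace=False))
    pieces = np.diff(np.r_[0, jumps, 1024])              # constant-piece lengths
    s = np.repeat(rng.normal(size=pieces.size), pieces)  # random block signal

    for j, d in enumerate(pywt.wavedec(s, 'haar')[1:], 1):
        nonzero = np.mean(np.abs(d) > 1e-8)   # coarse (j=1) to fine levels
        print(f"level {j}: {d.size} coefficients, {nonzero:.0%} nonzero")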
To demonstrate how this influences the accuracy of blind source separation, we randomly
generated two block-signal sources (Figure 1, two upper plots), and mixed them by the
crosstalk matrix A with columns [0.83 −0.55] and [0.62 0.78]. The resulting sensor signals,
or mixtures, x_1(t) and x_2(t) are shown in the two lower plots of Figure 1. The scatter
plot of x_1(t) versus x_2(t) does not exhibit any visible distinct orientations (Figure 2, left).
Similarly, in the scatter plot of the wavelet coefficients at the lowest resolution, distinct
orientations are hardly detectable (Figure 2, middle). In contrast, the scatter plot of the
wavelet coefficients at the highest resolution (Figure 2, right) depicts two distinct orientations, which correspond to the columns of the mixing matrix.
[Figure 2 panels: raw signals; high-resolution WT coefficients; all wavelet coefficients.
The NSE values (%) printed beneath the panels are 1.93, 0.005 and 0.183 for InfoMax,
and 1.78, 0.002 and 0.058 for FCM, for raw data, highest-resolution coefficients and all
wavelet coefficients respectively.]

Figure 2: Separation of block signals: scatter plots of sensor signals (left), and of their
wavelet coefficients (middle and right). Lower columns present the normalized mean-squared separation error (%) corresponding to the Bell-Sejnowski InfoMax, and to the
Fuzzy C-Means clustering, respectively.
Since the crosstalk matrix A is estimated only up to a column permutation and a scaling factor, in order to measure the separation accuracy we normalize the original sources s_m(t)
and their corresponding estimated sources ŝ_m(t). The averaged (over sources) normalized squared error (NSE) is then computed as

    NSE = (1/M) Σ_{m=1}^{M} ||ŝ_m − s_m||² / ||s_m||².

Resulting separation errors for block sources are presented in the lower part of Figure 2.
The largest error (1.93%) is obtained on the raw data, and the smallest (0.005%) on
the wavelet coefficients at the highest resolution, which have the best sparsity. Using all
wavelet coefficients yields intermediate sparsity and performance.
Multinode representation. Our choice of a particular wavelet basis and of the sparsest
subset of coefficients was obvious in the above example: it was based on knowledge of the
structure of piecewise constant signals. For sources having oscillatory components (like
sounds or images with textures), other systems of basis functions, such as wavelet packets
and trigonometric function libraries [9], might be more appropriate. The wavelet packet
library consists of the triple-indexed family of functions ψ_{j,i,q}(t) = 2^{j/2} ψ_q(2^j t − i), j, i ∈
Z, q ∈ N, where j, i are the scale and shift parameters, respectively, and q is the frequency
parameter. [Roughly speaking, q is proportional to the number of oscillations of a mother
wavelet ψ_q(t).] These functions form a binary tree whose nodes are indexed by the depth
of the level j and the node number q = 0, 1, 2, 3, ..., 2^j − 1 at the specified level j. This
same indexing is used for corresponding subsets of wavelet packet coefficients (as well as
in scatter diagrams in the section on experimental results).
Adaptive selection of sparse subsets. When signals have a complex nature, it is difficult
to decide in advance which nodes contain the sparsest sets of coefficients. That is why we
use the following simple adaptive approach. First, for every node of the tree, we apply our
clustering algorithm, and compute a measure of clusters' distortion. In our experiments we
used a standard global distortion, the mean squared distance of data points to the centers of
their own (closest) clusters (here again, the weights of the data points can be incorporated):
d = Σ_{k=1}^{K} min_m ||u_m − y_k||, where K is the number of data points, u_m is the m-th centroid's
coordinates, y_k is the k-th data point's coordinates, and ||·|| is the sum-of-squares distance.
Second, we choose a few best nodes with the minimal distortion, combine their coefficients
into one data set, and apply a separation algorithm (clustering or Infomax) to these data.
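A sketch of this node-ranking step is given below; it is our own, with KMeans again standing in for FCM (its inertia_ attribute equals the global distortion d above), and the node paths follow PyWavelets conventions:

    import numpy as np
    import pywt
    from sklearn.cluster import KMeans

    def node_distortion(X, node, n_sources, wavelet='db8', maxlevel=3):
        # Cluster distortion d for one wavelet-packet node of the mixtures X
        # (one mixture per row); smaller d suggests a sparser, better node.
        Y = np.vstack([pywt.WaveletPacket(x, wavelet, maxlevel=maxlevel)[node].data
                       for x in X])
        norms = np.linalg.norm(Y, axis=0)
        Yn = Y[:, norms > 1e-8] / norms[norms > 1e-8]
        Yn[:, Yn[0] < 0] *= -1
        km = KMeans(n_clusters=n_sources, n_init=10).fit(Yn.T)
        return km.inertia_            # sum of squared distances to centroids

    # Rank the nodes of the depth-3 tree and keep the best few:
    # paths = [nd.path for nd in
    #          pywt.WaveletPacket(X[0], 'db8', maxlevel=3).get_level(3)]
    # best = sorted(paths, key=lambda p: node_distortion(X, p, 2))[:2]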
4
Experimental results
The proposed blind separation method, based on the wavelet-packet representation, was
evaluated by using several types of signals. We have already discussed the relatively simple
example of a random block signal. The second type of signal is a frequency modulated
(FM) sinusoidal signal. The carrier frequency is modulated by either a sinusoidal function
(FM signal) or by random blocks (BFM signal). The third type is a musical recording of
flute sounds. Finally, we apply our algorithm to images. An example of such images is
presented in the left part of Figure 3.
Figure 3: Left: two source images (upper pair), their mixtures (middle pair) and estimated
images (lower pair). Right: scatter plots of the wavelet packet (WP) coefficients of mixtures
of images; subsets are indexed on the WP tree.
In order to compare the accuracy of our adaptive best-nodes method with that attainable by
standard methods, we form the following feature sets: (1) raw data, (2) Short Time Fourier
Transform (STFT) coefficients (in the case of 1D signals), (3) Wavelet Transform coefficients, (4) Wavelet packet coefficients at the best nodes found by our method, while using
various wavelet families with different smoothness (Haar, db4, db8). In the case of image
separation, we used the Discrete Cosine Transform (DCT) instead of the STFT, and the
sym4 and sym8 mother wavelets instead of db4 and db8, when using the wavelet transform
and wavelet packets.
The right part of Figure 3 presents an example of scatter plots of the wavelet packet coefficients obtained at various nodes of the wavelet packet tree. The upper left scatter plot,
marked with 'C', corresponds to the complete set of coefficients at all nodes. The rest are
the scatter plots of sets of coefficients indexed on the wavelet packet tree. Generally speaking, the more distinct the two dominant orientations appear on these plots, the more precise
is the estimation of the mixing matrix, and, therefore, the better is the quality of separation.
Note that only two nodes, C22 and C23, show clear orientations. These nodes will most
likely be selected by the algorithm for the further estimation process.
Signals     raw data   STFT    WT db8   WT haar   WP db8   WP haar
Blocks      10.16      2.669   0.174    0.037     0.073    0.002
BFM sine    24.51      0.667   0.665    2.34      0.2      0.442
FM sine     25.57      0.32    1.032    6.105     0.176    0.284
Flutes      1.48       0.287   0.355    0.852     0.154    0.648

            raw data   DCT     WT sym8  WT haar   WP sym8  WP haar
Images      4.88       3.651   1.164    1.114     0.365    0.687
Table 1: Experimental results: normalized mean-squared separation error (%) for noise-free signals and images, applying the FCM separation to raw data and decomposition coefficients in various domains. In the case of wavelet packets (WP) the best nodes selected by
our algorithm were used.
Table 1 summarizes the results of experiments in which we applied our best-features selection approach along with the FCM separation to each noise-free feature set. In these
experiments, we compared the quality of separation of deterministic signals by calculating
NSEs (i.e., residual crosstalk errors). In the case of random block and BFM signals, we
performed 100 Monte-Carlo simulations and calculated the normalized mean-squared errors (NMSE) for the above feature sets. From Table 1 it is clear that using our adaptive
best-nodes method outperforms all other feature sets (including the complete set of wavelet
coefficients), for each type of signal. A similar improvement was achieved by using our
method along with the BS InfoMax separation, which provided even better results for images. In the case of the random block signals, using the Haar wavelet function for the
wavelet packet representation yields a better separation than using some smooth wavelet,
e.g. db8. The reason is that these block signals, which are not natural signals, have a sparser
representation in the case of the Haar wavelets. In contrast, as expected, natural signals
such as the flute signals are better represented by smooth wavelets, which in turn provide
a better separation. This is another advantage of using sets of features at multiple nodes
along with various families of 'mother' functions: one can choose best nodes from several
decomposition trees simultaneously.
In order to verify the performance of our method in the presence of noise, we added various
types of noise (white Gaussian and salt & pepper) to three mixtures of three images at various
signal-to-noise energy ratios (SNR). Table 2 summarizes these experiments, in which we
applied our approach along with the BS InfoMax separation. It turns out that the ideas
used in wavelet-based signal denoising (see for example [10] and references therein) are
applicable to signal separation from noisy mixtures. In particular, in the case of white Gaussian
noise, the noise energy is uniformly distributed over all wavelet coefficients at various
scales. Therefore, at sufficiently high SNRs, the large coefficients of the signals are only
slightly distorted by the noise coefficients, and the estimation of the unmixing matrix is
almost unaffected by the presence of noise. (In contrast, the BS InfoMax applied to
the three noisy mixtures themselves failed completely, arriving at an NSE of 19% even in the
case of SNR=12dB). We should stress here that, although our adaptive best nodes method
performs reasonably well in the presence of noise, it is not supposed to further denoise the
reconstructed images (this can be achieved by some denoising method, after source signals
are separated). More experimental results, as well as parameters of simulations, can be
found in [11].
SNR [dB]
Mixtures w. white Gaussian noise
Mixtures w. salt & pepper noise
[The numeric entries of Table 2 were lost in extraction and are not reproduced here.]

Table 2: Performance of the algorithm in the presence of various sources of noise in mixtures
of images: normalized mean-squared separation error (%), applying our adaptive approach
along with the BS InfoMax separation.
5
Conclusions
Experiments with both one- and two-dimensional simulated and natural signals demonstrate that multinode sparse representations improve the efficiency of blind source separation. The proposed method improves the separation quality by utilizing the structure of
signals, wherein several subsets of the wavelet packet coefficients have significantly better
sparsity and separability than others. In this case, scatter plots of these coefficients show
distinct orientations each of which specifies a column of the mixing matrix. We choose
the 'good subsets' according to the global distortion adopted as a measure of cluster quality. Finally, we combine together coefficients from the best chosen subsets and restore
the mixing matrix using only this new subset of coefficients by the Infomax algorithm or
clustering. This yields significantly better results than those obtained by applying standard
Infomax and clustering approaches directly to the raw data. The advantage of our method
is in particular noticeable in the case of noisy mixtures.
References
[1] A. J. Bell and T. J. Sejnowski, "An information-maximization approach to blind separation and blind deconvolution," Neural Computation, vol. 7, no. 6, pp. 1129–1159,
1995.
[2] A. Hyvarinen, "Survey on independent component analysis," Neural Computing Surveys, no. 2, pp. 94- 128, 1999.
[3] M. Zibulevsky and B. A. Pearlmutter, "Blind separation of sources with sparse representations in a given signal dictionary," Neural Computation, vol. 13, no. 4, pp. 863–882, 2001.
[4] J.-F. Cardoso, "Infomax and maximum likelihood for blind separation," IEEE Signal
Processing Letters, vol. 4, pp. 112–114, 1997.
[5] M. S. Lewicki and T. J. Sejnowski, "Learning overcomplete representations," Neural
Computation, 12(2):337–365, 2000.
[6] S. Amari, A. Cichocki, and H. H. Yang, "A new learning algorithm for blind signal
separation," In Advances in Neural Information Processing Systems 8. MIT Press.
1996.
[7] S. Makeig, ICA/EEG toolbox. Computational Neurobiology Laboratory, the Salk
Institute. http://www.cnl.salk.edu/~tewon/ica_cnl.html, 1999.
[8] A. Prieto, C. G. Puntonet, and B. Prieto, "A neural algorithm for blind separation of
sources based on geometric properties," Signal Processing, vol. 64, no. 3, pp. 315–331,
1998.
[9] S. Mallat, A Wavelet Tour of Signal Processing. Academic Press, 1998.
[10] D. L. Donoho, "De-noising by soft thresholding," IEEE Trans. Inf. Theory, vol. 41,
no. 3, pp. 613–627, 1995.
[11] P. Kisilev, M. Zibulevsky, Y. Y. Zeevi, and B. A. Pearlmutter, Multiresolution framework for sparse blind source separation, CCIT Report no. 317, June 2000.
Direct value-approximation for factored MDPs
Dale Schuurmans and Relu Patrascu
Department of Computer Science
University of Waterloo
{dale, rpatrasc}@cs.uwaterloo.ca
Abstract
We present a simple approach for computing reasonable policies
for factored Markov decision processes (MDPs), when the optimal value function can be approximated by a compact linear form.
Our method is based on solving a single linear program that approximates the best linear fit to the optimal value function. By
applying an efficient constraint generation procedure we obtain an
iterative solution method that tackles concise linear programs. This
direct linear programming approach experimentally yields a significant reduction in computation time over approximate value- and
policy-iteration methods (sometimes reducing several hours to a
few seconds). However, the quality of the solutions produced by
linear programming is weaker-usually about twice the approximation error for the same approximating class. Nevertheless, the
speed advantage allows one to use larger approximation classes to
achieve similar error in reasonable time.
1
Introduction
Markov decision processes (MDPs) form a foundation for control in uncertain and
stochastic environments and reinforcement learning. Standard methods such as
value-iteration, policy-iteration and linear programming can be used to produce
optimal control policies for MDPs that are expressed in explicit form; that is, the
policy, value function and state transition model are all represented in a tabular
manner that explicitly enumerates the state space. This renders the approaches
impractical for all but toy problems. The real goal is to achieve solution methods
that scale up reasonably in the size of the state description, not the size of the state
space itself (which is usually either exponential or infinite).
There are two basic premises on which solution methods can scale up: (1) exploiting
structure in the MDP model itself (i.e. structure in the reward function and the state
transition model); and (2) exploiting structure in an approximate representation of
the optimal value function (or policy). Most credible attempts at scaling-up have
generally had to exploit both types of structure. Even then, it is surprisingly difficult
to formulate an optimization method that can handle large state descriptions and
yet simultaneously produce value functions or policies with small approximation
errors, or errors that can be bounded tightly. In this paper we investigate a simple
approach to determining approximately optimal policies based on a simple direct
linear programming approach. Specifically, the idea is to approximate the optimal
value function by formulating a single linear program and exploiting structure in the
MDP and the value function approximation to solve this linear program efficiently.
2
Preliminaries
We consider MDPs with finite state and action spaces and consider the goal of maximizing infinite horizon discounted reward. In this paper, states will be represented
by vectors x of length n, where for simplicity we assume the state variables x_1, ..., x_n
are in {0, 1}; hence the total number of states is N = 2^n. We also assume there
is a small finite set of actions A = {a_1, ..., a_l}. An MDP is defined by: (1) a state
transition model P(x'|x, a) which specifies the probability of the next state x' given
the current state x and action a; (2) a reward function R(x, a) which specifies the
immediate reward obtained by taking action a in state x; and (3) a discount factor
γ, 0 ≤ γ < 1. The problem is to determine an optimal control policy π* : X → A
that achieves maximum expected future discounted reward in every state.
To understand the standard solution methods it is useful to define some auxiliary
concepts. For any policy π, the value function V^π : X → R denotes the expected
future discounted reward achieved by policy π in each state x. It turns out that
V^π satisfies a fixed point relationship between the value of current states and the
expected values of future states, given by a backup operator V^π = B^π V^π, where
B^π operates on arbitrary functions over the state space according to

    (B^π f)(x) = R(x, π(x)) + γ Σ_{x'} P(x'|x, π(x)) f(x')
Another important backup operator is defined with respect to a fixed action a:

    (B^a f)(x) = R(x, a) + γ Σ_{x'} P(x'|x, a) f(x')
The action-value function Q^π : X × A → R denotes the expected future discounted
reward achieved by taking action a in state x and following policy π thereafter,
which must satisfy Q^π(x, a) = (B^a V^π)(x). Given an arbitrary function f over states,
the greedy policy π_gre(f) with respect to f is defined by

    π_gre(f)(x) = argmax_a (B^a f)(x)
Finally, if we let π* denote the optimal policy and V* denote its value function,
we have the relationship V* = B*V*, where (B* f)(x) = max_a (B^a f)(x). If, in
addition, we define Q*(x, a) = (B^a V*)(x), then we also have π*(x) = π_gre(V*)(x) =
argmax_a Q*(x, a). Given these definitions, the three fundamental methods for calculating π* can be formulated as:
Policy iteration: Start with an arbitrary policy π^(0). Iterate π^(i+1) ← π_gre(V^{π^(i)})
until π^(i+1) = π^(i). Return π* = π^(i+1).

Value iteration: Start with an arbitrary function f^(0). Iterate f^(i+1) ← B* f^(i)
until ||f^(i+1) − f^(i)||_∞ < tol. Return π* = π_gre(f^(i+1)).

Linear programming: Calculate V* = argmin_f Σ_x f(x) subject to f(x) ≥
(B^a f)(x) for all a and x. Return π* = π_gre(V*).
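For reference, a minimal tabular sketch of the second method (our own illustration; P[a] and R are explicit N x N and N x |A| arrays, which is exactly the representation that fails to scale):

    import numpy as np

    def value_iteration(P, R, gamma, tol=1e-6):
        # Explicit value iteration: f <- B* f until convergence, then the
        # greedy policy pi_gre(f) is returned together with f.
        N, nA = R.shape
        f = np.zeros(N)
        while True:
            Q = R + gamma * np.stack([P[a] @ f for a in range(nA)], axis=1)
            f_new = Q.max(axis=1)
            if np.max(np.abs(f_new - f)) < tol:
                return Q.argmax(axis=1), f_new
            f = f_new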
All three methods can be shown to produce optimal policies for the given MDP
[1, 10] even though they do so in very different ways. However, all three approaches
share the same fundamental limitation that they do not scale up feasibly in n, the
size of the state descriptions. Instead, all of these approaches work with explicit
representations of the policies and value functions that are exponential in n.
3
Exploiting structure
To scale up to large state spaces it is necessary to exploit substantial structure in
the MDP while also adopting some form of approximation for the optimal value
function and policy. The two specific structural assumptions we consider in this
paper are (1) factored MDPs and (2) linear value function approximations. Neither
of these two assumptions alone is sufficient to permit efficient policy optimization for
large MDPs. However, combined, the two assumptions allow approximate solutions
to be obtained for problems involving trillions of states reasonably quickly.
3.1
Factored MDPs
In the spirit of [7, 8, 6] we define a factored MDP to be one that can be represented compactly by an additive reward function and a factored state transition model. Specifically, we assume the reward function decomposes as R(x, a) =
Σ_{r=1}^{m} R_{a,r}(x_{a,r}), where each local reward function R_{a,r} is defined on a small set
of variables x_{a,r}. We assume the state transition model P(x'|x, a) can be represented by a set of dynamic Bayesian networks (DBNs) on state variables (one for
each action), where each DBN defines a compact transition model on a directed
bipartite graph connecting state variables in consecutive time steps. Let x_{a,i} denote the parents of successor variable x'_i in the DBN for action a. To allow efficient optimization we assume the parent set x_{a,i} contains a small number of state
variables from the previous time step. Given this model, the probability of a successor state x' given a predecessor state x and action a is given by the product
P(x'|x, a) = Π_{i=1}^{n} P(x'_i | x_{a,i}).
The main benefit of this factored representation is that it allows large MDPs to
be encoded concisely: if the functions R_{a,r}(x_{a,r}) and P(x'_i | x_{a,i}) depend on a small
number of variables, they can be represented by small tables and efficiently combined to determine R(x, a) and P(x'|x, a). Unfortunately, as pointed out in [7],
a factored MDP does not by itself yield a feasible method for determining optimal
policies. The main problem is that, even if P and R are factored, the optimal value
function generally does not have a compact representation (nor does the optimal
policy). Therefore, obtaining an exact solution appears to require a return to explicit representations. However, it turns out that the factored MDP representation
interacts very well with linear value function approximations.
3.2
Linear approximation
One of the central tenets to scaling up is to approximate the optimal value function rather than calculate it exactly. Numerous schemes have been investigated for
approximating optimal value functions and policies in a compact representational
framework, including: hierarchical decompositions [5], decision trees and diagrams
[3, 12], generalized linear functions [1, 13, 4, 7, 8, 6], neural networks [2], and products of experts [11]. However, the simplest of these is generalized linear functions,
which is the form we investigate below. In this case, we consider functions of the
form f(x) = Σ_{j=1}^{k} w_j b_j(x_j), where b_1, ..., b_k are a fixed set of basis functions, and x_j
denotes the variables on which basis b_j depends. Combining linear functions with
factored MDPs provides many opportunities for feasible approximation.
The first main benefit of combining linear approximation with factored MDPs is
that the result of applying the backup operator B^a to a linear function results in
a compact representation for the action-value function. Specifically, if we define
g(x, a) = (B^a f)(x) then we can rewrite it as

    g(x, a) = Σ_{r=1}^{m} R_{a,r}(x_{a,r}) + Σ_{j=1}^{k} w_j c_{a,j}(x_{a,j})

where

    c_{a,j}(x_{a,j}) = γ Σ_{x'_j} P(x'_j | a, x_{a,j}) b_j(x'_j)   and   x_{a,j} = ∪_{x'_i ∈ x'_j} x_{a,i}
That is, x_{a,i} are the parent variables of x'_i, and x_{a,j} is the union of the parent
variables of the x'_i ∈ x'_j. Thus, c_{a,j} expresses the fact that in a factored MDP the
expected future value of one component of the approximation depends only on the
current state variables x_{a,j} that are direct parents of the variables x'_j in b_j. If the
MDP is sparsely connected then the variable sets in g will not be much larger than
those in f. The ability to represent the state-action value function in a compact
linear form immediately provides a feasible implementation of the greedy policy for
f, since π_gre(f)(x) = argmax_a g(x, a) by definition of π_gre, and g(x, a) is efficiently
determinable for each x and a. However, it turns out that this is not enough
to permit feasible forms of approximate policy- and value-iteration to be easily
implemented.
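To make the feasibility of the greedy policy concrete: evaluating it at a single state only touches the small local tables above. A sketch (our own; the table and scope structures are hypothetical, and the c_{a,j} terms are assumed precomputed):

    def greedy_action(x, actions, w, R_local, R_scopes, c_local, c_scopes):
        # g(x,a) = sum_r R_local[a][r][x restricted to its scope]
        #        + sum_j w[j] * c_local[a][j][x restricted to its scope];
        # tables are indexed by tuples of the state variables in their scope.
        def g(a):
            reward = sum(R_local[a][r][tuple(x[i] for i in R_scopes[a][r])]
                         for r in range(len(R_local[a])))
            future = sum(w[j] * c_local[a][j][tuple(x[i] for i in c_scopes[a][j])]
                         for j in range(len(w)))
            return reward + future
        return max(actions, key=g)   # pi_gre(f)(x) = argmax_a g(x, a)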
The main problem is that even though B^a f has a factored form for fixed a, B* f does
not, and (therefore) neither does π_gre(f). In fact, even if a policy π were concisely
represented, B^π f would not necessarily have a compact form, because π usually
depends on all the state variables and thus P(x'|x, π(x)) = Π_{i=1}^{n} P(x'_i | x_{π(x),i}) becomes a product of terms that depend on all the state variables. Here [8, 6] introduce
an additional assumption that there is a special "default" action a_d for the MDP
such that all other actions a have a factored transition model P(·|·, a) that differs
from P(·|·, a_d) only on a small number of state variables. This allows the greedy
policy π_gre(f) to have a compact form and moreover allows B^{π_gre(f)} f to be concisely represented. With some effort, it then becomes possible to formulate feasible
versions of approximate policy- and value-iteration [8, 6].
Approximate policy iteration: Start with the default policy π^(0)(x) = a_d. Iterate
f^(i) ← argmin_f max_x |f(x) − (B^{π^(i)} f)(x)|, π^(i+1) ← π_gre(f^(i)) until π^(i+1) = π^(i).

Approximate value iteration: Start with arbitrary f^(0). Iterate π^(i) ←
π_gre(f^(i)), f^(i+1) ← argmin_f max_x |f(x) − (B^{π^(i)} f)(x)| until ||f^(i+1) − f^(i)||_∞ < tol.

The most expensive part of these iterative algorithms is determining
argmin_f max_x |f(x) − (B^{π^(i)} f)(x)|, which involves solving a linear program min_{w,ε} ε
subject to −ε ≤ f_w(x) − (B^π f_w)(x) ≤ ε for all x. This linear program is problematic
because it involves an exponential number of constraints. A central achievement of
[6] is to show that this system of constraints can be encoded by an equivalent system
of constraints that has a much more compact form. The idea behind this construction is to realize that searching for the max or a min of a linear function with a
compact basis can be conducted in an organized fashion, and such an organized
search can be encoded in an equally concise constraint system. This construction
allows approximate solutions to MDPs with up to n = 40 state variables (1 trillion
states) to be generated in under 7.5 hours using approximate policy iteration [6].¹

¹ It turns out that approximate value iteration is less effective because it takes more
iterations to converge, and in fact can diverge in theory [6, 13].
Our main observation is that if one has to solve linear programs to conduct the
approximate iterations anyway, then it might be much simpler and more efficient
to approximate the linear programming approach directly.
4
Approximate linear programming
Our first idea is simply to observe that a factored MDP and linear value approximation immediately allow one to directly solve the linear programming approximation
to the optimal value function, which is given by
    min_f Σ_x f(x)   subject to   f(x) − (B^a f)(x) ≥ 0 for all x and a
where f is restricted to a linear form over a fixed basis. In fact, it is well known [1, 2]
that this yields a linear program in the basis weights w. However, what had not
been previously shown is that given a factored MDP, an equivalent linear program
of feasible size could be formulated. Given the results of [6] outlined above this is
now easy to do. First, one can show that the minimization objective can be encoded
compactly
    Σ_x f(x) = Σ_x Σ_{j=1}^{k} w_j b_j(x_j) = Σ_{j=1}^{k} w_j y_j,   where   y_j = 2^{n−|x_j|} Σ_{x_j} b_j(x_j)

Here the y_j components can be easily precomputed by enumerating assignments
to the small sets of variables in basis functions. Second, as we have seen, the
exponentially many constraints have a structured form. Specifically, f(x) − (B^a f)(x)
can be represented as

    f(x) − (B^a f)(x) = Σ_{j=1}^{k} w_j (b_j(x_j) − c_{a,j}(x_{a,j})) − Σ_r R_{a,r}(x_{a,r})
which has a simple basis representation that allows the technique of [6] to be used
to encode a constraint system that enforces f(x) − (B^a f)(x) ≥ 0 for all x and a
without enumerating the state space for each action.
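For intuition, the unfactored version of this LP for a small explicit MDP can be written down directly; the sketch below is ours (scipy is our choice), and the factored case replaces the explicit enumerations with the compact encodings just described:

    import numpy as np
    from scipy.optimize import linprog

    def alp_explicit(P, R, gamma, B):
        # min_w sum_x f(x) with f = B w (B is the N x k basis matrix),
        # s.t. f(x) >= (B^a f)(x), i.e. (B - gamma P[a] B) w >= R[:, a].
        N, nA = R.shape
        c = B.sum(axis=0)                    # objective coefficients on w
        A_ub = np.vstack([-(B - gamma * P[a] @ B) for a in range(nA)])
        b_ub = np.concatenate([-R[:, a] for a in range(nA)])
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(None, None)] * B.shape[1])
        return res.x                         # basis weights w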
We implemented this approach and tested it on some of the test problems from [6].
In these problems there is a directed network of computer systems x_1, ..., x_n, where
each system is either up (x_i = 1) or down (x_i = 0). Systems can spontaneously
go down with some probability at each step, but this probability is increased if an
immediately preceding machine in the network is down. There are n + 1 actions:
do nothing (the default) and reboot machine i. The reward in a state is simply the
sum of systems that are up, with a bonus reward of 1 if system 1 (the server) is
up; i.e., R(x) = 2x_1 + Σ_{i=2}^{n} x_i. We considered the network architectures shown in
Figure 1 and used the transition probabilities P(x'_i = 1 | x_i, parent(x_i), a = i) = 0.95
and P(x'_i = 1 | x_i, parent(x_i), a ≠ i) = 0.9 if x_i = parent(x_i) = 1; 0.67 if x_i = 1 and
parent(x_i) = 0; and 0.01 if x_i = 0. The discount factor was γ = 0.95. The first basis
functions we considered were just the indicators on each variable x_i plus a constant
basis function (as reported in [6]).
The results for two network architectures are shown in Figure 1. Our approximate
linear programming method is labeled ALP and is compared to the approximate
[Figure 1 also depicts the two network architectures (directed networks of machines,
one node labeled as the server); the diagrams themselves are not recoverable from the
extracted text. The two result tables, regrouped from the extraction, follow.]

First architecture:
n                     12     16     20     24     28     32     36     40
N                     4e3    6e4    1e6    2e7    3e8    4e9    7e10   1e12
time:
  API [6]             7m     30m    50m    1.3h   1.9h   3h     4.5h   7.5h
  APIgen              39s    1.5m   2.3m   4.0m   6.5m   13m    22m    28m
  ALP                 4.5s   23s    1.4m   4.1m   10m    23m    47m    2.4h
  ALPgen              0.7s   1.2s   1.8s   2.6s   3.5s   4.5s   5.9s   7.0s
  ALPgen2             14s    37s    1.2m   2.8m   4.7m   6.4m   12m    17m
constraints:
  APIgen              420    777    921    1270   1591   2747   4325   4438
  ALP                 1131   2023   3171   4575   6235   8151   10K    13K
  ALPgen              38     50     62     74     86     98     110    122
  ALPgen2             166    321    514    914    1223   1433   1951   2310
UB Bellman / Rmax:
  API [6]             0.30   0.33   0.34   0.35   0.36   0.36   0.37   0.38
  APIgen              0.36   0.34   0.33   0.33   0.32   0.32   0.32   0.31
  ALP(gen)            0.85   0.82   0.80   0.78   0.78   0.77   0.76   0.76
  ALPgen2             0.12   0.14   0.08   0.08   0.10   0.08   0.07   0.07

Second architecture:
n                     13     16     22     28     34     40
N                     8e3    6e4    4e6    3e8    2e10   1e12
time:
  API [6]             5m     15m    50m    1.3h   2.7h   5h
  APIgen              28s    1.6m   3.9m   12m    23m    33m
  ALP                 0.7s   1.6s   6.0s   20s    56s    2.2m
  ALPgen              0.7s   1.0s   1.5s   2.4s   3.4s   4.7s
  ALPgen2             17s    33s    1.9m   5.4m   9.6m   23m
constraints:
  APIgen              363    952    1699   3792   6196   7636
  ALP                 729    1089   2025   3249   4761   6561
  ALPgen              50     69     90     114    135    162
  ALPgen2             261    381    826    1505   1925   3034
UB Bellman / Rmax:
  API [6]             0.27   0.29   0.32   0.34   0.35   0.36
  APIgen              0.50   0.46   0.42   0.39   0.38   0.37
  ALP(gen)            0.96   0.82   0.78   0.78   0.77   0.76
  ALPgen2             0.21   0.22   0.15   0.06   0.07   0.03
Figure 1: Experimental results (timings on a 750MHz PIII processor, except ²)
policy iteration strategy API described in [6]. Since we did not have the specific
probabilities used in [6] and could only estimate the numbers for API from graphs
presented in the paper, this comparison is only meant to be loosely indicative of
the general run times of the two methods on such problems. (Perturbing the probability values did not significantly affect our results, but we implemented APIgen
for comparison.) As in [6] our implementation is based on Matlab, using CPLEX
to solve linear programs. Our preliminary results appear to support the hypothesis that direct linear programming can be more efficient than approximate policy
iteration on problems of this type. A further advantage of the linear programming approach is that it is simpler to program and involves solving only one LP.
More importantly, the direct LP approach does not require the MDP to have a special default action since the action-value function can be directly extracted using
π_gre(f)(x) = argmax_a g(x, a), and g is easily recoverable from f.
Before discussing drawbacks, we note that it is possible to solve the linear program
even more efficiently by iteratively generating constraints as needed. This is now
possible because factored MDPs and linear value approximations allow an efficient
search for the maximally violated constraints in the linear program, which provides
an effective way of generating concise linear programs that can be solved much more
efficiently than those formulated above. Specifically, the procedure ALPgen exploits
the feasible search techniques for minimizing linear functions discussed previously
to efficiently generate a small set of critical constraints, which is iteratively grown
until the final solution is identified; see Figure 2.
² These numbers are estimated from graphs in [6]. The exact probabilities and computer
used for the simulations were not reported in that paper, so we cannot assert an exact
comparison. However, perturbed probabilities have little effect on the performance of the
methods we tried, and it seems that overall this is a loosely representative comparison of
the general performance of the various algorithms on these problems.
ALPgen
  Start with f^(0) = 0 and constraints = ∅
  Loop
    For each a ∈ A, compute x^a ← argmin_x f^(i)(x) − (B^a f^(i))(x)
    constraints ← constraints ∪ {constraint(x^{a_1}), ..., constraint(x^{a_l})}
    Solve f^(i+1) ← argmin_f Σ_x f(x) subject to constraints
  Until min_x f^(i)(x) − (B^a f^(i))(x) ≥ 0 − tol for all a
  Return g(·, a) = B^a f for each a, to represent the greedy policy
Figure 2: ALPgen procedure
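In outline (our own sketch, not the authors' code), the loop is a generic constraint-generation wrapper around an LP solver; most_violated is a hypothetical oracle standing in for the structured search for argmin_x f(x) − (B^a f)(x), and in early iterations the LP may be unbounded until enough constraints accumulate:

    import numpy as np
    from scipy.optimize import linprog

    def alpgen(actions, k, y, most_violated, tol=1e-6, max_iter=100):
        # Minimize y.w subject to the generated constraints row.w >= rhs,
        # where each row encodes f(x^a) - (B^a f)(x^a) >= 0 in the weights w.
        rows, rhs, w = [], [], np.zeros(k)
        for _ in range(max_iter):
            found = [most_violated(a, w) for a in actions]  # (row, rhs, gap)
            for row, b, _ in found:
                rows.append(row); rhs.append(b)
            if max(gap for _, _, gap in found) <= tol:
                return w
            res = linprog(y, A_ub=-np.array(rows), b_ub=-np.array(rhs),
                          bounds=[(None, None)] * k)
            w = res.x
        return w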
The rationale for this procedure is that the main bottleneck in the previous methods is generating the constraints, not solving the linear programs [6]. Since only a
small number of constraints are active at a solution and these are likely to be the
most violated near the solution, adding only the most violated constraints appears to
be a useful way to proceed. Indeed, Figure 1 shows that ALPgen produces the same
approximate solutions as ALP in a tiny fraction of the time. In the most extreme
case ALPgen produces an approximate solution in 7 seconds while other methods
take several hours on the same problem. The reason for this speedup is explained
by the results which show the numbers of constraints generated by each method.
Further investigation is also required to fully outline the robustness of the constraint generation method. In fact, one cannot guarantee that a greedy constraint
generation scheme like the one proposed here will always produce a feasible number
of constraints [9]. Nevertheless, the potential benefits of conservatively generating
constraints as needed seem to be clear. Of course, the main drawback of the direct
linear programming approach over approximate policy iteration is that ALP incurs
larger approximation errors than API.
5
Bounding approximation error
It turns out that neither API nor ALP are guaranteed to return the best linear approximation to the true value function. Nevertheless, it is possible to efficiently calculate bounds on the approximation errors of these methods, again
by exploiting the structure of the problem: A well known result [14] asserts that
max_x V*(x) − V^{π_gre(f)}(x) ≤ (2γ/(1−γ)) max_x f(x) − (B* f)(x) (where in our case f ≥ V*).
This upper bound can in turn be bounded by a quantity that is feasible to calculate:
max_x f(x) − (B* f)(x) = max_x min_a [f(x) − (B^a f)(x)] ≤ min_a max_x [f(x) − (B^a f)(x)].
Thus an upper bound on the error from the optimal value function can be calculated
by performing an efficient search for maxx f(x) - (Baf) (x) for each a.
Figure 1 shows that the measurable error quantity, max_x f(x) − (B^a f)(x) (reported
as UB Bellman) is about a factor of two larger for the linear programming approach
than for approximate policy iteration on the same basis. In this respect, API appears to have an inherent advantage (although in the limit of an exhaustive basis
both approaches converge to the same optimal value). To get an indication of the
computational cost required for ALPgen to achieve a similar bound on approximation error, we repeated the same experiments with a larger basis set that included all
four indicators between pairs of connected variables. The results for this model are
reported as ALPgen2, and Figure 1 shows that, indeed, the bound on approximation error is reduced substantially-but at the predictable cost of a sizable increase
in computation time. However, the run times are still appreciably smaller than the
policy iteration methods.
Paradoxically, linear programming seems to offer computational advantages over
policy and value iteration in the context of approximation, even though it is widely
held to be an inferior solution strategy for explicitly represented MDPs.
References
[1] D. Bertsekas. Dynamic Programming and Optimal Control, volume 2. Athena
Scientific, 1995.
[2] D. Bertsekas and J. Tsitsiklis. Neuro-Dynamic Programming. Athena Scientific,
1996.
[3] C. Boutilier, R. Dearden, and M. Goldszmidt. Stochastic dynamic programming with factored representations. Artificial Intelligence, 2000.
[4] J. Boyan. Least-squares temporal difference learning. In Proceedings ICML,
1999.
[5] T. Dietterich. Hierarchical reinforcement learning with the MAXQ value function decomposition. JAIR, 13:227–303, 2000.
[6] C. Guestrin, D. Koller, and R. Parr. Max-norm projection for factored MDPs.
In Proceedings IJCAI, 2001.
[7] D. Koller and R. Parr. Computing factored value functions for policies in
structured MDPs. In Proceedings IJCAI, 1999.
[8] D. Koller and R. Parr. Policy iteration for factored MDPs. In Proceedings
UAI, 2000.
[9] R. Martin. Large Scale Linear and Integer Optimization. Kluwer, 1999.
[10] M. Puterman. Markov Decision Processes: Discrete Dynamic Programming.
Wiley, 1994.
[11] B. Sallans and G. Hinton. Using free energies to represent Q-values in a multiagent reinforcement learning task. In Proceedings NIPS, 2000.
[12] R. St-Aubin, J. Hoey, and C. Boutilier. APRICODD: Approximating policy
construction using decision diagrams. In Proceedings NIPS, 2000.
[13] B. Van Roy. Learning and value function approximation in complex decision
processes. PhD thesis, MIT, EECS, 1998.
[14] R. Williams and L. Baird. Tight performance bounds on greedy policies based
on imperfect value functions. Technical report, Northeastern University, 1993.
1,075 | 1,982 | A Maximum-Likelihood Approach to
Modeling Multisensory Enhancement
Hans Colonius*
Institut fUr Kognitionsforschung
Carl von Ossietzky Universitat
Oldenburg, D-26111
hans.colonius@uni-oldenburg.de
Adele Diederich
School of Social Sciences
International University Bremen
Bremen, D-28725
a.diederich@iu-bremen.de
Abstract
Multisensory response enhancement (MRE) is the augmentation of
the response of a neuron to sensory input of one modality by simultaneous input from another modality. The maximum likelihood
(ML) model presented here modifies the Bayesian model for MRE
(Anastasio et al.) by incorporating a decision strategy to maximize
the number of correct decisions. Thus the ML model can also deal
with the important tasks of stimulus discrimination and identification in the presence of incongruent visual and auditory cues. It
accounts for the inverse effectiveness observed in neurophysiological recording data, and it predicts a functional relation between
uni- and bimodal levels of discriminability that is testable both in
neurophysiological and behavioral experiments.
1
Introduction
In a typical environment stimuli occur at various positions in space and time. In
order to produce a coherent assessment of the external world an individual must
constantly discriminate between signals relevant for action planning (targets) and
signals that need no immediate response (distractors). Separate sensory channels
process stimuli by modality, but an individual must determine which stimuli are
related to one another, i.e., it must construct a perceptual event by integrating
information from several modalities. For example, stimuli that occur at the same
time and space are likely to be interrelated by a common cause. However, if the
visual and auditory cues are incongruent, e.g., when dubbing one syllable onto
a movie showing a person mouthing a different syllable, listeners typically report
hearing a third syllable that represents a combination of what was seen and heard
(McGurk effect, cf. [1]). This indicates that cross-modal synthesis is particularly
important for stimulus identification and discrimination, not only for detection.
Evidence for multisensory integration at the neural level has been well documented
in a series of studies in the mammalian midbrain by Stein, Meredith and Wallace
(e.g., [2]; for a review, see [3]). The deep layers of the superior colliculus (DSC)
*www.uni-oldenburg.de/psychologie/hans.colonius/index.html
integrate multisensory input and trigger orienting responses toward salient targets.
Individual DSC neurons can receive inputs from multiple sensory modalities (visual,
auditory, and somatosensory), there is considerable overlap between the receptive
fields of these individual multisensory neurons, and the number of neural impulses
evoked depends on the spatial and temporal relationships of the multisensory stimuli.
Multisensory response enhancement refers to the augmentation of the response of
a DSC neuron to a multisensory stimulus compared to the response elicited by
the most effective single modality stimulus. A quantitative measure of the percent
enhancement is
$$\mathrm{MRE} = \frac{CM - SM_{\max}}{SM_{\max}} \times 100, \qquad (1)$$
where CM is the mean number of impulses evoked by the combined-modality stimulus in a given time interval, and SM_max refers to the response of the most effective
single-modality stimulus (cf. [4]). Response enhancement in the DSC neurons can be quite impressive, with MRE sometimes reaching values above 1000. Typically, this enhancement is most dramatic when the unimodal stimuli are weak and/or ambiguous, a principle referred to in [4] as "inverse effectiveness".
Since DSC neurons play an important role in orienting responses (like eye and
head movements) to exogenous target stimuli, it is not surprising that multisensory
enhancement is also observed at the behavioral level in terms of, for example, a
lowering of detection thresholds or a speed-up of (saccadic) reaction time (e.g.,
[5], [6], [7]; see [8] for a review). Inverse effectiveness makes intuitive sense in the behavioral situation: the detection probability for a weak or ambiguous stimulus gains more from response enhancement by multisensory integration than a high-intensity stimulus that is easily detected by a single modality alone.
A model of the functional significance of multisensory enhancement has recently
been proposed by Anastasio, Patton, and Belkacem-Boussaid [9]. They suggested
that the responses of individual DSC neurons are proportional to the Bayesian
probability that a target is present given their sensory inputs. Here, this Bayesian
model is extended to yield a more complete account of the decision situation an
organism is faced with. As noted above, in a natural environment an individual is
confronted with the task of discriminating between stimuli important for survival
(" targets") and stimuli that are irrelevant (" distractors") . Thus, an organism must
not only keep up a high rate of detecting targets but, at the same time, must strive
to minimize " false alarms" to irrelevant stimuli. An optimally adapted system will
be one that maximizes the number of correct decisions. It will be shown here that
this can be achieved already at the level of individual DSC neurons by appealing
to a maximum-likelihood principle, without requiring any more information than is
assumed in the Bayesian model.
The next section sketches the Bayesian model by Anastasio, Patton, and Belkacem-Boussaid (Bayesian model, for short), after which a maximum-likelihood model of
multisensory response enhancement will be introduced.
2
The Bayesian Model of Multisensory Enhancement
DSC neurons receive input from the visual and auditory systems elicited by stimuli
occurring within their receptive fields.¹ According to the Bayesian model, these visual and auditory inputs are represented by random variables V and A, respectively.
¹An extension to the trimodal situation, including somatosensory input, could be easily attained in the models discussed here.
A binary random variable T indicates whether a signal is present (T = 1) or not
(T = 0). The central assumption of the model is that a DSC neuron computes the
Bayesian (posterior) probability that a target is present in its receptive field given
its sensory input:
$$P(T = 1 \mid V = v, A = a) = \frac{P(V = v, A = a \mid T = 1)\,P(T = 1)}{P(V = v, A = a)}, \qquad (2)$$
where v and a denote specific values of the sensory input variables. Analogous
expressions hold for the two unimodal situations. The response of the DSC neuron
(number of spikes in a unit time interval) is postulated to be proportional to these
probabilities.
In order to arrive at quantitative predictions two more specific assumptions are
made:
(1) the distributions of V and A, given T = 1 or T = 0, are conditionally independent, i.e.,
P(V = v, A = a | T) = P(V = v | T) P(A = a | T)
for any v, a;
(2) the distribution of V, given T = 1 or T = 0, is Poisson with λ1 or λ0, resp., and the distribution of A, given T = 1 or T = 0, is Poisson with μ1 or μ0, resp.
The conditional independence assumption means that the visibility of a target indicates nothing about its audibility, and vice-versa. The choice of the Poisson
distribution is seen as a reasonable first approximation that requires only one single
parameter per distribution. Finally, the computation of the posterior probability
that a target is present requires specification of the a-priori probability of a target,
P(T = 1).
The parameters λ0 and μ0 denote the mean intensity of the visual and auditory input, resp., when no target is present (spontaneous input), while λ1 and μ1 are the corresponding mean intensities when a target is present (driven input). By an
appropriate choice of parameter values, Anastasio et al. [9] show that the Bayesian
model reproduces values of multisensory response enhancement in the order of magnitude observed in neurophysiological experiments [10]. In particular, the property
of inverse effectiveness, by which the enhancement is largest for combined stimuli
that evoke only small unimodal responses, is reflected by the model.
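As a concreteness check, the posterior of Eq. (2) under the conditional-independence and Poisson assumptions can be evaluated in a few lines of Python. This is a sketch, not code from the original study; the parameter values (spontaneous means of 5, driven means of 8 and 9, prior 0.1) are illustrative choices echoing the tables below.

```python
from scipy.stats import poisson

def bayes_posterior(v, a, lam0=5.0, lam1=8.0, mu0=5.0, mu1=9.0, p=0.1):
    """P(T = 1 | V = v, A = a) for conditionally independent Poisson inputs (Eq. 2)."""
    driven = poisson.pmf(v, lam1) * poisson.pmf(a, mu1) * p
    spontaneous = poisson.pmf(v, lam0) * poisson.pmf(a, mu0) * (1 - p)
    return driven / (driven + spontaneous)

# With weak inputs, the bimodal posterior exceeds both unimodal posteriors,
# which is the Bayesian account of multisensory enhancement.
print(bayes_posterior(7, 8))
```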
3
3.1
The Maximum Likelihood Model of Multisensory
Enhancement
The decision rule
The maximum likelihood model (ML model, for short) incorporates the basic decision problem an organism is faced with in a typical environment: to discriminate
between relevant stimuli (targets), i.e., signals that require immediate reaction, and irrelevant stimuli (distractors), i.e., signals that can be ignored in a given situation. In the signal-detection theory framework (cf. [11]), P(Yes | T = 1) denotes the probability that the organism (correctly) decides that a target is present (hit), while P(Yes | T = 0) denotes the probability of deciding that a target is present when in
fact only a distractor is present (false alarm). In order to maximize the probability
of a correct response,
$$P(C) = P(\text{Yes} \mid T = 1)\,P(T = 1) + \left[1 - P(\text{Yes} \mid T = 0)\right]P(T = 0), \qquad (3)$$
the following maximum likelihood decision rule must be adopted (cf. [12]) for, e.g., the unimodal visual case:
If P(T = 1 | V = v) > P(T = 0 | V = v), then decide "Yes", otherwise decide "No".
The above inequality is equivalent to
$$\frac{P(T = 1 \mid V = v)}{P(T = 0 \mid V = v)} = \frac{P(T = 1)}{P(T = 0)} \cdot \frac{P(V = v \mid T = 1)}{P(V = v \mid T = 0)} > 1,$$
where the right-most ratio is a function of V , L(V), the likelihood ratio. Thus, the
above rule is equivalent to:
If L(v) > (1 − p)/p, then decide "Yes", otherwise decide "No",
with p = P(T = 1).
Since L(V) is a random variable, the probability to decide "Yes", given a target is present, is
$$P(\text{Yes} \mid T = 1) = P\!\left(L(V) > \frac{1 - p}{p} \,\middle|\, T = 1\right).$$
Assuming Poisson distributions, this equals
$$P\!\left(\exp(\lambda_0 - \lambda_1)\left(\frac{\lambda_1}{\lambda_0}\right)^{V} > \frac{1 - p}{p} \,\middle|\, T = 1\right) = P(V > c \mid T = 1),$$
with
$$c = \frac{\ln\!\left(\frac{1 - p}{p}\right) + \lambda_1 - \lambda_0}{\ln\!\left(\frac{\lambda_1}{\lambda_0}\right)}.$$
In analogy to the Bayesian model, the ML model postulates that the response
of a DSC neuron (number of spikes in a unit time interval) to a given target is
proportional to the probability to decide that a target is present computed under
the optimal (maximum likelihood) strategy defined above.
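A minimal sketch of the unimodal rule above, assuming the Poisson likelihoods of Section 2; the threshold c and the hit probability P(V > c | T = 1) are computed exactly as in the displayed formulas (scipy's survival function sf(k) gives P(V > k)):

```python
import math
from scipy.stats import poisson

def unimodal_hit_prob(lam0, lam1, p):
    """P(Yes | T = 1) under the ML rule: decide 'Yes' iff L(v) > (1 - p)/p."""
    c = (math.log((1 - p) / p) + lam1 - lam0) / math.log(lam1 / lam0)
    return poisson.sf(math.floor(c), lam1)   # P(V > c | T = 1)

# lam0 = 5 (spontaneous), lam1 = 8 (driven), p = 0.1 gives ~0.112,
# matching the V Driven column of Table 1 below.
print(unimodal_hit_prob(5.0, 8.0, 0.1))
```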
3.2
Predictions for Hit Probabilities
In order to compare the predictions of the ML model for unimodal vs. bimodal
inputs, consider the likelihood ratio for bimodal Poisson input under conditional
independence:
$$L(V, A) = \frac{P(V = v, A = a \mid T = 1)}{P(V = v, A = a \mid T = 0)} = \exp(\lambda_0 - \lambda_1)\left(\frac{\lambda_1}{\lambda_0}\right)^{V} \exp(\mu_0 - \mu_1)\left(\frac{\mu_1}{\mu_0}\right)^{A}.$$
The probability to decide "Yes" given bimodal input amounts to, after taking logarithms,
$$P\!\left(\ln\!\left(\frac{\lambda_1}{\lambda_0}\right) V + \ln\!\left(\frac{\mu_1}{\mu_0}\right) A > \ln\!\left(\frac{1 - p}{p}\right) + \lambda_1 - \lambda_0 + \mu_1 - \mu_0 \,\middle|\, T = 1\right).$$
Table 1: Hit probabilities and MRE for different bimodal inputs

          Mean Driven Input            Prob (Hit)
           λ1      μ1      V Driven   A Driven   VA Driven    MRE
Low         6       7        .000       .027        .046      704
            7       7        .027       .027        .117      335
            8       8        .112       .112        .341      204
            8       9        .112       .294        .528       79
            8      10        .112       .430        .562       31
Medium     12      12        .652       .652        .872       33
           12      13        .652       .748        .895       20
High       16      16        .873       .873        .984       13
           16      20        .873       .961        .990        3

Note: A-priori target probability is set at p = 0.1. Visual and auditory inputs have spontaneous means of 5 impulses per unit time. V Driven (A Driven, VA Driven) columns refer to the hit probabilities given a unimodal visual (resp. auditory, bimodal) target. Multisensory response enhancement (last column) is computed using Eq. (1).
For λ1/λ0 = μ1/μ0 this probability is computed directly from the Poisson distribution with mean (λ1 + μ1). Otherwise, hit probabilities follow the distribution of a linear combination of two Poisson distributed variables. Table 1 presents² hit probabilities and multisensory response enhancement values for different levels of mean driven input. Obviously, the ML model imitates the inverse effectiveness relation: combining weak intensity unimodal stimuli leads to a much larger response enhancement than medium or high intensity stimuli.
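For λ1 ≠ μ1 the bimodal decision variable is a weighted sum of two Poisson counts with no simple closed form, which is why the table entries flagged by footnote 2 are sampled. A rough re-implementation of that Monte Carlo estimate (the sample size and seed are arbitrary choices):

```python
import math
import numpy as np

def bimodal_hit_prob(lam0, lam1, mu0, mu1, p, n=100_000, seed=0):
    """Monte Carlo estimate of P(Yes | T = 1) for the bimodal ML rule."""
    rng = np.random.default_rng(seed)
    v = rng.poisson(lam1, n)                      # driven visual counts
    a = rng.poisson(mu1, n)                       # driven auditory counts
    lhs = np.log(lam1 / lam0) * v + np.log(mu1 / mu0) * a
    rhs = math.log((1 - p) / p) + (lam1 - lam0) + (mu1 - mu0)
    return float(np.mean(lhs > rhs))

print(bimodal_hit_prob(5, 8, 5, 9, 0.1))          # ~0.53, cf. Table 1
```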
3.3
Predictions for discriminability measures
The ML model allows one to assess the sensitivity of an individual DSC neuron to discriminate between target and distractor signals. Intuitively, this sensitivity should be a (decreasing) function of the amount of overlap between the driven and the spontaneous likelihood (e.g., P(V = v | T = 1) and P(V = v | T = 0)). One possible
appropriate measure of sensitivity for the Poisson observer is (cf. [12])
$$D_V = \frac{\lambda_1 - \lambda_0}{(\lambda_1 \lambda_0)^{1/4}} \quad \text{and} \quad D_A = \frac{\mu_1 - \mu_0}{(\mu_1 \mu_0)^{1/4}} \qquad (4)$$
for the visual and auditory unimodal inputs, resp. A natural choice for the bimodal
measure of sensitivity then is
$$D_{VA} = \frac{(\lambda_1 + \mu_1) - (\lambda_0 + \mu_0)}{\left[(\lambda_1 + \mu_1)(\lambda_0 + \mu_0)\right]^{1/4}}. \qquad (5)$$
Note that, unlike the hit probabilities, the relative increase in discriminability by combining two unimodal inputs does not decrease with the intensity of the driven input (see Table 2). Rather, the relation between bimodal and unimodal discriminability measures for the input values in Table 2 is approximately of Euclidean distance form (Eq. 6 below); a numerical check is sketched after the table.
²For input combinations with λ1 ≠ μ1, hit probabilities are estimated from samples of 1,000 pseudo-random numbers.
Table 2: Discriminability measure values and % increase for different bimodal inputs

   Mean Driven Input        Discriminability Value
    λ1      μ1        D_V      D_A      D_VA     % Increase
     7       7        .82      .82      1.16         41
     8       8       1.19     1.19      1.69         41
     8      10       1.19     1.88      2.18         16
    12      12       2.52     2.52      3.57         41
    16      16       3.68     3.68      5.20         41
    16      20       3.68     4.74      5.97         26

Note: Visual and auditory inputs have spontaneous means of 5 impulses per unit time. % Increase of D_VA over D_V and D_A (last column) is computed in analogy to Eq. (1).
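The entries of Table 2 follow directly from Eqs. (4) and (5); a small sketch that reproduces them (rounding aside):

```python
def d_unimodal(m1, m0):
    """Poisson discriminability of Eq. (4)."""
    return (m1 - m0) / (m1 * m0) ** 0.25

def d_bimodal(lam0, lam1, mu0, mu1):
    """Bimodal discriminability of Eq. (5)."""
    return ((lam1 + mu1) - (lam0 + mu0)) / ((lam1 + mu1) * (lam0 + mu0)) ** 0.25

lam0 = mu0 = 5.0
for lam1, mu1 in [(7, 7), (8, 8), (8, 10), (12, 12), (16, 16), (16, 20)]:
    dv, da = d_unimodal(lam1, lam0), d_unimodal(mu1, mu0)
    dva = d_bimodal(lam0, lam1, mu0, mu1)
    pct = 100 * (dva / max(dv, da) - 1)   # % increase over best unimodal measure
    print(lam1, mu1, round(dv, 2), round(da, 2), round(dva, 2), round(pct))
```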
$$D_{VA} \approx \sqrt{D_V^2 + D_A^2}. \qquad (6)$$
For λ1 = μ1 this amounts to D_VA = √2 · D_V, yielding the 41% increase in discriminability. The fact that the discriminability measures do not follow the inverse effectiveness rule should not be surprising: whether two stimuli are easy or hard to discriminate depends on their signal-to-noise ratio, but not on the level of intensity.
4
Discussion and Conclusion
The maximum likelihood model of multisensory enhancement developed here assumes that the response of a DSC neuron to a target stimulus is proportional to
the hit probability under a maximum likelihood decision strategy. Obviously, no
claim is made here that the neuron actually performs these computations, only that
its behavior can be described approximately in this way. Similar to the Bayesian
model suggested by Anastasio et al. [9], the neuron's behavior is solely based on
the a-priori probability of a target and the likelihood function for the different
sensory inputs. The ML model predicts the inverse effectiveness observed in neurophysiological experiments. Moreover, the model allows one to derive a measure of the neuron's ability to discriminate between targets and non-targets. It makes specific predictions about how uni- and bimodal discriminability measures are related and, thereby, opens up further avenues for testing the model assumptions.
The ML model, like the Bayesian model, operates at the level of a single DSC
neuron. However, an extension of the model to describe multisensory population
responses is desirable: First, this would allow to relate the model predictions to
numerous behavioral studies about multisensory effects (e.g., [13], [14]), and, second,
as a recent study by Kadunce et al. [15] suggests, the effects of multisensory
spatial coincidence observed in behavioral experiments may only be reconcilable
with the degree of spatial resolution achievable by a population of DSC neurons
with overlapping receptive fields. Moreover, this extension might also be useful to
relate behavioral and single-unit recording results to recent findings on multisensory
brain areas using functional imaging techniques (e.g., King and Calvert [16]).
Acknowledgments
This research was partially supported by a grant from Deutsche Forschungsgemeinschaft-SFB 517 Neurokognition to the first author.
References
[1] McGurk, H. & MacDonald, J. (1976). Hearing lips and seeing voices. Nature, 264,
746-748.
[2] Wallace, M. T., Meredith, M. A., & Stein, B. E. (1993). Converging influences from
visual, auditory, and somatosensory cortices onto output neurons of the superior colliculus.
Journal of Neurophysiology, 69, 1797-1809.
[3] Stein, B. E., & Meredith, M. A. (1996). The merging of the senses. Cambridge, MA:
MIT Press.
[4] Meredith, M. A. & Stein, B. E. (1986a). Spatial factors determine the activity of
multisensory neurons in cat superior colliculus. Brain Research, 365(2), 350-354.
[5] Frens, van Opstal, & van der Willigen (1995). Spatial and temporal factors determine auditory-visual interactions in human saccadic eye movements. Perception & Psychophysics, 57, 802-816.
[6] Colonius, H. & Arndt, P. A. (2001). A two stage-model for visual-auditory interaction in saccadic latencies. Perception & Psychophysics, 63, 126-147.
[7] Stein, B. E., Meredith, M. A., Huneycutt, W. S., & McDade, L. (1989). Behavioral
indices of multisensory integration: Orientation to visual cues is affected by auditory
stimuli. Journal of Cognitive Neuroscience, 1, 12-24.
[8] Welch, R. B., & Warren, D. H. (1986). Intersensory interactions. In K. R. Boff, L. Kaufman, & J. P. Thomas (eds.), Handbook of perception and human performance, Volume I: Sensory process and perception (pp. 25-1-25-36). New York: Wiley.
[9] Anastasio" T. J., Patton, P. E. , & Belkacem-Boussaid, K (2000). Using Bayes' rule
to model multisensory enhancement in the superior colliculus. Neural Computation, 12 ,
1165-1187.
[10] Meredith, M. A. & Stein, B. E. (1986b). Visual, auditory, and somatosensory convergence on cells in superior colliculus results in multisensory integration. Journal of
Neurophysiology, 56(3), 640-662.
[11] Green, D. M., & Swets, J. A. (1974). Signal detection theory and psychophysics. New York: Krieger Publ. Co.
[12] Egan, J. P. (1975). Signal detection theory and ROC analysis. New York: Academic
Press.
[13] Craig, A., & Colquhoun, W. P. (1976). Combining evidence presented simultaneously to the eye and the ear: A comparison of some predictive models. Perception & Psychophysics, 19, 473-484.
[14] Stein, B. E., London, N., Wilkinson, L. K., & Price, D. D. (1996). Enhancement
of perceived visual intensity by auditory stimuli: A psychophysical analysis. Journal of Cognitive Neuroscience, 8, 497-506.
[15] Kadunce, D. C., Vaughan, J. W., Wallace, M. T., & Stein, B. E. (2001). The influence of visual and auditory receptive field organization on multisensory integration in the
superior colliculus. Experimental Brain Research, 139, 303-310.
[16] King, A. J., & Calvert, G. A. (2001). Multisensory integration: Perceptual grouping
by eye and ear. Current Biology, 11, 322-325.
1,076 | 1,983 | Predictive Representations of State
Michael L. Littman
Richard S. Sutton
AT&T Labs-Research, Florham Park, New Jersey
{mlittman,sutton}@research.att.com
Satinder Singh
Syntek Capital, New York, New York
baveja@cs.colorado.edu
Abstract
We show that states of a dynamical system can be usefully represented by multi-step, action-conditional predictions of future observations. State representations that are grounded in data in this
way may be easier to learn, generalize better, and be less dependent on accurate prior models than, for example, POMDP state
representations. Building on prior work by Jaeger and by Rivest
and Schapire, in this paper we compare and contrast a linear specialization of the predictive approach with the state representations used in POMDPs and in k-order Markov models. Ours is the
first specific formulation of the predictive idea that includes both
stochasticity and actions (controls). We show that any system has
a linear predictive state representation with number of predictions
no greater than the number of states in its minimal POMDP model.
In predicting or controlling a sequence of observations, the concepts of state and
state estimation inevitably arise. There have been two dominant approaches. The
generative-model approach, typified by research on partially observable Markov decision processes (POMDPs), hypothesizes a structure for generating observations
and estimates its state and state dynamics. The history-based approach, typified by
k-order Markov methods, uses simple functions of past observations as state, that is,
as the immediate basis for prediction and control. (The data flow in these two approaches are diagrammed in Figure 1.) Of the two, the generative-model approach
is more general. The model's internal state gives it temporally unlimited memory: the ability to remember an event that happened arbitrarily long ago, whereas a
history-based approach can only remember as far back as its history extends. The
bane of generative-model approaches is that they are often strongly dependent on a
good model of the system's dynamics. Most uses of POMDPs, for example, assume
a perfect dynamics model and attempt only to estimate state. There are algorithms
for simultaneously estimating state and dynamics (e.g., Chrisman, 1992), analogous
to the Baum-Welch algorithm for the uncontrolled case (Baum et al., 1970), but
these are only effective at tuning parameters that are already approximately correct (e.g., Shatkay & Kaelbling, 1997).
Figure 1: Data flow in a) POMDP and other recursive updating of state representation, and b) history-based state representation.
In practice, history-based approaches are often much more effective. Here, the state
representation is a relatively simple record of the stream of past actions and observations. It might record the occurrence of a specific subsequence or that one
event has occurred more recently than another. Such representations are far more
closely linked to the data than are POMDP representations. One way of saying
this is that POMDP learning algorithms encounter many local minima and saddle
points because all their states are equipotential. History-based systems immediately break symmetry, and their direct learning procedure makes them comparably
simple. McCallum (1995) has shown in a number of examples that sophisticated
history-based methods can be effective in large problems, and are often more practical than POMDP methods even in small ones.
The predictive state representation (PSR) approach, which we develop in this paper,
is like the generative-model approach in that it updates the state representation
recursively, as in Figure 1(a), rather than directly computing it from data. We
show that this enables it to attain generality and compactness at least equal to
that of the generative-model approach. However, the PSR approach is also like the
history-based approach in that its representations are grounded in data. Whereas a
history-based representation looks to the past and records what did happen, a PSR
looks to the future and represents what will happen. In particular, a PSR is a vector
of predictions for a specially selected set of action-observation sequences, called tests
(after Rivest & Schapire, 1994). For example, consider the test U101U202, where U1
and U2 are specific actions and 01 and 02 are specific observations. The correct
prediction for this test given the data stream up to time k is the probability of its
observations occurring (in order) given that its actions are taken (in order) (i.e.,
Pr {Ok = 01, Ok+1 = 02 I A k = u1,A k + 1 = U2}). Each test is a kind of experiment
that could be performed to tell us something about the system. If we knew the
outcome of all possible tests, then we would know everything there is to know
about the system. A PSR is a set of tests that is sufficient information to determine
the prediction for all possible tests (a sufficient statistic).
As an example of these points, consider the float/reset problem (Figure 2) consisting
of a linear string of 5 states with a distinguished reset state on the far right. One
action, f (float), causes the system to move uniformly at random to the right or
left by one state, bounded at the two ends. The other action, r (reset), causes a
jump to the reset state irrespective of the current state. The observation is always 0 unless the r action is taken when the system is already in the reset state, in which case the observation is 1. Thus, on an f action, the correct prediction is always 0, whereas on an r action, the correct prediction depends on how many f's there have been since the last r: for zero f's, it is 1; for one or two f's, it is 0.5; for three or four f's, it is 0.375; for five or six f's, it is 0.3125, and so on, decreasing after every second f and asymptotically bottoming out at 0.2. A simulation sketch of this sequence follows.
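A minimal sketch verifying this sequence: propagate the state distribution of Figure 2 under repeated float actions, starting in the reset state. The transition matrix below is our reading of "uniformly at random to the right or left, bounded at the two ends" (stay in place with probability 0.5 at either end); states are indexed 0 (leftmost) to 4 (reset).

```python
import numpy as np

# Float-action transition matrix; state 4 is the distinguished reset state.
T_float = np.array([[0.5, 0.5, 0.0, 0.0, 0.0],
                    [0.5, 0.0, 0.5, 0.0, 0.0],
                    [0.0, 0.5, 0.0, 0.5, 0.0],
                    [0.0, 0.0, 0.5, 0.0, 0.5],
                    [0.0, 0.0, 0.0, 0.5, 0.5]])

b = np.zeros(5)
b[4] = 1.0                     # just reset: system is in state 4
for k in range(8):
    # Probability of observing 1 if r is taken now, after k floats.
    print(k, "floats:", b[4])  # 1, 0.5, 0.5, 0.375, 0.375, 0.3125, ...
    b = b @ T_float
```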
Figure 2: Underlying dynamics of the float/reset problem for a) the float action and b) the reset action. The numbers on the arcs indicate transition probabilities. The observation is always 0 except on the reset action from the rightmost state, which produces an observation of 1.

No k-order Markov method can model this system exactly, because no limited-
length history is a sufficient statistic. A POMDP approach can model it exactly by
maintaining a belief-state representation over five or so states. A PSR, on the other
hand, can exactly model the float/reset system using just two tests: r1 and f0r1.
Starting from the rightmost state, the correct predictions for these two tests are always two successive probabilities in the sequence given above (1, 0.5, 0.5, 0.375,...),
which is always a sufficient statistic to predict the next pair in the sequence. Although this informational analysis indicates a solution is possible in principle, it
would require a nonlinear updating process for the PSR.
In this paper we restrict consideration to a linear special case of PSRs, for which
we can guarantee that the number of tests needed does not exceed the number
of states in the minimal POMDP representation (although we have not ruled out
the possibility it can be considerably smaller). Of greater ultimate interest are the
prospects for learning PSRs and their update functions, about which we can only
speculate at this time. The difficulty of learning POMDP structures without good
prior models is well known. To the extent that this difficulty is due to the indirect
link between the POMDP states and the data, predictive representations may be
able to do better.
Jaeger (2000) introduced the idea of predictive representations as an alternative
to belief states in hidden Markov models and provided a learning procedure for
these models. We build on his work by treating the control case (with actions),
which he did not significantly analyze. We have also been strongly influenced by
the work of Rivest and Schapire (1994), who did consider tests including actions,
but treated only the deterministic case, which is significantly different. They also
explored construction and learning algorithms for discovering system structure.
1
Predictive State Representations
We consider dynamical systems that accept actions from a discrete set A and generate observations from a discrete set O. We consider only predicting the system, not
controlling it, so we do not designate an explicit reward observation. We refer to
such a system as an environment. We use the term history to denote a test forming
an initial stream of experience and characterize an environment by a probability distribution over all possible histories, P : {O|A}* → [0, 1], where P(o_1 ··· o_t | a_1 ··· a_t) is the probability of observations o_1, ..., o_t being generated, in that order, given that actions a_1, ..., a_t are taken, in that order. The probability of a test t conditional on a history h is defined as P(t|h) = P(ht)/P(h). Given a set of q tests Q = {t_i}, we define their (1 × q) prediction vector, p(h) = [P(t_1|h), P(t_2|h), ..., P(t_q|h)], as a
predictive state representation (PSR) if and only if it forms a sufficient statistic for
the environment, i.e., if and only if
$$P(t|h) = f_t(p(h)), \qquad (1)$$
for any test t and history h, and for some projection function f_t : [0, 1]^q → [0, 1]. In this paper we focus on linear PSRs, for which the projection functions are linear, that is, for which there exists a (1 × q) projection vector m_t, for every test t, such that
$$P(t|h) = f_t(p(h)) = p(h)\,m_t^T \qquad (2)$$
for all histories h.
Let p_i(h) denote the ith component of the prediction vector for some PSR. This can be updated recursively, given a new action-observation pair a,o, by
$$p_i(hao) = P(t_i \mid hao) = \frac{P(ot_i \mid ha)}{P(o \mid ha)} = \frac{f_{aot_i}(p(h))}{f_{ao}(p(h))} = \frac{p(h)\,m_{aot_i}^T}{p(h)\,m_{ao}^T}, \qquad (3)$$
where the last step is specific to linear PSRs. We can now state our main result:
Theorem 1 For any environment that can be represented by a finite POMDP
model, there exists a linear PSR with number of tests no larger than the number of
states in the minimal POMDP model.
2
Proof of Theorem 1: Constructing a PSR from a POMDP
We prove Theorem 1 by showing that for any POMDP model of the environment,
we can construct in polynomial time a linear PSR for that POMDP of lesser or
equal complexity that produces the same probability distribution over histories as
the POMDP model.
We proceed in three steps. First, we review POMDP models and how they assign
probabilities to tests. Next, we define an algorithm that takes an n-state POMDP
model and produces a set of n or fewer tests, each of length less than or equal to
n. Finally, we show that the set of tests constitute a PSR for the POMDP, that is,
that there are projection vectors that, together with the tests' predictions, produce
the same probability distribution over histories as the POMDP.
A POMDP (Lovejoy, 1991; Kaelbling et al., 1998) is defined by a sextuple
(S, A, O, b_0, T, O). Here, S is a set of n underlying (hidden) states, A is a discrete set of actions, and O is a discrete set of observations. The (1 × n) vector b_0 is an initial state distribution. The set T consists of (n × n) transition matrices T^a, one for each action a, where T^a_{ij} is the probability of a transition from state i to j when action a is chosen. The set O consists of diagonal (n × n) observation matrices O^{a,o}, one for each pair of observation o and action a, where O^{a,o}_{ii} is the probability of observation o when action a is selected and state i is reached.¹
The state representation in a POMDP (Figure 1(a)) is the belief state: the (1 × n) vector of the state-occupation probabilities given the history h. It can be computed recursively given a new action a and observation o by
$$b(hao) = \frac{b(h)\,T^a O^{a,o}}{b(h)\,T^a O^{a,o} e_n^T},$$
where e_n is the (1 × n)-vector of all 1s.
Finally, a POMDP defines a probability distribution over tests (and thus histories)
by
$$P(o_1 \cdots o_l \mid h a_1 \cdots a_l) = b(h)\,T^{a_1} O^{a_1,o_1} \cdots T^{a_l} O^{a_l,o_l} e_n^T. \qquad (4)$$
¹There are many equivalent formulations and the conversion procedure described here can be easily modified to accommodate other POMDP definitions.
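For concreteness, Equation 4 translates into a short routine; this is a generic sketch, with T[a] and O[a][o] holding the transition and diagonal observation matrices in whatever container the caller prefers (dicts of numpy arrays are assumed here):

```python
import numpy as np

def test_prob(b, actions, observations, T, O):
    """P(o_1 ... o_l | h a_1 ... a_l), following Equation 4.

    b: (1 x n) belief over hidden states after history h.
    T[a]: (n x n) transition matrix; O[a][o]: (n x n) diagonal observation matrix.
    """
    x = np.asarray(b, dtype=float)
    for a, o in zip(actions, observations):
        x = x @ T[a] @ O[a][o]
    return float(x.sum())      # right-multiplication by e_n^T
```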
We now present our algorithm for constructing a PSR for a given POMDP. It
uses a function u mapping tests to (1 × n) vectors defined recursively by u(ε) = e_n and u(aot) = (T^a O^{a,o} u(t)^T)^T, where ε represents the null test. Conceptually,
the components of u(t) are the probabilities of the test t when applied from each
underlying state of the POMDP; we call u(t) the outcome vector for test t. We say
a test t is linearly independent of a set of tests S if its outcome vector is linearly
independent of the set of outcome vectors of the tests in S. Our algorithm search
is used and defined as
Q -<- search(c, {})
search(t, S):
for each a E A, 0 E 0
if aot is linearly independent of S
then S -<- search(aot, S U {aot})
return S
The algorithm maintains a set of tests and searches for new tests that are linearly
independent of those already found. It is a form of depth-first search. The algorithm
halts when it checks all the one-step extensions of its tests and finds none that are
linearly independent. Because the set of tests Q returned by search have linearly
independent outcome vectors, the cardinality of Q is bounded by n, ensuring that
the algorithm halts after a polynomial number of iterations. Because each test in
Q is formed by a one-step extension to some other test in Q, no test is longer than
n action-observation pairs.
The check for linear independence can be performed in many ways, including Gaussian elimination, implying that search terminates in polynomial time.
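A rough Python rendering of search, reusing the T and O containers of the previous sketch; linear independence is checked here by matrix rank, one of the several valid choices just mentioned, and u implements the outcome-vector definition:

```python
import numpy as np

def u(test, T, O, n):
    """Outcome vector: u(eps) = e_n, u(aot) = (T^a O^{a,o} u(t)^T)^T."""
    vec = np.ones(n)
    for a, o in reversed(test):        # test is a list of (action, obs) pairs
        vec = T[a] @ O[a][o] @ vec
    return vec

def search(actions, observations, T, O, n):
    """Depth-first search for a core set of linearly independent tests."""
    Q, outcomes = [], []

    def expand(test):
        for a in actions:
            for o in observations:
                t = test + [(a, o)]
                stacked = np.array(outcomes + [u(t, T, O, n)])
                if np.linalg.matrix_rank(stacked) == len(stacked):
                    Q.append(t)
                    outcomes.append(u(t, T, O, n))
                    expand(t)

    expand([])
    return Q
```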
By construction, all one-step extensions to the set of tests Q returned by search
are linearly dependent on those in Q. We now show that this is true for any test.
Lemma 1 The outcome vectors of the tests in Q can be linearly combined to produce
the outcome vector for any test.
Proof: Let U be the (n x q) matrix formed by concatenating the outcome vectors
for all tests in Q. Since, for all combinations of a and o, the columns of T^a O^{a,o} U are linearly dependent on the columns of U, we can write T^a O^{a,o} U = UW^T for some q × q matrix of weights W.
If t is a test that is linearly dependent on Q, then any one-step extension of t, aot, is linearly dependent on Q. This is because we can write the outcome vector for t as u(t) = (Uw^T)^T for some (1 × q) weight vector w and the outcome vector for aot as u(aot) = (T^a O^{a,o} u(t)^T)^T = (T^a O^{a,o} U w^T)^T = (U W^T w^T)^T. Thus, aot is linearly
dependent on Q.
Now, note that all one-step tests are linearly dependent on Q by the structure of
the search algorithm. Using the previous paragraph as an inductive argument, this
implies that all tests are linearly dependent on Q.
□
Returning to the float/reset example POMDP, search begins by enumerating the 4 extensions to the null test (f0, f1, r0, and r1). Of these, only f0 and r0 are linearly independent. Of the extensions of these, f0r0 is the only one that is linearly independent of the other two. The remaining two tests added to Q by search are f0f0r0 and f0f0f0r0. No extensions of the 5 tests in Q are linearly independent of the 5 tests in Q, so the procedure halts.
We now show that the set of tests Q constitutes a PSR for the POMDP by constructing projection vectors that, together with the tests' predictions, produce the
same probability distribution over histories as the POMDP.
For each combination of a and o, define a q × q matrix M_ao = (U^+ T^a O^{a,o} U)^T and a 1 × q vector m_ao = (U^+ T^a O^{a,o} e_n^T)^T, where U is the matrix of outcome vectors defined in the previous section and U^+ is its pseudoinverse². The ith row of M_ao is m_{aot_i}. The probability distribution on histories implied by these projection vectors is
$$p(h)\,M_{a_1 o_1}^T \cdots M_{a_{l-1} o_{l-1}}^T\, m_{a_l o_l}^T = b(h)\,UU^+ T^{a_1} O^{a_1,o_1} U \cdots U^+ T^{a_{l-1}} O^{a_{l-1},o_{l-1}} U\, U^+ T^{a_l} O^{a_l,o_l} e_n^T = b(h)\,T^{a_1} O^{a_1,o_1} \cdots T^{a_{l-1}} O^{a_{l-1},o_{l-1}} T^{a_l} O^{a_l,o_l} e_n^T,$$
i.e., it is the same as that of the POMDP, as in Equation 4. Here, the last step uses the fact that UU^+ v^T = v^T for v^T linearly dependent on the columns of U. This
holds by construction of U in the previous section.
This completes the proof of Theorem 1.
Completing the float/reset example, consider the M_{f,0} matrix found by the process defined in this section. It derives predictions for each test in Q after taking action f. Most of these are quite simple because the tests are so similar: the new prediction for r0 is exactly the old prediction for f0r0, for example. The only nontrivial test is f0f0f0r0. Its outcome can be computed from 0.250 p(r0|h) − 0.0625 p(f0r0|h) + 0.750 p(f0f0r0|h). This example illustrates that the projection vectors need not contain only positive entries.
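In outline, the projection matrices and the recursive update of Equation 3 can be coded directly from the definitions above (reusing u, T, and O from the search sketch; U^+ is the Moore-Penrose pseudoinverse). The float/reset coefficients quoted in this paragraph come out of exactly this construction:

```python
import numpy as np

def build_projections(Q, T, O, n, actions, observations):
    """M_ao = (U^+ T^a O^{a,o} U)^T and m_ao = (U^+ T^a O^{a,o} e_n^T)^T."""
    U = np.column_stack([u(t, T, O, n) for t in Q])   # n x q outcome matrix
    U_plus = np.linalg.pinv(U)
    M, m = {}, {}
    for a in actions:
        for o in observations:
            M[(a, o)] = (U_plus @ T[a] @ O[a][o] @ U).T
            m[(a, o)] = U_plus @ T[a] @ O[a][o] @ np.ones(n)
    return M, m

def psr_update(p, a, o, M, m):
    """One step of Equation 3: p(hao) from p(h) after action a, observation o."""
    return (p @ M[(a, o)].T) / (p @ m[(a, o)])
```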
3
Conclusion
We have introduced a predictive state representation for dynamical systems that
is grounded in actions and observations and shown that, even in its linear form, it
is at least as general and compact as POMDPs. In essence, we have established
PSRs as a non-inferior alternative to POMDPs, and suggested that they might have
important advantages, while leaving demonstration of those advantages to future
work. We conclude by summarizing the potential advantages (to be explored in
future work):
Learnability. The k-order Markov model is similar to PSRs in that it is entirely
based on actions and observations. Such models can be learned trivially from data
by counting; it is an open question whether something similar can be done with a
PSR. Jaeger (2000) showed how to learn such a model in the uncontrolled setting,
but the situation is more complex in the multiple action case since outcomes are
conditioned on behavior, violating some required independence assumptions.
Compactness. We have shown that there exist linear PSRs no more complex than
the minimal POMDP for an environment, but in some cases the minimal linear PSR
seems to be much smaller. For example, a POMDP extension of factored MDPs explored by Singh and Cohn (1998) would be cross-products of separate POMDPs and
have linear PSRs that increase linearly with the number and size of the component
POMDPs, whereas their minimal POMDP representation would grow as the size
²If U = AΣB^T is the singular value decomposition of U, then BΣ^+ A^T is the pseudoinverse. The pseudoinverse of the diagonal matrix Σ replaces each non-zero element with its reciprocal.
of the state space, Le., exponential in the number of component POMDPs. This
(apparent) advantage stems from the PSR's combinatorial or factored structure.
As a vector of state variables, capable of taking on diverse values, a PSR may be
inherently more powerful than the distribution over discrete states (the belief state)
of a POMDP. We have already seen that general PSRs can be more compact than
POMDPs; they are also capable of efficiently capturing environments in the diversity representation used by Rivest and Schapire (1994), which is known to provide
an extremely compact representation for some environments.
Generalization. There are reasons to think that state variables that are themselves
predictions may be particularly useful in learning to make other predictions. With
so many things to predict, we have in effect a set or sequence of learning problems, all
due to the same environment. In many such cases the solutions to earlier problems
have been shown to provide features that generalize particularly well to subsequent
problems (e.g., Baxter, 2000; Thrun & Pratt, 1998).
Powerful, extensible representations. PSRs that predict tests could be generalized to predict the outcomes of multi-step options (e.g., Sutton et al., 1999). In
this case, particularly, they would constitute a powerful language for representing
the state of complex environments.
Acknowledgments: We thank Peter Dayan, Lawrence Saul, Fernando Pereira and
Rob Schapire for many helpful discussions of these and related ideas.
References
Baum, L. E., Petrie, T., Soules, G., & Weiss, N. (1970). A maximization technique
occurring in the statistical analysis of probabilistic functions of Markov chains. Annals
of Mathematical Statistics, 41, 164-171.
Baxter, J. (2000). A model of inductive bias learning. Journal of Artificial Intelligence
Research, 12, 149-198.
Chrisman, L. (1992). Reinforcement learning with perceptual aliasing: The perceptual
distinctions approach. Proceedings of the Tenth National Conference on Artificial Intelligence (pp. 183-188). San Jose, California: AAAI Press.
Jaeger, H. (2000). Observable operator models for discrete stochastic time series. Neural
Computation, 12, 1371-1398.
Kaelbling, L. P., Littman, M. L., & Cassandra, A. R. (1998). Planning and acting in '
partially observable stochastic domains. Artificial Intelligence, 101, 99-134.
Lovejoy, W. S. (1991). A survey of algorithmic methods for partially observable Markov
decision processes. Annals of Operations Research, 28, 47-65.
McCallum, A. K. (1995). Reinforcement learning with selective perception and hidden
state. Doctoral diss.ertation, Department of Computer Science, University of Rochester.
Rivest, R. L., & Schapire, R. E. (1994). Diversity-based inference of finite automata.
Journal of the ACM, 41, 555-589.
Shatkay, H., & Kaelbling, L. P. (1997). Learning topological maps with weak local odometric information~ Proceedings of Fifteenth International Joint Conference on Artificial
Intelligence (IJCAI-91) (pp. 920-929).
Singh, S., & Cohn, D. (1998). How to dynamically merge Markov decision processes.
Advances in Neural and Information Processing Systems 10 (pp. 1057-1063).
Sutton, R. S., Precup, D., & Singh, S. (1999). Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. Artificial Intelligence, 181-211.
Thrun, S., & Pratt, L. (Eds.). (1998). Learning to learn. Kluwer Academic Publishers.
1,077 | 1,984 | Playing is believing:
The role of beliefs in multi-agent learning
Yu-Han Chang
Artificial Intelligence Laboratory
Massachusetts Institute of Technology
Cambridge, Massachusetts 02139
ychang@ai.mit.edu
Leslie Pack Kaelbling
Artificial Intelligence Laboratory
Massachusetts Institute of Technology
Cambridge, Massachusetts 02139
lpk@ai.mit.edu
Abstract
We propose a new classification for multi-agent learning algorithms, with
each league of players characterized by both their possible strategies and
possible beliefs. Using this classification, we review the optimality of existing algorithms, including the case of interleague play. We propose an
incremental improvement to the existing algorithms that seems to achieve
average payoffs that are at least the Nash equilibrium payoffs in the long run against fair opponents.
1 Introduction
The topic of learning in multi-agent environments has received increasing attention over the
past several years. Game theorists have begun to examine learning models in their study of
repeated games, and reinforcement learning researchers have begun to extend their single-agent learning models to the multiple-agent case. As traditional models and methods from
these two fields are adapted to tackle the problem of multi-agent learning, the central issue
of optimality is worth revisiting. What do we expect a successful learner to do?
Matrix games and Nash equilibrium. From the game theory perspective, the repeated
game is a generalization of the traditional one-shot game, or matrix game. The matrix
game is defined as a reward matrix R_i for each player, R_i : A_1 × A_2 → R, where A_i is the set of actions available to player i. Purely competitive games are called zero-sum games and must satisfy R_1 = −R_2. Each player simultaneously chooses to play a particular action a_i ∈ A_i, or a mixed policy π_i ∈ PD(A_i), which is a probability distribution over
the possible actions, and receives reward based on the joint action taken. Some common
examples of single-shot matrix games are shown in Figure 1. The traditional assumption is
that each player has no prior knowledge about the other player. As is standard in the game
theory literature, it is thus reasonable to assume that the opponent is fully rational and
chooses actions that are in its best interest. In return, we must play a best response to the
opponent's choice of action. A best response function for player i, BR_i(π_{−i}), is defined to be the set of all optimal policies for player i, given that the other players are playing the joint policy π_{−i}: BR_i(π_{−i}) = {π*_i ∈ M_i | R_i(π*_i, π_{−i}) ≥ R_i(π_i, π_{−i}) ∀π_i ∈ M_i}, where M_i is the set of all possible policies for agent i.
If all players are playing best responses to the other players' strategies, π_i ∈ BR_i(π_{−i}) ∀i,
(a) Matching pennies:    R1 = [ −1  1 ;  1 −1 ],               R2 = −R1
(b) Rock-Paper-Scissors: R1 = [ 0 −1 1 ; 1 0 −1 ; −1 1 0 ],    R2 = −R1
(c) Hawk-Dove:           R1 = [ 0  3 ;  1  2 ],                R2 = [ 0  1 ;  3  2 ]
(d) Prisoner's Dilemma:  R1 = [ 2  0 ;  3  1 ],                R2 = [ 2  3 ;  0  1 ]
(Matrix rows are separated by semicolons; the Rock-Paper-Scissors matrix is written in the standard action order rock, paper, scissors.)
Figure 1: Some common examples of single-shot matrix games.
then the game is said to be in Nash equilibrium. Once all players are playing by a Nash
equilibrium, no single player has an incentive to unilaterally deviate from his equilibrium
policy. Any game can be solved for its Nash equilibria using quadratic programming, and
a player can choose an optimal strategy in this fashion, given prior knowledge of the game
structure. The only problem arises when there are multiple Nash equilibria. If the players
do not manage to coordinate on one equilibrium joint policy, then they may all end up
worse off. The Hawk-Dove game shown in Figure 1(c) is a good example of this problem.
The two Nash equilibria occur at (1,2) and (2,1), but if the players do not coordinate, they
may end up playing a joint action (1,1) and receive 0 reward.
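Checking the pure-strategy Nash conditions is a direct transcription of the best-response definition; a small sketch, using our reading of the Hawk-Dove matrices from Figure 1(c) (actions 0-indexed, so the equilibria print as (0, 1) and (1, 0)):

```python
import numpy as np

def pure_nash(R1, R2):
    """All pure joint actions from which neither player gains by deviating."""
    equilibria = []
    for i in range(R1.shape[0]):
        for j in range(R1.shape[1]):
            if R1[i, j] >= R1[:, j].max() and R2[i, j] >= R2[i, :].max():
                equilibria.append((i, j))
    return equilibria

R1 = np.array([[0, 3], [1, 2]])   # row player's rewards in Hawk-Dove
R2 = np.array([[0, 1], [3, 2]])   # column player's rewards
print(pure_nash(R1, R2))          # [(0, 1), (1, 0)]
```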
Stochastic games and reinforcement learning. Despite these problems, there is general
agreement that Nash equilibrium is an appropriate solution concept for one-shot games. In
contrast, for repeated games there are a range of different perspectives. Repeated games
generalize one-shot games by assuming that the players repeat the matrix game over many
time periods. Researchers in reinforcement learning view repeated games as a special case
of stochastic, or Markov, games. Researchers in game theory, on the other hand, view
repeated games as an extension of their theory of one-shot matrix games. The resulting frameworks are similar, but with a key difference in their treatment of game history.
Reinforcement learning researchers focus their attention on choosing a single stationary
policy π that will maximize the learner's expected rewards in all future time periods given
that we are in time t:
   max_π E[ Σ_{τ=t}^{T} γ^{τ−t} R^τ(π) ],
where T may be finite or infinite, and π ∈ PD(A). In the infinite time-horizon case, we
often include the discount factor 0 < γ < 1.
Littman [1] analyzes this framework for zero-sum games, proving convergence to the Nash
equilibrium for his minimax-Q algorithm playing against another minimax-Q agent. Claus
and Boutilier [2] examine cooperative games where R_1 = R_2, and Hu and Wellman [3]
focus on general-sum games. These algorithms share the common goal of finding and
playing a Nash equilibrium. Littman [4] and Hall and Greenwald [5] further extend this
approach to consider variants of Nash equilibrium for which convergence can be guaranteed. Bowling and Veloso [6] and Nagayuki et al. [7] propose to relax the mutual optimality
requirement of Nash equilibrium by considering rational agents, which always learn to play
a stationary best response to their opponent's strategy, even if the opponent is not playing
an equilibrium strategy. The motivation is that it allows our agents to act rationally even
if the opponent is not acting rationally because of physical or computational limitations.
Fictitious play [8] is a similar algorithm from game theory.
Game theoretic perspective of repeated games. As alluded to in the previous section,
game theorists often take a more general view of optimality in repeated games. The key
difference is the treatment of the history of actions taken in the game.
Table 1: Summary of multi-agent learning algorithms under our new classification.

|     | H0                | H1        | H∞                                           |
| B0  | minimax-Q, Nash-Q |           | Q-learning (Q0), (WoLF-)PHC, fictitious play |
| B1  |                   | Q1        |                                              |
| B∞  | Bully             | Godfather | multiplicative-weight*                       |

* assumes public knowledge of the opponent's policy at each period
Recall that in the stochastic game model, we took σ_i ∈ PD(A_i). Here we redefine σ_i : H → PD(A_i),
where H = ∪_t H^t and H^t is the set of all possible histories of length t. Histories are
observations of joint actions, h_t = (a_i, a_{−i}, h_{t−1}). Player i's strategy at time t is then
expressed as σ_i(h_{t−1}). In essence, we are endowing our agent with memory. Moreover,
the agent ought to be able to form beliefs about the opponent's strategy, and these beliefs
ought to converge to the opponent's actual strategy given sufficient learning time. Let
β_i : H → PD(A_{−i}) be player i's belief about the opponent's strategy. Then a learning
path is defined to be a sequence of histories, beliefs, and personal strategies. Now we can
define the Nash equilibrium of a repeated game in terms of our personal strategy and our
beliefs about the opponent. If our prediction about the opponent's strategy is accurate, then
we can choose an appropriate best-response strategy. If this holds for all players in the
game, then we are guaranteed to be in Nash equilibrium.
Proposition 1.1. A learning path {(h_t, σ_i(h_{t−1}), β_i(h_{t−1})) | t = 1, 2, . . .} converges to a
Nash equilibrium iff the following two conditions hold:

- Optimization: ∀t, σ_i(h_{t−1}) ∈ BR_i(β_i(h_{t−1})). We always play a best response to
  our prediction of the opponent's strategy.
- Prediction: lim_{t→∞} |β_i(h_{t−1}) − σ_{−i}(h_{t−1})| = 0. Over time, our belief about the
  opponent's strategy converges to the opponent's actual strategy.
However, Nachbar and Zame [9] show that this requirement of simultaneous prediction
and optimization is impossible to achieve, given certain assumptions about our possible
strategies and possible beliefs. We can never design an agent that will learn to both predict
the opponent's future strategy and optimize over those beliefs at the same time. Despite this
fact, if we assume some extra knowledge about the opponent, we can design an algorithm
that approximates the best-response stationary policy over time against any opponent. In
the game theory literature, this concept is often called universal consistency. Fudenberg
and Levine [8] and Freund and Schapire [10] independently show that a multiplicative-weight algorithm exhibits universal consistency from the game theory and machine learning
perspectives. This gives us a strong result, but requires the strong assumption that we know
the opponent's policy at each time period. This is typically not the case.
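To illustrate the flavor of such an algorithm, here is a Hedge-style multiplicative-weight update (a sketch of the general technique only; the exponential form and learning rate eta are one standard choice, not necessarily the exact algorithms of [8, 10], and it indeed requires the full reward vector of each period to be observable):

```python
import numpy as np

def mw_update(weights, rewards, eta=0.1):
    """One multiplicative-weight step; rewards[a] is what action a would have earned."""
    weights = weights * np.exp(eta * rewards)
    return weights / weights.sum()  # normalized weights double as the mixed policy
```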
2 A new classification and a new algorithm

We propose a general classification that categorizes algorithms by the cross-product of
their possible strategies and their possible beliefs about the opponent's strategy, H × B. An
agent's possible strategies can be classified based upon the amount of history it has in memory, from H0 to H∞. Given more memory, the agent can formulate more complex policies,
since policies are maps from histories to action distributions. H0 agents are memoryless
and can only play stationary policies. Agents that can recall the actions from the previous
time period are classified as H1 and can execute reactive policies. At the other extreme,
H∞ agents have unbounded memory and can formulate ever more complex strategies as
the game is played over time. An agent's belief classification mirrors the strategy classification in the obvious way. Agents that believe their opponent is memoryless are classified
as B0 players, Bt players believe that the opponent bases its strategy on the previous t
periods of play, and so forth. Although not explicitly stated, most existing algorithms make
assumptions and thus hold beliefs about the types of possible opponents in the world.
We can think of each H_s × B_t as a different league of players, with players in each league
roughly equal to one another in terms of their capabilities. Clearly some leagues contain
less capable players than others. We can thus define a fair opponent as an opponent from an
equal or lesser league. The idea is that new learning algorithms should ideally be designed
to beat any fair opponent.
The key role of beliefs. Within each league, we assume that players are fully rational
in the sense that they can fully use their available histories to construct their future policy.
However, an important observation is that the definition of full rationality depends on their
beliefs about the opponent. If we believe that our opponent is a memoryless player, then
even if we are an H∞ player, our fully rational strategy is to simply model the opponent's
stationary strategy and play our stationary best response. Thus, our belief capacity and our
history capacity are inter-related. Without a rich set of possible beliefs about our opponent,
we cannot make good use of our available history. Similarly, and perhaps more obviously,
without a rich set of historical observations, we cannot hope to model complex opponents.
Discussion of current algorithms. Many of the existing algorithms fall within the H∞ × B0
league. As discussed in the previous section, the problem with these players is that even
though they have full access to the history, their fully rational strategy is stationary due to
their limited belief set. A general example of an H∞ × B0 player is the policy hill climber
(PHC). It maintains a policy and updates the policy based upon its history in an attempt
to maximize its rewards. Originally PHC was created for stochastic games, and thus each
policy also depends on the current state s. In our repeated games, there is only one state.
For agent i, Policy Hill Climbing (PHC) proceeds as follows:

1. Let α and δ be the learning rates. Initialize
   Q(s, a) ← 0,  π_i(s, a) ← 1/|A_i|,  ∀s ∈ S, a ∈ A_i.
2. Repeat,
   a. From state s, select action a according to the mixed policy π_i(s) with some exploration.
   b. Observing reward r and next state s′, update
      Q(s, a) ← (1 − α) Q(s, a) + α (r + γ max_{a′} Q(s′, a′)).
   c. Update π_i(s, a) and constrain it to a legal probability distribution:
      π_i(s, a) ← π_i(s, a) + δ if a = argmax_{a′} Q(s, a′), and −δ/(|A_i| − 1) otherwise.
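A minimal runnable sketch of this loop for a repeated (single-state) matrix game is given below. The environment interface (opponent_action), the ε-greedy exploration, and the clip-and-renormalize projection in step c are our own simplifications:

```python
import numpy as np

def run_phc(R1, opponent_action, steps=10000, alpha=0.1, delta=0.01,
            gamma=0.9, eps=0.05, rng=np.random.default_rng(0)):
    n = R1.shape[0]
    Q = np.zeros(n)
    pi = np.full(n, 1.0 / n)
    for _ in range(steps):
        # a. sample an action from the mixed policy, with some exploration
        a = rng.integers(n) if rng.random() < eps else rng.choice(n, p=pi)
        # b. observe the reward; with a single state, max_a' Q(s', a') is Q.max()
        r = R1[a, opponent_action()]
        Q[a] = (1 - alpha) * Q[a] + alpha * (r + gamma * Q.max())
        # c. hill-climb toward the greedy action, then project back onto the simplex
        greedy = Q.argmax()
        pi = pi + np.where(np.arange(n) == greedy, delta, -delta / (n - 1))
        pi = np.clip(pi, 0.0, None)
        pi = pi / pi.sum()
    return pi, Q
```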
The basic idea of PHC is that the Q-values help us to define a gradient upon which we
execute hill-climbing. Bowling and Veloso's WoLF-PHC [6] modifies PHC by adjusting δ
depending on whether the agent is "winning" or "losing." True to their league, PHC players
play well against stationary opponents.
At the opposite end of the spectrum, Littman and Stone [11] propose algorithms in H0 × B∞
and H1 × B∞ that are leader strategies in the sense that they choose a fixed strategy and
hope that their opponent will "follow" by learning a best response to that fixed strategy.
Their "Bully" algorithm chooses a fixed memoryless stationary policy, while "Godfather"
has memory of the last time period. Opponents included normal Q-learning and Q1 players,
which are similar to Q-learners except that they explicitly learn using one period of memory
because they believe that their opponent is also using memory to learn. The interesting
result is that "Godfather" is able to achieve non-stationary equilibria against Q1 in the
repeated Prisoner's Dilemma game, with rewards for both players that are higher than the
stationary Nash equilibrium rewards. This demonstrates the power of having belief models.
However, because these algorithms do not have access to more than one period of history,
they cannot begin to attempt to construct statistical models of the opponent. "Godfather"
works well because it has a built-in best response to Q1 learners rather than attempting to
learn a best response from experience.
Finally, Hu and Wellman's Nash-Q and Littman's minimax-Q are classified as H0 × B0
players, because even though they attempt to learn the Nash equilibrium through experience, their play is fixed once this equilibrium has been learned. Furthermore, they assume
that the opponent also plays a fixed stationary Nash equilibrium, which they hope is the
other half of their own equilibrium strategy. These algorithms are summarized in Table 1.
A new class of players. As discussed above, most existing algorithms do not form beliefs
about the opponent beyond B0. None of these approaches is able to capture the essence of
game-playing, which is a world of threats, deceits, and generally out-witting the opponent.
We wish to open the door to such possibilities by designing learners that can model the
opponent and use that information to achieve better rewards. Ideally we would like to
design an algorithm in H∞ × B∞ that is able to win or come to an equilibrium against
any fair opponent. Since this is impossible [9], we start by proposing an algorithm in the
league H∞ × B∞ that plays well against a restricted class of opponents. Since many of the
current algorithms are best-response players, we choose an opponent class such as PHC,
which is a good example of a best-response player in H∞ × B0. We will demonstrate that
our algorithm indeed beats its PHC opponents and in fact does well against most of the
existing fair opponents.
A new algorithm: PHC-Exploiter. Our algorithm is different from most previous work
in that we are explicitly modeling the opponent's learning algorithm and not simply its
current policy. In particular, we would like to model players from H∞ × B0. Since we
are in H∞ × B∞, it is rational for us to construct such models because we believe that
the opponent is learning and adapting to us over time using its history. The idea is that we
will "fool" our opponent into thinking that we are stupid by playing a decoy policy for a
number of time periods and then switch to a different policy that takes advantage of their
best response to our decoy policy. From a learning perspective, the idea is that we adapt
much faster than the opponent; in fact, when we switch away from our decoy policy, our
adjustment to the new policy is immediate. In contrast, the H∞ × B0 opponent adjusts its
policy by small increments and is furthermore unable to model our changing behavior. We
can repeat this "bluff and bash" cycle ad infinitum, thereby achieving infinite total rewards
as t → ∞. The opponent never catches on to us because it believes that we only play
stationary policies.
A good example of an H∞ × B0 player is PHC. Bowling and Veloso showed that in self-play, a restricted version of WoLF-PHC always reaches a stationary Nash equilibrium in
two-player two-action games, and that the general WoLF-PHC seems to do the same in
experimental trials. Thus, in the long run, a WoLF-PHC player achieves its stationary
Nash equilibrium payoff against any other PHC player. We wish to do better than that
by exploiting our knowledge of the PHC opponent's learning strategy. We can construct
a PHC-Exploiter algorithm for agent i that proceeds like PHC in steps 1-2b, and then
continues as follows:
c. Observing action a^t_{−i} at time t, update our history h and calculate an estimate of the
opponent's policy:
   π̂^t_{−i}(s, a) = ( Σ_{τ=t−w}^{t} #(h[τ] = a) ) / w,  ∀a,
where w is the window of estimation and #(h[τ] = a) = 1 if the opponent's action at time
τ is equal to a, and 0 otherwise. We estimate π̂^{t−w}_{−i}(s) similarly.

d. Update δ by estimating the learning rate of the PHC opponent:
   δ ← ‖π̂^t_{−i}(s) − π̂^{t−w}_{−i}(s)‖ / w.

e. Update π_i(s, a). If we are winning, i.e. Σ_{a′} π_i(s, a′) Q(s, a′) > R_i(π̂*_{−i}(s), π̂_{−i}(s)),
then update
   π_i(s, a) ← 1 if a = argmax_{a′} Q(s, a′), and 0 otherwise;
otherwise we are losing, then update
   π_i(s, a) ← π_i(s, a) + δ if a = argmax_{a′} Q(s, a′), and −δ/(|A_i| − 1) otherwise.
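A hedged sketch of steps c-e for the single-state case is given below. The opponent's action history, the equilibrium strategy pi_eq, and the expected-reward function Ri are assumed given; the L1 norm in step d is one choice of distance, and the winning test encodes our reconstruction of the condition above:

```python
import numpy as np

def exploiter_update(pi, Q, history, w, n, delta, pi_eq, Ri):
    """One PHC-Exploiter update; history is an int array of opponent actions (len >= 2w)."""
    # c. empirical opponent policy over the last window, and over the window before it
    recent = np.bincount(history[-w:], minlength=n) / w
    older = np.bincount(history[-2 * w:-w], minlength=n) / w
    # d. estimate the opponent's learning rate from its policy drift
    delta_opp = np.abs(recent - older).sum() / w
    # e. if winning, commit fully to the greedy action; otherwise hill-climb as in PHC
    greedy = Q.argmax()
    if pi @ Q > Ri(pi_eq, recent):
        pi = np.zeros(n)
        pi[greedy] = 1.0
    else:
        pi = pi + np.where(np.arange(n) == greedy, delta, -delta / (n - 1))
        pi = np.clip(pi, 0.0, None)
        pi = pi / pi.sum()
    return pi, delta_opp
```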
Note that we derive both the opponent's learning rate δ and the opponent's policy π̂_{−i}(s)
from estimates using the observable history of actions. If we assume the game matrix is
public information, then we can solve for the equilibrium strategy π̂*_{−i}(s); otherwise we can
run WoLF-PHC for some finite number of time periods to obtain an estimate of this equilibrium strategy. The main idea of this algorithm is that we take full advantage of all time
periods in which we are winning, that is, when Σ_{a′} π_i(s, a′) Q(s, a′) > R_i(π̂*_{−i}(s), π̂_{−i}(s)).
Analysis. The PHC-Exploiter algorithm is based upon PHC and thus exhibits the same
behavior as PHC in games with a single pure Nash equilibrium. Both agents generally
converge to the single pure equilibrium point. The interesting case arises in competitive
games where the only equilibria require mixed strategies, as discussed by Singh et al. [12]
and Bowling and Veloso [6]. Matching pennies, shown in Figure 1(a), is one such game.
PHC-Exploiter is able to use its model of the opponent's learning algorithm to choose better
actions.
In the full knowledge case where we know our opponent's policy π_2 and learning rate δ_2 at
every time period, we can prove that a PHC-Exploiter learning algorithm will guarantee us
unbounded reward in the long run playing games such as matching pennies.

Proposition 2.1. In the zero-sum game of matching pennies, where the only Nash equilibrium requires the use of mixed strategies, PHC-Exploiter is able to achieve unbounded
rewards as t → ∞ against any PHC opponent, given that play follows the cycle C defined
by the arrowed segments shown in Figure 2.
Play proceeds along Cw, Cl, then jumps from (0.5, 0) to (1, 0), follows the line segments to
(0.5, 1), then jumps back to (0, 1). Given a point (x, y) = (π_1(H), π_2(H)) on the graph in
Figure 2, where π_i(H) is the probability with which player i plays Heads, we know that our
expected reward is
   R_1(x, y) = −1 · [xy + (1 − x)(1 − y)] + 1 · [(1 − x)y + x(1 − y)].
[Figure 2: two plots of the joint action distribution of the two-agent system, with player 1's probability of choosing Heads on the horizontal axis and player 2's on the vertical axis; the cycle's winning segments Cw and losing segments Cl are marked, and the empirical plot distinguishes periods where agent 1 is winning from those where it is losing.]
Figure 2: Theoretical (left), Empirical (right). The cyclic play is evident in our empirical
results, where we play a PHC-Exploiter player 1 against a PHC player 2.
[Figure 3: plot of agent 1's total reward (roughly -4000 to 8000) against time period (0 to 100,000).]
Figure 3: Total rewards for agent 1 increase as we gain reward through each cycle.
We wish to show that
   ∫_C R_1(x, y) dt = 2 · [ ∫_{Cw} R_1(x, y) dt + ∫_{Cl} R_1(x, y) dt ] > 0.
We consider each part separately. In the losing section, we let g(t) = x = t and h(t) =
y = 1/2 − t, where 0 ≤ t ≤ 1/2. Then
   ∫_{Cl} R_1(x, y) dt = ∫_0^{1/2} R_1(g(t), h(t)) dt = −1/12.
Similarly, we can show that we receive 1/4 reward over Cw. Thus, ∫_C R_1(x, y) dt = 1/3 >
0, and we have shown that we receive a payoff greater than the Nash equilibrium payoff of
zero over every cycle. It is easy to see that play will indeed follow the cycle C to a good
approximation, depending on the size of δ_2. In the next section, we demonstrate that we
can estimate π_2 and δ_2 sufficiently well from past observations, thus eliminating the full
knowledge requirements that were used to ensure the cyclic nature of play above.
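The segment integrals are easy to verify numerically. In the sketch below (ours, not the paper's), the losing leg uses the parametrization above, and the winning leg is taken, by symmetry, as x pinned at 1 while y drifts over half a unit of time:

```python
import numpy as np

def R1(x, y):
    # expected reward of player 1 in matching pennies at the joint mixed point (x, y)
    return -1.0 * (x * y + (1 - x) * (1 - y)) + 1.0 * ((1 - x) * y + x * (1 - y))

def integrate(f_vals, t):
    # simple trapezoid rule, to avoid any dependence on np.trapz
    return float(np.sum((f_vals[:-1] + f_vals[1:]) * np.diff(t) / 2.0))

t = np.linspace(0.0, 0.5, 100001)
losing = integrate(R1(t, 0.5 - t), t)             # g(t) = t, h(t) = 1/2 - t
winning = integrate(R1(np.ones_like(t), t), t)    # x pinned at 1, y drifting up
print(losing, winning, 2 * (winning + losing))    # ~ -1/12, 1/4, 1/3
```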
Experimental results. We used the PHC-Exploiter algorithm described above to play
against several PHC variants in different iterated matrix games, including matching pennies, Prisoner's Dilemma, and Rock-Paper-Scissors. Here we give the results for the matching pennies game analyzed above, playing against WoLF-PHC. We used a window of
w = 5000 time periods to estimate the opponent's current policy π_2 and the opponent's
learning rate δ_2. As shown in Figure 2, the play exhibits the cyclic nature that we predicted. The two solid vertical lines indicate periods in which our PHC-Exploiter player is
winning, and the dashed, roughly diagonal, lines indicate periods in which it is losing.

In the analysis given in the previous section, we derived an upper bound for our total rewards over time, which was 1/6 for each time step. Since we have to estimate various
parameters in our experimental run, we do not achieve this level of reward. We gain an
average of 0.08 total reward for each time period. Figure 3 plots the total reward for our
PHC-Exploiter agent over time. The periods of winning and losing are very clear from
this graph. Further experiments tested the effectiveness of PHC-Exploiter against other fair
opponents, including itself. Against all the existing fair opponents shown in Table 1, it
achieved at least its average equilibrium payoff in the long run. Not surprisingly, it also
posted this score when it played against a multiplicative-weight learner.
Conclusion and future work. In this paper, we have presented a new classification for
multi-agent learning algorithms and suggested an algorithm that seems to dominate existing
algorithms from the fair opponent leagues when playing certain games. Ideally, we would
like to create an algorithm in the league H∞ × B∞ that provably dominates larger classes
of fair opponents in any game. Moreover, all of the discussion contained within this paper
dealt with the case of iterated matrix games. We would like to extend our framework to
more general stochastic games with multiple states and multiple players. Finally, it would
be interesting to find practical applications of these multi-agent learning algorithms.

Acknowledgements. This work was supported in part by a Graduate Research Fellowship from the National Science Foundation.
References
[1] Michael L. Littman. Markov games as a framework for multi-agent reinforcement learning. In
Proceedings of the 11th International Conference on Machine Learning (ICML-94), 1994.
[2] Caroline Claus and Craig Boutilier. The dynamics of reinforcement learning in cooperative
multiagent systems. In Proceedings of the 15th Natl. Conf. on Artificial Intelligence, 1998.
[3] Junling Hu and Michael P. Wellman. Multiagent reinforcement learning: Theoretical framework
and an algorithm. In Proceedings of the 15th Int. Conf. on Machine Learning (ICML-98), 1998.
[4] Michael L. Littman. Friend-or-foe Q-learning in general-sum games. In Proceedings of the 18th
Int. Conf. on Machine Learning (ICML-01), 2001.
[5] Keith Hall and Amy Greenwald. Correlated Q-learning. In DIMACS Workshop on Computational Issues in Game Theory and Mechanism Design, 2001.
[6] Michael Bowling and Manuela Veloso. Multiagent learning using a variable learning rate.
Under submission.
[7] Yasuo Nagayuki, Shin Ishii, and Kenji Doya. Multi-agent reinforcement learning: An approach
based on the other agent's internal model. In Proceedings of the International Conference on
Multi-Agent Systems (ICMAS-00), 2000.
[8] Drew Fudenberg and David K. Levine. Consistency and cautious fictitious play. Journal of
Economic Dynamics and Control, 19:1065-1089, 1995.
[9] J.H. Nachbar and W.R. Zame. Non-computable strategies and discounted repeated games. Economic Theory, 1996.
[10] Yoav Freund and Robert E. Schapire. Adaptive game playing using multiplicative weights.
Games and Economic Behavior, 29:79-103, 1999.
[11] Michael Littman and Peter Stone. Leading best-response strategies in repeated games. In 17th
Int. Joint Conf. on Artificial Intelligence (IJCAI-2001) workshop on Economic Agents, Models,
and Mechanisms, 2001.
[12] S. Singh, M. Kearns, and Y. Mansour. Nash convergence of gradient dynamics in general-sum
games. In Proceedings of the 16th Conference on Uncertainty in Artificial Intelligence, 2000.
Discriminative Direction for Kernel Classifiers
Polina Golland
Artificial Intelligence Lab
Massachusetts Institute of Technology
Cambridge, MA 02139
polina@ai.mit.edu
Abstract
In many scientific and engineering applications, detecting and understanding differences between two groups of examples can be reduced
to a classical problem of training a classifier for labeling new examples
while making as few mistakes as possible. In the traditional classification setting, the resulting classifier is rarely analyzed in terms of the
properties of the input data captured by the discriminative model. However, such analysis is crucial if we want to understand and visualize the
detected differences. We propose an approach to interpretation of the statistical model in the original feature space that allows us to argue about
the model in terms of the relevant changes to the input vectors. For each
point in the input space, we define a discriminative direction to be the
direction that moves the point towards the other class while introducing
as little irrelevant change as possible with respect to the classifier function. We derive the discriminative direction for kernel-based classifiers,
demonstrate the technique on several examples and briefly discuss its use
in the statistical shape analysis, an application that originally motivated
this work.
1 Introduction
Once a classifier is estimated from the training data, it can be used to label new examples,
and in many application domains, such as character recognition, text classification and others, this constitutes the final goal of the learning stage. The statistical learning algorithms
are also used in scientific studies to detect and analyze differences between the two classes
when the "correct answer" is unknown, and the information we have on the differences
is represented implicitly by the training set. Example applications include morphological analysis of anatomical organs (comparing organ shape in patients vs. normal controls),
molecular design (identifying complex molecules that satisfy certain requirements), etc. In
such applications, interpretation of the resulting classifier in terms of the original feature
vectors can provide an insight into the nature of the differences detected by the learning
algorithm and is therefore a crucial step in the analysis. Furthermore, we would argue that
studying the spatial structure of the data captured by the classification function is important
in any application, as it leads to a better understanding of the data and can potentially help
in improving the technique.
This paper addresses the problem of translating a classifier into a different representation
that allows us to visualize and study the differences between the classes. We introduce
and derive a so called discriminative direction at every point in the original feature space
with respect to a given classifier. Informally speaking, the discriminative direction tells
us how to change any input example to make it look more like an example from another
class without introducing any irrelevant changes that possibly make it more similar to other
examples from the same class. It allows us to characterize differences captured by the
classifier and to express them as changes in the original input examples.
This paper is organized as follows. We start with a brief background section on kernel-based
for derivation of the discriminative direction. We follow the notation used in [3, 8, 9]. In
Section 3, we provide a formal definition of the discriminative direction and explain how
it can be estimated from the classification function. We then present some special cases,
in which the computation can be simplified significantly due to a particular structure of the
kernel. Section 4 demonstrates the discriminative direction for different kernels, followed
by an example from the problem of statistical analysis of shape differences that originally
motivated this work.
2 Basic Notation
Given a training set of l pairs {(x_k, y_k)}_{k=1}^{l}, where x_k ∈ R^n are observations and
y_k ∈ {−1, 1} are corresponding labels, and a kernel function K : R^n × R^n → R (with
its implied mapping function Φ_K : R^n → F), the Support Vector Machines (SVMs) algorithm [8] constructs a classifier by implicitly mapping the training data into a higher
dimensional space and estimating a linear classifier in that space that maximizes the margin between the classes (Fig. 1a). The normal to the resulting separating hyperplane is a
linear combination of the training data:
   w = Σ_k α_k y_k Φ_K(x_k),     (1)
where the coefficients α_k are computed by solving a constrained quadratic optimization
problem. The resulting classifier
   f_K(x) = ⟨Φ_K(x) · w⟩ + b = Σ_k α_k y_k ⟨Φ_K(x) · Φ_K(x_k)⟩ + b = Σ_k α_k y_k K(x, x_k) + b     (2)
defines a nonlinear separating boundary in the original feature space.
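For concreteness, a small sketch of Eq. (2) in Python (the RBF kernel is one common choice of K; the coefficients alpha, labels y, and bias b are assumed to come from a standard SVM solver):

```python
import numpy as np

def rbf_kernel(u, v, sigma=1.0):
    # K(u, v) = exp(-||u - v||^2 / sigma); broadcasts over the rows of u
    return np.exp(-np.sum((u - v) ** 2, axis=-1) / sigma)

def decision_function(x, support_x, alpha, y, b, kernel=rbf_kernel):
    """f_K(x) = sum_k alpha_k y_k K(x, x_k) + b, as in Eq. (2)."""
    return float(np.sum(alpha * y * kernel(support_x, x)) + b)
```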
3 Discriminative Direction

Equations (1) and (2) imply that the classification function f_K(x) is directly proportional
to the signed distance from the input point to the separating boundary computed in the
higher dimensional space defined by the mapping Φ_K. In other words, the function output depends only on the projection of vector Φ_K(x) onto w and completely ignores the
component of Φ_K(x) that is perpendicular to w. This suggests that in order to create a
displacement of Φ_K(x) that corresponds to the differences between the two classes, one
should change the vector's projection onto w while keeping its perpendicular component
the same. In the linear case, we can easily perform this operation, since we have access to
the image vectors, Φ_K(x) = x. This is similar to visualization techniques typically used
in linear generative modeling, where the data variation is captured using PCA, and new
samples are generated by changing a single principal component at a time. However, this
approach is infeasible in the non-linear case, because we do not have access to the image
vectors Φ_K(x). Furthermore, the resulting image vector might not even have a source in
the original feature space, i.e., there might be no vector in the original space R^n that maps
into the resulting vector in the space F. Our solution is to search for the direction around
[Figure 1: schematic of (a) the mapping into the kernel space with separating hyperplane normal w, and (b) the displacement dz of the image of x under a move dx, decomposed into its projection p onto w and its deviation e.]
Figure 1: Kernel-based classification (a) and the discriminative direction (b).
the feature vector x in the original space that minimizes the divergence of its image Φ_K(x)
from the direction of the projection vector w(1). We call it a discriminative direction, as it
represents the direction that affects the output of the classifier while introducing as little
irrelevant change as possible into the input vector.

Formally, as we move from x to x + dx in R^n, the image vector in the space F changes by
dz = Φ_K(x + dx) − Φ_K(x) (Fig. 1b). This displacement can be thought of as a vector
sum of its projection onto w and its deviation from w:
   p = (⟨dz · w⟩ / ⟨w · w⟩) w   and   e = dz − p = dz − (⟨dz · w⟩ / ⟨w · w⟩) w.     (3)
The discriminative direction minimizes the divergence component e, leading to the following optimization problem:
   minimize   E(dx) = ‖e‖² = ⟨dz · dz⟩ − ⟨dz · w⟩² / ⟨w · w⟩     (4)
   s.t.       ‖dx‖² = ε.     (5)
Since the cost function depends only on dot products of vectors in the space F, it can be
computed using the kernel function K:
   ⟨w · w⟩ = Σ_{k,m} α_k α_m y_k y_m K(x_k, x_m),     (6)
   ⟨dz · w⟩ = ∇f_K(x) dx,     (7)
   ⟨dz · dz⟩ = dxᵀ H_K(x) dx,     (8)
where ∇f_K(x) is the gradient of the classifier function f_K evaluated at x and represented
by a row-vector, and the matrix H_K(x) is one of the (equivalent) off-diagonal quarters of the
Hessian of K, evaluated at (x, x):
   H_K(x)[i, j] = ∂²K(u, v) / (∂u_i ∂v_j) |_{u=x, v=x}.     (9)
Substituting into Equation (4), we obtain
   minimize   E(dx) = dxᵀ ( H_K(x) − ‖w‖⁻² ∇f_Kᵀ(x) ∇f_K(x) ) dx     (10)
   s.t.       ‖dx‖² = ε.     (11)

(1) A similar complication arises in kernel-based generative modeling, e.g., kernel PCA [7]. Constructing linear combinations of vectors in the space F leads to a global search in the original
space [6, 7]. Since we are interested in the direction that best approximates w, we use infinitesimal analysis that results in a different optimization problem.
The solution to this problem is the smallest eigenvector of the matrix
   Q_K(x) = H_K(x) − ‖w‖⁻² ∇f_Kᵀ(x) ∇f_K(x).     (12)
Note that in general, the matrix Q_K(x) and its smallest eigenvector are not the same for
different points in the original space and must be estimated separately for every input vector x. Furthermore, each solution defines two opposite directions in the input space, corresponding to the positive and the negative projections onto w. We want to move the input
example towards the opposite class and therefore assign the direction of increasing function
values to the examples with label −1 and the direction of decreasing function values to the
examples with label 1.

A closed-form solution of this minimization problem could be desirable, or even
necessary, if the dimensionality of the input space is high and computing the smallest eigenvector is computationally expensive and numerically challenging. In the next section, we
demonstrate how a particular form of the matrix H_K(x) leads to an analytical solution for
a large family of kernel functions(2).
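In the general case, the computation amounts to one symmetric eigendecomposition per input point. The sketch below (ours) instantiates H_K(x) for a polynomial kernel using the expression derived as Eq. (15) in Section 3.1 below; the gradient ∇f_K(x) and ‖w‖² are assumed precomputed from the trained classifier:

```python
import numpy as np

def poly_hessian_block(x, d):
    """H_K(x) for the polynomial kernel K(u, v) = (1 + <u, v>)^d, per Eq. (15)."""
    s = 1.0 + x @ x
    return d * s ** (d - 1) * np.eye(len(x)) + d * (d - 1) * s ** (d - 2) * np.outer(x, x)

def discriminative_direction(x, grad_f, w_norm_sq, d=2):
    """Smallest eigenvector of Q_K(x) = H_K(x) - ||w||^-2 grad^T grad, per Eq. (12)."""
    Q = poly_hessian_block(x, d) - np.outer(grad_f, grad_f) / w_norm_sq
    eigvals, eigvecs = np.linalg.eigh(Q)   # eigenvalues in ascending order
    return eigvecs[:, 0]                   # eigenvector of the smallest eigenvalue
```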
3.1 Analytical Solution for Discriminative Direction

It is easy to see that if H_K(x) is a multiple of the identity matrix, H_K(x) = cI, then
the smallest eigenvector of the matrix Q_K(x) is equal to the largest eigenvector of the
matrix ∇f_Kᵀ(x) ∇f_K(x), namely the gradient of the classifier function ∇f_Kᵀ(x). We will
show in this section that both for the linear kernel and, more surprisingly, for RBF kernels,
the matrix H_K(x) is of the right form to yield an analytical solution of this form. It is
well known that to achieve the fastest change in the value of a function, one should move
along its gradient. In the case of the linear and the RBF kernels, the gradient also corresponds to the direction that distinguishes between the two classes while ignoring intra-class
variability.

Dot product kernels, K(u, v) = k(⟨u · v⟩). For any dot product kernel,
   ∂²K(u, v) / (∂u_i ∂v_j) |_{u=x, v=x} = k′(‖x‖²) δ_ij + k″(‖x‖²) x_i x_j,     (13)
and therefore H_K(x) = cI for all x if and only if k″(‖x‖²) ≡ 0, i.e., when k is a linear
function. Thus the linear kernel is the only dot product kernel for which this simplification
is relevant. In the linear case, H_K(x) = I, and the discriminative direction is defined as
   dx* = ∇f_Kᵀ(x) = w = Σ_k α_k y_k x_k;   E(dx*) = 0.     (14)
This is not entirely surprising, as the classifier is a linear function in the original space and
we can move precisely along w.

Polynomial kernels are a special case of dot product kernels. For polynomials of degree
d ≥ 2,
   ∂²K(u, v) / (∂u_i ∂v_j) |_{u=x, v=x} = d(1 + ‖x‖²)^{d−1} δ_ij + d(d − 1)(1 + ‖x‖²)^{d−2} x_i x_j.     (15)
H_K(x) is not necessarily diagonal for all x, and we have to solve the general eigenvector
problem to identify the discriminative direction.

(2) While a very specialized structure of H_K(x) in the next section is sufficient for simplifying the
solution significantly, it is by no means necessary, and other kernel families might exist for which
estimating the discriminative direction does not require solving the full eigenvector problem.
Distance kernels, K(u, v) = k(‖u − v‖²). For a distance kernel,
   ∂²K(u, v) / (∂u_i ∂v_j) |_{u=x, v=x} = −2 k′(0) δ_ij,     (16)
and therefore the discriminative direction can be determined analytically:
   dx* = ∇f_Kᵀ(x);   E(dx*) = −2 k′(0) − ‖w‖⁻² ‖∇f_Kᵀ(x)‖².     (17)
The Gaussian kernels are a special case of the distance kernel family, and yield a closed-form solution for the discriminative direction:
   dx* = −(2/σ) Σ_k α_k y_k e^{−‖x − x_k‖²/σ} (x − x_k);   E(dx*) = 2/σ − ‖∇f_Kᵀ(x)‖² / ‖w‖².     (18)
Unlike the linear case, we cannot achieve zero error, and the discriminative direction is only
an approximation. The exact solution is unattainable in this case, as it has no corresponding
direction in the original space.
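A sketch of Eq. (18) for the Gaussian kernel (the unit normalization at the end is our own choice; flipping the sign according to the example's label, as described below Eq. (12), is left to the caller):

```python
import numpy as np

def rbf_discriminative_direction(x, support_x, alpha, y, sigma):
    """Gradient direction of Eq. (18) for the Gaussian kernel K = exp(-||u-v||^2/sigma)."""
    diffs = x - support_x                                     # shape (l, n)
    weights = alpha * y * np.exp(-np.sum(diffs ** 2, axis=1) / sigma)
    direction = -(2.0 / sigma) * weights @ diffs              # = grad f_K(x)
    return direction / np.linalg.norm(direction)
```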
3.2 Geometric Interpretation

We start by noting that the image vectors Φ_K(x) do not populate the entire space F, but
rather form a manifold of lower dimensionality whose geometry is fully defined by the
kernel function K (Fig. 1). We will refer to this manifold as the target manifold in this
discussion. We cannot explicitly manipulate elements of the space F, but can only explore
the target manifold through search in the original space. We perform the search in the
original space by considering all points on an infinitesimally small sphere centered at the
original input vector x. In the range space of the mapping function Φ_K, the images of
points x + dx form an ellipsoid defined by the quadratic form dzᵀdz = dxᵀ H_K(x) dx.
For H_K(x) ∝ I, the ellipsoid becomes a sphere, all dz's are of the same length, and
the minimum of error in the displacement vector dz corresponds to the maximum of the
projection of dz onto w. Therefore, the discriminative direction is parallel to the gradient
of the classifier function. If H_K(x) is of any other form, the length of the displacement
vector dz changes as we vary dx, and the minimum of the error in the displacement is not
necessarily aligned with the direction that maximizes the projection.

As a side note, our sufficient condition, H_K(x) ∝ I, implies that the target manifold is
locally flat, i.e., its Riemannian curvature is zero. Curvature and other properties of target
manifolds have been studied extensively for different kernel functions [1, 4]. In particular,
one can show that the kernel function implies a metric on the original space. Similarly to
the natural gradient [2] that maximizes the change in the function value under an arbitrary
metric, we minimize the changes that do not affect the function under the metric implied
by the kernel.
3.3 Selecting Inputs

Given any input example, we can compute the discriminative direction that represents the
differences between the two classes captured by the classifier in the neighborhood of the
example. But how should we choose the input examples for which to compute the discriminative direction? We argue that in order to study the differences between the classes,
one has to examine the input vectors that are close to the separating boundary, namely,
the support vectors. Note that this approach is significantly different from generative
modeling, where a "typical" representative, often constructed by computing the mean of
the training data, is used for analysis and visualization. In the discriminative framework,
we are more interested in the examples that lie close to the opposite class, as they define
the differences between the two classes and the optimal separating boundary.
[Figure 2: three scatter plots of the same two-class training set with discriminative-direction arrows along the separating boundary, one plot each for the linear, quadratic, and Gaussian RBF classifiers (panels a-c).]
Figure 2: Discriminative direction for linear (a), quadratic (b) and Gaussian RBF (c) classifiers. The background is colored using the values of the classifier function. The black
solid line is the separating boundary, the dotted lines indicate the margin corridor. Support
vectors are indicated using solid markers. The length of the vectors is proportional to the
magnitude of the classifier gradient.
Support vectors define a margin corridor whose shape is determined by the kernel type
used for training. We can estimate the distance from any support vector to the separating
boundary by examining the gradient of the classification function for that vector. A large
gradient indicates that the support vector is close to the separating boundary and therefore
can provide more information on the spatial structure of the boundary. This provides a
natural heuristic for assigning importance weighting to different support vectors in the
analysis of the discriminative direction.
4 Simple Example
We first demonstrate the proposed approach on a simple example. Fig. 2 shows three
different classifiers, linear, quadratic and Gaussian RBF, for the same example training set
that was generated using two Gaussian densities with different means and covariance matrices. We show the estimated discriminative direction for all points that are close to the
separating boundary, not just support vectors. While the magnitude of the discriminative direction vector is irrelevant in our infinitesimal analysis, we scaled the vectors in the figure
according to the magnitude of the classifier gradient to illustrate importance ranking. Note
that for the RBF support vectors far away from the boundary (Fig. 2c), the magnitude of
the gradient is so small (a tenth of the magnitude at the boundary) that it renders the vectors
too short to be visible in the figure.

[Figure 3: the right hippocampus of one normal control subject (top row) and one schizophrenia patient (bottom row).]
Figure 3: Right hippocampus in schizophrenia study. First support vector from each group
is shown, four views per shape (front, medial, back, lateral). The color coding is used to
visualize the amount and the direction of the deformation that corresponds to the discriminative direction, changing from blue (moving inwards) to green (zero deformation) to red
(moving outwards).

We can see that in the areas where there is enough
evidence to estimate the boundary reliably, all three classifiers agree on the boundary and
the discriminative direction (lower cluster of arrows). However, if the boundary location
is reconstructed based on the regularization defined by the kernel, the classifiers suggest
different answers (the upper cluster of arrows), stressing the importance of model selection
for classification. The classifiers also provide an indication of the reliability of the differences represented by each arrow, which was repeatedly demonstrated in other experiments
we performed.
5 Morphological Studies
Morphological studies of anatomical organs motivated the analysis presented in this paper.
Here, we show the results for the hippocampus study in schizophrenia. In this study, MRI
scans of the brain were acquired for schizophrenia patients and a matched group of normal
control subjects. The hippocampus structure was segmented (outlined) in all of the scans.
Using the shape information (positions of the outline points), we trained a Gaussian RBF
classifier to discriminate between schizophrenia patients and normal controls. However,
the classifier in its original form does not provide the medical researchers with information
on how the hippocampal shape varies between the two groups. Our goal was to translate
the information captured by the classifier into anatomically meaningful terms of organ
development and deformation.
In this application, the coordinates in the input space correspond to the surface point locations for any particular example shape. The discriminative direction vector corresponds to
displacements of the surface points and can be conveniently represented by a deformation
of the original shape, yielding an intuitive description of shape differences for visualization
and further analysis. We show the deformation that corresponds to the discriminative direction, omitting the details of shape extraction (see [5] for more information). Fig. 3 displays
the first support vector from each group with the discriminative direction "painted" on it.
Each row shows four snapshots of the same shape from different viewpoints(3). The color at
every node of the surface encodes the corresponding component of the discriminative direction. Note that the deformation represented by the two vectors is very similar in nature,
but of opposite signs, as expected from the analysis in Section 3.3. We can see that the
main deformation represented by this pair of vectors is localized in the bulbous "head" of
the structure. The next four support vectors in each group represent a virtually identical deformation to the one shown here. Starting with such visualization, the medical researchers
can explore the organ deformation and interaction caused by the disease.

(3) An alternative way to visualize the same information is to actually generate the animation of the
example shape undergoing the detected deformation.
6 Conclusions

We presented an approach to quantifying the classifier's behavior with respect to small
changes in the input vectors, trying to answer the following question: what changes would
make the original input look more like an example from the other class without introducing irrelevant changes? We introduced the notion of the discriminative direction, which
corresponds to the maximum changes in the classifier's response while minimizing irrelevant changes in the input. For kernel-based classifiers the discriminative direction is
determined by minimizing the divergence of the infinitesimal displacement vector and the
normal to the separating hyperplane in the higher dimensional kernel space. The classifier
interpretation in terms of the original features in general, and the discriminative direction
in particular, is an important component of the data analysis in many applications where
statistical learning techniques are used to discover and study structural differences in
the data.

Acknowledgments. Quadratic optimization was performed using the PR LOQO optimizer
written by Alex Smola. This research was supported in part by NSF grant IIS 9610249.
References
[1] S. Amari and S. Wu. Improving Support Vector Machines by Modifying Kernel Functions. Neural Networks, 783-789, 1999.
[2] S. Amari. Natural Gradient Works Efficiently in Learning. Neural Comp., 10:251-276,
1998.
[3] C. J. C. Burges. A Tutorial on Support Vector Machines for Pattern Recognition. Data
Mining and Knowledge Discovery, 2(2):121-167, 1998.
[4] C. J. C. Burges. Geometry and Invariance in Kernel Based Methods. In Adv. in Kernel
Methods: Support Vector Learning, Eds. Schölkopf, Burges and Smola, MIT Press,
89-116, 1999.
[5] P. Golland et al. Small Sample Size Learning for Shape Analysis of Anatomical Structures. In Proc. of MICCAI'2000, LNCS 1935:72-82, 2000.
[6] B. Schölkopf et al. Input Space vs. Feature Space in Kernel-Based Methods. IEEE
Trans. on Neural Networks, 10(5):1000-1017, 1999.
[7] B. Schölkopf, A. Smola, and K.-R. Müller. Nonlinear Component Analysis as a Kernel Eigenvalue Problem. Neural Comp., 10:1299-1319, 1998.
[8] V. N. Vapnik. The Nature of Statistical Learning Theory. Springer, 1995.
[9] V. N. Vapnik. Statistical Learning Theory. John Wiley & Sons, 1998.
The Steering Approach for Multi-Criteria
Reinforcement Learning
Shie Mannor and Nahum Shimkin
Department of Electrical Engineering
Technion, Haifa 32000, Israel
{shie,shimkin}@{tx,ee}.technion.ac.il
Abstract
We consider the problem of learning to attain multiple goals in a dynamic environment, which is initially unknown. In addition, the environment may contain
arbitrarily varying elements related to actions of other agents or to non-stationary
moves of Nature. This problem is modelled as a stochastic (Markov) game between
the learning agent and an arbitrary player, with a vector-valued reward function.
The objective of the learning agent is to have its long-term average reward vector
belong to a given target set. We devise an algorithm for achieving this task, which
is based on the theory of approachability for stochastic games. This algorithm combines, in an appropriate way, a finite set of standard, scalar-reward learning algorithms. Sufficient conditions are given for the convergence of the learning algorithm
to a general target set. The specialization of these results to the single-controller
Markov decision problem are discussed as well.
1
Introduction
This paper considers an on-line learning problem for Markov decision processes with vector-valued
rewards. Each entry of the reward vector represents a scalar reward (or cost) function which is
of interest. Focusing on the long-term average reward, we assume that the desired performance is
specified through a given target set, to which the average reward vector should eventually belong.
Accordingly, the specified goal of the decision maker is to ensure that the average reward vector will
converge to the target set. Following terminology from game theory, we refer to such convergence
of the reward vector as approaching the target set.
A distinctive feature of our problem formulation is the possible incorporation of arbitrarily varying
elements of the environment, which may account for the influence of other agents or non-stationary
moves of Nature. These are collectively modelled as a second agent, whose actions may affect both
the state transition and the obtained rewards. This agent is free to choose its actions according to
any control policy, and no prior assumptions are made regarding its policy.
This problem formulation is derived from the so-called theory of approachability that was introduced
in [3] in the context of repeated matrix games with vector payoffs. Using a geometric viewpoint, it
characterizes the sets in the reward space that a player can guarantee for himself for any possible
policy of the other player, and provides appropriate policies for approaching these sets. Approachability theory has been extended to stochastic (Markov) games in [14], and the relevant results are
briefly reviewed in Section 2. In this paper we add the learning aspect, and consider the problem of
learning such approaching policies on-line, using Reinforcement Learning (RL) or similar algorithms.
Approaching policies are generally required to be non-stationary. Their construction relies on a
geometric viewpoint, whereby the average reward vector is "steered" in the direction of the target
set by the use of direction-dependent (and possibly stationary) control policies. To motivate the
steering viewpoint, consider the following one-dimensional example of an automatic temperature
controlling agent. The measured property is the temperature which should be in some prescribed
range $[\underline{T}, \overline{T}]$, the agent may activate a cooler or a heater at will. An obvious algorithm that achieves the prescribed temperature range is: when the average temperature is higher than $\overline{T}$, choose a "policy" that reduces it, namely activate the cooler; and if the average temperature is lower than $\underline{T}$,
use the heater. See Figure 1(a) for an illustration. Note that this algorithm is robust and requires
little knowledge about the characteristics of the processes, as would be required by a procedure that
tunes the heater or cooler for continuous operation. A learning algorithm needs only determine
which element to use at each of the two extreme regions.
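As a minimal sketch of this one-dimensional steering rule, the toy simulation below switches between a 'heat' and a 'cool' policy based only on the running average; the drift dynamics and all numerical values are hypothetical choices of ours, not from the paper:

```python
import random

T_LOW, T_HIGH = 18.0, 22.0  # prescribed temperature range (hypothetical values)

def step(temp, policy):
    # Toy dynamics: heating drifts the temperature up, cooling drifts it down.
    drift = 0.5 if policy == "heat" else -0.5
    return temp + drift + random.gauss(0.0, 0.2)

temp, total, policy = 25.0, 0.0, "cool"
for n in range(1, 1001):
    temp = step(temp, policy)
    total += temp
    avg = total / n
    # Steering rule: switch device only when the *average* leaves the target range.
    if avg > T_HIGH:
        policy = "cool"
    elif avg < T_LOW:
        policy = "heat"

print(f"final average temperature: {avg:.2f}")
```

The point of the sketch is that the controller needs almost no model knowledge: only which device to activate in each of the two extreme regions.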
[Figure 1 here: panel (a) shows the temperature axis with the thresholds $\underline{T}$ and $\overline{T}$ and the heating/cooling policies; panel (b) shows the target set in the Temperature-Humidity plane.]
Figure 1: (a) The single-dimensional temperature example. If the temperature is higher than $\overline{T}$ the control is to cool, and if the temperature is lower than $\underline{T}$ the control is to heat. (b) The two-dimensional temperature-humidity example. The learning directions are denoted by arrows; note
that an infinite number of directions are to be considered.
Consider next a more complex multi-objective version of this controlling agent. The controller's
objective is as before to have the temperature in a certain range. One can add other parameters such
as the average humidity, frequency of switching between policies, average energy consumption and
so on. This problem is naturally characterized as a multi-objective problem, in which the objective
of the controller is to have the average reward in some target set. (Note that in this example, the
temperature itself is apparently the object of interest rather than its long-term average. However,
we can reformulate the temperature requirement as an average reward objective by measuring the
fraction of times that the temperature is outside the target range, and require this fraction to be zero.
For the purpose of illustration we shall proceed here with the original formulation). For example,
suppose that the controller is also interested in the humidity. For the controlled environment of, say,
a greenhouse, the allowed level of humidity depends on the average temperature. An illustrative
target set is shown in Figure 1(b). A steering policy for the controller is not as simple anymore.
In place of the two directions (left/right) of the one-dimensional case, we now face a continuum of
possible directions, each associated with a possibly different steering policy. For the purpose of the
proposed learning algorithm we shall require to consider only a finite number of steering policies.
We will show that this can always be done, with negligible effect on the attainable performance.
The analytical basis for this work relies on three elements: stochastic game models, which capture the
Markovian system dynamics while allowing arbitrary variation in some elements of the environment;
the theory of approachability for vector-valued dynamic games, which provides the basis for the
steering approach; and RL algorithms for (scalar) average reward problems. For the sake of brevity,
we do not detail the mathematical models and proofs and concentrate on concepts.
Reinforcement Learning (RL) has emerged in the last decade as a unifying discipline for learning and
adaptive control. Comprehensive overviews may be found in [2, 7]. RL for average reward Markov
Decision Processes (MDPs) was suggested in [13, 10] and later analyzed in [1]. Several methods
exist for average reward RL, including Q-learning [1], the E³ algorithm [8], actor-critic schemes [2]
and more.
The paper is organized as follows: In Section 2 we describe the stochastic game setup, recall approachability theory, and mention a key theorem that allows us to consider only a finite number of
directions for approaching a set. Section 3 describes the proposed multi-criteria RL algorithm and
outlines its convergence proof. We also briefly discuss learning in multi-criteria single controller
environments, as this case is a special case of the more general game model. An illustrative example
is briefly described in Section 4 and concluding remarks are drawn in Section 5.
2
Multi-Criteria Stochastic Games
In this section we present the multi-criteria stochastic game model. We recall some known results
from approachability theory for stochastic games with vector-valued reward, and state a key theorem
which decomposes the problem of approaching a target set into a finite number of scalar control
problems.
We consider a two-person average reward stochastic game model, with a vector-valued reward function. We refer to the players as P1 (the learning agent) and P2 (the arbitrary adversary). The
game is defined by: the state space S; the sets of actions for P1 and P2, respectively, in each
state s, A and B; the state transition kernel, P = (P(s′|s, a, b)); a vector-valued reward function m : S × A × B → ℝ^k. The reward itself is allowed to be random, in which case it is assumed to have a bounded second moment. At each time epoch n ≥ 0, both players observe the current
state sn , and then P1 and P2 simultaneously choose actions an and bn , respectively. As a result
P1 receives the reward vector mn = m(sn , an , bn ) and the next state is determined according to the
transition probability P(·|s_n, a_n, b_n). More generally, we allow the actual reward m_n to be random,
in which case m(sn , an , bn ) denotes its mean and a bounded second moment is assumed. We further
assume that both players observe the previous rewards and actions (however, in some of the learning
algorithms below, the assumption that P1 observes P2's action may be relaxed). A policy π ∈ Π for P1 is a mapping which assigns to each possible observed history a mixed action in Δ(A), namely a probability vector over P1's action set A. A policy σ ∈ Σ for P2 is defined similarly. A policy of
either player is called stationary if the mixed action it prescribes depends only on the current state
s_n. Let $\bar{m}_n$ denote the average reward by time n: $\bar{m}_n = \frac{1}{n}\sum_{t=0}^{n-1} m_t$.
The following recurrence assumption will be imposed. Let state s* denote a specific reference state to which a return is guaranteed. We define the hitting time of state s* as: τ = min{n > 0 : s_n = s*}.
Assumption 1 (Recurrence) There exist a state s* ∈ S and a finite constant N such that
$$E^s_{\pi\sigma}(\tau^2) < N \quad \text{for all } \pi \in \Pi,\ \sigma \in \Sigma \text{ and } s \in S,$$
where $E^s_{\pi\sigma}$ is the expectation operator when starting from state s_0 = s and using policies π and σ for P1 and P2, respectively.
If the game is finite then this assumption is satisfied if state s* is accessible from all other states
under any pair of stationary deterministic policies [14]. We note that the recurrence assumption
may be relaxed in a similar manner to [11].
Let u be a unit vector in the reward space ℝ^k. We often consider the projected game in direction u as the zero-sum stochastic game with the same dynamics as above, and scalar rewards r_n := m_n · u. Here '·' stands for the standard inner product in ℝ^k. Denote this game by Γ_s(u), where s is the initial state. The scalar stochastic game Γ_s(u) has a value, denoted v_s(u), if
$$v_s(u) = \sup_{\pi} \inf_{\sigma} \liminf_{n\to\infty} E^s_{\pi\sigma}(\bar{m}_n \cdot u) = \inf_{\sigma} \sup_{\pi} \limsup_{n\to\infty} E^s_{\pi\sigma}(\bar{m}_n \cdot u).$$
For finite games, the value exists [12]. Furthermore, under Assumption 1 the value is independent of the initial state and can be achieved in stationary policies [6]. We henceforth simply write v(u) for this value.
We next consider the task of approaching a given target set in the reward space, and introduce
approaching policies for the case where the game parameters are fully known to P1. Let T ⊆ ℝ^k denote the target set. In the following, d is the Euclidean distance in ℝ^k, and $P^s_{\pi,\sigma}$ is the probability measure induced by the policies π and σ, with initial state s.
Definition 2.1 The set T ⊆ ℝ^k is approachable (from initial state s) if there exists a T-approaching policy π* of P1 such that $d(\bar{m}_n, T) \to 0$ $P^s_{\pi^*,\sigma}$-a.s., for every σ ∈ Σ at a uniform rate over Σ.
The policy π* in that definition will be called an approaching policy for P1. A set is approachable
if it is approachable from all states. Noting that approaching a set and its closure are the same, we
shall henceforth suppose that the set T is closed.
We recall the basic results from [14] regarding approachability for known stochastic games, which
generalize Blackwell's conditions for repeated matrix games. Let
$$\phi(\pi,\sigma) = \frac{E^{s^*}_{\pi,\sigma}\left(\sum_{t=0}^{\tau-1} m_t\right)}{E^{s^*}_{\pi,\sigma}(\tau)} \qquad (1)$$
denote the average per-cycle reward vector, which is the expected total reward over the cycle that
starts and ends in the reference state, divided by the expected duration of that cycle. For any x ∉ T, denote by C_x a closest point in T to x, and let u_x be the unit vector in the direction of C_x − x, which points from x to the goal set T; see Figure 2 for an illustration.
Theorem 2.1 [14] Let Assumption 1 hold. Assume that for every point x ∉ T there exists a policy π(x) such that:
$$(\phi(\pi(x), \sigma) - C_x) \cdot u_x \ge 0, \quad \forall \sigma \in \Sigma. \qquad (2)$$
Then T is approachable by P1. An approaching policy is: If s_n = s* and $\bar{m}_n \notin T$, play $\pi(\bar{m}_n)$ until the next visit to state s*; otherwise, play arbitrarily.
Figure 2: An illustration of approachability. π(x) brings P1 to the other side of the hyperplane perpendicular to the segment between C_x and x.
Geometrically, the condition in (2) means that P1 can ensure, irrespectively of P2's policy, that the average per-cycle reward will be on the other side (relative to x) of the hyperplane which is perpendicular to the line segment that points from x to C_x. We shall refer to the direction u_x as the steering direction from point x, and to the policy π(x) as the steering policy from x. The approaching policy uses the following rule: between successive visits to the reference state, a fixed (possibly stationary) policy is used. When in the reference state, the current average reward vector $\bar{m}_n$ is inspected. If this vector is not in T, then the steering policy that satisfies (2) with $x = \bar{m}_n$ is selected for the next cycle. Consequently, the average reward is "steered" towards T, and eventually converges to it.
Recalling the definition of the projected game in direction u and its value v(u), the condition in (2) may be equivalently stated as $v(u_x) \ge C_x \cdot u_x$. Furthermore, the policy π(x) can always be chosen as the stationary policy which is optimal for P1 in the game Γ(u_x). In particular, the steering policy π(x) needs to depend only on the corresponding steering direction u_x. It can be shown that for convex target sets, the condition of the last theorem turns out to be both sufficient and necessary.
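In code, the approaching policy of Theorem 2.1 reduces to a short loop; the box-shaped target set, the `run_cycle` interface, and all helper names below are illustrative assumptions of ours, not constructs from the paper:

```python
import numpy as np

def closest_point_in_box(x, lo, hi):
    # Projection C_x of x onto a box-shaped target set T = [lo, hi]^k (toy choice).
    return np.clip(x, lo, hi)

def approach(run_cycle, lo, hi, num_cycles=1000, dim=2):
    """run_cycle(u) plays one cycle (reference state back to reference state)
    with a steering policy for direction u, and returns (sum of reward
    vectors over the cycle, cycle length in steps)."""
    total, steps = np.zeros(dim), 0
    for _ in range(num_cycles):
        m_bar = total / max(steps, 1)          # current average reward vector
        c = closest_point_in_box(m_bar, lo, hi)
        gap = c - m_bar
        if np.linalg.norm(gap) < 1e-9:         # already inside T: play arbitrarily
            u = np.zeros(dim)
        else:
            u = gap / np.linalg.norm(gap)      # steering direction u_x
        r, tau = run_cycle(u)
        total += r
        steps += tau
    return total / steps
```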
Standard approachability results, as outlined above, require considering an infinite number of steering directions whenever the reward is non-scalar. The corresponding set of steering policies may turn out
to be infinite as well. For the purpose of our learning scheme, we shall require an approaching policy
which relies on a finite set of steering directions and policies. The following results show that this can
indeed be done, possibly requiring to slightly expand the target set. In the following, let M be an
upper bound on the magnitude of the expected one-stage reward vector, so that ||m(s, a, b)|| < M for
all (s, a, b) (‖·‖ denotes the Euclidean norm). We say that a set of vectors (u_1, ..., u_J) is an ε-cover of the unit ball if for every vector u in the unit ball there exists a vector u_i such that ‖u_i − u‖ ≤ ε.
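In two dimensions, for example, an ε-cover of the unit circle of directions can be built from equally spaced angles; this constructive sketch (including the deliberately conservative choice of J) is ours:

```python
import numpy as np

def eps_cover_2d(eps):
    # With J = ceil(pi / arcsin(eps/2)), the angular gap to the nearest grid
    # direction is at most pi/J <= arcsin(eps/2), so the chord distance
    # 2*sin(pi/(2J)) is at most eps. (Conservative but always a valid cover.)
    J = int(np.ceil(np.pi / np.arcsin(eps / 2.0)))
    angles = 2 * np.pi * np.arange(J) / J
    return np.stack([np.cos(angles), np.sin(angles)], axis=1)

cover = eps_cover_2d(0.1)
print(len(cover), "directions")  # every unit vector is within 0.1 of some u_i
```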
Theorem 2.2 Let Assumption 1 hold and suppose that the target set T ⊆ ℝ^k satisfies condition (2). Fix ε > 0. Let {u_1, ..., u_J} be an ε/M cover of the unit ball. Suppose that π_i is an optimal strategy in the scalar game Γ(u_i) (1 ≤ i ≤ J). Then the following policy approaches T^ε, the ε-expansion of T: If s_n = s* and $\bar{m}_n \notin T^\varepsilon$, then choose j so that $u_{\bar{m}_n}$ is closest to u_j (in Euclidean norm) and play π_j until the next visit to state s*; otherwise, play arbitrarily.
Proof: (Outline) The basic observation is that if two directions, u and u_i, are close, then v(u) and v(u_i) are close. Consequently, playing a strategy which is optimal in Γ(u_i) results in a play which is almost optimal in Γ(u). Finally we can apply Blackwell's Theorem (2.1) for the ε-expansion of T, by noticing that a "good enough" strategy is played in every direction.
Remark: It follows immediately from the last theorem that the set T itself (rather than its ε-expansion) is approachable with a finite number of steering directions if T^{−ε}, the ε-shrinkage of T, satisfies (2). Equivalently, T is required to satisfy (2) with the 0 on the right-hand side replaced by ε > 0.
3
The Multi-Criteria Reinforcement Learning Algorithm
In this section we introduce and prove the convergence of the MCRL (Multi-Criteria Reinforcement
Learning) algorithm. We consider the controlled Markov model of Section 2, but here we assume that
P1, the learning agent, does not know the model parameters, namely the state transition probabilities
and reward functions. A policy of P1 that does not rely on knowledge of these parameters will be
referred to as a learning policy. P1's task is to approach a given target set T, namely to ensure convergence of the average reward vector to this set irrespective of P2's actions.
The proposed learning algorithm relies on the previous section's construction of approaching policies with a finite number of steering directions. The main idea is to apply a (scalar) learning algorithm for each of the projected games Γ(u_j) corresponding to these directions. Recall that each such game is a standard zero-sum stochastic game with average reward. The required learning algorithm for game Γ(u) should secure an average reward that is not less than the value v(u) of
that game.
Consider a zero-sum stochastic game, with reward function r(s, a, b), average reward $\bar{r}_n$ and value v. Assume for simplicity that the initial state is fixed. We say that a learning policy π of P1 is ε-optimal in this game if, for any policy σ of P2, the average reward satisfies
$$\liminf_{n\to\infty} \bar{r}_n \ge v - \varepsilon \qquad P_{\pi\sigma}\text{-a.s.},$$
where $P_{\pi\sigma}$ is the probability measure induced by the algorithm π, P2's policy σ and the game
dynamics. Note that P1 may be unable to learn a min-max policy as P2 may play an inferior policy
and refrain from playing certain actions, thereby keeping some parts of the game unobserved.
Remark: RL for average reward zero-sum stochastic games can be devised in a similar manner
to average reward Markov decision processes. For example, a Q-learning based algorithm which
combines the ideas of [9] with those of [1] can be devised. An additional assumption that is needed
for the analysis is that all actions of both players are used infinitely often. A different type of a scalar
algorithm that overcomes this problem is [4]. The algorithm there is similar to the E³ algorithm [8]
which is based on explicit exploration-exploitation tradeoff and estimation of the game reward and
transition structure.
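For concreteness, here is a minimal sketch of a Littman-style minimax-Q backup of the kind alluded to above; it uses the discounted formulation for simplicity (the setting in this paper is average reward), and solves the matrix-game value by linear programming with scipy. The data structures (dicts of arrays) are illustrative choices of ours:

```python
import numpy as np
from scipy.optimize import linprog

def matrix_game_value(Q):
    """Value and maximin mixed strategy of the zero-sum matrix game Q[a, b]
    (row player maximizes). Solved as a linear program."""
    A, B = Q.shape
    c = np.zeros(A + 1); c[-1] = -1.0            # variables: p (A entries) and v; minimize -v
    A_ub = np.hstack([-Q.T, np.ones((B, 1))])    # v - sum_a p_a Q[a, b] <= 0 for every b
    b_ub = np.zeros(B)
    A_eq = np.zeros((1, A + 1)); A_eq[0, :A] = 1.0   # probabilities sum to 1
    b_eq = np.array([1.0])
    bounds = [(0, None)] * A + [(None, None)]        # p >= 0, v free
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[-1], res.x[:A]

def minimax_q_update(Q, V, s, a, b, r, s_next, alpha=0.1, gamma=0.95):
    """One backup for an observed transition (s, a, b, r, s').
    Q maps each state to an A x B array; V maps each state to a float."""
    Q[s][a, b] += alpha * (r + gamma * V[s_next] - Q[s][a, b])
    V[s], _ = matrix_game_value(Q[s])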
We now describe the MCRL algorithm that nearly approaches any target set T that satisfies (2).
The parameters of the algorithm are ε and M. ε is the approximation level and M is a known bound on the norm of the expected reward per step. The goal of the algorithm is to approach T^ε, the ε-expansion of T. There are J learning algorithms that are run in parallel, denoted by π_1, ..., π_J. The MCRL is described in Figure 3 and is given here as a meta-algorithm (the scalar RL algorithms π_i are not specified). When arriving at s*, the decision maker checks if the average reward vector is outside the set T^ε. In that case, it switches to an appropriate policy that is intended to "steer" the average reward vector towards the target set. The steering policy (π_j) is chosen according to the closest direction (u_j) to the actual direction needed according to the problem geometry. Recall that each π_j is actually a learning policy with respect to a scalar reward function. In general, when π_j is not played, its learning pauses and the process history during that time is ignored. Note however that some "off-policy" algorithms (such as Q-learning) can learn the optimal policy even while playing a different policy. In that case a more efficient version of the MCRL is suggested, in which learning is performed by all learning policies π_j continuously and concurrently.
0. Let u_1, ..., u_J be an ε/2M cover of the unit ball. Initialize J different ε/2-optimal scalar algorithms, π_1, ..., π_J.
1. If s_0 ≠ s* play arbitrarily until s_n = s*.
2. (s_n = s*) If $\bar{m}_n \in T^\varepsilon$ go to step 1. Else let $i = \arg\min_{1\le i\le J} \|u_i - u_{\bar{m}_n}\|_2$.
3. While s_n ≠ s* play according to π_i; the reward π_i receives is $u_i \cdot m_n$.
4. When s_n = s* go to step 2.
Figure 3: The MCRL algorithm
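The meta-level control flow of Figure 3 can also be sketched as code. The box-shaped target set, the `np.clip` projection, and the small learner interface (`act`/`observe`) are illustrative assumptions of ours; the scalar learners themselves are abstracted away, as in the paper:

```python
import numpy as np

class MCRL:
    def __init__(self, directions, learners, lo, hi, eps):
        self.dirs = directions        # u_1, ..., u_J (an eps/2M cover of directions)
        self.learners = learners      # learners[j]: an eps/2-optimal scalar RL agent
        self.lo, self.hi, self.eps = lo, hi, eps
        self.total = np.zeros(len(lo))
        self.steps = 0
        self.active = None            # index j of the learner currently playing

    def at_reference_state(self):
        m_bar = self.total / max(self.steps, 1)
        c = np.clip(m_bar, self.lo, self.hi)       # closest point of the box target T
        if np.linalg.norm(c - m_bar) <= self.eps:  # inside T^eps: play arbitrarily
            self.active = None
        else:
            u = (c - m_bar) / np.linalg.norm(c - m_bar)
            self.active = int(np.argmin([np.linalg.norm(ui - u) for ui in self.dirs]))

    def act(self, state):
        j = self.active
        return self.learners[j].act(state) if j is not None else 0  # 0: arbitrary play

    def observe(self, state, action, m, next_state):
        self.total += m
        self.steps += 1
        if self.active is not None:
            r = float(np.dot(self.dirs[self.active], m))  # projected scalar reward
            self.learners[self.active].observe(state, action, r, next_state)
```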
Theorem 3.1 Suppose that Assumption 1 holds and the MCRL algorithm is used with ε-optimal scalar learning algorithms. If the target set T satisfies (2), then T^ε is approached using MCRL.
Proof: (Outline) If a direction is played infinitely often, then eventually the learned strategy in
this direction is nearly optimal. If a direction is not played infinitely often it has a negligible effect
on the long-term average reward vector. Since the learning algorithms are nearly optimal, any
policy π_j that is played infinitely often eventually attains a (scalar) average reward of v(u_j) − ε/2. One can apply Theorem 2.2 for the set T^{ε/2} to verify that the overall policy is an approaching policy
for the target set.
Note that for convex target sets the algorithm is consistent in the sense that if the set is approachable
then the algorithm attains it.
Remark: Multi-criteria Markov Decision Process (MDP) models may be regarded as a special case
of the stochastic game model that was considered so far, with P2 eliminated from the problem. The
MCRL meta-algorithm of the previous section remains the same for MDPs. Its constituent scalar
learning algorithms are now learning algorithms for the optimal policies in average-reward MDPs. These are generally simpler than for the game problem. Examples of optimal or ε-optimal algorithms
are Q-Learning with persistent exploration [2], Actor-critic schemes [2], an appropriate version of
the E³ algorithm [8] and others. In the absence of an adversary, the problem of approaching a set
becomes much simpler. Moreover, it can be shown that if a set is approachable then it may be
approached using a stationary (possibly randomized) policy. Indeed, any point in feasible set of
state-action frequencies may be achieved by such a stationary policy [5]. Thus, alternative learning
schemes may be applicable to this problem. Another observation is that all steering policies learned
and used within the MCRL may now be deterministic stationary policies, which simplifies the
implementation of this algorithm.
4
Example
Recall the humidity-temperature example from the introduction. Suppose that the system is modelled in such a way that P1 chooses a temperature-humidity curve. Then Nature (modelled as P2)
chooses the exact location on the temperature-humidity curve. In Figure 4(a) we show three different temperature-humidity curves that can be determined by P1 (each defined by a certain strategy of P1: f_0, f_1, f_2). We implemented the MCRL algorithm with nine directions. In each direction a version of Littman's Q-learning ([9]), adapted for average cost games, was used. A sample path of the average reward generated by the MCRL algorithm is shown in Figure 4(b). The sample path started at 'S' and finished at 'E'. For this specific run, an even smaller number of directions would
have sufficed (up and right). It can be seen that the learning algorithm pushes towards the target set
so that the path is mostly on the edge of the target set. Note that in this example a small number
of directions was quite enough for approaching the target set.
[Figure 4 here: panel (a), "Problem dynamics for different strategies", shows curves f_0, f_1, f_2 in the Temperature-Humidity plane; panel (b), "A sample path of average reward", shows a path from 'S' to 'E'.]
Figure 4: (a) Greenhouse problem dynamics. (b) A sample path from 'S' to 'E'.
5
Conclusion
We have presented a learning algorithm that approaches a prescribed target set in multi-dimensional
performance space, provided this set satisfies a certain sufficient condition. Our approach essentially
relies on the theory of approachability for stochastic games, which is based on the idea of steering
the average reward vector towards the target set. We provided a key result stating that a set can
be approached to a given precision using only a finite number of steering policies, which may be
learned on-line.
An interesting observation regarding the proposed learning algorithm is that the learned optimal
policies in each direction are essentially independent of the target set T. Thus, the target set need
not be fixed in advance and may be modified on-line without requiring a new learning process. This
may be especially useful for constrained MDPs.
Of further interest is the question of reduction of the number of steering directions used in the
algorithm. In some cases, especially when the requirements embodied by the target set T are not
stringent, this number may be quite small compared to the worst-case estimate used above. A
possible refinement of the algorithm is to eliminate directions that are not required.
The scaling of the algorithm with the dimension of the reward space is exponential. The problem
is that as the dimension increases, exponentially many directions are needed to cover the unit ball.
While in general this is necessary, it might happen that considerably fewer directions are needed. Conditions and algorithms that use much fewer than an exponential number of directions are under
current study.
Acknowledgement
This research was supported by the fund for the promotion of research at the Technion.
References
[1] J. Abounadi, D. Bertsekas, and V. Borkar. Learning algorithms for Markov decision processes with average cost. LIDS-P 2434, Lab. for Info. and Decision Systems, MIT, October 1998.
[2] A.G. Barto and R.S. Sutton. Reinforcement Learning. MIT Press, 1998.
[3] D. Blackwell. An analog of the minimax theorem for vector payoffs. Pacific J. Math., 6(1):1-8, 1956.
[4] R.I. Brafman and M. Tennenholtz. A near optimal polynomial time algorithm for learning in certain classes of stochastic games. Artificial Intelligence, 121(1-2):31-47, April 2000.
[5] C. Derman. Finite State Markovian Decision Processes. Academic Press, 1970.
[6] J. Filar and K. Vrieze. Competitive Markov Decision Processes. Springer Verlag, 1996.
[7] L.P. Kaelbling, M. Littman, and A.W. Moore. Reinforcement learning - a survey. Journal of Artificial Intelligence Research, (4):237-285, May 1996.
[8] M. Kearns and S. Singh. Near-optimal reinforcement learning in polynomial time. In Proc. of the 15th Int. Conf. on Machine Learning, pages 260-268. Morgan Kaufmann, 1998.
[9] M.L. Littman. Markov games as a framework for multi-agent reinforcement learning. In Morgan Kaufman, editor, Eleventh International Conference on Machine Learning, pages 157-163, 1994.
[10] S. Mahadevan. Average reward reinforcement learning: Foundations, algorithms, and empirical results. Machine Learning, 22(1):159-196, 1996.
[11] S. Mannor and N. Shimkin. The empirical Bayes envelope approach to regret minimization in stochastic games. Technical report EE-1262, Faculty of Electrical Engineering, Technion, Israel, October 2000.
[12] J.F. Mertens and A. Neyman. Stochastic games. International Journal of Game Theory, 10(2):53-66, 1981.
[13] A. Schwartz. A reinforcement learning method for maximizing undiscounted rewards. In Proceedings of the Tenth International Conference on Machine Learning, pages 298-305. Morgan Kaufmann, 1993.
[14] N. Shimkin and A. Shwartz. Guaranteed performance regions in Markovian systems with competing decision makers. IEEE Trans. on Automatic Control, 38(1):84-95, January 1993.
1,080 | 1,987 | Gaussian Process Regression with
Mismatched Models
Peter Sollich
Department of Mathematics, King's College London
Strand, London WC2R 2LS, U.K. Email peter.sollich@kcl.ac.uk
Abstract
Learning curves for Gaussian process regression are well understood
when the 'student' model happens to match the 'teacher' (true data
generation process). I derive approximations to the learning curves
for the more generic case of mismatched models, and find very rich
behaviour: For large input space dimensionality, where the results
become exact, there are universal (student-independent) plateaux
in the learning curve, with transitions in between that can exhibit
arbitrarily many over-fitting maxima; over-fitting can occur even
if the student estimates the teacher noise level correctly. In lower
dimensions, plateaux also appear, and the learning curve remains
dependent on the mismatch between student and teacher even in
the asymptotic limit of a large number of training examples. Learning with excessively strong smoothness assumptions can be particularly dangerous: For example, a student with a standard radial
basis function covariance function will learn a rougher teacher function only logarithmically slowly. All predictions are confirmed by
simulations.
1
Introduction
There has in the last few years been a good deal of excitement about the use
of Gaussian processes (GPs) as an alternative to feedforward networks [1]. GPs
make prior assumptions about the problem to be learned very transparent, and
even though they are non-parametric models, inference (at least in the case of regression considered below) is relatively straightforward. One crucial question
for applications is then how 'fast' GPs learn, i.e. how many training examples are
needed to achieve a certain level of generalization performance. The typical (as
opposed to worst case) behaviour is captured in the learning curve, which gives the average generalization error ε as a function of the number of training examples n. Good bounds and approximations for ε(n) are now available [1, 2, 3, 4, 5], but these
are mostly restricted to the case where the 'student' model exactly matches the true
'teacher' generating the data¹. In practice, such a match is unlikely, and so it is
¹The exception is the elegant work of Malzahn and Opper [2], which uses a statistical physics framework to derive approximate learning curves that also apply for any fixed target function. However, this framework has not yet to my knowledge been exploited to consider systematically the effects of having a mismatch between the teacher prior over target functions and the prior assumed by the student.
important to understand how GPs learn if there is some model mismatch. This is
the aim of this paper.
In its simplest form, the regression problem is this: We are trying to learn a function θ* which maps inputs x (real-valued vectors) to (real-valued scalar) outputs θ*(x). We are given a set of training data D, consisting of n input-output pairs (x^l, y^l); the training outputs y^l may differ from the 'clean' teacher outputs θ*(x^l) due to corruption by noise. Given a test input x, we are then asked to come up with a prediction θ̂(x), plus error bar, for the corresponding output θ*(x). In a Bayesian setting, we do this by specifying a prior P(θ) over hypothesis functions, and a likelihood P(D|θ) with which each θ could have generated the training data; from this we deduce the posterior distribution P(θ|D) ∝ P(D|θ)P(θ). For a GP, the prior is defined directly over input-output functions θ; this is simpler than for a Bayesian feedforward net since no weights are involved which would have to be integrated out. Any θ is uniquely determined by its output values θ(x) for all x from the input domain, and for a GP, these are assumed to have a joint Gaussian distribution (hence the name). If we set the means to zero as is commonly done, this distribution is fully specified by the covariance function ⟨θ(x)θ(x′)⟩_θ = C(x, x′). The latter transparently encodes prior assumptions about the function to be learned. Smoothness, for example, is controlled by the behaviour of C(x, x′) for x′ → x: The Ornstein-Uhlenbeck (OU) covariance function C(x, x′) = exp(−|x − x′|/l) produces very rough (non-differentiable) functions, while functions sampled from the radial basis function (RBF) prior with C(x, x′) = exp[−|x − x′|²/(2l²)] are infinitely differentiable. Here l is a lengthscale parameter, corresponding directly to the distance in input space over which we expect significant variation in the function values.
There are good reviews on how inference with GPs works [1, 6], so I only give a brief summary here. The student assumes that outputs y are generated from the 'clean' values of a hypothesis function θ(x) by adding Gaussian noise of x-independent variance σ². The joint distribution of a set of training outputs {y^l} and the function values θ(x) is then also Gaussian, with covariances given (under the student model) by
$$\langle y^l y^m \rangle = C(x^l, x^m) + \sigma^2 \delta_{lm} = (K)_{lm}, \qquad \langle y^l \theta(x) \rangle = C(x^l, x) = (k(x))_l$$
Here I have defined an n × n matrix K and an x-dependent n-component vector k(x). The posterior distribution P(θ|D) is then obtained by conditioning on the {y^l}; it is again Gaussian and has mean and variance
$$\langle \theta(x) \rangle_{\theta|D} \equiv \hat\theta(x|D) = k(x)^T K^{-1} y \qquad (1)$$
$$\langle (\theta(x) - \hat\theta(x|D))^2 \rangle_{\theta|D} = C(x, x) - k(x)^T K^{-1} k(x) \qquad (2)$$
From the student's point of view, this solves the inference problem: The best prediction for θ(x) on the basis of the data D is θ̂(x|D), with a (squared) error bar given by (2). The squared deviation between the prediction and the teacher is [θ̂(x|D) − θ*(x)]²; the average generalization error (which, as a function of n, defines the learning curve) is obtained by averaging this over the posterior distribution of teachers, all datasets, and the test input x:
$$\epsilon = \langle\langle\langle [\hat\theta(x|D) - \theta_*(x)]^2 \rangle_{\theta_*|D} \rangle_D \rangle_x \qquad (3)$$
Now of course the student does not know the true posterior of the teacher; to estimate ε, she must assume that it is identical to the student posterior, giving from (2)
$$\hat\epsilon = \langle\langle\langle [\hat\theta(x|D) - \theta(x)]^2 \rangle_{\theta|D} \rangle_D \rangle_x = \langle\langle C(x,x) - k(x)^T K^{-1} k(x) \rangle_{\{x^l\}} \rangle_x \qquad (4)$$
where in the last expression I have replaced the average over D by one over the training inputs since the outputs no longer appear. If the student model matches the true teacher model, ε and ε̂ coincide and give the Bayes error, i.e. the best achievable (average) generalization performance for the given teacher.
I assume in what follows that the teacher is also a GP, but with a possibly different covariance function C_*(x, x′) and noise level σ_*². This allows eq. (3) for ε to be simplified, since by exact analogy with the argument for the student posterior
$$\langle \theta_*(x) \rangle_{\theta_*|D} = k_*(x)^T K_*^{-1} y, \qquad \langle \theta_*^2(x) \rangle_{\theta_*|D} = \langle \theta_*(x) \rangle^2_{\theta_*|D} + C_*(x, x) - k_*(x)^T K_*^{-1} k_*(x)$$
and thus, abbreviating a(x) = K^{-1} k(x) − K_*^{-1} k_*(x),
$$\epsilon = \langle\langle a(x)^T y y^T a(x) + C_*(x,x) - k_*(x)^T K_*^{-1} k_*(x) \rangle_D \rangle_x$$
Conditional on the training inputs, the training outputs have a Gaussian distribution given by the true (teacher) model; hence $\langle y y^T \rangle_{\{y^l\}|\{x^l\}} = K_*$, giving
$$\epsilon = \langle\langle C_*(x,x) - 2 k_*(x)^T K^{-1} k(x) + k(x)^T K^{-1} K_* K^{-1} k(x) \rangle_{\{x^l\}} \rangle_x \qquad (5)$$
2
Calculating the learning curves
An exact calculation of the learning curve ε(n) is difficult because of the joint average in (5) over the training inputs {x^l} and the test input x. A more convenient starting point is obtained if (using Mercer's theorem) we decompose the covariance function into its eigenfunctions φ_i(x) and eigenvalues λ_i, defined w.r.t. the input distribution so that ⟨C(x, x′)φ_i(x′)⟩_{x′} = λ_i φ_i(x) with the corresponding normalization ⟨φ_i(x)φ_j(x)⟩_x = δ_{ij}. Then
$$C(x, x') = \sum_{i=1}^{\infty} \lambda_i \phi_i(x) \phi_i(x'), \qquad C_*(x, x') = \sum_{i=1}^{\infty} \lambda_i^* \phi_i(x) \phi_i(x') \qquad (6)$$
For simplicity I assume here that the student and teacher covariance functions have the same eigenfunctions (but different eigenvalues). This is not as restrictive as it may seem; several examples are given below. The averages over the test input x in (5) are now easily carried out: E.g. for the last term we need
$$\langle (k(x) k(x)^T)_{lm} \rangle_x = \sum_{ij} \lambda_i \lambda_j \phi_i(x^l) \langle \phi_i(x) \phi_j(x) \rangle_x \phi_j(x^m) = \sum_i \lambda_i^2 \phi_i(x^l) \phi_i(x^m)$$
Introducing the diagonal eigenvalue matrix (Λ)_{ij} = λ_i δ_{ij} and the 'design matrix' (Φ)_{li} = φ_i(x^l), this reads ⟨k(x)k(x)^T⟩_x = ΦΛ²Φ^T. Similarly, for the second term in (5), ⟨k(x)k_*(x)^T⟩_x = ΦΛΛ_*Φ^T, and ⟨C_*(x,x)⟩_x = tr Λ_*. This gives, dropping the training inputs subscript from the remaining average,
$$\epsilon = \langle \mathrm{tr}\, \Lambda_* - 2\, \mathrm{tr}\, \Phi\Lambda\Lambda_*\Phi^T K^{-1} + \mathrm{tr}\, \Phi\Lambda^2\Phi^T K^{-1} K_* K^{-1} \rangle$$
In this new representation we also have K = σ²I + ΦΛΦ^T and similarly for K_*; for the inverse of K we can use the Woodbury formula to write K^{-1} = σ^{-2}[I − σ^{-2}ΦGΦ^T], where G = (Λ^{-1} + σ^{-2}Φ^TΦ)^{-1}. Inserting these results, one finds after some algebra that
$$\epsilon = \sigma_*^2 \sigma^{-2} \left[\langle \mathrm{tr}\, G \rangle - \langle \mathrm{tr}\, G\Lambda^{-1}G \rangle\right] + \langle \mathrm{tr}\, G\Lambda_*\Lambda^{-2}G \rangle \qquad (7)$$
which for the matched case reduces to the known result for the Bayes error [4]
$$\hat\epsilon = \langle \mathrm{tr}\, G \rangle \qquad (8)$$
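The Woodbury step used above is easy to sanity-check numerically; this small test of $K^{-1} = \sigma^{-2}[I - \sigma^{-2}\Phi G \Phi^T]$ is ours, with random toy dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, sigma2 = 8, 5, 0.3
Phi = rng.standard_normal((n, p))
lam = rng.uniform(0.5, 2.0, p)                  # eigenvalues Lambda (diagonal)
K = sigma2 * np.eye(n) + Phi @ np.diag(lam) @ Phi.T
G = np.linalg.inv(np.diag(1 / lam) + Phi.T @ Phi / sigma2)
K_inv = (np.eye(n) - Phi @ G @ Phi.T / sigma2) / sigma2
print(np.allclose(K_inv, np.linalg.inv(K)))    # True
```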
Eqs. (7,8) are still exact. We now need to tackle the remaining averages over training inputs. Two of these are of the form ⟨tr GMG⟩; if we generalize the definition of G to G = (Λ^{-1} + vI + wM + σ^{-2}Φ^TΦ)^{-1} and define g = ⟨tr G⟩, then they reduce to ⟨tr GMG⟩ = −∂g/∂w. (The derivative is taken at v = w = 0; the idea behind introducing v will become clear shortly.) So it is sufficient to calculate g. To do this, consider how G changes when a new example is added to the training set. One has
$$G(n+1) - G(n) = \left[G^{-1}(n) + \sigma^{-2}\psi\psi^T\right]^{-1} - G(n) = -\frac{G(n)\psi\psi^T G(n)}{\sigma^2 + \psi^T G(n)\psi} \qquad (9)$$
in terms of the vector ψ with elements (ψ)_i = φ_i(x^{n+1}), using again the Woodbury formula. To obtain the change in g we need the average of (9) over both the new training input x^{n+1} and all previous ones. This cannot be done exactly, but we can approximate by averaging numerator and denominator separately; taking the trace then gives g(n+1) − g(n) = −⟨tr G²(n)⟩/[σ² + g(n)]. Now, using our auxiliary parameter v, −⟨tr G²⟩ = ∂g/∂v; if we also approximate n as continuous, we get the simple partial differential equation ∂g/∂n = (∂g/∂v)/(σ² + g) with the initial condition g|_{n=0} = tr(Λ^{-1} + vI + wM)^{-1}. Solving this using the method of characteristics [7] gives a self-consistent equation for g,
$$g = \mathrm{tr}\left[\Lambda^{-1} + \left(v + \frac{n}{\sigma^2 + g}\right)I + wM\right]^{-1} \qquad (10)$$
The Bayes error (8) is ε̂ = g|_{v=w=0} and therefore obeys
$$\hat\epsilon = \mathrm{tr}\, G, \qquad G^{-1} = \Lambda^{-1} + \frac{n}{\sigma^2 + \hat\epsilon}\, I \qquad (11)$$
within our approximation (called 'LC' in [4]). To obtain ε, we differentiate both sides of (10) w.r.t. w, set v = w = 0 and rearrange to give
$$\langle \mathrm{tr}\, GMG \rangle = -\partial g/\partial w = (\mathrm{tr}\, MG^2)\big/\left[1 - n\,(\mathrm{tr}\, G^2)/(\sigma^2 + \hat\epsilon)^2\right]$$
Using this result in (7), with M = Λ^{-1} and M = Λ^{-1}Λ_*Λ^{-1}, we find after some further simplifications the final (approximate) result for the learning curve:
$$\epsilon = \hat\epsilon\; \frac{\sigma_*^2\, \mathrm{tr}\, G^2 + n^{-1}(\sigma^2 + \hat\epsilon)^2\, \mathrm{tr}\, \Lambda_*\Lambda^{-2}G^2}{\sigma^2\, \mathrm{tr}\, G^2 + n^{-1}(\sigma^2 + \hat\epsilon)^2\, \mathrm{tr}\, \Lambda^{-1}G^2} \qquad (12)$$
which transparently shows how in the matched case ε and ε̂ become identical.
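Numerically, (11) can be solved for ε̂ by fixed-point iteration and the result substituted into (12); the finite truncation of the spectrum and the example spectra below are our own illustrative choices:

```python
import numpy as np

def learning_curve(lam, lam_star, sigma2, sigma2_star, n, iters=200):
    """Approximate eps_hat from (11) by fixed-point iteration, then eps from (12).
    lam, lam_star: student/teacher eigenvalues (truncated to a finite set)."""
    eps_hat = np.sum(lam)                    # n = 0 initialization: eps_hat = tr Lambda
    for _ in range(iters):
        g = 1.0 / (1.0 / lam + n / (sigma2 + eps_hat))   # eigenvalues of G
        eps_hat = np.sum(g)                              # eq. (11)
    tr_G2 = np.sum(g ** 2)
    num = sigma2_star * tr_G2 + (sigma2 + eps_hat) ** 2 / n * np.sum(lam_star / lam ** 2 * g ** 2)
    den = sigma2 * tr_G2 + (sigma2 + eps_hat) ** 2 / n * np.sum(g ** 2 / lam)
    return eps_hat * num / den                           # eq. (12)

lam = 1.0 / np.arange(1, 200) ** 2        # OU-like student spectrum, q^-2
lam_star = 1.0 / np.arange(1, 200) ** 4   # smoother MB2-like teacher spectrum, q^-4
print(learning_curve(lam, lam_star, 0.1, 0.1, n=50))
```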
3
Examples
I now apply the result for the learning curve (11,12) to some exemplary learning scenarios. First, consider inputs x which are binary vectors² with d components x_a ∈ {−1, 1}, and assume that the input distribution is uniform. We consider covariance functions for student and teacher which depend on the product x·x′ only; this includes the standard choices (e.g. OU and RBF) which depend on the Euclidean distance |x − x′|, since |x − x′|² = 2d − 2x·x′. All these have the same eigenfunctions [9], so our above assumption is satisfied. The eigenfunctions are indexed by subsets p of {1, 2, ..., d} and given explicitly by $\phi_p(x) = \prod_{a \in p} x_a$. The
²This scenario may seem strange, but simplifies the determination of the eigenfunctions and eigenvalues. For large d, one expects other distributions with continuously varying x and the same first- and second-order statistics (⟨x_a⟩ = 0, ⟨x_a x_b⟩ = δ_{ab}) to give similar results [8].
corresponding eigenvalues depend only on the size s = |p| of the subsets and are therefore $\binom{d}{s}$-fold degenerate; letting e = (1, 1, ..., 1) be the 'all ones' input vector, they have the values λ_s = ⟨C(x, e)φ_p(x)⟩_x (which can easily be evaluated as an average over two binomially distributed variables, counting the number of +1's in x overall and among the x_a with a ∈ p). With the λ_s and λ_s* determined, it is then
a simple matter to evaluate the predicted learning curve (11,12) numerically. First, though, focus on the limit of large d, where much more can be said. If we write C(x, x′) = f(x·x′/d), the eigenvalues become, for d → ∞, λ_s = d^{-s} f^{(s)}(0) and the contribution to C(x,x) = f(1) from the s-th eigenvalue block is $\Lambda_s \equiv \binom{d}{s}\lambda_s \to f^{(s)}(0)/s!$, consistent with $f(1) = \sum_{s=0}^{\infty} f^{(s)}(0)/s!$. The λ_s, because of their scaling with d, become infinitely separated for d → ∞. For training sets of size n = O(d^L), we then see from (11) that eigenvalues with s > L contribute as if n = 0, since λ_s^{-1} ≫ n/(σ² + ε̂); they have effectively not yet been learned. On the other hand, eigenvalues with s < L are completely suppressed and have been learnt perfectly. We thus have a hierarchical learning scenario, where different scalings of n with d (as defined by L) correspond to different 'learning stages'. Formally, we can analyse the stages separately by letting d → ∞ at a constant ratio $\alpha = n/\binom{d}{L}$ of the number of examples to the number of parameters to be learned at stage L (note $\binom{d}{L} = O(d^L)$ for large d). An independent (replica) calculation along the lines of Ref. [8] shows that our approximation for the learning curve actually becomes exact in this limit. The resulting α-dependence of ε can be determined explicitly: Set $f_L = \sum_{s \ge L} \Lambda_s$ (so that f_0 = f(1)) and similarly for f_L^*. Then for large α,
$$\epsilon = f^*_{L+1} + (f^*_{L+1} + \sigma_*^2)\,\alpha^{-1} + O(\alpha^{-2}) \qquad (13)$$
This implies that, during successive learning stages, (teacher) eigenvalues are learnt one by one and their contribution eliminated from the generalization error, giving plateaux in the learning curve at ε = f^*_1, f^*_2, .... These plateaux, as well as the asymptotic decay (13) towards them, are universal [8], i.e. student-independent.
The (non-universal) behaviour for smaller α can also be fully characterized: Consider first the simple case of linear perceptron learning (see e.g. [7]), which corresponds to both student and teacher having simple dot-product covariance functions C(x, x′) = C_*(x, x′) = x·x′/d. In this case there is only a single learning stage (only Λ_1 = Λ_1^* = 1 are nonzero), and ε = r(α) decays from r(0) = 1 to r(∞) = 0, with an over-fitting maximum around α = 1 if σ² is sufficiently small compared to σ_*². In terms of this function r(α), the learning curve at stage L for general covariance functions is then exactly given by ε = f^*_{L+1} + Λ^*_L r(α) if in the evaluation of r(α) the effective noise levels σ̃² = (f_{L+1} + σ²)/Λ_L and σ̃_*² = (f^*_{L+1} + σ_*²)/Λ^*_L are used. Note how in σ̃_*², the contribution f^*_{L+1} from the not-yet-learned eigenvalues acts as effective noise, and is normalized by the amount of 'signal' Λ^*_L = f^*_L − f^*_{L+1} available at learning stage L. The analogous definition of σ̃² implies that, for small σ² and depending on the choice of student covariance function, there can be arbitrarily many learning stages L where σ̃² ≪ σ̃_*², and therefore arbitrarily many over-fitting maxima in the resulting learning curves. From the definitions of σ̃² and σ̃_*² it is clear that this situation can occur even if the student knows the exact teacher noise level, i.e. even if σ² = σ_*².
Fig. 1 (left) demonstrates that the above conclusions hold not just for d → ∞; even for the cases shown, with d = 10, up to three over-fitting maxima are apparent. Our theory provides a very good description of the numerically simulated learning curves even though, at such small d, the predictions are still significantly different from those for d → ∞ (see Fig. 1 (right)) and therefore not guaranteed to be exact.
In the second example scenario, I consider continuous-valued input vectors, uni-
[Figure 1 here: two panels of learning curves, ε versus n (left) and ε versus α (right, for learning stages L = 1, 2, 3).]
Figure 1: Left: Learning curves for RBF student and teacher, with uniformly distributed, binary input vectors with d = 10 components. Noise levels: Teacher σ_*² = 1, student σ² = 10^{-4}, 10^{-3}, ..., 1 (top to bottom). Length scales: Teacher l_* = d^{1/2}, student l = 2d^{1/2}. Dashed: numerical simulations, solid: theoretical prediction. Right: Learning curves for σ² = 10^{-4} and increasing d (top to bottom: 10, 20, 30, 40, 60, 80, [bold] ∞). The x-axis shows $\alpha = n/\binom{d}{L}$, for learning stages L = 1, 2, 3; the dashed lines are the universal asymptotes (13) for d → ∞.
formly distributed over the unit interval [0,1]; generalization to d dimensions (x ∈ [0,1]^d) is straightforward. For covariance functions which are stationary, i.e. dependent on x and x′ only through x − x′, and assuming periodic boundary conditions (see [4] for details), one then again has covariance function-independent eigenfunctions. They are indexed by integers³ q, with φ_q(x) = e^{2πiqx}; the corresponding eigenvalues are $\lambda_q = \int dx\, C(0, x)\, e^{-2\pi i q x}$. For the ('periodified') RBF covariance function C(x, x′) = exp[−(x − x′)²/(2l²)], for example, one has λ_q ∝ exp(−q̃²/2), where q̃ = 2πlq. The OU case C(x, x′) = exp(−|x − x′|/l), on the other hand, gives λ_q ∝ (1 + q̃²)^{-1}, thus λ_q ∝ q^{-2} for large q. I also consider below covariance functions which interpolate in smoothness between the OU and RBF limits: E.g. the MB2 (modified Bessel) covariance C(x, x′) = e^{−a}(1 + a), with a = |x − x′|/l, yields functions which are once differentiable [5]; its eigenvalues λ_q ∝ (1 + q̃²)^{-2} show a faster asymptotic power law decay, λ_q ∝ q^{-4}, than those of the OU covariance function. To subsume all these cases I assume in the following analysis of the general shape of the learning curves that λ_q ∝ q^{-r} (and similarly λ_q^* ∝ q^{-r_*}). Here r = 2 for OU, r = 4 for MB2, and (due to the faster-than-power-law decay of its eigenvalues) effectively r = ∞ for RBF.
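The eigenvalues λ_q are just the Fourier coefficients of the periodified covariance, which is easy to check numerically; the grid discretization below is an illustrative approximation of ours:

```python
import numpy as np

def eigenvalues(cov, n_grid=2048, n_q=6):
    """Fourier coefficients lambda_q = integral of C(0, x) e^{-2 pi i q x} over [0, 1],
    approximated on a uniform grid; cov takes the (periodified) distance |x - x'|."""
    x = np.arange(n_grid) / n_grid
    c = cov(np.minimum(x, 1 - x))       # distance on the circle of circumference 1
    lam = np.fft.fft(c).real / n_grid   # real by the even symmetry of c
    return lam[:n_q]

ell = 0.1
print("OU :", eigenvalues(lambda d: np.exp(-d / ell)))
print("RBF:", eigenvalues(lambda d: np.exp(-d**2 / (2 * ell**2))))
```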
From (11,12), it is clear that the n-dependence of the Bayes error ε̂ has a strong effect on the true generalization error ε. From previous work [4], we know that ε̂(n) has two regimes: For small n, where ε̂ ≫ σ², ε̂ is dominated by regions in input space which are too far from the training examples to have significant correlation with them, and one finds ε̂ ∝ n^{−(r−1)}. For much larger n, learning is essentially against noise, and one has a slower decay ε̂ ∝ (n/σ²)^{−(r−1)/r}. These power laws can be derived from (11) by approximating factors such as [λ_q^{−1} + n/(σ² + ε̂)]^{−1} as equal to either λ_q or to 0, depending on whether n/(σ² + ε̂) < or > λ_q^{−1}. With the same technique, one can estimate the behaviour of ε from (12). In the small-n regime, one finds ε ≈ c₁σ_*² + c₂ n^{−(r_*−1)}, with prefactors c₁, c₂ depending on the student. Note
³Since λ_q = λ_{−q}, one can assume q ≥ 0 if all λ_q for q > 0 are taken as doubly degenerate.
[Figure 2 here: two log-log panels of learning curves, ε versus n.]
Figure 2: Learning curves for inputs x uniformly distributed over [0,1]. Teacher:
MB2 covariance function, lengthscale l_* = 0.1, noise level σ_*² = 0.1; student lengthscale l = 0.1 throughout. Dashed: simulations, solid: theory. Left: OU student with σ² as shown. The predicted plateau appears as σ² decreases. Right: Students with σ² = 0.1 and covariance function as shown; for clarity, the RBF and OU results have been multiplied by √10 and 10, respectively. Dash-dotted lines show the predicted asymptotic power laws for MB2 and OU; the RBF data have a persistent upward curvature consistent with the predicted logarithmic decay. Inset: RBF student with σ² = 10^{-3}, showing the occurrence of over-fitting maxima.
that the contribution proportional to σ_*² is automatically negligible in the matched case (since then ε = ε̂ ≫ σ² = σ_*² for small n); if there is a model mismatch, however, and if the small-n regime extends far enough, it will become significant. This is the case for small σ²; indeed, for σ² → 0, the 'small n' criterion ε̂ ≫ σ² is satisfied for any n. Our theory thus predicts the appearance of plateaux in the learning curves, becoming more pronounced as σ² decreases; Fig. 2 (left) confirms this⁴. Numerical evaluation also shows that for small σ², over-fitting maxima may occur before the plateau is reached, consistent with simulations; see inset in Fig. 2 (right). In the large-n regime (ε̂ ≪ σ²), our theory predicts that the generalization error decays as a power law. If the student assumes a rougher function than the teacher provides (r < r_*), the asymptotic power law exponent ε ∝ n^{−(r−1)/r} is determined by the student alone. In the converse case, the asymptotic decay is ε ∝ n^{−(r_*−1)/r} and can be very slow, actually becoming logarithmic for an RBF student (r → ∞). For r = r_*, the fastest decay for given r_* is obtained, as expected from the properties of the Bayes error. The simulation data in Fig. 2 are compatible with these predictions (though the simulations cover too small a range of n to allow exponents to be determined precisely). It should be stressed that the above results imply that there is no asymptotic regime of large training sets in which the learning curve assumes a universal form, in contrast to the case of parametric models where the generalization error decays as ε ∝ 1/n for sufficiently large n independently of model mismatch (as long as the problem is learnable at all). This conclusion may seem counterintuitive, but becomes clear if one remembers that a GP covariance function with an infinite number of nonzero eigenvalues λ_i always has arbitrarily many eigenvalues
⁴If σ² = 0 exactly, the plateau will extend to n → ∞. With hindsight, this is clear: a GP with an infinite number of nonzero eigenvalues has no limit on the number of its 'degrees of freedom' and can fit perfectly any amount of noisy training data, without ever learning the true teacher function.
that are arbitrarily close to zero (since the λ_i are positive and $\sum_i \lambda_i = \langle C(x,x) \rangle$ is finite). Whatever n, there are therefore many eigenvalues for which λ_i^{−1} ≫ n/σ², corresponding to degrees of freedom which are still mainly determined by the prior rather than the data (compare (11)). In other words, a regime where the data completely overwhelms the mismatched prior (and where the learning curve could therefore become independent of model mismatch) can never be reached.
In summary, the above approximate theory makes a number of non-trivial predictions for GP learning with mismatched models, all borne out by simulations: for
large input space dimensions, the occurrence of multiple over-fitting maxima; in
lower dimensions, the generic presence of plateaux in the learning curve if the student assumes too small a noise level u 2 , and strong effects of model mismatch on the
asymptotic learning curve decay. The behaviour is much richer than for the matched
case, and could guide the choice of (student) priors in real-world applications of GP
regression; RBF students, for example, run the risk of very slow logarithmic decay
of the learning curve if the target (teacher) is less smooth than assumed.
An important issue for future work (some of which is in progress) is to analyse to
which extent hyperparameter tuning (e.g. via evidence maximization) can make GP
learning robust against some forms of model mismatch, e.g. a misspecified functional
form of the covariance function. One would like to know, for example, whether a
data-dependent adjustment of the lengthscale of an RBF covariance function would
be sufficient to avoid the logarithmically slow learning of rough target functions.
References
[1] See e.g. D J C MacKay, Gaussian Processes, Tutorial at NIPS 10; recent papers by Csató et al. (NIPS 12), Goldberg/Williams/Bishop (NIPS 10), Williams and Barber/Williams (NIPS 9), Williams/Rasmussen (NIPS 8); and references below.
[2] D Malzahn and M Opper. In NIPS 13, pages 273-279; also in NIPS 14.
[3] C A Micchelli and G Wahba. In Z Ziegler, editor, Approximation Theory and Applications, pages 329-348. Academic Press, 1981; M Opper. In Kwok-Yee et al., editors, Theoretical Aspects of Neural Computation, pages 17-23. Springer, 1997.
[4] P Sollich. In NIPS 11, pages 344-350.
[5] C K I Williams and F Vivarelli. Mach. Learn., 40:77-102, 2000.
[6] C K I Williams. In M I Jordan, editor, Learning and Inference in Graphical Models, pages 599-621. Kluwer Academic, 1998.
[7] P Sollich. J. Phys. A, 27:7771-7784, 1994.
[8] M Opper and R Urbanczik. Phys. Rev. Lett., 86:4410-4413, 2001.
[9] R Dietrich, M Opper, and H Sompolinsky. Phys. Rev. Lett., 82:2975-2978, 1999.
1,081 | 1,988 | Neural Implementation of Bayesian
Inference in Population Codes
Si Wu
Computer Science Department
Sheffield University, UK
Shun-ichi Amari
Lab. for Mathematical Neuroscience,
RIKEN Brain Science Institute, JAPAN
Abstract
This study investigates a population decoding paradigm in which
the estimate of the stimulus from the previous step is used as prior
knowledge for consecutive decoding. We analyze the decoding accuracy of such a Bayesian decoder (Maximum a Posteriori estimate),
and show that it can be implemented by a biologically plausible
recurrent network, where the prior knowledge of the stimulus is conveyed by a change in the recurrent interactions resulting from Hebbian
learning.
1
Introduction
Information in the brain is processed not by a single neuron, but rather by a population of them. Such a coding strategy is called population coding. It is conceivable
that population coding has the advantage of being robust to fluctuations in a single
neuron's activity. However, it has been argued that population coding may have other
computationally desirable properties as well. One such property is to provide a framework
for encoding complex objects by using basis functions [1]. This is inspired by
recent progress in nonlinear function approximation, such as sparse coding, overcomplete representation, and kernel regression. These methods are efficient and show
some interesting neuron-like behaviors [2,3]. It is reasonable to think that similar
strategies are used in the brain with the support of population codes. However,
to confirm this idea, a general doubt has to be resolved: can the brain perform
such complex statistical inference? An important step toward the answer to this
question was taken by Pouget and co-authors [4,5]. They show that Maximum Likelihood (ML) inference, which is usually thought to be complex, can be implemented
by a biologically plausible recurrent network using the idea of line attractor.
ML is a special case of Bayesian inference when the stimulus is (or is assumed to be)
uniformly distributed. When there is prior knowledge of the stimulus distribution,
the Maximum a Posteriori (MAP) estimate has better performance. Zhang et al. have
successfully applied MAP to reconstructing the rat's position in a maze from the
activity of hippocampal place cells [6]. In their method, the prior knowledge is
the rat's position in the previous time step, which restricts the variability of the rat's
position in the current step under a continuity constraint. It turns out that MAP
performs much better than other decoding methods, and overcomes the
inefficiency of ML when information is insufficient (when the rat stops running).
This result implies that MAP may be used by the nervous system. So far,
MAP has mainly been studied in the literature as a mathematical tool for reconstructing
data, though its potential neural implementation was pointed out in [1,6].
In the present study, we show concretely how to implement MAP in a biologically plausible way.
The same kind of recurrent network used for achieving ML is employed [4,5]. The decoding
process consists of two steps. In the first step, when there is no prior knowledge of
the stimulus, the network implements ML. Its estimate is subsequently used to
form the prior distribution of the stimulus for consecutive decoding, which we assume
is a Gaussian function with its mean at that estimate. It turns out
that this prior knowledge is naturally conveyed by the change in the recurrent
interactions under the Hebbian learning rule. This is an interesting finding
and suggests a new role for Hebbian learning. In the second step, with the changed
interactions, the network implements MAP. The decoding accuracy of MAP and
the optimal form of the Gaussian prior are also analyzed in this paper.
2
MAP in Population Codes
Let us consider a standard population coding paradigm. There are $N$ neurons
coding for a stimulus $x$. The population activity is denoted by $\mathbf{r} = \{r_i\}$, where $r_i$, the response of the $i$th neuron, is given by

$$r_i = f_i(x) + \epsilon_i, \qquad (1)$$

where $f_i(x)$ is the tuning function and $\epsilon_i$ is a random noise.
The encoding process of a population code is specified by the conditional probability
$Q(\mathbf{r}|x)$ (i.e., the noise model). The decoding is to infer the value of $x$ from the
observed $\mathbf{r}$.
We consider a general Bayesian inference in a population code, which estimates the
stimulus by maximizing the log posterior distribution $\ln P(x|\mathbf{r})$, i.e.,

$$\hat{x} = \mathrm{argmax}_x \ln P(x|\mathbf{r}) = \mathrm{argmax}_x \left[ \ln P(\mathbf{r}|x) + \ln P(x) \right], \qquad (2)$$
where $P(\mathbf{r}|x)$ is the likelihood function. It can be equal to or different from the real
encoding model $Q(\mathbf{r}|x)$, depending on the available information about the encoding
process [7]. $P(x)$ is the distribution of $x$, representing the prior knowledge. This
method is also called Maximum a Posteriori (MAP). When the distribution of $x$ is,
or is assumed to be (when there is no prior knowledge), uniform, MAP is equivalent
to ML.
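As an illustration of eq. (2), here is a minimal numerical sketch of grid-based MAP decoding. It assumes independent Gaussian noise around Gaussian tuning curves (the concrete model adopted in Section 3); all parameter values and names are illustrative, not taken from the paper.

```python
import numpy as np

# Minimal sketch of MAP decoding (eq. 2) on a grid of candidate stimuli.
# Assumptions: independent Gaussian noise (variance sigma2) around Gaussian
# tuning curves; every constant below is illustrative.
rng = np.random.default_rng(0)
c = np.linspace(-3.0, 3.0, 101)        # preferred stimuli of N = 101 neurons
a, sigma2 = 1.0, 0.01                  # tuning width, noise variance

def tuning(x):
    return np.exp(-(c - x) ** 2 / (2 * a ** 2)) / (np.sqrt(2 * np.pi) * a)

x_true = 0.0
r = tuning(x_true) + rng.normal(0.0, np.sqrt(sigma2), c.size)   # responses

grid = np.linspace(-1.0, 1.0, 2001)    # candidate stimulus values
log_lik = np.array([-np.sum((r - tuning(x)) ** 2) / (2 * sigma2)
                    for x in grid])    # ln P(r|x) up to a constant

x_prev, tau2 = 0.05, 0.02              # previous estimate, prior width (eq. 3)
log_prior = -(grid - x_prev) ** 2 / (2 * tau2)

x_ml = grid[int(np.argmax(log_lik))]               # flat prior: ML
x_map = grid[int(np.argmax(log_lik + log_prior))]  # eq. (2): MAP
print(f"ML estimate {x_ml:+.4f}, MAP estimate {x_map:+.4f}")
```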
MAP could be used in the information processing of the brain on several occasions.
Let us consider the following scenario: a stimulus is decoded in multiple steps. This
happens when the same stimulus is presented over multiple steps, or when, during
a single presentation, neural signals are sampled many times. In both cases, the
brain successively gains a rough estimate of the stimulus in each decoding step,
which can serve as prior knowledge for further decoding. It
is therefore natural to use MAP in this situation. Experiencing slightly different
stimuli in consecutive steps, as studied in [6], or more generally a stimulus that slowly
changes with time (the multiple-step paradigm is a discrete approximation of this), is a similar
scenario. For simplicity, we only consider the case where the stimulus is unchanged in the present
2.1
The Performance of MAP
Let us analyze the performance of MAP. Some notations are introduced first. Denote by $\hat{x}_t$ a particular estimate of the stimulus in the $t$th step, and by $\sigma_t^2$ the corresponding variance. The prior distribution of $x$ in the $(t+1)$th step is assumed to be
a Gaussian with mean $\hat{x}_t$, i.e.,

$$P(x|\hat{x}_t) = \frac{1}{\sqrt{2\pi}\,\tau_t} \exp\left(-\frac{(x-\hat{x}_t)^2}{2\tau_t^2}\right), \qquad (3)$$

where the parameter $\tau_t$ reflects the estimator's confidence in $\hat{x}_t$; its optimal value will be calculated later.
The posterior distribution of $x$ in the $(t+1)$th step is given by

$$P(x|\mathbf{r}) = \frac{P(\mathbf{r}|x)\, P(x|\hat{x}_t)}{P(\mathbf{r})}, \qquad (4)$$

and the solution of MAP is obtained by solving

$$\nabla \ln P(\hat{x}_{t+1}|\mathbf{r}) = \nabla \ln P(\mathbf{r}|\hat{x}_{t+1}) - (\hat{x}_{t+1} - \hat{x}_t)/\tau_t^2 = 0. \qquad (5)$$
We calculate the decoding accuracies iteratively. In the first step of decoding, since
there is no prior knowledge on $x$, ML is used, whose decoding accuracy is known to
be [7]

$$\sigma_1^2 = \frac{\left\langle (\nabla \ln P(\mathbf{r}|x))^2 \right\rangle}{\left\langle -\nabla\nabla \ln P(\mathbf{r}|x) \right\rangle^2}, \qquad (6)$$

where the bracket $\langle \cdot \rangle$ denotes averaging over $Q(\mathbf{r}|x)$.
Note that, to get the above result, we have assumed that ML is asymptotically or
quasi-asymptotically (when an unfaithful model is used) efficient [7]. This includes
the cases where neural responses are independent, weakly correlated, uniformly correlated, correlated with strength proportional to firing rate (multiplicative correlation), or where the fluctuations in neural responses are sufficiently small. In other strong
correlation cases, ML is proved to be non-Fisherian, i.e., its decoding error satisfies
a Cauchy-type distribution with diverging variance. Decoding accuracy can no
longer be quantified by variance in such situations (for details, please refer to [8]).
We now calculate the decoding error in the second step. Suppose $\hat{x}_2$ is close
enough to $x$. Expanding $\nabla \ln P(\mathbf{r}|\hat{x}_2)$ at $x$ in eq. (5), we obtain

$$\nabla \ln P(\mathbf{r}|x) + \nabla\nabla \ln P(\mathbf{r}|x)(\hat{x}_2 - x) - (\hat{x}_2 - \hat{x}_1)/\tau_1^2 = 0. \qquad (7)$$

The random variable $\hat{x}_1$ can be decomposed as $\hat{x}_1 = x + \epsilon_1$, where $\epsilon_1$ is a random
number satisfying a Gaussian distribution of zero mean and variance $\sigma_1^2$.

Using the notation $\epsilon_1$, we have

$$\hat{x}_2 - x = \frac{\nabla \ln P(\mathbf{r}|x) + \epsilon_1/\tau_1^2}{1/\tau_1^2 - \nabla\nabla \ln P(\mathbf{r}|x)}. \qquad (8)$$
For the correlation cases considered in the present study (i.e., those that ensure ML
is asymptotically or quasi-asymptotically efficient), $-\nabla\nabla \ln P(\mathbf{r}|x)$ can be approximated as a (positive) constant according to the law of large numbers [7,8]. Therefore, we can define a constant

$$\alpha = \tau_1^2 \left(-\nabla\nabla \ln P(\mathbf{r}|x)\right), \qquad (9)$$

and a random variable

$$R = \frac{\nabla \ln P(\mathbf{r}|x)}{-\nabla\nabla \ln P(\mathbf{r}|x)}. \qquad (10)$$

Obviously $R$ satisfies a Gaussian distribution of zero mean and variance $\sigma_1^2$.
Using the notations $\alpha$ and $R$, we get

$$\hat{x}_2 - x = \frac{\alpha R + \epsilon_1}{1 + \alpha}, \qquad (11)$$

whose variance is calculated to be

$$\sigma_2^2 = \frac{1 + \alpha^2}{(1 + \alpha)^2}\, \sigma_1^2. \qquad (12)$$

Since $(1 + \alpha^2)/(1 + \alpha)^2 \le 1$ holds for any positive $\alpha$, the decoding accuracy in the
second step is always improved. It is not difficult to check that its minimum value,

$$\sigma_2^2 = \tfrac{1}{2}\sigma_1^2, \qquad (13)$$

is attained when $\alpha = 1$, or equivalently, when the optimal value of $\tau_1^2$ is
$$\tau_1^2 = \frac{1}{-\nabla\nabla \ln P(\mathbf{r}|x)}. \qquad (14)$$

When a faithful model is used, $-\nabla\nabla \ln Q(\mathbf{r}|x)$ is the Fisher information; $\tau_1^2$ hence
equals the variance of the decoding error. This is understandable.

Following the same procedure, it can be proved that the optimal decoding accuracy
in the $t$th step is $\sigma_t^2 = \sigma_1^2/t$, with the width of the Gaussian prior being $\tau_t^2 = \tau_1^2/t$.
It is interesting to see that the above multiple-step decoding procedure, when the optimal
values of $\tau_t$ are used, achieves the same decoding accuracy as a one-step ML
using all $N \times t$ signals. This is the best any estimator can achieve. However,
the multiple-step decoding is not a trivial replacement of one-step ML, and has many
advantages. One of them is saving memory, considering that only $N$ signals and
the value of the previous estimate are stored in each step. Moreover, when a slowly
changing stimulus is concerned, multiple-step decoding outperforms one-step ML by
balancing adaptation and memory. These properties are valuable when
information is processed in the brain.
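The recursion behind this result can be checked numerically. Below is a Monte-Carlo sketch of eqs. (11)-(12) iterated with the optimal prior width $\tau_t^2 = \sigma_1^2/t$; the sample size and seed are arbitrary.

```python
import numpy as np

# Monte-Carlo check that the recursive MAP update with the optimal prior
# width tau_t^2 = sigma_1^2 / t yields sigma_t^2 = sigma_1^2 / t, i.e., the
# accuracy of one-step ML on all N*t signals. Values are illustrative.
rng = np.random.default_rng(1)
sigma1_sq, n_trials, T = 1.0, 100_000, 5

err = rng.normal(0.0, np.sqrt(sigma1_sq), n_trials)    # first-step ML errors
for t in range(1, T):
    alpha = 1.0 / t                                    # tau_t^2 * Fisher info
    R = rng.normal(0.0, np.sqrt(sigma1_sq), n_trials)  # fresh evidence, eq. (10)
    err = (alpha * R + err) / (1.0 + alpha)            # eq. (11), generalized
    print(f"step {t + 1}: var = {err.var():.4f}, "
          f"predicted = {sigma1_sq / (t + 1):.4f}")
```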
3
Network Implementation of MAP
In this section, we investigate how to implement MAP by a recurrent network. A
two-step decoding is studied. Without loss of generality, we consider $N \to \infty$ and
do the calculation in the continuous limit.
The network we consider is a fully connected one-dimensional homogeneous neural
field, in which c denotes the position coordinate, i.e., the neurons' preferred stimuli.
The tuning function of the neuron with preferred stimulus $c$ is

$$f_c(x) = \frac{1}{\sqrt{2\pi}\,a} \exp\left(-\frac{(c - x)^2}{2a^2}\right). \qquad (15)$$
For simplicity, we consider an encoding process in which the fluctuations in neurons'
responses are independent Gaussian noises (more general correlated cases can be
handled similarly), that is,

$$Q(\mathbf{r}|x) = \frac{1}{Z} \exp\left(-\frac{\rho}{2\sigma^2} \int (r_c - f_c(x))^2\, dc\right), \qquad (16)$$

where $\rho$ is the neuron density, $\sigma^2$ the noise variance, and $Z$ the normalization factor. A faithful model
is used in both steps of decoding, i.e., $P(\mathbf{r}|x) = Q(\mathbf{r}|x)$ (again, generalization to more
general cases with $P(\mathbf{r}|x) \neq Q(\mathbf{r}|x)$ is straightforward).
For the above model setting, the solution of ML in the first step is calculated to be

$$\hat{x}_1 = \mathrm{argmax}_x \int r_c\, f_c(x)\, dc, \qquad (17)$$

where the condition $\int f_c^2(x)\, dc = \mathrm{const}$ has been used.
The solution of MAP in the second step is

$$\hat{x}_2 = \mathrm{argmax}_x \int r_c\, f_c(x)\, dc - \frac{(x - \hat{x}_1)^2}{2\tau_1^2}. \qquad (18)$$

Compared with eq. (17), eq. (18) has one more term, corresponding to the contribution of the prior distribution.
We now study how a recurrent network can realize eqs. (17) and (18).
Following the idea of Pouget et al. [4,5], the following network dynamics is constructed. Let $U_c$ denote the (average) internal state of the neuron at $c$, and $W_{c,c'}$ the
recurrent connection weights from neurons at $c$ to those at $c'$. The dynamics of
neural excitation is governed by

$$\frac{dU_c}{dt} = -U_c + \int W_{c,c'}\, O_{c'}\, dc' + I_c, \qquad (19)$$

where

$$O_c = \frac{U_c^2}{1 + \mu \int U_{c'}^2\, dc'} \qquad (20)$$

is the activity of the neurons at $c$ and $I_c$ is the external input arriving at $c$.
The recurrent interactions are chosen to be

$$W_{c,c'} = \exp\left(-\frac{(c - c')^2}{2a^2}\right), \qquad (21)$$

which ensures that when there is no external input ($I_c = 0$), the network is neutrally
stable on the line attractor

$$O_c(z) = D \exp\left(-\frac{(c - z)^2}{2a^2}\right), \qquad \forall z, \qquad (22)$$
where the parameter $D$ is a constant that can be determined easily. Note that the
line attractor has the same shape as the tuning function. This is crucial: it
allows the network to perform template matching using the tuning function, just
as ML and MAP do.
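A minimal simulation sketch of the dynamics (19)-(21) is given below. The integration step, input strength, and other constants are assumptions chosen only for illustration (in practice they must be tuned so the bump state is marginally stable), and the readout takes the peak of the activity profile as a simple stand-in for the overlap maximization described next.

```python
import numpy as np

# Illustrative Euler simulation of the recurrent network, eqs. (19)-(21).
# dt, eps, mu, and the noise level are assumptions, not the paper's values.
rng = np.random.default_rng(2)
N = 101
c = np.linspace(-3.0, 3.0, N)
dc = c[1] - c[0]
a, mu, eps, dt = 1.0, 0.5, 0.1, 0.05

W = np.exp(-(c[:, None] - c[None, :]) ** 2 / (2 * a ** 2))     # eq. (21)
f = np.exp(-c ** 2 / (2 * a ** 2)) / (np.sqrt(2 * np.pi) * a)  # tuning, x = 0
r = f + rng.normal(0.0, 0.1, N)                                # noisy responses

U = r.copy()                     # start near O(t=0) ~ r, cf. the instant-input case
for _ in range(2000):
    O = U ** 2 / (1.0 + mu * np.sum(U ** 2) * dc)              # eq. (20)
    U += dt * (-U + (W @ O) * dc + eps * r)                    # eq. (19)

O = U ** 2 / (1.0 + mu * np.sum(U ** 2) * dc)
z_hat = c[int(np.argmax(O))]     # peak position as the network estimate
print(f"network estimate z_hat = {z_hat:+.3f}")
```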
When a sufficiently small input $I_c$ is added, the network is no longer neutrally
stable on the line attractor. It can be proved that the steady state of the network
has approximately the same shape as eq. (22) (the deviation is of second order in the
magnitude of $I_c$), whereas its steady position on the line attractor (i.e., the
network estimate) is determined by maximizing the overlap between $I_c$ and $O_c(z)$
[4,9].

Thus, if $I_c = \epsilon r_c$ in the first step, where $\epsilon$ is a sufficiently small number, the
network estimate is given by

$$\hat{z}_1 = \mathrm{argmax}_z \int r_c\, O_c(z)\, dc, \qquad (23)$$

which has the same value as the solution of ML (see eq. (17)). We say that the
network implements ML. (Considering an instant input instead, which triggers the
network to start at $O_c(t=0) = r_c$ as used in [5], gives the same result.)
To implement MAP in the second step, it is critical to identify a neural mechanism
which can 'transmit' the prior knowledge obtained in the first step to the second
one. We find that this is naturally done by Hebbian learning.
After the first step of decoding, the recurrent interactions change by a small amount
according to the Hebbian rule; their new values are

$$W'_{c,c'} = W_{c,c'} + \eta\, O_c(\hat{z}_1)\, O_{c'}(\hat{z}_1), \qquad (24)$$

where $\eta$ is a small positive number representing the Hebbian learning rate, and
$O_c(\hat{z}_1)$ is the neuron activity in the first step.

With the new recurrent interactions, the net input from other neurons to the one
at $c$ is calculated to be

$$\int W'_{c,c'}\, O_{c'}\, dc' = \int W_{c,c'}\, O_{c'}\, dc' + \eta\, O_c(\hat{z}_1) \int O_{c'}(\hat{z}_1)\, O_{c'}\, dc' \approx \int W_{c,c'}\, O_{c'}\, dc' + \nu\, O_c(\hat{z}_1), \qquad (25)$$
where $\nu$ is a small constant. To get the last approximation, the following facts have
been used: 1) the initial state of the neurons in the second step is $O_c(\hat{z}_1)$; 2) the
neuron activity $O_c$ during the second step lies between $O_c(\hat{z}_1)$ and $O_c(\hat{z}_2)$, where $\hat{z}_2$
is the position of the steady state; 3) $(\hat{z}_1 - \hat{z}_2)^2/2a^2 \ll 1$, considering that neurons
are widely tuned, as seen in data ($a$ is large), and consecutive estimates are close
enough. These factors ensure that the approximation $\int O_{c'}(\hat{z}_1)\, O_{c'}\, dc' \approx \mathrm{const}$ is
good enough.
Substituting eq. (25) into (19), we see that the network dynamics in the second step,
compared with the first, in effect modifies the input $I_c$ to $I'_c = \epsilon(r_c + \lambda O_c(\hat{z}_1))$, where $\lambda$ is a constant that can be determined easily.

Thus, the network estimate in the second step is determined by maximizing the
overlap between $I'_c$ and $O_c(z)$, which gives

$$\hat{z}_2 = \mathrm{argmax}_z \left[ \int r_c\, O_c(z)\, dc + \lambda \int O_c(\hat{z}_1)\, O_c(z)\, dc \right]. \qquad (26)$$
The first term on the right-hand side is known to achieve ML. Consider the contribution of the second one, which can be transformed to

$$\int O_c(\hat{z}_1)\, O_c(z)\, dc = B \exp\left(-\frac{(\hat{z}_1 - z)^2}{4a^2}\right) \approx -\frac{B (z - \hat{z}_1)^2}{4a^2} + \text{terms independent of } z, \qquad (27)$$

where $B$ is a constant. Again, in the above calculation, $(\hat{z}_1 - z)^2/4a^2 \ll 1$ is used,
by the same argument discussed above.
Comparing eqs. (18) and (27), we see that the second term plays the same role as the
prior knowledge in MAP. Thus, the network indeed implements MAP. The value of
$\lambda$ (or the Hebbian learning rate) can be adjusted accordingly to match the optimal
choice of $\tau_1^2$.
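The following sketch illustrates how the Hebbian term of eqs. (24)-(26) biases the second-step estimate toward the first one. The first-step estimate, the noise level, and the constant lambda are illustrative assumptions.

```python
import numpy as np

# Sketch of the second decoding step (eqs. 24-26): the Hebbian weight change
# is equivalent to adding lam * O_c(z1_hat) to the input, which acts as the
# MAP prior term. All constants here are illustrative.
rng = np.random.default_rng(3)
c = np.linspace(-3.0, 3.0, 101)
a = 1.0

def profile(z):                       # attractor/tuning profile O_c(z), eq. (22)
    return np.exp(-(c - z) ** 2 / (2 * a ** 2))

r = profile(0.0) + rng.normal(0.0, 0.3, c.size)   # noisy population response
z1_hat = 0.4                                      # first-step estimate (assumed)
grid = np.linspace(-1.0, 1.0, 801)
lam = 0.5                                         # strength of the Hebbian term

ml = [np.dot(r, profile(z)) for z in grid]                  # eq. (23)
mapd = [m + lam * np.dot(profile(z1_hat), profile(z))       # eq. (26)
        for m, z in zip(ml, grid)]
print("z2 (ML term only):     ", grid[int(np.argmax(ml))])
print("z2 (with Hebbian prior):", grid[int(np.argmax(mapd))])
```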
The above result is confirmed by a simulation experiment (Table 1), which was
done with 101 neurons uniformly distributed in the region $[-3, 3]$ and the true
stimulus at 0. It shows that the estimate of the network agrees well with
MAP.

Table 1: Comparing the decoding accuracies of the network and MAP for different
values of $a$ (with the corresponding values of $\tau_1$ and $\lambda$ adjusted). The parameters
are $\rho = 1$, $\mu = 0.5$ and $\sigma^2 = 0.01$. The data are obtained over 100 trials.
4
Conclusion and Discussion
In summary we have investigated how to implement MAP by using a biologically
plausible recurrent network. A two-step decoding paradigm is studied. In the first
step, when there is no prior knowledge, the network implements ML, whose estimate is subsequently used to
form the prior distribution of the stimulus for consecutive decoding. In the second step, with the changed
Line attractor and Hebbian learning are two critical elements to implement MAP.
The former enables the network to do template-matching by using the tuning function, being as same as ML and MAP. The latter provides a mechanism that conveys
the prior knowledge obtained from the first step to the second one. Though the results in this paper may quantitatively depend on the formulation of the models , it is
reasonable to believe that they are qualitatively true, as both Hebbian learning and
line attractor are biologically plausible. Line attractor comes from the translation
invariance of network interactions, and has been shown to be involved in several
neural computations [10-12]. We expect that the essential idea of Bayesian inference
of utilizing previous knowledge for successive decoding is used in the information
processing of the brain.
We also analyzed the decoding accuracy of MAP in a population code and the
optimal form of the Gaussian prior. In the present study, the stimulus is kept fixed
during consecutive decodings. A generalization to the case of a stimulus that slowly
changes over time is straightforward.
References
[1] A. Pouget, P. Dayan & R. Zemel. Nature Reviews Neuroscience, 1, 125-132, 2000.
[2] B. Olshausen & D. Field. Nature, 381, 607-609, 1996.
[3] T. Poggio & F. Girosi. Neural Computation, 10, 1445-1454, 1998.
[4] A. Pouget & K. Zhang. NIPS, 9, 1997.
[5] S. Deneve, P. E. Latham & A. Pouget. Nature Neuroscience, 2, 740-745, 1999.
[6] K. Zhang, I. Ginzburg, B. McNaughton & T. Sejnowski. J. Neurophysiol., 79, 1017-1044, 1998.
[7] S. Wu, H. Nakahara & S. Amari. Neural Computation, 13, 775-798, 2001.
[8] S. Wu, S. Amari & H. Nakahara. CNS*01 (to appear).
[9] S. Wu, S. Amari & H. Nakahara. Neural Computation (in press).
[10] S. Amari. Biological Cybernetics, 27, 77-87, 1977.
[11] K. Zhang. J. Neurosci., 16, 2112-2126, 1996.
[12] H. Seung. Proc. Natl. Acad. Sci. USA, 93, 13339-13344, 1996.
1,082 | 1,989 | A Rational Analysis of Cognitive Control
in a Speeded Discrimination Task
Michael C. Mozer, Michael D. Colagrosso, David E. Huber
Department of Computer Science
Department of Psychology
Institute of Cognitive Science
University of Colorado
Boulder, CO 80309
{mozer,colagrom,dhuber}@colorado.edu
Abstract
We are interested in the mechanisms by which individuals monitor and
adjust their performance of simple cognitive tasks. We model a speeded
discrimination task in which individuals are asked to classify a sequence
of stimuli (Jones & Braver, 2001). Response conflict arises when one
stimulus class is infrequent relative to another, resulting in more errors
and slower reaction times for the infrequent class. How do control processes modulate behavior based on the relative class frequencies? We
explain performance from a rational perspective that casts the goal of
individuals as minimizing a cost that depends both on error rate and reaction time. With two additional assumptions of rationality?that class
prior probabilities are accurately estimated and that inference is optimal
subject to limitations on rate of information transmission?we obtain a
good fit to overall RT and error data, as well as trial-by-trial variations in
performance.
Consider the following scenario: While driving, you approach an intersection at which the
traffic light has already turned yellow, signaling that it is about to turn red. You also notice
that a car is approaching you rapidly from behind, with no indication of slowing. Should
you stop or speed through the intersection? The decision is difficult due to the presence of
two conflicting signals. Such response conflict can be produced in a psychological laboratory as well. For example, Stroop (1935) asked individuals to name the color of ink on
which a word is printed. When the words are color names incongruous with the ink color (e.g., 'blue' printed in red), reaction times are slower and error rates are higher. We are interested in the control mechanisms underlying performance of high-conflict tasks. Conflict
requires individuals to monitor and adjust their behavior, possibly responding more slowly
if errors are too frequent.
In this paper, we model a speeded discrimination paradigm in which individuals are asked
to classify a sequence of stimuli (Jones & Braver, 2001). The stimuli are letters of the
alphabet, A?Z, presented in rapid succession. In a choice task, individuals are asked to
press one response key if the letter is an X or another response key for any letter other than
X (as a shorthand, we will refer to non-X stimuli as Y). In a go/no-go task, individuals
are asked to press a response key when X is presented and to make no response otherwise.
We address both tasks because they elicit slightly different decision-making behavior. In
both tasks, Jones and Braver (2001) manipulated the relative frequency of the X and Y
stimuli; the ratio of presentation frequency was either 17:83, 50:50, or 83:17. Response
conflict arises when the two stimulus classes are unbalanced in frequency, resulting in more
errors and slower reaction times. For example, when X?s are frequent but Y is presented,
individuals are predisposed toward producing the X response, and this predisposition must
be overcome by the perceptual evidence from the Y.
Jones and Braver (2001) also performed an fMRI study of this task and found that anterior
cingulate cortex (ACC) becomes activated in situations involving response conflict. Specifically, when one stimulus occurs infrequently relative to the other, event-related fMRI response in the ACC is greater for the low frequency stimulus. Jones and Braver also extended a neural network model of Botvinick, Braver, Barch, Carter, and Cohen (2001) to
account for human performance in the two discrimination tasks. The heart of the model
is a mechanism that monitors conflict (the posited role of the ACC) and adjusts response
biases accordingly. In this paper, we develop a parsimonious alternative account of the role
of the ACC and of how control processes modulate behavior when response conflict arises.
1 A RATIONAL ANALYSIS
Our account is based on a rational analysis of human cognition, which views cognitive
processes as being optimized with respect to certain task-related goals, and being adaptive
to the structure of the environment (Anderson, 1990). We make three assumptions of rationality: (1) perceptual inference is optimal but is subject to rate limitations on information
transmission, (2) response class prior probabilities are accurately estimated, and (3) the
goal of individuals is to minimize a cost that depends both on error rate and reaction time.
The heart of our account is an existing probabilistic model that explains a variety of facilitation effects that arise from long-term repetition priming (Colagrosso, in preparation;
Mozer, Colagrosso, & Huber, 2000), and more broadly, that addresses changes in the nature of information transmission in neocortex due to experience. We give a brief overview
of this model; the details are not essential for the present work.
The model posits that neocortex can be characterized by a collection of informationprocessing pathways, and any act of cognition involves coordination among pathways.
To model a simple discrimination task, we might suppose a perceptual pathway to map
the visual input to a semantic representation, and a response pathway to map the semantic
representation to a response. The choice and go/no-go tasks described earlier share a perceptual pathway, but require different response pathways. The model is framed in terms of
probability theory: pathway inputs and outputs are random variables and microinference in
a pathway is carried out by Bayesian belief revision.
To elaborate, consider a pathway whose input at time $t$ is a discrete random variable,
denoted $X(t)$, which can assume values $x_1, \ldots, x_n$ corresponding to alternative input
states. Similarly, the output of the pathway at time $t$ is a discrete random variable, denoted
$Y(t)$, which can assume values $y_1, \ldots, y_m$. For example, the input to the perceptual
pathway in the discrimination task is one of $n = 26$ visual patterns corresponding to the
letters of the alphabet, and the output is one of $m = 26$ letter identities. (This model is
highly abstract: the visual patterns are enumerated, but the actual pixel patterns are not
explicitly represented in the model. Nonetheless, the similarity structure among inputs can
be captured, but we skip a discussion of this issue because it is irrelevant for the current
work.) To present a particular input alternative, $x_i$, to the model for $T$ time steps, we clamp
$X(t) = x_i$ for $t = 1 \ldots T$. The model computes a probability distribution over $Y(T)$ given $X(1), \ldots, X(T)$,
i.e., $P(Y(T) \mid X(1), \ldots, X(T))$.
Figure 1: (left panel) basic pathway architecture, a hidden Markov model; (right panel)
time course of inference in a pathway
A pathway is modeled as a dynamic Bayes network; the minimal version of the model used
in the present simulations is simply a hidden Markov model, where the $X(t)$ are observations and the $Y(t)$ are inferred states (see Figure 1, left panel).
presented with a sequence of distinct inputs, whereas we maintain the same input for many
successive time steps. Further, in typical usage, an HMM transitions through a sequence
of distinct hidden states, whereas we attempt to converge with increasing confidence on
a single state. Thus, our model captures the time course of information processing for a
single event.)
To compute $P(Y(T) \mid X(1), \ldots, X(T))$, three probability distributions must be specified: (1)
$P(Y(t) \mid Y(t-1))$, which characterizes how the pathway output evolves over time, (2)
$P(X(t) \mid Y(t))$, which characterizes the strength of association between inputs and outputs,
and (3) $P(Y(0))$, the prior distribution over outputs in the absence of any information
about the input. The particular values hypothesized for these three distributions embody the
knowledge of the model, like weights in a neural network, and give rise to predictions
from the model.
To give a sense of how the Mozer et al. (2000) model operates, the right panel of Figure 1
depicts the time course of inference in a single pathway which has 26 input and output
alternatives, with one-to-one associations. The solid line in the figure shows, as a function
of time $t$, $P(Y(t) = y_i \mid X(1) = x_i, \ldots, X(t) = x_i)$, i.e., the probability that a given input
will produce its target output. Due to limited association strengths, perceptual evidence
must accumulate over many iterations in order for the target to be produced with high
probability. The densely dashed line shows the same target probability when the target prior
is increased, and the sparsely dashed line shows the target probability when the association
strength to the target is increased. Increasing either the prior or the association strength
causes the speed-accuracy curve to shift to the left. In our previous work, we proposed a
mechanism by which priors and association strengths are altered through experience.
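A minimal sketch of the microinference just described, treated as a standard HMM forward pass, is given below. The specific transition and emission values are assumptions; the paper constrains only the roles of the three distributions.

```python
import numpy as np

# Sketch of belief revision in one pathway: an HMM forward pass computing
# P(Y(t) | X(1..t)) with the input clamped to alternative i. The particular
# transition/emission values are illustrative assumptions.
n = 26
prior = np.full(n, 1.0 / n)                 # P(Y(0))
trans = 0.9 * np.eye(n) + 0.1 / n           # P(Y(t) | Y(t-1)): rows sum to 1
emit = 0.2 * np.eye(n) + 0.8 / n            # P(X(t) | Y(t)): weak association

def posterior_over_time(i, T):
    b = prior.copy()
    trace = []
    for _ in range(T):
        b = trans.T @ b                     # predict the next output state
        b = b * emit[:, i]                  # weigh evidence for clamped x_i
        b /= b.sum()
        trace.append(b[i])                  # P(Y(t) = y_i | X(1..t) = x_i)
    return np.array(trace)

print(posterior_over_time(0, 10).round(3))  # rises toward 1, as in Figure 1
```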
1.1 Model Details
The simulations we report in this paper utilize two pathways in cascade. A perceptual
pathway maps visual patterns (26 alternatives) to a letter-identity representation (26 alternatives), and a response pathway maps the letter identity to a response. For the choice task,
the response pathway has two outputs, corresponding to the two response keys; for the
go/no-go task, the response pathway also has two outputs, which are interpreted as 'go'
and 'no go.' The interconnection between the pathways is achieved by copying the output
of the perceptual pathway to the input of the response pathway at each time step.
The free parameters of the model are mostly task and experience related. Nonetheless, in
the current simulations we used the same parameter values as Mozer et al. (2000), with one
exception: Because the speeded perceptual discrimination task studied here is quite unlike
the tasks studied by Mozer et al., we allowed ourselves to vary the association-strength
parameter in the response pathway. This parameter has only a quantitative, not qualitative,
influence on predictions of the model.
In our simulations, we also use the priming mechanism proposed by Mozer et al. (2000),
which we briefly describe. The priors for a pathway are internally represented in a nonnormalized
form: the nonnormalized prior for alternative $i$ is $p_i$, and the normalized prior is
$P(Y(0) = y_i) = p_i / \sum_j p_j$. On each trial, the priming mechanism increases the nonnormalized prior of alternative $i$ in proportion to its asymptotic activity at final time $T$, and
all priors undergo exponential decay: $p_i \leftarrow \lambda p_i + \beta P(Y(T) = y_i \mid X(1), \ldots, X(T))$, where $\beta$ is the
strength of priming and $\lambda$ is the decay rate. (The Mozer et al. model also performs priming
in the association strengths by a similar rule, which is included in the present simulation
although it has a negligible effect on the results here.)
This priming mechanism yields priors on average that match the presentation probabilities
in the task, e.g., .17 and .83 for the two responses in the 17:83 condition of the Jones
and Braver experiment. Consequently, when we report results for overall error rate and
reaction time in a condition, we make the assumption of rationality that the model?s priors
correspond to the true priors of the environment. Although the model yields the same
result when the priming mechanism is used on a trial-by-trial basis to adjust the priors, the
explicit assumption of rationality avoids any confusion about the factors responsible for the
model's performance. We use the priming mechanism on a trial-by-trial basis to account
for performance conditional on recent trial history, as explained later.
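A toy sketch of this trial-by-trial priming rule follows; the decay and boost constants are assumptions, and the asymptotic activity is idealized as a one-hot vector.

```python
import numpy as np

# Sketch of the priming rule: nonnormalized priors decay exponentially and
# are boosted in proportion to asymptotic activity. Constants are assumed.
def prime(rho, activity, beta=0.5, decay=0.9):
    return decay * rho + beta * activity     # decay plus activity-driven boost

rho = np.ones(2)                             # nonnormalized priors, two responses
for resp in [0, 0, 1, 0]:                    # a short trial sequence
    rho = prime(rho, np.eye(2)[resp])        # idealized asymptotic activity
    print((rho / rho.sum()).round(3))        # normalized priors track frequency
```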
1.2 Control Processes and the Speed-Accuracy Trade Off
The response pathway of the model produces a speed-accuracy performance function much
like that in Figure 1b. This function characterizes the operation of the pathway, but it does
not address the control issue of when in time to initiate a response. A control mechanism
might simply choose a threshold in accuracy or in reaction time, but we hypothesize a more
general, rational approach in which a response cost is computed, and control mechanisms
initiate a response at the point in time when a minimum in cost is attained.
When stimulus S is presented and the correct response is R, we posit a cost of responding
at time $t$ following stimulus onset:

$$c(t \mid \mathrm{S}, \mathrm{R}) = \left[ 1 - P(\mathrm{R} \mid \mathrm{S}, t) \right] + \beta_{\mathrm{R}}\, t. \qquad (1)$$
This cost involves two terms, the error rate and the reaction time, which are summed
with a weighting factor, $\beta_{\mathrm{R}}$, that determines the relative importance of the two terms. We
assume that $\beta$ is dependent on task instructions: if individuals are told to make no errors,
$\beta$ should be small to emphasize the error rate; if individuals are told to respond quickly and
not concern themselves with occasional errors, $\beta$ should be large to emphasize the reaction
time.
S R cannot be computed without knowing the correct response R. Nonetheless, the control mechanism could still compute an expected cost over the alternative
responses based on the model?s current estimate of the likelihood of each:
E
#) S R
"!$#
P
' !') S %) S !(
(2)
It is this expected cost that is minimized to determine the appropriate point in time at which
to respond. We index by the response R because it is not sensible to assign a time
; forcost
to a ?no go? response, where no response is produced. Consequently, &%(')(%
the
?go? response and for the two responses
in the choice task, we searched for the parameter
+*
that best fit the data, yielding
.
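The decision rule can be sketched numerically as follows; the speed-accuracy curve and the value of beta are illustrative assumptions rather than the fitted model.

```python
import numpy as np

# Sketch of the response-initiation rule: respond at the time minimizing the
# expected cost of eq. (2). The curve p_R(t) and beta are assumptions.
t = np.arange(1, 51)
p_R = 1.0 - 0.9 * np.exp(-t / 10.0)         # assumed P(R | S, t), Fig. 1-style
beta = 0.02                                 # assumed relative cost of time

cost_R = (1.0 - p_R) + beta * t             # eq. (1), correct response is R
cost_alt = p_R + beta * t                   # eq. (1), correct response is the other
expected = p_R * cost_R + (1.0 - p_R) * cost_alt   # eq. (2)

t_star = int(t[np.argmin(expected)])
print(f"respond at t = {t_star}")
```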
Figure 2: (upper row) Output of the response pathway when stimulus S, associated with response R, is presented, and the relative frequency of R and the alternative response, R', is
17:83, 50:50, or 83:17. (lower row) Expected cost of responding (Eqn. 2).
2 RESULTS
Figure 2 illustrates the model's performance on the choice task when presented with a stimulus, S, associated with a response, R, where the relative frequency of R and the alternative
response, R', is 17:83, 50:50, or 83:17 (left, center, and right columns, respectively). Each
graph in the top row plots the probability of R and R' against time. Although R wins out
asymptotically in all three conditions, it must overcome the effect of its low prior in the
17:83 condition. Each graph in the bottom row plots the expected cost, $E[c(t \mid \mathrm{S})]$, over time.
In the early part of the cost function, error rate dominates the cost, and in the late part,
reaction time dominates. In fact, at long times, the error rate is essentially 0, and the cost
grows linearly with reaction time. Our rational analysis suggests that a response should be
initiated at the global minimum (indicated by asterisks in the figure), implying that both
the reaction time and error rate will decrease as the response prior is increased.
Figure 3 presents human and simulation data for the choice task. The data consist of mean
reaction time and accuracy for the two target responses, R1 and R2, for the three conditions
corresponding to different R1:R2 presentation ratios. Figure 4 presents human and
simulation data for the go/no-go task. Note that reaction times are shown only for the 'go'
trials, because no response is required for the 'no go' trials. For both tasks, the model
provides an extremely good fit to the human data. The qualities of the model giving rise to
the fit can be inferred by inspection of Figure 2, namely, accuracy is higher and reaction
times are faster when a response is expected.
Figure 5 reveals how the recent history of experimental trials influences reaction time and
error rate in the choice task. The trial context along the x-axis is coded as a string of four
symbols, each of which specifies whether a trial in the five-trial window required the same
('S') or different ('D') response as the trial that immediately followed it. For example,
if the current trial required response X, and the four trials leading up to the
current trial were, in forward temporal order, Y, Y, Y, and X, the current trial's context
would be coded as 'SSDS.' The correlation coefficient between human and simulation data
is .960 for reaction time and .953 for error rate.
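For concreteness, a small sketch of the context coding, under the interpretation (consistent with the SSDS example above) that each symbol compares a trial with the one immediately following it:

```python
# Sketch of the trial-context code used in Figure 5: each symbol marks
# whether successive trials required the same ('S') or different ('D')
# response; this interpretation reproduces the SSDS example in the text.
def context_code(responses):
    """Code a window of trials, e.g. ['Y', 'Y', 'Y', 'X', 'X'] -> 'SSDS'."""
    return ''.join('S' if a == b else 'D'
                   for a, b in zip(responses, responses[1:]))

print(context_code(['Y', 'Y', 'Y', 'X', 'X']))   # SSDS, matching the example
```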
The model fits the human data extremely well. The simple priming mechanism proposed
previously by Mozer et al. (2000), which aims to adapt the model's priors rapidly to the
statistics of the environment, is responsible: On a coarse time scale, the mechanism produces priors in the model that match priors in the environment. On a fine time scale,
changes to and decay of the priors result in a strong effect of recent trial history, consistent
with the human data: The graphs in Figure 5 show that the fastest and most accurate trials
Figure 3: Human data (left column) and simulation results (right column) for the choice task. Human data from Jones and Braver (2001). The upper and lower rows show mean reaction time and accuracy, respectively, for the two responses (R1 and R2) in the three conditions corresponding to different R1:R2 frequencies.
Figure 4: Human data (left
column) and simulation results
(right column) for the go/no-go task. Human data from
Jones and Braver (2001). The
upper and lower rows show
mean reaction time and accuracy, respectively, for the two
responses (go and no-go) in the
three conditions corresponding
to different go:no-go presentation frequencies.
are clearly those in which the previous two trials required the same response as the current
trial (the leftmost four contexts in each graph). The fit to the data is all the more impressive
given that the Mozer et al. priming mechanism was originally used to model perceptual priming, and
here the same mechanism is used to model response priming.
3 DISCUSSION
We introduced a probabilistic model based on three principles of rationality: (1) perceptual
inference is optimal but is subject to rate limitations on information transmission, (2) response class prior probabilities are accurately estimated, and (3) the goal of individuals is
to minimize a cost that depends both on error rate and reaction time. The model provides
a parsimonious account of the detailed pattern of human data from two speeded discrimination tasks. The heart of the model was proposed previously by Mozer, Colagrosso, and
Huber (2000), and in the present work we fit experimental data with only two free parameters, one relating to the rate of information flow, and the other specifying the relative cost of
speed and errors. The simplicity and elegance of the model arises from having adopted the
rational perspective, which imposes strong constraints on the model and removes arbitrary
choices and degrees of freedom that are often present in psychological models.
Jones and Braver (2001) proposed a neural network model to address response conflict in
a speeded discrimination task. Their model produces an excellent fit to the data too, but
Figure 5: Reaction time (left curve) and accuracy (right curve) data for humans (solid line)
and model (dashed line), contingent on the recent history of experimental trials.
involves significantly more machinery, free parameters, and ad hoc assumptions. In brief,
their model is an associative net mapping activity from stimulus units to response units.
When both response units receive significant activation, noise in the system
can push the inappropriate response unit over threshold. When this conflict situation is
detected, a control mechanism acts to lower the baseline activity of response units, requiring them to build up more evidence before responding and thereby reducing the likelihood
of noise determining the response. Their model includes a priming mechanism to facilitate repetition of responses, much as we have in our model. However, their model also
includes a secondary priming mechanism to facilitate alternation of responses, which our
model does not require. Both models address additional data; for example, a variant of
their model predicts a neurophysiological marker of conflict called error-related negativity
(Yeung, Botvinick, & Cohen, 2000).
Jones and Braver posit that the role of the ACC is conflict detection. Our account makes an
alternative proposal: that ACC activity reflects the expected cost of decision making. Both
hypotheses are consistent with the fMRI data indicating that the ACC produces a greater
response for a low frequency stimulus. We are presently considering further experiments
to distinguish these contrasting hypotheses.
Acknowledgments
We thank Andrew Jones and Todd Braver for generously providing their raw data and for helpful discussions leading to this research. This research was supported by Grant 97-18 from the
McDonnell-Pew Program in Cognitive Neuroscience, NSF award IBN-9873492, and NIH/IFOPAL
R01 MH61549-01A1.
References
Botvinick, M. M., Braver, T. S., Barch, D. M., Carter, C. S., & Cohen, J. D. (2001). Evaluating the
demand for control: anterior cingulate cortex and conflict monitoring. Submitted for publication.
Colagrosso, M. (in preparation). A Bayesian cognitive architecture for analyzing information transmission in neocortex. Ph.D. Dissertation in preparation.
Jones, A. D., & Braver, T. S. (2001). Sequential modulations in control: Conflict monitoring and the
anterior cingulate cortex. Submitted for publication.
Mozer, M. C., Colagrosso, M. D., & Huber, D. H. (2000). A Bayesian Cognitive Architecture for
Interpreting Long-Term Priming Phenomena. Presentation at the 41st Annual Meeting of the Psychonomic Society, New Orleans, LA, November 2000.
Yeung, N., Botvinick, M. M., & Cohen, J. D. (2000). The neural basis of error detection: Conflict
monitoring and the error-related negativity. Submitted for publication.
1,083 | 199 |
Learning to Control an Unstable System with
Forward Modeling
Michael I. Jordan
Brain and Cognitive Sciences
MIT
Cambridge, MA 02139
Robert A. Jacobs
Computer and Information Sciences
University of Massachusetts
Amherst, MA 01003
ABSTRACT
The forward modeling approach is a methodology for learning control when data is available in distal coordinate systems. We extend
previous work by considering how this methodology can be applied
to the optimization of quantities that are distal not only in space
but also in time.
In many learning control problems, the output variables of the controller are not
the natural coordinates in which to specify tasks and evaluate performance. Tasks
are generally more naturally specified in "distal" coordinate systems (e.g., endpoint
coordinates for manipulator motion) than in the "proximal" coordinate system of
the controller (e.g., joint angles or torques). Furthermore, the relationship between
proximal coordinates and distal coordinates is often not known a priori and, if
known, not easily inverted.
The forward modeling approach is a methodology for learning control when training data is available in distal coordinate systems. A forward model is a network
that learns the transformation from proximal to distal coordinates so that distal
specifications can be used in training the controller (Jordan & Rumelhart, 1990).
The forward model can often be learned separately from the controller because it
depends only on the dynamics of the controlled system and not on the closed-loop
dynamics.
In previous work, we studied forward models of kinematic transformations (Jordan,
1988, 1990) and state transitions (Jordan & Rumelhart, 1990). In the current paper,
we go beyond the spatial credit assignment problems studied in those papers and
broaden the application of forward modeling to include cases of temporal credit
assignment (cf. Barto, Sutton, & Anderson, 1983; Werbos, 1987). As discussed
below, the function to be modeled in such cases depends on a time integral of the
closed-loop dynamics. This fact has two important implications. First, the data
needed for learning the forward model can no longer be obtained solely by observing
the instantaneous state or output of the plant. Second, the forward model is no
longer independent of the controller: If the parameters of the controller are changed
by a learning algorithm, then the closed-loop dynamics change and so does the
mapping from proximal to distal variables. Thus the learning of the forward model
and the learning of the controller can no longer be separated into different phases.
1
FORWARD MODELING
In this section we briefly summarize our previous work on forward modeling (see
also Nguyen & Widrow, 1989 and Werbos, 1987).
1.1
LEARNING A FORWARD MODEL
Given a fixed control law, the learning of a forward model is a system identification
problem. Let z = g(s, u) be a system to be modeled, where z is the output or the
state-derivative, s is the state, and u is the control. We require the forward model
to minimize the cost functional
$$J_m = \frac{1}{2} \int (z - \hat{z})^{T} (z - \hat{z})\, dt, \qquad (1)$$

where $\hat{z} = \hat{g}(s, u, v)$ is the parameterized function computed by the model. Once
the minimum is found, backpropagation through the model provides an estimate
of the system Jacobian matrix $\partial z / \partial u$ (cf. Jordan, 1988).
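As a toy sketch of this system-identification step, the following fits a linear forward model by gradient descent on $J_m$ and then reads the input-output Jacobian off the learned weights. The linear plant and all constants are illustrative assumptions, not the paper's networks.

```python
import numpy as np

# Sketch of learning a forward model (eq. 1) by gradient descent: a linear
# model z_hat = V @ [s; u] fit to observed plant outputs. All values assumed.
rng = np.random.default_rng(5)
A, B = rng.normal(size=(2, 2)), rng.normal(size=(2, 1))   # unknown plant g(s, u)

V = np.zeros((2, 3))                                      # model parameters v
lr = 0.05
for _ in range(5000):
    s, u = rng.normal(size=2), rng.normal(size=1)
    z = A @ s + B @ u                                     # observed plant output
    x = np.concatenate([s, u])
    V += lr * np.outer(z - V @ x, x)                      # descend J_m

dz_du = V[:, 2:]     # backprop through the model: estimated Jacobian dz/du
print("Jacobian recovered:", np.allclose(dz_du, B, atol=0.05))
```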
1.2
LEARNING A CONTROLLER
Once the forward model is sufficiently accurate, it can be used in the training of the
controller. Backpropagation through the model provides derivatives that indicate
how to change the outputs of the controller. These derivatives can be used to
change the parameters of the controller by a further application of backpropagation.
Figure 1 illustrates the general procedure.
This procedure minimizes the "distal" cost functional
J = (1/2) ∫ (z* − z)ᵀ (z* − z) dt,    (2)
where z* is a reference signal. To see this, let the controller output be given as a
function u = f(s, z*, w) of the state s, the reference signal z*, and a parameter
vector w. Differentiating J with respect to w yields
∇_w J = − ∫ (∂u/∂w)ᵀ (∂z/∂u)ᵀ (z* − z) dt.    (3)
[Figure 1 diagram: the reference z* and the state feed the Feedforward Controller; its output drives the Plant and the Forward Model, and the distal error z* − z is fed back through the model.]
Figure 1: Learning a Controller. The Dashed Line Represents Backpropagation.
The Jacobian matrix ∂z/∂u cannot be assumed to be available a priori, but can be
estimated by backpropagation through the forward model. Thus the error signal
available for learning the controller is the estimated gradient
∇̂_w J = − ∫ (∂u/∂w)ᵀ (∂ẑ/∂u)ᵀ (z* − z) dt.    (4)
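The update in Eq. (4) can be sketched as follows; this is a hedged illustration, not the authors' code. It assumes a linear controller u = w · [s, z*] and a hypothetical model_jacobian(s, u) helper returning the dz/du estimate obtained by backpropagation through the forward model (for example as in the sketch of Section 1.1).

import numpy as np

def controller_step(w, s, z_star, z, model_jacobian, lr=0.01):
    """One gradient step on the distal error (z* - z), per Eq. (4)."""
    x = np.array([s, z_star])
    u = w @ x                      # linear controller (an assumption)
    dz_du = model_jacobian(s, u)   # estimated, not the true plant Jacobian
    # grad_w J is approximated by -(du/dw)^T (dzhat/du)^T (z* - z)
    return w + lr * x * dz_du * (z_star - z)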
We now consider a task in which the foregoing framework must be broadened to
allow a more general form of distal task specification.
2 THE TASK
The task is to learn to regulate an unstable nonminimum-phase plant. We have
chosen the oft-studied (e.g., Barto, Sutton, & Anderson, 1983; Widrow & Smith,
1964) problem of learning to balance an inverted pendulum on a moving cart. The
plant dynamics are given by:
[ M+m
mlcos(J
mlcos(J ] [
I
~
(J
]
+ [ -mlsi~(J
-mglszn(J
]
iP
= [ F0
]
where m is the mass of the pole, M is the mass of the cart, I is half the pole length,
I is the inertia of the pole around its base, and F is the force applied to the cart.
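For concreteness, the 2x2 system above can be solved numerically for the accelerations; this sketch is not from the paper, and the default parameter values (and the choice I = m l²/3 for a uniform pole pivoting at its base) are assumptions.

import numpy as np

def cart_pole_accels(th, th_dot, F, M=1.0, m=0.1, l=0.5, I=None, g=9.8):
    """Solve the mass-matrix equation for (x_ddot, th_ddot)."""
    if I is None:
        I = m * l**2 / 3.0                     # assumed pole inertia
    A = np.array([[M + m,              m * l * np.cos(th)],
                  [m * l * np.cos(th), I                 ]])
    b = np.array([F + m * l * th_dot**2 * np.sin(th),    # the -ml sin term moved
                  m * g * l * np.sin(th)])               # to the right-hand side
    return np.linalg.solve(A, b)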
The task we studied is similar to that studied by Barto, Sutton, & Anderson (1983).
A state-feedback controller provides forces to the cart, and the system evolves until
failure occurs (the cart reaches the end of the track or the pole reaches a critical
angle). The system learns from failure; indeed, it is assumed that the only teaching
information provided by the environment is the signal that failure has occurred.
[Figure 2 diagram: state units feed both the Controller and the Forward Model; the Controller's Action Unit also feeds the Forward Model, whose output is the Temporal Difference Unit.]
Figure 2: The Network Architecture
There are several differences between our task and that studied by Barto, Sutton, &.
Anderson (1983). First, disturbances (white noise) are provided by the environment
rather than by the learning algorithm. This implies that in our experiments the
level of noise seen by the controller does not diminish to zero over the course of
learning. Second, we used real-valued forces rather than binary forces. Finally, we
do not assume the existence of a "reset button" that reinitializes the system to the
origin of state space; upon failure the system is restarted in a random configuration.
3 OUR APPROACH
In our approach, the control system learns a model that relates the current state of
the plant and the current control signal to a prediction of future failure. We make
use of a temporal difference algorithm (Sutton, 1988) to learn the transformation
from (state, action) pairs to an estimate of the inverse of the time until failure.
This mapping is then used as a differentiable forward model in the learning of the
controller-the controller is changed so as to minimize the output of the model and
thereby maximize the time until failure .
The overall system architecture is shown in Figure 2. We describe each component
in detail in the following sections.
An important feature that distinguishes this architecture from previous work (e.g.,
Barto, Sutton, & Anderson, 1983) is the path from the action unit into the forward
model. This path is necessary for supervised learning algorithms to be used (see
also Werbos, 1987).
3.1 LEARNING THE FORWARD MODEL
Temporal difference algorithms learn to make long term predictions by achieving
local consistency between predictions at neighboring time steps, and by grounding
the chain of predictions when information from the environment is obtained. In our
case, if z(t) is the inverse of the time until failure, then consistency is defined by
the requirement that z⁻¹(t) = z⁻¹(t + 1) + 1. The chain is grounded by defining
z(T) = 1, where T is the time step on which failure occurs.
To learn to estimate the inverse of the time until failure, the following temporal
difference error terms are used. For time steps on which failure does not occur,
e(t) = [1 + ẑ⁻¹(t + 1)]⁻¹ − ẑ(t),
where ẑ(t) denotes the output of the forward model. When failure occurs, the target
for the forward model is set to unity:
e(t) = 1 − ẑ(t).
The error signal e(t) is propagated backwards at time t + 1 using activations saved
from time t. Standard backpropagation is used to compute the changes to the
weights.
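A minimal sketch of these error terms (not the authors' code; it assumes the model outputs z_hat are positive so the inverse exists):

def td_errors(z_hat, failed_at_end):
    """Temporal-difference errors for the forward model.
    z_hat: model outputs z(0..T); failed_at_end: True if the trial
    ended in failure at step T."""
    errors = []
    for t in range(len(z_hat) - 1):
        # consistency: z^-1(t) = z^-1(t+1) + 1
        errors.append(1.0 / (1.0 + 1.0 / z_hat[t + 1]) - z_hat[t])
    if failed_at_end:
        errors.append(1.0 - z_hat[-1])   # ground the chain: z(T) = 1
    return errors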
3.2 LEARNING THE CONTROLLER
If the controller is performing as desired, then the output of the forward model
is zero (that is, the predicted time-until-failure is infinity). This suggests that an
appropriate distal error signal for the controller is zero minus the output of the
forward model.
Given that the forward model has the control action as an input, the distal error
can be propagated backward to the hidden units of the forward model, through the
action unit, and into the controller where the weights are changed (see Figure 2).
Thus the controller is changed in such a way as to minimize the output of the
forward model and thereby maximize the time until failure.
3.3 LEARNING THE FORWARD MODEL AND THE CONTROLLER SIMULTANEOUSLY
As the controller varies, the mapping that the forward model must learn also varies.
Thus, if the forward model is to provide reasonable derivatives, it must be continuously updated as the controller changes. We find that it is possible to train the
forward model and the controller simultaneously, provided that we use a larger
learning rate for the forward model than for the controller.
4 MISCELLANY
4.1 RESET
Although previous studies have assumed the existence of a "reset button" that
can restart the system at the origin of state space, we prefer not to make such an
assumption. A reset button implies the existence of a controller that can stabilize
the system, and the task of learning is to find such a controller. In our simulations,
we restart the system at random points in state space after failure occurs.
4.2 REDUNDANCY
The mapping learned by the forward model depends on both the state and the action. The action, however, is itself a function of the state, so the action unit provides
redundant information. This implies that the forward model could have arbitrary
weights in the path from the action unit and yet make reasonable predictions. Such
a model, however, would yield meaningless derivatives for learning the controller.
Fortunately, backpropagation tends to produce meaningful weights for a path that
is correlated with the outcome, even if that path conveys redundant information.
To further bias things in our favor, we found it useful to employ a larger learning
rate in the path from the action unit to the hidden units of the forward model (0.9)
than in the path from the state units (0.3).
4.3 REPRESENTATION
As seen in Figure 2, we chose input representations that take advantage of symmetries in the dynamics of the cart-pole system. The forward model has even symmetry
with respect to the state variables, whereas the controller has odd symmetry.
4.4 LONG-TERM BEHAVIOR
There is never a need to "turn off" the learning of the forward model. Once the pole
is being successfully balanced in the presence of fluctuations, the average time until
failure goes to infinity. The forward model therefore learns to predict zero in the
region of state space around the origin, and the error propagated to the controller
also goes to zero.
5 RESULTS
We ran twenty simulations starting with random initial weights. The learning rate
for the controller was 0.05 and the learning rate for the forward model was 0.3,
except for the connection from the action unit where the learning rate was 0.9.
Eighteen runs converged to controller configurations that balanced the pole, and
two runs converged on local minima. Figure 3 shows representative learning curves
for six of the successful runs.
To obtain some idea of the size of the space of correct solutions, we performed an
exhaustive search of a lattice in a rectangular region of weight space that contained
[Figure 3 plot: average time until failure vs. bins (1 bin = 20 failures).]
Figure 3: Learning Curves for Six Runs
all of the weight configurations found by our simulations. As shown in Figure 4,
only 15 out of 10,000 weight configurations were able to balance the pole.
6 CONCLUSIONS
Previous work within the forward modeling paradigm focused on models of fixed
kinematic or dynamic properties of the controlled plant (Jordan, 1988,1990; Jordan
&, Rumelhart, 1990). In the current paper, the notion of a forward model is broader.
The function that must be modeled depends not only on properties of the controlled
plant, but also on properties of the controller. Nonetheless, the mapping is welldefined, and the results demonstrate that it can be used to provide appropriate
incremental changes for the controller.
These results provide further demonstration of the applicability of supervised learning algorithms to learning control problems in which explicit target information is
not available.
Acknowledgments
The first author was supported by BRSG 2 S07 RR07047-23 awarded by the Biomedical Research Support Grant Program, Division of Research Resources, National
Institutes of Health and by a grant from Siemens Corporation. The second author was supported by the Air Force Office of Scientific Research, through grant
AFOSR-87-0030.
[Figure 4 plot: log frequency vs. median time steps until failure.]
Figure 4: Performance of Population of Controllers
References
Barto, A. G., Sutton, R. S., & Anderson, C. W. (1983). Neuronlike adaptive elements that can solve difficult learning control problems. IEEE Transactions on
Systems, Man, and Cybernetics, SMC-13, 834-846.
Jordan, M. I. (1988). Supervised learning and systems with excess degrees of freedom. (COINS Tech. Rep. 88-27). Amherst, MA: University of Massachusetts,
Computer and Information Sciences.
Jordan, M. I. (1990). Motor learning and the degrees of freedom problem. In M.
Jeannerod (Ed.), Attention and Performance, XIII. Hillsdale, NJ: Erlbaum.
Jordan, M. I. & Rumelhart, D. E. (1990). Supervised learning with a distal teacher.
Paper in preparation.
Nguyen, D. & Widrow, B. (1989). The truck backer-upper: An example of self-learning in neural networks. In: Proceedings of the International Joint Conference
on Neural Networks. Piscataway, NJ: IEEE Press.
Sutton, R. S. (1988). Learning to predict by the methods of temporal differences.
Machine Learning, 3, 9-44.
Werbos, P. (1987). Building and understanding adaptive systems: A statistical/numerical approach to factory automation and brain research. IEEE Transactions on Systems, Man, and Cybernetics, 17, 7-20.
Widrow, B. & Smith, F. W. (1964). Pattern-recognizing control systems. In: Computer and Information Sciences Proceedings, Washington, D.C.: Spartan.
| 199 |@word briefly:1 simulation:3 jacob:5 thereby:2 minus:1 initial:1 configuration:4 current:4 activation:1 yet:1 must:4 numerical:1 motor:1 half:1 smith:2 provides:4 welldefined:1 indeed:1 degress:1 behavior:1 brain:2 torque:1 jm:1 considering:1 provided:3 mass:2 minimizes:1 transformation:3 corporation:1 nj:2 temporal:6 control:16 unit:11 broadened:1 grant:3 local:2 tends:1 sutton:8 solely:1 path:7 fluctuation:1 backer:1 chose:1 studied:6 suggests:1 smc:1 acknowledgment:1 backpropagation:7 procedure:2 cannot:1 go:3 attention:1 starting:1 rectangular:1 focused:1 population:1 notion:1 coordinate:9 updated:1 target:2 origin:3 element:1 rumelhart:4 werbos:4 wj:1 region:2 ran:1 balanced:2 environment:3 dynamic:7 upon:1 division:1 easily:1 joint:2 train:1 separated:1 describe:1 spartan:1 outcome:1 exhaustive:1 larger:2 valued:1 foregoing:1 solve:1 favor:1 itself:1 ip:1 advantage:1 differentiable:1 reset:4 neighboring:1 loop:3 oz:1 requirement:1 produce:1 incremental:1 widrow:3 odd:1 predicted:1 indicate:1 implies:3 saved:1 correct:1 sgn:4 bin:2 hillsdale:1 require:1 sufficiently:1 credit:2 around:2 diminish:1 mapping:5 predict:2 successfully:1 mit:1 rather:2 r_:2 barto:6 broader:1 office:1 tech:1 hidden:2 overall:1 priori:2 spatial:1 wor:1 once:3 never:1 washington:1 represents:1 future:1 xiii:1 employ:1 distinguishes:1 simultaneously:2 national:1 phase:2 freedom:2 neuronlike:1 kinematic:2 nl:1 chain:2 implication:1 accurate:1 integral:1 necessary:1 desired:1 modeling:11 assignment:2 lattice:1 cost:2 pole:8 applicability:1 recognizing:1 successful:1 erlbaum:1 varies:2 teacher:1 proximal:4 international:1 amherst:2 off:1 michael:1 continuously:1 cognitive:1 derivative:5 stabilize:1 automation:1 depends:4 performed:1 closed:3 observing:1 pendulum:1 selflearning:1 minimize:3 air:1 yield:2 identification:1 cybernetics:2 converged:2 reach:2 ed:1 failure:18 nonetheless:1 frequency:1 conveys:1 naturally:1 propagated:3 massachusetts:2 ou:3 dt:3 supervised:4 methodology:3 specify:1 anderson:6 furthermore:1 biomedical:1 until:10 lei:2 scientific:1 manipulator:1 building:1 grounding:1 white:1 distal:13 demonstrate:1 motion:1 instantaneous:1 functional:2 endpoint:1 extend:1 discussed:1 occurred:1 cambridge:1 consistency:2 teaching:1 moving:1 specification:2 f0:1 longer:3 base:1 awarded:1 binary:1 rep:1 inverted:2 seen:2 minimum:2 fortunately:1 maximize:2 redundant:2 paradigm:1 signal:7 dashed:1 relates:1 ii:1 long:2 controlled:3 prediction:5 controller:40 nonminimum:1 grounded:1 whereas:1 separately:1 median:1 meaningless:1 cart:6 jeannerod:1 thing:1 jordan:14 presence:1 backwards:1 feedforward:1 architecture:3 idea:1 six:2 action:11 generally:1 useful:1 estimated:2 track:1 redundancy:1 achieving:1 backward:1 button:3 run:4 angle:2 parameterized:1 inverse:3 reasonable:2 prefer:1 truck:1 occur:1 infinity:2 performing:1 piscataway:1 unity:1 evolves:1 resource:1 turn:1 needed:1 end:1 available:5 regulate:1 appropriate:2 coin:1 existence:3 broaden:1 denotes:1 include:1 cf:2 quantity:1 occurs:4 ow:2 gradient:1 restart:2 unstable:6 length:1 modeled:3 relationship:1 balance:2 demonstration:1 difficult:1 robert:1 zt:1 twenty:1 upper:1 eighteen:1 defining:1 reinitializes:1 arbitrary:1 pair:1 specified:1 connection:1 learned:2 beyond:1 able:1 below:1 pattern:1 oft:1 summarize:1 program:1 critical:1 natural:1 force:5 disturbance:1 ozt:1 health:1 understanding:1 law:1 afosr:1 plant:7 degree:1 course:1 changed:4 supported:2 bias:1 allow:1 institute:1 differentiating:1 feedback:1 curve:2 transition:1 forward:47 
inertia:1 author:2 adaptive:2 nguyen:2 transaction:2 miscellany:1 excess:1 assumed:3 search:1 learn:5 symmetry:3 noise:2 representative:1 explicit:1 factory:1 jacobian:2 learns:4 illustrates:1 contained:1 restarted:1 ma:3 man:2 change:6 except:1 siemens:1 meaningful:1 support:1 preparation:1 evaluate:1 correlated:1 |
1,084 | 1,990 | Information-Geometrical Significance of
Sparsity in Gallager Codes
Toshiyuki Tanaka
Department of Electronics and Information Engineering
Tokyo Metropolitan University
Tokyo 192-0397, Japan
tanaka@eei.metro-u.ac.jp
Shiro Ikeda
Kyushu Institute of Technology & JST
Fukuoka 808-0196, Japan
shiro@brain.kyutech.ac.jp
Shun-ichi Amari
RIKEN, Brain Science Institute
Saitama 351-0198, Japan
amari@brain.riken.go.jp
Abstract
We report a result of perturbation analysis on decoding error of the belief
propagation decoder for Gallager codes. The analysis is based on information geometry, and it shows that the principal term of decoding error
at equilibrium comes from the m-embedding curvature of the log-linear
submanifold spanned by the estimated pseudoposteriors, one for the full
marginal, and K for partial posteriors, each of which takes a single check
into account, where K is the number of checks in the Gallager code. It is
then shown that the principal error term vanishes when the parity-check
matrix of the code is so sparse that there are no two columns with overlap
greater than 1.
1 Introduction
Recent progress on error-correcting codes has attracted much attention because their decoders, exhibiting performance very close to Shannon's limit, can be implemented as neural networks. Important examples are turbo codes and Gallager codes [1]. It is now well
understood that application of belief propagation to the respective graphical representations of the decoding problems for both codes yields practically efficient decoding algorithms which are the same as the existing ones (the turbo decoding [2] and the sum-product
decoding [3], respectively). They are, however, not exact but approximate, since the associated graphical representations have loops in both cases. An important problem posed
is to quantify the effect that comes from the existence of loops in the underlying graph.
The so-called TAP approach [4] in statistical physics is an alternative way to formulate the
same decoding algorithm [5]. Since this approach also assumes that the underlying graph
is locally loop-free, one is faced with the same problem as above.
In this paper, we analyze the properties of the belief propagation decoder to Gallager codes,
expecting that better theoretical understanding of the properties of the belief propagation
[Figure 1 diagram: an information vector s passes through the generator matrix G^T to give the codeword t, which crosses a BSC(σ) to give the received vector r; the parity-check matrix A and a BSC(σ₀) give the syndrome vector z.]
Figure 1: Gallager code
decoder will help understand the properties and efficiency of belief propagation in general,
applied to loopy graphs, as well as those of the TAP approach. We specifically make use of
the information geometry [6] and report a result of perturbation analysis on decoding error
of the belief propagation decoder.
2 Gallager codes
Gallager code is defined by its parity-check matrix A, which has the form
A = [C₁ | C₂],    (1)
where C₁ and C₂ are K × M and K × K matrices, both of which are taken to be very
sparse. C₂ is assumed invertible. We define the generator matrix of the Gallager code to be
G^T = [    I
        C₂⁻¹C₁ ]    (2)
where I is the M × M identity matrix. AG^T = O mod 2 holds by definition.
The whole model of communication with the Gallager code is shown in Fig. 1. An information vector s of length M is encoded into a codeword t = G^T s mod 2 of length N ≡ K + M.
The codeword t is then transmitted over a channel. We assume that the transmission channel is a binary symmetric channel (BSC) with bit-error probability σ. The received vector
is then r = t + n mod 2, where n is the noise vector. Decoder tries to find the most probable
x satisfying the parity-check equation
Ax = z mod 2,
(3)
where z ≡ Ar mod 2 is the syndrome vector. Since At = AG^T s = 0 mod 2, we have
= 0 mod 2, we have
z = An mod 2. Therefore, the solution x serves as an estimate of the noise vector n. If we
are successful in finding the true noise vector n, we can reconstruct, from r, the original
codeword t by t = r + n mod 2, and then the information vector s. Since Eq. (3) is
underdetermined, one has to take into account the prior knowledge of the noise in order to
solve it properly.
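The construction so far can be illustrated with a small numpy sketch (not from the paper; the tiny sizes and the trivial choice C₂ = I are assumptions made so the example stays short):

import numpy as np

rng = np.random.default_rng(1)
M, K = 4, 8                                  # so N = K + M = 12

C1 = rng.integers(0, 2, (K, M))              # sparse in realistic codes
C2 = np.eye(K, dtype=int)                    # trivially invertible mod 2
A = np.hstack([C1, C2]) % 2                  # parity-check matrix, Eq. (1)

GT = np.vstack([np.eye(M, dtype=int), C1]) % 2   # generator, Eq. (2)
assert not (A @ GT % 2).any()                # A G^T = O mod 2

s = rng.integers(0, 2, M)                    # information vector
t = GT @ s % 2                               # codeword
n = (rng.random(M + K) < 0.1).astype(int)    # BSC noise, sigma = 0.1
r = (t + n) % 2                              # received vector
z = A @ r % 2                                # syndrome, Eq. (3)
assert ((A @ n % 2) == z).all()              # z = A n mod 2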
The decoding problem can be cast into the Bayes framework. In the sequel, we transform
expression of a bit from binary (1, 0) to bipolar (−1, 1). The prior for x is
p(x) = exp[β 1 · x − N ψ(β)],    ψ(β) ≡ log(e^β + e^{−β}),    (4)
where 1 is an N-dimensional vector whose elements are all 1, i.e., 1 ≡ [1, . . . , 1]. β is a
parameter which is related with the bit-error probability σ of the transmission channel by
σ = (1/2)(1 − tanh β).    (5)
For the sake of analytical tractability, we assume that the syndrome vector z is observed via
another BSC channel with bit-error probability σ₀ (see Fig. 1). This leads to
p(z|x) ∝ exp[γ Σ_{r=1}^{K} z_r Π_{i∈L(r)} x_i],    (6)
where L(r) is the set of all indices of nonzero elements in row r of the parity-check matrix
A, i.e., L(r) ≡ {i | A_{ri} = 1}, and γ is defined by σ₀ = (1/2)(1 − tanh γ). If we take the
limit γ → +∞, then we recover the conventional situation of observing the syndrome in
a deterministic way. In what follows, we consider the case in which γ is finite, or equivalently, the case with soft parity-check constraints. Since experimental findings suggest that
it is usually the case for decoding results of Gallager codes to violate no parity-check constraints [3], we believe that making the parity-check constraints soft does not alter essential
properties of the problem.
3 Decoding
The posterior distribution of x for given observed syndrome z is derived from the prior p(x)
and the conditional p(z | x) by applying the Bayes formula, and the result is
p(x|z) ∝ exp[c₀(x) + γ Σ_{r=1}^{K} c_r(x)],    (7)
where we let
c₀(x) ≡ β 1 · x,    c_r(x) ≡ z_r Π_{i∈L(r)} x_i    (r = 1, . . . , K).    (8)
The objective of the decoder of Gallager codes is to obtain the marginal-posterior-mode
(MPM) estimate based on the posterior p(x|z):
x̂_i = arg max_{x_i} Σ_{x\x_i} p(x|z).    (9)
The MPM estimation provides the Bayes-optimum decoder minimizing expected bit-error
probability of the decoding results. However, the marginalization is in general computationally hard, which renders the decoding problem intractable. An approximate decoding
algorithm, originally proposed by Gallager himself [1], is known to be efficient in practice.
It has been recently rediscovered by MacKay [3] by application of the belief propagation
to the underlying graphical model. Murayama et al. [5] also formulated the same algorithm based on the so-called TAP approach [4]. The decoder implementing the algorithm
is called the belief propagation decoder, and is analyzed in the following.
We define a family of distributions with a set of parameters θ = (θ₁, . . . , θ_N)^T and v =
(v₁, . . . , v_K):
S = {p(x; θ, v) | p(x; θ, v) = exp[θ · x + v · c(x) − φ(θ, v)]},    (10)
where c(x) ≡ (c₁(x), . . . , c_K(x))^T. The family S includes the factorizable test distribution p₀(x; θ₀) (= p(x; θ₀, 0)), the true posterior p(x|z) (= p(x; β1, γ1)), and K partial
posteriors p_r(x; θ_r) (= p(x; θ_r, γe_r); e_r ≡ (0, . . . , 0, 1, 0, . . . , 0)^T, with the 1 in
the r-th position).
We then define the expectation parameter η(θ, v) by
η(θ, v) ≡ Σ_x x p(x; θ, v).    (11)
The marginalization in Eq. (9) corresponds to evaluating the expectation parameter of the
true posterior. We now introduce the equimarginal family
M(θ₀) ≡ {p(x; θ, v) | η(θ, v) = η(θ₀, 0)},    (12)
and define the marginalization operator Π as follows: For p ∈ S, Π ∘ p ≡ θ₀ if p ∈ M(θ₀).
Knowing θ₀ = Π ∘ p is regarded as being equivalent to knowing the expectation parameter
of p, since η(θ₀, 0) is easily evaluated from θ₀; in other words, the marginalization is
tractable for distributions belonging to the factorizable model:
M₀ ≡ {p₀(x; θ₀) ≡ p(x; θ₀, 0) = exp(θ₀ · x − φ₀(θ₀))}.    (13)
The basic assumption of iterative decoding is that the marginalization is also tractable for
the models corresponding to constituent decoders with single checks, with factorizable
priors:
M_r ≡ {p_r(x; ζ) ≡ p(x; ζ, γe_r) = exp(ζ · x + γ c_r(x) − φ_r(ζ))}.    (14)
The algorithm of the belief propagation decoder is described in the notation employed here
as follows:
Initialization: Let t = 0 and ζ_r^0 = β1, r = 1, . . . , K.
Horizontal step: Evaluate the marginalization of p_r(x; ζ_r^t) ∈ M_r to produce a
guess θ_r^t based on the current prior information ζ_r^t and the check z_r:
θ_r^t = Π ∘ p_r(x; ζ_r^t),    r = 1, . . . , K,    (15)
and calculate a net contribution (the 'cavity field' [7]) from the check z_r by subtracting the prior information:
ξ_r^t = θ_r^t − ζ_r^t.    (16)
(It should be noted that (ξ_r^t)_i = 0 holds for i ∉ L(r), as it should be, since the
constituent decoder with check z_r cannot provide any contribution regarding information of x_i, i ∉ L(r).)
Vertical step: Compose the 'leave-one-out' estimates [7]
ζ_r^{t+1} = β1 + Σ_{r′≠r} ξ_{r′}^t,    r = 1, . . . , K,    (17)
and the pseudoposterior
θ^{t+1} = β1 + Σ_{r=1}^{K} ξ_r^t.    (18)
Iterate the above steps until convergence is achieved. The desired decoding result
η(β1, γ1) is then approximated by η(θ*, 0), where θ* is the convergent value of
{θ^t}.
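As a sketch, the horizontal step Π ∘ p_r has a closed form for a single parity check (the standard tanh rule of sum-product decoding); the numpy code below is an illustration under that identity and a parallel update schedule, not the authors' implementation.

import numpy as np

def bp_decode(A, z, beta, gamma, n_iter=50):
    """Belief propagation for the posterior (7).
    A: K x N binary parity-check matrix; z: syndrome in {-1, +1}^K;
    beta: prior field; gamma: soft check strength.
    Returns the pseudoposterior field theta of Eq. (18)."""
    K, N = A.shape
    checks = [np.flatnonzero(A[r]) for r in range(K)]
    xi = np.zeros((K, N))                        # cavity fields
    for _ in range(n_iter):
        theta = beta + xi.sum(axis=0)            # Eq. (18)
        for r in range(K):
            zeta = theta[checks[r]] - xi[r, checks[r]]   # leave-one-out, Eq. (17)
            for idx, i in enumerate(checks[r]):
                prod = np.prod(np.tanh(np.delete(zeta, idx)))
                # horizontal step, Eqs. (15)-(16), via the tanh rule:
                xi[r, i] = np.arctanh(np.tanh(gamma) * z[r] * prod)
    return beta + xi.sum(axis=0)                 # sign gives the MPM estimate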
4 Information-geometrical characterization of equilibrium
Assume that the convergence is achieved and let (θ*, ξ₁*, . . . , ξ_K*) be the equilibrium values
of (θ^t, ξ₁^t, . . . , ξ_K^t). Then, from Eqs. (15) and (16), we have
Π ∘ p_r(x; θ* − ξ_r*) = θ*,    r = 1, . . . , K.    (19)
This means that p₀(x; θ*) and p_r(x; θ* − ξ_r*), r = 1, . . . , K, are equimarginal, that is,
p_r(x; θ* − ξ_r*) ∈ M(θ*),    r = 1, . . . , K    (20)
[Figure 2 diagram: within S, the submanifolds M₀, M₁, . . . , M_K, the points p₀(x; θ*) and p_r(x; θ* − ξ_r*), the equimarginal family M(θ*), the log-linear submanifold E(θ*), and the true posterior p(x|z).]
Figure 2: Geometric structure of belief propagation decoder
holds. Another property of the equilibrium is the log-linear relation
log p(x|z) − log p₀(x; θ*) = Σ_{r=1}^{K} [log p_r(x; θ* − ξ_r*) − log p₀(x; θ*)] + const.    (21)
or, in the (θ, v) coordinate,
(β1, γ1) − (θ*, 0) = Σ_{r=1}^{K} [(θ* − ξ_r*, γe_r) − (θ*, 0)].    (22)
This means that the true posterior p(x|z) belongs to the 'log-linear submanifold' E(θ*),
the affine subspace in the (θ, v)-coordinate rooted at (θ*, 0) and spanned by (−ξ_r*, γe_r),
r = 1, . . . , K.
These two properties do not imply p(x|z) ∈ M(θ*). In fact, if we were to assume, instead
of the log-linear relation (21), the linear relation
p(x|z) − p₀(x; θ*) = Σ_{r=1}^{K} [p_r(x; θ* − ξ_r*) − p₀(x; θ*)],    (23)
then we would have p(x|z) ∈ M(θ*) and thus η(β1, γ1) = η(θ*, 0). This is not the
case because of the difference between the linear and log-linear relations. To what degree
the log-linear relation deviates from the linear relation determines the decoding error of
the belief propagation decoder. The structure is best described on the basis of information
geometry [6]. Figure 2 illustrates the geometric structure of the belief propagation decoder.
It should be noted that the geometrical structure shown here is essentially the same as that
for the turbo decoding [8, 9].
5 Main result
Based on the information geometry, we have evaluated the decoding error, the difference between the true expectation η(β1, γ1) and its estimate by the belief propagation decoder,
η(θ*, 0), via perturbation analysis. Taking into account the terms up to second order, we
have
η(β1, γ1) − η(θ*, 0) = (γ²/2) Σ_{r,s; r≠s} B_{rs} η(θ*, 0) + O(γ³),    (24)
where
B_{rs} ≡ (∂/∂v_r − Σ_{k=1}^{N} g^{kk} A_r^k ∂/∂θ_k)(∂/∂v_s − Σ_{j=1}^{N} g^{jj} A_s^j ∂/∂θ_j),    (25)
and
A_r^i ≡ ∂η_i(θ*, 0)/∂v_r = Cov_{θ*,0}[x_i, c_r(x)].    (26)
{B_{rs}} are the elements of the m-embedding curvature tensor of the manifold E(θ*) in S.
g^{ii} ≡ 1/(1 − η_i(θ*, 0)²) are the diagonal elements of the inverse of the Fisher information
of p₀(x; θ*). This is the generalization of the result obtained for the turbo decoding [8].
Explicit calculation gives the following theorem.
Theorem 1. The decoding error of the belief propagation decoder is given, within the
second order with respect to γ, by
η_i(β1, γ1) − η_i(θ*, 0)
  = γ²(1 − η_i²) [ −η_i Σ_{r,s: r≠s, i∈L(r)∩L(s)} z_r z_s Σ_{j∈(L(r)∩L(s))\i} (1 − η_j²)
      Π_{k∈(L(r)∩L(s))\{i,j}} η_k² Π_{l∈(L(r)\L(s))∪(L(s)\L(r))} η_l
  + Σ_{r,s: r≠s, i∈L(r)∩L(s)} z_r z_s (1 − Π_{j∈L(r)∩L(s)} η_j²
      − Σ_{j∈L(r)∩L(s)} (1 − η_j²) Π_{k∈(L(r)∩L(s))\j} η_k²) Π_{l∈[(L(r)∩L(s))\i]∪[L(s)\L(r)]} η_l ]
  + O(γ³)    (27)
where η_i ≡ η_i(θ*, 0).
From this theorem, it immediately follows that:
Corollary 2. If the parity-check matrix A has no two columns with overlap greater than
1, then the principal error term, given in Eq. (27), vanishes.
These are the main result of this paper.
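Checking the condition of Corollary 2 for a given parity-check matrix is straightforward; a small sketch (an illustration, not from the paper):

import numpy as np

def max_column_overlap(A):
    """Largest overlap between distinct columns of A, i.e. the number
    of rows in which both columns contain a 1."""
    G = A.T @ A                  # Gram matrix of the columns
    np.fill_diagonal(G, 0)
    return int(G.max())

# Corollary 2: if max_column_overlap(A) <= 1, the principal
# second-order error term of the belief propagation decoder vanishes.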
6 Discussion
The general result given in Eq. (24) shows that the principal error term is not coordinate
invariant, since the summation with respect to r and s in the right-hand side of Eq. (24)
excludes terms with r = s. This corresponds to the empirical fact that the performance does
depend on the design of the code, that is, the choice of the parity-check matrix A. Explicit
evaluation of the principal error term, as in Theorem 1, makes it possible to improve the
performance of a code, just in the same way as the perturbational approach to improving
the naive mean-field approximation [10, 11, 12, 13, 14, 15, 16, 17].
It is believed [3] that Gallager codes have smaller average probability of decoding error
if we avoid any two columns of the parity-check matrix A to have overlap greater than 1.
An intuitive explanation to this belief is that such avoidance prevents loops with length 4
from appearing in the graphical representation. Since short loops are expected to do harm
in proper functioning of belief propagation, their existence may raise the possibility of
decoding errors. Our result supports this belief by showing analytically that the principal
term of decoding error vanishes when the parity-check matrix of the code is so sparse and
prepared with care so that there are no two columns with overlap greater than 1.
Loops with length longer than 4 do not contribute to the decoding error at least via the
principal term, but they may have effects via higher-order terms. Our analysis presented
here can be extended in a straightforward manner to higher-order perturbation analysis in
order to quantify these effects.
It should be noted that our approach taken in this paper is different from the common
approach to analyzing the properties of the belief propagation decoder in the literature,
in that we do not consider ensembles of codes. A typical reasoning found in the literature
(e.g., [18]) is first to consider an ensemble of random parity-check matrices, to state that the
probability (over the ensemble) of containing short loops in the associated graph decreases
down to zero as the size of the parity-check matrix tends to infinity, and to assume that the
behavior of the belief propagation decoder for codes with longer loops is the same as that of
belief propagation for loop-free case. The statistical-mechanical approach to performance
analysis of Gallager-type codes [5] also assumes random ensembles. Our analysis, on the
other hand, does not assume ensembles but allows, although asymptotically, performance
evaluation of the belief propagation decoder to Gallager codes with any single instance of
the parity-check matrix with finite size.
Acknowledgments
The authors would like to thank Dr. Yoshiyuki Kabashima for his helpful suggestions and
comments.
References
[1] R. G. Gallager, Low Density Parity Check Codes, Ph.D. Thesis, Mass. Inst. Tech., 1960.
[2] R. J. McEliece, D. J. C. MacKay, and J. Cheng, "Turbo decoding as an instance of Pearl's 'belief
propagation' algorithm," IEEE J. Select. Areas Commun., vol. 16, no. 2, pp. 140-152, 1998.
[3] D. J. C. MacKay, "Good error-correcting codes based on very sparse matrices," IEEE Trans.
Inform. Theory, vol. 45, no. 2, pp. 399-431, 1999.
[4] D. J. Thouless, P. W. Anderson, and R. G. Palmer, "Solution of 'Solvable model of a spin glass',"
Phil. Mag., vol. 35, no. 3, pp. 593-601, 1977.
[5] T. Murayama, Y. Kabashima, D. Saad, and R. Vicente, "Statistical physics of regular low-density
parity-check error-correcting codes," Phys. Rev. E, vol. 62, no. 2, pp. 1577-1591, 2000.
[6] S. Amari and H. Nagaoka (transl. D. Harada), Methods of Information Geometry, Translations of Mathematical Monographs, vol. 191, American Math. Soc., 2000.
[7] Y. Kabashima and D. Saad, "The TAP approach to intensive and extensive connectivity systems,"
in M. Opper and D. Saad (eds.), Advanced Mean Field Methods: Theory and Practice, The
MIT Press, 2001, pp. 65-84.
[8] S. Ikeda, T. Tanaka, and S. Amari, "Information geometrical framework for analyzing belief
propagation decoder," in T. G. Dietterich et al. (eds.), Advances in Neural Information Processing Systems, vol. 14 (this volume), The MIT Press, 2002.
[9] S. Ikeda, T. Tanaka, and S. Amari, "Information geometry of turbo codes and low-density parity-check codes," submitted to IEEE Trans. Inform. Theory, 2001.
[10] H. J. Kappen and F. B. Rodriguez, "Efficient learning in Boltzmann machines using linear
response theory," Neural Computation, vol. 10, no. 5, pp. 1137-1156, 1998.
[11] H. J. Kappen and F. B. Rodriguez, "Boltzmann machine learning using mean field theory and
linear response correction," in M. I. Jordan et al. (eds.), Advances in Neural Information Processing Systems, vol. 10, The MIT Press, 1998, pp. 280-286.
[12] T. Tanaka, "A theory of mean field approximation," in M. S. Kearns et al. (eds.), Advances in
Neural Information Processing Systems, vol. 11, The MIT Press, 1999, pp. 351-357.
[13] T. Tanaka, "Information geometry of mean-field approximation," Neural Computation, vol. 12,
no. 8, pp. 1951-1968, 2000.
[14] J. S. Yedidia, "An idiosyncratic journey beyond mean field theory," in M. Opper and D. Saad
(eds.), Advanced Mean Field Methods: Theory and Practice, The MIT Press, 2001, pp. 21-35.
[15] H. J. Kappen and W. J. Wiegerinck, "Mean field theory for graphical models," in M. Opper and
D. Saad (eds.), Advanced Mean Field Methods: Theory and Practice, The MIT Press, 2001,
pp. 37-49.
[16] S. Amari, S. Ikeda, and H. Shimokawa, "Information geometry of α-projection in mean field
approximation," in M. Opper and D. Saad (eds.), Advanced Mean Field Methods: Theory and
Practice, The MIT Press, 2001, pp. 241-257.
[17] T. Tanaka, "Information geometry of mean-field approximation," in M. Opper and D. Saad
(eds.), Advanced Mean Field Methods: Theory and Practice, The MIT Press, 2001, pp. 259-273.
[18] T. J. Richardson and R. L. Urbanke, "The capacity of low-density parity-check codes under
message-passing decoding," IEEE Trans. Inform. Theory, vol. 47, no. 2, pp. 599-618, 2001.
| 1990 |@word c0:2 p0:9 kappen:3 electronics:1 equimarginal:2 mag:1 existing:1 current:1 attracted:1 ikeda:4 v:1 guess:1 mpm:2 short:2 characterization:1 provides:1 contribute:1 math:1 mathematical:1 c2:4 transl:1 compose:1 manner:1 introduce:1 expected:2 behavior:1 p1:1 brain:3 underlying:3 notation:1 mass:1 what:2 ag:1 finding:2 bipolar:1 k2:2 t1:1 engineering:1 understood:1 tends:1 limit:2 analyzing:2 initialization:1 palmer:1 acknowledgment:1 practice:6 perturbational:1 empirical:1 projection:1 word:1 regular:1 suggest:1 cannot:1 close:1 operator:1 applying:1 conventional:1 deterministic:1 equivalent:1 phil:1 go:1 attention:1 straightforward:1 formulate:1 immediately:1 correcting:3 m2:1 avoidance:1 regarded:1 spanned:2 his:1 embedding:2 coordinate:3 exact:1 secondorder:1 element:4 satisfying:1 approximated:1 ark:1 observed:2 calculate:1 decrease:1 expecting:1 monograph:1 vanishes:3 depend:1 raise:1 efficiency:1 basis:1 easily:1 riken:2 whose:1 encoded:1 posed:1 solve:1 amari:6 reconstruct:1 cov:1 richardson:1 nagaoka:1 transform:1 analytical:1 net:1 subtracting:1 product:1 loop:9 murayama:2 intuitive:1 constituent:2 convergence:2 transmission:2 optimum:1 produce:1 leave:1 tk:1 help:1 ac:2 received:2 progress:1 eq:6 p2:1 soc:1 implemented:1 come:2 quantify:2 exhibiting:1 tokyo:2 jst:1 implementing:1 shun:1 generalization:1 probable:1 underdetermined:1 summation:1 correction:1 hold:3 practically:1 exp:6 equilibrium:4 m0:2 estimation:1 tanh:2 metropolitan:1 bsc:4 mit:8 avoid:1 cr:4 corollary:1 ax:1 derived:1 properly:1 check:24 tech:1 glass:1 helpful:1 inst:1 relation:6 arg:1 mackay:3 marginal:2 field:14 alter:1 report:2 thouless:1 geometry:9 message:1 rediscovered:1 possibility:1 evaluation:2 analyzed:1 partial:2 respective:1 urbanke:1 desired:1 theoretical:1 mk:1 instance:2 metro:1 column:4 soft:2 ar:1 loopy:1 tractability:1 saitama:1 submanifold:2 harada:1 successful:1 density:4 sequel:1 physic:2 decoding:26 invertible:1 connectivity:1 thesis:1 containing:1 dr:1 american:1 japan:3 account:3 includes:1 eei:1 try:1 analyze:1 observing:1 bayes:3 recover:1 contribution:2 spin:1 ensemble:5 yield:1 toshiyuki:1 kabashima:3 submitted:1 inform:3 phys:1 ed:8 definition:1 pp:14 associated:2 knowledge:1 originally:1 higher:2 response:2 evaluated:2 anderson:1 just:1 until:1 mceliece:1 hand:2 horizontal:1 propagation:21 rodriguez:2 mode:1 shimokawa:1 believe:1 effect:3 dietterich:1 true:5 functioning:1 analytically:1 symmetric:1 nonzero:1 rooted:1 noted:3 geometrical:4 reasoning:1 ari:2 recently:1 common:1 jp:3 volume:1 m1:1 shiro:2 longer:2 gt:3 curvature:2 posterior:8 recent:1 belongs:1 commun:1 codeword:4 binary:2 transmitted:1 greater:4 care:1 mr:2 syndrome:5 employed:1 r0:1 ii:2 full:1 violate:1 calculation:1 believed:1 basic:1 himself:1 expectation:4 essentially:1 yoshiyuki:1 achieved:2 c1:4 saad:7 comment:1 mod:8 jordan:1 iterate:1 marginalization:6 fukuoka:1 regarding:1 knowing:2 br:3 intensive:1 expression:1 render:1 passing:1 prepared:1 locally:1 ph:1 estimated:1 vol:11 ichi:1 v1:1 graph:4 excludes:1 asymptotically:1 sum:1 inverse:1 journey:1 family:3 bit:5 convergent:1 cheng:1 turbo:6 constraint:3 infinity:1 sake:1 kyushu:1 department:1 belonging:1 smaller:1 rev:1 making:1 invariant:1 pr:9 taken:2 computationally:1 equation:1 tractable:2 serf:1 yedidia:1 appearing:1 alternative:1 existence:2 original:1 assumes:2 graphical:5 const:1 tensor:1 objective:1 rt:12 diagonal:1 subspace:1 thank:1 capacity:1 decoder:22 manifold:1 code:30 length:4 index:1 kk:1 minimizing:1 equivalently:1 
idiosyncratic:1 design:1 proper:1 boltzmann:2 vertical:1 finite:2 situation:1 extended:1 communication:1 perturbation:4 cast:1 mechanical:1 extensive:1 tap:4 pearl:1 tanaka:7 trans:3 beyond:1 usually:1 sparsity:1 max:1 explanation:1 belief:23 overlap:4 solvable:1 zr:7 advanced:5 improve:1 technology:1 imply:1 naive:1 faced:1 prior:6 understanding:1 geometric:2 deviate:1 literature:2 suggestion:1 generator:2 degree:1 affine:1 translation:1 row:1 parity:18 free:2 side:1 understand:1 institute:2 taking:1 sparse:4 opper:5 evaluating:1 kyutech:1 author:1 approximate:2 cavity:1 harm:1 assumed:1 xi:4 iterative:1 channel:5 improving:1 factorizable:3 significance:1 main:2 whole:1 noise:4 fig:2 vr:2 explicit:2 r6:1 formula:1 theorem:4 down:1 showing:1 er:5 essential:1 intractable:1 agt:1 illustrates:1 gallager:17 prevents:1 corresponds:2 determines:1 conditional:1 identity:1 formulated:1 fisher:1 hard:1 vicente:1 specifically:1 typical:1 wiegerinck:1 principal:7 kearns:1 called:3 experimental:1 shannon:1 select:1 support:1 evaluate:1 |
1,085 | 1,991 | A Rotation and Translation Invariant Discrete
Saliency Network
Lance R. Williams
Dept. of Computer Science
Univ. of New Mexico
Albuquerque, NM 87131
John W. Zweck
Dept. of CS and EE
Univ. of Maryland Baltimore County
Baltimore, MD 21250
Abstract
We describe a neural network which enhances and completes salient
closed contours. Our work is different from all previous work in three
important ways. First, like the input provided to V1 by LGN, the input to our computation is isotropic. That is, the input is composed of
spots not edges. Second, our network computes a well defined function
of the input based on a distribution of closed contours characterized by
a random process. Third, even though our computation is implemented
in a discrete network, its output is invariant to continuous rotations and
translations of the input pattern.
1 Introduction
There is a long history of research on neural networks inspired by the structure of visual
cortex whose functions have been described as contour completion, saliency enhancement,
orientation sharpening, or segmentation[6, 7, 8, 9, 12]. A similar network has been proposed as a model of visual hallucinations[1]. In this paper, we describe a neural network
which enhances and completes salient closed contours. Our work is different from all previous work in three important ways. First, like the input provided to V1 by LGN, the input
to our computation is isotropic. That is, the input is composed of spots not edges. Second,
our network computes a well defined function of the input based on a distribution of closed
contours characterized by a random process. Third, even though our computation is implemented in a discrete network, its output is invariant to continuous rotations and translations
of the input pattern.
There are two important properties which a computation must possess if it is to be invariant
to rotations and translations, i.e., Euclidean invariant. First, the input, the output, and all
intermediate representations must be Euclidean invariant. Second, all transformations of
these representations must also be Euclidean invariant. The models described in [6, 7, 8,
9, 12] are not Euclidean invariant, first and foremost, because their input representations
are not Euclidean invariant. That is, not all rotations and translations of the input can be
represented equally well. This problem is often skirted by researchers by choosing input
patterns which match particular choices of sampling
rate and phase. For example, Li [7]
used only six samples in orientation (including 0°) and Heitger and von der Heydt [5] only
twelve (including 0°, 60°, and 120°). Li's first test pattern was a dashed line of 0°
orientation, while Heitger and von der Heydt used a Kanizsa Triangle with sides of 0°, 60°,
and 120° orientation. There is no reason to believe that the experimental results they
showed would be similar if the input patterns were rotated by even a few degrees. To our
knowledge, no researcher in this area has ever commented on this problem before.
2 A continuum formulation of the saliency problem
The following section reviews the continuum formulation of the contour completion and
saliency problem as described in Williams and Thornber[11].
2.1 Shape distribution
Mumford[3] observed that the probability distribution of object boundary shapes could be
modeled by a Fokker-Planck equation of the following form:
∂P/∂t = −cos θ ∂P/∂x − sin θ ∂P/∂y + (σ²/2) ∂²P/∂θ² − (1/τ) P    (1)
where P(x⃗, θ, t) is the probability that a particle is located at position, x⃗ = (x, y), and
is moving in direction, θ, at time, t. This partial differential equation can be viewed as
a set of independent advection equations in x and y (the first and second terms) coupled
in the θ dimension by the diffusion equation (the third term). The advection equations
translate probability mass in direction, θ, with unit speed, while the diffusion term models
the Brownian motion in direction, with diffusion parameter, σ. The combined effect of
these three terms is that particles tend to travel in straight lines, but over time they drift to
the left or right by an amount proportional to σ². Finally, the effect of the fourth term is
that particles decay over time, with a half-life given by the decay constant, τ.
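The random process behind Eq. (1) can be simulated directly; the following Monte Carlo sketch (not from the paper; the Euler time-stepping is an assumption) moves particles straight at unit speed, diffuses their directions, and removes them with decay constant τ:

import numpy as np

rng = np.random.default_rng(0)

def sample_paths(n, steps, dt, sigma, tau):
    """Simulate n particles of the direction process underlying Eq. (1)."""
    x = np.zeros((n, 2))
    theta = rng.uniform(0.0, 2.0 * np.pi, n)
    alive = np.ones(n, dtype=bool)
    for _ in range(steps):
        x[alive, 0] += dt * np.cos(theta[alive])     # unit-speed advection
        x[alive, 1] += dt * np.sin(theta[alive])
        theta[alive] += sigma * np.sqrt(dt) * rng.standard_normal(alive.sum())
        alive &= rng.random(n) < np.exp(-dt / tau)   # decay with constant tau
    return x, theta, alive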
2.2 The propagators
The Green's function, G(x⃗, θ, t | x⃗₀, θ₀, t₀), gives the probability that a particle
observed at position, x⃗₀, and direction, θ₀, at time, t₀, will later be observed at position,
x⃗, and direction, θ, at time, t. It is the solution, P(x⃗, θ, t), of the Fokker-Planck
initial value problem with initial value, P(x⃗, θ, t₀) = δ(x⃗ − x⃗₀) δ(θ − θ₀), where δ is the
Dirac delta function. The Green's function is used to define two propagators. The long-time
propagator:
P(x⃗₁, θ₁ | x⃗₀, θ₀) = ∫₀^∞ dt g(t) G(x⃗₁, θ₁, t | x⃗₀, θ₀, 0)    (2)
gives the probability that (x⃗₀, θ₀) and (x⃗₁, θ₁) are distinct edges from the boundary of a
single object.¹ The short-time propagator:
p(x⃗₁, θ₁ | x⃗₀, θ₀) = ∫₀^∞ dt [1 − g(t)] G(x⃗₁, θ₁, t | x⃗₀, θ₀, 0)    (3)
gives the probability that (x⃗₀, θ₀) and (x⃗₁, θ₁) are from the boundary of a single object but
are really the same edge. In both of these propagators, g(t) is a cut-off function which rises
from 0 to 1,
g(t) = 1 / (1 + e^{−b(t−μ)}).    (4)
The cut-off function is characterized by three parameters, μ, b, and Δ. The parameter, μ,
specifies where the cut-off is and b specifies how hard it is. The parameter, Δ, is the scale
of the edge detection process.
¹ We assume that the probability that two edges are the same depends only on the distance between
them, and that g(‖x⃗₁ − x⃗₀‖) = g(t) for particles travelling at unit speed.
2.3 Eigenfunctions
The integral linear operator, Q, combines three sources of information: 1) the probability
that two edges belong to the same object; 2) the probability that the two edges are distinct;
and 3) the probability that the two edges exist. It is defined as follows:
Q(x⃗₁, θ₁ | x⃗₀, θ₀) = P(x⃗₁, θ₁ | x⃗₀, θ₀) b(x⃗₀)    (5)
where the input bias function, b(x⃗), gives the probability that an edge exists at x⃗.
As described in Williams and Thornber[11], the right and left eigenfunctions, s(x⃗, θ) and
s̄(x⃗, θ), of Q with largest positive real eigenvalue, λ, play a central role in the computation
of saliency:
λ s(x⃗₁, θ₁) = ∫∫∫ dx⃗₀ dθ₀ Q(x⃗₁, θ₁ | x⃗₀, θ₀) s(x⃗₀, θ₀)    (6)
λ s̄(x⃗₀, θ₀) = ∫∫∫ dx⃗₁ dθ₁ s̄(x⃗₁, θ₁) Q(x⃗₁, θ₁ | x⃗₀, θ₀).    (7)
Because Q is invariant under a transformation which reverses the order and direction of its
arguments:
Q(x⃗₁, θ₁ | x⃗₀, θ₀) = Q(x⃗₀, θ₀ + π | x⃗₁, θ₁ + π),    (8)
the right and left eigenfunctions are related as follows:
s̄(x⃗, θ) = s(x⃗, θ + π).    (9)
2.4 Stochastic completion field
The magnitude of the stochastic completion field, c(x⃗₀, θ₀), equals the probability that a
closed contour satisfying a subset of the constraints exists at (x⃗₀, θ₀). It is the sum of
three terms:
c(x⃗₀, θ₀) = [ā(x⃗₀, θ₀) a(x⃗₀, θ₀) + ā(x⃗₀, θ₀) a₀(x⃗₀, θ₀) + ā₀(x⃗₀, θ₀) a(x⃗₀, θ₀)] / ∫∫∫ dx⃗ dθ s̄(x⃗, θ) s(x⃗, θ)    (10)
where ā(x⃗₀, θ₀) is a source field, and a(x⃗₀, θ₀) is a sink field:
ā(x⃗₀, θ₀) = ∫∫∫ dx⃗ dθ P(x⃗₀, θ₀ | x⃗, θ) b(x⃗) s(x⃗, θ)    (11)
a(x⃗₀, θ₀) = ∫∫∫ dx⃗ dθ s̄(x⃗, θ) b(x⃗) P(x⃗, θ | x⃗₀, θ₀),    (12)
and ā₀ and a₀ are defined in the same way using the short-time propagator, p, in place of
P. The purpose of writing c(x⃗₀, θ₀) in this way is to remove the contribution,
ā₀(x⃗₀, θ₀) a₀(x⃗₀, θ₀), of closed contours at scales smaller than Δ which would otherwise
dominate the completion field. Given the above expression for the completion field, it is
clear that the key problem is computing the eigenfunction, s(x⃗, θ), of Q with largest
positive real eigenvalue.
To accomplish this, we can use the well known power method (see [4]). In this case, the
power method involves repeated application of the linear operator, Q, to the function,
s(x⃗, θ), followed by normalization:
s⁽ᵐ⁺¹⁾(x⃗₁, θ₁) = ∫∫∫ dx⃗₀ dθ₀ Q(x⃗₁, θ₁ | x⃗₀, θ₀) s⁽ᵐ⁾(x⃗₀, θ₀) / ∫∫∫∫∫∫ dx⃗₁ dθ₁ dx⃗₀ dθ₀ Q(x⃗₁, θ₁ | x⃗₀, θ₀) s⁽ᵐ⁾(x⃗₀, θ₀).    (13)
In the limit, as m gets very large, s⁽ᵐ⁾(x⃗, θ) converges to the eigenfunction of Q with
largest positive real eigenvalue. We observe that the above computation can be considered
a continuous state, discrete time, recurrent neural network.
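Once Q is discretized (for example in the basis of Section 3), the iteration (13) is ordinary power iteration; a minimal sketch (not the authors' code; it assumes a nonnegative matrix Q so the L1 normalization below is a valid eigenvalue estimate):

import numpy as np

def power_method(Q, s0, n_iter=32):
    """Power iteration, the discrete form of Eq. (13)."""
    s = s0 / np.abs(s0).sum()
    lam = 0.0
    for _ in range(n_iter):
        Qs = Q @ s
        lam = Qs.sum() / s.sum()      # eigenvalue estimate (Q, s nonnegative)
        s = Qs / np.abs(Qs).sum()     # normalization step of Eq. (13)
    return lam, s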
3 A discrete implementation of the continuum formulation
The continuous functions comprising the state of the computation are represented as
weighted sums of a finite set of shiftable-twistable basis functions. The weights form the
coefficient vectors for the functions. The computation we describe is biologically plausible
in the sense that all transformations of state are effected by linear transformations (or other
vector parallel operations) on the coefficient vectors.
3.1 Shiftable-twistable bases
The input and output of the above computation are functions defined on the continuous
space, R² × S¹, of positions in the plane, R², and directions in the circle, S¹. For such
computations, the important symmetry is determined by those transformations, E(x⃗₀, θ₀),
of R² × S¹, which perform a shift in R² by x⃗₀, followed by a twist in R² × S¹ through an
angle, θ₀. A twist through an angle, θ₀, consists of two parts: (1) a rotation, R(θ₀), of R²
and (2) a translation in S¹, both by θ₀. The symmetry, E(x⃗₀, θ₀), which is called a
shift-twist transformation, is given by the formula,
E(x⃗₀, θ₀)(x⃗, θ) = (R(θ₀)(x⃗ − x⃗₀), θ − θ₀).    (14)
A visual computation, W, on R² × S¹ is called shift-twist invariant if, for all (x⃗₀, θ₀) in
R² × S¹, a shift-twist of the input by (x⃗₀, θ₀) produces an identical shift-twist of the output.
This property can be depicted in the following commutative diagram:
[Commutative diagram: the input s(x⃗, θ) maps under W to the output c(x⃗, θ); applying the
shift-twist E(x⃗₀, θ₀) before or after W gives the same result.]
where s is the input, c is the output, W is the computation, and E(x⃗₀, θ₀) is the shift-twist
transformation. Correspondingly, we define a shiftable-twistable basis² of functions on
R² × S¹ to be a set of functions on R² × S¹ with the property that whenever a function,
Ψ(x⃗, θ), is in their span, then so is Ψ(E(x⃗₀, θ₀)(x⃗, θ)), for every choice of (x⃗₀, θ₀) in
R² × S¹. As such, the notion of a shiftable-twistable basis on R² × S¹ generalizes that of a
shiftable-steerable basis on R² [2, 10].
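A sketch of the shift-twist action of Eq. (14) on sample points (an illustration, not from the paper; the sign convention of the rotation follows the reconstruction of Eq. (14) above and is an assumption):

import numpy as np

def shift_twist(x, theta, x0, theta0):
    """Apply E(x0, theta0) of Eq. (14): shift positions by x0, then
    rotate them by theta0 and translate directions by theta0."""
    c, s = np.cos(theta0), np.sin(theta0)
    R = np.array([[c, -s], [s, c]])
    return (np.atleast_2d(x) - x0) @ R.T, theta - theta0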
Shiftable-twistable bases can be constructed as follows. Let Ψ(x⃗, θ) be a function on
R² × S¹ which is periodic (with period X) in both spatial variables, x⃗. In analogy with the
definition of a shiftable-steerable function on R², we say that Ψ is shiftable-twistable on
R² × S¹ if there are integers, K and M, and interpolation functions, b(k,m)(x⃗₀, θ₀), such
that for each (x⃗₀, θ₀), the shift-twist of Ψ by (x⃗₀, θ₀) is a linear combination of a finite
number of basic shift-twists of Ψ by amounts (kΔ, mΔθ), i.e., if
Ψ(E(x⃗₀, θ₀)(x⃗, θ)) = Σ(k,m) b(k,m)(x⃗₀, θ₀) Ψ(E(kΔ, mΔθ)(x⃗, θ)).    (15)
Here Δ = X/K is the basic shift amount and Δθ = 2π/M is the basic twist amount. The
sum in equation (15) is taken over all pairs of integers, k = (k₁, k₂), in the range,
0 ≤ k₁, k₂ < K, and all integers, m, in the range, 0 ≤ m < M.
The Gaussian-Fourier basis is the product of a shiftable-steerable basis of Gaussians in R²
and a Fourier series basis in S¹. For the experiments in this paper, the standard deviation
of the Gaussian basis function, G(x⃗), equals the basic shift amount, Δ. We regard G as a
periodic function of period, X, which is chosen to be much larger than Δ, so that G and
its derivatives are essentially zero. For each frequency, ω, and shift amount, kΔ (where k
is a pair of integers), we define the Gaussian-Fourier basis functions, Ψ(k,ω), by
Ψ(k,ω)(x⃗, θ) = G(x⃗ − kΔ) e^{iωθ}.    (16)
Zweck and Williams[13] showed that the Gaussian-Fourier basis is shiftable-twistable.
2
We use this terminology even though the basis functions need not be linearly independent.
3.2 Power method update formula
Suppose that s(x⃗, θ) can be represented in the Gaussian-Fourier basis as
s(x⃗, θ) = Σ(k,ω) a(k,ω) Ψ(k,ω)(x⃗, θ).    (17)
The vector, a, with components, a(k,ω), will be called the coefficient vector of s(x⃗, θ).
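Evaluating the expansion (17) at a point looks as follows; this sketch is an illustration (the Gaussian normalization and the omission of the periodization over X are assumptions):

import numpy as np

def eval_expansion(a, x, theta, Delta, sigma, omegas):
    """s(x, theta) = sum over (k, omega) of a[k1, k2, w] G(x - k Delta) e^{i omega theta}."""
    K1, K2, W = a.shape
    val = 0.0 + 0.0j
    for k1 in range(K1):
        for k2 in range(K2):
            d = x - Delta * np.array([k1, k2])
            g = np.exp(-(d @ d) / (2 * sigma**2)) / (2 * np.pi * sigma**2)
            for w in range(W):
                val += a[k1, k2, w] * g * np.exp(1j * omegas[w] * theta)
    return val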
In the next two sections, we demonstrate how the following integral linear transform:
s′(x⃗₁, θ₁) = ∫∫∫ dx⃗₀ dθ₀ P(x⃗₁, θ₁ | x⃗₀, θ₀) b(x⃗₀) s(x⃗₀, θ₀)    (18)
(i.e., the basic step in the power method) can be implemented as a discrete linear transform
in a Gaussian-Fourier shiftable-twistable basis:
a′ = P B a.    (19)
3.3 The propagation operator P
In practice, we do not explicitly represent the matrix, P. Instead we compute the necessary
matrix-vector product using the advection-diffusion-decay operator in the Gaussian-Fourier
shiftable-twistable basis, described in detail in Zweck and Williams[13]. The Fokker-Planck
equation is integrated by operator splitting, alternating an advection step, A, and a
diffusion-decay step, D, over small time intervals, Δt:
P = lim_{n→∞} [D(t/n) A(t/n)]ⁿ.    (20)
In the shiftable-twistable basis, the advection operator, A(Δt), is a discrete convolution,
a′(k,ω) = Σ(k′,ω′) A(Δt)(k,ω),(k′,ω′) a(k′,ω′),    (21)
whose kernel is built from the interpolation functions of the basis, which are sinc functions.
Let N be the number of Fourier series frequencies, ω, used in the shiftable-twistable basis,
and let Δθ = 2π/N. The diffusion-decay operator, D(Δt), is a diagonal matrix,
[D(Δt) a](k,ω) = e^{−(σ²ω²/2 + 1/τ)Δt} a(k,ω).    (25)
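The diffusion-decay step is exact in the Fourier index, since each orientation harmonic e^{iωθ} is an eigenfunction of the θ-diffusion and decay terms of Eq. (1); a sketch (an illustration, not the authors' code):

import numpy as np

def diffusion_decay(a, omegas, sigma, tau, dt):
    """Multiply each frequency omega by exp(-(sigma^2 omega^2 / 2 + 1/tau) dt).
    a is the coefficient array with the Fourier index last."""
    factors = np.exp(-(0.5 * sigma**2 * np.asarray(omegas)**2 + 1.0 / tau) * dt)
    return a * factors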
3.4 The bias operator
In the continuum, the bias operator effects a multiplication of the function, s(x⃗, θ), by the
input bias function, b(x⃗). Our aim is to identify an equivalent linear operator in the
shiftable-twistable basis. Suppose that both s and b are represented in a Gaussian basis,
G(x⃗ − kΔ). Their product is:
b(x⃗) s(x⃗, θ) = Σ(j) Σ(k,ω) b(j) a(k,ω) G(x⃗ − jΔ) G(x⃗ − kΔ) e^{iωθ}.    (26)
Now, the product of two Gaussian basis functions, G(x⃗ − jΔ) and G(x⃗ − kΔ), is a Gaussian
of smaller variance which cannot be represented in the Gaussian basis. Because b(x⃗) s(x⃗, θ)
is a linear combination of the products of pairs of Gaussian basis functions, it cannot be
represented in the Gaussian basis either. However, we observe that the convolution of
b(x⃗) s(x⃗, θ) with a Gaussian, Gν(x⃗), where ν = σ/√2, can be represented in the Gaussian
basis. It follows that there exists a matrix, B, such that:
Gν(x⃗) ∗ [b(x⃗) s(x⃗, θ)] = Σ(k,ω) [B a](k,ω) Ψ(k,ω)(x⃗, θ).    (27)
The formula for the matrix, B, is derived by first completing the square in the exponent of
the product of two Gaussians to obtain:
G(x⃗ − jΔ) G(x⃗ − kΔ) = G√2σ((j − k)Δ) Gσ/√2(x⃗ − (j + k)Δ/2).    (28)
This product is then convolved with Gν(x⃗) to obtain a function which is a shift of the
Gaussian basis function, G(x⃗). Finally we use the shiftability formula:
G(x⃗ − x⃗₀) = Σ(k) b(k)(x⃗₀) G(x⃗ − kΔ),    (29)
where the b(k)(x⃗₀) are the interpolation functions and Δ is the shift amount, to express
the result in the Gaussian basis. The result is:
B(k′,k) = Σ(j) b(j) G√2σ((j − k)Δ) b(k′)((j + k)Δ/2).    (30)
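The completing-the-square identity behind Eq. (28) is easy to verify numerically; a one-dimensional sketch (an illustration; σ and the two shifts are arbitrary test values):

import numpy as np

sigma, a, b = 1.0, 0.7, -0.4
G = lambda x, s: np.exp(-x**2 / (2 * s**2)) / (np.sqrt(2 * np.pi) * s)
x = np.linspace(-5, 5, 2001)
lhs = G(x - a, sigma) * G(x - b, sigma)
rhs = G(a - b, np.sqrt(2) * sigma) * G(x - (a + b) / 2, sigma / np.sqrt(2))
assert np.allclose(lhs, rhs)   # product of Gaussians factorizes as in Eq. (28)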
4 Experimental results
In our experiments the Gaussian-Fourier basis consisted of translates (in each spatial
dimension) of a Gaussian (of period, X), and harmonic signals in the orientation dimension.
The standard deviation of the Gaussian was set equal to the shift amount, Δ. For
illustration purposes, all functions were rendered at a finer resolution. The diffusion
parameter, σ, the decay constant, τ, the time step, Δt, used to solve the Fokker-Planck
equation in the basis, and the parameters, μ and b, of the cut-off function used to
eliminate self-loops were all held fixed.
In the first experiment, the input bias function, b(x⃗), consisted of twenty randomly
positioned spots and twenty spots on the boundary of an avocado. The positions of the spots
are real valued, i.e., they do not lie on the grid of basis functions. See Fig. 1 (left). The
stochastic completion field computed using 32 iterations of the power method is shown in
Fig. 1 (right).
In the second experiment, the input bias function from the first experiment was rotated and
translated by half the distance between the centers of adjacent basis functions, (Δ/2, Δ/2).
See Fig. 2 (left). The stochastic completion field is identical (up to rotation and
translation) to the one computed in the first experiment. This demonstrates the Euclidean
invariance of the computation. See Fig. 2 (right). The estimate of the largest positive real
eigenvalue, λ, as a function of m, the power method iteration, is shown in Fig. 3.
5 Conclusion
We described a neural network which enhances and completes salient closed contours.
Even though the computation is implemented in a discrete network, its output is invariant
under continuous rotations and translations of the input pattern.
References
[1] Cowan, J.D., Neurodynamics and Brain Mechanisms, Cognition, Computation and
Consciousness, Ito, M., Miyashita, Y. and Rolls, E., (Eds.), Oxford UP, 1997.
Figure 1: Left: The input bias function, b(x⃗). Twenty randomly positioned spots were
added to twenty spots on the boundary of an avocado. The positions are real valued, i.e.,
they do not lie on the grid of basis functions. Right: The stochastic completion field,
integrated over θ, computed in the Gaussian-Fourier basis.
Figure 2: Left: The input bias function from Fig. 1, rotated and translated by half the
distance between the centers of adjacent basis functions, (Δ/2, Δ/2). Right: The stochastic
completion field is identical (up to rotation and translation) to the one shown in Fig. 1.
This demonstrates the Euclidean invariance of the computation.
Figure 3: The estimate of the largest positive real eigenvalue, λ, as a function of m, the
power method iteration. Both the final value and all intermediate values are identical in the
rotated and non-rotated cases.
[2] Freeman, W., and Adelson, E., The Design and Use of Steerable Filters, IEEE Transactions on Pattern Analysis and Machine Intelligence 13 (9), pp. 891-906, 1991.
[3] Mumford, D., Elastica and Computer Vision, Algebraic Geometry and Its Applications, Chandrajit Bajaj (ed.), Springer-Verlag, New York, 1994.
[4] Golub, G.H. and C.F. Van Loan, Matrix Computations, Baltimore, MD, Johns Hopkins Univ. Press, 1996.
[5] Heitger, F. and von der Heydt, R., A Computational Model of Neural Contour Processing, Figure-ground and Illusory Contours, Proc. of 4th Intl. Conf. on Computer
Vision, Berlin, Germany, 1993.
[6] Iverson, L., Toward Discrete Geometric Models for Early Vision, Ph.D. dissertation,
McGill University, 1993.
[7] Li, Z., A Neural Model of Contour Integration in Primary Visual Cortex, Neural
Computation 10(4), pp. 903-940, 1998.
[8] Parent, P., and Zucker, S.W., Trace Inference, Curvature Consistency and Curve
Detection, IEEE Transactions on Pattern Analysis and Machine Intelligence 11, pp.
823-839, 1989.
[9] Shashua, A. and Ullman, S., Structural Saliency: The Detection of Globally Salient
Structures Using a Locally Connected Network, 2nd Intl. Conf. on Computer Vision,
Clearwater, FL, pp. 321-327, 1988.
[10] Simoncelli, E., Freeman, W., Adelson E. and Heeger, D., Shiftable Multiscale Transforms, IEEE Trans. Information Theory 38(2), pp. 587-607, 1992.
[11] Williams, L.R., and Thornber, K.K., Orientation, Scale, and Discontinuity as Emergent Properties of Illusory Contour Shape, Neural Computation 13(8), pp. 1683-1711, 2001.
[12] Yen, S. and Finkel, L., Salient Contour Extraction by Temporal Binding in a
Cortically-Based Network, Neural Information Processing Systems 9, Denver, CO,
1996.
[13] Zweck, J., and Williams, L., Euclidean Group Invariant Computation of Stochastic
Completion Fields Using Shiftable-Twistable Functions, Proc. European Conf. on
Computer Vision (ECCV '00), Dublin, Ireland, 2000.
1,086 | 1,992 | Spectral Relaxation for K-means Clustering
Hongyuan Zha & Xiaofeng He
Dept. of Comp. Sci. & Eng.
The Pennsylvania State University
University Park, PA 16802
{zha,xhe}@cse.psu.edu
Chris Ding & Horst Simon
NERSC Division
Lawrence Berkeley National Lab.
UC Berkeley, Berkeley, CA 94720
{chqding,hdsimon}@lbl.gov
Ming Gu
Dept. of Mathematics
UC Berkeley, Berkeley, CA 94720
mgu@math.berkeley.edu
Abstract
The popular K-means clustering partitions a data set by minimizing a sum-of-squares cost function. A coordinate descent method
is then used to find local minima. In this paper we show that the
minimization can be reformulated as a trace maximization problem
associated with the Gram matrix of the data vectors. Furthermore,
we show that a relaxed version of the trace maximization problem
possesses global optimal solutions which can be obtained by computing a partial eigendecomposition of the Gram matrix, and the
cluster assignment for each data vector can be found by computing a pivoted QR decomposition of the eigenvector matrix. As a
by-product we also derive a lower bound for the minimum of the
sum-of-squares cost function.
1 Introduction
K-means is a very popular method for general clustering [6]. In K-means clusters
are represented by centers of mass of their members, and it can be shown that the
K-means algorithm of alternating between assigning cluster membership for each
data vector to the nearest cluster center and computing the center of each cluster
as the centroid of its member data vectors is equivalent to finding the minimum of a
sum-of-squares cost function using coordinate descent. Despite the popularity of K-means clustering, one of its major drawbacks is that the coordinate descent search method is prone to local minima. Much research has been done on computing refined initial points and adding explicit constraints to the sum-of-squares cost function for K-means clustering so that the search can converge to a better local minimum [1, 2].
In this paper we tackle the problem from a different angle: we find an equivalent
formulation of the sum-of-squares minimization as a trace maximization problem
with special constraints; relaxing the constraints leads to a maximization problem
that possesses optimal global solutions. As a by-product we also have an easily
computable lower bound for the minimum of the sum-of-squares cost function. Our
work is inspired by [9, 3], where the connection to the Gram matrix and the extension of the K-means method to general Mercer kernels were investigated.
The rest of the paper is organized as follows: in section 2, we derive the equivalent
trace maximization formulation and discuss its spectral relaxation. In section 3, we
discuss how to assign cluster membership using pivoted QR decomposition, taking
into account the special structure of the partial eigenvector matrix. Finally, in
section 4, we illustrate the performance of the clustering algorithms using document
clustering as an example.
Notation. Throughout, ‖·‖ denotes the Euclidean norm of a vector. The trace of a matrix A, i.e., the sum of its diagonal elements, is denoted trace(A). The Frobenius norm of a matrix is $\|A\|_F = \sqrt{\mathrm{trace}(A^T A)}$. $I_n$ denotes the identity matrix of order n.
2 Spectral Relaxation
Given a set of m-dimensional data vectors a_i, i = 1, ..., n, we form the m-by-n data matrix A = [a_1, ..., a_n]. A partition Π of the data vectors can be written in the following form:
$$AE = [A_1, \ldots, A_k], \qquad (1)$$
where E is a permutation matrix, and A_i is m-by-s_i, i.e., the ith cluster contains the s_i data vectors collected in A_i. For a given partition Π in (1), the associated sum-of-squares cost function is defined as
$$\mathrm{ss}(\Pi) = \sum_{i=1}^{k}\sum_{s=1}^{s_i} \|a_s^{(i)} - m_i\|^2, \qquad m_i = \sum_{s=1}^{s_i} a_s^{(i)}/s_i,$$
i.e., m_i is the mean vector of the data vectors in cluster i. Let e be a vector of appropriate dimension with all elements equal to one; it is easy to see that m_i = A_i e/s_i and
$$\mathrm{ss}_i \equiv \sum_{s=1}^{s_i} \|a_s^{(i)} - m_i\|^2 = \|A_i - m_i e^T\|_F^2 = \|A_i(I_{s_i} - ee^T/s_i)\|_F^2.$$
Notice that I_{s_i} − ee^T/s_i is a projection matrix and (I_{s_i} − ee^T/s_i)^2 = I_{s_i} − ee^T/s_i; it follows that
$$\mathrm{ss}_i = \mathrm{trace}\big(A_i(I_{s_i} - ee^T/s_i)A_i^T\big) = \mathrm{trace}\big((I_{s_i} - ee^T/s_i)A_i^T A_i\big).$$
Therefore,
$$\mathrm{ss}(\Pi) = \sum_{i=1}^{k}\mathrm{ss}_i = \sum_{i=1}^{k}\Big(\mathrm{trace}(A_i^T A_i) - \frac{e^T}{\sqrt{s_i}}\,A_i^T A_i\,\frac{e}{\sqrt{s_i}}\Big).$$
Let the n-by-k orthonormal matrix X be
$$X = E\,\mathrm{diag}\big(e/\sqrt{s_1}, \ldots, e/\sqrt{s_k}\big), \qquad (2)$$
where diag(·) is block diagonal with the indicated vectors as diagonal blocks. The sum-of-squares cost function can now be written as
$$\mathrm{ss}(\Pi) = \mathrm{trace}(A^T A) - \mathrm{trace}(X^T A^T A X),$$
and its minimization is equivalent to
$$\max\{\,\mathrm{trace}(X^T A^T A X) \mid X \text{ of the form in (2)}\,\}.$$
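This equivalence is easy to check numerically. The following sketch is ours, not from the paper; it compares the directly computed sum-of-squares cost with the trace form, using the orthonormal indicator matrix of (2) (with E = I, i.e., indexing by cluster labels):

```python
import numpy as np

def kmeans_cost(A, labels):
    """ss(Pi): squared distances of the columns of A to their cluster means."""
    return sum(((A[:, labels == i]
                 - A[:, labels == i].mean(axis=1, keepdims=True)) ** 2).sum()
               for i in np.unique(labels))

def trace_form(A, labels):
    """trace(A^T A) - trace(X^T A^T A X), with X as in Eq. (2)."""
    n = A.shape[1]
    clusters = np.unique(labels)
    X = np.zeros((n, len(clusters)))
    for j, i in enumerate(clusters):
        idx = (labels == i)
        X[idx, j] = 1.0 / np.sqrt(idx.sum())   # e / sqrt(s_i) block
    G = A.T @ A
    return np.trace(G) - np.trace(X.T @ G @ X)

A = np.random.randn(5, 40)             # toy data: 40 vectors in R^5
labels = np.random.randint(0, 3, 40)   # an arbitrary 3-way partition
assert np.allclose(kmeans_cost(A, labels), trace_form(A, labels))
```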
REMARK. Without loss of generality, let E = I in (1). If we let x_i be the indicator vector of cluster i, i.e.,
$$x_i^T = [\,0, \ldots, 0, \underbrace{1, \ldots, 1}_{s_i}, 0, \ldots, 0\,],$$
then it is easy to see that
$$\mathrm{trace}(X^T A^T A X) = \sum_{i=1}^{k} \frac{x_i^T A^T A\, x_i}{x_i^T x_i} = \sum_{i=1}^{k} \frac{\|A x_i\|^2}{\|x_i\|^2}.$$
Using the partition in (1), the right-hand side of the above can be written as a weighted sum of the squared Euclidean norms of the mean vector of each cluster.
REMARK. If we consider the elements of the Gram matrix AT A as measuring
similarity between data vectors, then we have shown that Euclidean distance leads
to Euclidean inner-product similarity. This inner-product can be replaced by a
general Mercer kernel as is done in [9, 3].
Ignoring the special structure of X and letting it be an arbitrary orthonormal matrix, we obtain a relaxed maximization problem
$$\max_{X^T X = I_k} \mathrm{trace}(X^T A^T A X). \qquad (3)$$
It turns out the above trace maximization problem has a closed-form solution.
Theorem (Ky Fan). Let H be a symmetric matrix with eigenvalues
$$\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_n$$
and corresponding eigenvectors U = [u_1, ..., u_n]. Then
$$\lambda_1 + \cdots + \lambda_k = \max_{X^T X = I_k} \mathrm{trace}(X^T H X).$$
Moreover, the optimal X* is given by X* = [u_1, ..., u_k]Q with Q an arbitrary orthogonal matrix.
It follows from the above theorem that we need to compute the largest k eigenvectors of the Gram matrix A^T A. As a by-product, we have
$$\min_{\Pi}\,\mathrm{ss}(\Pi) \;\ge\; \mathrm{trace}(A^T A) - \max_{X^T X = I_k} \mathrm{trace}(X^T A^T A X) \;=\; \sum_{i=k+1}^{\min\{m,n\}} \sigma_i^2(A), \qquad (4)$$
where σ_i(A) is the i-th largest singular value of A. This gives a lower bound for the minimum of the sum-of-squares cost function.
REMARK. It is easy to see from the above derivation that we can replace A with A − ae^T, where a is an arbitrary vector. Then we have the following lower bound:
$$\min_{\Pi}\,\mathrm{ss}(\Pi) \;\ge\; \max_{a}\, \sum_{i=k+1}^{\min\{m,n\}} \sigma_i^2(A - ae^T).$$
REMARK. One might also try the following approach: notice that
$$\|A_i - m_i e^T\|_F^2 = \frac{1}{2 s_i} \sum_{a_j \in A_i} \sum_{a_{j'} \in A_i} \|a_j - a_{j'}\|^2.$$
Let W = (‖a_i − a_j‖²) for i, j = 1, ..., n, and let x_i = [x_{ij}] with
$$x_{ij} = \begin{cases} 1 & \text{if } a_j \in A_i,\\ 0 & \text{otherwise.}\end{cases}$$
Then
$$\mathrm{ss}(\Pi) = \frac{1}{2}\sum_{i=1}^{k} \frac{x_i^T W x_i}{x_i^T x_i} \;\ge\; \frac{1}{2}\min_{Z^T Z = I_k} \mathrm{trace}(Z^T W Z) \;=\; \frac{1}{2}\sum_{i=n-k+1}^{n} \lambda_i(W).$$
Unfortunately, some of the smallest eigenvalues of W can be negative.
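By contrast, the SVD-based bound (4) is always well defined. A minimal sketch (our code, illustrative only) of how it can be computed:

```python
import numpy as np

def ss_lower_bound(A, k):
    """Lower bound (4) on min ss(Pi): the sum of the squared
    singular values of A beyond the k largest."""
    sigma = np.linalg.svd(A, compute_uv=False)
    return float(np.sum(sigma[k:] ** 2))
```

Any k-way partition's sum-of-squares cost is guaranteed to be at least this value, which gives a quick check on how far a K-means solution can be from optimal.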
Let X_k be the n-by-k matrix consisting of the k largest eigenvectors of A^T A. Each row of X_k corresponds to a data vector, and the above process can be considered as transforming the original data vectors, which live in an m-dimensional space, to new data vectors which now live in a k-dimensional space. One might be tempted to compute the cluster assignment by applying the ordinary K-means method to those data vectors in the reduced-dimension space. In the next section, we discuss an alternative that takes into account the structure of the eigenvector matrix X_k [5].
REMARK. The similarity of the projection process to principal component analysis
is deceiving: the goal here is not to reconstruct the data matrix using a low-rank
approximation but rather to capture its cluster structure.
3 Cluster Assignment Using Pivoted QR Decomposition
Without loss of generality, let us assume that the best partition of the data vectors in A that minimizes ss(Π) is given by A = [A_1, ..., A_k], each submatrix A_i corresponding to a cluster. Now write the Gram matrix of A as
$$A^T A = \begin{bmatrix} A_1^T A_1 & & \\ & \ddots & \\ & & A_k^T A_k \end{bmatrix} + E \;=:\; B + E.$$
If the overlaps among the clusters represented by the submatrices A_i are small, then the norm of E will be small compared with the block-diagonal matrix B in the above equation. Let the largest eigenvector of A_i^T A_i be y_i, with
$$A_i^T A_i\, y_i = \mu_i y_i, \qquad \|y_i\| = 1, \qquad i = 1, \ldots, k;$$
then the columns of the block-diagonal matrix Y_k = diag(y_1, ..., y_k) span an invariant subspace of B. Let the eigenvalues and eigenvectors of A^T A be
$$\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_n, \qquad A^T A\, x_i = \lambda_i x_i, \qquad i = 1, \ldots, n.$$
Assume that there is a gap between the two eigenvalue sets {μ_1, ..., μ_k} and {λ_{k+1}, ..., λ_n}, i.e.,
$$0 < \delta = \min\{\, |\mu_i - \lambda_j| \;:\; i = 1, \ldots, k,\; j = k+1, \ldots, n \,\}.$$
Then the Davis–Kahan sin(Θ) theorem states that ‖Y_k^T [x_{k+1}, ..., x_n]‖ ≤ ‖E‖/δ [11, Theorem 3.4]. After some manipulation, it can be shown that
$$X_k \equiv [x_1, \ldots, x_k] = Y_k V + O(\|E\|),$$
where V is a k-by-k orthogonal matrix. Ignoring the O(‖E‖) term, we see that
$$X_k^T \;\approx\; [\;\underbrace{y_{11} v_1, \ldots, y_{1 s_1} v_1}_{\text{cluster 1}},\;\; \ldots,\;\; \underbrace{y_{k1} v_k, \ldots, y_{k s_k} v_k}_{\text{cluster } k}\;],$$
where we have used y_i^T = [y_{i1}, ..., y_{i s_i}] and V^T = [v_1, ..., v_k]. A key observation is that all the v_i are orthogonal to each other: once we have selected a v_i, we can jump to other clusters by looking at the orthogonal complement of v_i. Also notice that
‖y_i‖ = 1, so the elements of y_i cannot all be small. A robust implementation of the above idea can be obtained as follows: we pick the column of X_k^T which has the largest norm; say it belongs to cluster i. We orthogonalize the rest of the columns of X_k^T against this column. For the columns belonging to cluster i the residual vector will have small norm, and for the other columns the residual vectors will tend to be not small. We then pick the vector with the largest residual norm, and orthogonalize the other residual vectors against this residual vector. The process can be carried out for k steps, and it turns out to be exactly QR decomposition with column pivoting applied to X_k^T [4], i.e., we find a permutation matrix P such that
where Q is a k-by-k orthogonal matrix, and Rl1 is a k-by-k upper triangular matrix.
We then compute the matrix
R=
Rj} [Rl1 ' Rd pT = [Ik' Rj} R12]PT.
Then the cluster membership of each data vector is determined by the row index of
the largest element in absolute value of the corresponding column of k
REMARK. Sometimes it may be advantageous to include more than k eigenvectors
to form Xs T with s > k. We can still use QR decomposition with column pivoting
to select k columns of Xs T to form an s-by-k matrix, say X. Then for each column
z of Xs T we compute the least squares solution of t* = argmintERk li z - Xtll. Then
the cluster membership of z is determined by the row index of the largest element
in absolute value of t* .
4 Experimental Results
In this section we present our experimental results on clustering a dataset of newsgroup articles submitted to 20 newsgroups.1 This dataset contains about 20,000
articles (email messages) evenly divided among the 20 newsgroups. We list the
names of the news groups together with the associated group labels.
¹The newsgroup dataset together with the bow toolkit for processing it can be downloaded from http://www.cs.cmu.edu/afs/cs/project/theo-11/www/naive-bayes.html.
Figure 1: Clustering accuracy for five newsgroups NG2/NG9/NG10/NG15/NG18: p-QR vs. p-Kmeans (left) and p-Kmeans vs. K-means (right).
NG1: alt.atheism
NG2: comp.graphics
NG3: comp.os.ms-windows.misc
NG4: comp.sys.ibm.pc.hardware
NG5: comp.sys.mac.hardware
NG6: comp.windows.x
NG7: misc.forsale
NG8: rec.autos
NG9: rec.motorcycles
NG10: rec.sport.baseball
NG11: rec.sport.hockey
NG12: sci.crypt
NG13: sci.electronics
NG14: sci.med
NG15: sci.space
NG16: soc.religion.christian
NG17: talk.politics.guns
NG18: talk.politics.mideast
NG19: talk.politics.misc
NG20: talk.religion.misc
We used the bow toolkit to construct the term-document matrix for this dataset,
specifically we use the tokenization option so that the UseNet headers are stripped,
and we also applied stemming [8]. The following three preprocessing steps are done:
1) we apply the usual tf.idf weighting scheme; 2) we delete words that appear too
few times; 3) we normalized each document vector to have unit Euclidean length.
We tested three clustering algorithms: 1) p-QR, this refers to the algorithm using
the eigenvector matrix followed by pivoted QR decomposition for cluster membership assignment; 2) p-Kmeans, we compute the eigenvector matrix, and then apply
K-means on the rows of the eigenvector matrix; 3) K-means, this is K-means directly
applied to the original data vectors. For both K-means methods, we start with a set
of cluster centers chosen randomly from the (projected) data vectors, and we aslo
make sure that the same random set is used for both for comparison. To assess the
quality of a clustering algorithm, we take advantage of the fact that the news group
data are already labeled and we measure the performance by the accuracy of the
clustering algorithm against the document category labels [10]. In particular, for a
k cluster case, we compute a k-by-k confusion matrix C = [Cij] with Cij the number
of documents in cluster i that belongs to newsgroup category j. It is actually quite
subtle to compute the accuracy using the confusion matrix because we do not know
which cluster matches which newsgroup category. An optimal way is to solve the
following maximization problem
max{ trace(CP)
IP
is a permutation matrix},
and divide the maximum by the total number of documents to get the accuracy.
This is equivalent to finding perfect matching a complete weighted bipartite graph,
one can use Kuhn-Munkres algorithm [7]. In all our experiments, we used a greedy
algorithm to compute a sub-optimal solution.
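For reference, the optimal matching itself is available in standard libraries. A minimal sketch (ours, not the authors' code) using SciPy's implementation of the Hungarian/Kuhn–Munkres algorithm:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(pred, true, k):
    """Accuracy under the optimal cluster-to-category matching,
    i.e. the maximum over permutations P of trace(C P)."""
    C = np.zeros((k, k), dtype=int)
    for p, t in zip(pred, true):
        C[p, t] += 1                        # confusion matrix
    rows, cols = linear_sum_assignment(-C)  # maximize the matched total
    return C[rows, cols].sum() / len(pred)
```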
Table 1: Comparison of p-QR, p-Kmeans, and K-means for two-way clustering

Newsgroups    p-QR            p-Kmeans        K-means
NG1/NG2       89.29 ± 7.51%   89.62 ± 6.90%   76.25 ± 13.06%
NG2/NG3       62.37 ± 8.39%   63.84 ± 8.74%   61.62 ± 8.03%
NG8/NG9       75.88 ± 8.88%   77.64 ± 9.00%   65.65 ± 9.26%
NG10/NG11     73.32 ± 9.08%   74.86 ± 8.89%   62.04 ± 8.61%
NG1/NG15      73.32 ± 9.08%   74.86 ± 8.89%   62.04 ± 8.61%
NG18/NG19     63.86 ± 6.09%   64.04 ± 7.23%   63.66 ± 8.48%
Table 2: Comparison of p-QR, p-Kmeans, and K-means for multi-way clustering

Newsgroups                          p-QR            p-Kmeans         K-means
NG2/NG3/NG4/NG5/NG6 (50)            40.36 ± 5.17%   41.15 ± 5.73%    35.77 ± 5.19%
NG2/NG3/NG4/NG5/NG6 (100)           41.67 ± 5.06%   42.53 ± 5.02%    37.20 ± 4.39%
NG2/NG9/NG10/NG15/NG18 (50)         77.83 ± 9.26%   70.13 ± 11.67%   58.10 ± 9.60%
NG2/NG9/NG10/NG15/NG18 (100)        79.91 ± 9.90%   75.56 ± 10.63%   66.37 ± 10.89%
NG1/NG5/NG7/NG8/NG11/
NG12/NG13/NG14/NG15/NG17 (50)       60.21 ± 4.88%   58.18 ± 4.41%    40.18 ± 4.64%
NG1/NG5/NG7/NG8/NG11/
NG12/NG13/NG14/NG15/NG17 (100)      65.08 ± 5.14%   58.99 ± 5.22%    48.33 ± 5.64%
EXAMPLE 1. In this example, we look at binary clustering. We choose 50 random
document vectors each from two newsgroups. We tested 100 runs for each pair
of newsgroups, and list the means and standard deviations in Table 1. The two
clustering algorithms p-QR and p-Kmeans are comparable to each other, and both
are better and sometimes substantially better than K-means.
EXAMPLE 2. In this example, we consider k-way clustering with k = 5 and k = 10.
Three news group sets are chosen with 50 and 100 random samples from each newsgroup, as indicated in the parentheses. Again 100 runs are used for each test and the
means and standard deviations are listed in Table 2. Moreover, in Figure 1, we also
plot the accuracy for the 100 runs for the test NG2/NG9/NG10/NG15/NG18 (50).
Both p-QR and p-Kmeans perform better than K-means. For news group sets with small overlaps, p-QR performs better than p-Kmeans. This might be explained by the fact that p-QR explores the special structure of the eigenvector matrix and is therefore more efficient. As a less thorough comparison with the information bottleneck method used in [10]: there, for 15 runs of NG2/NG9/NG10/NG15/NG18 (100),
mean accuracy 56.67% with maximum accuracy 67.00% is obtained. For 15 runs
of the 10 newsgroup set with 50 samples, mean accuracy 35.00% with maximum
accuracy about 40.00% is obtained.
EXAMPLE 3. We compare the lower bound given in (4). We only list a typical sample from NG2/NG9/NG10/NG15/NG18 (50). The column with "NG labels" indicates clustering using the newsgroup labels and by definition has 100% accuracy. It is quite clear that the newsgroup categories are not completely captured by the sum-of-squares cost function, because p-QR and "NG labels" both have higher accuracy but also larger sum-of-squares values. Interestingly, it seems that p-QR captures some of this information about the newsgroup categories.
           p-QR       p-Kmeans   K-means    NG labels   lower bound
accuracy   86.80%     83.60%     57.60%     100%        N/A
ss(Π)      224.1110   223.8966   228.8416   224.4040    219.0266
Acknowledgments
This work was supported in part by NSF grant CCR-9901986 and by Department
of Energy through an LBL LDRD fund.
References
[1] P. S. Bradley and Usama M. Fayyad. (1998). Refining Initial Points for K-Means
Clustering. Proc. 15th International Conf. on Machine Learning, 91- 99.
[2] P. S. Bradley, K. Bennett and A. Demiriz. Constrained K-means Clustering. Microsoft Research, MSR-TR-2000-65, 2000.
[3] M. Girolami. (2001). Mercer Kernel Based Clustering in Feature Space. To appear in
IEEE Transactions on Neural Networks.
[4] G. Golub and C. Van Loan . (1996) . Matrix Computations. Johns Hopkins University
Press, 3rd Edition.
[5] Ming Gu, Hongyuan Zha, Chris Ding, Xiaofeng He and Horst Simon. (2001) . Spectral
Embedding for K- Way Graph Clustering. Technical Report, Department of Computer
Science and Engineering, CSE-OI-007, Pennsylvania State University.
[6] J.A. Hartigan and M.A. Wong. (1979). A K-means Clustering Algorithm. Applied
Statistics, 28:100- 108.
[7] L. Lovász and M. D. Plummer. (1986). Matching Theory. Amsterdam: North Holland.
[8] A. McCallum. Bow: A toolkit for statistical language modeling, text retrieval, classification and clustering. http://www.cs.cmu.edu/~mccallum/bow.
[9] B. Schölkopf, A. Smola and K.-R. Müller. (1998). Nonlinear Component Analysis as a Kernel Eigenvalue Problem. Neural Computation, 10:1299-1319.
[10] N . Slonim and N. Tishby. (2000). Document clustering using word clusters via the
information bottleneck method. Proceedings of SIGIR-2000.
[11] G.W. Stewart and J.G. Sun. (1990). Matrix Perturbation Theory. Academic Press,
San Diego , CA.
1,087 | 1,993 | Causal Categorization with Bayes Nets
Bob Rehder
Department of Psychology
New York University
New York, NY 10012
bob.rehder@nyu.edu
Abstract
A theory of categorization is presented in which knowledge of
causal relationships between category features is represented as a
Bayesian network. Referred to as causal-model theory, this theory
predicts that objects are classified as category members to the
extent they are likely to have been produced by a categorys causal
model. On this view, people have models of the world that lead
them to expect a certain distribution of features in category
members (e.g., correlations between feature pairs that are directly
connected by causal relationships), and consider exemplars good
category members when they manifest those expectations. These
expectations include sensitivity to higher-order feature interactions
that emerge from the asymmetries inherent in causal relationships.
Research on the topic of categorization has traditionally focused on the problem of
learning new categories given observations of category members. In contrast, the
theory-based view of categories emphasizes the influence of the prior theoretical
knowledge that learners often contribute to their representations of categories [1].
However, in contrast to models accounting for the effects of empirical observations,
there have been few models developed to account for the effects of prior knowledge.
The purpose of this article is to present a model of categorization referred to as
causal-model theory or CMT [2, 3]. According to CMT, people's knowledge of
many categories includes not only features, but also an explicit representation of the
causal mechanisms that people believe link the features of many categories.
In this article I apply CMT to the problem of establishing objects' category
membership. In the psychological literature one standard view of categorization is
that objects are placed in a category to the extent they have features that have often
been observed in members of that category. For example, an object that has most of
the features of birds (e.g., wings, fly, build nests in trees, etc.) and few features of
other categories is thought to be a bird. This view of categorization is formalized by
prototype models in which classification is a function of the similarity (i.e. , number
of shared features) between a mental representation of a category prototype and a
to-be-classified object. However , a well-known difficulty with prototype models is
that a feature's contribution to category membership is independent of the presence
or absence of other features. In contrast , consideration of a categorys theoretical
knowledge is likely to influence which combinations of features make for
acceptable category members. For example , people believe that birds have nests in
trees because they can fly , and in light of this knowledge an animal that doesnt fly
and yet still builds nests in trees might be considered a less plausible bird than an
animal that builds nests on the ground and doesnt fly (e.g., an ostrich) even though
the latter animal has fewer features typical of birds.
To assess whether knowledge in fact influences which feature combinations make
for good category members , in the following experiment undergraduates were taught
novel categories whose four binary features exhibited either a common-cause or a
common-effect schema (Figure 1). In the common-cause schema, one category feature (F1) is described as causing the three other features (F2, F3, and F4). In the common-effect schema one feature (F4) is described as being caused by the three others (F1, F2, and F3). CMT assumes that people represent causal knowledge such
as that in Figure 1 as a kind of Bayesian network [4] in which nodes are variables
representing binary category features and directed edges are causal relationships
representing the presence of probabilistic causal mechanisms between features.
Specifically , CMT assumes that when a cause feature is present it enables the
operation of a causal mechanism that will, with some probability m , bring about the
presence of the effect feature. CMT also allows for the possibility that effect features
have potential background causes that are not explicitly represented in the network,
as represented by parameter b which is the probability that an effect will be present
even when its network causes are absent. Finally, each cause node has a parameter c
that represents the probability that a cause feature will be present.
Figure 1. Common-Cause Schema (left); Common-Effect Schema (right).

Figure 2. Common-Cause Correlations (left); Common-Effect Correlations (right).
The central prediction of CMT is that an object is considered to be a category
member to the extent that its features were likely to have been generated by a
category's causal mechanisms. For example, Table 1 presents the likelihoods that
the causal models of Figure 1 will generate the sixteen possible combinations of F I,
F 2, F 3, and F 4. Each likelihood equation can be derived by the application of simple
Boolean algebra operations. For example, the probability of exemplar 1101 (F I, F 2,
F4 present, F3 absent) being generated by a common-cause model is the probability
that F I is present [c], times the probability that F2 was brought about by F I or its
background cause [1- (lmj(l-b)], times the probability that F3 was brought about
by neither F I nor its background cause [(l-m )(l-b)], times the probability that F 4
was brought about by F I or its background cause [1- (lmj(l-b)]. Likewise , the
probability of exemplar 1011 (F I, F 3, F 4 present, F2 absent) being generated by a
common-effect model is the probability that FI and F3 are present [c 2 ], times the
probability that F2 is absent [1-?], times the probability that F4 was brought about
by F I, F 3, or its background cause [1- (lmj(l-m )(l-b)] . Note that these likelihoods
assume that the causal mechanisms in each model operate independently and with
the same probability m, restrictions that can be relaxed in other applications.
This formalization of categorization offered by CMT implies that people's theoretical knowledge leads them to expect a certain distribution of features in category members, and that they use this information when assigning category membership. Thus, to gain insight into the categorization performance predicted by CMT, we can examine the statistical properties of category features that one can expect to be generated by a causal model. For example, dotted lines in Figure 2 represent the feature correlations that are generated from the causal schemas of Figure 1. As one would expect, pairs of features directly linked by causal relationships are correlated: in the common-cause schema F1 is correlated with its effects, and in the common-effect schema F4 is correlated with its causes. Thus,
CMT predicts that combinations of features serve as evidence for category
membership to the extent that they preserve these expected correlations (i.e. , both
cause and effect present or both absent) , and against category membership to the
extent that they break those correlations (one present and the other absent).
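This correlational structure can be seen directly by sampling exemplars from a causal model. A small simulation sketch (ours; the parameter values are arbitrary) for the common-effect schema:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_common_effect(n, c=0.5, m=0.3, b=0.2):
    causes = rng.random((n, 3)) < c                  # F1, F2, F3
    p4 = 1 - (1 - m) ** causes.sum(axis=1) * (1 - b)
    effect = rng.random(n) < p4                      # F4 via noisy-OR
    return np.column_stack([causes, effect]).astype(float)

X = sample_common_effect(100_000)
print(np.round(np.corrcoef(X, rowvar=False), 2))
# F1, F2, F3 are mutually uncorrelated; each correlates with F4
```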
Table 1: Likelihood Equations and Observed and Predicted Values

           Common-Cause Schema                      Common-Effect Schema                 Control
Exemplar   Likelihood               Obs.   Pred.    Likelihood             Obs.   Pred.  Obs.
0000       c'b'^3                   60.0   61.7     c'^3 b'                70.0   69.3   70.7
0001       c'b'^2 b                 44.9   45.7     c'^3 b                 26.3   27.8   67.0
0010       c'b'^2 b                 46.1   45.7     c c'^2 m'b'            43.4   47.7   65.6
0100       c'b'^2 b                 42.8   45.7     c c'^2 m'b'            47.3   47.7   66.0
1000       c m'^3 b'^3              44.5   44.1     c c'^2 m'b'            48.0   47.7   67.0
0011       c'b'b^2                  41.0   40.1     c c'^2 (1 - m'b')      56.3   56.5   67.1
0101       c'b'b^2                  40.8   40.1     c c'^2 (1 - m'b')      56.5   56.5   66.5
0110       c'b'b^2                  42.7   40.1     c^2 c'm'^2 b'          38.3   39.2   65.6
1001       c m'^2 b'^2 (1 - m'b')   55.1   52.7     c c'^2 (1 - m'b')      57.7   56.5   68.0
1010       c m'^2 b'^2 (1 - m'b')   52.6   52.7     c^2 c'm'^2 b'          43.0   39.2   67.6
1100       c m'^2 b'^2 (1 - m'b')   54.3   52.7     c^2 c'm'^2 b'          41.9   39.2   69.9
0111       c'b^3                    39.4   38.1     c^2 c'(1 - m'^2 b')    71.0   74.4   67.6
1011       c m'b'(1 - m'b')^2       64.2   65.6     c^2 c'(1 - m'^2 b')    75.7   74.4   67.2
1101       c m'b'(1 - m'b')^2       65.3   65.6     c^2 c'(1 - m'^2 b')    74.7   74.4   70.2
1110       c m'b'(1 - m'b')^2       62.0   65.6     c^3 m'^3 b'            33.8   35.8   72.2
1111       c (1 - m'b')^3           90.8   89.6     c^3 (1 - m'^3 b')      91.0   90.0   75.6

Note: c' = 1 - c, m' = 1 - m, b' = 1 - b.
Causal networks not only predict pairwise correlations between directly connected
features. Figure 2 indicates that as a result of the asymmetries inherent in causal
relationships there is an important disanalogy between the common-cause and
common-effect schemas: Although the common-cause schema implies that the three
effects (F 2 , F 3 , F 4) will be correlated (albeit more weakly than directly connected
features) , the common-effect schema does not imply that the three causes (F I , F 2 ,
F 3 ) will be correlated. This asymmetry between common-cause and common-effect
schemas has been the focus of considerable investigation in the philosophical and
psychological literatures [3 , 5]. Use of these schemas in the following experiment
enables a test of whether categorizers are sensitive the pattern of correlations
between features directly-connected by causal laws, and also those that arise due to
the asymmetries inherent in causal relationships shown in Figure 2. Moreover , I will
show that CMT predicts, and humans exhibit, sensitivity to interactions among
features of a higher-order than the pairwise interactions shown in Figure 2.
Method
Six novel categories were used in which the description of causal relationships
between features consisted of one sentence indicating the cause and effect feature ,
and then one or two sentences describing the mechanism responsible for the causal
relationship. For example, one of the novel categories, Lake Victoria Shrimp, was described as having four binary features (e.g., "A high quantity of ACh neurotransmitter," "Long-lasting flight response," "Accelerated sleep cycle," etc.) and causal relationships among those features (e.g., "A high quantity of ACh neurotransmitter causes a long-lasting flight response. The duration of the electrical signal to the muscles is longer because of the excess amount of neurotransmitter.").
Participants first studied several computer screens of information about their
assigned category at their own pace. All participants were first presented with the category's four features. Participants in the common-cause condition were additionally instructed on the common-cause causal relationships (F1→F2, F1→F3, F1→F4), and participants in the common-effect condition were instructed on the common-effect relationships (F1→F4, F2→F4, F3→F4). When ready, participants
took a multiple-choice test that tested them on the knowledge they had just studied.
Participants were required to retake the test until they committed 0 errors.
Participants then performed a classification task in which they rated on a 0-100
scale the category membership of 16 exemplars , consisting of all possible objects
that can be formed from four binary features. For example , those participants
assigned to learn the Lake Victoria Shrimp category were asked to classify a shrimp
that possessed "High amounts of the ACh neurotransmitter," "A normal flight response," "Accelerated sleep cycle," and "Normal body weight." The order of the
test exemplars was randomized for each participant.
One hundred and eight University of Illinois undergraduates received course credit
for participating in this experiment. They were randomly assigned in equal numbers
to the three conditions , and to one of the six experimental categories.
Results
Categorization ratings for the 16 test exemplars averaged over participants in the
common-cause , common-effect, and control conditions are presented in Table 1.
The presence of causal knowledge had a large effect on the ratings. For instance,
exemplars 0111 and 0001 were given lower ratings in the common-cause and
common-effect conditions , respectively (39.4 and 26.3) than in the control condition
(67.6 and 67.0) presumably because in these exemplars correlations are broken
(effect features are present even though their causes are absent). In contrast,
exemplar 1111 received a significantly higher rating in the common-cause and
common-effect conditions than in the control condition (90.8 and 9l.0 vs. 75.6) ,
presumably because in both conditions all correlations are preserved.
To confirm that causal schemas induced a sensitivity to interactions between
features, categorization ratings were analyzed by performing a multiple regression
for each participant. Four predictor variables (f1 , f2, f3 , f4) were coded as -1 if the
feature was absent , and + 1 if it was present. An additional six predictor variables
were formed from the multiplicative interaction between pairs of features: f12 , f13 ,
f14 , f24 , f34 , and f23. For those feature pairs connected by a causal relationship the
two-way interaction terms represent whether the causal relationship is confirmed
(+ 1, cause and effect both present or both absent) or violated (-1 , one present and
one absent). Finally , the four three-way interactions (f123 , f124 , f134, and f234) , and
the single four-way interaction (f1234) were also included as predictors.
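A sketch of this analysis follows (our code; the ratings vector is a placeholder for one participant's data). The fifteen predictors are simply all products of the ±1-coded features, plus an intercept:

```python
import numpy as np
from itertools import combinations

def design_matrix(F):
    """F: 16-by-4 array of features coded -1/+1. Returns the 15
    predictors: 4 main effects, 6 two-way, 4 three-way, 1 four-way."""
    cols = [np.prod(F[:, list(idx)], axis=1)
            for order in (1, 2, 3, 4)
            for idx in combinations(range(4), order)]
    return np.column_stack(cols)

# the sixteen exemplars 0000 ... 1111, recoded to -1/+1
F = np.array([[1 if bit == '1' else -1 for bit in format(i, '04b')]
              for i in range(16)])
X = np.column_stack([np.ones(16), design_matrix(F)])  # add intercept
ratings = np.random.rand(16) * 100     # placeholder for one participant
weights, *_ = np.linalg.lstsq(X, ratings, rcond=None)
```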
Regression weights averaged over participants are presented in Figure 3 as a
function of causal schema condition. Figure 3 indicates that the interaction terms
corresponding to those feature pairs assigned causal relationships had significantly
positive weights in both the common-cause condition (f12 , f13 , f14) , and the
common-effect condition (f14 , f24 , f34). That is , as predicted (Figure 2) an exemplar
was rated a better category member when it preserved expected correlations (cause
and effect feature either both present or both absent) , and a worse member when it
broke those correlations (one absent and the other present).
Figure 3: Regression weights for each regression term (f1-f4, the two-way, three-way, and four-way interactions). (a) Common cause vs. control: control observed, CC observed, and CC predicted. (b) Common effect vs. control: control observed, CE observed, and CE predicted.
In addition, it was shown earlier (Figure 2) that because of their common cause the
three effect features in a common-cause schema will be correlated, albeit more
weakly than directly-linked features. Consistent with this prediction, in this
condition the three two-way interaction terms between the effect features (f24, f34,
f23) are greater than those interactions in the control condition. In contrast, the
common-effect schema does not imply that the three cause features will be
correlated, and in fact in that condition the interactions between the cause attributes
(f12, f13, f23) did not differ from those in the control condition (Figure 3).
Figure 3 also reveals higher-order interactions among features in the common-effect
condition: Weights on interaction terms f124, f134, f234, and f1234 (- 1.6,2.0 , -2.0,
and 2.2) were significantly different from those in the control condition. These
higher-order interactions arose because a common-effect schema requires only one
cause feature to explain the presence of the common effect. Figure 4b presents the logarithm of the ratings in the common-effect condition for those test exemplars in which the common effect is present, as a function of the number of cause features present. Ratings increased more with the introduction of the first cause as compared to subsequent causes. That is, participants considered the presence of at least one cause explaining the presence of the common effect to be sufficient grounds to grant an exemplar a relatively high category membership rating in a common-effect category.

Figure 4: Log categorization ratings (observed and predicted) as a function of the number of effect features present when the common cause is present (a, left) and the number of cause features present when the common effect is present (b, right).

In contrast, Figure 4a shows a linear
increase in (the logarithm of) categorization ratings for those exemplars in which
the common cause is present as a function of the number of effect features. In the
presence of the common cause each additional effect produced a constant increment
to log categorization ratings.
Finally , Figure 3 also indicates that the simple feature weights differed as a function
of causal schema. In the common-cause condition, the common-cause (f1) carried
greater weight than the three effects (f2, f3 , f4). In contrast, in the common-effect
condition it was the common-effect (f4) that had greater weight than the three
causes (f1 , f2, f3). That is , causal networks promote the importance of not only
specific feature combinations , but the importance of individual features as well.
Model Fitting
To assess whether CMT accounts for the patterns of classification found in this
experiment, the causal models of Figure 1 were fitted to the category membership
ratings of each participant in the common-cause and common-effect conditions,
respectively. That is , the ratings were predicted from the equation ,
Rating(X) = K · Likelihood(X; c, m, b)
where Likelihood (X; c, m , b) is the likelihood of exemplar X as a function of c, m ,
and b. The likelihood equations for the common-cause and common-effect models
shown in Table 1 were used for common-cause and common-effect participants ,
respectively. K is a scaling constant that brings the likelihood into the range 0-100.
For each participant, the values for parameters K , c, m, and b that minimized the
squared deviation between the predicted and observed ratings was computed. The
best fitting values for parameters K , c, m , and b averaged over participants were
846 , .578 , .214 , and .437 in the common-cause condition , and 876 , .522 , .325 , and
.280 in the common-effect condition. The predicted ratings for each exemplar are
presented in Table 1. The significantly positive estimate for m in both conditions indicates that participants' categorization performance was consistent with them assuming the presence of probabilistic causal mechanisms linking category features. Ratings predicted by CMT did not differ from observed ratings according to chi-square tests: χ²(16) = 3.0 for common cause, χ²(16) = 5.3 for common effect.
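A sketch of the per-participant fit (ours, not the authors' code): it reuses the likelihood functions sketched earlier and SciPy's bounded optimizer in place of whatever minimizer was actually used, and assumes the 16 ratings are ordered by exemplar 0000 through 1111:

```python
import numpy as np
from itertools import product
from scipy.optimize import minimize

def fit_causal_model(ratings, likelihood_fn):
    """Find K, c, m, b minimizing the squared deviation between
    Rating(X) = K * Likelihood(X; c, m, b) and the observed ratings."""
    exemplars = list(product((0, 1), repeat=4))  # 0000 ... 1111

    def sse(theta):
        K, c, m, b = theta
        pred = np.array([K * likelihood_fn(f, c, m, b) for f in exemplars])
        return np.sum((np.asarray(ratings) - pred) ** 2)

    res = minimize(sse, x0=[500.0, 0.5, 0.3, 0.3],
                   bounds=[(0, None), (0, 1), (0, 1), (0, 1)],
                   method='L-BFGS-B')
    return res.x  # fitted K, c, m, b
```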
To demonstrate that CMT predicts participants' sensitivity to particular combinations of features when categorizing, each participant's predicted ratings were subjected to the same regressions that were performed on the observed ratings.
were subjected to the same regressions that were performed on the observed ratings.
The resulting regression weights averaged over participants are presented in Figure
3 superimposed on the weights from the observed data. First, Figure 3 indicates that
CMT reproduces participants' sensitivity to agreement between pairs of features
directly connected by causal relationships (f12 , f13 , f14 in the common-cause
condition , and f14 , f24 , f34 in the common-effect condition). That is , according to
both CMT and human participants , category membership ratings increase when
pairs of features confirm causal laws , and decrease when they violate those laws.
Second, Figure 3 indicates that CMT accounts for the interactions between the
effect features in the common-cause condition (f24, f34, f23) and also for the higher-order feature interactions in the common-effect condition (f124, f134, f234, f1234), indicating that CMT is also sensitive to the asymmetries inherent in causal
relationships. The predictions of CMT superimposed on the observed data in Figure
4 confirm that CMT , like the human participants , requires only one cause feature to
explain the presence of a common effect (nonlinear increase in ratings in Figure
4b) whereas CMT predicts a linear increase in log ratings as one adds effect features
to a common cause (Figure 4a). Finally , CMT also accounts for the larger weight
given to the common cause and common-effect features (Figure 3).
Discussion
The current results support CMT's claims that people have a representation of the
probabilistic causal mechanisms that link category features, and that they classify by
evaluating whether an objects combination of features was likely to have been
generated by those mechanisms. That is , people have models of the world that lead
them to expect a certain distribution of features in category members , and consider
exemplars good category members to the extent they manifest those expectations.
One way this effect manifested itself is in terms of the importance of preserved
correlations between features directly connected by causal relationships. An
alternative model that accounts for this particular result assumes that the feature
space is expanded to include configural cues encoding the confirmation or violation
of each causal relationship [6]. However , such a model treats causal links as
symmetric and does not consider interactions among links. As a result , it does not fit
the common effect data as well as CMT (Figure 4b) , because it is unable to account
for categorizers' sensitivity to the higher-order feature interactions that emerge as a
result of causal asymmetries in a complex network.
CMT diverges from traditional models of categorization by emphasizing the
knowledge people possess as opposed to the examples they observe. Indeed , the
current experiment differed from many categorization studies in not providing
examples of category members. As a result , CMT is applicable to the many realworld categories about which people know far more than they have observed first
hand (e.g., scientific concepts). Of course, for many other categories people observe
category members , and the nature of the interactions between knowledge and
observations is an open question of considerable interest. Using the same materials
as in the current study, the effects of knowledge and observations have been
orthogonally manipulated with the finding that observations had little effect on
classification performance as compared to the theories [7]. Thus , theories may often
dominate categorization decisions even when observations are available.
Acknowledgments
Support for this research was provided by funds from the National Science
Foundation (Grants Number SBR-9816458 and SBR-97-20304) and from the National Institute of Mental Health (Grant Number R01 MH58362).
References
[1] Murphy, G . L. , & Medin, D . L. (1985). The role of theories in conceptual coherence .
Psychological Review , 92, 289-316.
[2] Rehder, B. (1999). A causal model theory of categorization. In Proceedings of the 21st
Annual Meeting of the Cognitive Science Society (pp. 595-600). Vancouver.
[3] Waldmann , M .R ., Holyoak , K.J ., & Fratianne, A. (1995). Causal models and the acquisition
of category structure. Journal of Experimental Psychology: General, 124 , 181-206 .
[4] Pearl , J. (1988). Probabilistic reasoning in intelligent systems: Networks of plausible
inference. San Mateo , CA: Morgan Kaufman.
[5] Salmon, W. C. (1984). Scientific explanation and the causal structure of the world.
Princeton , NJ: Princeton University Press.
[6] Gluck, M. A., & Bower, G. H. (1988). Evaluating an adaptive network model of human learning. Journal of Memory and Language, 27, 166-195.
[7] Rehder, B., & Hastie , R. (2001). Causal knowledge and categories: The effects of causal beliefs
on categorization , induction, and similarity. Journal of Experimental Psychology: General, 130 ,
323-360.
1,088 | 1,994 | Eye movements and the maturation of cortical orientation selectivity
Michele Rucci and Antonino Casile
Department of Cognitive and Neural Systems, Boston University, Boston, MA 02215.
Scuola Superiore S. Anna, Pisa, Italy
Abstract
Neural activity appears to be a crucial component for shaping the receptive fields of cortical simple cells into adjacent, oriented subregions alternately receiving ON- and OFF-center excitatory geniculate inputs. It is
known that the orientation selective responses of V1 neurons are refined
by visual experience. After eye opening, the spatiotemporal structure of
neural activity in the early stages of the visual pathway depends both on
the visual environment and on how the environment is scanned. We have
used computational modeling to investigate how eye movements might
affect the refinement of the orientation tuning of simple cells in the presence of a Hebbian scheme of synaptic plasticity. Levels of correlation between the activity of simulated cells were examined while natural scenes
were scanned so as to model sequences of saccades and fixational eye
movements, such as microsaccades, tremor and ocular drift. The specific
patterns of activity required for a quantitatively accurate development
of simple cell receptive fields with segregated ON and OFF subregions
were observed during fixational eye movements, but not in the presence
of saccades or with static presentation of natural visual input. These results suggest an important role for the eye movements occurring during
visual fixation in the refinement of orientation selectivity.
1 Introduction
Cortical orientation selectivity, i.e. the preference to edges with specific orientations exhibited by most cells in the primary visual cortex of different mammal species, is one
of the most investigated characteristics of neural responses. Although the essential elements of cortical orientation selectivity seem to develop before the exposure to patterned
visual input, visual experience appears essential both for refining orientation selectivity,
and maintaining the normal response properties of cortical neurons. The precise mechanisms by which visually-induced activity contribute to the maturation of neural responses
are not known.
A number of experimental findings support the hypothesis that the development of orientation selective responses relies on Hebbian/covariance mechanisms of plasticity. According
to this hypothesis, the stabilization of synchronously firing afferents onto common postsynaptic neurons may account for the segregation of neural inputs observed in the receptive
fields of simple cells, where the adjacent oriented excitatory and inhibitory subregions re-
ceive selective input from geniculate ON- and OFF-center cells in the same retinotopic
positions. Modeling studies [10, 9] have shown the feasibility of this proposal assuming
suitable patterns of spontaneous activity in the LGN before eye opening.
After eye opening, the spatiotemporal structure of LGN activity depends not only on the
characteristics of the visual input, but also on the movements performed by the animal
while exploring its environment. It may be expected that changes in the visual input induced by these movements play an important role in shaping the responses of neurons in
the visual system. In this paper we focus on how visual experience and eye movements
might jointly influence the refinement of orientation selectivity under the assumption of a
Hebbian mechanism of synaptic plasticity. As illustrated in Fig. 1, a necessary requirement
of the Hebbian hypothesis is a consistency between the correlated activity of thalamic afferents and the organization of simple cell receptive fields. Synchronous activation is required
among geniculate cells of the same type (ON- or OFF-center) with receptive fields located
at distances smaller than the width of a simple cell subregion, and among cells of opposite
polarity with receptive fields at distances comparable to the separation between adjacent
subregions. We have analyzed the second order statistical structure of neural activity in a
model of cat LGN when natural visual input was scanned so as to replicate the oculomotor
behavior of the cat. Patterns of correlated activity were compared to the structure of simple
cell receptive fields at different visual eccentricities.
2 The model
Modeling the activity of LGN cells
LGN cells were modeled as linear elements with quasi-separable spatial and temporal components as proposed by [3]. This model, derived using the reverse-correlation technique,
has been shown to produce accurate estimates of the activity of different types of LGN
cells. Changes in the instantaneous firing rates with respect to the level of spontaneous
activity were generated by evaluating the spatiotemporal convolution of the input image
with the receptive field kernel:

\[
\alpha(\mathbf{x},t) \;=\; \big[\, k(\mathbf{x},t) * I(\mathbf{x},t) \,\big]^{+} \tag{1}
\]

where $*$ is the symbol for convolution, $\mathbf{x}$ and $t$ are the spatial and temporal variables, and the operator $[\,\cdot\,]^{+}$ indicates rectification ($[z]^{+} = z$ if $z > 0$, $[z]^{+} = 0$ otherwise). For each cell, the kernel consisted of two additive components, representing the center ($c$) and the periphery ($p$) of the receptive field respectively. Each of these two contributions was separable in its spatial ($F$) and temporal ($G$) elements:

\[
k(\mathbf{x},t) \;=\; F_{c}(\mathbf{x})\, G_{c}(t) \;-\; F_{p}(\mathbf{x})\, G_{p}(t).
\]
The spatial receptive fields of both center and surround were modeled as two-dimensional
Gaussians, with a common space constant for both dimensions. Spatial parameters varied
with eccentricity following neurophysiological measurements. As in [3], the temporal profile of the response was given by the difference of two gamma functions, with the temporal
function for the periphery equal to that for the center and delayed by 3 ms.
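As a concrete illustration of this model, the sketch below implements the rectified spatiotemporal convolution of Eq. (1) with a quasi-separable center-surround kernel. The Gaussian space constants, the gamma-function orders, and the overall scaling are illustrative placeholders (in the model they varied with eccentricity following neurophysiological measurements); only the 3 ms surround delay is taken from the text.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gamma_kernel(t, n, tau):
    # Normalized gamma function of order n with time constant tau (t in seconds).
    g = np.where(t >= 0, (t / tau) ** n * np.exp(-t / tau), 0.0)
    s = g.sum()
    return g / s if s > 0 else g

def temporal_profile(t, tau=0.01):
    # Biphasic temporal profile as a difference of two gamma functions.
    return gamma_kernel(t, 3, tau) - 0.8 * gamma_kernel(t, 5, tau)

def lgn_rate(stimulus, dt=0.001, sigma_c=1.0, sigma_p=3.0, delay_s=0.003):
    """Firing-rate modulation of a model LGN ON cell: rectified convolution of a
    stimulus movie (time, y, x) with a center-surround, quasi-separable kernel."""
    T = stimulus.shape[0]
    t = np.arange(T) * dt
    # Spatial stage: 2-D Gaussians, one space constant per component.
    center = np.stack([gaussian_filter(f, sigma_c) for f in stimulus])
    surround = np.stack([gaussian_filter(f, sigma_p) for f in stimulus])
    # Temporal stage: the periphery profile equals the center's, delayed by 3 ms.
    g_c = temporal_profile(t)
    g_p = temporal_profile(t - delay_s)
    resp = np.zeros_like(stimulus)
    for i in range(stimulus.shape[1]):
        for j in range(stimulus.shape[2]):
            rc = np.convolve(center[:, i, j], g_c)[:T]
            rp = np.convolve(surround[:, i, j], g_p)[:T]
            resp[:, i, j] = np.maximum(rc - rp, 0.0)   # rectification [.]^+
    return resp

# Example: response to a flashed spot.
movie = np.zeros((200, 32, 32))
movie[50:100, 14:18, 14:18] = 1.0
rates = lgn_rate(movie)
```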
Modeling eye movements
Modeled eye movements included saccades (both large-scale saccades and microsaccades),
ocular drift, and tremor.
Saccades: Voluntary saccadic eye movements, the fast shifts of gaze among fixation
points, were modeled by assuming a generalized exponential distribution of fixation times.
The amplitude and direction of a saccade were randomly selected among all possible saccades that would keep the point of fixation on the image. Following data described in the
literature, the duration of each saccade was proportional to its amplitude. A modulation
of geniculate activity was present in correspondence of each saccade [7]. Neural activity
around the time of a saccade was multiplied by a gain function so that an initial suppression
of activity with a peak of 10%, gradually reversed to a 20% facilitation with peak occurring
100 ms after the end of the saccade.
Fixational eye movements: Small eye movements included fixational saccades, ocular
drift and tremor. Microsaccades were modeled in a similar way to voluntary saccades, with
amplitude randomly selected from a uniform distribution between 1 and 10 minutes of arc.
No modulation of LGN activity was present in the case of microsaccades.
Ocular drift and tremor were modeled together by approximating their power spectrum by means of a Poisson process filtered by a second-order eye plant transfer function over the frequency range 0-40 Hz, in which the power declines with increasing frequency. The Poisson process represents the irregular discharge rate of motor units at frequencies below 40 Hz. Parameters were adjusted so as to give a mean amplitude and a mean velocity equal to the values measured in the cat [11].
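The drift/tremor component can be sketched as follows: a signed Poisson impulse train (standing in for the irregular motor-unit discharges) driven through a second-order low-pass "eye plant". The rate, natural frequency, and damping below are illustrative placeholders; in the model these parameters were tuned to match the amplitude and velocity statistics measured in the cat.

```python
import numpy as np
from scipy.signal import lti, lsim

def drift_tremor_trace(duration=1.0, fs=1000.0, rate=80.0, f_n=25.0, zeta=0.7,
                       seed=0):
    """Eye position trace from a Poisson process filtered by a second-order
    'eye plant' H(s) = wn^2 / (s^2 + 2*zeta*wn*s + wn^2)."""
    rng = np.random.default_rng(seed)
    n = int(duration * fs)
    t = np.arange(n) / fs
    # Poisson impulse train with random signs (motor-unit discharges).
    impulses = rng.poisson(rate / fs, size=n) * rng.choice([-1.0, 1.0], size=n)
    wn = 2 * np.pi * f_n
    plant = lti([wn ** 2], [1.0, 2 * zeta * wn, wn ** 2])
    _, pos, _ = lsim(plant, U=impulses, T=t)
    return t, pos

t, pos = drift_tremor_trace()
velocity = np.gradient(pos) * 1000.0   # per-second units if fs = 1 kHz
```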
3 Results
We simulated the activity of geniculate cells with receptive fields in different positions
of the visual field, while receiving visual input in the presence of different types of eye
movements. The relative level of correlation between units of the same and different types
at positions $\mathbf{x}_1$ and $\mathbf{x}_2$ in the LGN was measured by means of the correlation difference,

\[
D(\mathbf{x}_1,\mathbf{x}_2) \;=\; C_{\mathrm{ON\text{-}ON}}(\mathbf{x}_1,\mathbf{x}_2) \;-\; C_{\mathrm{ON\text{-}OFF}}(\mathbf{x}_1,\mathbf{x}_2),
\]

where the two terms are the correlation coefficients evaluated between the two ON units at positions $\mathbf{x}_1$ and $\mathbf{x}_2$, and between the ON unit at position $\mathbf{x}_1$ and the OFF unit at position $\mathbf{x}_2$ respectively. $D$ is positive when the activity of units of the same type covary more strongly than that of units of different types, and is negative when the opposite occurs. The average relative levels of correlation between units with receptive fields at different distances in the visual field were examined by means of the function $\bar{D}(d)$, which evaluates the average correlation difference $D$ among all pairs of cells at positions $\mathbf{x}_1$ and $\mathbf{x}_2$ at distance $d$ from each other. For simplicity, in the following we refer to $\bar{D}(d)$ as the correlation difference, implicitly assuming that a spatial averaging has taken place. The correlation difference is a useful tool for predicting the emerging patterns of connectivity in the presence of a Hebbian mechanism of synaptic plasticity. The average separation at which $\bar{D}(d)$ changes sign is a key element in determining the spatial extent of the different subfields within the receptive fields of simple cells.
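To make this measure concrete, the sketch below computes $\bar{D}(d)$ from activity traces of ON and OFF model units at unit spacing along one dimension. The array layout and the toy data at the end are assumptions of this example.

```python
import numpy as np

def correlation_difference(on, off, max_d):
    """on, off: arrays (n_units, n_timesteps) of ON and OFF cell activity, with
    receptive fields at unit spacing along one dimension. Returns
    D_bar[d] = <C_ON-ON at separation d> - <C_ON-OFF at separation d>."""
    n = on.shape[0]
    c_onon = np.corrcoef(on)                 # pairwise ON-ON correlations
    c_onoff = np.corrcoef(on, off)[:n, n:]   # cross block: ON vs OFF
    d_bar = np.zeros(max_d)
    for d in range(max_d):
        same, opp = [], []
        for i in range(n - d):
            same.append(c_onon[i, i + d])
            opp.append(c_onoff[i, i + d])
            opp.append(c_onoff[i + d, i])    # average both ON/OFF orderings
        d_bar[d] = np.mean(same) - np.mean(opp)
    return d_bar

# Toy example: spatially smoothed drive; OFF units anti-correlated with ON.
rng = np.random.default_rng(1)
drive = rng.standard_normal((64, 5000))
on = np.apply_along_axis(lambda s: np.convolve(s, np.ones(5) / 5, 'same'), 0, drive)
off = -on + 0.5 * rng.standard_normal(on.shape)
print(correlation_difference(on, off, max_d=10))
```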
Fig. 1 (b) provides an example of application of the correlation difference function to quantify the correlated activity of LGN cells. In this example we have measured the level of correlation between pairs of cells with receptive fields at different separations when a spot of light was presented as input. An important element in the resulting level of correlation is the polarity of the two cells (i.e. whether they are ON- or OFF-center). As shown in Fig. 1 (b), since geniculate cells tend to be coactive when the ON and OFF subregions of their receptive fields overlap, the correlation between pairs of cells of the same type decreases when the separation between their receptive fields is increased, while pairs of cells of opposite types tend to become more correlated. As a consequence, the correlation difference function $\bar{D}(d)$ is positive at small separations, and negative at large ones.

Fig. 2 shows the measured correlated activity for LGN cells located around 17 deg. of visual eccentricity in the presence of two types of visual input: retinal spontaneous activity and natural visual stimulation. Spontaneous activity was simulated on the basis of Mastronarde's data on the correlated firing of ganglion cells in the cat retina [8]. As illustrated by the graph, a close correspondence is present between the measured $\bar{D}(d)$ and the response profile of an average cortical simple cell at this eccentricity, indicating that a
[Figure 1 graphic: panel (a), schematic of ON- and OFF-center LGN afferents converging on a V1 simple-cell receptive field; panel (b), plot of normalized correlation vs. distance (min.) for ON-ON / OFF-OFF pairs, ON-OFF / OFF-ON pairs, and their difference.]
Figure 1: (a) Patterns of correlated activity required by a Hebbian mechanism of synaptic plasticity to produce a segregation of geniculate afferents. On average, ON- and OFF-center LGN cells whose receptive fields overlap the excitatory and inhibitory subregions in the receptive field of a simple cell must be simultaneously active. (b) Example of application of the correlation difference function $\bar{D}(d)$. The icons on the top of the graph represent the positions of the receptive fields of the two cells at the corresponding separations along the axis. The bright dot marks the center of the spot of light. The three curves represent the correlation coefficients for pairs of units of the same type (continuous thin line), units of opposite types (dashed line), and the correlation difference function $\bar{D}(d)$ (bold line). Positive (negative) values of $\bar{D}(d)$ indicate that the activity of LGN cells of the same (opposite) type covary more closely than the activity of cells of opposite (same) types.
Hebbian mechanism of synaptic plasticity can well account for the structure of simple cell
receptive fields before eye opening.
What happens in the presence of natural visual input? We evaluated the correlation difference function on a database of 30 images of natural scenes. The mean power spectrum of our database was best approximated by a power law of the spatial frequency, which is consistent with the results of several studies investigating the power spectrum of natural images.
correlation difference function measured when the input images were analyzed statically
is marked by dark triangles in the left panel of Fig. 2. Due to the wide spatial correlations
of natural visual input, the estimated correlation difference did not change sign within the
receptive field of a typical simple cell. That is, LGN cells of the same type were found
to covary more closely than cells of opposite types at all separations within the receptive
field of a simple cell. This result is not consistent with the putative role of a direct Hebbian/covariance model in the refinement of orientation selectivity after eye opening.
A second series of simulations was dedicated to analyze the effects of eye movements on
the structure of correlated activity. In these simulations the images of natural scenes were
scanned so as to replicate cat oculomotor behavior. As shown in right panel of Fig. 2,
significantly different patterns of correlated neural activity were found in the LGN in the
presence of different types of eye movements. In the presence of large saccades, levels
of correlations among the activity of geniculate cells were similar to the case of static
presentation of natural visual input, and they did not match the structure of simple cell
receptive fields. The dark triangles in Fig. 2 represent the correlation difference function
evaluated over a window of observation of 100 ms in the presence of both large saccades
and fixational eye movements. In contrast, when our analysis was restricted to the periods of
visual fixation during which microscopic eye movements occurred, strong covariances were
measured between cells of the same type located nearby and between cells of opposite types
at distances compatible with the separation between different subregions in the receptive
fields of simple cells.
[Figure 2 graphic: two panels plotting normalized correlation difference vs. distance (deg.), with curves for spontaneous activity and static natural input (left) and for "Saccade + Fixation" and "Fixation" (right), each compared to the cortical RF profile.]
Figure 2: Analysis of the correlated activity of LGN units in different experimental conditions. In both graphs, the curve marked by white circles is the average receptive field of
a simple cell, as measured by Jones and Palmer (1987) shown here for comparison. (Left)
Static analysis: patterns of correlated activity in the presence of spontaneous activity and
when natural visual input was analyzed statically. (Right) Effects of eye movements: correlation difference functions measured when natural images were scanned with sequences of saccades or with fixational eye movements.
Fig. 3 shows the results of a similar analysis for LGN cells at different visual eccentricities.
The white circles in the panels of Fig. 3 represent the width of the largest subfield in the
receptive field of cortical simple cells as measured by [13]. The other curves on the left
panel represent the widths of the central lobe of the correlation difference functions (the
spatial separation over which cells of the same type possess correlated activity, measured as twice the distance at which the correlation difference function crosses zero) in the cases of spontaneous activity and static presentation of natural visual input. As
in Fig. 2, (1) a close correspondence was present between the experimental data and the
subregion widths predicted by the correlation difference function in the case of spontaneous
activity; and (2) a significant deviation between the two measurements was present in the
case of static examination of natural visual input. The right panel in Fig. 3 shows the
correlation difference functions obtained at different visual eccentricities in the presence of
fixational eye movements. The minimum separation between receptive fields necessary for
observing strong levels of covariance between cells with opposite polarity increased with
eccentricity, as illustrated by the increase in the central lobe of the estimated correlation
functions at the different visual eccentricities. As for the case of spontaneous activity, a
close correspondence is now present between the spatiotemporal characteristics of LGN
activity and the organization of simple cell receptive fields.
4 Discussion
In this paper we have used computer modeling to study the correlated activity of LGN
cells when images of natural scenes were scanned so as to replicate cat eye movements. In
the absence of eye movements, when a natural visual environment was observed statically,
similar to the way it is examined by animals with their eyes paralyzed, we found that the
simulated responses of geniculate cells of the same type at any separation smaller than the
receptive field of a simple cell were strongly correlated. These spatial patterns of covarying geniculate activity did not match the structure of simple cell receptive fields. A similar
result was obtained when natural scenes were scanned through saccades. Conversely, in
[Figure 3 graphic: two panels plotting central width (deg) vs. eccentricity (deg.), comparing the Wilson & Sherman (1976) data with predictions from spontaneous activity and static natural input (left) and from visual fixation (right).]
Figure 3: Analysis of the correlated activity of LGN units at different visual eccentricities.
The width of the larger subfield in the receptive field of simple cells at different eccentricities as measured by Wilson and Sherman (1976) (white circles) is compared to the width
of the central lobe of the correlation difference functions measured in different conditions
(Left) Static analysis: results obtained in the presence of spontaneous activity and when
natural visual input was analyzed statically. (Right) Case of fixational eye movements and
natural visual input.
the case of micromovements, including both microsaccades and the combination of ocular
drift and tremor, strong correlations were measured among cells of the same type located
nearby and among cells of opposite types at distances compatible with the separation between different subregions in the receptive fields of simple cells. These results suggest a
developmental role for the small eye movements that occur during visual fixation.
Although the role of visual experience in the development of orientation selectivity has
been extensively investigated, relatively few studies have focused on whether eye movements contribute to the development of the responses of cortical cells. Yet, experiments in
which kittens were raised with their eyes paralyzed have shown basic deficiencies in the
development of visually-guided behavior [6], as well as impairments in ocular dominance
plasticity [4, 12]. In addition, it has been shown that eye movements are necessary for the
reestablishment of cortical orientation selectivity in dark-reared kittens exposed to visual
experience within the critical period [2, 5]. This indicates that simultaneous experience of
visual input and eye movements (and/or eye movement proprioception) may be necessary
for the refinement of orientation selectivity [1]. Our finding that the patterns of LGN activity with static presentation of natural images did not match the spatial structure of the
receptive fields of simple cells is in agreement with the hypothesis that exposure to pattern
vision per se is not sufficient to account for a normal visual development.
A main assumption of this study is that the refinement and maintenance of orientation
selectivity after eye opening is mediated by a Hebbian/covariance process of synaptic plasticity. The term Hebbian is used here with a generalized meaning to indicate the family of
algorithms in which modifications of synaptic efficacies occur on the basis of the patterns
of input covariances. While no previous theoretical study has investigated the influence
of eye movements on the development of orientation selectivity, some models have shown
that schemes of synaptic modifications based on the correlated activity of thalamic afferents can account well for the segregation of ON- and OFF-center inputs before eye opening
in the presence of suitable patterns of spontaneous activity [10, 9]. By showing that, during
fixation, the spatiotemporal structure of visually-driven geniculate activity is compatible
with the structure of simple cell receptive fields, the results of the present study extend the
plausibility of such schemes to the period after eye opening in which exposure to pattern
vision occurs.
Ocular movements are a common feature of the visual system of different species. It should
not come as a surprise that a trace of their existence can be found even in some of the most
basic properties of neurons in the early stages of the visual system, such as orientation
selectivity. Further studies are needed to investigate whether similar traces can be found in
other features of visual neural responses.
References
[1] P. Buisseret. Influence of extraocular muscle proprioception on vision. Physiol. Rev., 75(2):323-338, 1995.
[2] P. Buisseret, E. Gary-Bobo, and M. Imbert. Ocular motility and recovery of orientational properties of visual cortical neurons in dark-reared kittens. Nature, 272:816-817, 1978.
[3] D. Cai, G. C. DeAngelis, and R. D. Freeman. Spatiotemporal receptive field organization in the lateral geniculate nucleus of cats and kittens. J. Neurophysiol., 78(2):1045-61, 1997.
[4] R. D. Freeman and A. B. Bonds. Cortical plasticity in monocularly deprived immobilized kittens depends on eye movement. Science, 206:1093-1095, 1979.
[5] E. Gary-Bobo, C. Milleret, and P. Buisseret. Role of eye movements in developmental process of orientation selectivity in the kitten visual cortex. Vision Res., 26(4):557-567, 1986.
[6] A. Hein, F. Vital-Durand, W. Salinger, and R. Diamond. Eye movements initiate visual-motor development in the cat. Science, 204:1321-1322, 1979.
[7] D. Lee and J. G. Malpeli. Effect of saccades on the activity of neurons in the cat lateral geniculate nucleus. J. Neurophysiol., 79:922-936, 1998.
[8] D. N. Mastronarde. Correlated firing of cat retinal ganglion cells. I. Spontaneously active inputs to X and Y cells. J. Neurophysiol., 49(2):303-323, 1983.
[9] K. D. Miller. A model of the development of simple cell receptive fields and the ordered arrangement of orientation columns through activity-dependent competition between ON- and OFF-center inputs. J. Neurosci., 14(1):409-441, 1994.
[10] M. Miyashita and S. Tanaka. A mathematical model for the self-organization of orientation columns in visual cortex. Neuroreport, 3:69-72, 1992.
[11] E. Olivier, A. Grantyn, M. Chat, and A. Berthoz. The control of slow orienting eye movements by tectoreticulospinal neurons in the cat: behavior, discharge patterns and underlying connections. Exp. Brain Res., 93:435-449, 1993.
[12] W. Singer and J. Raushecker. Central-core control of developmental plasticity in the kitten visual cortex II. Electrical activation of mesencephalic and diencephalic projections. Exp. Brain Res., 47:223-233, 1982.
[13] J. R. Wilson and S. M. Sherman. Receptive-field characteristics of neurons in the cat striate cortex: changes with visual field eccentricity. J. Neurophysiol., 39(3):512-531, 1976.
1,089 | 1,995 | Generating velocity tuning by asymmetric
recurrent connections
Xiaohui Xie and Martin A. Giese
Dept. of Brain and Cognitive Sciences and CBCL
Massachusetts Institute of Technology
Cambridge, MA 02139
Dept. for Cognitive Neurology,
University Clinic Tübingen
Max-Planck-Institute for Biological Cybernetics
72076 Tübingen, Germany
E-mail: {xhxie|giese}@mit.edu
Abstract
Asymmetric lateral connections are one possible mechanism that can account for the direction selectivity of cortical neurons. We present a mathematical analysis for a class of these models. Contrasting with earlier
theoretical work that has relied on methods from linear systems theory,
we study the network's nonlinear dynamic properties that arise when the
threshold nonlinearity of the neurons is taken into account. We show
that such networks have stimulus-locked traveling pulse solutions that
are appropriate for modeling the responses of direction selective cortical
neurons. In addition, our analysis shows that outside a certain regime
of stimulus speeds the stability of these solutions breaks down, giving rise
to another class of solutions that are characterized by specific spatiotemporal periodicity. This predicts that if direction selectivity in the cortex is mainly achieved by asymmetric lateral connections, lurching activity waves might be observable in ensembles of direction selective cortical
neurons within appropriate regimes of the stimulus speed.
1 Introduction
Classical models for the direction selectivity in the primary visual cortex have assumed
feed-forward mechanisms, like multiplication or gating of afferent thalamo-cortical inputs
(e.g. [1, 2, 3]), or linear spatio-temporal filtering followed by a nonlinear operation (e.g.
[4, 5]). The existence of strong lateral connectivity has motivated modeling studies, which
have shown that the properties of direction selective cortical neurons can also be accurately
reproduced by recurrent neural network models with asymmetric lateral excitatory or inhibitory connections [6, 7]. Since these biophysically detailed models are not accessible
for mathematical analysis, more simplified models appropriate for a mathematical analysis
have been proposed. Such analysis was based on methods from linear systems theory by
neglecting the nonlinear properties of the neurons [6, 8, 9]. The nonlinear dynamic phenomena resulting from the interplay between the recurrent connectivity and the nonlinear
threshold characteristics of the neurons have not been tractable in this theoretical framework.
In this paper we present a mathematical analysis that takes the nonlinear behavior of the
individual neurons into account. We present the result of the analysis of such networks
for two types of threshold nonlinearities, for which closed-form analytical solutions of the
network dynamics can be derived. We show that such nonlinear networks have a class of
form-stable solutions, in the following signified as stimulus-locked traveling pulses, which
are suitable for modeling the activity of direction selective neurons. Contrary to networks
with linear neurons, the stability of the traveling pulse solutions in the nonlinear network
can break down, giving rise to another class of solutions (lurching activity waves) that is
characterized by spatio-temporal periodicity. Our mathematical analysis and simulations
showed that recurrent models with biologically realistic degrees of direction selectivity
typically also show transitions between traveling pulse and lurching solutions.
2 Basic model
Dynamic neural fields have been proposed to model the average behavior of large ensembles of neurons [10, 11, 12]. The scalar neural activity distribution $u(x,t)$ characterizes the average activity at time $t$ of an ensemble of functionally similar neurons that code for the position $x$, where $x$ can be any abstract stimulus parameter. By the continuous approximation of biophysically discrete neuronal dynamics it is in some cases possible to treat the nonlinear neural dynamics analytically.

The field dynamics of the neural activation variable $u(x,t)$ is described by:

\[
\frac{\partial u(x,t)}{\partial t} = -u(x,t) + \int w(x - x')\, f\big(u(x',t)\big)\, dx' + I(x,t). \tag{1}
\]

This dynamics is essentially a leaky integrator with a total input on the right hand side, which includes a feedforward input term $I(x,t)$ and a feedback term that integrates the recurrent contributions from other laterally connected neurons. The interaction kernel $w(x - x')$ characterizes the average synaptic connection strength between the neurons coding position $x'$ and the neurons coding position $x$. $f$ is the activation function of the neurons. This function is nonlinear and monotonically increasing, and it introduces the nonlinearity that makes it difficult to analyze the network dynamics.

With a moving stimulus at constant velocity $c$, it is often convenient to transform the static coordinate $x$ to the moving frame by changing variable $\xi = x - ct$. Under the new frame, the stimulus is stationary. Let $v(\xi,t) = u(\xi + ct,\, t)$. The dynamics for $v$ reads

\[
\frac{\partial v(\xi,t)}{\partial t} = c\,\frac{\partial v(\xi,t)}{\partial \xi} - v(\xi,t) + \int w(\xi - \xi')\, f\big(v(\xi',t)\big)\, d\xi' + I(\xi). \tag{2}
\]

A stationary solution $\bar v(\xi)$ in the moving frame has to satisfy the following equation:

\[
0 = c\,\frac{d\bar v(\xi)}{d\xi} - \bar v(\xi) + \int w(\xi - \xi')\, f\big(\bar v(\xi')\big)\, d\xi' + I(\xi). \tag{3}
\]

$\bar v(\xi)$ corresponds to a traveling pulse solution with velocity $c$ in the original static coordinate. Therefore the traveling pulse solution driven by the moving stimulus can be found by solving Eq. (3), and the stability of the traveling pulse can be studied by perturbing the stationary solution in Eq. (2).

The neural field dynamics Eq. (2) is a nonlinear integro-differential equation. In most cases an analytic treatment of such equations is impossible. In this paper, we consider two biologically inspired special cases, which can be analytically solved. For this purpose we consider only one-dimensional neural fields and assume that the nonlinear activation function $f$ is either a step function or a linear threshold function.

3 Step activation function

We first consider the step activation function $f(u) = \theta(u)$, where $\theta(u) = 1$ when $u > 0$ and zero otherwise. This form of activation function approximates the activities of neurons which, by saturation, are either active or inactive. For the one-dimensional case, we assume that only a single stationary excited regime (with $\bar v(\xi) > 0$) exists, located between the points $\xi_1 < \xi_2$. Only neurons inside this regime contribute to the integral, and accordingly Eq. (3) can be simplified following [11]. The spatial shape $\bar v(\xi)$ of the stationary solution obeys the ordinary differential equation

\[
c\,\frac{d\bar v(\xi)}{d\xi} - \bar v(\xi) + \int_{\xi_1}^{\xi_2} w(\xi - \xi')\, d\xi' + I(\xi) = 0, \tag{4}
\]

where the boundaries satisfy $\bar v(\xi_1) = \bar v(\xi_2) = 0$. The solution of the above equation can be found by treating $\xi_1$ and $\xi_2$ as fixed parameters and solving Eq. (4).

To facilitate notation we define an integral operator $\mathcal{T}$ that inverts the linear differential operator on the left-hand side of Eq. (4). For $c > 0$,

\[
\mathcal{T}[g](\xi) = \frac{1}{c}\int_{\xi}^{\infty} e^{(\xi - \xi')/c}\, g(\xi')\, d\xi' \tag{5}
\]

(for $c < 0$ the integration runs over $(-\infty, \xi]$ instead). Writing $W(\xi) = \int_{\xi_1}^{\xi_2} w(\xi - \xi'')\, d\xi''$ for the recurrent input generated by the excited regime, the solution of Eq. (4) can be written with these definitions in the form

\[
\bar v(\xi) = \mathcal{T}[\,W + I\,](\xi). \tag{6}
\]

For the boundary points the conditions $\bar v(\xi_1) = \bar v(\xi_2) = 0$ must be satisfied, leading to the transcendental equation system

\[
\mathcal{T}[\,W + I\,](\xi_1) = 0, \tag{7}
\]
\[
\mathcal{T}[\,W + I\,](\xi_2) = 0, \tag{8}
\]

from which $\xi_1$ and $\xi_2$ can be determined.
3.1 Stability of the traveling pulse solution
The stability of the traveling pulse solution can be analyzed by perturbing the stationary solution in the moving coordinate system. Let $\delta v(\xi,t)$ be a small perturbation of $\bar v(\xi)$. The linearized perturbation dynamics reads

\[
\frac{\partial\, \delta v(\xi,t)}{\partial t} = c\,\frac{\partial\, \delta v(\xi,t)}{\partial \xi} - \delta v(\xi,t) - w(\xi - \xi_1^0)\,\delta\xi_1(t) + w(\xi - \xi_2^0)\,\delta\xi_2(t), \tag{9}
\]

where $\delta\xi_i(t)$ ($i = 1, 2$) are the perturbations of the boundary points of the excited regime from their stationary values $\xi_i^0$. However, $\delta\xi_i$ is not independent of $\delta v$, and the dependence can be found by noting that $\bar v(\xi_i^0 + \delta\xi_i) + \delta v(\xi_i^0 + \delta\xi_i,\, t) = 0$. Since $\bar v(\xi_i^0) = 0$, to the first order we have $\delta\xi_i = -\,\delta v(\xi_i^0, t)\,/\,\bar v'(\xi_i^0)$. Substituting this back into the perturbed dynamics, we have

\[
\frac{\partial\, \delta v}{\partial t} = c\,\frac{\partial\, \delta v}{\partial \xi} - \delta v + \frac{w(\xi - \xi_1^0)}{\bar v'(\xi_1^0)}\,\delta v(\xi_1^0, t) - \frac{w(\xi - \xi_2^0)}{\bar v'(\xi_2^0)}\,\delta v(\xi_2^0, t).
\]

Substituting a solution of the form $\delta v(\xi,t) = e^{\lambda t}\,\varphi(\xi)$ into the above dynamics and inverting the resulting linear differential operator as in Eq. (5), the boundary values $\varphi(\xi_1^0)$ and $\varphi(\xi_2^0)$ must satisfy a homogeneous $2\times 2$ linear system. After some calculation, the eigenvalue equation for $\lambda$ reads

\[
\det\big(\mathbb{1} - M(\lambda)\big) = 0, \qquad
M_{ij}(\lambda) = \frac{(-1)^{j+1}}{\bar v'(\xi_j^0)}\; \mathcal{T}_{\lambda}\big[\, w(\cdot - \xi_j^0)\,\big](\xi_i^0), \tag{10}
\]

where the operator $\mathcal{T}_{\lambda}$ is defined as in Eq. (5) with the exponential kernel $e^{(1+\lambda)(\xi - \xi')/c}$ in place of $e^{(\xi - \xi')/c}$. From the transcendental Eq. (10) the eigenvalues $\lambda$ can be found. The traveling pulse solution is asymptotically stable only if the real parts of all eigenvalues are negative.
3.2 Simulation results of step activation function model
We use a spatially asymmetric interaction kernel as an example, numerically simulate the dynamics, and compare the simulation results with the above mathematical analysis. The stimulus used is a moving bar
with constant width and amplitude. The results are shown in the left (a-e) panels of Fig. (1).
Panel (a) shows the speed tuning curve plotted as the dependence of the peak activity of the
traveling pulse as function of the stimulus velocity ( . The solid lines indicate the results
from the numerical simulation and the dotted lines represent results from the analytical solution. Panel (b) shows the maximum real part of the eigenvalues obtained from Eq. (10).
For small and large stimulus velocities the maximum of the real parts of $\lambda$ becomes positive,
indicating a loss of stability of the form-stable solution. To verify this result we calculated
the variability of the peak activity over time in simulation. Panel (c) shows the average
variability as function of the stimulus velocity. At the velocities for which the eigenvalues
indicate a loss of stability the variability of the amplitudes suddenly increases, consistent
with our interpretation as a loss of the form stability of the solution.
An interesting observation is illustrated in panels (d) and (e) that show a color-coded plot of
the space-time evolution of the activity. Panel (e) shows the propagation of the form-stable
traveling pulse. Panel (d) shows the solution that arises when stability is lost. This solution
is characterized by a spatio-temporal periodicity that is defined in the moving coordinate system by $v(\xi + \Delta,\, t + T) = v(\xi, t)$, where $T$ and $\Delta$ are constants that depend
on the network dynamics. Solutions of similar type have been described before in spiking
networks [13].
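The variability criterion of panel (c) is easy to reproduce numerically: for a form-stable pulse the peak amplitude is constant over time, while for a lurching solution it oscillates. A minimal sketch, assuming snapshot histories like those produced by the `simulate_field` sketch above:

```python
import numpy as np

def peak_variability(snapshots, discard=10):
    """Standard deviation over time of the pulse's peak activity (cf. panel (c)).
    Near-zero variability indicates a form-stable traveling pulse; a sudden
    increase marks the transition to a lurching solution."""
    peaks = snapshots[discard:].max(axis=1)
    return peaks.std()

# Sweep stimulus velocities to locate the form-stable regime:
# for c in np.linspace(0.5, 6.0, 12):
#     _, hist = simulate_field(c=c)
#     print(c, peak_variability(hist))
```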
4 Linear threshold activation function
In this case, the activation function is taken to be $f(u) = [u]^{+} = \max(u, 0)$. Cortical neurons typically operate far below the saturation level. The linear threshold activation function is thus more suitable to capture the properties of real neurons while still permitting a relatively simple theoretical analysis. We consider a ring network with periodic boundary conditions. The dynamics is given by

\[
\frac{\partial u(\theta,t)}{\partial t} = -u(\theta,t) + \Big[\, \frac{1}{2\pi}\int_{-\pi}^{\pi} w(\theta - \theta')\, u(\theta',t)\, d\theta' + I(\theta,t) \,\Big]^{+}. \tag{11}
\]

This network can be shown to be equivalent to the standard one in Eq. (1) by changing variables and transforming the stimulus. We chose this form because it simplifies the mathematical analysis of ring networks. Again, we consider a moving stimulus with velocity $c$ and analyze the network in the moving frame.
4.1 General solutions and stability analysis
Because the activation function has linear threshold characteristics, inside the excited regime (for which the total input is positive) the system is linear. One approach to solve this dynamics is therefore to find the solutions to the differential equation assuming the boundaries of the excited regime are given. The conditions at the boundaries lead to a set of self-consistent equations for the solutions to satisfy, from which the boundaries can be determined.

By denoting activities in moving coordinates as $v(\xi,t) = u(\xi + ct,\, t)$ with $\xi = \theta - ct$, the dynamics can be written as:

\[
\frac{\partial v(\xi,t)}{\partial t} = c\,\frac{\partial v(\xi,t)}{\partial \xi} - v(\xi,t) + \Big[\, \frac{1}{2\pi}\int_{-\pi}^{\pi} w(\xi - \xi')\, v(\xi',t)\, d\xi' + I(\xi) \,\Big]^{+}.
\]

Supposing the excited regime is $[\xi_1, \xi_2]$, we solve the dynamics by Fourier transforming the above equation in the spatial domain. Let $v(\xi,t) = \sum_n v_n(t)\, e^{i n \xi}$, and expand the kernel and the input accordingly as $w(\xi) = \sum_n w_n e^{i n \xi}$ and $I(\xi) = \sum_n I_n e^{i n \xi}$, where $n$ is the frequency. The stationary solution in moving coordinates can then be written as

\[
\mathbf{v} = (\Lambda - \mathcal{W})^{-1}\, \mathbf{I}, \tag{12}
\]

where the matrix $\Lambda$ is defined as the diagonal matrix with entries $\Lambda_{nn} = 1 - i n c$. The components of the matrix $\mathcal{W}$ are built from the Fourier coefficients $w_n$ of the interaction kernel, restricted to the excited regime, and those of the vector $\mathbf{I}$ are the Fourier coefficients of the input. The above solution has to satisfy the two boundary conditions, from which $\xi_1$ and $\xi_2$ can be determined.

Stability of this traveling pulse solution can be analyzed by linear perturbation. Note that perturbed boundary points do not contribute to the linearized perturbed dynamics, since the total input of the stationary solution on the right hand side of Eq. (11) vanishes at the boundary points of the excited regime, so that boundary perturbations enter only at second order. Therefore, the linearized perturbation dynamics can be fully characterized by the perturbed Fourier modes with fixed boundaries. Hence, the stability of the traveling pulse solution is determined by the eigenvalues of the matrix $(\mathcal{W} - \Lambda)$. If the largest real part of the eigenvalues of $(\mathcal{W} - \Lambda)$ is negative, then the stimulus-locked traveling pulse is stable.
4.2 Simplified linear threshold network
The general solution introduced above requires the solution of an equation system. In
practice, the Fourier series have to be truncated in order to obtain a finite number of Fourier
components at the expense of an approximation error. Next we consider a special simple model for which an exact solution can be found, in which the interaction kernel and the input $I$ contain only the first two Fourier components. For this model a closed-form solution and stability analysis is presented, which at the same time provides insight into some rather general properties of linear threshold networks.
The interaction kernel and feedforward input are assumed to have the following form:

\[
w(\theta) = w_0 + w_1 \cos\theta + \tilde{w}_1 \sin\theta, \qquad
I(\theta,t) = I_0 + I_1 \cos(\theta - ct). \tag{13}
\]

This network was used by Hansel and Sompolinsky as a model of cortical orientation selectivity [14]. However, different from their network, we consider here an asymmetric interaction kernel $w(\theta)$ (the $\sin\theta$ term breaks the spatial symmetry) and a form-constant moving stimulus $I(\theta - ct)$.

Since the interaction kernel and input $I$ only involve the first two Fourier components, the dynamics can be fully determined in terms of its order parameters defined by

\[
r_0(t) = \frac{1}{2\pi}\int_{-\pi}^{\pi} v(\xi,t)\, d\xi, \qquad
r_1(t)\, e^{i\psi(t)} = \frac{1}{2\pi}\int_{-\pi}^{\pi} v(\xi,t)\, e^{i\xi}\, d\xi, \tag{14}
\]

where the phase variable $\psi(t)$ is introduced to restrict $r_1(t)$ to being real. In terms of these two order parameters plus the phase variable, the stimulus-locked traveling pulse solution and its stability conditions can be expressed analytically. Due to space limitations, the detailed derivations are omitted here. We show the theoretical results in the right five panels of Fig. 1 and compare them with numerical simulations.
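For reference, the order parameters of Eq. (14) are straightforward to evaluate from a sampled activity profile; the sketch below assumes the activity is given on a uniform grid of angles, so that grid means approximate the normalized integrals.

```python
import numpy as np

def order_parameters(v, theta):
    """Order parameters of Eq. (14): mean activity r0, and the (real) amplitude
    r1 and phase psi of the first circular Fourier moment of v(theta)."""
    r0 = np.mean(v)                         # (1/2pi) * integral of v dtheta
    m1 = np.mean(v * np.exp(1j * theta))    # (1/2pi) * integral of v e^{i theta}
    return r0, np.abs(m1), np.angle(m1)

theta = np.linspace(-np.pi, np.pi, 256, endpoint=False)
v = np.maximum(0.2 + np.cos(theta - 0.8), 0.0)   # example activity bump
r0, r1, psi = order_parameters(v, theta)
```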
Similar to the results of the step function model, panel (A) shows the speed tuning curve, plotted as the values of the order parameters $r_0$ and $r_1$ as a function of different stimulus velocities $c$. Panel
(B) shows the largest real part of the eigenvalues of a stability matrix that can be obtained by
linearizing the order parameter dynamics around the stationary solution. Panel (C) shows
the average variations as function of the stimulus velocity. The space-time evolution of
the form-stable traveling pulse is shown in panel (E); the form-unstable lurching wave is
shown in panel (D). Thus we found that the lurching wave solution type arises very robustly for
both types of threshold functions when the network achieved substantial direction selective
behavior.
5 Conclusion
We have presented different methods for an analysis of the nonlinear dynamics of simple
recurrent neural models for the direction selectivity of cortical neurons. Compared to earlier works, we have taken into account the essentially nonlinear effects that are introduced
by the nonlinear threshold characteristics of the cortical neurons. The key result of our
work is that such networks have a class of form-stable traveling pulse solutions that behave
similarly to the solutions of linear spatio-temporal filtering models within a certain regime
of stimulus speeds. By the essential nonlinearity of the network, however, bifurcations can
arise for which the traveling pulse solutions become unstable. We observed that in this
case a new class of spatio-temporally periodic solutions ('lurching activity waves') arises.
Since we found this solution type very frequently for networks with substantial direction
selectivity, our analysis predicts that such 'lurching behavior' might be observable in visual
cortex areas if, in fact, the direction selectivity is essentially based on asymmetric lateral
connectivity.
Acknowledgments
We acknowledge helpful discussions with H.S. Seung and T. Poggio.
References
[1] C. Koch and T. Poggio. The synaptic veto mechanism: does it underlie direction and orientation selectivity in the visual cortex. In D. Rose and V. G. Dobson, editors, Models of the Visual Cortex, pages 15-34. John Wiley, 1989.
[2] J. P. van Santen and G. Sperling. Elaborated Reichardt detectors. J. Opt. Soc. Am. A, 2:300-321, 1985.
[3] W. Reichardt. A principle for the evaluation of sensory information by the central nervous system, 1961.
[4] E. H. Adelson and J. R. Bergen. Spatiotemporal energy models for the perception of motion. J. Opt. Soc. Am. A, 2:284-299, 1985.
[Figure 1 graphics: panels (a)-(e), step activation model (left), and (A)-(E), linear threshold model (right). (a)/(A): peak activity and order parameters $r_0$, $r_1$ vs. velocity, with "theory" and "simulation" curves; (b)/(B): largest real part of the eigenvalues vs. velocity; (c)/(C): peak variance / variation vs. velocity; (d)-(e)/(D)-(E): color-coded activity over SPACE and TIME.]
Figure 1: Traveling pulse solution and its stability in two classes of models. The left panels show the step activation function model, the right panels the linear threshold model. Panels (a) and (A) show the velocity tuning curves of the traveling pulse, in terms of its peak activity in (a) or the order parameters in (A). The solid lines indicate the results from calculation, and the dotted lines represent the results from simulation. Panels (b) and (B) plot the largest real parts of the eigenvalues of a stability matrix obtained from the perturbed linear dynamics around the stationary solution. Outside a certain range of stimulus velocities the largest real part of the eigenvalues becomes positive, indicating a loss of stability of the form-stable solution. Panels (c) and (C) plot the average variations over time during simulation of the peak activity, and of the order parameters $r_0$ (blue curve) and $r_1$ (green curve), respectively. A nonzero variance signifies a loss of stability for traveling pulse solutions, which is consistent with the eigenvalue analysis in panels (b) and (B). A color-coded plot of the spatial-temporal evolution of the activity is shown in panels (d) and (e) for the step model, and in (D) and (E) for the linear threshold model. Panels (e) and (E) show the propagation of the form-stable peak over time; panels (d) and (D) show the lurching activity wave that arises when stability is lost. The interaction kernel used in the step function model is the spatially asymmetric example kernel of Section 3.2, and the stimulus is a moving bar with constant width and amplitude; the linear threshold model uses the kernel and input of Eq. (13).
[5] A. B. Watson and A. J. Ahumada. Model of human visual-motion sensing. J. Opt. Soc. Am. A, 2:322-341, 1985.
[6] H. Suarez, C. Koch, and R. Douglas. Modeling direction selectivity of simple cells in striate visual cortex within the framework of the canonical microcircuit. J. Neurosci., 15:6700-19, 1995.
[7] R. Maex and G. A. Orban. Model circuit of spiking neurons generating directional selectivity in simple cells. J. Neurophysiol., 75:1515-45, 1996.
[8] P. Mineiro and D. Zipser. Analysis of direction selectivity arising from recurrent cortical interactions. Neural Comput., 10:353-71, 1998.
[9] S. P. Sabatini and F. Solari. An architectural hypothesis for direction selectivity in the visual cortex: the role of spatially asymmetric intracortical inhibition. Biol. Cybern., 80:171-83, 1999.
[10] H. R. Wilson and J. D. Cowan. A mathematical theory of the functional dynamics of cortical and thalamic nervous tissue. Kybernetik, 13(2):55-80, 1973.
[11] S. Amari. Dynamics of pattern formation in lateral-inhibition type neural fields. Biol. Cybern., 27(2):77-87, 1977.
[12] E. Salinas and L. F. Abbott. A model of multiplicative neural responses in parietal cortex. Proc. Natl. Acad. Sci. USA, 93:11956-11961, 1996.
[13] D. Golomb and G. B. Ermentrout. Effects of delay on the type and velocity of travelling pulses in neuronal networks with spatially decaying connectivity. Network, 11:221-46, 2000.
[14] David Hansel and Haim Sompolinsky. Modeling feature selectivity in local cortical circuits. In C. Koch and I. Segev, editors, Methods in Neuronal Modeling, chapter 13, pages 499-567. MIT Press, Cambridge, Massachusetts, 1998.
1,090 | 1,996 | Learning a Gaussian Process Prior
for Automatically Generating Music Playlists
John C. Platt
Christopher J. C. Burges
Steven Swenson
Christopher Weare
Alice Zheng
Microsoft Corporation
1 Microsoft Way
Redmond, WA 98052
{jplatt, cburges, sswenson, chriswea}@microsoft.com, alicez@cs.berkeley.edu
Abstract
This paper presents AutoDJ: a system for automatically generating music playlists based on one or more seed songs selected by a user. AutoDJ
uses Gaussian Process Regression to learn a user preference function
over songs. This function takes music metadata as inputs. This paper
further introduces Kernel Meta-Training, which is a method of learning
a Gaussian Process kernel from a distribution of functions that generates
the learned function. For playlist generation, AutoDJ learns a kernel from
a large set of albums. This learned kernel is shown to be more effective
at predicting users? playlists than a reasonable hand-designed kernel.
1 Introduction
Digital music is becoming very widespread, as personal collections of music grow to thousands of songs. One typical way for a user to interact with a personal music collection is
to specify a playlist, an ordered list of music to be played. Using existing digital music
software, a user can manually construct a playlist by individually choosing each song. Alternatively, playlists can be generated by the user specifying a set of rules about songs (e.g.,
genre = rock), and the system randomly choosing songs that match those rules.
Constructing a playlist is a tedious process: it takes time to generate a playlist that matches
a particular mood. It is also difficult to construct a playlist in advance, as a user may not
anticipate all possible music moods and preferences he or she will have in the future.
AutoDJ is a system for automatically generating playlists at the time that a user wants to
listen to music. The playlist plays with minimal user intervention: the user hears music
that is suitable for his or her current mood, preferences and situation.
AutoDJ has a simple and intuitive user interface. The user selects one or more seed songs
for AutoDJ to play. AutoDJ then generates a playlist with songs that are similar to the seed
songs. The user may also review the playlist and add or remove certain songs, if they don?t
fit. Based on this modification, AutoDJ then generates a new playlist.
AutoDJ uses a machine learning system that finds a current user preference function over
a feature space of music. Every time a user selects a seed song or removes a song from the playlist, a training example is generated. In general, a user can give an arbitrary preference value to any song. By default, we assume that selected songs have target values of 1, while removed songs have target values of 0. Given a training set, a full user preference function $f(x)$ is inferred by regression. The function $f(x)$ is evaluated for each song owned by the user, and the songs with the highest $f(x)$ are placed into the playlist.

(Current address: Department of Electrical Engineering and Computer Science, University of California at Berkeley.)
The machine learning problem defined above is difficult to solve well. The training set
often contains only one training example: a single seed song that the user wishes to listen
to. Most often, AutoDJ must infer an entire function from 1-3 training points. An appropriate machine learning method for such small training sets is Gaussian Process Regression
(GPR) [14], which has been shown empirically to work well on small data sets. Technical
details of how to apply GPR to playlist generation are given in section 2. In broad detail,
GPR starts with a similarity or kernel function $K(x_1, x_2)$ between any two songs. We define the input space to be descriptive metadata about the song. Given a training set of user preferences, a user preference function is generated by forming a linear blend of these kernel functions, whose weights are solved via a linear system. This user preference function is then used to evaluate all of the songs in the user's collection.
This paper introduces a new method of generating a kernel for use in GPR. We call this
method Kernel Meta-Training (KMT). Technical details of KMT are described in section 3. KMT improves GPR by adding an additional phase of learning: meta-training.
During meta-training, a kernel is learned before any training examples are available. The
kernel is learned from a set of samples from meta-training functions. These meta-training
functions are drawn from the same function distribution that will eventually generate the
training function. In order to generalize the kernel beyond the meta-training data set, we
fit a parameterized kernel to the meta-training data, with many fewer parameters than data
points. The kernel is parameterized as a non-negative combination of base Mercer kernels.
These kernel parameters are tuned to fit the samples across the meta-training functions.
This constrained fit leads to a simple quadratic program. After meta-training, the kernel is
ready to use in standard GPR.
To use KMT to generate playlists, we meta-train a kernel on a large number of albums. The
learned kernel thus reflects the similarity of songs on professionally designed albums. The
learned kernel is hardwired into AutoDJ. GPR is then performed using the learned kernel
every time a user selects or removes songs from a playlist. The learned kernel forms a good
prior, which enables AutoDJ to learn a user preference function with a very small number
of user training examples.
1.1 Previous Work
There are several commercial Web sites for playing or recommending music based on one
seed song. The algorithms behind these sites are still unpublished.
This work is related to Collaborative Filtering (CF) [9] and to building user profiles in
textual information retrieval [11]. However, CF does not use metadata associated with a
media object, hence CF will not generalize to new music that has few or no user votes.
Also, no work has been published on building user profiles for music. The ideas in this
work may also be applicable to text retrieval.
Previous work in GPR [14] learned kernel parameters through Bayesian methods from just
the training set, not from meta-training data. When AutoDJ generates playlists, the user
may select only one training example. No useful similarity metric can be derived from one
training example, so AutoDJ uses meta-training to learn the kernel.
The idea of meta-training comes from the 'learning to learn' or multi-task learning literature [2, 5, 10, 13]. This paper is most similar to Minka & Picard [10], who also suggested
fitting a mean and covariance for a Gaussian Process based on related functions. However,
in [10], in order to generalize the covariance beyond the meta-training points, a Multi-Layer
Perceptron (MLP) is used to learn multiple tasks, which requires non-convex optimization.
The Gaussian Process is then extracted from the MLP. In this work, using a quadratic program, we fit a parameterized Mercer kernel directly to a meta-training kernel matrix in
order to generalize the covariance.
Meta-training is also related to algorithms that learn from both labeled and unlabeled
data [3, 6]. However, meta-training has access to more data than simply unlabeled data:
it has access to the values of the meta-training functions. Therefore, meta-training may
perform better than these other algorithms.
2 Gaussian Process Regression for Playlist Generation
AutoDJ uses GPR to generate a playlist every time a user selects one or more songs. GPR
uses a Gaussian Process (GP) as a prior over functions. A GP is a stochastic process
over a multi-dimensional input space $\vec{x}$. For any $n$, if $n$ vectors $\vec{x}_i$ are chosen in the input
space, and the corresponding samples $f(\vec{x}_i)$ are drawn from the GP, then the $f(\vec{x}_i)$ are jointly
Gaussian.
There are two statistics that fully describe a GP: the mean $\mu(\vec{x})$ and the covariance
$K(\vec{x}_1, \vec{x}_2)$. In this paper, we assume that the GP over user preference functions is zero
mean. That is, at any particular time, the user does not want to listen to most of the songs
in the world, which leads to a mean preference close enough to zero to approximate as zero.
Therefore, the covariance kernel $K(\vec{x}_1, \vec{x}_2)$ simply turns into a correlation over a distribution
of functions $f$:

$$K(\vec{x}_1, \vec{x}_2) = E_f[\,f(\vec{x}_1)\, f(\vec{x}_2)\,].$$
In section 3, we learn a kernel $K(\vec{x}_1, \vec{x}_2)$ which takes music metadata vectors as $\vec{x}_1$ and $\vec{x}_2$. In this
paper, whenever we refer to a music metadata vector, we mean a vector consisting of 7
categorical variables: genre, subgenre, style, mood, rhythm type, rhythm description, and
vocal code. This music metadata vector is assigned by editors to every track of a large
corpus of music CDs. Sample values of these variables are shown in Table 1. Our kernel
function $K(\vec{x}_1, \vec{x}_2)$ thus computes the similarity between two metadata vectors corresponding to two songs. The kernel only depends on whether the same slots in the two vectors are
the same or different. Specific details about the kernel function are described in section 3.2.
Metadata Field       Example Values                                  Number of Values
Genre                Jazz, Reggae, Hip-Hop                           30
Subgenre             Heavy Metal, I'm So Sad and Spaced Out          572
Style                East Coast Rap, Gangsta Rap, West Coast Rap     890
Mood                 Dreamy, Fun, Angry                              21
Rhythm Type          Straight, Swing, Disco                          10
Rhythm Description   Frenetic, Funky, Lazy                           13
Vocal Code           Instrumental, Male, Female, Duet                6

Table 1: Music metadata fields, with some example values
Once we have defined a kernel, it is simple to perform GPR. Let $\vec{x}_i$ be the metadata vectors
for the $N$ songs for which the user has expressed a preference by selecting or removing
them from the playlist. Let $t_i$ be the expressed user preference. In general, $t_i$ can be any
real value. If the user does not express a real-valued preference, $t_i$ is assumed 1 if the user
wants to listen to the song and 0 if the user does not. Even if the $t_i$ values are binary, we do
not use Gaussian Process Classification (GPC), in order to maintain generality and because
GPC requires an iterative procedure to estimate the posterior [1].
Let $y_i$ be the underlying true user preference for the $i$th song, of which $t_i$ is a noisy measurement, with Gaussian noise of variance $\sigma^2$. Also, let $\vec{x}_*$ be a metadata vector of any
song that will be considered to be on a playlist: $y_*$ is the (unknown) user preference for
that song.
Before seeing the preferences $t_i$, the vector $(y_1, \ldots, y_N, y_*)$ forms a joint prior Gaussian derived
from the GP. After incorporating the $t_i$ information, the posterior mean of $y_*$ is

$$\bar{y}_* = \sum_{i=1}^{N} c_i\, K(\vec{x}_i, \vec{x}_*), \qquad (1)$$

where

$$\vec{c} = (\mathbf{K} + \sigma^2 \mathbf{I})^{-1}\,\vec{t} \quad \text{and} \quad \mathbf{K}_{ij} = K(\vec{x}_i, \vec{x}_j). \qquad (2)$$

Thus, the user preference function for a song $s$, $\bar{y}(\vec{x}_s)$, is a linear blend of kernels $K(\vec{x}_s, \vec{x}_i)$
that compare the metadata vector $\vec{x}_s$ for song $s$ with the metadata vectors $\vec{x}_i$
for the songs that the user expressed a preference. The $c_i$ weights are computed by inverting an $N$ by $N$ matrix. Since the number of user preferences $N$ tends to be small, inverting
this matrix is very fast.
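As a concrete illustration, the prediction step of (1)–(2) can be written in a few lines; this is our own sketch (the function and array names are not from the paper), where `K_train` holds $K(\vec{x}_i, \vec{x}_j)$ over the preferred songs and `K_star` holds $K(\vec{x}_s, \vec{x}_i)$ for each candidate song $s$:

```python
import numpy as np

def gpr_predict(K_train, t, K_star, noise_var):
    """Posterior mean of eqs. (1)-(2): y_bar = K_star @ (K + sigma^2 I)^{-1} t."""
    N = K_train.shape[0]
    c = np.linalg.solve(K_train + noise_var * np.eye(N), t)  # eq. (2)
    return K_star @ c                                        # eq. (1), one score per song
```

Candidate songs can then be ranked by sorting the returned scores in descending order.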
Since the kernel is learned before GPR, and the $\vec{t}$ vector is supplied by the user, the only
free hyperparameter is the noise value $\sigma^2$. This hyperparameter is selected via maximum
likelihood on the training set. The formula for the log likelihood of the training data given
$\sigma^2$ is

$$\log P(\vec{t}\,|\,\sigma^2) = -\frac{1}{2}\log\det(\mathbf{K} + \sigma^2\mathbf{I}) - \frac{1}{2}\,\vec{t}^{\top}(\mathbf{K} + \sigma^2\mathbf{I})^{-1}\vec{t} - \frac{N}{2}\log 2\pi. \qquad (3)$$

Every time a playlist is generated, different values of $\sigma^2$ are evaluated and the $\sigma^2$ that generates the highest log likelihood is used.
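A minimal rendering of this selection step, continuing the sketch above (the candidate grid of $\sigma^2$ values is our assumption; the paper does not list the values it tries):

```python
def log_marginal_likelihood(K_train, t, noise_var):
    """Eq. (3): log P(t | sigma^2) for a zero-mean GP."""
    A = K_train + noise_var * np.eye(len(t))
    _, logdet = np.linalg.slogdet(A)
    return -0.5 * logdet - 0.5 * t @ np.linalg.solve(A, t) - 0.5 * len(t) * np.log(2 * np.pi)

def select_noise(K_train, t, grid=(1e-3, 1e-2, 1e-1, 1.0)):
    # Grid values are placeholders for illustration only.
    return max(grid, key=lambda v: log_marginal_likelihood(K_train, t, v))
```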
In order to generate the playlist, the $\vec{c}$ vector in (2) is computed, and the user preference function $\bar{y}$
is computed for every song that the user owns. The songs are then ranked in descending order of $\bar{y}$. The playlist consists of the top songs in the ranked list. The playlist can cut
off after a fixed number of songs, e.g., 30. It can also cut off if the value of $\bar{y}$ gets too low,
so that the playlist only contains songs that the user will enjoy.
The order of the playlist is the order of the songs in the ranked list. This is empirically
effective: the playlist typically starts with the selected seed songs, proceeds to songs very
similar to the seed songs, and then gradually drifts away from the seed songs towards the
end of the list, when the user is paying less attention. We explored neural networks and
SVMs for determining the order of the playlist, but have not found a clearly more effective
ordering algorithm than simply the order of $\bar{y}$. Here, 'effective' is defined as generating
playlists that are pleasing to the authors.
3 Kernel Meta-Training (KMT)
This section describes Kernel Meta-Training (KMT), which creates the GP kernel $K(\vec{x}_1, \vec{x}_2)$
used in the previous section. As described in the introduction, KMT operates on samples
drawn from a set of $M$ functions $f_m(\vec{x})$. This set of functions should be related to a final
trained function, since we derive a similarity kernel from the meta-training set of functions. In other words, we learn a Gaussian prior over the space of functions by computing
Gaussian statistics on a set of functions related to a function we wish to learn.
We express the kernel as a covariance components model [12]:

$$K(\vec{x}_1, \vec{x}_2) = \sum_{p=1}^{P} \alpha_p\, K_p(\vec{x}_1, \vec{x}_2), \qquad (4)$$

where the $K_p$ are pre-defined Mercer kernels and $\alpha_p \ge 0$. We then fit the $\alpha_p$ to the samples drawn
from the meta-training functions. We use this simpler model instead
of an empirical covariance matrix, in order to generalize the GPR beyond points that are in the meta-training
set.
The functional form of the kernels $K_p$ and the number of components $P$ can be chosen via cross-validation. In our
application, both the form of the $K_p$ and $P$ are determined by the available input data (see
section 3.2, below).
One possible method to fit the $\alpha_p$ is to maximize the likelihood in (3) over all samples
drawn from all meta-training functions [7]. However, solving for the optimal $\alpha_p$ requires
an iterative algorithm whose inner loop requires Cholesky decomposition of a matrix whose
dimension is the number of meta-training samples. For our application, this matrix would
have dimension 174,577, which makes maximizing the likelihood impractical.
Instead of maximizing the likelihood, we fit a covariance components model to an empirical
covariance computed on the meta-training data set, using a least-square distance function:

$$\min_{\alpha_p} \sum_{i,j} \left( \hat{C}_{ij} - \sum_{p=1}^{P} \alpha_p\, K_p(\vec{x}_i, \vec{x}_j) \right)^{\!2}, \qquad (5)$$

where $i$ and $j$ index all of the samples in the meta-training data set, and where $\hat{C}_{ij}$ is the
empirical covariance

$$\hat{C}_{ij} = \frac{1}{M} \sum_{m=1}^{M} f_m(\vec{x}_i)\, f_m(\vec{x}_j). \qquad (6)$$

In order to ensure that the final kernel in (4) is Mercer, we apply $\alpha_p \ge 0$ as a constraint
in optimization. Solving (5) subject to non-negativity constraints results in a fast quadratic
program of size $P$. Such a quadratic program can be solved quickly and robustly by
standard optimization packages.
The cost function in equation (5) is the square of the Frobenius norm of the difference between the empirical matrix $\hat{C}$ and the fit kernel $K$. The use of the Frobenius norm
is similar to the Ordinary Least Squares technique of fitting
variogram parameters in geostatistics [7]. However, instead of summing variogram estimates within spatial bins, we
form covariance estimates over all meta-training data pairs $(i, j)$.
Analogous to [8], we can prove that the Frobenius norm is consistent: as the amount of
training data goes to infinity, the empirical Frobenius norm, above, approaches the Frobenius norm of the difference between the true kernel and our fit kernel. (The proof is omitted
to save space.) Finally, unlike the cost function presented in [8], the cost function in equation (5) produces an easy-to-solve quadratic program.
3.1 KMT for Music Playlist Generation
In this section, we consider the application of the general KMT technique to music playlist
generation.
We decided to use albums to generate a prior for playlist generation, since albums can
be considered to be professionally designed playlists. For the meta-training functions $f_m$,
we use album indicator functions that are 1 for songs on an album $m$, and 0 otherwise.
Thus, KMT learns a similarity metric that professionals use when they assemble albums.
This same similarity metric empirically makes consonant playlists. Using a small $P$ in
equation (4) forces a smoother, more general similarity metric. If we had simply used the
meta-training kernel matrix $\hat{C}$ without fitting the $\alpha_p$, the playlist generator would exactly
reproduce one or more albums in the meta-training database. This is the meta-training
equivalent of overfitting.
Because the album indicator functions are uniquely defined for songs, not for metadata
vectors, we cannot simply generate a kernel matrix according to (6). Instead, we generate
a meta-training kernel matrix using meta-training functions that depend on songs:

$$\hat{C}_{ij} = \frac{1}{M} \sum_{m=1}^{M} a_{mi}\, a_{mj}, \qquad (7)$$

where $a_{mi}$ is 1 if song $i$ belongs to album $m$, 0 otherwise. We then fit the $\alpha_p$ according to
(5), where the $K_p$ Mercer kernels depend on music metadata vectors that are defined in
Table 1. The resulting kernel is still defined by (4), with specific $K_p$ that will be defined
in section 3.2, below.
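The sparse construction of (7) is straightforward; a sketch (names are ours), where `A[m, i]` is the album indicator $a_{mi}$:

```python
import scipy.sparse as sp

def album_covariance(album_song_pairs, n_albums, n_songs):
    """Sparse C_hat = (1/M) A^T A from eq. (7)."""
    rows, cols = zip(*album_song_pairs)            # (album m, song i) membership pairs
    A = sp.csr_matrix(([1.0] * len(rows), (rows, cols)), shape=(n_albums, n_songs))
    return (A.T @ A) / n_albums                    # stays sparse: most song pairs share no album
```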
We used 174,577 songs and 14,198 albums to make up the meta-training matrix $\hat{C}$, which
is dimension 174,577 x 174,577. However, note that the $\hat{C}$ meta-training matrix is very
sparse, since most songs only belong to 1 or 2 albums. Therefore, it can be stored as
a sparse matrix. We use a quadratic programming package in Matlab that requires the
constant and linear parts of the gradient of the cost function in (5):

$$-2 \sum_{i,j} \hat{C}_{ij}\, K_p(\vec{x}_i, \vec{x}_j), \qquad (8)$$

$$2 \sum_{i,j} \sum_{q=1}^{P} \alpha_q\, K_q(\vec{x}_i, \vec{x}_j)\, K_p(\vec{x}_i, \vec{x}_j), \qquad (9)$$

where the first (constant) term is only evaluated on those indices $(i, j)$ in the set of nonzero
$\hat{C}_{ij}$. The second (linear) term requires a sum over all $i$ and $j$, which is impractical.
Instead, we estimate the second term by sampling a random subset of $(i, j)$ pairs (100
random $j$ for each $i$).
3.2 Kernels for Categorical Data
The kernel learned in section 3 must operate on categorical music metadata. Up until now,
kernels have been defined to operate on continuous data. We could convert the categorical
data to a vector space by allocating one dimension for every possible value of each categorical variable, using a 1-of-N sparse code. This would lead to a vector space of dimension
1542 (see Table 1) and would produce a large number of kernel parameters. Hence, we
have designed a new kernel that operates directly on categorical data.
We define a family of Mercer kernels:

$$K_p(\vec{x}_1, \vec{x}_2) = \begin{cases} 1 & \text{if, for every field } k,\ b_{pk} = 0 \text{ or } x_{1k} = x_{2k}; \\ 0 & \text{otherwise}, \end{cases} \qquad (10)$$

where $\vec{b}_p$ is defined to be the binary representation of the number $p$. The $\vec{b}_p$ vector serves
as a mask: when $b_{pk}$ is 1, then the $k$th component of the two vectors must match in order
for the output of the kernel to be 1. Due to space limitations, proof of the Mercer property
of this kernel is omitted.
For playlist generation, the $K_p$ operate on music metadata vectors that are defined in
Table 1. These vectors have 7 fields, thus $k$ runs from 1 to 7 and $p$ runs from 1 to 128.
Therefore, there are 128 free parameters in the kernel which are fit according to (5). The
sum of 128 terms in (4) can be expressed as a single look-up table, whose keys are 7-bit
long binary vectors, the $k$th bit corresponding to whether $x_{1k} = x_{2k}$. Thus, the evaluation of
$\bar{y}_*$ from equation (1) on thousands of pieces of music can be done in less than a second on
a modern PC.
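The look-up-table collapse of (4) and (10) can be sketched as follows (a toy version, with our own names; `alpha` is the 128-vector of fitted weights and songs are 7-tuples of field values):

```python
def make_kernel(alpha):
    """Collapse the 128 mask kernels of eq. (10) into one 128-entry table."""
    table = [0.0] * 128
    for key in range(128):                 # key bit k is 1 iff fields k match
        for p in range(128):
            if (p & ~key & 0x7F) == 0:     # K_p = 1 iff its mask is a subset of key
                table[key] += alpha[p]
    def K(x1, x2):
        key = sum((x1[k] == x2[k]) << k for k in range(7))
        return table[key]
    return K
```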
4 Experimental Results
We have tested the combination of GPR and KMT for the generation of playlists. We tested
AutoDJ on 60 playlists manually designed by users in Microsoft Research. We compared
the full GPR + KMT AutoDJ with simply using GPR with a pre-defined kernel, and without
using GPR and with a pre-defined kernel (using (1) with all $c_i$ equal). We also compare to
a playlist consisting of all of the user's songs permuted in a random order. As a baseline, we
decided to use Hamming distance as the pre-defined kernel. That is, the similarity between
two songs is the number of metadata fields that they have in common.
We performed tests using only positive training examples, which emulates users choosing
seed songs. There were 9 experiments, each with a different number of seed songs, from
1 to 9. Let the number of seed songs for an experiment be $n$. Each experiment consisted
of 1000 trials. Each trial chose a playlist at random (out of the playlists that consisted of at
least $n$ songs), then chose $n$ songs at random out of the playlist as a training set. The
test set of each trial consisted of all of the remaining songs in the playlist, plus all other
songs owned by the designer of the playlist. This test set thus emulates the possible songs
available to the playlist generator.
To score the produced playlists, we use a standard collaborative filtering metric, described
in [4]. The score of a playlist for trial $t$ is defined to be

$$R_t = \sum_{j=1}^{N_t} \frac{\delta_{tj}}{2^{(j-1)/(h-1)}}, \qquad (11)$$

where $\delta_{tj}$ is the user preference of the $j$th element of the $t$th playlist (1 if the $j$th element is
on playlist $t$, 0 otherwise), $h$ is a 'half-life' of user interest in the playlist (set here to be
10), and $N_t$ is the number of test songs for playlist $t$. This score is summed over all 1000
trials, and normalized:

$$R = 100\, \frac{\sum_t R_t}{\sum_t R_t^{\max}}, \qquad (12)$$

where $R_t^{\max}$ is the score from (11) if that playlist were perfect (i.e., all of the true playlist
songs were at the head of the list). Thus, an $R$ score of 100 indicates perfect prediction.
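In code, the metric of (11)–(12) is a short loop (our own rendering; `ranked_is_hit[j]` plays the role of $\delta_{tj}$ with $j$ starting at 0):

```python
def half_life_score(ranked_is_hit, half_life=10):
    """Eq. (11): exponentially decaying credit for true songs near the top."""
    return sum(hit / 2 ** (j / (half_life - 1)) for j, hit in enumerate(ranked_is_hit))

def normalized_score(trials):
    """Eq. (12): trials is a list of (ranked_is_hit, n_true_songs) pairs."""
    total = sum(half_life_score(hits) for hits, _ in trials)
    best = sum(half_life_score([1] * n) for _, n in trials)
    return 100 * total / best
```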
Playlist Method                       Number of Seed Songs
                      1     2     3     4     5     6     7     8     9
KMT + GPR          42.9  46.0  44.8  43.8  46.8  45.0  44.2  44.4  44.8
Hamming + GPR      32.7  39.2  39.8  39.6  41.3  40.0  39.5  38.4  39.8
Hamming + No GPR   32.7  39.0  39.6  40.2  42.6  41.4  41.5  41.7  43.2
Random Order        6.3   6.6   6.5   6.2   6.5   6.6   6.2   6.1   6.8

Table 2: $R$ Scores for Different Playlist Methods. Boldface indicates best method with
statistical significance level 0.05.
The results for the 9 different experiments are shown in Table 2. A boldface result shows
the best method based on pairwise Wilcoxon signed rank test with a significance level of
0.05 (and a Bonferroni correction for 6 tests).
There are several notable results in Table 2. First, all of the experimental systems perform
much better than random, so they all capture some notion of playlist generation. This
is probably due to the work that went into designing the metadata schema. Second, and
most importantly, the kernel that came out of KMT is substantially better than the hand-designed kernel, especially when the number of positive examples is 1–3. This matches the
hypothesis that KMT creates a good prior based on previous experience. This good prior
helps when the training set is extremely small in size. Third, the performance of KMT +
GPR saturates very quickly with number of seed songs. This saturation is caused by the
fact that exact playlists are hard to predict: there are many appropriate songs that would be
valid in a test playlist, even if the user did not choose those songs. Thus, the quantitative
results shown in Table 2 are actually quite conservative.
        Playlist 1                              Playlist 2
Seed    Eagles, The Sad Cafe                    Eagles, Life in the Fast Lane
1       Genesis, More Fool Me                   Eagles, Victim of Love
2       Bee Gees, Rest Your Love On Me          Rolling Stones, Ruby Tuesday
3       Chicago, If You Leave Me Now            Led Zeppelin, Communication Breakdown
4       Eagles, After The Thrill Is Gone        Creedence Clearwater, Sweet Hitch-hiker
5       Cat Stevens, Wild World                 Beatles, Revolution

Table 3: Sample Playlists
To qualitatively test the playlist generator, we distributed a prototype version of it to a few
individuals in Microsoft Research. The feedback from use of the prototype has been very
positive. Qualitative results of the playlist generator are shown in Table 3. In that table,
two different Eagles songs are selected as single seed songs, and the top 5 playlist songs
are shown. The seed song is always first in the playlist and is not repeated. The seed song
on the left is softer and leads to a softer playlist, while the seed song on the right is harder
rock and leads to a more hard rock playlist.
5 Conclusions
We have presented an algorithm, Kernel Meta-Training, which derives a kernel from a
set of meta-training functions that are related to the function that is being learned. KMT
permits the learning of functions from very few training points. We have applied KMT to
create AutoDJ, which is a system for automatically generating music playlists. However,
the KMT idea may be applicable to other tasks.
Experiments with music playlist generation show that KMT leads to better results than a
hand-built kernel when the number of training examples is small. The generated playlists
are qualitatively very consonant and useful to play as background music.
References
[1] D. Barber and C. K. I. Williams. Gaussian processes for Bayesian classification via hybrid Monte Carlo. In M. C. Mozer, M. I. Jordan, and T. Petsche, editors, NIPS, volume 9, pages 340–346, 1997.
[2] J. Baxter. A Bayesian/information theoretic model of bias learning. Machine Learning, 28:7–40, 1997.
[3] K. P. Bennett and A. Demiriz. Semi-supervised support vector machines. In M. S. Kearns, S. A. Solla, and D. A. Cohn, editors, NIPS, volume 11, pages 368–374, 1998.
[4] J. S. Breese, D. Heckerman, and C. Kadie. Empirical analysis of predictive algorithms for collaborative filtering. In Uncertainty in Artificial Intelligence, pages 43–52, 1998.
[5] R. Caruana. Learning many related tasks at the same time with backpropagation. In NIPS, volume 7, pages 657–664, 1995.
[6] V. Castelli and T. M. Cover. The relative value of labeled and unlabeled samples in pattern recognition with an unknown mixing parameter. IEEE Trans. Info. Theory, 42(6):75–85, 1996.
[7] N. A. C. Cressie. Statistics for Spatial Data. Wiley, New York, 1993.
[8] N. Cristianini, A. Elisseeff, and J. Shawe-Taylor. On optimizing kernel alignment. Technical Report NC-TR-01-087, NeuroCOLT, 2001.
[9] D. Goldberg, D. Nichols, B. M. Oki, and D. Terry. Using collaborative filtering to weave an information tapestry. CACM, 35(12):61–70, 1992.
[10] T. Minka and R. Picard. Learning how to learn is learning with point sets. http://wwwwhite.media.mit.edu/~tpminka/papers/learning.html, 1997.
[11] M. Pazzani and D. Billsus. Learning and revising user profiles: The identification of interesting web sites. Machine Learning, 27:313–331, 1997.
[12] P. S. R. S. Rao. Variance Components Estimation: Mixed models, methodologies and applications. Chapman & Hall, 1997.
[13] S. Thrun. Is learning the n-th thing any easier than learning the first? In NIPS, volume 8, pages 640–646, 1996.
[14] C. K. I. Williams and C. E. Rasmussen. Gaussian processes for regression. In NIPS, volume 8, pages 514–520, 1996.
smoother:1 full:2 multiple:1 infer:1 technical:3 match:4 cross:1 long:1 retrieval:2 prediction:1 regression:5 metric:5 kernel:68 background:1 want:3 grow:1 operate:3 unlike:1 rest:1 probably:1 subject:1 tapestry:1 thing:1 jordan:1 call:1 enough:1 easy:1 baxter:1 fit:12 inner:1 idea:3 prototype:2 whether:2 song:73 york:1 matlab:1 useful:2 gpc:2 fool:1 amount:1 svms:1 generate:8 http:1 supplied:1 designer:1 track:1 hyperparameter:2 express:2 key:1 drawn:5 sum:2 convert:1 run:2 package:2 parameterized:3 you:1 uncertainty:1 family:1 reasonable:1 sad:2 bit:2 layer:1 angry:1 played:1 quadratic:6 assemble:1 unlabled:1 eagle:5 constraint:2 infinity:1 your:1 software:1 lane:1 generates:5 extremely:1 department:1 according:3 combination:2 across:1 describes:1 heckerman:1 modification:1 gradually:1 billsus:1 equation:4 turn:1 eventually:1 end:1 serf:1 available:3 permit:1 apply:2 away:1 appropriate:2 petsche:1 robustly:1 save:1 professional:1 top:2 remaining:1 cf:3 ensure:1 music:29 especially:1 blend:2 gradient:1 distance:2 neurocolt:1 thrun:1 me:3 barber:1 boldface:2 code:3 cburges:1 index:1 nc:1 difficult:2 info:1 negative:1 indicies:1 weare:1 unknown:2 perform:3 situation:1 saturates:1 communication:1 head:1 genesis:1 dc:1 arbitrary:1 drift:1 inferred:1 inverting:2 unpublished:1 pair:2 duet:1 california:1 learned:12 textual:1 geostatistics:1 trans:1 address:1 beyond:3 redmond:1 suggested:1 proceeds:1 below:2 nip:5 pattern:1 program:5 saturation:1 built:1 terry:1 suitable:1 ranked:3 force:1 hybrid:1 predicting:1 hardwired:1 indicator:2 ready:1 categorical:6 negativity:1 metadata:19 hears:1 text:1 prior:8 review:1 literature:1 bee:1 determining:1 relative:1 fully:1 mixed:1 generation:10 limitation:1 filtering:4 interesting:1 generator:4 digital:2 validation:1 metal:1 consistent:1 mercer:7 editor:3 playing:1 cd:1 heavy:1 placed:1 free:2 rasmussen:1 gee:1 bias:1 burges:1 perceptron:1 sparse:3 distributed:1 feedback:1 default:1 dimension:5 world:2 valid:1 computes:1 author:1 collection:3 qualitatively:2 approximate:1 overfitting:1 corpus:1 owns:1 assumed:1 recommending:1 summing:1 consonant:2 alternatively:1 don:1 continuous:1 iterative:2 table:13 learn:10 pazzani:1 interact:1 tuesday:1 constructing:1 did:1 significance:2 noise:2 profile:3 repeated:1 site:3 west:1 wiley:1 wish:2 gpr:21 third:1 learns:2 removing:1 formula:1 specific:2 revolution:1 list:6 explored:1 derives:1 incorporating:1 adding:1 album:13 easier:1 led:1 simply:6 forming:1 lazy:1 expressed:4 ordered:1 subgenre:2 owned:2 extracted:1 slot:1 towards:1 bennett:1 hard:2 typical:1 determined:1 operates:2 disco:1 kearns:1 conservative:1 breese:1 experimental:2 vote:1 east:1 select:1 cholesky:1 support:1 evaluate:1 tested:2 |
1,091 | 1,997 | Probabilistic Inference of Hand Motion from Neural
Activity in Motor Cortex
Y. Gao
M. J. Black
E. Bienenstock
S. Shoham
J. P. Donoghue
Division of Applied Mathematics, Brown University, Providence, RI 02912
Dept.
of Computer Science, Brown University, Box 1910, Providence, RI 02912
Princeton University, Dept. of Molecular Biology Princeton, NJ, 08544
Dept. of Neuroscience, Brown University, Providence, RI 02912
gao@cfm.brown.edu, black@cs.brown.edu, elie@dam.brown.edu,
sshoham@princeton.com, john donoghue@brown.edu
Abstract
Statistical learning and probabilistic inference techniques are used to infer the hand position of a subject from multi-electrode recordings of neural activity in motor cortex. First, an array of electrodes provides training data of neural firing conditioned on hand kinematics. We learn a nonparametric representation of this firing activity using a Bayesian model
and rigorously compare it with previous models using cross-validation.
Second, we infer a posterior probability distribution over hand motion
conditioned on a sequence of neural test data using Bayesian inference.
The learned firing models of multiple cells are used to define a nonGaussian likelihood term which is combined with a prior probability for
the kinematics. A particle filtering method is used to represent, update,
and propagate the posterior distribution over time. The approach is compared with traditional linear filtering methods; the results suggest that it
may be appropriate for neural prosthetic applications.
1 Introduction
This paper explores the use of statistical learning methods and probabilistic inference techniques for modeling the relationship between the motion of a monkey?s arm and neural
activity in motor cortex. Our goals are threefold: (i) to investigate the nature of encoding
in motor cortex, (ii) to characterize the probabilistic relationship between arm kinematics
(hand position or velocity) and activity of a simultaneously recorded neural population, and
(iii) to optimally reconstruct (decode) hand trajectory from population activity to smoothly
control a prosthetic robot arm (cf [14]).
A multi-electrode array (Figure 1) is used to simultaneously record the activity of 24 neurons in the arm area of primary motor cortex (MI) in awake, behaving, macaque monkeys.
This activity is recorded while the monkeys manually track a smoothly and randomly mov-
[Figure 1 here; panel C labels include Connector, Acrylic, Bone, Silicone, and White Matter.]
Figure 1: Multi-electrode array. A. 10x10 matrix of electrodes. Separation 400 µm (size
4x4 mm). B. Location of array in the MI arm area. C. Illustration of implanted array
(courtesy N. Hatsopoulos).
ing visual target on a computer monitor [12]. Statistical learning methods are used to derive
Bayesian estimates of the conditional probability of firing for each cell given the kinematic variables (we consider only hand velocity here). Specifically, we use non-parametric
models of the conditional firing, learned using regularization (smoothing) techniques with
cross-validation. Our results suggest that the cells encode information about the position
and velocity of the hand in space. Moreover, the non-parametric models provide a better
explanation of the data than previous parametric models [6, 10] and provide new insight
into neural coding in MI.
Decoding involves the inference of the hand motion from the firing rate of the cells. In particular, we represent the posterior probability of the entire hand trajectory conditioned on
the observed sequence of neural activity (spike trains). The nature of this activity results in
ambiguities and a non-Gaussian posterior probability distribution. Consequently, we represent the posterior non-parametrically using a discrete set of samples [8]. This distribution
is predicted and updated in non-overlapping 50 ms time intervals using a Bayesian estimation method called particle filtering [8]. Experiments with real and synthetic data suggest
that this approach provides probabilistically sound estimates of kinematics and allows the
probabilistic combination of information from multiple neurons, the use of priors, and the
rigorous evaluation of models and results.
2 Methods: Neural Recording
The design of the experiment and data collection is described in detail in [12]. Summarizing, a ten-by-ten array of electrodes is implanted in the primary motor cortex (MI) of
a Macaque monkey (Figure 1) [7, 9, 12]. Neural activity in motor cortex has been shown
to be related to the movement kinematics of the animal?s arm and, in particular, to the
direction of hand motion [3, 6]. Previous behavioral tasks have involved reaching in one
of a fixed number of directions [3, 6, 14]. To model the relationship between continuous,
smooth, hand motion and neural activity, we use a more complex scenario where the monkey performs a continuous tracking task in which the hand is moved on a 2D tablet while
holding a low-friction manipulandum that controls the motion of a feedback dot viewed on
a computer monitor (Figure 2a) [12]. The monkey receives a reward upon completion of
a successful trial in which the manipulandum is moved to keep the feedback dot within a
pre-specified distance of the target. The path of the target is chosen to be a smooth random
walk that effectively samples the space of hand positions and velocities: measured hand
positions and velocities have a roughly Gaussian distribution (Figure 2b and c) [12]. Neural activity is amplified, waveforms are thresholded, and spike sorting is performed off-line
to isolate the activity of individual cells [9]. Recordings from 24 motor cortical cells are
measured simultaneously with hand kinematics.
[Figure 2 here; panel (a) sketches the Monitor, Target, Tablet, Trajectory, and Manipulandum; panels (b) and (c) are 2D histograms.]
Figure 2: Smooth tracking task. (a) The target moves with a smooth random walk. Distribution of the position (b) and velocity (c) of the hand. Color coding indicates the frequency
with which different parts of the space are visited. (b) Position: horizontal and vertical
axes represent the $x$ and $y$ position of the hand. (c) Velocity: the horizontal axis represents
direction, $\theta$, and the vertical axis represents speed, $s$.
[Figure 3 here; panels show cell 3, cell 16, and cell 19.]
Figure 3: Observed mean conditional firing rates in 50 ms intervals for three cells given
hand velocity. The horizontal axis represents the direction of movement, $\theta$, in radians
('wrapping' around from $-\pi$ to $\pi$). The vertical axis represents speed, $s$, and ranges from
0 cm/s to 12 cm/s. Color ranges from dark blue (no measurement) to red (approximately 3
spikes).
3 Modeling Neural Activity
Figure 3 shows the measured mean firing rate within 50 ms time intervals for three cells
conditioned on the subject?s hand velocity. We view the neural firing activity in Figure 3
as a stochastic and sparse realization of some underlying model that relates neural firing
to hand motion. Similar plots are obtained as a function of hand position. Each plot can
be thought of as a type of 'tuning function' [12] that characterizes the response of the cell
conditioned on hand velocity $\vec{v} = (\theta, s)$. In previous work, authors have considered a variety of
models of this data including a cosine tuning function [6] and a modified cosine function
[10]. Here we explore a non-parametric model of the underlying activity and, adopting a
Bayesian formulation, seek a maximum a posteriori (MAP) estimate of a cell's conditional
firing.
Adopting a Markov Random Field (MRF) assumption [4], let the velocity space, $\vec{v} = (\theta, s)$,
be discretized on a regular grid. Let g be the array of true (unobserved) conditional neural firing and f be the corresponding observed mean firing. We seek the posterior probability

$$p(\mathbf{g}\,|\,\mathbf{f}) = \frac{1}{Z} \prod_{\vec{v}} \Big[\, p(f_{\vec{v}}\,|\,g_{\vec{v}}) \prod_{i} p(g_{\vec{v}}\,|\,g_{\vec{v}_i}) \Big] \qquad (1)$$
[Figure 4 here; two panels, (a) and (b).]
Figure 4: Prior probability of firing variation ($\nabla g$). (a) Probability of firing variation computed from training data (blue). Proposed robust prior model (red) plotted for a fixed scale parameter.
(b) Logarithm of the distributions shown to provide detail.
where $Z$ is a normalizing constant independent of g, $f_{\vec{v}}$ and $g_{\vec{v}}$ are the observed and true
mean firing at velocity $\vec{v}$ respectively, $g_{\vec{v}_i}$ represents the firing rate for the $i$th neighboring
velocity of $\vec{v}$, and the neighbors are taken to be the four nearest velocities ($i = 1, \ldots, 4$).
The first term on the right hand side represents the likelihood of observing a particular firing
rate $f_{\vec{v}}$ given that the true rate is $g_{\vec{v}}$. Here we compare two generative models of the neural
spiking process within 50 ms; a Poisson model, $p_P$, and a Gaussian model, $p_G$:

$$p_P(f_{\vec{v}}\,|\,g_{\vec{v}}) = \frac{g_{\vec{v}}^{\,f_{\vec{v}}}\, e^{-g_{\vec{v}}}}{f_{\vec{v}}!}, \qquad p_G(f_{\vec{v}}\,|\,g_{\vec{v}}) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left(-\frac{(f_{\vec{v}} - g_{\vec{v}})^2}{2\sigma^2}\right).$$
The second term is a spatial prior probability that encodes our expectations about $\nabla g$,
the variation of neural activity in velocity space. The MRF prior states that the firing,
$g_{\vec{v}}$, at velocity $\vec{v}$ depends only on the firing at neighboring velocities. We consider two
possible prior models for the distribution of $\nabla g$: Gaussian and 'robust'. A Gaussian prior
corresponds to an assumption that the firing rate varies smoothly. A robust prior assumes
a heavy-tailed distribution of the spatial variation (see Figure 4), $\nabla g$, (derivatives of the
firing rate in the $\theta$ and $s$ directions) and implies piecewise smooth data. The two spatial
priors are (up to normalization)

$$p_G(\nabla g) \propto \exp\!\left(-\frac{(\nabla g)^2}{2\lambda^2}\right), \qquad p_R(\nabla g) \propto \exp\!\left(-\frac{(\nabla g)^2}{\lambda^2 + (\nabla g)^2}\right),$$

where $\lambda$ is a scale parameter.
The various models (cosine, a modified cosine (Moran and Schwartz [10]), Gaussian+Gaussian, and Poisson+Robust*) are fit to the training data as shown in Figure 5.
In the case of the Gaussian+Gaussian and Poisson+Robust models, the optimal value of
the scale parameter $\lambda$ is computed for each cell using cross validation. During cross-validation,
each time 10 trials out of 180 are left out for testing and the models are fit with the remaining training data. We then compute the log likelihood of the test data given the model. This
provides a measure of how well the model captures the statistical variation in the training
set and is used for quantitative comparison. The whole procedure is repeated 18 times for
different test data sets.
The solution to the Gaussian+Gaussian model can be computed in closed form but for
the Poisson+Robust model no closed form solution for g exists and an optimal Bayesian
estimate could be achieved with simulated annealing [4]. Instead, we derive an approximate
* By 'Gaussian+Gaussian' we mean both the likelihood and prior terms are Gaussian, whereas
'Poisson+Robust' implies a Poisson likelihood and a robust spatial prior.
[Figure 5 here; rows show the Cosine, Gaussian+Gaussian, Moran & Schwartz (M&S), and Poisson+Robust fits for cells 3, 16, and 19.]
Figure 5: Estimated firing rate for cells in Figure 3 using different models.
Method              Log Likelihood Ratio    p-value
G+G over Cosine          24.9181            7.6294e-06
G+G over M&S             15.8333            0.0047
P+R over Cosine          50.0685            7.6294e-06
P+R over M&S             32.2218            7.6294e-06

Table 1: Numerical comparison; log likelihood ratio of pairs of models and the significance
level given by Wilcoxon signed rank test (Splus, MathSoft Inc., WA).
solution for g in (1) by minimizing the negative logarithm of the distribution using standard
regularization techniques [1, 13] with missing data, the learned prior model, and a Poisson
likelihood term [11]. Simple gradient descent [1] with deterministic annealing provides a
reasonable solution. Note that the negative logarithm of the prior term can be approximated
by the robust statistical error function

$$\rho(x, \lambda) = \frac{x^2}{\lambda^2 + x^2},$$

which has been used
extensively in machine vision and image processing for smoothing data with discontinuities
[1, 5].
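To make the optimization concrete, here is a toy gradient-descent sketch (ours; the step size and scale parameter are placeholders, and the deterministic annealing schedule is omitted):

```python
import numpy as np

def robust_grad(d, lam):
    """Derivative of rho(d) = d^2 / (lam^2 + d^2)."""
    return 2 * d * lam**2 / (lam**2 + d**2) ** 2

def map_firing_estimate(f_obs, mask, lam=1.0, steps=500, lr=1e-2):
    """MAP estimate implied by eq. (1): Poisson data term plus robust spatial prior.

    f_obs: observed mean firing on the velocity grid; mask: True where data exist.
    """
    g = np.clip(np.where(mask, f_obs, f_obs[mask].mean()), 1e-3, None)
    for _ in range(steps):
        grad = np.where(mask, 1.0 - f_obs / g, 0.0)   # d/dg of (g - f log g)
        for ax in (0, 1):                             # theta and s neighbors
            d = np.diff(g, axis=ax)                   # d = g[k+1] - g[k]
            r = robust_grad(d, lam)
            lo = [slice(None)] * 2
            hi = [slice(None)] * 2
            lo[ax], hi[ax] = slice(0, -1), slice(1, None)
            grad[tuple(lo)] -= r                      # each term's gradient w.r.t. g[k]
            grad[tuple(hi)] += r                      # ... and w.r.t. g[k+1]
        g = np.clip(g - lr * grad, 1e-3, None)        # keep rates positive
    return g
```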
Figure 5 shows the various estimates of the receptive fields. Observe that the pattern of
firing is not Gaussian. Moreover, some cells appear to be tuned to motion direction, , and
not to speed, , resulting in vertically elongated patterns of firing. Other cells (e.g. cell 19)
appear to be tuned to particular directions and speeds; this type of activity is not well fit by
the parametric models.
Table 1 shows a quantitative comparison using cross-validation. The log likelihood ratio
(LLR) is used to compare each pair of models: LLR(model 1, model 2) = log(Pr(observed
firing | model 1) / Pr(observed firing | model 2)). The positive values in Table 1 indicate
that the non-parametric models do a better job of explaining new data than the parametric
models with the Poisson+Robust fit providing the best description of the data. This P+R
model implies that the conditional firing rate is well described by regions of smooth activity
with relatively sharp discontinuities between them. The non-parametric models reduce the
strong bias of the parametric models with a slight increase in variance and hence achieve a
lower total error.
4 Temporal Inference
Given neural measurements our goal is to infer the motion of the hand over time. Related
approaches have exploited simple linear filtering methods which do not provide a probabilistic interpretation of the data that can facilitate analysis and support the principled
combination of multiple sources of information. Related probabilistic approaches have
exploited Kalman filtering [2]. We note here however, that the learned models of neural
activity are non-Gaussian and the dynamics of the hand motion may be non-linear. Furthermore, with a small number of cells, our interpretation of the neural data may be ambiguous
and the posterior probability of the kinematic variables, given the neural activity, may be
best modeled by a non-Gaussian, multi-modal, distribution. To cope with these issues in
a sound probabilistic framework we exploit a non-parametric approach that uses factored
sampling to discretely approximate the posterior distribution, and particle filtering to propagate and update this distribution over time [8].
D
be the mean firing rate of cell
Let the state of the system be s7 ! ?# at time . Let
at time where the mean firing
rate
is
estimated
within
non-overlapping
50 ms temporal
G
windows. Also, let c
# represent the firing rate of all
cells at time .
D
Similarly let
represent the sequence of these firing rates for cell up to time and let
G
C
# represent the firing of all
cells up to time .
We assume that the temporal dynamics of the states, $\vec{s}_t$, form a Markov chain for which the
state at time $t$ depends only on the state at the previous time instant:

$$p(\vec{s}_t\,|\,S_{t-1}) = p(\vec{s}_t\,|\,\vec{s}_{t-1}),$$

where $S_{t-1} = (\vec{s}_1, \ldots, \vec{s}_{t-1})$ denotes the state history. We also assume that given $\vec{s}_t$, the current
observation $\vec{c}_t$ and the previous observations $\vec{C}_{t-1}$ are independent.
Using Bayes rule and the above assumptions, the probability of observing the state at time $t$
given the history of firing can be written as

$$p(\vec{s}_t\,|\,\vec{C}_t) = \kappa\, p(\vec{c}_t\,|\,\vec{s}_t)\, p(\vec{s}_t\,|\,\vec{C}_{t-1}), \qquad (2)$$

where $\kappa$ is a normalizing term that insures that the distribution integrates to one. The likelihood term $p(\vec{c}_t\,|\,\vec{s}_t) = \prod_{i=1}^{N} p(c_{t,i}\,|\,\vec{s}_t)$ assumes conditional independence of the individual
cells, where the likelihood for the firing rate of an individual cell is taken to be a Poisson
distribution with the mean firing rate for the speed and velocity given by $\vec{s}_t$ determined by
the conditional firing models learned in the previous section. Plotting this likelihood term
for a range of states reveals that its structure is highly non-Gaussian with multiple peaks.
The temporal prior term, $p(\vec{s}_t\,|\,\vec{C}_{t-1})$, can be written as

$$p(\vec{s}_t\,|\,\vec{C}_{t-1}) = \int p(\vec{s}_t\,|\,\vec{s}_{t-1})\, p(\vec{s}_{t-1}\,|\,\vec{C}_{t-1})\, d\vec{s}_{t-1}, \qquad (3)$$

where $p(\vec{s}_t\,|\,\vec{s}_{t-1})$ embodies the temporal dynamics of the hand velocity, which are assumed
to be constant with Gaussian noise; that is, a diffusion process. Note, $p(\vec{s}_{t-1}\,|\,\vec{C}_{t-1})$ is the
posterior distribution over the state space at time $t-1$.
The posterior, $p(\vec{s}_t\,|\,\vec{C}_t)$, is represented with a discrete, weighted set of random samples which are propagated in time using a standard particle filter (see [8] for details). Unlike
previous applications of particle filtering, the likelihood of firing for an individual cell in
[Figure 6 here; panels (a) and (b) plot Vx and Vy in cm/s against time in seconds for trial No. 8 (blue: true, red: reconstruction).]
Figure 6: Tracking results using 1008 synthetic cells showing horizontal velocity, $V_x$, (top)
and vertical velocity, $V_y$, (bottom). Blue indicates true velocity of hand. (a) Bayesian
estimate using particle filtering. Red curve shows expected value of the posterior. (b)
Linear filtering method shown in red.
50 ms provides very little information. For the posterior to be meaningful we must combine evidence from multiple cells. Our experiments indicate that the responses from our
24 cells are insufficient for this task. To demonstrate the feasibility of the particle filtering
method, we synthesized approximately 1000 cells by taking the learned models of the 24
cells and translating them along the $\theta$ axis to generate a more complete covering of the
velocity space. Note that the assumption of such a set of cells in MI is quite reasonable
given the sampling of cells we have observed in multiple monkeys.
From the set of synthetic cells we then generate a synthetic spike train by taking a known
sequence of hand velocities and stochastically generating spikes using the learned conditional firing models with a Poisson generative model. Particle filtering is used to estimate
the posterior distribution over hand velocities given the synthetic neural data. The expected
value of the horizontal and vertical velocity is displayed in Figure 6a. For comparison, a
standard linear filtering method [6, 14] was trained on the synthetic data from 50 ms intervals. The resulting prediction is shown in Figure 6b. Linear filtering works well over
longer time windows which introduce lag. The Bayesian analysis provides a probabilistic
framework for sound causal estimates over short time intervals.
We are currently experimenting with modified particle filtering schemes in which linear
filtering methods provide a proposal distribution and importance sampling is used to construct a valid posterior distribution. We are also comparing these results with those of
various Kalman filters.
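For readers who want the mechanics of equations (2)–(3) in code, here is a minimal particle-filter step (our own sketch; `tuning` stands for the learned conditional firing models and all constants are placeholders):

```python
import numpy as np

def particle_filter_step(particles, weights, spikes, tuning, noise_std=0.5):
    """One 50 ms update of eqs. (2)-(3) with diffusion dynamics.

    particles: (M, 2) array of (theta, speed) hypotheses; weights sum to 1.
    spikes[i]: observed count for cell i; tuning[i](state) -> expected count.
    """
    M = len(particles)
    idx = np.random.choice(M, size=M, p=weights)                    # resample
    particles = particles[idx] + noise_std * np.random.randn(M, 2)  # eq. (3)
    log_w = np.zeros(M)
    for i, k in enumerate(spikes):                                  # eq. (2) likelihood
        g = np.maximum([tuning[i](s) for s in particles], 1e-6)
        log_w += k * np.log(g) - g                                  # Poisson, up to log k!
    w = np.exp(log_w - log_w.max())
    return particles, w / w.sum()
```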
5 Conclusions
We have described a Bayesian model for neural activity in MI that relates this activity to
actions in the world. Quantitative comparison with previous models of MI activity indicates
that the non-parametric models computed using regularization more accurately describe
the neural activity. In particular, the robust spatial prior term suggests that neural firing in
MI is not a smooth function of velocity but rather exhibits discontinuities between regions
of high and low activity.
We have also described the Bayesian decoding of hand motion from firing activity using a
particle filter. Initial results suggest that measurements from several hundred cells may be
required for accurate estimates of hand velocity. The application of particle filtering to this
problem has many advantages as it allows complex, non-Gaussian, likelihood models that
may incorporate non-linear temporal properties of neural firing (e.g. refractory period).
Unlike previous linear filtering methods this Bayesian approach provides probabilistically
sound, causal, estimates in short time windows of 50ms. Current work is exploring correlations between cells [7] and the relationship between the neural activity and other kinematic
variables [12].
Acknowledgments. This work was supported by the Keck Foundation and by the National
Institutes of Health under grants #R01 NS25074 and #N01-NS-9-2322 and by the National
Science Foundation ITR Program award #0113679. We are very grateful to M. Serruya,
M. Fellows, L. Paninski, and N. Hatsopoulos who provided the neural data and valuable
insight.
References
[1] M. Black and A. Rangarajan. On the unification of line processes, outlier rejection, and robust statistics with applications in early vision. IJCV, 19(1):57–92, 1996.
[2] E. Brown, L. Frank, D. Tang, M. Quirk, and M. Wilson. A statistical paradigm for neural spike train decoding applied to position prediction from ensemble firing patterns of rat hippocampal place cells. J. Neuroscience, 18(18):7411–7425, 1998.
[3] Q-G. Fu, D. Flament, J. Coltz, and T. Ebner. Temporal encoding of movement kinematics in the discharge of primate primary motor and premotor neurons. J. of Neurophysiology, 73(2):836–854, 1995.
[4] S. Geman and D. Geman. Stochastic relaxation, Gibbs distributions and Bayesian restoration of images. PAMI, 6(6):721–741, November 1984.
[5] S. Geman and D. McClure. Statistical methods for tomographic image reconstruction. Bulletin of the Int. Stat. Inst., LII-4:5–21, 1987.
[6] A. Georgopoulos, A. Schwartz, and R. Kettner. Neuronal population coding of movement direction. Science, 233:1416–1419, 1986.
[7] N. Hatsopoulos, C. Ojakangas, L. Paninski, and J. Donoghue. Information about movement direction obtained from synchronous activity of motor cortical neurons. Proc. Nat. Academy of Sciences, 95:15706–15711, 1998.
[8] M. Isard and A. Blake. Condensation – conditional density propagation for visual tracking. IJCV, 29(1):5–28, 1998.
[9] E. Maynard, N. Hatsopoulos, C. Ojakangas, B. Acuna, J. Sanes, R. Normann, and J. Donoghue. Neuronal interaction improve cortical population coding of movement direction. J. of Neuroscience, 19(18):8083–8093, 1999.
[10] D. Moran and A. Schwartz. Motor cortical representation of speed and direction during reaching. J. Neurophysiol, 82:2676–2692, 1999.
[11] R. Nowak and E. Kolaczyk. A statistical multiscale framework for Poisson inverse problems. IEEE Inf. Theory, 46(5):1811–1825, 2000.
[12] L. Paninski, M. Fellows, N. Hatsopoulos, and J. Donoghue. Temporal tuning properties for hand position and velocity in motor cortical neurons. Submitted, J. Neurophysiology, 2001.
[13] D. Terzopoulos. Regularization of inverse visual problems involving discontinuities. PAMI, 8(4):413–424, 1986.
[14] J. Wessberg, C. Stambaugh, J. Kralik, P. Beck, M. Laubach, J. Chapin, J. Kim, S. Biggs, M. Srinivasan, and M. Nicolelis. Real-time prediction of hand trajectory by ensembles of cortical neurons in primates. Nature, 408:361–365, 2000.
1,092 | 1,998 | KLD-Sampling: Adaptive Particle Filters
Dieter Fox
Department of Computer Science & Engineering
University of Washington
Seattle, WA 98195
Email: fox@cs.washington.edu
Abstract
Over the last years, particle filters have been applied with great success to
a variety of state estimation problems. We present a statistical approach to
increasing the efficiency of particle filters by adapting the size of sample
sets on-the-fly. The key idea of the KLD-sampling method is to bound the
approximation error introduced by the sample-based representation of the
particle filter. The name KLD-sampling is due to the fact that we measure
the approximation error by the Kullback-Leibler distance. Our adaptation
approach chooses a small number of samples if the density is focused on
a small part of the state space, and it chooses a large number of samples
if the state uncertainty is high. Both the implementation and computation
overhead of this approach are small. Extensive experiments using mobile
robot localization as a test application show that our approach yields drastic
improvements over particle filters with fixed sample set sizes and over a
previously introduced adaptation technique.
1 Introduction
Estimating the state of a dynamic system based on noisy sensor measurements is extremely
important in areas as different as speech recognition, target tracking, mobile robot navigation,
and computer vision. Over the last years, particle filters have been applied with great success
to a variety of state estimation problems (see [3] for a recent overview). Particle filters
estimate the posterior probability density over the state space of a dynamic system [4, 11].
The key idea of this technique is to represent probability densities by sets of samples. It is
due to this representation, that particle filters combine efficiency with the ability to represent
a wide range of probability densities. The efficiency of particle filters lies in the way they
place computational resources. By sampling in proportion to likelihood, particle filters focus
the computational resources on regions with high likelihood, where things really matter.
So far, however, an important source for increasing the efficiency of particle filters has only
rarely been studied: Adapting the number of samples over time. While variable sample
sizes have been discussed in the context of genetic algorithms [10] and interacting particle
filters [2], most existing approaches to particle filters use a fixed number of samples during
the whole state estimation process. This can be highly inefficient, since the complexity of the
probability densities can vary drastically over time. An adaptive approach for particle filters
has been applied by [8] and [5]. This approach adjusts the number of samples based on the
likelihood of observations, which has some important shortcomings, as we will show.
In this paper we introduce a novel approach to adapting the number of samples over time.
Our technique determines the number of samples based on statistical bounds on the sample-based approximation quality. Extensive experiments using a mobile robot indicate that our
approach yields significant improvements over particle filters with fixed sample set sizes and
over a previously introduced adaptation technique. The remainder of this paper is organized
as follows: In the next section we will outline the basics of particle filters and their application to mobile robot localization. In Section 3, we will introduce our novel technique to
adaptive particle filters. Experimental results are presented in Section 4 before we conclude
in Section 5.
2 Particle filters for Bayesian filtering and robot localization
Particle filters address the problem of estimating the state of a dynamical system from
sensor measurements. The goal of particle filters is to estimate a posterior probability density
over the state space conditioned on the data collected so far. The data typically consists of
an alternating sequence of time indexed observations y_t and control measurements u_t, which describe the dynamics of the system. Let the belief Bel(x_t) denote the posterior at time t. Under the Markov assumption, the posterior can be computed efficiently by recursively updating the belief whenever new information is received. Particle filters represent this belief by a set S_t of weighted samples distributed according to Bel(x_t):

S_t = {⟨x_t^{(i)}, w_t^{(i)}⟩ | i = 1, ..., n}

Here each x_t^{(i)} is a sample (or state), and the w_t^{(i)} are non-negative numerical factors called importance weights, which sum up to one. The basic form of the particle filter updates the belief according to the following sampling procedure, often referred to as sequential importance sampling with re-sampling (SISR, see also [4, 3]):

Re-sampling: Draw with replacement a random sample x_{t-1}^{(i)} from the sample set S_{t-1} according to the (discrete) distribution defined through the importance weights w_{t-1}^{(i)}. This sample can be seen as an instance of the belief Bel(x_{t-1}).

Sampling: Use x_{t-1}^{(i)} and the control information u_{t-1} to sample x_t^{(i)} from the distribution p(x_t | x_{t-1}, u_{t-1}), which describes the dynamics of the system. x_t^{(i)} now represents the density given by the product p(x_t | x_{t-1}, u_{t-1}) Bel(x_{t-1}). This density is the proposal distribution used in the next step.

Importance sampling: Weight the sample x_t^{(i)} by the importance weight p(y_t | x_t^{(i)}), the likelihood of the sample x_t^{(i)} given the measurement y_t.

Each iteration of these three steps generates a sample drawn from the posterior belief Bel(x_t). After n iterations, the importance weights of the samples are normalized so that they sum up to 1. It can be shown that this procedure in fact approximates the posterior density, using a sample-based representation [4, 2, 3].
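To make the update concrete, the following is a minimal Python/NumPy sketch of one SISR step; the sample_motion and measurement_likelihood callables are application-specific placeholders standing in for p(x_t | x_{t-1}, u_{t-1}) and p(y_t | x_t), not code from the original work.

import numpy as np

def sisr_update(samples, weights, u, y, sample_motion, measurement_likelihood):
    """One SISR step: re-sampling, sampling, and importance weighting."""
    n = len(samples)
    # Re-sampling: draw indices with replacement, in proportion to the weights.
    idx = np.random.choice(n, size=n, p=weights)
    # Sampling: propagate each drawn state through the system dynamics.
    new_samples = [sample_motion(samples[i], u) for i in idx]
    # Importance sampling: weight each sample by the measurement likelihood.
    new_weights = np.array([measurement_likelihood(y, x) for x in new_samples])
    new_weights /= new_weights.sum()  # normalize so the weights sum up to one
    return new_samples, new_weights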
Particle filters for mobile robot localization
We use the problem of mobile robot localization to illustrate and test our approach to adaptive
particle filters. Robot localization is the problem of estimating a robot's pose relative to a
map of its environment. This problem has been recognized as one of the most fundamental
problems in mobile robotics [1]. The mobile robot localization problem comes in different
flavors. The simplest localization problem is position tracking. Here the initial robot pose
is known, and localization seeks to correct small, incremental errors in a robot's odometry.
More challenging is the global localization problem, where a robot is not told its initial pose,
but instead has to determine it from scratch.
Fig. 1: a) Pioneer robot used throughout the experiments. b)-d) Map of an office environment along
with a series of sample sets representing the robot's belief during global localization using sonar sensors
(samples are projected into 2D). The size of the environment is 54m × 18m. b) After moving 5m, the
robot is still highly uncertain about its position and the samples are spread through major parts of the
free-space. c) Even as the robot reaches the upper left corner of the map, its belief is still concentrated
around four possible locations. d) Finally, after moving approximately 55m, the ambiguity is resolved
and the robot knows where it is. All computation can be carried out in real-time on a low-end PC.
In the context of robot localization, the state of the system is the robot's position, which is typically represented in a two-dimensional Cartesian space and the robot's heading direction. The state transition probability p(x_t | x_{t-1}, u_{t-1}) describes how the position of the robot changes using information collected by the robot's wheel encoders. The perceptual model p(y_t | x_t) describes the likelihood of making the observation y_t given that the robot is at location x_t. In most applications, measurements consist of range measurements or camera images (see [6] for details). Figure 1 illustrates particle filters for mobile robot localization.
Shown there is a map of a hallway environment along with a sequence of sample sets during
global localization. In this example, all sample sets contain 100,000 samples. While such
a high number of samples might be needed to accurately represent the belief during early
stages of localization (cf. 1(a)), it is obvious that only a small fraction of this number suffices
to track the position of the robot once it knows where it is (cf. 1(c)). Unfortunately, it is not
straightforward how the number of samples can be adapted on-the-fly, and this problem has
only rarely been addressed so far.
3 Adaptive particle filters with variable sample set sizes
The localization example in the previous section illustrates that the efficiency of particle
filters can be greatly increased by changing the number of samples over time. Before we
introduce our approach to adaptive particle filters, let us first discuss an existing technique.
3.1 Likelihood-based adaptation
We call this approach likelihood-based adaptation since it determines the number of samples such that the sum of non-normalized likelihoods (importance weights) exceeds a prespecified threshold. This approach has been applied to dynamic Bayesian networks [8] and
mobile robot localization [5]. The intuition behind this approach can be illustrated in the
robot localization context: If the sample set is well in tune with the sensor reading, each individual importance weight is large and the sample set remains small. This is typically the case
during position tracking (cf. 1(c)). If, however, the sensor reading carries a lot of surprise,
as is the case when the robot is globally uncertain or when it lost track of its position, the
individual sample weights are small and the sample set becomes large.
The likelihood-based adaptation directly relates to the property that the variance of the importance sampler is a function of the mismatch between the proposal distribution and the
distribution that is being approximated. Unfortunately, this mismatch is not always an accurate indicator for the necessary number of samples. Consider, for example, the ambiguous
belief state consisting of four distinctive sample clusters shown in Fig. 1(b). Due to the symmetry of the environment, the average likelihood of a sensor measurement observed in this
situation is approximately the same as if the robot knew its position unambiguously (cf. 1(c)).
Likelihood-based adaptation would therefore use the same number of samples in both situations. Nevertheless, it is obvious that an accurate approximation of the belief shown in
Fig. 1(b) requires a multiple of the samples needed to represent the belief in Fig. 1(c).
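For comparison, the likelihood-based stopping rule can be sketched in a few lines. The threshold theta and the two model callables are hypothetical stand-ins; the works cited for this rule [8, 5] specify their own settings.

import numpy as np

def likelihood_adaptive_update(prev_samples, prev_weights, u, y,
                               sample_motion, measurement_likelihood,
                               theta=1.0, max_samples=100_000):
    """Generate samples until the sum of non-normalized likelihoods exceeds theta."""
    samples, weights, total = [], [], 0.0
    while total < theta and len(samples) < max_samples:
        i = np.random.choice(len(prev_samples), p=prev_weights)
        x = sample_motion(prev_samples[i], u)
        w = measurement_likelihood(y, x)
        samples.append(x)
        weights.append(w)
        total += w  # surprising observations give small w, hence more samples
    weights = np.array(weights) / np.sum(weights)
    return samples, weights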
3.2 KLD-sampling
The key idea of our approach is to bound the error introduced by the sample-based representation of the particle filter. To derive this bound, we assume that the true posterior is
given by a discrete, piecewise constant distribution such as a discrete density tree or a multidimensional histogram [8, 9]. For such a representation we can determine the number of
samples so that the distance between the maximum likelihood estimate (MLE) based on the
samples and the true posterior does not exceed a pre-specified threshold ε. We denote the
resulting approach the KLD-sampling algorithm since the distance between the MLE and the
true distribution is measured by the Kullback-Leibler distance. In what follows, we will first
derive the equation for determining the number of samples needed to approximate a discrete
probability distribution (see also [12, 7]). Then we will show how to modify the basic particle
filter algorithm so that it realizes our adaptation approach.
To see this, suppose that n samples are drawn from a discrete distribution with k different bins. Let the vector X = (X_1, ..., X_k) denote the number of samples drawn from each bin. X is distributed according to a multinomial distribution, i.e. X ~ Multinomial_k(n, p), where p = (p_1, ..., p_k) specifies the probability of each bin. The maximum likelihood estimate of p is given by p̂ = n^{-1} X. Furthermore, the likelihood ratio statistic λ_n for testing p is

log λ_n = Σ_{j=1}^{k} X_j log(p̂_j / p_j) = n Σ_{j=1}^{k} p̂_j log(p̂_j / p_j).   (1)

When p is the true distribution, the likelihood ratio converges to a chi-square distribution:

2 log λ_n → χ²_{k-1}  in distribution, as n → ∞.   (2)

Please note that the sum in the rightmost term of (1) specifies the K-L distance K(p̂, p) between the MLE and the true distribution. Now we can determine the probability that this distance is smaller than ε, given that n samples are drawn from the true distribution:

P_p(K(p̂, p) ≤ ε) = P_p(2n K(p̂, p) ≤ 2nε) → P(χ²_{k-1} ≤ 2nε).   (3)

The second step in (3) follows by replacing 2n K(p̂, p) with the likelihood ratio statistic, and by the convergence result in (2). The quantiles of the chi-square distribution are given by

P(χ²_{k-1} ≤ χ²_{k-1, 1-δ}) = 1 − δ.   (4)

Now if we choose n such that 2nε is equal to χ²_{k-1, 1-δ}, we can combine (3) and (4) to get

P_p(K(p̂, p) ≤ ε) → 1 − δ.   (5)

This derivation can be summarized as follows: If we choose the number of samples as

n = (1 / 2ε) χ²_{k-1, 1-δ},   (6)

then we can guarantee that with probability 1 − δ, the K-L distance between the MLE and the true distribution is less than ε. In order to determine n according to (6), we need to compute the quantiles of the chi-square distribution. A good approximation is given by the Wilson-Hilferty transformation [7], which yields

n = (1 / 2ε) χ²_{k-1, 1-δ} ≈ ((k − 1) / 2ε) (1 − 2/(9(k−1)) + sqrt(2/(9(k−1))) z_{1-δ})³,   (7)

where z_{1-δ} is the upper 1 − δ quantile of the standard normal N(0, 1) distribution.

This concludes the derivation of the sample size needed to approximate a discrete distribution with an upper bound ε on the K-L distance. From (7) we see that the required number n of samples is proportional to the inverse of the error bound ε, and to the first order linear in the number k of bins with support. Here we assume that a bin of the multinomial distribution has support if its probability is above a certain threshold. This way the number k will decrease with the certainty of the state estimation¹.
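The bound in (7) is straightforward to evaluate. The sketch below computes n from k, ε, and δ via the Wilson-Hilferty approximation; the use of scipy for the normal quantile is an implementation choice of this write-up, not something the paper prescribes.

from scipy.stats import norm

def kld_sample_count(k, epsilon, delta):
    """Number of samples n from Eq. (7), via the Wilson-Hilferty approximation."""
    if k < 2:
        return 1  # with a single supported bin the MLE is trivially exact
    z = norm.ppf(1.0 - delta)       # upper 1-delta quantile of N(0, 1)
    d = 2.0 / (9.0 * (k - 1))
    chi2 = (k - 1) * (1.0 - d + (d ** 0.5) * z) ** 3
    return int(chi2 / (2.0 * epsilon)) + 1

# Example: k = 100 supported bins, epsilon = 0.25, delta = 0.01
# yields a requirement of roughly 270 samples.
print(kld_sample_count(100, 0.25, 0.01))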
It remains to be shown how to apply this result to particle filters. The problem is that we do
not know the true posterior distribution (the estimation of this posterior is the main goal of the
particle filter). Fortunately, (7) shows that we do not need the complete discrete distribution
but that it suffices to determine the number of bins with support. However, we do not know
this quantity before we actually generate the distribution. Our approach is to estimate k by counting the number of bins with support during sampling. To be more specific, we estimate k for the proposal distribution p(x_t | x_{t-1}, u_{t-1}) Bel(x_{t-1}) resulting from the first two steps of the particle filter update. The determination of k can be done efficiently by checking for
each generated sample whether it falls into an empty bin or not. Sampling is stopped as
soon as the number of samples exceeds the threshold specified in (7). An update step of the
resulting KLD-sampling particle filter is given in Table 1.
The implementation of this modified particle filter is straightforward. The only difference to
the original algorithm is that we have to keep track of the number of supported bins. The
bins can be implemented either as a fixed, multi-dimensional grid, or more efficiently as tree
structures [8, 9]. Please note that the sampling process is guaranteed to terminate, since for a
given bin size Δ, the maximum number of bins is limited.
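As a minimal illustration of the grid variant, the number k of supported bins can be tracked with a set of integer tuples; the pose discretization below uses the 50cm × 50cm × 10deg bin size from the experiments, while everything else is an assumption of this sketch.

def bin_key(x, y, theta, dxy=0.5, dtheta=10.0):
    """Map a robot pose (meters, meters, degrees) to a discrete grid bin."""
    return (int(x // dxy), int(y // dxy), int(theta // dtheta))

occupied = set()
k = 0
for pose in [(1.2, 3.4, 90.0), (1.3, 3.4, 92.0), (8.0, 0.5, 10.0)]:
    key = bin_key(*pose)
    if key not in occupied:  # the sample falls into an empty bin
        occupied.add(key)
        k += 1
print(k)  # -> 2: the first two poses share a bin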
4 Experimental results
We evaluated our approach using data collected with one of our robots (see Figure 1). The
data consists of a sequence of sonar scans and odometry measurements annotated with timestamps to allow systematic real-time evaluations. In all experiments we compared our KLD-sampling approach to the likelihood-based approach discussed in Section 3.1, and to particle
filters with fixed sample set sizes. Throughout the experiments we used different parameters
for the three approaches. For the fixed approach we varied the number of samples, for the
likelihood-based approach we varied the threshold used to determine the number of samples,
and for our approach we varied , the bound on the K-L distance. In all experiments, we
used a value of 0.99 for (1 − δ) and a fixed bin size of 50cm × 50cm × 10deg. We limited the
maximum number of samples for all approaches to 100,000.

¹This need for a threshold to determine k (and to make k vary over time) is not particularly elegant. However, it results in an efficient implementation that does not even depend on the value of the threshold itself (see next paragraph). We also implemented a version of the algorithm using the complexity of the state space to determine the number of samples. Complexity is measured by 2^H, where H is the entropy of the distribution. This approach does not depend on thresholding at all, but it does not have a guarantee of approximation bounds and does not yield significantly different results.
Inputs: S_{t-1} = {⟨x_{t-1}^{(i)}, w_{t-1}^{(i)}⟩ | i = 1, ..., n} representing belief Bel(x_{t-1}),
control measurement u_{t-1}, observation y_t, bounds ε and δ, bin size Δ

S_t := ∅, n = 0, k = 0, α = 0                        /* Initialize */
do                                                   /* Generate samples ... */
    Sample an index j from the discrete distribution given by the weights in S_{t-1}
    Sample x_t^{(n)} from p(x_t | x_{t-1}, u_{t-1}) using x_{t-1}^{(j)} and u_{t-1}
    w_t^{(n)} := p(y_t | x_t^{(n)})                  /* Compute importance weight */
    α := α + w_t^{(n)}                               /* Update normalization factor */
    S_t := S_t ∪ {⟨x_t^{(n)}, w_t^{(n)}⟩}            /* Insert sample into sample set */
    if (x_t^{(n)} falls into an empty bin b) then    /* Update number of bins with support */
        k := k + 1
        b := non-empty
    n := n + 1                                       /* Update number of generated samples */
while (n < (1 / 2ε) χ²_{k-1, 1-δ})                   /* ... until K-L bound is reached */
for i := 1, ..., n do
    w_t^{(i)} := w_t^{(i)} / α                       /* Normalize importance weights */
return S_t
Table 1: KLD-sampling algorithm.
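A compact Python rendering of Table 1 might look as follows. It reuses the kld_sample_count and bin_key helpers sketched above (both assumptions of this write-up, not code from the paper), represents states as (x, y, theta) poses, and adds a hypothetical minimum sample count n_min to avoid stopping after a single sample.

import numpy as np

def kld_sampling_update(prev_samples, prev_weights, u, y,
                        sample_motion, measurement_likelihood,
                        epsilon=0.25, delta=0.01,
                        n_min=10, max_samples=100_000):
    """One KLD-sampling update in the spirit of Table 1."""
    samples, weights = [], []
    occupied, k, n = set(), 0, 0
    while True:
        j = np.random.choice(len(prev_samples), p=prev_weights)  # draw an index
        x = sample_motion(prev_samples[j], u)                    # predict
        samples.append(x)
        weights.append(measurement_likelihood(y, x))             # weight
        key = bin_key(*x)                                        # pose -> bin
        if key not in occupied:                                  # empty bin?
            occupied.add(key)
            k += 1
        n += 1
        if n >= max_samples or n >= max(n_min, kld_sample_count(k, epsilon, delta)):
            break                                                # K-L bound reached
    weights = np.array(weights) / np.sum(weights)                # normalize
    return samples, weights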
Approximation of the true posterior
In the first set of experiments we evaluated how accurately the different methods approximate
the true posterior density. Since the ground truth for these posteriors is not available, we
compared the sample sets generated by the different approaches with reference sample sets.
These reference sets were generated using a particle filter with a fixed number of 200,000
samples (far more than actually needed for position estimation). After each iteration, we
computed the K-L distance between the sample sets and the corresponding reference sets,
using histograms for both sets. Note that in these experiments the time-stamps were ignored
and the algorithm was given as much time as needed to process the data. Fig. 2(a) plots the average K-L distance along with 95% confidence intervals against the average number of samples for the different algorithms (for clarity, we omitted the large error bars for K-L distances above 1.0). Each data point represents the average of 16 global localization
runs with different start positions of the robot (each run itself consists of approximately 150
sample set comparisons at the different points in time). As expected, the more samples are
used, the better the approximation. The curves also illustrate the superior performance of our
approach: While the fixed approach requires about 50,000 samples before it converges to a KL distance below 0.25, our approach converges to the same level using only 3,000 samples on
average. This is also an improvement by a factor of 12 compared to the approximately 36,000
samples needed by the likelihood-based approach. In essence, these experiments indicate that
our approach, even though based on several approximations, is able to accurately track the
true posterior using significantly smaller sample sets on average than the other approaches.
Real-time performance
Due to the computational overhead for determining the number of samples, it is not clear
that our approach yields better results under real-time conditions. To test the performance
of our approach under realistic conditions, we performed multiple global localization experiments under real-time considerations using the timestamps in the data sets. Again, the
[Figure 2: two panels plotting (a) K-L distance and (b) localization error [cm] against the average number of samples, each comparing KLD-sampling, likelihood-based adaptation, and fixed sampling.]
Fig. 2: The x-axis represents the average sample set size for different parameters of the three approaches. a) The y-axis plots the K-L distance between the reference densities and the sample sets generated by the different approaches (real-time constraints were not considered in this experiment). b) The y-axis represents the average localization error measured by the distance between estimated positions and reference positions. The U-shape in b) is due to the fact that under real-time conditions, an increasing number of samples results in higher update times and therefore loss of sensor data.
different average numbers of samples for KLD-sampling were obtained by varying the ε-bound. The minimum and maximum numbers of samples correspond to ε-bounds of 0.4 and 0.015, respectively. As a natural measure of the performance of the different algorithms, we
determined the distance between the estimated robot position and the corresponding reference position after each iteration.² The results are shown in Fig. 2(b). The U-shape of all
three graphs nicely illustrates the trade-off involved in choosing the number of samples under
real-time constraints: Choosing not enough samples results in a poor approximation of the
underlying posterior and the robot frequently fails to localize itself. On the other hand, if we
choose too many samples, each update of the algorithm can take several seconds and valuable
sensor data has to be discarded, which results in less accurate position estimates. Fig. 2(b)
also shows that even under real-time conditions, our KLD-sampling approach yields drastic
improvements over both fixed sampling and likelihood-based sampling. The smallest average localization error is 44cm in contrast to an average error of 79cm and 114cm for the
likelihood-based and the fixed approach, respectively. This result is due to the fact that our
approach is able to determine the best mix between more samples during early stages of
localization and less samples during position tracking. Due to the smaller sample sets, our
approach also needs significantly less processing power than any of the other approaches.
5 Conclusions and Future Research
We presented a statistical approach to adapting the sample set size of particle filters onthe-fly. The key idea of the KLD-sampling approach is to bound the error introduced by
the sample-based belief representation of the particle filter. At each iteration, our approach
generates samples until their number is large enough to guarantee that the K-L distance between the maximum likelihood estimate and the underlying posterior does not exceed a prespecified bound. Thereby, our approach chooses a small number of samples if the density is
focused on a small subspace of the state space, and chooses a large number of samples if the
samples have to cover a major part of the state space.
Both the implementational and computational overhead of this approach are small. Extensive experiments using mobile robot localization as a test application show that our approach
yields drastic improvements over particle filters with fixed sample sets and over a previously introduced adaptation approach [8, 5]. In our experiments, KLD-sampling yields better approximations using only 6% of the samples required by the fixed approach, and using less than 9% of the samples required by the likelihood adaptation approach.² So far, KLD-sampling has been tested using robot localization only. We conjecture, however, that many other applications of particle filters can benefit from this method.

²Position estimates are extracted using histogramming and local averaging, and the reference positions were determined by evaluating the robot's highly accurate laser range-finder information.
KLD-sampling opens several directions for future research. In our current implementation
we use a discrete distribution with a fixed bin size to determine the number of samples. We assume that the performance of the filter can be further improved by changing the discretization
over time, using coarse discretizations when the uncertainty is high, and fine discretizations
when the uncertainty is low. Our approach can also be extended to the case where in certain
parts of the state space, highly accurate estimates are needed, while in other parts a rather
crude approximation is sufficient. This problem can be addressed by locally adapting the discretization to the desired approximation quality using multi-resolution tree structures [8, 9]
in combination with stratified sampling. As a result, more samples are used in "important"
parts of the state space, while less samples are used in other parts. Another area of future
research is the thorough investigation of particle filters under real-time conditions. In many
applications the rate of incoming sensor data is higher than the update rate of the particle
filter. This introduces a trade-off between the number of samples and the amount of sensor
data that can be processed (cf. 2(b)). In our future work, we intend to address this problem
using techniques similar to the ones introduced in this work.
Acknowledgments
The author wishes to thank Jon A. Wellner and Vladimir Koltchinskii for their help in deriving the statistical background of this work. Additional thanks go to Wolfram Burgard and
Sebastian Thrun for their valuable feedback on early versions of the technique.
References
[1] I. J. Cox and G. T. Wilfong, editors. Autonomous Robot Vehicles. Springer Verlag, 1990.
[2] P. Del Moral and L. Miclo. Branching and interacting particle systems approximations of Feynman-Kac formulae with applications to non-linear filtering. In Seminaire de Probabilites XXXIV, number 1729 in Lecture Notes in Mathematics. Springer-Verlag, 2000.
[4] A. Doucet, S.J. Godsill, and C. Andrieu. On sequential monte carlo sampling methods for
Bayesian filtering. Statistics and Computing, 10(3), 2000.
[5] D. Fox, W. Burgard, F. Dellaert, and S. Thrun. Monte Carlo Localization: Efficient position estimation for mobile robots. In Proc. of the National Conference on Artificial Intelligence (AAAI),
1999.
[6] D. Fox, S. Thrun, F. Dellaert, and W. Burgard. Particle filters for mobile robot localization. In
Doucet et al. [3].
[7] N. Johnson, S. Kotz, and N. Balakrishnan. Continuous univariate distributions, volume 1. John
Wiley & Sons, New York, 1994.
[8] D. Koller and R. Fratkina. Using learning for approximation in stochastic processes. In Proc. of
the International Conference on Machine Learning (ICML), 1998.
[9] A. W. Moore, J. Schneider, and K. Deng. Efficient locally weighted polynomial regression predictions. In Proc. of the International Conference on Machine Learning (ICML), 1997.
[10] M. Pelikan, D.E. Goldberg, and E. Cant-Paz. Bayesian optimization algorithm, population size,
and time to convergence. In Proc. of the Genetic and Evolutionary Computation Conference
(GECCO), 2000.
[11] M. K. Pitt and N. Shephard. Filtering via simulation: auxiliary particle filters. Journal of the
American Statistical Association, 94(446), 1999.
[12] J.A. Rice. Mathematical Statistics and Data Analysis. Duxbury Press, second edition, 1995.
1,093 | 1,999 | Tempo Tracking and Rhythm Quantization by Sequential Monte Carlo
Ali Taylan Cemgil and Bert Kappen
SNN, University of Nijmegen
NL 6525 EZ Nijmegen
The Netherlands
{cemgil,bert}@mbfys.kun.nl
Abstract
We present a probabilistic generative model for timing deviations
in expressive music performance. The structure of the proposed
model is equivalent to a switching state space model. We formulate two well known music recognition problems, namely tempo
tracking and automatic transcription (rhythm quantization) as filtering and maximum a posteriori (MAP) state estimation tasks.
The inferences are carried out using sequential Monte Carlo integration (particle filtering) techniques. For this purpose, we have
derived a novel Viterbi algorithm for Rao-Blackwellized particle filters, where a subset of the hidden variables is integrated out. The
resulting model is suitable for realtime tempo tracking and transcription and hence useful in a number of music applications such
as adaptive automatic accompaniment and score typesetting.
1 Introduction
Automatic music transcription refers to extraction of a high level description from
musical performance, for example in the form of a music notation. Music notation can
be viewed as a list of the pitch levels and corresponding timestamps.
Ideally, one would like to recover a score directly from sound. Such a representation
of the surface structure of music would be very useful in music information retrieval
(Music-IR) and content description of musical material in large audio databases.
However, when operating on sampled audio data from polyphonic acoustical signals,
extraction of a score-like description is a very challenging auditory scene analysis
task [13].
In this paper, we focus on a subproblem in Music-IR, where we assume that exact timing information of notes is available, for example as a stream of MIDI¹ events
1 Musical Instruments Digital Interface. A standard communication protocol especially
designed for digital instruments such as keyboards. Each time a key is pressed, a MIDI
keyboard generates a short message containing pitch and key velocity. A computer can tag
each received message by a timestamp for real-time processing and/or "recording" into a
file.
from a digital keyboard.
A model for tempo tracking and transcription is useful in a broad spectrum of applications. One example is automatic score typesetting, the musical analog of word
processing. Almost all score typesetting applications provide a means of automatic
generation of a conventional music notation from MIDI data.
In conventional music notation, onset time of each note is implicitly represented
by the cumulative sum of durations of previous notes. Durations are encoded by
simple rational numbers (e.g. quarter note, eighth note); consequently all events in
music are placed on a discrete grid. So the basic task in MIDI transcription is to
associate discrete grid locations with onsets, Le. quantization.
However, unless the music is performed with mechanical precision, identification of
the correct association becomes difficult. Consequently, resulting scores often have
very poor quality. This is due to the fact that musicians introduce intentional (and
unintentional) deviations from a mechanical prescription. For example timing of
events can be deliberately delayed or pushed. Moreover, the tempo can fluctuate
by slowing down or accelerating. In fact, such deviations are natural aspects of
expressive performance; in the absence of these, music tends to sound rather dull.
Robust and fast quantization and tempo tracking is also an important requirement
in interactive performance systems. These are emerging applications that "listen"
to the performance for generating an accompaniment or improvisation in real time
[10, 12]. Finally, such models are also useful in musicology for systematic study and characterization of expressive timing by principled analysis of existing performance
data.
2 Model

Consider the following generative model for timing deviations in music:

c_k = c_{k-1} + γ_{k-1}   (1)
ω_k = ω_{k-1} + ζ_k   (2)
τ_k = τ_{k-1} + 2^{ω_k} (c_k − c_{k-1})   (3)
y_k = τ_k + ε_k   (4)

In Eq. 1, c_k denotes the grid location of the k'th onset in a score. The interval between two consecutive onsets in the score is denoted by γ_{k-1}. For example, the notation consisting of a quarter note followed by two eighth notes encodes γ_{1:3} = [1 0.5 0.5], hence c_{1:4} = [0 1 1.5 2]. We assign a prior of form p(c_k) ∝ exp(−d(c_k)), where d(c_k) is the number of significant digits in the binary expansion of the fraction of c_k [1]. One can check that such a prior prefers simpler notations, e.g. it assigns higher probability to onsets on coarser subdivisions of the beat. We note that c_k are drawn from an infinite (but discrete) set and are increasing in k, i.e. c_k ≥ c_{k-1}. To allow for different time signatures and alternative rhythmic subdivisions, one can introduce additional hidden variables [1], but this is not addressed in this paper.
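For concreteness, one way to compute the depth d(c_k) is to find the first level of binary subdivision on which the fractional part of c_k lies; this reading of [1] and the depth cap are assumptions of the sketch below.

def d(c, max_depth=10):
    """Significant digits in the binary expansion of the fraction of c."""
    frac = c - int(c)
    for depth in range(max_depth + 1):
        # on level `depth`, grid points are integer multiples of 2**-depth
        scaled = frac * (2 ** depth)
        if abs(scaled - round(scaled)) < 1e-9:
            return depth
    return max_depth

# d(2.0) = 0, d(1.5) = 1, d(1.25) = 2: simpler locations get higher prior mass
print(d(2.0), d(1.5), d(1.25))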
Eq. 2 defines a prior over possible tempo deviations. We denote the logarithm of the period (inverse tempo) by ω. For example, if the tempo is 60 beats per minute (bpm), ω = log 1 sec = 0. Since tempo appears as a scale variable in mapping grid locations on a score to the actual performance time, we have chosen to represent it in the logarithmic scale (eventually a gamma distribution can also be used). This representation is both perceptually plausible and mathematically convenient, since a symmetric noise model on ω assigns equal probabilities to equal relative changes in tempo. We take ζ_k to be a Gaussian random variable with N(0, λ^{2γ_k} Q). Depending upon the interval between consecutive onsets, the model scales the noise covariance; longer jumps in the score allow for more freedom in fluctuating the tempo. Given the ω sequence, Eq. 3 defines a model of noiseless onsets with variable tempo. We will denote the pair of hidden continuous variables by z_k = (τ_k, ω_k).
Eq. 4 defines the observation model. Here y_k is the observed onset time of the k'th onset in the performance. The noise term ε_k models small scale expressive deviations in timing of individual notes and has a Gaussian distribution parameterized by N(μ(γ_{k-1}), Σ(γ_{k-1})). Such a parameterization is useful for appropriate quantization of phrases (short sequences of notes) that are shifted or delayed as a whole [1].
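The generative model of Eqs. 1-4 is straightforward to simulate. The sketch below draws a performance for a fixed score; the noise scales q and r are illustrative values, and scaling the tempo noise with γ is one simple choice consistent with the description above, not necessarily the paper's exact parameterization.

import numpy as np

def simulate_performance(gammas, q=0.01, r=0.0005, seed=0):
    """Sample onset times y_k from Eqs. 1-4 for given score intervals gammas."""
    rng = np.random.default_rng(seed)
    c, omega, tau = 0.0, 0.0, 0.0  # score position, log-period, onset time
    onsets = [tau + rng.normal(0.0, np.sqrt(r))]
    for gamma in gammas:
        c_new = c + gamma                                    # Eq. 1
        omega = omega + rng.normal(0.0, np.sqrt(q * gamma))  # Eq. 2 (scaled noise)
        tau = tau + 2.0 ** omega * (c_new - c)               # Eq. 3
        onsets.append(tau + rng.normal(0.0, np.sqrt(r)))     # Eq. 4
        c = c_new
    return np.array(onsets)

print(simulate_performance([1.0, 0.5, 0.5, 1.0]))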
In reality, a random walk model for tempo such as in Eq. 2 is not very realistic. Tempo deviations are usually more smooth. In the dynamical model framework such smooth deviations can be allowed by increasing the dimensionality of ω to include higher order "inertia" variables [2]. In this case we simply rewrite Eq. 2 as

ω_k = A ω_{k-1} + ζ_k

and take a diagonal Q. Accordingly, the observation model (Eq. 4) is changed such that ω_k is replaced by C ω_k, where C = [1 0 ... 0].
The graphical model is shown in Figure 1. The model is similar to a switching state space model, which has recently been applied in the context of music transcription [11]. The differences are in parameterization and, more importantly, in the inference method.

Figure 1: Graphical Model. The pair of continuous hidden variables (τ_k, ω_k) is denoted by z_k. Both c and z are hidden; only the onsets y are observed.
We define tempo tracking as a filtering problem

z_k^* = argmax_{z_k} Σ_{c_k} p(c_k, z_k | y_{1:k})   (5)

and rhythm transcription as a MAP state estimation problem

c_{1:K}^* = argmax_{c_{1:K}} p(c_{1:K} | y_{1:K})   (6)

p(c_{1:K} | y_{1:K}) = ∫ dz_{1:K} p(c_{1:K}, z_{1:K} | y_{1:K}).   (7)
The exact computation of the quantities in Eq. 6 and Eq. 5 is intractable due to
the explosion in the number of mixture components required to represent the exact
posterior at each step k. Consequently we will use Monte Carlo approximation
techniques.
3 Sequential Monte Carlo Sampling
Sequential Monte Carlo sampling (a.k.a. particle filtering) is an integration method
especially powerful for inference in dynamical systems. See [4] for a detailed review
of the state of the art. At each step k, the exact marginal posterior over hidden states x_k is approximated by an empirical distribution of the form

p(x_k | y_{1:k}) ≈ Σ_{i=1}^{N} w_k^{(i)} δ(x_k − x_k^{(i)})   (8)

where x_k^{(i)} are a set of points obtained by sampling from a proposal distribution and w_k^{(i)} are associated importance weights such that Σ_{i=1}^{N} w_k^{(i)} = 1. Particles at step k are evolved to k + 1 by sequential importance sampling and resampling methods [6]. Once a set of discrete sample points is obtained during the forward phase by sampling, particle approximations to quantities such as the smoothed marginal posterior p(x_k | y_{1:K}) or the maximum a posteriori state sequence (Viterbi path) x_{1:K}^* can be obtained efficiently. Due to the discrete nature of the approximate representation, resulting algorithms are closely related to standard smoothing and Viterbi algorithms in Hidden Markov models [9, 7, 6].
Unfortunately, if the hidden state space is of high dimensionality, sampling can be inefficient. Hence increasingly many particles are needed to accurately represent the posterior. Consequently, the estimation of "off-line" quantities such as p(x_k | y_{1:K}) and x_{1:K}^* becomes very costly since one has to store all past trajectories.

For some models, including the one proposed here, one can identify substructures where integrations, conditioned on certain nodes, can be computed analytically [5]. Conditioned on c_{1:k}, the model reduces to the (extended)² Kalman filter. In this case the joint marginal posterior is represented as a mixture

p(c_k, z_k | y_{1:k}) ≈ Σ_{i=1}^{N} w_k^{(i)} p(z_k | c_{1:k}^{(i)}, y_{1:k}) δ(c_k − c_k^{(i)})   (9)

The particular case of Gaussian p(z_k | c_{1:k}^{(i)}, y_{1:k}) is extensively used in diverse applications [8] and reported to give superior results when compared to standard particle filtering [3, 6].
3.1 Particle Filtering
We assume that we have obtained a set of particles from the filtered posterior p(c_{1:k} | y_{1:k}). Due to lack of space we do not give the details of the particle filtering algorithm but refer the reader to [6]. One important point to note is that we have to use the optimal proposal distribution, given as

p(c_k | c_{1:k-1}^{(i)}, y_{1:k}) ∝ ∫ dz_{k-1:k} p(y_k | z_k, c_k, c_{1:k-1}^{(i)}) p(z_k, c_k | z_{k-1}, c_{1:k-1}^{(i)}) p(z_{k-1} | c_{1:k-1}^{(i)}, y_{1:k-1}).   (10)

Since the state-space of c_k is effectively infinite, this step is crucial for efficiency. Evaluation of the proposal distribution amounts to looking forward and selecting a set of high probability candidate grid locations for quantization. Once c_k^{(i)} are obtained, we can use standard Kalman filtering algorithms to update the Gaussian potentials p(z_k | c_{1:k}^{(i)}, y_{1:k}). Thus the tempo tracking problem as stated in Eq. 5 is readily solved.

²We linearize the nonlinear observation model 2^{ω_k}(c_k − c_{k-1}) around the expectation ⟨ω_k⟩.
3.2 Modified Viterbi algorithm

The quantization problem in Eq. 6 can only be solved approximately. Since z is integrated over, in general all c_k become coupled and the Markov property is lost, i.e. p(c_{1:K} | y_{1:K}) is in general not a chain. One possible approximation, that we adopt also here, is to assume smoothed estimates are not much different from filtered estimates [8], i.e.

p(c_k, z_k | c_{k-1}, z_{k-1}, y_{1:K}) ≈ p(c_k, z_k | c_{k-1}, z_{k-1}, y_{1:k})   (11)

and to write

p(c_{1:K} | y_{1:K}) ≈ ∫ dz_{1:K} p(c_1, z_1 | y_1) ∏_{k=2}^{K} p(c_k, z_k | c_{k-1}, z_{k-1}, y_{1:k})
∝ ∫ dz_{1:K} p(y_1 | z_1, c_1) p(z_1, c_1) ∏_{k=2}^{K} p(y_k | z_k, c_k, c_{k-1}) p(z_k, c_k | z_{k-1}, c_{k-1})

If we plug in the mixture approximation in Eq. 9 and take the argmax of the logarithm on both sides, we obtain a sum that can be stepwise optimized using the Viterbi algorithm [9]. The standard Viterbi algorithm for particle filters [7] defines a transition matrix T_{k-1}(j, i) = f(c_k^{(j)} | c_{k-1}^{(i)}) between each pair of particles at consecutive time slices. Here, f is a state transition distribution that can be evaluated pointwise, and T_{k-1} can be computed on the fly by evaluating f at (c_k^{(j)}, c_{k-1}^{(i)}). In contrast, the modified Viterbi algorithm replaces the pointwise evaluation by an expectation under p(z_k, z_{k-1} | c_k^{(j)}, c_{1:k-1}^{(i)}, y_{1:k}), where the transition matrix is defined as T_{k-1}(j, i) = p(c_k^{(j)} | c_{1:k-1}^{(i)}, y_{1:k}). In this case, each entry of T is computed by a one step Kalman likelihood evaluation.
1. Initialization. For i = 1 : N
   Δ_1(i) = log p(c_1^{(i)}) + log p(y_1 | c_1^{(i)})
2. Recursion. For j = 1 : N, k = 2 : K
   T_{k-1}(j, i) = log p(c_k^{(j)} | c_{1:k-1}^{(i)}, y_{1:k})   (See Eq. 10)
   Δ_k(j) = max_i { Δ_{k-1}(i) + T_{k-1}(j, i) }
   ψ_k(j) = argmax_i { Δ_{k-1}(i) + T_{k-1}(j, i) }
3. Termination.
   r_K = argmax_i Δ_K(i)
4. Backtracking. For k = K − 1 : −1 : 1
   r_k = ψ_{k+1}(r_{k+1})
   c_k^* = c_k^{(r_k)}

Since the tempo trajectories can be integrated out online, we need to only store the links ψ_k and quantization locations c_k^{(i)}. Consequently, the random walk tempo prior can be replaced by a richer model as in Eq. 5, virtually without additional computational or storage cost. An outline of the algorithm is shown in Figure 2.
Of course, the efficiency and accuracy of our approach depend heavily on the assumption in Eq. 11, that the T matrix based on filtered estimates is accurate.
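Given routines that fill the matrices T_{k-1}(j, i) (in the paper, by one-step Kalman likelihood evaluations), the recursion and backtracking above reduce to a few lines; log_T_list below is assumed to hold those precomputed log transition matrices.

import numpy as np

def particle_viterbi(log_prior, log_T_list):
    """MAP path of particle indices. log_prior has shape (N,);
    log_T_list[k-2] has shape (N, N) with entry [j, i] = T_{k-1}(j, i)."""
    delta = log_prior.copy()
    backptr = []
    for log_T in log_T_list:             # recursion over k = 2..K
        scores = delta[None, :] + log_T  # Delta_{k-1}(i) + T_{k-1}(j, i)
        backptr.append(np.argmax(scores, axis=1))
        delta = np.max(scores, axis=1)
    path = [int(np.argmax(delta))]       # termination
    for psi in reversed(backptr):        # backtracking
        path.append(int(psi[path[-1]]))
    return path[::-1]                    # r_1, ..., r_K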
Figure 2: Outline of the algorithm. Left: forward filtering phase. The ellipses correspond to the conditionals p(z_k | c_{1:k}^{(i)}, y_{1:k}). Vertical dotted lines denote the observations y_k. At each step k, particles with low likelihood are discarded. Surviving particles are linked to their parents. Right: The transition matrix T_{k-1} between each generation (for all pairs of (c_k^{(j)}, c_{1:k-1}^{(i)})) is computed by standard Kalman filter likelihood equations. Note that T_{k-1} can be discarded once the forward messages Δ_k are computed, and only the backward links ψ_{1:K} and corresponding c_k^{(i)} need to be stored. When all onsets y_{1:K} are observed, the MAP sequence c_{1:K}^* is computed by backtracking.
4 Simulation Results
We have tested tempo tracking and quantization performance of the model on
two different examples. The first example is a repeating "son-clave" pattern
(score notation omitted; c = [1 2 4 5.5 7 ...]) with fluctuating tempo.³ Such
syncopated rhythms are usually hard to transcribe and make it difficult to track
the tempo even for experienced human listeners. Moreover, since onsets are absent
at prominent beat locations, standard beat tracking algorithms usually lose track. We observe that for various realistic tempo fluctuations and observation noise levels, the particle filter is able to identify the correct tempo trajectory and the corresponding quantization (Figure 3, above).
The second example is a piano arrangement of the Beatles song "Yesterday" performed by a professional classical pianist on a MIDI grand piano. This is a polyphonic piece, i.e. the arrangement contains chords and events occurring at the same time. We model polyphony by allowing c_k − c_{k-1} = 0. In this case, since the original arrangement is known, we estimate the true tempo trajectory by Kalman filtering after clamping c_{1:K}. As shown in Figure 3, the particle filter estimate and the true
tempo trajectory are almost identical.
5 Discussion and Conclusion
There are several advantages offered by the particle filtering approach. The algorithm
is suitable for real time implementation. Since the implementation is easy, this provides an important flexibility in the models one can employ. Although we have not
³We modulate the tempo deterministically according to ω_k = 0.3 sin(2πc_k/32). The observation noise variance is R = 0.0005.
Figure 3: Above: Tempo tracking results for the clave pattern with 4 particles. Each circle denotes the mean (τ_k^{(n)}, ω_k^{(n)}). The diameter of each particle is proportional to the normalized importance weight at each generation. '*' denotes the true (τ, ω) pairs. Below: Tracking results for "Yesterday". '*' denotes the mean of the filtered z_{1:K} after clamping to the true c_{1:K}. Small circles denote the mean z_{1:K} corresponding to the estimated MAP trajectory c_{1:K}^* using 10 particles.
addressed issues such as learning and online adaptation in this paper, parameters of the model can also be treated as hidden variables and eventually integrated out, similar to the tempo trajectories.
Especially in real time music applications fine tuning and careful allocation of computational resources is of primary importance. Particle filtering is suitable since one
can simply reduce the number of particles when computational resources become
overloaded.
Motivated by the advantages of the particle filtering approach, we are currently
working on a real time implementation of the particle filter based tempo tracker for
eventual automatic accompaniment generation such as an adaptive drum machine.
Consequently, the music is quantized such that it can be typeset in a notation
program.
Acknowledgements
This research is supported by the Technology Foundation STW, applied science
division of NWO and the technology programme of the Dutch Ministry of Economic
Affairs.
References
[1] A. T. Cemgil, P. Desain, and H. Kappen. Rhythm quantization for transcription.
Computer Music Journal, 24:2:60-76, 2000.
[2] A. T. Cemgil, H. Kappen, P. Desain, and H. Honing. On tempo tracking: Tempogram
representation and Kalman filtering. Journal of New Music Research, Accepted for
Publication.
[3] R. Chen and J. S. Liu. Mixture kalman filters. J. R. Statist. Soc., 10, 2000.
[4] A. Doucet, N. de Freitas, and N. J. Gordon, editors. Sequential Monte Carlo Methods
in Practice. Springer-Verlag, New York, 2000.
[5] A. Doucet, N. de Freitas, K. Murphy, and S. Russell. Rao-Blackwellised particle filtering for dynamic Bayesian networks. In Uncertainty in Artificial Intelligence,
2000.
[6] A. Doucet, S. Godsill, and C. Andrieu. On sequential Monte Carlo sampling methods for Bayesian filtering. Statistics and Computing, 10(3):197-208, 2000.
[7] S. Godsill, A. Doucet, and M. West. Maximum a posteriori sequence estimation
using monte carlo particle filters. Annals of the Institute of Statistical Mathematics.,
2000.
[8] K. P. Murphy. Switching kalman filters. Technical report, Dept. of Computer Science,
University of California, Berkeley, 1998.
[9] L. R. Rabiner. A tutorial on hidden Markov models and selected applications in speech recognition. Proc. of the IEEE, 77(2):257-286, 1989.
[10] C. Raphael. A probabilistic expert system for automatic musical accompaniment.
Journal of Computational and Graphical Statistics, Accepted for Publication, 1999.
[11] C. Raphael. A mixed graphical model for rhythmic parsing. In to appear in Proc. of
Uncertainty in AI, 2001.
[12] B. Thom. Unsupervised learning and interactive jazz/blues improvisation. In Proceedings of the AAAI-2000. AAAI Press, 2000.
[13] Barry L. Vercoe, William G. Gardner, and Eric D. Scheirer. Structured audio: Creation, transmission, and rendering of parametric sound representations. Proc. IEEE,
86:5:922-940, May 1998.
1,094 | 2 | 184
THE CAPACITY OF THE KANERVA ASSOCIATIVE MEMORY IS EXPONENTIAL
P. A. Chou¹
Stanford University, Stanford, CA 94305
ABSTRACT
The capacity of an associative memory is defined as the maximum
number of words that can be stored and retrieved reliably by an address
within a given sphere of attraction. It is shown by sphere packing
arguments that as the address length increases, the capacity of any
associative memory is limited to an exponential growth rate of 1 − h_2(δ),
where h_2(δ) is the binary entropy function in bits, and δ is the radius
of the sphere of attraction. This exponential growth in capacity can
actually be achieved by the Kanerva associative memory, if its
parameters are optimally set. Formulas for these optimal values are
provided. The exponential growth in capacity for the Kanerva
associative memory contrasts sharply with the sub-linear growth in
capacity for the Hopfield associative memory.
ASSOCIATIVE MEMORY AND ITS CAPACITY
Our model of an associative memory is the following. Let (X, Y) be
an (address, datum) pair, where X is a vector of n ±1s and Y is a
vector of m ±1s, and let (X^(1), Y^(1)), ..., (X^(M), Y^(M)) be M (address,
datum) pairs stored in an associative memory. If the associative memory
is presented at the input with an address X that is close to some
stored address X^(j), then it should produce at the output a word Y that
is close to the corresponding contents Y^(j). To be specific, let us say
that an associative memory can correct fraction δ errors if an X within
Hamming distance nδ of X^(j) retrieves Y equal to Y^(j). The Hamming
sphere around each X^(j) will be called the sphere of attraction, and δ
will be called the radius of attraction.

One notion of the capacity of this associative memory is the
maximum number of words that it can store while correcting fraction δ
errors. Unfortunately, this notion of capacity is ill-defined, because
it depends on exactly which (address, datum) pairs have been stored.
Clearly, no associative memory can correct fraction δ errors for every
sequence of stored (address, datum) pairs. Consider, for example, a
sequence in which several different words are written to the same
address. No memory can reliably retrieve the contents of the
overwritten words. At the other extreme, any associative memory can
store an unlimited number of words and retrieve them all reliably, if
their contents are identical.

A useful definition of capacity must lie somewhere between these
two extremes. In this paper, we are interested in the largest M such
that for most sequences of addresses X^(1), ..., X^(M) and most sequences of
data Y^(1), ..., Y^(M), the memory can correct fraction δ errors. We define
'most sequences' in a probabilistic sense, as some set of sequences with
total probability greater than, say, .99. When all sequences are
equiprobable, this reduces to the deterministic version: 99% of all
sequences.

¹This work was supported by the National Science Foundation under NSF
grant IST-8509860 and by an IBM Doctoral Fellowship.

© American Institute of Physics 1988
In practice it is too difficult to compute the capacity of a given
associative memory with inputs of length n and outputs of length m.
Fortunately, though, it is easier to compute the asymptotic rate at
which M increases, as n and m increase, for a given family of
associative memories. This is the approach taken by McEliece et al. [1]
towards the capacity of the Hopfield associative memory. We take the
same approach towards the capacity of the Kanerva associative memory,
and towards the capacities of associative memories in general. In the
next section we provide an upper bound on the rate of growth of the
capacity of any associative memory fitting our general model. It is
shown by sphere packing arguments that capacity is limited to an
exponential rate of growth of 1 − h_2(δ), where h_2(δ) is the binary entropy
function in bits, and δ is the radius of attraction. In a later section
it will turn out that this exponential growth in capacity can actually
be achieved by the Kanerva associative memory, if its parameters are
optimally set. This exponential growth in capacity for the Kanerva
associative memory contrasts sharply with the sub-linear growth in
capacity for the Hopfield associative memory [1].
A UNIVERSAL UPPER BOUND ON CAPACITY
Recall that our definition of the capacity of an associative memory
is the largest M such that for most sequences of addresses
X^(1), ..., X^(M) and most sequences of data Y^(1), ..., Y^(M), the memory can
correct fraction δ errors. Clearly, an upper bound to this capacity is
the largest M for which there exists some sequence of addresses
X^(1), ..., X^(M) such that for most sequences of data Y^(1), ..., Y^(M), the
memory can correct fraction δ errors. We now derive an expression for
this upper bound.

Let δ be the radius of attraction and let D_H(X^(j), d) be the sphere
of attraction, i.e., the set of all Xs at most Hamming distance d = ⌊nδ⌋
from X^(j). Since by assumption the memory corrects fraction δ errors,
every address X ∈ D_H(X^(j), d) retrieves the word Y^(j). The size of
D_H(X^(j), d) is easily shown to be independent of X^(j) and equal to

    V_{n,d} = Σ_{k=0}^{d} (n choose k),

where (n choose k) is the binomial coefficient n!/k!(n−k)!. Thus
out of a total of 2^n n-bit addresses, at least V_{n,d} addresses retrieve
Y^(1), at least V_{n,d} addresses retrieve Y^(2), at least V_{n,d} addresses
retrieve Y^(3), and so forth. It follows that the total number of
distinct Y^(j)s can be at most 2^n / V_{n,d}. Now, from Stirling's formula it
can be shown that if d ≤ n/2, then V_{n,d} = 2^{n h_2(d/n) + O(log n)}, where
h_2(δ) = −δ log_2 δ − (1−δ) log_2(1−δ) is the binary entropy function in bits,
and O(log n) is some function whose magnitude grows more slowly than a
constant times log n. Thus the total number of distinct Y^(j)s can be at
most 2^{n(1−h_2(δ)) + O(log n)}. Since any set containing 'most sequences' of M
m-bit words will contain a large number of distinct words (if m is
sufficiently large; see [2] for details), it follows that

    M ≤ 2^{n(1−h_2(δ)) + O(log n)}.                                      (1)

Figure 1: Neural net representation of the Kanerva associative memory. Signals propagate from the bottom (input) to the top (output). Each arc multiplies the signal by its
weight; each node adds the incoming signals and then thresholds.
In general a function f(n) is said to be O(g(n)) if f(n)/g(n) is
bounded, i.e., if there exists a constant a such that |f(n)| ≤ a|g(n)| for
all n. Thus (1) says that there exists a constant a such that
M ≤ 2^{n(1−h_2(δ)) + a log n}. It should be emphasized that since a is unknown,
this bound has no meaning for fixed n. However, it indicates that
asymptotically in n, the maximum exponential rate of growth of M is
1 − h_2(δ).

Intuitively, only a sequence of addresses X^(1), ..., X^(M) that
optimally packs the address space {−1,+1}^n can hope to achieve this
upper bound. Remarkably, most such sequences are optimal in this sense,
when n is large. The Kanerva associative memory can take advantage of
this fact.
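Because (1) rests only on counting, it can be checked directly. The following
Python snippet (added here as an illustration; it is not part of the original
paper) computes V_{n,d} exactly and compares its exponent with h_2(d/n):

    # Numeric illustration of the sphere-packing bound (1); uses only the
    # definitions above and the Python standard library.
    import math

    def h2(delta):
        """Binary entropy in bits, with h2(0) = h2(1) = 0."""
        if delta in (0.0, 1.0):
            return 0.0
        return -delta * math.log2(delta) - (1.0 - delta) * math.log2(1.0 - delta)

    def V(n, d):
        """Number of points in a Hamming ball of radius d in {-1,+1}^n."""
        return sum(math.comb(n, k) for k in range(d + 1))

    n, delta = 1000, 0.1
    d = int(n * delta)
    print(math.log2(V(n, d)) / n)   # ~= h2(0.1) ~= 0.469, up to O(log n)/n
    print(1.0 - h2(delta))          # capacity growth-rate bound: ~= 0.531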
THE KANERVA ASSOCIATIVE MEMORY
The Kanerva associative memory [3,4] can be regarded as a two-layer
neural network, as shown in Figure 1, where the first layer is a
preprocessor and the second layer is the usual Hopfield style array.
The preprocessor essentially encodes each n-bit input address into a
very large k-bit internal representation, k ≫ n, whose size will be
permitted to grow exponentially in n. It does not seem surprising,
then, that the capacity of the Kanerva associative memory can grow
exponentially in n, for it is known that the capacity of the Hopfield
array grows almost linearly in k, assuming the coordinates of the
k-vector are drawn at random by independent flips of a fair coin [1].
Figure 2: Matrix representation of the Kanerva associative memory. Signals propagate
from the right (input) to the left (output). Dimensions are shown in the box corners.
Circles stand for functional composition; dots stand for matrix multiplication.
In this situation, however, such an assumption is ridiculous: since the
k-bit internal representation is a function of the n-bit input address,
it can contain at most n bits of information, whereas independent flips
of a fair coin contain k bits of information. Kanerva's primary
contribution is therefore the specification of the preprocessor, that
is, the specification of how to map each n-bit input address into a very
large k-bit internal representation.

The operation of the preprocessor is easily described. Consider
the matrix representation shown in Figure 2. The matrix Z is randomly
populated with ±1s. This randomness assumption is required to ease the
analysis. The function f_r is 1 in the ith coordinate if the ith row of
Z is within Hamming distance r of X, and is 0 otherwise. This is
accomplished by thresholding the ith input against n − 2r. The
parameters r and k are two essential parameters in the Kanerva
associative memory. If r and k are set correctly, then the number of 1s
in the representation f_r(ZX) will be very small in comparison to the
number of 0s. Hence f_r(ZX) can be considered to be a sparse internal
representation of X.
The second stage of the memory operates in the usual way, except on
the internal representation of X. That is, Y = g(W f_r(ZX)), where

    W = Σ_{j=1}^{M} Y^(j) [f_r(Z X^(j))]^T,                              (2)

and g is the threshold function whose ith coordinate is +1 if the ith
input is greater than 0 and −1 if the ith input is less than 0. The ith
column of W can be regarded as a memory location whose address is the
ith row of Z. Every X within Hamming distance r of the ith row of Z
accesses this location. Hence r is known as the access radius, and k is
the number of memory locations.
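To make the two-stage operation concrete, here is a minimal Python sketch of
the selection function f_r, the write rule (2), and the read operation
Y = g(W f_r(ZX)). It is an added illustration: the dimensions, the noise
level, and the use of NumPy are assumptions, not the construction analyzed
in this paper.

    # Minimal sketch of the Kanerva memory: f_r, equation (2), and the read.
    import numpy as np

    rng = np.random.default_rng(0)
    n, m, k, r, M = 64, 32, 2000, 24, 10

    Z = rng.choice([-1, 1], size=(k, n))    # rows of Z are memory addresses
    X = rng.choice([-1, 1], size=(M, n))    # stored (address, datum) pairs
    Y = rng.choice([-1, 1], size=(M, m))

    def f_r(x):
        # 1 where the Hamming distance to a row of Z is <= r; for +/-1
        # vectors this is thresholding the inner product against n - 2r.
        return (Z @ x >= n - 2 * r).astype(int)

    W = sum(np.outer(Y[j], f_r(X[j])) for j in range(M))   # equation (2)

    probe = X[0].copy()
    probe[rng.choice(n, size=3, replace=False)] *= -1      # 3 address errors
    Y_hat = np.sign(W @ f_r(probe))                        # g(.) = sign
    print((Y_hat == Y[0]).mean())                          # bits recovered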
The approach taken in this paper is to fix the linear rate ρ at
which r grows with n, and to fix the exponential rate κ at which k grows
with n. It turns out that the capacity then grows at a fixed
exponential rate C_{ρ,κ}(δ), depending on ρ, κ, and δ. These exponential
rates are sufficient to overcome the standard loose but simple
polynomial bounds on the errors due to combinatorial approximations.
THE CAPACITY OF THE KANERVA ASSOCIATIVE MEMORY
Fix 0 ≤ κ ≤ 1, 0 ≤ ρ ≤ 1/2, and 0 ≤ δ ≤ min{2ρ, 1/2}. Let n be the
input address length, and let m be the output word length. It is
assumed that m is at most polynomial in n, i.e., m = exp{O(log n)}. Let
r = ⌊ρn⌋ be the access radius, let k = 2^{⌊κn⌋} be the number of memory
locations, and let d = ⌊δn⌋ be the radius of attraction. Let M_n be the
number of stored words. The components of the n-vectors X^(1), ..., X^(M_n),
the m-vectors Y^(1), ..., Y^(M_n), and the k × n matrix Z are assumed to be
IID equiprobable ±1 random variables. Finally, given an n-vector X,
let Y = g(W f_r(ZX)) where W = Σ_{j=1}^{M_n} Y^(j) [f_r(Z X^(j))]^T.
Define the quantity

    C_{ρ,κ}(δ) = κ − 2δ + 2(1−δ) h_2((ρ − δ/2)/(1 − δ)) − 2h_2(ρ)   if κ ≤ κ_0(ρ),
    C_{ρ,κ}(δ) = C_{ρ,κ_0(ρ)}(δ)                                    if κ > κ_0(ρ),   (3)

where

    κ_0(ρ) = 2h_2(ρ) + 2δ_0 − 2(1−δ_0) h_2((ρ − δ_0/2)/(1 − δ_0)) + 1 − h_2(δ_0)      (4)

and

    δ_0 = δ_0(ρ) = 1/2 − sqrt(1/4 − 2ρ(1−ρ)).

Theorem: If

    M_n ≤ 2^{n C_{ρ,κ}(δ) + O(log n)},

then for all ε > 0, all sufficiently large n, all j ∈ {1, ..., M_n}, and all
X ∈ D_H(X^(j), d),

    P{Y ≠ Y^(j)} < ε.

Proof: See [2].
Interpretation: If the exponential growth rate of the number of
stored words M_n is asymptotically less than C_{ρ,κ}(δ), then for every
sufficiently large address length n, there is some realization of the
2^{⌊κn⌋} × n preprocessor matrix Z such that the associative memory can
correct fraction δ errors for most sequences of M_n (address, datum)
pairs. Thus C_{ρ,κ}(δ) is a lower bound on the exponential growth rate of
the capacity of the Kanerva associative memory with access radius nρ and
number of memory locations 2^{nκ}.

Figure 3 shows C_{ρ,κ}(δ) as a function of the radius of attraction δ,
for κ = κ_0(ρ) and ρ = 0.1, 0.2, 0.3, 0.4 and 0.45. For any fixed access
radius ρ, C_{ρ,κ_0(ρ)}(δ) decreases as δ increases. This reflects the fact
that fewer (address, datum) pairs can be stored if a greater fraction of
errors must be corrected. As ρ increases, C_{ρ,κ_0(ρ)}(δ) begins at a lower
point but falls off less steeply. In a moment we shall see that ρ can
be adjusted to provide the optimal performance for a given δ.

Not shown in Figure 3 is the behavior of C_{ρ,κ}(δ) as a function of κ.
However, the behavior is simple. For κ > κ_0(ρ), C_{ρ,κ}(δ) remains
unchanged, while for κ ≤ κ_0(ρ), C_{ρ,κ}(δ) is simply shifted down by the
difference κ_0(ρ) − κ. This establishes the conditions under which the
Kanerva associative memory is robust against random component failures.
Although increasing the number of memory locations beyond 2^{nκ_0(ρ)} does
not increase the capacity, it does increase robustness. Random
component failures will not affect the capacity until so many components
have failed that the number of surviving memory locations is less than
2^{nκ_0(ρ)}.

Figure 3: Graphs of C_{ρ,κ_0(ρ)}(δ) as defined by (3). The upper envelope is 1 − h_2(δ).
Perhaps the most important curve exhibited in Figure 3 is the
sphere packing upper bound 1 − h_2(δ), which is achieved for a particular
ρ by δ = δ_0(ρ) = 1/2 − sqrt(1/4 − 2ρ(1−ρ)). Equivalently, the upper bound
is achieved for a particular δ by ρ equal to

    ρ_0(δ) = 1/2 − sqrt(1/4 − (δ/2)(1 − δ)).                             (5)

Thus (4) and (5) specify the optimal values of the parameters κ and ρ,
respectively. These functions are shown in Figure 4. With these
optimal values, (3) simplifies to 1 − h_2(δ), the sphere packing bound.

It can also be seen that for δ = 0 in (3), the exponential growth
rate of the capacity is asymptotically equal to κ, which is the
exponential growth rate of the number of memory locations, k_n. That is,
M_n = 2^{nκ + O(log n)} = k_n · 2^{O(log n)}. Kanerva [3] and Keeler [5] have argued
that the capacity at δ = 0 is proportional to the number of memory
locations, i.e., M_n = k_n · β for some constant β. Thus our results are
consistent with those of Kanerva and Keeler, provided the 'polynomial'
2^{O(log n)} can be proved to be a constant. However, the usual statement of
their result, M = k·β, that the capacity is simply proportional to the
number of memory locations, is false, since in light of the universal
upper bound it is impossible for the capacity to grow without bound,
with no dependence on the dimension n. In our formulation, this
difficulty does not arise because we have explicitly related the number
of memory locations to the input dimension: k_n = 2^{nκ}. In fact, our
formulation provides explicit, coherent relationships between all of the
following variables: the capacity M, the number of memory locations k,
the input and output dimensions n and m, the radius of attraction δ,
and the access radius ρ. We are therefore able to generalize the
results of [3,5] to the case δ > 0, and provide explicit expressions for
the asymptotically optimal values of ρ and κ as well.

Figure 4: Graphs of κ_0(ρ) and δ_0(ρ), the inverse of ρ_0(δ), as defined by (4) and (5).
CONCLUSION
We described a fairly general model of associative memory and
selected a useful definition of its capacity. A universal upper bound
on the growth of the capacity of such an associative memory was shown by
a sphere packing argument to be exponential with rate 1 − h_2(δ), where
h_2(δ) is the binary entropy function and δ is the radius of attraction.
We reviewed the operation of the Kanerva associative memory, and stated
a lower bound on the exponential growth rate of its capacity. This
lower bound meets the universal upper bound for optimal values of the
memory parameters ρ and κ. We provided explicit formulas for these
optimal values. Previous results for δ = 0, stating that the capacity of
the Kanerva associative memory is proportional to the number of memory
locations, cannot be strictly true. Our formulation corrects the problem
and generalizes those results to the case δ > 0.
REFERENCES
1. R. J. McEliece, E. C. Posner, E. R. Rodemich, and S. S. Venkatesh,
"The capacity of the Hopfield associative memory," IEEE
Transactions on Information Theory, submitted.
2. P. A. Chou, "The capacity of the Kanerva associative memory,"
IEEE Transactions on Information Theory, submitted.
3. P. Kanerva, "Self-propagating search: a unified theory of
memory," Tech. Rep. CSLI-84-7, Stanford Center for the Study of
Language and Information, Stanford, CA, March 1984.
4. P. Kanerva, "Parallel structures in human and computer memory,"
in Neural Networks for Computing, (J. S. Denker, ed.), New York:
American Institute of Physics, 1986.
5. J. D. Keeler, "Comparison between sparsely distributed memory and
Hopfield-type neural network models," Tech. Rep. RIACS TR 86.31,
NASA Research Institute for Advanced Computer Science, Mountain
View, CA, Dec. 1986.
1,095 | 20 |
AN ARTIFICIAL NEURAL NETWORK FOR SPATIOTEMPORAL BIPOLAR PATTERNS: APPLICATION TO
PHONEME CLASSIFICATION
Toshiteru Homma
Les E. Atlas
Robert J. Marks II
Interactive Systems Design Laboratory
Department of Electrical Engineering, Ff-l0
University of Washington
Seattle, Washington 98195
ABSTRACT
An artificial neural network is developed to recognize spatio-temporal
bipolar patterns associatively. The function of a formal neuron is generalized by
replacing multiplication with convolution, weights with transfer functions, and
thresholding with nonlinear transform following adaptation. The Hebbian learning rule and the delta learning rule are generalized accordingly, resulting in the
learning of weights and delays. The neural network which was first developed
for spatial patterns was thus generalized for spatio-temporal patterns. It was
tested using a set of bipolar input patterns derived from speech signals, showing
robust classification of 30 model phonemes.
1. INTRODUCTION
Learning spatio-temporal (or dynamic) patterns is of prominent importance in biological
systems and in artificial neural network systems as well. In biological systems, it relates to such
issues as classical and operant conditioning, temporal coordination of sensorimotor systems and
temporal reasoning. In artificial systems, it addresses such real-world tasks as robot control,
speech recognition, dynamic image processing, moving target detection by sonars or radars, EEG
diagnosis, and seismic signal processing.
Most of the processing elements used in neural network models for practical applications
have been the formal neuron l or" its variations. These elements lack a memory flexible to temporal patterns, thus limiting most of the neural network models previously proposed to problems
of spatial (or static) patterns. Some past solutions have been to convert the dynamic problems to
static ones using buffer (or storage) neurons, or using a layered network with/without feedback.
We propose in this paper to use a "dynamic formal neuron" as a processing element for
learning dynamic patterns. The operation of the dynamic neuron is a temporal generalization of
the formal neuron. As shown in the paper, the generalization is straightforward when the activation part of neuron operation is expressed in the frequency domain. Many of the existing learning rules for static patterns can be easily generalized for dynamic patterns accordingly. We show
some examples of applying these neural networks to classifying 30 model phonemes.
© American Institute of Physics 1988
2. FORMAL NEURON AND DYNAMIC FORMAL NEURON

The formal neuron is schematically drawn in Fig. 1(a), where

    Input:            x = [x_1 x_2 ... x_L]^T
    Activation:       y_i, i = 1, 2, ..., N
    Output:           z_i, i = 1, 2, ..., N
    Transmittance:    w_i = [w_i1 w_i2 ... w_iL]^T
    Node operator:    η, where η(·) is a nonlinear memoryless transform
    Neuron operation: z_i = η(w_i^T x)                                   (2.1)

Note that a threshold can be implicitly included as a transmittance from a constant input.
In its original form of the formal neuron, x_i ∈ {0,1} and η(·) is a unit step function u(·). A
variation of it is a bipolar formal neuron where x_i ∈ {−1,1} and η(·) is the sign function sgn(·).
When the inputs and output are converted to frequency of spikes, it may be expressed as
x_i ∈ R and η(·) is a rectifying function r(·). Other node operators such as a sigmoidal function
may be used.
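As a minimal illustration (added here; the matrix values are invented), the
bipolar formal neuron of (2.1) is a one-liner in NumPy:

    # Bipolar formal neuron, equation (2.1): z = sgn(W x). A threshold can be
    # absorbed as a transmittance from a constant input, as noted above.
    import numpy as np

    def formal_neuron(W, x):
        return np.sign(W @ x)   # sgn(.) as the node operator

    W = np.array([[0.5, -1.0, 0.25],
                  [1.0,  0.5, -0.75]])   # N x L transmittance matrix (example)
    x = np.array([1, -1, 1])             # bipolar input
    print(formal_neuron(W, x))           # -> [ 1. -1.]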
We generalize the notion of the formal neuron so that the input and output are functions of
time. In doing so, weights are replaced with transfer functions, multiplication with convolution,
and the node operator with a nonlinear transform following adaptation, as often observed in biological systems.

Fig. 1(b) shows a schematic diagram of a dynamic formal neuron where

    Input:             x(t) = [x_1(t) x_2(t) ... x_L(t)]^T
    Activation:        y_i(t), i = 1, 2, ..., N
    Output:            z_i(t), i = 1, 2, ..., N
    Transfer function: w_i(t) = [w_i1(t) w_i2(t) ... w_iL(t)]^T
    Adaptation:        a_i(t)
    Node operator:     η, where η(·) is a nonlinear memoryless transform
    Neuron operation:  z_i(t) = η(a_i(−t) ⋆ w_i(t)^T ⋆ x(t))             (2.2)

For convenience, we denote ⋆ as correlation instead of convolution. Note that convolving a(t)
with b(t) is equivalent to correlating a(−t) with b(t).
If the Fourier transforms x(f) = F{x(t)}, w_i(f) = F{w_i(t)}, y_i(f) = F{y_i(t)}, and
a_i(f) = F{a_i(t)} exist, then

    y_i(f) = a_i(f)* [w_i(f)*^T x(f)]                                    (2.3)

where w_i(f)*^T is the conjugate transpose of w_i(f).
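The following added sketch exercises (2.2) in discrete time, using the
correlation identity just noted; the signal lengths and the choice η = tanh
are illustrative assumptions:

    # Dynamic formal neuron, equation (2.2), in discrete time.
    import numpy as np

    def xcorr(u, v):
        # cross-correlation sum_tau u(tau) v(t + tau), up to a fixed index offset
        return np.convolve(u[::-1], v)

    L, T = 3, 8
    rng = np.random.default_rng(1)
    x = rng.choice([-1.0, 1.0], size=(L, T))     # input lines x_j(t)
    w = rng.normal(size=(L, T))                  # transfer functions w_ij(t), one neuron i
    a = np.zeros(T); a[0] = 1.0                  # adaptation a_i(t) = delta(t)

    y = sum(xcorr(w[j], x[j]) for j in range(L)) # w_i(t)^T correlated with x(t)
    z = np.tanh(np.convolve(a, y))               # a(-t) correlated with y == a convolved with y
    print(z.shape)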
Fig. 1. Formal Neuron and Dynamic Formal Neuron.
3. LEARNING FOR FORMAL NEURON AND DYNAMIC FORMAL NEURON

A number of learning rules for formal neurons has been proposed in the past. In the following paragraphs, we formulate a learning problem and describe two of the existing learning
rules, namely, Hebbian learning and delta learning, as examples.

Present to the neural network M pairs of input and desired output samples
{x^(k), d^(k)}, k = 1, 2, ..., M, in order. Let W^(k) = [w_1^(k) w_2^(k) ... w_N^(k)]^T where w_i^(k) is the
transmittance vector at the k-th step of learning. Likewise, let

    X^(k) = [x^(1) x^(2) ... x^(k)],   Y^(k) = [y^(1) y^(2) ... y^(k)],
    Z^(k) = [z^(1) z^(2) ... z^(k)],   D^(k) = [d^(1) d^(2) ... d^(k)],

where y^(k) = W^(k) x^(k), z^(k) = η(y^(k)), and η(y) = [η(y_1) η(y_2) ... η(y_N)]^T.

The Hebbian learning rule [2] is described as follows*:

    W^(k) = W^(k−1) + α d^(k) x^(k)T                                     (3.1)

The delta learning (or LMS learning) rule [3,4] is described as follows:

    W^(k) = W^(k−1) − α {W^(k−1) x^(k) − d^(k)} x^(k)T                   (3.2)

*This interpretation assumes a strong supervising signal at the output while learning.
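For concreteness, one step of each rule in NumPy (an added sketch; the toy
target map and step size are assumptions):

    # One step of the Hebbian rule (3.1) and the delta/LMS rule (3.2).
    import numpy as np

    def hebb_step(W, x, d, alpha=0.1):
        return W + alpha * np.outer(d, x)                # (3.1)

    def delta_step(W, x, d, alpha=0.1):
        return W - alpha * np.outer(W @ x - d, x)        # (3.2)

    rng = np.random.default_rng(2)
    W = np.zeros((4, 6))
    for _ in range(200):                                 # repeated presentations
        x = rng.choice([-1.0, 1.0], size=6)
        d = x[:4]                                        # a toy target map
        W = delta_step(W, x, d, alpha=0.05)
    print(np.sign(W @ x) == d)                           # last pair is now fit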
The learning rules described in the previous section are generalized for the dynamic formal
neuron by replacing multiplication with correlation. First, the problem is reformulated and then
the generalized rules are described as follows.

Present to the neural network M pairs of time-varying input and output samples
{x^(k)(t), d^(k)(t)}, k = 1, 2, ..., M, in order. Let W^(k)(t) = [w_1^(k)(t) w_2^(k)(t) ... w_N^(k)(t)]^T,
where w_i^(k)(t) is the vector whose elements w_ij^(k)(t) are transfer functions connecting the input j
to the neuron i at the k-th step of learning. The Hebbian learning rule for the dynamic neuron is
then

    W^(k)(t) = W^(k−1)(t) + α(−t) ⋆ d^(k)(t) ⋆ x^(k)(t)^T                (3.3)

The delta learning rule for the dynamic neuron is then

    W^(k)(t) = W^(k−1)(t) − α(−t) ⋆ {W^(k−1)(t) ⋆ x^(k)(t) − d^(k)(t)} ⋆ x^(k)(t)^T   (3.4)
This generalization procedure can be applied to other learning rules in some linear discriminant systems [5], the self-organizing mapping system by Kohonen [6], the perceptron [7], the backpropagation model [3], etc. When a system includes a nonlinear operation, more careful analysis
is necessary, as pointed out in the Discussion section.
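A sketch of one dynamic Hebbian step (3.3) follows (added illustration). Each
weight is a short filter, the outer product becomes a correlation, and the
result is truncated to the stored filter length, in the spirit of the
windowing/truncation caveat raised in the Discussion:

    # One step of the dynamic Hebbian rule (3.3); alpha_t plays the role of
    # alpha(-t). All sizes are illustrative.
    import numpy as np

    def xcorr(u, v):
        return np.convolve(u[::-1], v)

    def dynamic_hebb_step(W, x, d, alpha_t):
        # W[i][j] holds the 1-D filter w_ij(t); x[j], d[i], alpha_t are 1-D signals
        for i in range(len(d)):
            for j in range(len(x)):
                upd = np.convolve(alpha_t, xcorr(d[i], x[j]))  # alpha(-t)*(d*x^T)
                W[i][j] = W[i][j] + upd[:len(W[i][j])]         # truncate
        return W

    T = 8
    rng = np.random.default_rng(3)
    x = [rng.choice([-1.0, 1.0], size=T) for _ in range(3)]    # L = 3 inputs
    d = [rng.choice([-1.0, 1.0], size=T) for _ in range(2)]    # N = 2 outputs
    alpha_t = np.array([0.1])                                  # alpha(t) = 0.1 delta(t)
    W = [[np.zeros(2 * T - 1) for _ in range(3)] for _ in range(2)]
    W = dynamic_hebb_step(W, x, d, alpha_t)
    print(W[0][0][:5])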
4. DELTA LEARNING, PSEUDO-INVERSE AND REGULARIZATION

This section reviews the relation of the delta learning rule to the pseudo-inverse and the
technique known as regularization. [4, 6, 8, 9, 10]

Consider a minimization problem as described below: find W which minimizes

    R = Σ_k ||y^(k) − d^(k)||²,   where ||y^(k) − d^(k)||² = (y^(k) − d^(k))^T (y^(k) − d^(k)),   (4.1)

subject to y^(k) = W x^(k).

A solution by the delta rule is, using a gradient descent method,

    W^(k) = W^(k−1) − α ∂R^(k)/∂W                                        (4.2)
where R^(k) = ||y^(k) − d^(k)||². The minimum norm solution to the problem, W*, is unique and
can be expressed as

    W* = D X†                                                            (4.3)

where X† is the Moore-Penrose pseudo-inverse of X, i.e.,

    X† = lim_{σ→0} (X^T X + σ² I)^(−1) X^T = lim_{σ→0} X^T (X X^T + σ² I)^(−1).   (4.4)

On the condition that 0 < α < 2/λ_max, where λ_max is the maximum eigenvalue of X^T X, x^(k) and
d^(k) are independent, and W^(k) is uncorrelated with x^(k),

    E{W*} = E{W^(∞)}                                                     (4.5)

where E{x} denotes the expected value of x. One way to make use of this relation is to calculate W* for known standard data and refine it by (4.2), thereby saving time in the early stage of
learning.
However, this solution often results in an ill-conditioned W in practice. When the problem is ill-posed as such, the technique known as regularization can alleviate the ill-conditioning
of W. The problem is reformulated by finding W which minimizes

    R(σ) = Σ_k ||y^(k) − d^(k)||² + σ² Σ_i ||w_i||²                      (4.6)

subject to y^(k) = W x^(k), where W = [w_1 w_2 ... w_N]^T.

This reformulation regularizes (4.3) to

    W(σ) = D X^T (X X^T + σ² I)^(−1)                                     (4.7)

which is statistically equivalent to W^(∞) when the input has an additive noise of variance σ²
uncorrelated with x^(k). Interestingly, the leaky LMS algorithm [11] leads to a statistically
equivalent solution

    W^(k) = β W^(k−1) − α {W^(k−1) x^(k) − d^(k)} x^(k)T                 (4.8)

where 0 < β < 1 and 0 < α < 2/λ_max. These solutions are related as

    E{W(σ)} = E{W^(∞)}                                                   (4.9)

if σ² = (1−β)/α, when W^(k) is uncorrelated with x^(k). [11]
Equation (4.8) can be generalized for a network using dynamic formal neurons, resulting in
an equation similar to (3.4). Making use of (4.9), (4.7) can be generalized for a dynamic neuron
network as

    W(t; σ) = F^(−1){D(f) X(f)*^T (X(f) X(f)*^T + σ² I)^(−1)}            (4.10)

where F^(−1) denotes the inverse Fourier transform.
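As an added numeric check, the regularized solution (4.7) approaches the
pseudo-inverse solution (4.3) as σ → 0, even when X X^T by itself is singular:

    # Check that (4.7) tends to (4.3) as sigma -> 0; rank(X X^T) < L here.
    import numpy as np

    rng = np.random.default_rng(4)
    L, N, M = 6, 4, 3                       # input dim, output dim, patterns
    X = rng.normal(size=(L, M))             # columns are x^(k)
    D = rng.choice([-1.0, 1.0], size=(N, M))

    W_pinv = D @ np.linalg.pinv(X)                                     # (4.3)
    for sigma2 in (1e0, 1e-2, 1e-6):
        W_reg = D @ X.T @ np.linalg.inv(X @ X.T + sigma2 * np.eye(L))  # (4.7)
        print(sigma2, np.linalg.norm(W_reg - W_pinv))                  # shrinks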
5. SYNTHESIS OF BIPOLAR PHONEME PATTERNS

This section illustrates the scheme used to synthesize bipolar phoneme patterns and to
form prototype and test patterns.

The fundamental and first three formant frequencies, along with their bandwidths, of
phonemes provided by Klatt [12] were taken as parameters to synthesize 30 prototype phoneme patterns. The phonemes were labeled as shown in Table 1. An array of L (=100) input neurons
covered the range of 100 to 4000 Hz. Each neuron had a bipolar state which was +1 only when
one of the frequency bands in the phoneme presented to the network was within the critical band
of the neuron, and −1 otherwise. The center frequencies f_c of critical bands were obtained by
dividing the 100 to 4000 Hz range into a log scale by L. The critical bandwidth was a constant
100 Hz up to the center frequency f_c = 500 Hz and 0.2 f_c Hz when f_c > 500 Hz. [13]
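A short added sketch of this synthesis scheme; here "within the critical band"
is interpreted as |f − f_c| ≤ bandwidth/2, and the frequency values are
invented examples rather than Klatt's tables:

    # Log-spaced center frequencies on 100-4000 Hz with the bandwidths above.
    import numpy as np

    L = 100
    fc = np.logspace(np.log10(100.0), np.log10(4000.0), L)  # center frequencies
    bw = np.where(fc <= 500.0, 100.0, 0.2 * fc)             # critical bandwidths

    def bipolar_frame(freqs_hz):
        state = -np.ones(L)
        for f in freqs_hz:
            state[np.abs(fc - f) <= bw / 2.0] = 1.0         # +1 inside a band
        return state

    # a voiced frame: F0 = 130 Hz plus three example formants
    print(bipolar_frame([130.0, 300.0, 2300.0, 3000.0])[:12])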
The parameters shown in Table 1 were used to construct the 30 prototype phoneme patterns.
For 9, it was constructed as a combination of t and 9. F_1, F_2, F_3 were the first, second, and
third formants, and B_1, B_2, and B_3 were the corresponding bandwidths. The fundamental
frequency F_0 = 130 Hz with B_0 = 10 Hz was added when the phoneme was voiced. For plosives,
there was a stop before the formant traces start. The resulting bipolar patterns are shown in
Fig. 2. Each pattern had a length of 5 time units, composed by linearly interpolating the
frequencies when the formant frequency was gliding.

A sequence of phonemes converted from a continuous pronunciation of digits, {o, zero, one,
two, three, four, five, six, seven, eight, nine}, was translated into a bipolar pattern, adding
two time units of transition between two consecutive phonemes by interpolating the frequency
and bandwidth parameters linearly. A flip noise was added to the test pattern to create a noisy
test pattern: the sign at every point in the original clean test pattern was flipped with
probability 0.2. These test patterns are shown in Fig. 3.

Table 1. Labels of Phonemes

    Label  Phoneme    Label  Phoneme    Label  Phoneme
      1    [iY]        11    [uw]        21    [v]
      2    [Ia]        12    [a;]        22    [9]
      3    [eY]        13    [a]         23    [\]
      4    [Ea]        14    [aW]        24    [s]
      5    [3e']       15    [oY]        25    [z]
      6    [el]        16    [w]         26    [p]
      7    [~]         17    [y]         27    [t]
      8    [It]        18    [r]         28    [d]
      9    [ow]        19    [l]         29    [k]
     10    [\I~]       20    [f]         30    [n]
Fig. 2. Prototype Phoneme Patterns. (Thirty phoneme patterns are shown
in sequence with intervals of two time units.)
6. SIMULATION OF SPATIO-TEMPORAL FILTERS FOR PHONEME CLASSIFICATION

The network system described below was simulated and used to classify the prototype
phoneme patterns in the test patterns shown in the previous section. It is an example of generalizing a scheme developed for static patterns [13] to one for dynamic patterns. Its operation is
in two stages. The first stage operation is a spatio-temporal filter bank:
Fig. 3. Test Patterns. (a) Clean Test Pattern. (b) Noisy Test Pattern.
    y(t) = W(t) ⋆ x(t),   and   z(t) = η(a(−t) ⋆ y(t)).                  (6.1)

The second stage operation is the "winner-take-all" lateral inhibition:

    z′(t) = z(t),   and   z′(t + Δ) = η(A(−t) ⋆ z′(t) − h),              (6.2)

where h is a constant threshold vector with elements h_i = h, and

    A(t) = (1 + 4/N) I δ(t) − (4/(5N)) 1 1^T Σ_{n=0}^{4} δ(t − nΔ),      (6.3)

where δ(·) is the Kronecker delta function. This operation is repeated a sufficient
number of times, N_0. [13, 14] The output is z′(t + N_0 Δ).
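The added sketch below collapses (6.2)-(6.3) to a static "maxnet" for clarity:
each unit subtracts a fraction of the others' activity and is rectified,
repeatedly, until one unit survives. The temporal window and the threshold h
of the full model are omitted here, an assumption made for brevity:

    # Winner-take-all lateral inhibition, static maxnet form.
    import numpy as np

    def winner_take_all(z0, eps=0.2, n_iter=20):
        z = np.asarray(z0, dtype=float)
        for _ in range(n_iter):
            z = np.maximum(z - eps * (z.sum() - z), 0.0)   # inhibit, rectify
        return z

    print(winner_take_all([0.2, 0.9, 0.5, 0.85]))
    # -> only the unit that started largest remains positive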
Two models based on different learning rules were simulated, with parameters shown
below.

Model 1 (Spatio-temporal Matched Filter Bank)

Let a(t) = δ(t), d^(k) = e_k in (3.3), where e_k is a unit vector with elements e_ki = δ(k−i). Then

    W(t) = X(t)^T.                                                       (6.4)

h = 200, and a(t) = Σ_{n=0}^{4} (1/5) δ(t − nΔ).

Model 2 (Spatio-temporal Pseudo-inverse Filter)

Let D = I in (4.10). Using the alternative expression in (4.4),

    W(t) = F^(−1){(X(f)*^T X(f) + σ² I)^(−1) X(f)*^T}.                   (6.5)

h = 0.05, σ² = 1000.0, and a(t) = δ(t).

This minimizes

    R(σ, f) = Σ_k ||y^(k)(f) − d^(k)(f)||² + σ² Σ_k ||w_k(f)||²   for all f.   (6.6)
Because the time and frequency were finite and discrete in the simulation, the result of the
inverse discrete Fourier transform in (6.5) may be aliased. To alleviate the aliasing, the transfer
functions in the prototype matrix X(t) were padded with zeros, thereby doubling the lengths.
Further zero-padding the transfer functions did not seem to change the result significantly.
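The added sketch below builds the pseudo-inverse filter (6.5) frequency by
frequency, with the zero-padding just described; all sizes and the random
prototypes are illustrative assumptions:

    # Pseudo-inverse filter (6.5) computed in the frequency domain.
    import numpy as np

    rng = np.random.default_rng(5)
    L, M, T = 8, 3, 16
    X_t = rng.choice([-1.0, 1.0], size=(M, L, T))  # M prototypes, L lines, T steps

    pad = 2 * T                                    # zero-pad to double length
    Xf = np.fft.fft(X_t, n=pad, axis=-1)           # X(f)^T slices, shape (M, L, F)

    sigma2 = 1000.0
    W_f = np.empty((M, L, pad), dtype=complex)
    for f in range(pad):
        A = Xf[:, :, f]                            # M x L matrix: X(f)^T
        G = np.conj(A) @ A.T + sigma2 * np.eye(M)  # X(f)*^T X(f) + sigma^2 I
        W_f[:, :, f] = np.linalg.inv(G) @ np.conj(A)

    W_t = np.fft.ifft(W_f, axis=-1).real           # back to the time domain
    print(W_t.shape)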
The results are shown in Fig. 4(a)-(d). The arrows indicate the ideal response positions at
the end of a phoneme. When the program was run with different thresholds and adaptation functions a(t), the result was not very sensitive to the threshold value, but was nevertheless affected
by the choice of the adaptation function. The maximum number of iterations for the lateral inhibition network to converge was observed: for the experiments shown in Fig. 4(a)-(d), the
numbers were 44, 69, 29, and 47, respectively. Model 1 missed one phoneme and falsely
responded once on the clean test pattern. It missed three and had one false response on the noisy
test pattern. Model 2 correctly recognized all phonemes in the clean test pattern, and false-alarmed once on the noisy test pattern.
7. DISCUSSION

The notion of convolution or correlation used in the models presented is popular in
engineering disciplines and has been applied extensively to designing filters, control systems, etc.
Such operations also occur in biological systems and have been applied to modeling neural networks. [15, 16] Thus the concept of the dynamic formal neuron may be helpful for the improvement of
artificial neural network models as well as for the understanding of biological systems. A portion of
the system described by Tank and Hopfield [17] is similar to the matched filter bank model simulated in this paper.

The matched filter bank model (Model 1) performs well when all phonemes (as above) are
of the same duration. Otherwise, it would perform poorly unless the lengths were forced to a
maximum length by padding the input and transfer functions with −1s during calculation. The
pseudo-inverse filter model, on the other hand, should not suffer from this problem. However,
this aspect of the model (Model 2) has not yet been explicitly simulated.

Given a spatio-temporal pattern of size L × K, i.e., L spatial elements and K temporal elements, the number of calculations required to process the first stage of filtering by both models is
the same as that by a static formal neuron network in which each neuron is connected to the L ×
K input elements. In both cases, L × K multiplications and additions are necessary to calculate
one output value. In the case of bipolar patterns, the multiplication used for calculation of activation can be replaced by sign-bit check and addition. A future investigation is to use recursive
filters or analog filters as transfer functions for faster and more efficient calculation. There are
various schemes to obtain optimal recursive or analog filters. [18, 19] Besides the lateral inhibition
scheme used in the models, there are a number of alternative procedures to realize a "winner-take-all" network in analog or digital fashion. [15, 20, 21]

As pointed out in the previous section, the Fourier transform in (6.5) requires a precaution
concerning the resulting length of transfer functions. Calculating the recursive correlation equation (3.4) also needs such preprocessing as windowing or truncation. [22]

The generalization of static neural networks to dynamic ones, along with their learning
rules, is straightforward, as shown, if the neuron operation and the learning rule are linear. Generalizing a system whose neuron operation and/or learning rule is nonlinear requires more careful analysis and remains for future work. The system described by Watrous and Shastri [16] is an
example of generalizing a backpropagation model. Their result showed a good potential of the
model and a need for more rigorous analysis of the model. Generalizing a system with recurrent
connections is another task to be pursued. In a system with a certain analytical nonlinearity, the
signals are expressed by Volterra functionals, for example. A practical learning system can then
be constructed if higher kernels are neglected. For example, a cubic function can be used instead
of a sigmoidal function.
Fig. 4. Performance of Models. (a) Model 1 with Clean Test Pattern. (b)
Model 2 with Clean Test Pattern. (c) Model 1 with Noisy Test Pattern.
(d) Model 2 with Noisy Test Pattern. Arrows indicate the ideal response
positions at the end of a phoneme.
8. CONCLUSION

The formal neuron was generalized to the dynamic formal neuron to recognize spatio-temporal patterns. It is shown that existing learning rules can be generalized for dynamic formal
neurons.

An artificial neural network using dynamic formal neurons was applied to classifying 30
model phonemes with bipolar patterns created by using parameters of formant frequencies and
their bandwidths. The model operates in two stages: in the first stage, it calculates the correlation between the input and the prototype patterns stored in the transfer function matrix, and, in the
second stage, a lateral inhibition network selects the output of the phoneme pattern close to the
input pattern.
Fig. 4 (continued.)
Two models with different transfer functions were tested. Model 1 was a matched filter
bank model and Model 2 was a pseudo-inverse filter model. A sequence of phoneme patterns
corresponding to continuous pronunciation of digits was used as a test pattern. On the clean test
pattern, Model 1 failed to recognize one phoneme and responded falsely once, while Model 2
correctly recognized all 32 phonemes in the test pattern. With the flip noise, which flips the
sign of the pattern with probability 0.2, Model 1 missed three phonemes and falsely
responded once, while Model 2 recognized all the phonemes and false-alarmed once. Both
models detected the phonemes at the correct positions within the continuous stream.
References
1.
W. S. McCulloch and W. Pitts, "A logical calculus of the ideas imminent in nervous
activity," Bulletin of Mathematical Biophysics, vol. 5, pp. 115-133, 1943.
2.
D. O. Hebb, The Organization of Behavior, Wiley, New York, 1949.
40
3.
D. E. Rumelhart, G. E. Hinton, and R. J. Williams, "Learning internal representations by
error propagation," in Parallel Distributed Processing. Vol. 1, MIT, Cambridge, 1986.
4.
B. Widrow and M. E. Hoff, "Adaptive switching circuits," Institute of Radio Engineers.
Western Electronics Show and Convention, vol. Convention Record Part 4, pp. 96-104,
1960.
5.
R. O. Duda and P. E. Hart, Pattern Classification and Scene Analysis. Chapter 5, Wiley,
New York, 1973.
6.
T. Kohonen, Self-organization and Associative Memory, Springer-Verlag, Berlin, 1984.
7.
F. Rosenblatt, Principles of Neurodynamics, Spartan Books, Washington, 1962.
8.
J. M. Varah, "A practical examination of some numerical methods for linear discrete ill-posed problems," SIAM Review, vol. 21, no. 1, pp. 100-111, 1979.
9.
C. Koch, J. Marroquin, and A. Yuille, "Analog neural networks in early vision," Proceedings of the National Academy of Sciences, USA, vol. 83, pp. 4263-4267, 1986.
10.
G. O. Stone, "An analysis of the delta rule and the learning of statistical associations," in
Parallel Distributed Processing, Vol. 1, MIT, Cambridge, 1986.
11.
B. Widrow and S. D. Stearns, Adaptive Signal Processing, Prentice-Hall, Englewood
Cliffs, 1985.
12.
D. H. Klatt, "Software for a cascade/parallel formant synthesizer," Journal of Acoustical
Society of America, vol. 67, no. 3, pp. 971-995, 1980.
13.
L. E. Atlas, T. Homma, and R. J. Marks II, "A neural network for vowel classification,"
Proceedings International Conference on Acoustics. Speech. and Signal Processing, 1987.
14.
R. P. Lippmann, "An introduction to computing with neural nets," IEEE ASSP Magazine,
April, 1987.
15.
S. Amari and M. A. Arbib, "Competition and cooperation in neural nets," in Systems Neuroscience, ed. J. Metzler, pp. 119-165, Academic Press, New York, 1977.
16.
R. L. Watrous and L. Shastri, "Learning acoustic features from speech data using connectionist networks," Proceedings of The Ninth Annual Conference of The Cognitive Science
Society, pp. 518-530, 1987.
17.
D. Tank and J. J. Hopfield, "Concentrating information in time: analog neural networks
with applications to speech recognition problems," Proceedings of the International Conference on Neural Networks, San Diego, 1987.
18.
J. R. Treichler, C. R. Johnson,Jr., and M. G. Larimore, Theory and Design of Adaptive
Filters. Chapter 5, Wiley, New York, 1987.
19.
M. Schetzen, The Volterra and Wiener Theories of Nonlinear Systems, Chapter 16, Wiley,
New York, 1980.
20.
S. Grossberg, "Associative and competitive principles of learning," in Competition and
Cooperation in Neural Nets, ed. M. A. Arbib, pp. 295-341, Springer-Verlag, New York,
1982.
21.
R. J. Marks II, L. E. Atlas, J. J. Choi, S. Oh, K. F. Cheung, and D. C. Park, "A performance analysis of associative memories with nonlinearities in the correlation domain,"
(submitted to Applied Optics), 1987.
22.
D. E. Dudgeon and R. M. Mersereau, Multidimensional Digital Signal Processing, pp.
230-234, Prentice-Hall, Englewood Cliffs, 1984.
1,096 | 200 |
Associative Memory in a Simple Model of
Oscillating Cortex
Bill Baird
Dept. of Molecular and Cell Biology,
U.C. Berkeley, Berkeley, CA 94720
ABSTRACT
A generic model of oscillating cortex, which assumes "minimal"
coupling justified by known anatomy, is shown to function as an associative memory, using previously developed theory. The network
has explicit excitatory neurons with local inhibitory interneuron
feedback that forms a set of nonlinear oscillators coupled only by
long range excitatofy connections. Using a local Hebb-like learning
rule for primary and higher order synapses at the ends of the long
range connections, the system learns to store the kinds of oscillation amplitude patterns observed in olfactory and visual cortex.
This rule is derived from a more general "projection algorithm"
for recurrent analog networks, that analytically guarantees content
addressable memory storage of continuous periodic sequences: capacity is N/2 Fourier components for an N node network, with no
"spurious" attractors.
1 Introduction
This is a sketch of recent results stemming from work which is discussed completely
in [1, 2, 3]. Patterns of 40 to 80 hz oscillation have been observed in the large
scale activity of olfactory cortex [4] and visual neocortex [5], and shown to predict
the olfactory and visual pattern recognition responses of a trained animal. It thus
appears that cortical computation in general may occur by dynamical interaction of
resonant modes, as has been thought to be the case in the olfactory system. Given
the sensitivity of neurons to the location and arrival times of dendritic input, the
successive volleys of pulses that are generated by the collective oscillation of a neural net may be ideal for the formation and reliable long range transmission of the
collective activity of one cortical area to another. The oscillation can serve a macroscopic clocking function and entrain the relevant microscopic activity of disparate
cortical regions into well defined phase coherent macroscopic collective states which
override uncorrelated microscopic activity. If this view is correct, then oscillatory
network modules form the actual cortical substrate of the diverse sensory, motor,
and cognitive operations now studied in static networks, and it must ultimately be
shown how those functions can be accomplished with these dynamic networks.
In particular, we are interested here in modeling category learning and object recognition, after feature preprocessing. Equivalence classes of ratios of feature outputs
in feature space must be established as prototype "objects" or categories that are
invariant over endless sensory instances. Without categories, the world never repeats. This is the kind of function generally hypothesized for prepyriform cortex
in the olfactory system [6], or inferotemporal cortex in the visual system. It is a
different oscillatory network function from the feature "binding", or clustering role
that is hypothesized for "phase labels" in primary visual cortex [5], or from the
"decision states" hypothesized for the olfactory bulb by Li and Hopfield. In these
preprocessing systems, there is no modification of connections, and no learning of
particular perceptual objects. For category learning, full adaptive cross coupling
is required so that all possible input feature vectors may be potential attractors.
This is the kind of anatomical structure that characterizes prepyriform and inferotemporal cortex. The columns there are less structured, and the associational
fiber system is more prominent than in primary cortex. Man shares this same high
level "association" cortex structure with cats and rats. Phylogenetic ally, it is the
preprocessing structures of primary cortex that have grown and evolved to give us
our expanded capabilities. While the bulk of our pattern recognition power may be
contributed by the clever feature preprocessing that has developed, the object classification system seems the most likely locus of the learning changes that underlie
our daily conceptual evolution. That is the phenomenon of ultimate interest in this
work.
2 Minimal Model of Oscillating Cortex
Analog state variables, recurrence, oscillation, and bifurcation are hypothesized
to be essential features of cortical networks which we explore in this approach.
Explicit modeling of known excitatory and inhibitory neurons, and use of only
known long range connections is also a basic requirement to have a biologically
feasible network architecture. We analyse a "minimal" model that is intended to
assume the least coupling that is justified by known anatomy, and use simulations
and analytic results proved in [1, 2] to argue that an oscillatory associative memory
function can be realized in such a system. The network is meant only as a cartoon
of the real biology, which is designed to reveal the general mathematical principles
and mechanisms by which the actual system might function. Such principles can
then be observed or applied in other contexts as well.
Long range excitatory to excitatory connections are well known as "associational"
connections in olfactory cortex [6], and cortico-cortical connections in neocortex.
Since our units are neural populations, we know that some density of full cross-coupling exists in the system [6], and our weights are the average synaptic strengths
of these connections. There is little problem at the population level with coupling
symmetry in these average connection strengths emerging from the operation of an
outer product learning rule on initially random connections. When the network
units are neuron pools, analog state variables arise naturally as continuous local
pulse densities and cell voltage averages. Smooth sigmoidal population input-output
functions, whose slope increases with arousal of the animal, have been measured in
the olfactory system [4]. Local inhibitory "interneurons" are a ubiquitous feature
of the anatomy of cortex throughout the brain [5]. It is unlikely that they make
long range connections (> 1 mm) by themselves. These connections, and even the
debated interconnections between them, are therefore left out of a minimal model.
The resulting network is actually a fair caricature of the well studied circuitry of
olfactory (prepyriform) cortex. This is thought to be one of the clearest cases of a
real biological network with associative memory function [6]. Although neocortex
is far more complicated, it may roughly be viewed as two olfactory cortices stacked
on top of each other. We expect that analysis of this system will lend insight into
mechanisms of associative memory there as well. In [3] we show that this model
is capable of storing complicated multifrequency spatio-temporal trajectories, and
argue that it may serve as a model of memory for sequences of actions in motor
cortex.
For an N dimensional system, the "minimal" coupling structure is described mathematically by the matrix

    T = [ W    −hI ]
        [ gI    0  ],

where W is the N/2 × N/2 matrix of excitatory interconnections, and gI and hI are
N/2 × N/2 identity matrices multiplied by the positive scalars g and h. These give
the strength of coupling around local inhibitory feedback loops. A state vector is
composed of local average cell voltages for N/2 excitatory neuron populations x and
N/2 inhibitory neuron populations y (hereafter notated as x, y ∈ R^{N/2}). Standard
network equations with this coupling might be, in component form,

    ẋ_i = −τ x_i − h σ(y_i) + Σ_{j=1}^{N/2} W_ij σ(x_j) + b_i            (1)

    ẏ_i = −τ y_i + g σ(x_i),                                             (2)

where σ(x) = tanh(x) or some other sigmoidal function symmetric about 0. Intuitively, since the inhibitory units y_i receive no direct input and give no direct
output, they act as hidden units that create oscillation for the amplitude patterns
stored in the excitatory cross-connections W. This may be viewed as a simple generalization of the analog "Hopfield" network architecture to store periodic instead
of static attractors.
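A minimal added simulation of (1,2); the Euler step, the parameter values,
and the planted eigenvalue in W (chosen so that one linear mode satisfies
α > 2τ and oscillates) are all illustrative assumptions:

    # Simulate (1)-(2): excitatory x with local inhibitory partners y.
    import numpy as np

    rng = np.random.default_rng(6)
    Nh = 4                                   # N/2
    tau, g, h = 1.0, 2.0, 2.0
    u = rng.normal(size=Nh); u /= np.linalg.norm(u)
    W = 2.5 * np.outer(u, u) + 0.05 * rng.normal(size=(Nh, Nh))
    W = (W + W.T) / 2.0                      # symmetric average coupling
    b = np.zeros(Nh)

    x = 0.01 * rng.normal(size=Nh)
    y = np.zeros(Nh)
    dt = 0.01
    trace = []
    for step in range(5000):
        dx = -tau * x - h * np.tanh(y) + W @ np.tanh(x) + b
        dy = -tau * y + g * np.tanh(x)
        x, y = x + dt * dx, y + dt * dy
        trace.append(x[0])
    print(np.round(trace[4000::150], 3))     # x_0 settles into a limit cycle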
Associative Memory in a Simple Model of Oscillating Cortex
If we expand this network to third order in a Taylor series about the origin, we get
a network that looks something like,
$$\dot{x}_i = -\tau x_i - h y_i + \sum_{j=1}^{N/2} W_{ij}\, x_j - \sum_{jkl=1}^{N/2} W_{ijkl}\, x_j x_k x_l + b_i, \qquad (3)$$
$$\dot{y}_i = -\tau y_i + g x_i, \qquad (4)$$
where \sigma'(0) = 1, and \tfrac{1}{6}\sigma'''(0)\;(< 0) is absorbed into W_{ijkl}. A sigmoid symmetric
about zero has odd symmetry, and the even order terms of the expansion vanish,
leaving the cubic terms as the only nonlinearity. The actual expansion of the excitatory sigmoids in (1,2) (in this coordinate system) will only give cubic terms of
the form \sum_{j=1}^{N/2} W_{ij}\, x_j^3. The competitive (negative) cubic terms of (3) therefore constitute a more general and directly programmable nonlinearity that is independent
of the linear terms. They serve to create multiple periodic attractors by causing
the oscillatory modes of the linear term to compete, much as the sigmoidal nonlinearity does for static modes in a Hopfield network. Intuitively, these terms may
be thought of as sculpting the maxima of a "saturation" landscape into which the
stored linear modes with positive eigenvalues expand, and positioning them to lie
in the directions specified by the eigenvectors of these modes to make them stable.
A precise definition of this landscape is given by a strict Liapunov function in a
special polar coordinate system [1, 3]. Since we have had no success storing multiple
oscillatory attractors in the sigmoid net (1,2) by any learning rule, we are driven
to take this very effective higher order net seriously as a biological model. From a
physiological point of view, (3,4) may be considered a model of a biological network
which is operating in the linear region of the known axonal sigmoid nonlinearities[4],
and contains instead sigma-pi units or higher order synaptic nonlinearities.
2.1
Biological justification of the higher order synapses
Using the long range excitatory connections available, the higher order synaptic
weights W_{ijkl} can conceivably be realized locally in the axo-dendritic interconnection plexus known as "neuropil". This is a feltwork of tiny fibers so dense that its
exact circuitry is impossible to investigate with present experimental techniques.
Single axons are known to bifurcate into multiple branches that contribute separate
synapses to the dendrites of target cells. It is also well known that neighboring
synapses on a dendrite can interact in a nonlinear fashion that has been modeled
as higher order synaptic terms by some researchers. It has been suggested that the
neuropil may be dense enough to allow the crossing of every possible combination of
j, k, l axons in the vicinity of some dendritic branch of at least one neuron in neuron
pool i (B. Mel). Trophic factors stimulated by the coactivation of the axons and the
dendrite could cause these axons to form a "cluster" of nearby synapses on the
dendrite to realize a jkl product synapse. The required higher order terms could
thus be created by a Hebb-like process. The use of competitive cubic cross terms
may therefore be viewed physiologically as the use of this complicated nonlinear
synaptic/dendritic processing, as the decision making nonlinearity in the system, as
71
72
Baird
opposed to the usual sigmoidal axonal nonlinearity. There are more weights in the
cubic synaptic terms, and the network nonlinearity can be programmed in detail.
3 Analysis
The real eigenvectors of W give the magnitudes of the complex eigenvectors of T.

Theorem 3.1 If \alpha is a real eigenvalue of the N/2 x N/2 matrix W, with corresponding
eigenvector x, then the N x N matrix
$$T = \begin{bmatrix} W & -hI \\ gI & 0 \end{bmatrix}$$
has a pair of complex conjugate eigenvalues \lambda_{1,2} = \tfrac{1}{2}(\alpha \pm \sqrt{\alpha^2 - 4hg}) = \tfrac{1}{2}(\alpha \pm i\omega),
for \alpha^2 < 4hg, where \omega = \sqrt{4hg - \alpha^2}. The corresponding complex conjugate pair of
eigenvectors are
$$\begin{bmatrix} x \\ \frac{\alpha+\omega}{2h}\,x \end{bmatrix} \pm i \begin{bmatrix} x \\ \frac{\alpha-\omega}{2h}\,x \end{bmatrix}.$$
The proof of this theorem is given in [2]. To more clearly see the amplitude and phase
patterns, we can convert to a magnitude and phase representation, z = |z|\,e^{i\theta}, where
|z_j| = \sqrt{(\Re z_j)^2 + (\Im z_j)^2} and \theta_j = \arctan(\Im z_j / \Re z_j). We get |z_{x_i}| = \sqrt{x_i^2 + x_i^2} = \sqrt{2}\,|x_i|, and
$$|z_{y_i}| = \sqrt{\frac{2(\alpha^2 + \omega^2)}{4h^2}}\,|x_i| = \sqrt{\frac{2g}{h}}\,|x_i|.$$
Now \theta_x = \arctan 1 = \pi/4, and \theta_y = \arctan\frac{\alpha-\omega}{\alpha+\omega}. Dividing out the common \sqrt{2} factor in
the magnitudes, we get eigenvectors that clearly display the amplitude patterns of
interest.
Because of the restricted coupling, the oscillations possible in this network are
standing waves, since the phase \theta_x, \theta_y is constant for each kind of neuron x and y,
and differs only between them. This is basically what is observed in the olfactory
bulb (primary olfactory cortex) and prepyriform cortex. The phase of inhibitory
components \theta_y in the bulb lags the phase of the excitatory components \theta_x by approximately 90 degrees. It is easy to choose \alpha and \omega in this model to get phase lags
of nearly 90 degrees.
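Theorem 3.1 can be checked numerically. The sketch below is not from the paper; the sizes and gains are assumed. It builds T from a symmetric W and compares the spectrum of T against the predicted values (\alpha \pm i\omega)/2.

```python
import numpy as np

rng = np.random.default_rng(1)
n, g, h = 4, 2.0, 2.0
W = rng.normal(0, 0.3, (n, n)); W = (W + W.T) / 2          # real spectrum
T = np.block([[W, -h * np.eye(n)], [g * np.eye(n), np.zeros((n, n))]])

predicted = []
for a in np.linalg.eigvalsh(W):
    w = np.sqrt(4 * h * g - a ** 2)                        # needs a^2 < 4hg
    predicted += [(a + 1j * w) / 2, (a - 1j * w) / 2]
print(np.sort_complex(np.linalg.eigvals(T)))
print(np.sort_complex(np.array(predicted)))                # the two lists agree
```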
3.1 Learning by the projection algorithm
From the theory detailed in [1], we can program any linearly independent set of
eigenvalues and eigenvectors into W by the "projection" operation W = BDB-l,
where B has the desired eigenvectors as columns, and D is a diagonal matrix of
the desired eigenvalues. Because the complex eigenvectors of T follow from these
Associative Memory in a Simple Model of Oscillating Cortex
learned for W, we can form a projection matrix P with those eigenvectors of T as
columns. Forming also a matrix J of the complex eigenvalues of T in blocks along the
diagonal, we can project directly to get T. If general cubic terms T_{ijkl}\, x_j x_k x_l, also
given by a specific projection operation, are added to network equations with linear
terms T_{ij}\, x_j, the complex modes (eigenvectors) of the linearization are analytically
guaranteed by the projection theorem [1] to characterize the periodic attractors of
the network vector field. Chosen "normal form" coefficients A_{mn} [1] are projected to
get the higher order synaptic weights T_{ijkl} for these general cubic terms. Together,
these operations constitute the "normal form projection algorithm":
$$T = P J P^{-1}, \qquad T_{ijkl} = \sum_{m,n=1}^{N} P_{im}\, A_{mn}\, P^{-1}_{mj}\, P^{-1}_{nk}\, P^{-1}_{nl}.$$
Either member of the pair of complex eigenvectors shown above will suffice as the
eigenvector that is entered in the P matrix for the projection operation. For real
and imaginary component columns in P,
$$P = \begin{bmatrix} |x^s|\cos\theta_x^s & |x^s|\sin\theta_x^s & \cdots \\ \sqrt{\tfrac{g}{h}}\,|x^s|\cos\theta_y^s & \sqrt{\tfrac{g}{h}}\,|x^s|\sin\theta_y^s & \cdots \end{bmatrix} \;\Rightarrow\; x^s(t) = |x^s|\,e^{i\theta_x^s + i\omega^s t}, \quad y^s(t) = \sqrt{\tfrac{g}{h}}\,|x^s|\,e^{i\theta_y^s + i\omega^s t},$$
where x^s(t) is an expression for the periodic attractor established for pattern s
when this P matrix is used in the projection algorithm.
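In code, the projection operations take one line each. The sketch below is not from the paper: it uses real stand-ins for the complex matrices P and J, and an identity matrix for the normal-form coefficients A_mn, purely to exercise the algebra; all values are assumptions.

```python
import numpy as np

N = 4
B = np.linalg.qr(np.random.randn(N, N))[0]   # desired eigenvectors as columns
D = np.diag([1.0, 0.8, -0.5, -0.7])          # desired eigenvalues
W = B @ D @ np.linalg.inv(B)                 # projection learning: W = B D B^-1

P, J, A = B, D, np.eye(N)                    # real stand-ins for P, J, A_mn
Pinv = np.linalg.inv(P)
T = P @ J @ Pinv                             # T = P J P^-1
# T_ijkl = sum_{m,n} P_im A_mn Pinv_mj Pinv_nk Pinv_nl
T4 = np.einsum('im,mn,mj,nk,nl->ijkl', P, A, Pinv, Pinv, Pinv)
```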
The general cubic terms T_{ijkl}\, x_j x_k x_l, however, require use of unlikely long range
inhibitory connections. Simulations of two and four oscillator networks thus far
(N=4 and N=8), reveal that use of the higher order terms for only the anatomically
justified long range excitatory connections W_{ijkl}, as in the cubic net (3,4), is effective
in storing randomly chosen sets of desired patterns. The behavior of this network
is very close to the theoretical ideal guaranteed above for a network with general
higher order terms. There is no alteration of stored oscillatory patterns when the
reduced coupling is used.
We have at least general analytic justification for this. "Normal form" theory [1, 3]
guarantees that many other choices of weights will do the same job as those found
by the projection operation, but does not in general say how to find them. Latest
work shows that a perturbation theory calculation of the normal form coefficients
for general high dimensional cubic nets is tractable and in principle permits the
removal of all but N^2 of the N^4 higher order weights normally produced by the
projection algorithm. We have already incorporated this in an improved learning
rule (non-Hebbian thus far) which requires even fewer of the excitatory higher order
weights ((N/2)^2 instead of the (N/2)^4 used in (3)), and are exploring the size of the
"neighborhood" of state space about the origin in which the rule is effective. This
should lead as well to a rigorous proof of the performance of these networks.
3.2 Learning by local Hebb rules
We show further in [2, 1] that for orthonormal static patterns x?, the projection
operation for the W matrix reduces to an outer product, or "Hebb" rule, and the
projection for the higher order weights becomes a multiple outer product rule:
$$W_{ij} = \sum_{s=1}^{N/2} \alpha^s x_i^s x_j^s, \qquad W_{ijkl} = c\,\delta_{ij}\,\delta_{kl} \;-\; d \sum_{s=1}^{N/2} x_i^s x_j^s x_k^s x_l^s. \qquad (5)$$
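For orthonormal patterns the two rules in (5) are direct to implement. The following sketch is not from the paper; the sizes and the constants c > d are assumed. It also verifies that each stored pattern is an eigenvector of W.

```python
import numpy as np

n, S = 8, 3                                      # n = N/2 units, S patterns
X = np.linalg.qr(np.random.randn(n, S))[0].T     # rows x^s, orthonormal
alpha = np.ones(S)                               # eigenvalues alpha^s
c, d = 2.0, 1.0                                  # requires c > d

W = np.einsum('s,si,sj->ij', alpha, X, X)        # W_ij = sum_s a^s x_i^s x_j^s
I = np.eye(n)
W4 = c * np.einsum('ij,kl->ijkl', I, I) \
     - d * np.einsum('si,sj,sk,sl->ijkl', X, X, X, X)

print(np.allclose(W @ X[0], alpha[0] * X[0]))    # True: x^0 is an eigenvector
```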
The first rule is guaranteed to establish desired patterns x^s as eigenvectors of the
matrix W with corresponding eigenvalues \alpha^s. The second rule, with c > d, gives
higher order weights for the cubic terms in (3) that ensure the patterns defined by
these eigenvectors will appear as at tractors in the network vectorfield. The outer
product is a local synapse rule for synapse ij, that allows additive and incremental
learning. The system can be truly self-organizing because the net can modify itself
based on its own activity. The rank of the coupling matrix Wand T grows as
more memories are learned by the Hebb rule, and the unused capacity appears as
a degenerate subspace with all zero eigenvalues. The flow is thus directed toward
regions of the state space where patterns are stored.
In the minimal net, real eigenvectors learned for Ware converted by the network
structure to standing wave oscillations (constant phase) with the absolute value
of those eigenvectors as amplitudes. From the mathematical perspective, there are
(N/2)! eigenvectors with different permutations of the signs of the same components,
which lead to the same positive amplitude vector. This means that nonorthogonal
amplitude patterns may be stored by the Hebb rule on the excitatory connections,
since there may be many ways to find a perfectly orthonormal set of eigenvectors for
W that stores a given set of nonorthogonal amplitude vectors. Given the complexity
of dendritic processing discussed previously, it is not impossible that there is some
distribution of the signs of the final effect of synapses from excitatory neurons that
would allow a biological system to make use of this mathematical degree of freedom.
For different input objects, feature preprocessing in primary and secondary sensory
cortex may be expected to orthogonalize outputs to the object recognition systems
modeled here. When the rules above are used for nonorthogonal patterns, the
eigenvectors of Wand T are no longer given directly by the Hebb rule, and we
expect that the kind of performance found in Hopfield networks for nonorthogonal
memories will obtain, with reduced capacity and automatic clustering of similar
exemplars. Investigation of this unsupervised induction of categories from training
examples will be the subject of future work [3].
3.3 Architectural Variations - Olfactory Bulb Model
Another biologically interesting architecture which can store these kinds of patterns
is one with associational excitatory to inhibitory cross-coupling. This may be a
more plausible model of the olfactory bulb (primary olfactory cortex) than the one
above. Experimental work of Freeman suggests an associative memory function for
this cortex as well[4). The evidence for long range excitatory to excitatory coupling
in the olfactory bulb is much weaker than that for the prepyriform cortex. Long
range excitatory tracts connecting even the two halves of the bulb are known, but
anatomical data thus far show these axons entering only the inhibitory granule cell
layers.
$$T = \begin{bmatrix} gI & -hI \\ W & 0 \end{bmatrix}, \qquad \lambda_{1,2} = \tfrac{1}{2}\big(g \pm \sqrt{g^2 - 4\alpha g}\big) = \tfrac{1}{2}(g \pm i\omega),$$
for g^2 < 4\alpha g, where \omega = \sqrt{4\alpha g - g^2}. The eigenvectors are,
$$\begin{bmatrix} x \\ \frac{g+\omega}{2h}\,x \end{bmatrix} \pm i \begin{bmatrix} x \\ \frac{g-\omega}{2h}\,x \end{bmatrix} \;\Rightarrow\; P = \begin{bmatrix} |x^s|\cos\theta_x^s & |x^s|\sin\theta_x^s & \cdots \\ \frac{\sqrt{\alpha g}}{h}\,|x^s|\cos\theta_y^s & \frac{\sqrt{\alpha g}}{h}\,|x^s|\sin\theta_y^s & \cdots \end{bmatrix}$$
in polar form, where \theta_x = \pi/4 and \theta_y = \arctan\frac{g-\omega}{g+\omega}.
If we add inhibitory population self-feedback -f to either model, this additional
term appears subtracted from \alpha or g in the real part of the complex eigenvalues,
and added to them in all other expressions [2]. Further extensions of this line of
analysis will consider lateral inhibitory fan-out of the inhibitory-excitatory feedback
connections. The -hI block of the coupling matrix T becomes a banded matrix.
Similarly, the gI and -fI may be banded, or both full excitatory to excitatory
Wand full excitatory to inhibitory V coupling blocks may be considered. We
conjecture that the phase restrictions of the minimal model will be relaxed with
these further degrees of freedom available, so that traveling waves may exist.
3.3.1 Acknowledgements
Supported by AFOSR-87-0317. It is a pleasure to acknowledge the support of
Walter Freeman and invaluable assistance of Morris Hirsch.
References
[1] B. Baird. A bifurcation theory approach to vector field programming for periodic
attractors. In Proc. Int. Joint Conf. on Neural Networks, Wash. D. C., page
1381, June 18 1989.
[2] B. Baird. Bifurcation and learning in network models of oscillating cortex. In
S. Forrest, editor, Proc. Conf. on Emergent Computation, Los Alamos, May 1989,
1990. To appear in Physica D.
[3] B. Baird. Bifurcation Theory Approach to the Analysis and Synthesis of Neural
Networks for Engineering and Biological Modelling. Research Notes in Neural
Computing. Springer, 1990.
[4] W.J. Freeman. Mass Action in the Nervous System. Academic Press, New York,
1975.
[5] C. M. Grey and W. Singer. Stimulus dependent neuronal oscillations in the cat
visual cortex area 17. Neuroscience (Suppl), 22:1301P, 1987.
[6] Lewis B. Haberly and James M. Bower. Olfactory cortex: model circuit for
study of associative memory? Trends in Neuroscience, 12(7):258, 1989.
Reinforcement Learning and Time Perception - a Model of Animal Experiments
J. L. Shapiro
Department of Computer Science
University of Manchester
Manchester, M13 9PL U.K.
jls@cs.man.ac.uk
John Wearden
Department of Psychology
University of Manchester
Manchester, M13 9PL U.K.
Abstract
Animal data on delayed-reward conditioning experiments shows a
striking property - the data for different time intervals collapses
into a single curve when the data is scaled by the time interval.
This is called the scalar property of interval timing. Here a simple
model of a neural clock is presented and shown to give rise to the
scalar property. The model is an accumulator consisting of noisy,
linear spiking neurons. It is analytically tractable and contains
only three parameters. When coupled with reinforcement learning
it simulates peak procedure experiments, producing both the scalar
property and the pattern of single trial covariances.
1 Introduction
An aspect of the delayed-reward reinforcement learning problem which has a long history of study in animal experiments, but has been overlooked by theorists, is the
learning of the expected time to the reward. In a number of animal experiments,
animals need to wait a given time interval after a stimulus before performing an
action in order to receive the reward. In order to be able to do this , the animal
requires an internal clock or mechanism for perceiving time intervals, as well as a
learning system which can tackle more familiar aspects of delayed reward reinforcement learning problem. In this paper it is shown that a simple connectionist model
of an accumulator used to measure time duration, coupled to a standard TD('\)
reinforcement learning rule reproduces the most prominent features of the animal
experiments.
The reason it might be desirable for a learner to learn the expected time to receive
a reward is that it allows it to perform the action for an appropriate length of
time. An example described by Grossberg and Merrill [4] and modeled in animal
experiments by Gibbon and Church [3] is foraging. An animal which had no sense of
the typical time to find food might leave too often, thereby spending an inordinate
amount of time flying between patches. Alternatively it could remain in a depleted
patch and starve. The ability to learn times to rewards is an important aspect of
intelligent behavior more generally.
1.1 Peak Procedure Experiments
A typical type of experiment which investigates how animals learn the time between
stimulus and reward is the peak procedure. In this, the animal is trained to respond
after a given time interval t_r has elapsed. Some stimulus (e.g. a light) is presented
which stays on during the trial. The animal is able to respond at any time. The
animal receives a reward for the first response after the length of time t_r. The trial
ends when the animal receives the reward.
On some trials, however, no reward is given even when the animal responds appropriately. This is to see when the animal would stop responding. What happens
in non-reward trials is that the animal typically will start responding at a certain
time, will respond for a period, and then stop responding. Responses averaged over
many trials, however, give a smooth curve. The highest response is at the time
interval t r , and there is variation around this. The inaccuracy in the response (as
measured by the standard deviation in the average response curves for non-reward
trials) is also proportional to the time interval. In other words, the ratio of the
standard deviation to the mean response time (the coefficient of variation) is a
constant independent of the time interval.
A more striking property of the timing curves is scalar property, of which the above
are two consequences. When the average response rate for non-reward trials is
multiplied by the time interval and plotted against the relative time (time divided
by the time interval) the data from different time intervals collapse onto one curve.
This strong form of the scalar property can be expressed mathematically as follows.
Let T be the actual time since the start of the trial and \tau be subjective time.
Subjective time is the time duration which the animal perceives to have occurred
(or at least appears to perceive, judging from its behavior). The experiments show
that \tau varies for a given T. This variation can be expressed as a conditional
probability, the probability of acting as though the time is \tau given that the actual
time is T, which is written P(\tau|T). The fact that the data collapses implies this
probability depends on \tau and T in a special way,
$$P(\tau|T) = \frac{1}{T}\, P_{\mathrm{inv}}\!\left(\frac{\tau}{T}\right). \qquad (1)$$
Here P_{inv} is the function which describes the shape of the scaled curves. Thus, time
acts as a scale factor. This is a strong and striking result. This has been seen in
many species, including rats, pigeons, turtles; humans will show similar results if the
time intervals are short or if they are prevented from counting through distracting
tasks. For reviews of interval timing phenomena, see [5] and [3] .
A key question which remains unanswered is: what is the origin of the scalar property. Since the scalar property is ubiquitous, it may be revealing something fundamental about the nature of an internal clock or time perception system. This
is especially true if there are only a few known mechanisms which generate this
phenomenon. It is well known that any model based on the accumulation of independent errors, such as a clock with a variable pulse-rate, does not produce the
scalar property. In such a model it would be the ratio of the variance to the mean
response time which would be independent of the time interval (a consequence of
the law of large numbers). In section 2, a simple stochastic process will be presented
which gives rise to scalar timing. In section 3 simulations of the model on the peak
procedure are presented. The model reproduces experimental results on the mean
responses and the covariation between responses on non-reward trials.
2 The model

2.1 An accumulator network of spiking neurons
Here it is shown that a simple connectionist model of an accumulator can give rise
to the strong scalar property. The network consists of noisy, linear, spiking neurons
which are connected in a random, spatially homogeneous way. The network encodes
time as the total activity in the network which grows during the measured time
interval. Psychological aspects of the model will be presented elsewhere [8]
The network consists of N identical neurons. The connectivity between neurons
is random and defined by a connection matrix Cij which is random and sparse.
The connection strength is the same between all connected neurons. An important
parameter is the fan-out of the ith neuron Ci ; its average across the network is
denoted C. Time is in discrete units of size \tau, the time required for a spike produced
by a neuron to invoke a spike in a connected neuron. There is no refractory period.
The neurons are linear - the expected number of spikes produced by a neuron is
"( times the number of pre-synaptic spikes. Let ai(t) denote the number of spikes
produced by neuron i at time t. This obeys
hi(t)
ai(t + T) =
L
Va
+ Ii(t),
(2)
a=l
where hi(t) is the number of spikes feeding into neuron i, hi(t) = E j CjiXj(t). Ii(t)
is the external input at i , and V is a random variable which determines whether a
pre-synaptic spike invokes one in a connected neuron. The mean of v is "( and the
variance is denoted a~. So the spikes behave independently; saturation effects are
ignored. The total activity of the network is
$$n(t) = \sum_{i=1}^{N} a_i(t). \qquad (3)$$
At each time-step, the number of spikes will grow due to the fan-out of the neurons.
At the same time, the number of spikes will shrink due to the fact that a spike
invokes another spike with a probability less than 1. An essential assumption of
this work is that these two processes balance each other, C\gamma = 1.
Finally, in order for this network to act as an accumulator, it receives statistically
stationary input during the time interval which is being measured, so I(t) is only
present during the measured interval and statistically stationary then.
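A direct simulation of this accumulator is short. The sketch below is not from the paper; the fan-out pattern, Bernoulli spike transmission with \gamma = 1/C, and Poisson external drive are assumptions made to satisfy the balance condition and the stationary-input requirement above.

```python
import numpy as np

rng = np.random.default_rng(0)
N, C, m_I = 200, 4, 10.0
gamma = 1.0 / C                          # balance condition: C * gamma = 1
fanout = [rng.choice(N, size=C, replace=False) for _ in range(N)]

def activity(T):
    a = np.zeros(N, dtype=int)           # spikes per neuron this step
    for _ in range(T):
        new = np.zeros(N, dtype=int)
        for i in np.flatnonzero(a):      # each of the a[i] spikes excites each
            for j in fanout[i]:          # of the C targets with prob. gamma
                new[j] += rng.binomial(a[i], gamma)
        for j in rng.choice(N, size=rng.poisson(m_I)):
            new[j] += 1                  # stationary external input
        a = new
    return a.sum()                       # network activity n(T)

for T in (50, 100, 200):
    s = np.array([activity(T) for _ in range(200)])
    print(T, s.mean() / T, s.std() / s.mean())
# the mean of n(T)/T and the coefficient of variation come out roughly
# independent of T, which is the scalar property derived below.
```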
2.2 Derivation of the strong scalar property
Here it is shown that the network activity obeys equation (1). Let y be the scaled
network activity,
$$y(t) = n(t)/t. \qquad (4)$$
The goal here is the derive the probability distribution for y as a function of time,
P(y|t). In order to do this, we use the cumulant generating function (or characteristic function). For any probability distribution, p(x), the generating function for
cumulants is,
$$G(\theta) = \log \sum_{x \in \Omega} p(x)\, e^{\theta x} \qquad (5)$$
$$\phantom{G(\theta)} = \sum_{n=1}^{\infty} \kappa_n \frac{\theta^n}{n!}, \qquad (6)$$
where \Omega is the domain of p(x), \kappa_i is the ith cumulant of p(x), and \theta is just a dummy
variable. Taking the nth derivative of G(\theta) with respect to \theta and setting \theta to 0 gives
\kappa_n. Cumulants are like moments; see [1] for some definitions and properties.
We will derive a recursion relation for the cumulant generating function for y(t),
denoted G_y(\theta; t). Let G_\nu(\theta) denote the generating function for the distribution of \nu
and G_I(\theta) denote the generating function for the distribution of inputs I(t). These
latter two are assumed to be stationary, hence there is no time-dependence. From
equation 2 it follows that,
$$G_y(\theta; t+\tau) = G_I\!\left(\frac{\theta}{t+\tau}\right) + \frac{1}{N} \sum_i G_y\!\left[t\, C_i\, G_\nu\!\left(\frac{\theta}{t+\tau}\right); t\right]. \qquad (7)$$
In deriving the above, it was assumed that the activity at each node is statistically
the same, and that the fan-out at i is uncorrelated with the activity at i (this
requires a sufficiently sparse connectivity, i.e. no tight loops).
Differentiating the last equation n times with respect to \theta and setting \theta to zero
produces a set of recursion relations for the cumulants of y, denoted \kappa_n. It is necessary
to take terms only up to first order in 1/t to find the fixed point distribution. The
recursion relations to this order are
$$\kappa_1(t+\tau) = \left(1 - \frac{\tau}{t}\right)\kappa_1(t) + \frac{m_I}{t}, \qquad (8)$$
$$\kappa_n(t+\tau) = \left(1 - \frac{n\tau}{t}\right)\kappa_n(t) + \frac{1}{t}\,\frac{n(n-1)}{2}\, C \sigma^2_\nu\, \kappa_{n-1}(t) + O\!\left(\frac{1}{t^2}\right); \quad n > 1. \qquad (9)$$
The above depends upon the mean total input activity m_I \equiv G'_I(0), the average
fan-out C, and the variance in the noise \nu, \sigma^2_\nu \equiv G''_\nu(0). In general it would depend
upon the fan-out times the mean of the noise \nu, but that is 1 by assumption. Higher
order statistics in C and \nu only contribute to terms which are higher order in 1/t.
The above equations converge to a fixed point, which shows that n(t)/t has a time-independent distribution for large t. The fixed point is found to be
$$G_y(\theta, \infty) = \sum_{n=0}^{\infty} \kappa_n(\infty) \frac{\theta^n}{n!} = -\frac{2 m_I}{C \sigma^2_\nu} \log\!\left(1 - \frac{C \sigma^2_\nu\, \theta}{2\tau}\right). \qquad (10)$$
Equation 10 is the generating function for a gamma distribution,
$$P_\Gamma(x\,|\,a, b) = \frac{\exp(-x/b)\; x^{a-1}}{b^a\, \Gamma(a)}, \qquad (11)$$
with
$$a = \frac{2 m_I}{C \sigma^2_\nu}; \qquad b = \frac{C \sigma^2_\nu}{2\tau}. \qquad (12)$$
Corrections to the fixed point are O(1/t).
What this shows is that for large t, the distribution of neural activity, n, is scalar,
$$P(n|t) = \frac{1}{t}\, P_\Gamma\!\left(\frac{n}{t}\,\Big|\, a, b\right), \qquad (13)$$
with a and b defined above.
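Given (12) and (13), the predicted density is immediate to evaluate. A sketch using scipy follows; the parameter values are the illustrative assumptions from the simulation sketch above, with the step size \tau set to 1.

```python
from scipy.stats import gamma as gamma_dist

C, sigma2_nu, m_I, tau = 4.0, 0.1875, 10.0, 1.0
a = 2.0 * m_I / (C * sigma2_nu)          # eq (12)
b = C * sigma2_nu / (2.0 * tau)

def p_n_given_t(n, t):
    # eq (13): P(n|t) = (1/t) * Gamma(n/t | a, b)
    return gamma_dist.pdf(n / t, a, scale=b) / t
```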
2.3 Reinforcement learning of time intervals
The above model represents a way for a simple connectionist system to measure a
time interval. In order to model behavior, the system must learn to associate the
external stimulus and the clock with the response and the reward. To do this, some
additional components are needed.
The ith stimulus is represented by a signal Si. The output of the accumulator triggers a set of clock nodes which convert the quantity or activity encoding of time
used by the accumulator into a "spatial code" in which particular nodes represent
different network activities. This was done because it is difficult to use the accumulator activity directly, as this takes a wide range of values. Each clock node
responds to a particular accumulator activity. The output of the ith clock node
at time t is denoted Xi(t) ; it is one if the activity is i, zero otherwise. It would
be more reasonable to use a coarse coding, but this fine-grained encoding is particularly simple. The components of the learning model are shown schematically in
figure 1.
Figure 1: The learning model. The accumulator feeds into a bank of clock nodes x_i,
which are tuned to accumulator activities. The response V_j is triggered by the simultaneous presence of both the stimulus s_i and the appropriate clock node. Solid lines
denote weights which are fixed; dashed lines show weights which learn according to
the TD(\lambda) learning rule.
The stimulus and the clock nodes feed into response nodes. The output of the jth
response node, Vj(t) is given by
(14)
Here () is a threshold, Aij is the association between the stimulus and the response,
and Wij is the association between a clock node and the response. Both the stimulus
and the appropriate clock node must be present in order for there to be a reasonable
probability of a response. The response probability is Vj (t) , unless that is negative,
in which case there is no response, or is greater than 1, in which case there is
definitely a response.
Both A_{ij} and the w's learn via a TD(\lambda) learning rule. TD(\lambda) is an important learning
rule for modeling associative conditioning; it has been used to model aspects of
classical conditioning including Pavlovian conditioning and blocking. For example,
a model which is very effective at modeling Pavlovian eye-blink experiments and
other classical conditioning results has been proposed by Moore et. al. [6] building
on the model of Sutton, Barto, and Desmond (see description in [7]). This model
represents time using a tapped delay line; at each time-step, a different node in the
delay line is activated. Time acts as one of the conditioned stimuli. The conditioned
stimulus, through temporal difference (TD) reinforcement learning, is associated with the
response through the unconditioned stimulus. These authors did not attempt to
model the scalar property, and in their model time is represented accurately by
the system. The model presented here is similar to these models. The clock nodes
play the role of the tapped delay-line nodes in that model. However, here they
are stimulated by the accumulator rather than each other, and they will follow a
stochastic trajectory due to the fluctuating nature of the accumulator
The learning rule for w_{ij} couples to an "eligibility trace" \bar{x}_i(t) for the clock nodes x_i(t),
which takes time to build up and decays after the node is turned off. They obey
the following equations,
(15)
The standard TD(\lambda) learning parameters, \gamma and \lambda, are used; see [9]. The learning
equations are
$$\Delta w_{ij} = \alpha\, \delta(t+\tau)\, \bar{x}_i(t), \qquad (16)$$
$$\Delta A_{ij} = \alpha\, \delta(t+\tau)\, s_i, \qquad (17)$$
$$\delta(t) = R(t) + \gamma\, V_j(t) - V_j(t-\tau). \qquad (18)$$
Here \alpha is a learning rate, \delta is the temporal difference component, and R(t) is the reinforcement. The outputs V_j at both times use the current value of the weights.
The threshold is set to a constant value (-1 in the simulations). It would make no
difference if a eligibility trace were used for the stimulus Si, because that was held
on during the learning.
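The updates (16)-(18) translate directly into code. In the sketch below, which is not from the paper, the response V_j is given an assumed linear form (since eq (14) was not reproduced above), and the eligibility trace follows a standard accumulating TD(\lambda) trace as an assumption for the unreproduced eq (15).

```python
import numpy as np

n_clock, alpha, gam, lam, theta = 50, 0.5, 0.75, 1.0, -1.0
w = np.zeros(n_clock)          # clock node -> response associations
A = np.zeros(1)                # stimulus -> response association
xbar = np.zeros(n_clock)       # eligibility traces of the clock nodes

def V(x, s):                   # assumed linear response form with threshold
    return float(A[0] * s + w @ x + theta)

def td_step(x_prev, x, s, R):
    xbar[:] = gam * lam * xbar + x_prev          # assumed trace, cf. eq (15)
    delta = R + gam * V(x, s) - V(x_prev, s)     # eq (18)
    w[:] += alpha * delta * xbar                 # eq (16)
    A[:] += alpha * delta * s                    # eq (17)
```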
3 Simulations
The model has been used to simulate peak procedure. In the simulations, the model
is forced to respond for the first set of trials (50 trials in the simulations); otherwise
the model would never respond. This could represent shaping in real experiments.
After that the model learns using reward trials for an additional number of trials
(150 trials in these simulations). The system is then run for 1000 trials, every
10th trial is a non-reward trial; the system continues to learn during these trials.
Figure 2 shows average over non-reward trials for different time intervals. The scalar
property clearly holds.
Gibbon and Church [3] have argued that the covariation between trials is a useful
diagnostic to distinguish models of scalar timing. The methodology which they
proposed is to fit the results of single non-reward trials from peak procedure experiments to a break-run-break pattern of response The animal is assumed to respond
at a low rate until a start time is reached. The animal then responds at a high rate
until a stop time is reached, whence it returns to the low response rate. The covariation between the start and stop times between trials is measured and compared to
those predicted by theory.
The question Gibbon and Church asked was, how does the start and stop time covary across trials. For example, if the animal starts responding early, does it stop
[Figure 2 appears here: two panels plotting response rate (0.2-0.5) against time (left panel) and against time/t_r (right panel).]
Figure 2: Left) Average response of the spatially encoded network for non-reward
trials. The accumulator parameters are: m_I = 10, C\sigma^2_\nu = 1 (Poisson limit); learning
parameters are \gamma = 0.75, \lambda = 1, and the learning rate \alpha is 0.5. Right) Relative time plotted
against response rate times time interval for reinforcement times of 40\tau, 80\tau, 160\tau,
240\tau, and 320\tau. All experiments are averages over 100 non-reward trials, which
were every 10th trial in 1000 learning trials.
responding early, as though it has a shifted estimate of the time interval? Or does
it stop responding late, as though it has a more liberal view about what constitutes
the particular interval. The covariance between start and stop parameters addresses
this question.
Comparable experiments can be carried out on the model proposed here. The
procedure used is described in [2]. Figure 3 shows a comparison with data from
reference [2] with simulations. The pattern of covariation found in the simulations
is qualitatively similar to that of the animal data. The interesting quantity is the
correlation between the start time and the spread (difference between stop and start
times). This is negative in both.
[Figure 3 appears here: two bar charts of covariances on a -0.5 to 0.5 scale.]
Figure 3: Left) Covariances across individual trials in experiments on rats. Data is
taken from Table 2 of reference [2], averaged over the four conditions. The covariances are shown in the following order: 1. start-stop, 2. start-spread, 3. spread-middle, 4. start-middle, 5. stop-spread, 6. stop-middle. The black, gray, and white
bars are for times of reinforcement t_r of 15, 30, and 60 seconds respectively. Right)
Covariances across individual trials simulated by the model. The reinforcement
times are 40\tau, 80\tau, and 160\tau. The covariances are given in the same order as in the left
figure.
4 Conclusion
Previous models of interval timing fail to explain its most striking feature - the
collapse of the data when scaled by the time interval. We have presented a simple model of an accumulator clock based on spiking, noisy, linear neurons which
produces this effect. It is a simple model, analytically tractable, based on a driven
branching process. The parameters are: \tau, the time for a spike on one neuron
to excite spikes on connected neurons; m_I, the average number of spikes excited
externally in each short time interval \tau; and the variance of the spike transmission
process, which in this model is \sigma^2_\nu. A weakness of this model is that it requires
fine-tuning of a pair of parameters, so that the expected number of spikes grows
with external excitation only.
Once a scalar clock is produced, simple reinforcement learning can be used to associate the clock signal with appropriate responses . A set of intermediate clock nodes
was used to encode time. TD(\lambda) reinforcement learning between the intermediate
nodes at reinforcement and an eligibility trace simulates peak procedure and the
individual trial covariances.
References
[1] M. Abramowitz and 1. A. Stegun, editors. Handbook of Mathematical Functions. New
York: Dover Publications, 1967.
[2] Russell M. Church, Walter H. Meck, and John Gibbon. Application of scalar timing
theory to individual trials. Journal of Experimental Psychology - Animal Behavior
Processes, 20(2):135- 155, 1994.
[3] John Gibbon and Russell M. Church. Representation of time. Cognition, 37:23- 54,
1990.
[4] Stephen Grossberg and John W. L. Merrill. A neural network model of adaptively
timed reinforcement learning and hippocampal dynamics. Cognitive Brain Research,
1:3- 38, 1992.
[5] S. C. Hinton and W . H. Meck. How time flies: Functional and neural mechansims
of interval timing. In C. M. Bradshaw and E. Szadabi, editors, Tim e and Behaviour:
Psychological and Neurobehavioural Analyses. Amsterdam: Elsevier Science, 1997.
[6] J. W. Moore, J. E. Desmond, and N. E. Berthier. Adaptively timed conditioned
responses and the cerebellum: A neural network approach. Biological Cybernetics,
62:17- 28, 1989.
[7] John W. Moore, Neil D. Berthier, and Diana E. J. Blazis. Classical eye-blink conditioning: Brain systems and implementation of a computational model. In Michael Gabriel
and John Moore, editors, Learning and Computational Neuroscience: Foundations of
Adaptive Networks, A Bradford Book, pages 359- 387. The MIT Press, 1990.
[8] J. L. Shapiro and John Wearden. Modelling scalar timing by an accumulator network
of spiking neurons. In preparation, 200l.
[9] Richard S. Sutton and Andrew G. Barto. Reinforcment Learning: An Introduction. A
Bradford Book. The MIT Press, 1998.
The Unified Propagation and Scaling Algorithm
Max Welling
Gatsby Computational Neuroscience Unit
University College London
17 Queen Square
London WC1N 3AR U.K.
welling@gatsby.ucl.ac.uk
Yee Whye Teh
Department of Computer Science
University of Toronto
10 King?s College Road
Toronto M5S 3G4 Canada
ywteh@cs.toronto.edu
Abstract
In this paper we will show that a restricted class of constrained minimum divergence problems, named generalized inference problems, can
be solved by approximating the KL divergence with a Bethe free energy.
The algorithm we derive is closely related to both loopy belief propagation and iterative scaling. This unified propagation and scaling algorithm
reduces to a convergent alternative to loopy belief propagation when no
constraints are present. Experiments show the viability of our algorithm.
1 Introduction
For many interesting models, exact inference is intractable. Trees are a notable exception
where Belief Propagation (BP) can be employed to compute the posterior distribution [1].
BP on loopy graphs can still be understood as a form of approximate inference since its
fixed points are stationary points of the Bethe free energy [2]. A seemingly unrelated problem is that of finding the distribution with minimum KL divergence to a prior distribution
subject to some constraints. This problem can be solved through the iterative scaling (IS)
procedure [3]. Although a lot of work has been done on approximate inference, there seems
to be no counterpart in the literature on approximate minimum divergence problems. This
paper shows that the Bethe free energy can be used as an approximation to the KL divergence and derives a novel approximate minimum divergence algorithm which we call
unified propagation and scaling (UPS).
In section 2 we introduce generalized inference and the iterative scaling (IS) algorithm.
In section 3, we approximate the KL divergence with the Bethe free energy and derive
fixed point equations to perform approximate generalized inference. We also show in what
sense our fixed point equations are related to loopy BP and IS. Section 4 describes unified
propagation and scaling (UPS), a novel algorithm to minimize the Bethe free energy, while
section 5 shows experiments on the efficiency and accuracy of UPS.
2 Generalized Inference
In this section we will introduce generalized inference and review some of the literature
on iterative scaling (IS). Let
where
is the variable associated with node
. Consider an undirected graphical model with single and pairwise potentials
,
. Let be the distribution represented by , i.e.
(1)
where
,
, ranges over the edges
of , ranges over the nodes
of neighbours of . Let
oflet and
beisathefixednumber
be a subset of nodes. For
distribution over . Given these
?observed distributions? on , define the generalized posterior as the distribution
which minimizes the KL divergence
(2)
for each . We call these constraints obsubject to the constraints that
!"
#
$&%
*,+
#(')
.-0/1
#
324-0/1
#
5
#
!"
servational (Obs) constraints. Generalized inference is the process by which we determine
the generalized posterior1. Let 6 be the set of unobserved nodes, i.e. all nodes not in .
Theorem 1 If \hat{p}_i(x_i) = \delta_{x_i, \hat{x}_i} for each i \in O, then the generalized posterior is
$$P(X) = Q(X_H \,|\, X_O = \hat{x}_O) \prod_{i \in O} \delta_{x_i, \hat{x}_i},$$
where X_A = \{x_i : i \in A\} for a subset of nodes A (and similarly if A is a subgraph of G).
The above theorem shows that if the constrained marginals are delta functions, i.e. the
observations are hard, then the generalized posterior reduces to a trivial extension of the
ordinary posterior, hence explaining our use of the term generalized inference.
Since generalized inference is a constrained minimum divergence problem, a standard way
of solving it is using Lagrange multipliers. For each i \in O and x_i, let \lambda_i(x_i) be the
Lagrange multiplier enforcing P(x_i) = \hat{p}_i(x_i). Then the generalized posterior is
$$P(X) \propto Q(X) \exp\!\Big(\sum_{i \in O} \lambda_i(x_i)\Big), \qquad (3)$$
where we choose \lambda to satisfy the Obs constraints. Iterative scaling (IS) can now be used
to solve for \lambda [3]. At each iteration of IS, the Lagrange multiplier is updated
using the IS scaling update
$$\lambda_i(x_i) \leftarrow \lambda_i(x_i) + \log \frac{\hat{p}_i(x_i)}{P(x_i)} \quad \text{for each } x_i. \qquad (4)$$
Intuitively, (4) updates the current posterior so that the marginal for node i matches
the given constraint \hat{p}_i(x_i). IS is a specific case of the generalized iterative scaling (GIS)
algorithm [4], which updates the Lagrange multipliers for a subset of nodes using
\lambda_i(x_i) \leftarrow \lambda_i(x_i) + \frac{1}{|O|} \log \frac{\hat{p}_i(x_i)}{P(x_i)}. Parallel GIS steps can be understood as performing IS
updates in parallel, but damping the steps such that the algorithm is still guaranteed to
converge.
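As a concrete illustration of the scaling update (4), the following sketch (not from the paper; the potentials and the constrained marginals are assumed) runs IS on a three node binary chain, computing the exact marginals by brute-force enumeration:

```python
import itertools
import numpy as np

n, edges = 3, [(0, 1), (1, 2)]
psi = {e: np.exp(np.random.randn(2, 2)) for e in edges}
p_hat = {0: np.array([0.9, 0.1]), 2: np.array([0.2, 0.8])}   # Obs constraints
lam = {i: np.zeros(2) for i in p_hat}                        # multipliers

def marginals():
    p = np.zeros((2,) * n)
    for x in itertools.product(range(2), repeat=n):
        w = np.prod([psi[e][x[e[0]], x[e[1]]] for e in edges])
        p[x] = w * np.exp(sum(lam[i][x[i]] for i in lam))
    p /= p.sum()
    return [p.sum(axis=tuple(j for j in range(n) if j != i)) for i in range(n)]

for _ in range(100):
    m = marginals()
    for i in lam:
        lam[i] += np.log(p_hat[i] / m[i])    # scaling update, eq (4)
print([m.round(3) for m in marginals()])     # marginals at 0 and 2 match p_hat
```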
Ordinary inference is needed to compute the current marginals P(x_i) required by (4). If G
is singly connected, then belief propagation (BP) can be used to compute the required
marginals. Otherwise, exact inference or sampling algorithms like Markov chain Monte
Carlo can be used, but usually are computationally taxing. Alternative approximate inference algorithms like variational methods and loopy BP can be used instead to estimate the
required marginals. Although being much more efficient, they can also produce
^1 To avoid confusion, we will explicitly use "ordinary inference" for normal inference, but when
there is no confusion "inference" by itself will mean generalized inference. Ditto for posteriors.
required marginals. Although being much more efficient, they can also produce biased estimates, potentially leading to the overall IS not converging2. Even if IS did converge, we
do not have much theoretical understanding of the accuracy of the overall algorithm.
A more principled approach is to first approximate the KL divergence, then derive algorithms to minimize the approximation. In the next section, we describe a Bethe free energy
approximation to the KL divergence. Fixed point equations for minimizing the Bethe approximation can then be derived. The fixed point equations reduce to BP propagation
updates at hidden nodes, and to IS scaling updates at observed nodes. As a consequence,
using loopy BP to approximate the required marginals turns out to be a particular scheduling of the fixed point equations. Because the Bethe free energy is fairly well understood,
and is quite accurate in many regimes [5, 2, 6], we conclude that IS with loopy BP is a
viable approximate generalized inference technique. However, in section 4 we describe
more efficient algorithms for approximate generalized inference based upon the Bethe free
energy.
3 Approximate Generalized Inference
Let b_{ij}(x_i, x_j) and b_i(x_i) be estimates of the pair-wise and single site marginals of the
generalized posterior. b_{ij} and b_i are called beliefs. The beliefs need to satisfy
the following marginalization and normalization (MN) constraints:
$$\sum_{x_j} b_{ij}(x_i, x_j) = b_i(x_i), \qquad \sum_{x_i} b_i(x_i) = 1. \qquad (5)$$
The Bethe free energy is defined as
$$F_{Bethe} = \sum_{(ij)} \sum_{x_i, x_j} b_{ij}(x_i, x_j) \log \frac{b_{ij}(x_i, x_j)}{\psi_{ij}(x_i, x_j)\,\psi_i(x_i)\,\psi_j(x_j)} \;-\; \sum_i (n_i - 1) \sum_{x_i} b_i(x_i) \log \frac{b_i(x_i)}{\psi_i(x_i)}. \qquad (6)$$
F_{Bethe} is an approximation to the KL divergence which only accounts for pair-wise correlations between neighbouring variables and is exact if G is singly connected.
We wish to minimize F_{Bethe} subject to the MN and Obs constraints. We use Lagrange
multipliers \lambda_{ij}(x_j) to impose the marginalization constraints. We can also use Lagrange
multipliers to impose the normalization and observational constraints as well, but this
reduces to simply keeping b_{ij} and b_i normalized, and keeping b_i(x_i) = \hat{p}_i(x_i) fixed for
i \in O. We shall ignore these for clarity. The resulting Lagrangian is
$$L = F_{Bethe} + \sum_{(ij)} \sum_{x_j} \lambda_{ij}(x_j) \Big( b_j(x_j) - \sum_{x_i} b_{ij}(x_i, x_j) \Big) + (i \leftrightarrow j), \qquad (7)$$
where N(i) denotes the set of neighbours of node i. Setting derivatives of L with respect
to b_{ij} and b_i to 0, we get
Theorem 2 Subject to the MN and Obs constraints, every stationary point of F_{Bethe} is
given by
$$b_{ij}(x_i, x_j) \propto \psi_{ij}(x_i, x_j)\,\psi_i(x_i)\,\psi_j(x_j)\, e^{\lambda_{ij}(x_j) + \lambda_{ji}(x_i)}, \qquad b_i(x_i) \propto \psi_i(x_i)\, e^{\frac{1}{n_i - 1}\sum_{k \in N(i)} \lambda_{ki}(x_i)}, \qquad (8)$$
,
For a quick example, consider a two node Boltzmann machine, with weight and biases
and the desired means on both nodes are
. Then using either naive mean field or naive TAP
equations to estimate the marginals required by IS will not converge.
where the Lagrange multipliers are fixed points of the following updates:
$$\lambda_{ji}(x_i) \leftarrow \sum_{k \in N(i) \setminus j} \log \sum_{x_k} \psi_{ik}(x_i, x_k)\,\psi_k(x_k)\, e^{\lambda_{ik}(x_k)} \qquad \text{for } i \in H, \qquad (9)$$
$$\lambda_{ji}(x_i) \leftarrow \log \hat{p}_i(x_i) - \log\Big(\psi_i(x_i) \sum_{x_j} \psi_{ij}(x_i, x_j)\,\psi_j(x_j)\, e^{\lambda_{ij}(x_j)}\Big) \qquad \text{for } i \in O. \qquad (10)$$
Equation (9) is equivalent to the BP propagation updates by identifying the messages as^3
$$m_{ij}(x_j) = \sum_{x_i} \psi_{ij}(x_i, x_j)\,\psi_i(x_i)\, e^{\lambda_{ji}(x_i)}, \qquad \text{so that} \qquad e^{\lambda_{ij}(x_j)} = \prod_{k \in N(j) \setminus i} m_{kj}(x_j).$$
Rewriting (10) in terms of messages as well we find,
$$m_{ij}(x_j) \leftarrow \sum_{x_i} \psi_{ij}(x_i, x_j)\, \frac{\hat{p}_i(x_i)}{m_{ji}(x_i)} \qquad \text{for } i \in O. \qquad (11)$$
We can extend the analogy and understand (11) as a message "bouncing" step, in which
messages going into an observed node get bounced back and are altered in the process.
If \hat{p}_i(x_i) is a delta function \delta_{x_i, \hat{x}_i}, then (11) reduces to m_{ij}(x_j) \propto \psi_{ij}(\hat{x}_i, x_j), so
that instead of bouncing back, messages going into node i get absorbed. An alternative
description of (10) is given by the following theorem.

Theorem 3 Let P(X) \propto Q(X) \exp\!\big(\sum_{i \in O} \tilde{\lambda}_i(x_i)\big). Updating each \lambda_{ji}, j \in N(i),
for an observed node i \in O using (10) is equivalent to updating \tilde{\lambda}_i using (4), where we
identify
$$\tilde{\lambda}_i(x_i) = \sum_{j \in N(i)} \lambda_{ji}(x_i). \qquad (12)$$
Theorem 3 states the unexpected result that scaling updates (4) are just fixed point equations
to minimize F_{Bethe}. Further, the required marginals P(x_i) are computed using (9), which
is exactly loopy BP. Hence using loopy BP to approximate the marginals required by IS is
just a particular scheduling of the fixed point equations (9,10).
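To make the two kinds of fixed point update concrete, here is a small sketch in the message notation above; it is not from the paper, and tabular numpy potentials are assumed. It shows BP propagation for hidden nodes and the bounced update (11) for observed nodes.

```python
import numpy as np

def propagate(psi_ij, psi_i, msgs_into_i_except_j):
    # hidden i: m_ij(x_j) = sum_{x_i} psi_ij psi_i prod_{k in N(i)\j} m_ki(x_i)
    # (for a leaf node i, pass [np.ones_like(psi_i)])
    v = psi_i * np.prod(np.stack(msgs_into_i_except_j), axis=0)
    m = psi_ij.T @ v
    return m / m.sum()

def bounce(psi_ij, p_hat_i, m_ji):
    # observed i, eq (11): m_ij(x_j) = sum_{x_i} psi_ij p_hat_i(x_i) / m_ji(x_i)
    m = psi_ij.T @ (p_hat_i / m_ji)
    return m / m.sum()
```

Scheduling these two updates in different orders gives loopy IS, IS+BP, and the UPS algorithms described next.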
4 Algorithms to Minimize the Bethe Free Energy
Inspired by [2], we can run the fixed point equations (9,10) and hope that they converge
to a minimum of F_{Bethe}. We call this algorithm loopy IS. Theorem 2 states that if loopy
IS converges it will converge to stationary points of F_{Bethe}. In simulations we find that it
not necessarily converge, especially when the variables are strongly correlated. There are
two reasons why it can fail to converge. Firstly, the loopy BP component (9) may fail to
converge. However this is not serious as past results indicate that loopy BP often fails only
when the Bethe approximation is not accurate [6]. Secondly, the IS component (10) may
fail to converge, since it is not run sequentially and the estimated marginals are inaccurate.
We will show in section 5 that this is a serious problem for loopy IS.
One way to mitigate the second problem is to use the scaling updates (4), and approximate
the required marginals using an inner phase of loopy BP (call this algorithm IS+BP). Theorem 3 shows that IS+BP is just a particular scheduling of loopy IS, hence it inherits the
accuracy of loopy IS while converging more often. However because we have to run loopy
BP until convergence for each scaling update, IS+BP is not particularly efficient. Another
way to promote convergence is to damp the loopy IS updates. This works well in practice.
In this section, we describe yet another possibility ? an efficient algorithm based on the
3
^3 This was first shown in [2], with a different but equivalent identification of \lambda and m.
same fixed point equations (9,10) which is guaranteed to converge without damping. In
subsection 4.1 we describe UPS-T, an algorithm which applies when G is a tree and the
Obs constraints are on the leaves of G. In subsection 4.2 we describe UPS for the general
case, which will make use of UPS-T as a subroutine.
4.1 Constraining the leaves of trees

Suppose that the graph is a tree, and all observed nodes are leaves of it. Since the graph is a tree, the Bethe free energy is exact, i.e. if the MN constraints are satisfied then it equals the KL divergence. As a consequence, it is convex in the subspace defined by the MN constraints. Therefore if the fixed point equations (9,10) converge, they will converge to the unique global minimum. Further, since (9) is exactly a propagation update, and (10) is exactly a scaling update, the following scheduling of (9,10) will always converge: alternately run (9) until convergence and perform a single (10) update. The schedule essentially implements the IS+BP procedure, except that loopy BP is exact for a tree. Our algorithm essentially implements the scheduling, except that unnecessary propagation updates are not performed.
Algorithm UPS-T Unified Propagation and Scaling on Trees
1. Run propagation updates (9) until convergence.
2. Let i_1, i_2, ... be a sequence of observed nodes such that every node occurs infinitely often.
3. For t = 1, 2, ... until the convergence criterion is met:
4.    Perform the scaling update (10) for i_t, where j denotes the unique neighbour of i_t.
5.    For each edge on the path from i_t to i_{t+1}, apply the propagation update (9).
6. Run propagation updates (9) until convergence.
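As an illustration of this schedule, the sketch below runs it on a chain, the simplest tree: exact sum-product propagation alternates with single iterative-scaling updates at the constrained end nodes. The update psi_i <- psi_i * q_i / b_i is the generic iterative-scaling form, used here as a stand-in for equation (10), whose exact expression is not legible in this copy; targets are assumed strictly positive.

```python
import numpy as np

def chain_marginals(psi_node, psi_edge):
    """Exact sum-product on a chain: forward/backward messages, then beliefs."""
    n = len(psi_node)
    fwd = [np.ones(2) for _ in range(n)]   # message arriving at node i from the left
    bwd = [np.ones(2) for _ in range(n)]   # message arriving at node i from the right
    for i in range(1, n):
        m = psi_edge[i - 1].T @ (psi_node[i - 1] * fwd[i - 1])
        fwd[i] = m / m.sum()
    for i in range(n - 2, -1, -1):
        m = psi_edge[i] @ (psi_node[i + 1] * bwd[i + 1])
        bwd[i] = m / m.sum()
    beliefs = [psi_node[i] * fwd[i] * bwd[i] for i in range(n)]
    return [b / b.sum() for b in beliefs]

def ups_t_chain(psi_node, psi_edge, targets, sweeps=50):
    """Alternate exact propagation with scaling updates at constrained nodes.

    targets: dict {node index: desired marginal}, assumed strictly positive."""
    psi_node = [p.astype(float) for p in psi_node]
    for _ in range(sweeps):
        for i, q in targets.items():
            beliefs = chain_marginals(psi_node, psi_edge)
            psi_node[i] = psi_node[i] * (q / beliefs[i])   # iterative-scaling step
    return chain_marginals(psi_node, psi_edge)

# Example: a 3-node chain with attractive couplings, constraining both end nodes.
psi_n = [np.ones(2) for _ in range(3)]
psi_e = [np.array([[2.0, 1.0], [1.0, 2.0]]) for _ in range(2)]
print(ups_t_chain(psi_n, psi_e, {0: np.array([0.8, 0.2]), 2: np.array([0.3, 0.7])}))
```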
4.2 Graphs with cycles

For graphs with cycles, the Bethe free energy is neither exact nor convex. However, we can make use of the fact that it is exact on trees to find a local minimum (or saddle point). The idea is that we clamp a number of hidden nodes to their current marginals such that the rest of the hidden nodes become singly connected, and apply UPS-T. Once UPS-T has converged, we clamp a different set of hidden nodes and apply UPS-T again. The algorithm can be understood as coordinate descent where we minimize the Bethe free energy with respect to the unclamped nodes at each iteration.

Let C be a set of clamped nodes such that every loop in the graph contains a node from C. Define a new graph obtained from the original as follows: for each node in C, replicate it as many times as it has neighbours, and connect each replica to one neighbour of the original node and no other nodes. This is shown in figures 1(c) and 1(d) for the graph in 1(a). Clearly the new graph will be singly
connected. Let T denote the set of trees in the new graph, and define a free energy for each tree as in equations (13) and (14), where d_i is the number of neighbours of node i in the original graph and the replicas are identified with the original nodes in C. By regrouping terms in the Bethe free energy we can show the following.

Theorem 4 Let q be a distribution over the nodes in C. Then, in the subspace defined by fixing the marginals of the nodes in C to q and subject to the MN and Obs constraints, the Bethe free energy decomposes into the sum of the tree free energies:

(15)
To minimize the Bethe free energy, all we now have to do is to minimize each tree free energy individually. We can already solve this using UPS-T: by clamping the marginals of the nodes in C, we have reduced the problem to one solved by UPS-T, where the observed nodes are taken to include those in C. The overall algorithm is
Algorithm UPS Unified Propagation and Scaling
1. Initialize the beliefs.
2. For t = 1, 2, ... until the convergence criterion is met:
3.    Find a set of nodes C_t such that every loop is broken by C_t.
4.    Using UPS-T, set the beliefs to minimize the Bethe free energy, holding the marginals of the nodes in C_t fixed and keeping the MN and Obs constraints satisfied.

It is clear that the Bethe free energy does not increase from one iteration to the next. Now, by using the fact that both scaling and propagation updates are fixed point equations for finding stationary points of the Bethe free energy, we have:
Theorem 5 If for every iteration t and every node there is a later iteration t' at which that node is not in C_{t'}, then the beliefs will converge to a local minimum (or saddle point) of the Bethe free energy with the MN and Obs constraints satisfied.
5 Experiments
In this section we report on two experiments on the feasibility of UPS. In the first experiment we compared the speed of convergence against other methods which minimize the Bethe free energy. In the second experiment we compared the accuracy of UPS against loopy IS. In both experiments we used square-grid Boltzmann machines with binary states and the structure shown in figure 1a. The weights are sampled randomly from a Gaussian with mean 0 and standard deviation σ_w, and the biases are sampled from a Gaussian with standard deviation σ_b and mean offset by half the sum of the incoming weights, shifted so that, if σ_w is small, the mean values of the states are approximately balanced. The desired marginals are determined by parameters θ sampled from a Gaussian with mean 0 and standard deviation σ_θ.
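A sketch of this sampling scheme in NumPy; the grid size, the three standard deviations, and the exact form of the bias offset are stand-in assumptions, since the original values are not legible in this copy.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5                                       # grid side length: assumed, not from the source
sigma_w, sigma_b, sigma_t = 1.0, 1.0, 0.5   # assumed values; the originals are illegible

# Symmetric nearest-neighbour weights on an n x n grid (structure as in figure 1a).
W = np.zeros((n * n, n * n))
for r in range(n):
    for c in range(n):
        i = r * n + c
        for dr, dc in ((0, 1), (1, 0)):
            if r + dr < n and c + dc < n:
                j = (r + dr) * n + (c + dc)
                W[i, j] = W[j, i] = rng.normal(0.0, sigma_w)

# Bias offset chosen to keep mean activities balanced when sigma_w is small (assumed form).
b = rng.normal(0.0, sigma_b, size=n * n) - 0.5 * W.sum(axis=1)

theta = rng.normal(0.0, sigma_t, size=n * n)  # parameters behind the desired marginals
```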
Experiment 1 Speed of Convergence
We compared the speed of convergence for the following algorithms: loopy IS, IS+BP, GIS+BP (parallel GIS with marginals estimated by loopy BP), UPS-H (clamping rows of nodes every iteration as in figure 1(b)), and UPS-HV (alternatingly clamping rows and columns as in figures 1(b) and 1(c)). We tested the algorithms on 100 networks with fixed settings of σ_w, σ_b, and σ_θ. We find that the result is not sensitive to the settings of σ_w, σ_b, and σ_θ so long as the algorithms are able to converge without damping. The result is shown in figure 1e. IS+BP and GIS+BP are slow because the loopy BP phase is expensive. UPS-H and UPS-HV both do better than IS+BP and GIS+BP because the inner loops are cheaper and the Lagrange multipliers are updated more frequently. Further, we see that UPS-HV is faster than UPS-H since information is propagated faster throughout the network. Loopy IS is the fastest. However, the next experiment shows that it also converges less frequently, so there is a trade-off between the speed of loopy IS and the stability of UPS.
[Figure 1(e): box plots of the number of updates (log scale, roughly 10^3 to 10^5) needed by GIS+BP, IS+BP, UPS-H, UPS-HV, and loopy IS.]
Figure 1: (a) Network structure. Circles are hidden nodes and black squares are observationally constrained nodes. (b) Clamping rows of nodes. Black circles are the clamped nodes. (c) Clamping columns of nodes. (d) Replicating each clamped and observed node in (c). (e) Speed of convergence. The box lines are at the median and upper and lower quartiles, and the whiskers describe the extent of the data. An algorithm or subroutine is considered converged if the beliefs change by less than a small fixed tolerance.
Experiment 2 Accuracy of Estimated Marginals
We compared the accuracy of the posterior marginals obtained using UPS-HV and loopy IS for four possible types of constraints, as shown in figure 2. In case (a), the constraint marginals are delta functions, so that generalized inference reduces to ordinary inference, loopy IS reduces to loopy BP, and UPS becomes a convergent alternative to loopy BP. In case (b), we did not enforce any Obs constraints, so that the problem is one of estimating the marginals of the prior. The general trend is that loopy BP and UPS are comparable, and they perform worse as weights get larger, biases get smaller, or there is less evidence. This confirms the results in [6]. Further, we see that when loopy BP did not converge, UPS's estimates are not better than loopy BP's estimates. The reason this happens is described in [6].

In cases (c) and (d) we set σ_θ to 0.2 and 2.0 respectively, corresponding to desired marginals concentrated near one half and spread out over the whole range. In these cases UPS and loopy IS did equally well when the latter converged, but UPS continued to perform well even when loopy IS did not converge. Since loopy BP always converged when UPS performed well (for cases (a) and (b)), and we used very high damping, we conclude that loopy IS's failure to converge must be due to performing scaling updates before accurate marginals were available. Concluding, we see that UPS is comparable to loopy IS when generalized inference reduces to ordinary inference, but in the presence of Obs constraints it is better.
6 Discussion

In this paper we have shown that approximating the KL divergence with the Bethe free energy leads to viable algorithms for approximate generalized inference. We also find that there is an interesting and fruitful relationship between IS and loopy BP. Our novel algorithm UPS can also be used as a convergent alternative to loopy BP for ordinary inference. Interesting extensions are to cluster nodes together to get more accurate approximations to the KL divergence, analogous to the Kikuchi free energy, and to handle marginal constraints over subsets of nodes. This will again lead to a close relationship between IS and junction tree propagation, but the details are to be worked out. We can also explore other algorithms to minimize the Bethe free energy, including the CCCP algorithm [7]. Another interesting direction for future work is algorithms for learning in log linear models by approximating the free energy.

Figure 2: Each plot shows the mean absolute errors for various settings of σ_w (x-axis) and σ_b (y-axis), for four types of constraints: (a) ordinary inference, (b) no Obs constraints, (c) σ_θ = 0.2, (d) σ_θ = 2.0. The top plots show errors for loopy IS and the bottom plots show errors for UPS. The inset shows the cases (black) when loopy IS did not converge within 2000 iterations, with linear damping slowly increasing.
References
[1] J. Pearl. Probabilistic reasoning in intelligent systems : networks of plausible inference. Morgan
Kaufmann Publishers, San Mateo CA, 1988.
[2] J.S. Yedidia, W. Freeman, and Y. Weiss. Generalized belief propagation. In Advances in Neural
Information Processing Systems, volume 13, 2000.
[3] W. E. Deming and F. F. Stephan. On a least square adjustment of a sampled frequency table when
the expected marginal totals are known. Annals of Mathematical Statistics, 11:427–444, 1940.
[4] J. Darroch and D. Ratcliff. Generalized iterative scaling for log-linear models. Annals of Mathematical Statistics, 43:1470–1480, 1972.
[5] K. Murphy, Y. Weiss, and M. Jordan. Loopy belief propagation for approximate inference :
An empirical study. In Proceedings of the Conference on Uncertainty in Artificial Intelligence,
volume 15. Morgan Kaufmann Publishers, 1999.
[6] M. Welling and Y. W. Teh. Belief optimization for binary networks : A stable alternative to loopy
belief propagation. In Uncertainty in Artificial Intelligence, 2001.
[7] A. L. Yuille. CCCP algorithms to minimize the Bethe and Kikuchi free energies: Convergent
alternatives to belief propagation. 2002.
1,099 | 2,002 | Spectral Kernel Methods for Clustering
Nello Cristianini
BIOwulf Technologies
nello@support-vector.net
John Shawe-Taylor
Jaz Kandola
Royal Holloway, University of London
{john, jaz}@cs.rhul.ac.uk
Abstract
In this paper we introduce new algorithms for unsupervised learning based on the use of a kernel matrix. All the information required by such algorithms is contained in the eigenvectors of the
matrix or of closely related matrices. We use two different but related cost functions, the Alignment and the 'cut cost'. The first
one is discussed in a companion paper [3], the second one is based
on graph theoretic concepts. Both functions measure the level of
clustering of a labeled dataset, or the correlation between data clusters and labels. We state the problem of unsupervised learning as
assigning labels so as to optimize these cost functions. We show
how the optimal solution can be approximated by slightly relaxing
the corresponding optimization problem, and how this corresponds
to using eigenvector information. The resulting simple algorithms
are tested on real world data with positive results.
1 Introduction
Kernel based learning provides a modular approach to learning system design [2]. A
general algorithm can be selected for the appropriate task before being mapped onto
a particular application through the choice of a problem specific kernel function.
The kernel based method works by mapping data to a high dimensional feature
space implicitly defined by the choice of the kernel function. The kernel function
computes the inner product of the images of two inputs in the feature space. From
a practitioners viewpoint this function can also be regarded as a similarity measure
and hence provides a natural way of incorporating domain knowledge about the
problem into the bias of the system.
One important learning problem is that of dividing the data into classes according
to a cost function together with their relative positions in the feature space. We
can think of this as clustering in the kernel defined feature space, or non-linear
clustering in the input space.
In this paper we introduce two novel kernel-based methods for clustering. They both
assume that a kernel has been chosen and the kernel matrix constructed. The methods then make use of the matrix's eigenvectors, or of the eigenvectors of the closely
related Laplacian matrix, in order to infer a label assignment that approximately
optimizes one of two cost functions . See also [4] for use of spectral decompositions
of the kernel matrix. The paper includes some analysis of the algorithms together
with tests of the methods on real world data with encouraging results.
2 Two partition cost measures
All the information needed to specify a clustering of a set of data is contained in
the matrix M_ij = (cluster(x_i) == cluster(x_j)), where (A == B) ∈ {−1, +1}. After
a clustering is specified, one can measure its cost in many ways. We propose here
two cost functions that are easy to compute and lead to efficient algorithms.
Learning is possible when some collusion between input distribution and target
exists, so that we can predict the target based on the input. Typically one would
expect points with similar labels to be clustered and the clusters to be separated.
This can be detected in two ways: either by measuring the amount of label-clustering
or by measuring the correlation between such variables. In the first case, we need
to measure how points of the same class are close to each other and distant from
points of different classes. In the second case, kernels can be regarded as oracles
predicting whether two points are in the same class. The 'true' oracle is the one
that knows the true matrix M. A measure of quality can be obtained by measuring
the Pearson correlation coefficient between the kernel matrix K and the true M .
Both approaches lead to the same quantity, known as the alignment [3].
We will use the following definition of the inner product between matrices: ⟨K1, K2⟩_F = Σ_{i,j=1}^m K1(x_i, x_j) K2(x_i, x_j). The index F refers to the Frobenius norm that corresponds to this inner product.
Definition 1 (Alignment) The (empirical) alignment of a kernel k1 with a kernel k2 with respect to the sample S is the quantity

Â(S, k1, k2) = ⟨K1, K2⟩_F / √(⟨K1, K1⟩_F ⟨K2, K2⟩_F),

where Ki is the kernel matrix for the sample S using kernel ki.
This can also be viewed as the cosine of the angle between two m²-dimensional vectors K1 and K2 representing the Gram matrices. If we consider k2 = yy', where y is the vector of {−1, +1} labels for the sample, then, with a slight abuse of notation,

Â(S, k, y) = ⟨K, yy'⟩_F / √(⟨K, K⟩_F ⟨yy', yy'⟩_F) = ⟨K, yy'⟩_F / (m ‖K‖_F),

since ⟨yy', yy'⟩_F = m².
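In code, the empirical alignment is a one-liner once one notices that ⟨K, yy'⟩_F = y'Ky; a minimal NumPy sketch (the helper name is ours):

```python
import numpy as np

def alignment(K, y):
    """Empirical alignment A(S, k, y) = <K, yy'>_F / (m ||K||_F), using <K, yy'>_F = y'Ky."""
    y = np.asarray(y, dtype=float)
    return (y @ K @ y) / (y.size * np.linalg.norm(K, "fro"))
```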
Another measure of separation between classes is the average separation between two points in different classes, again normalised by the matrix norm.

Definition 2 (Cut Cost) The cut cost of a clustering is defined as

C(S, k, y) = Σ_{ij: y_i ≠ y_j} k(x_i, x_j) / (m ‖K‖_F).
This quantity is motivated by a graph theoretic concept. If we consider the kernel matrix as the adjacency matrix of a fully connected weighted graph whose nodes are the data points, the cost of partitioning a graph is given by the total weight of the edges that one needs to cut or remove, and this is exactly the numerator of the cut cost. Notice also the relation between alignment and cut cost:

Â(S, k, y) = (Σ_{ij} k(x_i, x_j) − 2 Σ_{ij: y_i ≠ y_j} k(x_i, x_j)) / (m ‖K‖_F) = T(S, k) − 2 C(S, k, y),

where T(S, k) = Â(S, k, j), for j the all-ones vector.
Among other appealing properties of the alignment is that this quantity is sharply concentrated around its mean, as proven in the companion paper [3]. This shows that the expected alignment can be reliably estimated from its empirical estimate Â(S). As the cut cost can be expressed as the difference of two alignments,

C(S, k, y) = 0.5 (T(S, k) − Â(S, k, y)),   (1)

it will be similarly concentrated around its expected value.
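A matching sketch for the cut cost; the final comment checks relation (1) against the `alignment` helper above, since T(S, k) is the alignment of the all-ones labelling:

```python
import numpy as np

def cut_cost(K, y):
    """C(S, k, y): kernel weight between differently labelled pairs, normalised by m ||K||_F."""
    y = np.asarray(y, dtype=float)
    across = np.outer(y, y) < 0          # ordered pairs (i, j) with y_i != y_j
    return K[across].sum() / (y.size * np.linalg.norm(K, "fro"))

# Relation (1): cut_cost(K, y) == 0.5 * (alignment(K, np.ones(len(y))) - alignment(K, y))
```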
3 Optimising the cost with spectral techniques
In this section we will introduce and test two related methods for clustering, as
well as their extensions to transduction. The general problem we want to solve is
to assign class-labels to datapoints so as to maximize one of the two cost functions
given above. By equation (1) the optimal solution to both problems is identical for
a fixed data set and kernel. The difference between the approaches is in the two
approximation algorithms developed for the different cost functions. The approximation algorithms are obtained by relaxing the discrete problems of optimising over
all possible labellings of a dataset to closely related continuous problems solved by
eigenvalue decompositions. See [5] for use of eigenvectors in partitioning sparse
matrices.
3.1 Optimising the alignment
To optimise the alignment, the problem is to find the maximally aligned set of labels

Â*(S, k) = max_{y ∈ {−1,1}^m} Â(S, k, y) = max_{y ∈ {−1,1}^m} ⟨K, yy'⟩_F / (m ‖K‖_F).

Since in this setting the kernel is fixed, maximising the alignment reduces to choosing y ∈ {−1, 1}^m to maximise ⟨K, yy'⟩ = y'Ky. If we allow y to be chosen from the larger set ℝ^m subject to the constraint ‖y‖² = m, we obtain an approximate maximum-alignment problem that can be solved efficiently. After solving the relaxed problem, we can obtain an approximate discrete solution by choosing a suitable threshold for the entries in the vector y and applying the sign function. Bounds will be given on the quality of the approximations.

The solution of the approximate problem follows from the following theorem, which provides a variational characterization of the spectrum of symmetric matrices.
Theorem 3 (Courant–Fischer Minimax Theorem) If M ∈ ℝ^{m×m} is symmetric, then for k = 1, ..., m,

λ_k(M) = max_{dim(T)=k} min_{0≠v∈T} (v'Mv)/(v'v) = min_{dim(T)=m−k+1} max_{0≠v∈T} (v'Mv)/(v'v).
If we consider the first eigenvector, the first min does not apply and we obtain that the approximate alignment problem is solved by the first eigenvector, so that the maximal alignment is upper bounded by a multiple of the first eigenvalue, λ_max = max_{0≠v∈ℝ^m} (v'Kv)/(v'v). One can now transform the vector v into a vector in {−1, +1}^m by choosing the threshold θ that gives maximum alignment of y = sign(v_max − θ). By definition, the value of the alignment Â(S, k, y) obtained by this y will be a lower bound on the optimal alignment; hence we have

Â(S, k, y) ≤ Â*(S, k) ≤ λ_max / ‖K‖_F.

One can hence estimate the quality of a dichotomy by comparing its value with the upper bound. The absolute alignment tells us how specialized a kernel is on a given dataset: the higher this quantity, the more committed to a specific dichotomy.
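Putting the pieces together, the following sketch extracts the first eigenvector and sweeps its entries as candidate thresholds, returning the labelling of maximal alignment; the function name is ours:

```python
import numpy as np

def spectral_split(K):
    """Threshold the dominant eigenvector of K at the alignment-maximising point."""
    m = K.shape[0]
    fro = np.linalg.norm(K, "fro")
    _, V = np.linalg.eigh(K)                   # eigh sorts eigenvalues in ascending order
    v = V[:, -1]                               # eigenvector of the largest eigenvalue
    best_y, best_a = None, -np.inf
    for theta in np.append(v, v.min() - 1.0):  # entries of v, plus one all-positive split
        y = np.where(v > theta, 1.0, -1.0)
        a = (y @ K @ y) / (m * fro)            # empirical alignment of this labelling
        if a > best_a:
            best_y, best_a = y, a
    return best_y, best_a
```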
The first eigenvector can be calculated in many ways, for example the Lanczos
procedure, which is already effective for large datasets. Search engines like Google
are based on estimating the first eigenvector of a matrix with dimensionality more
than 10^9, so for very large datasets there are approximation techniques.
We applied the procedure outlined above to two datasets from the UCI repository.
We preprocessed the data by normalising the input vectors in the kernel defined
feature space and then centering them by shifting the origin (of the feature space)
to their centre of gravity. This can be achieved by the following transformation of
the kernel matrix, K ← K − m⁻¹ jg' − m⁻¹ gj' + m⁻² (j'Kj) J, where j is the all-ones vector, J the all-ones matrix, and g the vector of row sums of K.
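Both preprocessing steps translate directly into code; a minimal sketch (function names are ours, and the normalisation assumes a positive diagonal):

```python
import numpy as np

def normalise_kernel(K):
    """Unit-length points in feature space: K_ij / sqrt(K_ii K_jj)."""
    d = np.sqrt(np.diag(K))
    return K / np.outer(d, d)

def centre_kernel(K):
    """K <- K - m^-1 jg' - m^-1 gj' + m^-2 (j'Kj) J, with g the row sums of K."""
    m = K.shape[0]
    g = K.sum(axis=1)
    return K - np.add.outer(g, g) / m + g.sum() / m**2
```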
Figure 1: (a) Plot of the alignment of the different eigenvectors with the labels, ordered by increasing eigenvalue (x-axis: eigenvalue number). (b) Plot for Breast Cancer data (linear kernel) of λ_max/‖K‖_F (straight line), Â(S, k, y) for y = sign(v_max − θ_i) (bottom curve), and the accuracy of y (middle curve) against threshold number i.
The first experiment applied the unsupervised technique to the Breast Cancer data with a linear kernel. Figure 1(a) shows the alignment of the different eigenvectors with the labels. The highest alignment is shown by the last eigenvector, corresponding to the largest eigenvalue.

For each value θ_i of the threshold, Figure 1(b) shows the upper bound λ_max/‖K‖_F (straight line), the alignment Â(S, k, y) for y = sign(v_max − θ_i) (bottom curve), and the accuracy of y (middle curve). Notice that where the actual alignment and the upper
bound on alignment get closest, we have confidence that we have partitioned our
data well, and in fact the accuracy is also maximized. Notice also that the choice of
the threshold corresponds to maintaining the correct proportion between positives
and negatives. This suggests another possible t hreshold selection strategy, based on
the availability of enough labeled points to give a good estimate of the proportion
of positive points in the dataset. This is one way label information can be used
to choose the threshold. At the end of the experiments we will describe another
'transduction' method.
It is a measure of how naturally the data separates that this procedure is able
to optimise the split with an accuracy of approximately 97.29% by choosing the
threshold that maximises the alignment (threshold number 435) but without making
any use of the labels.
In Figure 2a we present the same results for the Gaussian kernel (σ = 6). In this
case the accuracy obtained by optimising the alignment (threshold number 316)
of the resulting dichotomy is less impressive, being only about 79.65%. Finally,
Figure 2b shows the same results for the Ionosphere dataset. Here the accuracy
of the split that optimises the alignment (threshold number 158) is approximately
Figure 2: Plot for Breast Cancer data (Gaussian kernel) (a) and Ionosphere data (linear kernel) (b) of λ_max/‖K‖_F (straight line), Â(S, k, y) for y = sign(v_max − θ_i) (bottom curve), and the accuracy of y (middle curve) against threshold number i.
71.37%.
We can also use the overall approach to adapt the kernel to the data. For example
we can choose the kernel parameters so as to optimize λ_max/‖K‖_F. Then find
the first eigenvector, choose a threshold to maximise the alignment and output the
corresponding y.
The cost to the alignment of changing a label y_i is 2 Σ_j y_j k(x_i, x_j)/‖K‖_F, so that
if a point is isolated from the others, or if it is equally close to the two different
classes, then changing its label will have only a very small effect. On the other
hand, labels in strongly clustered points clearly contribute to the overall cost and
changing their label will alter the alignment significantly.
The method we have described can be viewed as projecting the data into a 1-dimensional space and finding a threshold. The projection also implicitly sorts the
data so that points of the same class are nearby in the ordering. We discuss the
problem in the 2-class case. We consider embedding the set into the real line, so
as to satisfy a clustering criterion. The resulting Kernel matrix should appear as a
block diagonal matrix.
This problem has been addressed in the case of information retrieval in [1], and
also applied to assembling sequences of DNA. In those cases, the eigenvectors of the
Laplacian have been used, and the approach is called the Fiedler ordering. Although
the Fiedler ordering could be used here as well, we present here a variation based
on the simple kernel matrix.
Let the coordinate of the point x_i on the real line be v(i). Consider the cost function Σ_{ij} v(i)v(j)K(i,j). It is maximized when points with high similarity have the same
sign and high absolute value, and when points with different sign have low similarity.
The choice of coordinates v that optimizes this cost is the first eigenvector, and
hence by sorting the data according to the value of their entry in this eigenvector
one can hope to find a good permutation that renders the kernel matrix block
diagonal. Figure 3 shows the results of this heuristic applied to the Breast cancer
dataset. The grey level indicates the size of the kernel entry. The figure on the left
is for the unsorted data, while that on the right shows the same plot after sorting.
The sorted figure clearly shows the effectiveness of the method.
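The sorting heuristic itself is two lines once the eigenvector is available; a sketch (figure 3 corresponds to plotting K before and after this permutation):

```python
import numpy as np

def sort_by_first_eigenvector(K):
    """Permute the Gram matrix by the first-eigenvector ordering of the points."""
    _, V = np.linalg.eigh(K)
    order = np.argsort(V[:, -1])
    return K[np.ix_(order, order)], order
```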
Figure 3: Gram matrix for cancer data, before and after permutation of the data according to the sorting order of the first eigenvector of K.

3.2 Optimising the cut-cost

For a fixed kernel matrix, minimising the cut-cost corresponds to minimising Σ_{ij: y_i ≠ y_j} k(x_i, x_j), that is, the sum of the kernel entries between points of two different classes. Since we are dealing with normalized kernels, this also controls the expected distance between them.
We can express this quantity as

Σ_{ij: y_i ≠ y_j} K_ij = (1/2) (Σ_{i,j} K_ij − y'Ky) = (1/2) y'Ly,

where L is the Laplacian matrix, defined as L = D − K, with D = diag(d_1, ..., d_m) and d_i = Σ_j k(x_i, x_j). One would like to find y ∈ {−1, +1}^m so as to minimize the cut cost subject to the division being even, but this problem is NP-hard. Following the same strategy as with the alignment, we can impose a slightly looser constraint on y: y ∈ ℝ^m, Σ_i y_i² = m, Σ_i y_i = 0. This gives the problem

min y'Ly subject to y ∈ ℝ^m, Σ_i y_i² = m, Σ_i y_i = 0.

Since zero is an eigenvalue of L with eigenvector j, the all-ones vector, the problem is equivalent to finding the eigenvector of the smallest non-zero eigenvalue, λ = min_{0≠y⊥j} (y'Ly)/(y'y). Hence this eigenvalue λ provides a lower bound on the cut cost:

min_{y ∈ {−1,1}^m} C(S, k, y) ≥ λ / (2 ‖K‖_F).

So the eigenvector corresponding to the eigenvalue λ of the Laplacian can be used to obtain a good approximate split, and λ gives a lower bound on the cut-cost. One can now threshold the entries of the eigenvector in order to obtain a vector with −1 and +1 entries. We again plot the lower bound, cut-cost, and error rate as a function of the threshold.
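A sketch of the Laplacian route; it signs the second eigenvector directly, though one can equally sweep thresholds over its entries as in the alignment case. The kernel entries are treated as non-negative edge weights, and the function name is ours.

```python
import numpy as np

def laplacian_split(K):
    """Approximate min-cut labelling from the Laplacian's second eigenvector."""
    L = np.diag(K.sum(axis=1)) - K
    w, V = np.linalg.eigh(L)                        # ascending; w[0] ~ 0, eigenvector ~ j
    v = V[:, 1]                                     # smallest non-zero eigenvalue's eigenvector
    bound = w[1] / (2 * np.linalg.norm(K, "fro"))   # lower bound on the cut cost
    return np.where(v >= 0, 1.0, -1.0), bound
```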
We applied the procedure to the Breast cancer data with both linear and Gaussian
kernels. The results are shown in Figure 4. Now using the cut cost to select
the best threshold for the linear kernel sets it at 378 with an accuracy of 67.86%,
significantly worse than the results obtained by optimising the alignment. With
the Gaussian kernel, on the other hand, the method selects threshold 312 with an
accuracy of 80.31 %, a slight improvement over the results obtained with this kernel
by optimising the alignment.
So far we have presented algorithms that use unsupervised data. We now consider
the situation where we are given a partially labelled dataset. This leads to a simple algorithm for transduction or semi-supervised learning. The idea that some
labelled data might improve performance comes from observing Figure 4b, where
the selection based on the cut-cost is clearly suboptimal. By incorporating some
label information, it is hoped that we can obtain an improved threshold selection.
Figure 4: Plot for Breast Cancer data using (a) linear kernel and (b) Gaussian kernel of C(S, k, y) − λ/(2‖K‖_F) (dashed curves), for y = sign(v_max − θ_i), and the error of y (solid curve) against threshold number i.
Let z be the vector containing the known labels and 0 elsewhere. Set K^P = K + C_0 zz', where C_0 is a positive constant parameter. We now use the original matrix K to generate the eigenvector, but the matrix K^P when measuring the cut-cost of the classifications generated by different thresholds. Taking C_0 = 1,
we performed 5 random selections of 20% of the data and obtained a mean success
rate of 85.56% (standard deviation 0.67%) for the Breast cancer data with Gaussian
kernel, a marked improvement over the 80.31 % achieved with no label information.
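A sketch of this transductive threshold selection; since m ‖K^P‖_F is constant over thresholds, minimising the cut cost on K^P is the same as maximising y'K^P y. The function name is ours.

```python
import numpy as np

def transductive_split(K, z, C0=1.0):
    """Pick the threshold by cut-cost measured on K^P = K + C0 zz'.

    z holds the known {-1, +1} labels and 0 for unlabelled points."""
    Kp = K + C0 * np.outer(z, z)
    _, V = np.linalg.eigh(K)         # the eigenvector still comes from the original K
    v = V[:, -1]
    best_y, best_score = None, -np.inf
    for theta in v:
        y = np.where(v > theta, 1.0, -1.0)
        score = y @ Kp @ y           # maximising y'K^P y minimises the cut cost on K^P
        if score > best_score:
            best_y, best_score = y, score
    return best_y
```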
4 Conclusions
The paper has considered two partition costs: the first derived from the so-called
alignment of a kernel to a label vector, and the second from the cut-cost of a label
vector for a given kernel matrix. The two quantities are both optimised by the
same labelling, but give rise to different approximation algorithms when the discrete
constraint is removed from the labelling vector. It was shown how these relaxed
problems are solved exactly using spectral techniques, hence leading to two distinct
approximation algorithms through a post-processing phase that re-discretises the
vector to create a labelling that is chosen to optimise the given criterion.
Experiments are presented showing the performance of both of these clustering
techniques with some very striking results. For the second algorithm we also gave
one preliminary experiment with a transductive version that enables some labelled
data to further refine the clustering.
References
[1] M.W. Berry, B. Hendrickson, and P. Raghavan. Sparse matrix reordering schemes for
browsing hypertext. In The Mathematics of Numerical Analysis, pages 99–123. AMS,
1996.
[2] N. Cristianini and J. Shawe-Taylor. An Introduction to Support Vector Machines.
Cambridge University Press, 2000. See also the web site www.support-vector.net.
[3] Nello Cristianini, Andre Elisseeff, John Shawe-Taylor, and Jaz Kandola. On kerneltarget alignment. In submitted to Proceedings of Neural Information Processing Systems
(NIPS), 200l.
[4] Nello Cristianini, Huma Lodhi, and John Shawe-Taylor. Latent semantic kernels
for feature selection. Technical Report NC-TR-00-080, NeuroCOLT Working Group,
http://www.neurocolt.org, 2000.
[5] A. Pothen, H. Simon , and K. Liou. Partitioning sparse matrices with eigenvectors of
graphs. SIAM J. Matrix Anal., 11(3):430–452, 1990.
projecting:1 equation:1 discus:1 needed:1 know:1 end:1 liou:1 apply:1 spectral:4 appropriate:1 original:1 clustering:13 maintaining:1 k1:3 already:1 quantity:7 strategy:2 biowulf:1 diagonal:2 distance:1 separate:1 mapped:1 neurocolt:2 iikii:1 me:1 nello:3 maximising:1 echniques:1 index:1 nc:1 yjk:1 negative:1 rise:1 design:1 reliably:1 anal:1 maximises:1 upper:4 datasets:3 situation:1 committed:1 required:1 specified:1 kl:2 engine:1 huma:1 nip:1 able:1 royal:1 smce:1 optimise:3 max:8 shifting:1 suitable:1 natural:1 predicting:1 representing:1 minimax:1 improve:1 scheme:1 technology:1 lij:1 berry:1 relative:1 fully:1 expect:1 permutation:2 reordering:1 proven:1 viewpoint:1 lo:1 row:1 cancer:8 elsewhere:1 last:1 bias:1 normalised:1 allow:1 taking:1 absolute:2 sparse:3 hendrickson:1 curve:8 calculated:1 world:2 gram:2 computes:1 ferent:1 far:1 approximate:5 implicitly:2 dealing:1 xi:8 spectrum:1 continuous:1 search:1 latent:1 sk:1 mj:1 kjj:1 domain:1 diag:1 site:1 transduction:3 mino:1 position:1 companion:2 theorem:3 specific:2 showing:1 rhul:1 ionosphere:2 incorporating:2 exists:1 hoped:1 labelling:3 browsing:1 sorting:3 expressed:1 contained:2 ordered:1 partially:1 mij:1 corresponds:4 aa:1 viewed:2 ello:1 sorted:1 marked:1 labelled:3 hard:1 total:1 called:2 holloway:1 select:1 support:3 tested:1 |