Balancing between bagging and bumping
Tom Heskes
RWCP Novel Functions SNN Laboratory; University of Nijmegen
Geert Grooteplein 21 , 6525 EZ Nijmegen, The Netherlands
tom@mbfys.kun.nl
Abstract
We compare different methods to combine predictions from neural networks trained on different bootstrap samples of a regression
problem. One of these methods, introduced in [6] and which we
here call balancing, is based on the analysis of the ensemble generalization error into an ambiguity term and a term incorporating
generalization performances of individual networks. We show how
to estimate these individual errors from the residuals on validation patterns. Weighting factors for the different networks follow
from a quadratic programming problem. On a real-world problem
concerning the prediction of sales figures and on the well-known
Boston housing data set, balancing clearly outperforms other recently proposed alternatives such as bagging [1] and bumping [8].
1 EARLY STOPPING AND BOOTSTRAPPING
Stopped training is a popular strategy to prevent overfitting in neural networks.
The complete data set is split up into a training and a validation set. Through
learning the weights are adapted in order to minimize the error on the training
data. Training is stopped when the error on the validation data starts increasing.
The final network depends on the accidental subdivision in training and validation set, and often also on the, usually random, initial weight configuration and chosen minimization procedure. In other words, early stopped neural networks are highly unstable: small changes in the data or different initial conditions can produce large changes in the estimate. As argued in [1, 8], with unstable estimators it is advisable to resample, i.e., to apply the same procedure several times using different subdivisions in training and validation set and perhaps starting from different initial
RWCP: Real World Computing Partnership; SNN: Foundation for Neural Networks.
configurations. In the neural network literature resampling is often referred to as
training ensembles of neural networks [3, 6]. In this paper, we will discuss methods
for combining the outputs of networks obtained through such a repetitive procedure.
First, however, we have to choose how to generate the subdivisions in training and
validation sets. Options are, among others, k-fold cross-validation, subsampling and
bootstrapping. In this paper we will consider bootstrapping [2] which is based on
the idea that the available data set is nothing but a particular realization of some
probability distribution. In principle, one would like to do inference on this "true"
yet unknown probability distribution. A natural thing to do is then to define an empirical distribution. With so-called naive bootstrapping the empirical distribution
is a sum of delta peaks on the available data points, each with probability content $1/p_{\text{data}}$, with $p_{\text{data}}$ the number of patterns. A bootstrap sample is a collection of $p_{\text{data}}$ patterns drawn with replacement from this empirical probability distribution. Some of the data points will occur once, some twice and some even more than twice in this bootstrap sample. The bootstrap sample is taken to be the training set; all patterns that do not occur in a particular bootstrap sample constitute the validation set. For large $p_{\text{data}}$, the probability that a pattern becomes part of the validation set is $(1 - 1/p_{\text{data}})^{p_{\text{data}}} \approx 1/e \approx 0.368$. An advantage of bootstrapping
over other resampling techniques is that most statistical theory on resampling is
nowadays based on the bootstrap.
Using naive bootstrapping we generate $n_{\text{run}}$ training and validation sets out of our complete data set of $p_{\text{data}}$ input-output combinations $\{\vec{x}^\mu, t^\mu\}$. In this paper we will restrict ourselves to regression problems with, for notational convenience, just one output variable. We keep track of a matrix with components $q_i^\mu$ indicating whether pattern $\mu$ is part of the validation set for run $i$ ($q_i^\mu = 1$) or of the training set ($q_i^\mu = 0$). On each subdivision we train and stop a neural network with one layer of $n_{\text{hidden}}$ hidden units. The output $o_i^\mu$ of network $i$ with weight vector $w(i)$ on input $\vec{x}^\mu$ reads

$$o_i^\mu = \sum_{h=1}^{n_{\text{hidden}}} w_h(i)\,\tanh\!\Big(\sum_j w_{hj}(i)\,x_j^\mu\Big) + w_0(i)\,,$$

where we use the definition $x_0^\mu \equiv 1$. The validation error for run $i$ can be written

$$E_{\text{validation}}(i) \equiv \frac{1}{p_i}\sum_{\mu=1}^{p_{\text{data}}} q_i^\mu\, r_i^\mu\,,$$

with $p_i \equiv \sum_\mu q_i^\mu \approx 0.368\, p_{\text{data}}$, the number of validation patterns in run $i$, and $r_i^\mu \equiv (o_i^\mu - t^\mu)^2/2$, the error of network $i$ on pattern $\mu$.
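For concreteness, the bootstrap bookkeeping above can be sketched in a few lines of Python (a minimal illustration of ours, not from the paper; the network training itself is abstracted away, and the function names are hypothetical):

```python
import numpy as np

def bootstrap_subdivisions(p_data, n_run, rng):
    """q[i, mu] = 1 iff pattern mu is in the validation set of run i."""
    q = np.ones((n_run, p_data), dtype=int)
    for i in range(n_run):
        sample = rng.integers(0, p_data, size=p_data)  # naive bootstrap: draw with replacement
        q[i, np.unique(sample)] = 0                    # sampled patterns form the training set
    return q

def validation_errors(q, outputs, targets):
    """E_validation(i) = (1/p_i) * sum_mu q[i, mu] * (o[i, mu] - t[mu])**2 / 2."""
    r = 0.5 * (outputs - targets[None, :]) ** 2   # r[i, mu]
    p = q.sum(axis=1)                             # p_i, roughly 0.368 * p_data
    return (q * r).sum(axis=1) / p

q = bootstrap_subdivisions(p_data=500, n_run=100, rng=np.random.default_rng(0))
print(q.mean())   # close to 1/e ~ 0.368, as noted above
```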
After training we are left with $n_{\text{run}}$ networks with, in practice, quite different performances on the complete data set. How should we combine all these outputs to get the best possible performance on new data?
2 COMBINING ESTIMATORS

Several methods have been proposed to combine estimators (see e.g. [5] for a review). In this paper we will only consider estimators with the same architecture
but trained and stopped on different subdivisions of the data in training and validation sets. Recently, two such methods have been suggested for bootstrapped
estimators: bagging [1], an acronym for bootstrap aggregating, and bumping [8],
meaning bootstrap umbrella of model parameters. With bagging, the prediction on
a newly arriving input vector is the average over all network predictions. Bagging
completely disregards the performance of the individual networks on the data used
for training and stopping. Bumping, on the other hand, throws away all networks except the one with the lowest error on the complete data set.¹ In the following
we will describe an intermediate form due to [6], which we here call balancing. A
theoretical analysis of the implications of this idea can be found in [7].
Suppose that after training we receive a new set of $p_{\text{test}}$ test patterns for which we do not know the true targets $t^\nu$, but can calculate the network output $o_i^\nu$ for each network $i$. We give each network a weighting factor $\alpha_i$ and define the prediction of all networks on pattern $\nu$ as the weighted average

$$m^\nu = \sum_{i=1}^{n_{\text{run}}} \alpha_i\, o_i^\nu\,.$$

The goal is to find the weighting factors $\alpha_i$, subject to the constraints

$$\sum_{i=1}^{n_{\text{run}}} \alpha_i = 1 \qquad\text{and}\qquad \alpha_i \ge 0 \;\;\forall i\,, \qquad (1)$$

yielding the smallest possible generalization error

$$E_{\text{test}} = \frac{1}{2 p_{\text{test}}} \sum_{\nu=1}^{p_{\text{test}}} \big(m^\nu - t^\nu\big)^2\,.$$
The problem, of course, is our ignorance about the targets $t^\nu$. Bagging simply takes $\alpha_i = 1/n_{\text{run}}$ for all networks, whereas bumping implies $\alpha_i = \delta_{i\kappa}$ with

$$\kappa = \operatorname*{argmin}_{j}\; \frac{1}{p_{\text{data}}} \sum_{\mu=1}^{p_{\text{data}}} \big(o_j^\mu - t^\mu\big)^2\,.$$
As in [6, 7] we write the generalization error in the form

$$E_{\text{test}} = \frac{1}{2 p_{\text{test}}} \sum_\nu \sum_{i,j} \alpha_i \alpha_j \big(o_i^\nu - t^\nu\big)\big(o_j^\nu - t^\nu\big)
= \frac{1}{4 p_{\text{test}}} \sum_\nu \sum_{i,j} \alpha_i \alpha_j \Big[\big(o_i^\nu - t^\nu\big)^2 + \big(o_j^\nu - t^\nu\big)^2 - \big(o_i^\nu - o_j^\nu\big)^2\Big]$$
$$= \frac{1}{2}\sum_{i,j} \alpha_i \alpha_j \Big[E_{\text{test}}(i) + E_{\text{test}}(j) - \frac{1}{2 p_{\text{test}}}\sum_\nu \big(o_i^\nu - o_j^\nu\big)^2\Big]\,. \qquad (2)$$
The last term depends only on the network outputs and can thus be calculated.
This "ambiguity" term favors networks with conflicting outputs. The first part,
¹The idea behind bumping is more general and involved than discussed here. The interested reader is referred to [8]. In this paper we will only consider its naive version.
containing the generalization errors $E_{\text{test}}(i)$ for individual networks, depends on the targets $t^\nu$ and is thus unknown. It favors networks that by themselves already have a low generalization error. In the next section we will find reasonable estimates for these generalization errors based on the network performances on validation data. Once we have obtained these estimates, finding the optimal weighting factors $\alpha_i$ under the constraints (1) is a straightforward quadratic programming problem.
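The paper does not spell out the quadratic program in code; a minimal sketch of ours, assuming SciPy is available (with $\hat{E}(i)$ the estimated individual errors and the decomposition (2)):

```python
import numpy as np
from scipy.optimize import minimize

def balance_weights(outputs, e_hat):
    """outputs: (n_run, p) network outputs used to measure the ambiguity;
    e_hat: estimated generalization error of each network.  Returns the
    weighting factors alpha on the simplex that minimize Eq. (2)."""
    n = len(e_hat)
    d = outputs[:, None, :] - outputs[None, :, :]
    A = 0.5 * np.mean(d ** 2, axis=-1)   # A[i,j] = (1/(2p)) sum_nu (o_i - o_j)^2

    def objective(alpha):
        # with sum(alpha) = 1, Eq. (2) reduces to alpha.E_hat - (1/2) a^T A a
        return alpha @ e_hat - 0.5 * alpha @ A @ alpha

    cons = ({'type': 'eq', 'fun': lambda a: a.sum() - 1.0},)
    res = minimize(objective, np.full(n, 1.0 / n), bounds=[(0.0, 1.0)] * n,
                   constraints=cons, method='SLSQP')
    return res.x
```

As noted later in the paper, the solution typically puts nonzero weight on only a few networks.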
3 ESTIMATING THE GENERALIZATION ERROR
At first sight, a good estimate for the generalization error of network $i$ could be the performance on the validation data not included during training. However, the validation error $E_{\text{validation}}(i)$ strongly depends on the accidental subdivision in training and validation set. For example, if there are a few outliers which, by pure coincidence, are part of the validation set, the validation error will be relatively large and the training error relatively small. To correct for this bias as a result of the random subdivision, we introduce the "expected" validation error for run $i$. First we define $n^\mu$ as the number of runs in which pattern $\mu$ is part of the validation set and $E^\mu_{\text{validation}}$ as the error averaged over these runs:
$$n^\mu \equiv \sum_{i=1}^{n_{\text{run}}} q_i^\mu \qquad\text{and}\qquad E^\mu_{\text{validation}} \equiv \frac{1}{n^\mu}\sum_{i=1}^{n_{\text{run}}} q_i^\mu\, r_i^\mu\,.$$

The expected validation error then follows from

$$\hat{E}_{\text{validation}}(i) \equiv \frac{1}{p_i}\sum_{\mu=1}^{p_{\text{data}}} q_i^\mu\, E^\mu_{\text{validation}}\,.$$
The ratio between the observed and the expected validation error indicates whether
the validation error for network i is relatively high or low. Our estimate for the
generalization error of network i is this ratio multiplied by an overall scaling factor
being the estimated average generalization error:
$$\hat{E}_{\text{test}}(i) \approx \frac{E_{\text{validation}}(i)}{\hat{E}_{\text{validation}}(i)} \cdot \frac{1}{p_{\text{data}}}\sum_{\mu=1}^{p_{\text{data}}} E^\mu_{\text{validation}}\,.$$
Note that we implicitly make the assumption that the bias introduced by stopping
at the minimal error on the validation patterns is negligible, i.e., that the validation
patterns used for stopping a network can be considered as new to this network as
the completely independent test patterns.
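These estimates are a few lines of array arithmetic; a minimal sketch of ours (assuming, as is virtually certain for $n_{\text{run}} = 100$, that every pattern lands in at least one validation set):

```python
import numpy as np

def generalization_estimates(q, outputs, targets):
    """Estimate E_test(i) from validation residuals, following Section 3."""
    r = 0.5 * (outputs - targets[None, :]) ** 2          # r[i, mu]
    p = q.sum(axis=1)                                    # p_i
    e_val = (q * r).sum(axis=1) / p                      # observed E_validation(i)
    n_mu = q.sum(axis=0)                                 # n^mu: runs validating pattern mu
    e_mu = (q * r).sum(axis=0) / n_mu                    # E^mu_validation
    e_val_expected = (q * e_mu[None, :]).sum(axis=1) / p # "expected" validation error
    scale = e_mu.mean()                                  # estimated average generalization error
    return e_val / e_val_expected * scale
```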
4 SIMULATIONS
We compare the following methods for combining neural network outputs.
Individual: the average individual generalization error, i.e., the generalization error we will get on average when we decide to perform only one run. It
serves as a reference with which the other methods will be compared.
Bumping: the generalization error of the network with the lowest error on the data
available for training and stopping.
           bumping  bagging  ambiguity  balancing  unfair bumping  unfair balancing
  store 1     4%       9%       10%        17%          17%             24%
  store 2     5%      15%       22%        23%          23%             34%
  store 3    -7%      11%       18%        25%          25%             36%
  store 4     6%      11%       17%        26%          26%             31%
  store 5     6%      10%       22%        19%          22%             26%
  store 6     1%       8%       14%        19%          16%             26%
  mean        3%      11%       17%        22%          22%             30%

Table 1: Decrease in generalization error relative to the average individual generalization error as a result of several methods for combining neural networks trained to predict the sales figures for several stores.
Bagging: the generalization error when we take the average of all $n_{\text{run}}$ network
outputs as our prediction.
Ambiguity: the generalization error when the weighting factors are chosen to maximize the ambiguity, i.e., taking identical estimates for the individual generalization errors of all networks in expression (2).
Balancing: the generalization error when the weighting factors are chosen to minimize our estimate of the generalization error.
Unfair bumping: the smallest generalization error of an individual network, i.e., the
result of bumping if we had indeed chosen the network with the smallest
generalization error.
Unfair balancing: the lowest possible generalization error that we could obtain if
we had perfect estimates of the individual generalization errors.
The last two methods, unfair bumping and unfair balancing, only serve as some
kind of reference and can never be used in practice.
We applied these methods on a real-world problem concerning the prediction of
sales figures for several department stores in the Netherlands. For each store, 100
networks with 4 hidden units were trained and stopped on bootstrap samples of
about 500 patterns. The test set, on which the performances of the various methods
for combination were measured, consists of about 100 patterns. Inputs include
weather conditions, day of the week, previous sales figures, and season. The results
are summarized in Table 1, where we give the decrease in the generalization error
relative to the average individual generalization error.
As can be seen in Table 1, bumping hardly improves the performance. The reason is that the error on the data used for training and stopping is a lousy predictor of the generalization error, since some amount of overfitting is inevitable. The generalization performance obtained through bagging, i.e., first averaging over all outputs, can be proven to be always better than the average individual generalization error.
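The proof is a one-line application of Jensen's inequality (convexity of the squared error): for every test pattern $\nu$,

$$\Big(\frac{1}{n_{\text{run}}}\sum_i o_i^\nu - t^\nu\Big)^2 \;\le\; \frac{1}{n_{\text{run}}}\sum_i \big(o_i^\nu - t^\nu\big)^2\,,$$

so averaging over patterns gives $E_{\text{test}} \le \frac{1}{n_{\text{run}}}\sum_i E_{\text{test}}(i)$; the difference is exactly the nonnegative ambiguity term of Eq. (2) with $\alpha_i = 1/n_{\text{run}}$.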
Figure 1: Decrease of generalization error relative to the average individual generalization error as a function of the number of bootstrap replicates for different combination methods: bagging (dash-dot, star), ambiguity (dotted, star), bumping (dashed, star), balancing (solid, star), unfair bumping (dashed, circle), unfair balancing (solid, circle). Shown are the mean (left) and the standard deviation (right) of the decrease in percentages. Networks are trained and tested on the Boston housing database.
On these data bagging is definitely better than bumping, but also worse than maximizing the ambiguity. In all cases, except for store 5 where maximization of the ambiguity is slightly better, balancing is a clear winner among the "fair" methods.
The last column in Table 1 shows how much better we can get if we could find more
accurate estimates for the generalization errors of individual networks.
The method of balancing discards most of the networks, i.e., the solution to the
quadratic programming problem (2) under constraints (1) yields just a few weighting
factors different from zero (on average about 8 for this set of simulations). Balancing
is thus indeed a compromise between bagging, taking all networks into account, and
bumping, keeping just one network.
We also compared these methods on the well-known Boston housing data set concerning the median housing price in several tracts based on 13 mainly socio-economic
predictor variables (see e.g. [1] for more information). We left out 50 of the 506
available cases for assessment of the generalization performance. All other 456 cases
were used for training and stopping neural networks with 4 hidden units. The average individual mean squared error over all 300 bootstrap runs is 16.2, which is
comparable to the mean squared error reported in [1]. To study how the performance depends on the number of bootstrap replicates, we randomly drew sets of
n = 5,10,20,40 and 80 bootstrap replicates out of our ensemble of 300 replicates
and applied the combination methods on these sets. For each n we did this 48
times. Figure 1 shows the mean decrease in the generalization error relative to the
average individual generalization error and its standard deviation.
Again, balancing comes out best, especially for a larger number of bootstrap replicates. It seems that beyond say 20 replicates both bumping and bagging are hardly
helped by more runs, whereas both maximization of the ambiguity and balancing
still increase their performance. Bagging, fully taking into account all network predictions, yields the smallest variation; bumping, keeping just one of them, by far the
largest. Balancing and maximization of the ambiguity combine several predictions
and thus yield a variation that is somewhere in between.
5 CONCLUSION AND DISCUSSION
Balancing, a compromise between bagging and bumping, is an attempt to arrive
at better performances on regression problems. The crux in all this is to obtain
reasonable estimates for the quality of the different networks and to incorporate
these estimates in the calculation of the proper weighting factors (see [5, 9] for
similar ideas and related work in the context of stacked generalization).
Obtaining several estimators is computationally expensive. However, the notorious instability of feedforward neural networks hardly leaves us a choice. Furthermore, an
ensemble of bootstrapped neural networks can also be used to deduce (approximate)
confidence and prediction intervals (see e.g. [4]), to estimate the relevance of input
fields and so on. It has also been argued that combination of several estimators
destroys the structure that may be present in a single estimator [8]. Having hardly
any interpretable structure, neural networks do not seem to have a lot they can lose.
It is a challenge to show that an ensemble of neural networks does not only give
more accurate predictions, but also reveals more information than a single network.
References
[1] L. Breiman. Bagging predictors. Machine Learning, 24:123-140, 1996.
[2] B. Efron and R. Tibshirani. An Introduction to the Bootstrap. Chapman & Hall,
London, 1993.
[3] L. Hansen and P. Salomon. Neural network ensembles. IEEE Transactions on
Pattern Analysis and Machine Intelligence, 12:993-1001, 1990.
[4] T. Heskes. Practical confidence and prediction intervals. These proceedings, 1997.
[5] R. Jacobs. Methods for combining experts' probability assessments. Neural Computation, 7:867-888, 1995.
[6] A. Krogh and J. Vedelsby. Neural network ensembles, cross validation, and
active learning. In G. Tesauro, D. Touretzky, and T. Leen, editors, Advances
in Neural Information Processing Systems 7, pages 231-238, Cambridge, 1995.
MIT Press.
[7] P. Sollich and A. Krogh. Learning with ensembles: How over-fitting can be
useful. In D. Touretzky, M. Mozer, and M. Hasselmo, editors, Advances in
Neural Information Processing Systems 8, pages 190-196, San Mateo, 1996.
Morgan Kaufmann.
[8] R. Tibshirani and K. Knight. Model search and inference by bootstrap "bumping". Technical report, University of Toronto, 1995.
[9] D. Wolpert and W. Macready. Combining stacking with bagging to improve a
learning algorithm. Technical report, Santa Fe Institute, Santa Fe, 1996.
Adaptive On-line Learning in Changing Environments
Noboru Murata, Klaus-Robert Müller, Andreas Ziehe
GMD-First, Rudower Chaussee 5, 12489 Berlin, Germany
{mura,klaus,ziehe}@first.gmd.de
Shun-ichi Amari
Laboratory for Information Representation, RIKEN
Hirosawa 2-1, Wako-shi, Saitama 351-01, Japan
amari@zoo.riken.go.jp
Abstract
An adaptive on-line algorithm extending the learning of learning
idea is proposed and theoretically motivated. Relying only on gradient flow information it can be applied to learning continuous
functions or distributions, even when no explicit loss function is given and the Hessian is not available. Its efficiency is demonstrated
for a non-stationary blind separation task of acoustic signals.
1 Introduction
Neural networks provide powerful tools to capture the structure in data by learning.
Often the batch learning paradigm is assumed, where the learner is given all training examples simultaneously and allowed to use them as often as desired. In large
practical applications batch learning is often experienced to be rather infeasible and
instead on-line learning is employed.
In the on-line learning scenario only one example is given at a time and then discarded after learning. So it is less memory consuming and at the same time it fits well
into more natural learning, where the learner receives new information and should
adapt to it, without having a large memory for storing old data. On-line learning
has been analyzed extensively within the framework of statistics (Robbins & Monro [1951], Amari [1967] and others) and statistical mechanics (see e.g. Saad & Solla
[1951]' Amari [1967] and others) and statistical mechanics (see ego Saad & Solla
[1995]). It was shown that on-line learning is asymptotically as effective as batch
learning (cf. Robbins & Monro [1951]). However, this only holds if the appropriate learning rate $\eta$ is chosen. A too large $\eta$ spoils the convergence of learning. In earlier work on dichotomies, Sompolinsky et al. [1995] showed the effect on the rate of convergence of the generalization error of a constant, annealed and adaptive learning rate. In particular, the annealed learning rate provides an optimal convergence rate, however it cannot follow changes in the environment. Since on-line learning aims to follow the change of the rule which generated the data, Sompolinsky et al. [1995], Darken & Moody [1991] and Sutton [1992] proposed adaptive learning rates, which learn how to learn. Recently Cichocki et al. [1996] proposed an adaptive on-line learning algorithm for blind separation based on low-pass filtering to stabilize learning.
We will extend the reasoning of Sompolinsky et al. in several points: (1) we give
an adaptive learning rule for learning continuous functions (section 3) and (2) we
consider the case, where no explicit loss function is given and the Hessian cannot be
accessed (section 4). This will help us to apply our idea to the problem of on-line
blind separation in a changing environment (section 5).
2 On-line Learning
Let us consider an infinite sequence of independent examples $(z_1, y_1), (z_2, y_2), \ldots$. The purpose of learning is to obtain a network with parameter $w$ which can simulate the rule inherent to this data. To this end, the neural network modifies its parameter $w_t$ at time $t$ into $w_{t+1}$ by using only the next example $(z_{t+1}, y_{t+1})$ given by the rule. We introduce a loss function $l(z, y; w)$ to evaluate the performance of the network with parameter $w$. Let $R(w) = \langle l(z, y; w)\rangle$ be the expected loss or the generalization error of the network having parameter $w$, where $\langle\,\cdot\,\rangle$ denotes the average over the distribution of examples $(z, y)$. The parameter $w^*$ of the best machine is given by $w^* = \operatorname{argmin} R(w)$. We use the following stochastic gradient descent algorithm (see Amari [1967] and Rumelhart et al. [1986]):

$$w_{t+1} = w_t - \eta_t\, C(w_t)\, \frac{\partial}{\partial w}\, l(z_{t+1}, y_{t+1}; w_t), \qquad (1)$$

where $\eta_t$ is the learning rate which may depend on $t$ and $C(w_t)$ is a positive-definite matrix which may depend on $w_t$. The matrix $C$ plays the role of the Riemannian metric tensor of the underlying parameter space $\{w\}$.
When $\eta_t$ is fixed to be equal to a small constant $\eta$, $E[w_t]$ converges to $w^*$ and $\mathrm{Var}[w_t]$ converges to a non-zero matrix which is of order $O(\eta)$. It means that $w_t$ fluctuates around $w^*$ (see Amari [1967], Heskes & Kappen [1991]). If $\eta_t = c/t$ (annealed learning rate), $w_t$ converges to $w^*$ locally (Sompolinsky et al. [1995]). However, when the rule changes over time, an annealed learning rate cannot follow the changes fast enough since $\eta_t = c/t$ is too small.
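The two regimes are easy to see on a toy example (ours, not from the paper): a one-dimensional quadratic loss $l(z; w) = (w - z)^2/2$ with noisy examples $z \sim \mathcal{N}(w^*, 1)$.

```python
import numpy as np

rng = np.random.default_rng(0)
w_star, T = 1.0, 10000
w_const, w_anneal = 0.0, 0.0
for t in range(1, T + 1):
    z = w_star + rng.standard_normal()
    w_const  -= 0.1 * (w_const - z)         # fixed eta: keeps fluctuating around w*
    w_anneal -= (1.0 / t) * (w_anneal - z)  # eta = c/t: converges to w*

print(abs(w_const - w_star), abs(w_anneal - w_star))
```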
3 Adaptive Learning Rate
The idea of an adaptively changing $\eta_t$ was called learning of the learning rule (Sompolinsky et al. [1995]). In this section we investigate an extension of this idea to differentiable loss functions. Following their algorithm, we consider

$$w_{t+1} = w_t - \eta_t\, K^{-1}(w_t)\, \frac{\partial}{\partial w}\, l(z_{t+1}, y_{t+1}; w_t), \qquad (2)$$

$$\eta_{t+1} = \eta_t + \alpha\,\eta_t\Big(\beta\,\big(l(z_{t+1}, y_{t+1}; w_t) - \hat{R}\big) - \eta_t\Big), \qquad (3)$$
where $\alpha$ and $\beta$ are constants, $K(w_t)$ is the Hessian matrix $\partial^2 R(w_t)/\partial w\,\partial w$ of the expected loss function, and $\hat{R}$ is an estimator of $R(w^*)$. Intuitively speaking, the coefficient $\eta$ in Eq. (3) is controlled by the remaining error. When the error is large, $\eta$ takes a relatively large value. When the error is small, it means that the estimated parameter is close to the optimal parameter; $\eta$ approaches 0 automatically. However, for the above algorithm all quantities ($K$, $l$, $\hat{R}$) have to be accessible, which they are certainly not in general. Furthermore, $l(z_{t+1}, y_{t+1}; w_t) - \hat{R}$ could take negative values. Nevertheless, in order to still get an intuition of the learning behaviour, we use the continuous versions of (2) and (3), averaged with respect to the current input-output pair $(z_t, y_t)$, and we omit correlations and variances between the quantities $(\eta_t, w_t, l)$ for the sake of simplicity.

Noting that $\langle \partial l(z, y; w^*)/\partial w\rangle = 0$, we have the asymptotic evaluations

$$\Big\langle \frac{\partial}{\partial w}\, l(z, y; w_t) \Big\rangle \simeq K^*(w_t - w^*),$$
$$\big\langle l(z, y; w_t) - \hat{R} \big\rangle \simeq R(w^*) - \hat{R} + \frac{1}{2}(w_t - w^*)^T K^*(w_t - w^*),$$

with $K^* = \partial^2 R(w^*)/\partial w\,\partial w$. Assuming $R(w^*) - \hat{R}$ is small and $K(w_t) \simeq K^*$ yields
$$\frac{d}{dt} w_t = -\eta_t (w_t - w^*), \qquad \frac{d}{dt}\eta_t = \alpha\,\eta_t\Big(\frac{\beta}{2}(w_t - w^*)^T K^*(w_t - w^*) - \eta_t\Big). \qquad (4)$$

Introducing the squared error $e_t = \frac{1}{2}(w_t - w^*)^T K^*(w_t - w^*)$ gives rise to

$$\frac{d}{dt} e_t = -2\,\eta_t\, e_t, \qquad \frac{d}{dt}\eta_t = \alpha\,\eta_t\,(\beta\, e_t - \eta_t). \qquad (5)$$

The behavior of the above equation system is interesting: the origin $(0, 0)$ is its attractor and the basin of attraction has a fractal boundary. Starting from an adequate initial value, it has the solution of the form

$$e_t \propto \frac{1}{t}, \qquad \eta_t = \frac{1}{2}\cdot\frac{1}{t}. \qquad (6)$$

It is important to note that this $1/t$-convergence rate of the generalization error $e_t$ is the optimal order of any estimator $w_t$ converging to $w^*$. So we find that Eq. (4) gives us an on-line learning algorithm which converges with a fast rate. This holds also if the target rule is slowly fluctuating or suddenly changing. The technique used to prove convergence was to use the scalar distance $e_t$ in weight space. Note also that Eq. (6) holds only within an appropriate parameter range; for small $\eta$ and $w_t - w^*$, correlations and variances between $(\eta_t, w_t, l)$ can no longer be neglected.
4 Modification
From the practical point of view, (1) the Hessian $K^*$ of the expected loss or (2) the minimum value of the expected loss $R$ are in general not known, and (3) in some applications we cannot access the explicit loss function (e.g. blind separation). Let us therefore consider a generalized learning algorithm:
$$w_{t+1} = w_t - \eta_t\, f(z_{t+1}, y_{t+1}; w_t), \qquad (7)$$

where $f$ is a flow which determines the modification when an example $(z_{t+1}, y_{t+1})$ is given. Here we do not assume the existence of a loss function and we only assume that the averaged flow vanishes at the optimal parameter, i.e. $\langle f(z, y; w^*)\rangle = 0$. With a loss function, the flow corresponds to the gradient of the loss. We consider the averaged continuous equation and expand it around the optimal parameter:

$$\frac{d}{dt} w_t = -\eta_t \langle f(z, y; w_t)\rangle \simeq -\eta_t\, K^*(w_t - w^*), \qquad (8)$$
where $K^* = \langle \partial f(z, y; w^*)/\partial w\rangle$. Suppose that we have an eigenvector $v$ of the Hessian $K^*$ satisfying $v^T K^* = \lambda v^T$, and let us define

$$\xi_t = v^T \langle f(z, y; w_t)\rangle\,; \qquad (9)$$

then the dynamics of $\xi$ can be approximately represented as

$$\frac{d}{dt}\xi_t = -\lambda\,\eta_t\,\xi_t\,. \qquad (10)$$

By using $\xi$, we define a discrete and a continuous modification of the rule for $\eta$:

$$\eta_{t+1} = \eta_t + \alpha\,\eta_t\big(\beta\,|\xi_t| - \eta_t\big) \qquad\text{and}\qquad \frac{d}{dt}\eta_t = \alpha\,\eta_t\big(\beta\,|\xi_t| - \eta_t\big). \qquad (11)$$
Intuitively, $\xi$ corresponds to a one-dimensional pseudo-distance, where the average flow $f$ is projected down to a single direction $v$. The idea is to choose a clever direction such that it is sufficient to observe all dynamics of the flow only along this projection. In this sense the scalar $\xi$ is the simplest obtainable value by which to observe learning. Noting that $\xi$ is always positive or negative depending on its initial value and $\eta$ can be positive, these two equations (10) and (11) are equivalent to the equation system (5). Therefore their asymptotic solutions are

$$\xi_t \propto \frac{1}{t} \qquad\text{and}\qquad \eta_t = \frac{1}{\lambda}\cdot\frac{1}{t}\,. \qquad (12)$$
Again, similar to the last section, we have shown that the algorithm converges properly, however this time without using the loss or the Hessian. In this algorithm, an important problem is how to get a good projection $v$. Here we assume the following facts and approximate the previous algorithm: (1) the minimum eigenvalue of the matrix $K^*$ is sufficiently smaller than the second smallest eigenvalue, and (2) therefore, after a large number of iterations, the parameter vector $w_t$ will approach $w^*$ from the direction of the minimum eigenvector of $K^*$. Since under these conditions the evolution of the estimated parameter can be thought of as a one-dimensional process, any vector can be used as $v$ except for the vectors which are orthogonal to the minimum eigenvector. The most efficient vector will be the minimum eigenvector itself, which can be approximated (for a large number of iterations) by

$$v = \langle f\rangle / \|\langle f\rangle\|,$$

where $\|\cdot\|$ denotes the $L_2$ norm. Hence we can adopt $\xi = \|\langle f\rangle\|$. Substituting the instantaneous average of the flow by a leaky average, we arrive at
$$w_{t+1} = w_t - \eta_t\, f(z_{t+1}, y_{t+1}; w_t), \qquad (13)$$
$$r_{t+1} = (1 - \delta)\, r_t + \delta\, f(z_{t+1}, y_{t+1}; w_t), \qquad (0 < \delta < 1) \qquad (14)$$
$$\eta_{t+1} = \eta_t + \alpha\,\eta_t\big(\beta\,\|r_{t+1}\| - \eta_t\big), \qquad (15)$$

where $\delta$ controls the leakiness of the average and $r$ is used as an auxiliary variable to calculate the leaky average of the flow $f$. This set of rules is easy to compute. However, $\eta$ will now approach a small value because of fluctuations in the estimation of $r$, which depend on the choice of $\alpha$, $\beta$, $\delta$. In practice, to assure the stability of the algorithm, the learning rate in Eq. (13) should be limited to a maximum value $\eta_{\max}$ and a cut-off $\eta_{\min}$ should be imposed.
5 Numerical Experiment: an application to blind separation
In the following we will describe the blind separation experiment that we conducted (see e.g. Bell & Sejnowski [1995], Jutten & Herault [1991], Molgedey & Schuster [1994] for more details on blind separation). As an example we use two Sun audio files (sampling rate 8 kHz): "rooster" ($s_t^1$) and "space music" ($s_t^2$) (see Fig. 1). Both sources are mixed on the computer via $f_t = (\mathbb{1} + A)\,s_t$ for $0\,\mathrm{s} < t < 1.25\,\mathrm{s}$ and $3.75\,\mathrm{s} \le t \le 5\,\mathrm{s}$, and $f_t = (\mathbb{1} + B)\,s_t$ for $1.25\,\mathrm{s} \le t < 3.75\,\mathrm{s}$, using $A = (0~~0.9;~0.7~~0)$ and $B = (0~~0.8;~0.6~~0)$ as mixing matrices. So the rule switches twice in the given data. The goal is to obtain the sources $s_t$ by estimating $A$ and $B$, given only the measured mixed signals $f_t$. A change of the mixing is a scenario often encountered in real blind separation tasks, e.g. a speaker turns his head or moves during his utterances. Our on-line algorithm is especially suited to this non-stationary separation task, since adaptation is not limited by the above-discussed generic drawbacks of a constant learning rate as in Bell & Sejnowski [1995], Jutten & Herault [1991], Molgedey & Schuster [1994]. Let $u_t$ be the unmixed signals

$$u_t = (\mathbb{1} + T_t)^{-1} f_t, \qquad (16)$$

where $T_t$ is the estimated mixing matrix. Along the lines of Molgedey & Schuster
[1994] we use as the modification rule for $T_t$

$$\Delta T_t^{ij} \propto \eta_t\, f\big(\langle f_t^i u_t^j\rangle, \langle u_t^i u_t^j\rangle, \langle f_t^i u_{t-1}^j\rangle, \langle u_t^i u_{t-1}^j\rangle\big) \propto \eta_t\Big[\langle f_t^i u_t^j\rangle\langle u_t^i u_t^j\rangle + \langle f_t^i u_{t-1}^j\rangle\langle u_t^i u_{t-1}^j\rangle\Big], \quad (i, j = 1, 2,\; i \neq j),$$

where we substitute instantaneous averages with leaky averages

$$\langle f_t^i u_t^j\rangle_{\text{leaky}} = (1 - \epsilon)\,\langle f_{t-1}^i u_{t-1}^j\rangle_{\text{leaky}} + \epsilon\, f_t^i u_t^j\,.$$
Note that the necessary ingredients for the flow $f$ in Eqs. (13)-(14) are in this case simply the correlations at equal or different times; $\eta_t$ is computed according to Eq. (15). In Fig. 2 we observe the results of the simulation (for parameter details, see the figure caption). After a short time (t = 0.4 s) of large $\eta$ and strong fluctuations in $\eta$, the mixing matrix is estimated correctly. Until t = 1.25 s the learning rate adapts, cooling down approximately like $1/t$ (cf. Fig. 2c), which was predicted by Eq. (12) in the previous section, i.e. it finds the optimal rate for annealing. At the
point of the switch, where simple annealed learning would have failed to adapt to the sudden change, our adaptive rule increases $\eta$ drastically and is able to follow the switch within another 0.4 s resp. 0.1 s. Then again, the learning rate is cooled down automatically as intended. Comparing the mixed, original and unmixed signals in Fig. 1 confirms the accurate and fast estimate that we already observed in the mixing matrix elements. The same also holds for an acoustic cross check: for a small part of a second both signals are audible, then as time proceeds only one signal, and again after the switches both signals are audible, but only for a very short moment. The fading away of the signal is so fast that to the listener it seems that one signal is simply "switched off" by the separation algorithm.
Altogether we found an excellent adaptation behavior of the proposed on-line algorithm, which was also reproduced in other simulation examples omitted here.
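For completeness, a rough sketch (ours, not the authors' code) of how the correlation flow, the unmixing (16), and the adaptive rate (13)-(15) fit together; the initialization, sign and normalization conventions here are our own assumptions:

```python
import numpy as np

def separation_step(state, f_t, eps=0.01, delta=0.01, alpha=0.002, beta=1.0,
                    eta_min=1e-6, eta_max=0.1):
    """One on-line step: unmix, update leaky correlations, step the
    off-diagonal entries of T, and adapt eta."""
    T, C, u_prev, r, eta = state
    u_t = np.linalg.solve(np.eye(2) + T, f_t)           # Eq. (16)
    new = {('fu', 0): np.outer(f_t, u_t),               # <f_t^i u_t^j>
           ('uu', 0): np.outer(u_t, u_t),               # <u_t^i u_t^j>
           ('fu', 1): np.outer(f_t, u_prev),            # <f_t^i u_{t-1}^j>
           ('uu', 1): np.outer(u_t, u_prev)}            # <u_t^i u_{t-1}^j>
    for key, val in new.items():                        # leaky averaging
        C[key] = (1.0 - eps) * C[key] + eps * val
    flow = C[('fu', 0)] * C[('uu', 0)] + C[('fu', 1)] * C[('uu', 1)]
    np.fill_diagonal(flow, 0.0)                         # only i != j is updated
    T = T - eta * flow                                  # Eq. (13); sign convention ours
    r = (1.0 - delta) * r + delta * flow                # Eq. (14)
    eta = float(np.clip(eta + alpha * eta * (beta * np.linalg.norm(r) - eta),
                        eta_min, eta_max))              # Eq. (15) with cut-offs
    return (T, C, u_t, r, eta)

# One possible initialization:
# state = (np.zeros((2, 2)),
#          {k: np.zeros((2, 2)) for k in [('fu', 0), ('uu', 0), ('fu', 1), ('uu', 1)]},
#          np.zeros(2), np.zeros((2, 2)), 0.05)
```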
6 Conclusion
We gave a theoretically motivated adaptive on-line algorithm extending the work of
Sompolinsky et al. [1995]. Our algorithm applies to general feed-forward networks
and can be used to accelerate learning by the learning about learning strategy in the
difficult setting where (a) continuous functions or distributions are to be learned,
(b) the Hessian K is not available and (c) no explicit loss function is given. Note,
that if an explicit loss function or K is given, this additional information can be
incorporated easily, e.g. we can make use of the real gradient otherwise we only
rely on the flow. Non-stationary blind separation is a typical implementation of the
setting (a)-(c) and we use it as an application of the adaptive on-line algorithm in
a changing environment. Note that we can apply the learning rate adaptation to
most existing blind separation algorithms and thus make them feasible for a nonstationary environment. However, we would like to emphasize that blind separation
is just an example for the general adaptive on-line strategy proposed and applications of our algorithm are by no means limited to this scenario. Future work will
also consider applications where the rules change more gradually (e.g. drift).
References
Amari, S. (1967) IEEE Trans. EC 16(3):299-307.
Bell, T., Sejnowski, T. (1995) Neural Comp. 7:1129-1159.
Cichocki, A., Amari, S., Adachi, M., Kasprzak, W. (1996) Self-Adaptive Neural Networks for Blind Separation of Sources, ISCAS'96 (IEEE), Vol. 2, 157-160.
Darken, C., Moody, J. (1991) in NIPS 3, Morgan Kaufmann, Palo Alto.
Heskes, T.M., Kappen, B. (1991) Phys. Rev. A 44:2718-2726.
Jutten, C., Herault, J. (1991) Signal Processing 24:1-10.
Molgedey, L., Schuster, H.G. (1994) Phys. Rev. Lett. 72(23):3634-3637.
Robbins, H., Monro, S. (1951) Ann. Math. Statist. 22:400-407.
Rumelhart, D., McClelland, J.L. and the PDP Research Group (eds.) (1986) PDP Vol. 1, pp. 318-362, Cambridge, MA: MIT Press.
Saad, D., and Solla, S. (1995) Workshop at NIPS'95, see World-Wide-Web page: http://neural-server.aston.ac.uk/nips95/workshop.html and references therein.
Sompolinsky, H., Barkai, N., Seung, H.S. (1995) in Neural Networks: The Statistical Mechanics Perspective, pp. 105-130. Singapore: World Scientific.
Sutton, R.S. (1992) in Proc. 10th Nat. Conf. on AI, 171-176, MIT Press.
Figure 1: The source $s_t^2$ ("space music"), the mixture signal $f_t$, the unmixed signal $u_t$, and the separation error $u_t - s_t$ as functions of time in seconds.
Figure 2: Estimated mixing matrix $T_t$, evolution of the learning rate $\eta_t$ and inverse learning rate $1/\eta_t$ over time. Rule switches (t = 1.25 s, 3.75 s) are clearly observed as drastic changes in $\eta_t$. Asymptotic $1/t$ scaling in $\eta$ amounts to a straight line in $1/\eta_t$. Simulation parameters are $\alpha = 0.002$, $\beta = 20/\max\|\langle r\rangle\|$, $\epsilon = \delta = 0.01$. $\max\|\langle r\rangle\|$ denotes the maximal value of the past observations.
Radial Basis Function Networks and Complexity Regularization in Function Learning
Adam Krzyzak
Department of Computer Science
Concordia University
Montreal, Canada
krzyzak@cs.concordia.ca
Tamas Linder
Dept. of Math. & Comp. Sci.
Technical University of Budapest
Budapest, Hungary
linder@inf.bme.hu
Abstract
In this paper we apply the method of complexity regularization to derive estimation bounds for nonlinear function estimation using a single
hidden layer radial basis function network. Our approach differs from
the previous complexity regularization neural network function learning
schemes in that we operate with random covering numbers and $l_1$ metric entropy, making it possible to consider much broader families of activation functions, namely functions of bounded variation. Some constraints
previously imposed on the network parameters are also eliminated this
way. The network is trained by means of complexity regularization involving empirical risk minimization. Bounds on the expected risk in
terms of the sample size are obtained for a large class of loss functions.
Rates of convergence to the optimal loss are also derived.
1 INTRODUCTION
Artificial neural networks have been found effective in learning input-output mappings from noisy examples. In this learning problem an unknown target function is to be inferred from a set of independent observations drawn according to some unknown probability distribution from the input-output space $\mathbb{R}^d \times \mathbb{R}$. Using this data set the learner tries to determine a
function which fits the data in the sense of minimizing some given empirical loss function.
The target function mayor may not be in the class of functions which are realizable by the
learner. In the case when the class of realizable functions consists of some class of artificial
neural networks, the above problem has been extensively studied from different viewpoints.
In recent years a special class of artificial neural networks, the radial basis function (RBF)
networks have received considerable attention. RBF networks have been shown to be
the solution of the regularization problem in function estimation with certain standard
smoothness functionals used as stabilizers (see [5], and the references therein). Universal
convergence of RBF nets in function estimation and classification has been proven by
Krzyzak et al. [6]. Convergence rates of RBF approximation schemes have been shown to
be comparable with those for sigmoidal nets by Girosi and Anzellotti [4]. In a recent paper
Niyogi and Girosi [9] studied the tradeoff between approximation and estimation errors and
provided an extensive review of the problem.
In this paper we consider one hidden layer RBF networks. We look at the problem of
choosing the size of the hidden layer as a function of the available training data by means
of complexity regularization. Complexity regularization approach has been applied to
model selection by Barron [1], [2] resulting in near optimal choice of sigmoidal network
parameters. Our approach here differs from Barron's in that we are using II metric entropy instead of the supremum norm. This allows us to consider amore general class
of activation function, namely the functions of bounded variation, rather than a restricted
class of activation functions satisfying a Lipschitz condition. For example, activations with
jump discontinuities are allowed. In our complexity regularization approach we are able
to choose the network parameters more freely, and no discretization of these parameters is
required. For RBF regression estimation with squared error loss, we considerably improve
the convergence rate result obtained by Niyogi and Girosi [9].
In Section 2 the problem is formulated and two results on the estimation error of complexity
regularized RBF nets are presented: one for general loss functions (Theorem 1) and a
sharpened version of the first one for the squared loss (Theorem 2). Approximation bounds
are combined with the obtained estimation results in Section 3 yielding convergence rates
for function learning with RBF nets.
2 PROBLEM FORMULATION
The task is to predict the value of a real random variable $Y$ upon the observation of an $\mathbb{R}^d$-valued random vector $X$. The accuracy of the predictor $f : \mathbb{R}^d \to \mathbb{R}$ is measured by the expected risk

$$J(f) = \mathbf{E}\,L(f(X), Y),$$

where $L : \mathbb{R}\times\mathbb{R} \to \mathbb{R}_+$ is a nonnegative loss function. It will be assumed that there exists a minimizing predictor $f^*$ such that

$$J(f^*) = \inf_f J(f).$$

A good predictor $f_n$ is to be determined based on the data $(X_1, Y_1), \ldots, (X_n, Y_n)$, which are i.i.d. copies of $(X, Y)$. The goal is to make the expected risk $\mathbf{E}J(f_n)$ as small as possible, while $f_n$ is chosen from among a given class $\mathcal{F}$ of candidate functions.
In this paper the set of candidate functions $\mathcal{F}$ will be the set of single-layer feedforward neural networks with radial basis function activation units, and we let $\mathcal{F} = \bigcup_{k=1}^\infty \mathcal{F}_k$, where $\mathcal{F}_k$ is the family of networks with $k$ hidden nodes whose weight parameters satisfy certain constraints. In particular, for radial basis functions characterized by a kernel $K : \mathbb{R}_+ \to \mathbb{R}$, $\mathcal{F}_k$ is the family of networks

$$f(x) = \sum_{i=1}^k w_i\, K\big([x - c_i]^t A_i [x - c_i]\big) + w_0,$$

where $w_0, w_1, \ldots, w_k$ are real numbers called weights, $c_1, \ldots, c_k \in \mathbb{R}^d$, the $A_i$ are nonnegative definite $d\times d$ matrices, and $x^t$ denotes the transpose of the column vector $x$.
The complexity regularization principle for the learning problem was introduced by Vapnik
[10] and fully developed by Barron [1], [2] (see also Lugosi and Zeger [8]). It enables
the learning algorithm to choose the candidate class $\mathcal{F}_k$ automatically, from which it picks
the estimate function by minimizing the empirical error over the training data. Complexity
regularization penalizes the large candidate classes, which are bound to have small approximation error, in favor of the smaller ones, thus balancing the estimation and approximation
errors.
Let $\mathcal{F}$ be a subset of a space $\mathcal{X}$ of real functions over some set, and let $\rho$ be a pseudometric on $\mathcal{X}$. For $\epsilon > 0$ the covering number $N(\epsilon, \mathcal{F}, \rho)$ is defined to be the minimal number of closed $\epsilon$-balls whose union covers $\mathcal{F}$. In other words, $N(\epsilon, \mathcal{F}, \rho)$ is the least integer such that there exist $f_1, \ldots, f_N$ with $N = N(\epsilon, \mathcal{F}, \rho)$ satisfying

$$\sup_{f\in\mathcal{F}}\, \min_{1\le i\le N} \rho(f, f_i) \le \epsilon.$$

In our case, $\mathcal{F}$ is a family of real functions on $\mathbb{R}^m$, and for any two functions $f$ and $g$, $\rho$ is given by the empirical $l_1$ distance

$$\rho(f, g) = \frac{1}{n}\sum_{i=1}^n |f(z_i) - g(z_i)|,$$

where $z_1, \ldots, z_n$ are $n$ given points in $\mathbb{R}^m$. In this case we will use the notation $N(\epsilon, \mathcal{F}, \rho) = N(\epsilon, \mathcal{F}, z_1^n)$, emphasizing the dependence of the metric $\rho$ on $z_1^n = (z_1, \ldots, z_n)$. Let us define the families of functions $\mathcal{H}_k$, $k = 1, 2, \ldots$ by

$$\mathcal{H}_k = \{L(f(\cdot), \cdot) : f \in \mathcal{F}_k\}.$$
Thus each member of $\mathcal{H}_k$ maps $\mathbb{R}^{d+1}$ into $\mathbb{R}$. It will be assumed that for each $k$ we are given a finite, almost sure uniform upper bound on the random covering numbers $N(\epsilon, \mathcal{H}_k, Z_1^n)$, where $Z_1^n = ((X_1, Y_1), \ldots, (X_n, Y_n))$. We may assume without loss of generality that $N(\epsilon, \mathcal{H}_k)$ is monotone decreasing in $\epsilon$. Finally, assume that $L(f(X), Y)$ is uniformly almost surely bounded by a constant $B$, i.e.,

$$P\{L(f(X), Y) \le B\} = 1, \qquad f \in \mathcal{F}_k,\; k = 1, 2, \ldots \qquad (1)$$
The complexity penalty of the $k$th class for $n$ training samples is a nonnegative number $\Delta_{kn}$ satisfying

$$\Delta_{kn} \ge \sqrt{\frac{128 B^2 \log N(\Delta_{kn}/8, \mathcal{H}_k) + c_k}{n}}, \qquad (2)$$

where the nonnegative constants $c_k$ satisfy $\sum_{k=1}^\infty e^{-c_k} \le 1$. Note that since $N(\epsilon, \mathcal{H}_k)$ is nonincreasing in $\epsilon$, it is possible to choose such $\Delta_{kn}$ for all $k$ and $n$. The resulting complexity penalty optimizes the upper bound on the estimation error in the proof of Theorem 1 below.
We can now define our estimate. Let

$$f_{kn} = \operatorname*{argmin}_{f\in\mathcal{F}_k}\; \frac{1}{n}\sum_{j=1}^n L(f(X_j), Y_j),$$

that is, $f_{kn}$ minimizes over $\mathcal{F}_k$ the empirical risk for $n$ training samples. The penalized empirical risk is defined for each $f \in \mathcal{F}_k$ as

$$\tilde{J}_n(f) = \frac{1}{n}\sum_{j=1}^n L(f(X_j), Y_j) + \Delta_{kn}.$$

The estimate $f_n$ is then defined as the $f_{kn}$ minimizing the penalized empirical risk over all classes:

$$f_n = \operatorname*{argmin}_{f_{kn}:\, k \ge 1}\; \tilde{J}_n(f_{kn}). \qquad (3)$$
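Procedurally, the selection rule (3) is simple; a schematic sketch of ours (all names are hypothetical, and `penalty(k, n)` stands in for a $\Delta_{kn}$ satisfying (2)):

```python
def complexity_regularized_fit(data, class_fitters, penalty):
    """Return the estimate f_n of Eq. (3).  class_fitters[k-1](data) fits the
    empirical risk minimizer f_kn over F_k and returns (f_kn, empirical_risk);
    penalty(k, n) returns the complexity penalty Delta_kn."""
    n = len(data)
    best_f, best_score = None, float('inf')
    for k, fit in enumerate(class_fitters, start=1):
        f_kn, emp_risk = fit(data)
        score = emp_risk + penalty(k, n)   # penalized empirical risk
        if score < best_score:
            best_f, best_score = f_kn, score
    return best_f
```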
We have the following theorem for the expected estimation error of the above complexity
regularization scheme.
Theorem 1 For any $n$ and $k$ the complexity regularization estimate (3) satisfies

$$\mathbf{E}J(f_n) - J(f^*) \le \min_{k\ge1}\Big(R_{kn} + \inf_{f\in\mathcal{F}_k} J(f) - J(f^*)\Big),$$

where $R_{kn}$ is an estimation-error term determined by the complexity penalty $\Delta_{kn}$.

Assuming without loss of generality that $\log N(\epsilon, \mathcal{H}_k) \ge 1$, it is easy to see that the choice

$$\Delta_{kn} = \sqrt{\frac{128 B^2 \log N(B/\sqrt{n}, \mathcal{H}_k) + c_k}{n}} \qquad (4)$$

satisfies (2).
2.1 SQUARED ERROR LOSS

For the special case when

$$L(x, y) = (x - y)^2$$

we can obtain a better upper bound. The estimate will be the same as before, but instead of (2), the complexity penalty $\Delta_{kn}$ now has to satisfy

$$\Delta_{kn} \ge \frac{c_1 \log N(\Delta_{kn}/c_2, \mathcal{F}_k) + c_k}{n}, \qquad (5)$$

where $c_1 = 3499\,O^4$, $c_2 = 256\,O^3$, and $O = \max\{B, 1\}$. Here $N(\epsilon, \mathcal{F}_k)$ is a uniform upper bound on the random $l_1$ covering numbers $N(\epsilon, \mathcal{F}_k, X_1^n)$. Assume that the class $\mathcal{F} = \bigcup_k \mathcal{F}_k$ is convex, and let $\bar{\mathcal{F}}$ be the closure of $\mathcal{F}$ in $L_2(\mu)$, where $\mu$ denotes the distribution of $X$. Then there is a unique $\bar{f} \in \bar{\mathcal{F}}$ whose squared loss $J(\bar{f})$ achieves $\inf_{f\in\bar{\mathcal{F}}} J(f)$. We have the following bound on the difference $\mathbf{E}J(f_n) - J(\bar{f})$.
Theorem 2 Assume that $\mathcal{F} = \bigcup_k \mathcal{F}_k$ is a convex set of functions, and consider the squared error loss. Suppose that $|f(x)| \le B$ for all $x \in \mathbb{R}^d$ and $f \in \mathcal{F}$, and $P(|Y| > B) = 0$. Then the complexity regularization estimate with complexity penalty satisfying (5) gives

$$\mathbf{E}J(f_n) - J(\bar{f}) \le 2\min_{k\ge1}\Big(\Delta_{kn} + \inf_{f\in\mathcal{F}_k} J(f) - J(\bar{f})\Big) + \frac{c_1}{n}.$$
The proof of this result uses an idea of Barron [1] and a Bernstein-type uniform probability
inequality recently obtained by Lee et al. [7].
3 RBF NETWORKS
network is characterized by a kernel K : lR+ -+ JR. An RBF net of k nodes is of the fonn
k
f(x)
= L Wi K
([x - Ci]t A[x - Cil)
+ wo,
(6)
i=l
where wo, WI, .?. , Wk are real numbers called weights, CI, ... , Ck E ]Rd, and the Ai are
nonnegative definite d x d matrices. The kth candidate class :FA: for the function estimation
task is defined as the class of networks with k nodes which satisfy the weight condition
I:~=o tWil ~ b for a fixed b > 0:
:FII: =
{t.
Wi
K([x - Ci]1A[x - cil) + wo: ~ IWil ::; b}.
(7)
Radial Basis Function Networks and LOllrlDlexztv KeRru[airization
Let L(x, y) ==
201
Ix - yiP, and
J(/) == EI/(X) -
YIP,
(8)
where 1 ~ p < 00. Let JJ denote the probability measure induced by X. Define:F to be the
closui"e in V(JJ) ofthe convex hull of the functions bK([x - c]t A[x - c]) and the constant
function h( x) == 1, x E lRd , where fbi ~ b, c E lRd, and A varies over all nonnegative d x d
matrices. That is, :F is the closure of :F == Uk:Fk, where:Fk is given in (7). Let 9 E :F be
arbitrary. If we assume that IK I is uniformly bounded, then by Corollary 1 of Darken et ala
[3], we have for 1 :::; p ~ 2 that
(9)
r
where Ilf - 911?1'(1') denotes the LP(jl) norm (f If IPdjl) IIp, and.1"k is givenin (7).
The approximation error infJErk J(/) - J(f*) can be dealt with using this result if the
optimal 1* happens to be in :F. In this case, we obtain
inf J(/) - J(/*)
JErk
for all 1 ~ p
regression.
~
== O(ljVk)
2. Values of p close to 1 are of great importance for robust neural network
When the kernel $K$ has bounded total variation, it can be shown that $N(\epsilon, \mathcal{H}_k) \le (A_1/\epsilon)^{A_2 k}$, where the constants $A_1$, $A_2$ depend on $\sup_x |K(x)|$, the total variation $V$ of $K$, the dimension $d$, and on the constant $b$ in the definition (7) of $\mathcal{F}_k$. Then, if $1 \le p \le 2$, the following consequence of Theorem 1 can be proved for $L^p$ regression estimation.

Theorem 3 Let the kernel $K$ be of bounded variation and assume that $|Y|$ is bounded. Then for $1 \le p \le 2$ the error (8) of the complexity regularized estimate satisfies

$$\mathbf{E}J(f_n) - J(f^*) \le \min_{k\ge1}\left[O\!\left(\sqrt{\frac{k\log n}{n}}\right) + O\!\left(\frac{1}{\sqrt{k}}\right)\right] = O\!\left(\left(\frac{\log n}{n}\right)^{1/4}\right).$$

For $p = 1$, i.e., for $L^1$ regression estimation, this rate is known to be optimal within the logarithmic factor.
For squared error loss $J(f) = \mathbf{E}(f(X) - Y)^2$ we have $f^*(x) = \mathbf{E}(Y \mid X = x)$. If $f^* \in \bar{\mathcal{F}}$, then by (9) we obtain

$$\inf_{f\in\mathcal{F}_k} J(f) - J(f^*) = O(1/k). \qquad (10)$$

It is easy to check that the class $\bigcup_k \mathcal{F}_k$ is convex if the $\mathcal{F}_k$ are the collections of RBF nets defined in (7). The next result shows that we can get rid of the square root in Theorem 3.
Theorem 4 Assume that $K$ is of bounded variation. Suppose furthermore that $|Y|$ is a bounded random variable, and let $L(x, y) = (x - y)^2$. Then the complexity regularization RBF squared regression estimate satisfies

$$\mathbf{E}J(f_n) - \inf_{f\in\bar{\mathcal{F}}} J(f) \le 2\min_{k\ge1}\left(\inf_{f\in\mathcal{F}_k} J(f) - \inf_{f\in\bar{\mathcal{F}}} J(f) + O\!\left(\frac{k\log n}{n}\right)\right) + O\!\left(\frac{1}{n}\right).$$
If $f^* \in \bar{\mathcal{F}}$, this result and (10) give

$$\mathbf{E}J(f_n) - J(f^*) \le \min_{k\ge1}\left[O\!\left(\frac{k\log n}{n}\right) + O\!\left(\frac{1}{k}\right)\right] = O\!\left(\left(\frac{\log n}{n}\right)^{1/2}\right). \qquad (11)$$
This result sharpens and extends Theorem 3.1 of Niyogi and Girosi [9], where the weaker $O\big(\sqrt{k\log n/n}\big) + O(1/k)$ convergence rate was obtained (in a PAC-like formulation) for the squared loss of Gaussian RBF network regression estimation. The rate in (11) varies linearly
with dimension. Our result is valid for a very large class of RBF schemes, including the
Gaussian RBF networks considered in [9]. Besides having improved on the convergence
rate, our result has the advantage of allowing kernels which are not continuous, such as the
window kernel.
The above convergence rate results hold in the case when there exists an f^* minimizing the
risk which is a member of the L^p(\mu) closure of F = \bigcup_k F_k, where F_k is given in (7). In
other words, f^* should be such that for all \epsilon > 0 there exists a k and a member f of F_k
with \| f - f^* \|_{L^p(\mu)} < \epsilon. The precise characterization of \bar{F} seems to be difficult. However,
based on the work of Girosi and Anzellotti [4] we can describe a large class of functions
that is contained in \bar{F}.
Let H(x, t) be a real and bounded function of two variables x \in R^d and t \in R^m. Suppose
that \lambda is a signed measure on R^m with finite total variation \|\lambda\|. If g(x) is defined as

g(x) = \int_{R^m} H(x, t) \, \lambda(dt),

then g \in L^p(\mu) for any probability measure \mu on R^d. One can reasonably expect that g
can be approximated well by functions f(x) of the form

f(x) = \sum_{i=1}^{k} w_i H(x, t_i),

where t_1, ..., t_k \in R^m and \sum_{i=1}^{k} |w_i| \le \|\lambda\|. The case m = d and H(x, t) = G(x - t) is
investigated in [4], where a detailed description of function spaces arising from the different
choices of the basis function G is given. Niyogi and Girosi [9] extend this approach to
approximation by convex combinations of translates and dilates of a Gaussian function. In
general, we can prove the following
Lemma 1 Let

g(x) = \int_{R^m} H(x, t) \, \lambda(dt),    (12)

where H(x, t) and \lambda are as above. Define for each k \ge 1 the class of functions

G_k = \left\{ f(x) = \sum_{i=1}^{k} w_i H(x, t_i) : \sum_{i=1}^{k} |w_i| \le \|\lambda\| \right\}.

Then for any probability measure \mu on R^d and for any 1 \le p < \infty, the function g can be
approximated in L^p(\mu) arbitrarily closely by members of G = \bigcup_k G_k, i.e.,

\inf_{f \in G_k} \| f - g \|_{L^p(\mu)} \to 0 \quad \text{as} \quad k \to \infty.
To prove this lemma one need only slightly adapt the proof of Theorem 8.2 in [4], or proceed in a
more elementary way following the lines of the probabilistic proof of Theorem 1 of [6]. To
apply the lemma to the RBF networks considered in this paper, let m = d^2 + d, t = (A, c),
and H(x, t) = K([x - c]^t A[x - c]). Then we obtain that \bar{F} contains all the functions g
with the integral representation

g(x) = \int_{R^{d^2 + d}} K([x - c]^t A[x - c]) \, \lambda(dc \, dA),

for which \|\lambda\| \le b, where b is the constraint on the weights as in (7).
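As a loose illustration of the probabilistic proof idea, the sketch below (ours, not from the paper) samples centers from the normalized measure |λ|/||λ|| and attaches signed weights of magnitude ||λ||/k, so that the weight constraint of the lemma holds; λ is taken to be a hypothetical discrete measure so that sampling is trivial:

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical discrete lambda: signed point masses m_j at parameters (A_j, c_j)
cs = [np.zeros(2), np.ones(2), np.array([2.0, -1.0])]
As = [np.eye(2)] * 3
ms = np.array([0.5, -0.3, 0.2])              # total variation ||lambda|| = sum |m_j|
norm = np.abs(ms).sum()
K = lambda u: np.exp(-u)                     # a bounded kernel

def g(x):  # g(x) = integral of K([x-c]^t A [x-c]) against lambda
    return sum(m * K((x - c) @ A @ (x - c)) for m, c, A in zip(ms, cs, As))

def g_k(x, k=2000):  # k-node RBF approximation with sum |w_i| = ||lambda||
    idx = rng.choice(len(ms), size=k, p=np.abs(ms) / norm)
    w = np.sign(ms[idx]) * norm / k
    return sum(wi * K((x - cs[i]) @ As[i] @ (x - cs[i])) for wi, i in zip(w, idx))

x = np.array([0.3, 0.4])
print(g(x), g_k(x))   # the two values agree up to O(1/sqrt(k)) Monte Carlo noise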
Acknowledgements
This work was supported in part by NSERC grant OGP000270, Canadian National Networks
of Centers of Excellence grant 293 and OTKA grant F014174.
References
[1] A. R. Barron. Complexity regularization with application to artificial neural networks.
In G. Roussas, editor, Nonparametric Functional Estimation and Related Topics, pages
561-576. NATO ASI Series, Kluwer Academic Publishers, Dordrecht, 1991.
[2] A. R. Barron. Approximation and estimation bounds for artificial neural networks.
Machine Learning, 14:115-133, 1994.
[3] C. Darken, M. Donahue, L. Gurvits, and E. Sontag. Rate of approximation results
motivated by robust neural network learning. In Proc. Sixth Annual Workshop on
Computational Learning Theory, pages 303-309. Morgan Kaufmann, 1993.
[4] F. Girosi and G. Anzellotti. Rates of convergence for radial basis functions and neural
networks. In R. J. Mammone, editor, Artificial Neural Networks for Speech and Vision,
pages 97-113. Chapman & Hall, London, 1993.
[5] F. Girosi, M. Jones, and T. Poggio. Regularization theory and neural network architectures. Neural Computation, 7:219-267, 1995.
[6] A. Krzyzak, T. Linder, and G. Lugosi. Nonparametric estimation and classification
using radial basis function nets and empirical risk minimization. IEEE Transactions
on Neural Networks, 7(2):475-487, March 1996.
[7] W. S. Lee, P. L. Bartlett, and R. C. Williamson. Efficient agnostic learning of neural
networks with bounded fan-in. To be published in IEEE Transactions on Information
Theory, 1995.
[8] G. Lugosi and K. Zeger. Concept learning using complexity regularization. IEEE
Transactions on Information Theory, 42:48-54, 1996.
[9] P. Niyogi and F. Girosi. On the relationship between generalization error, hypothesis
complexity, and sample complexity for radial basis functions. Neural Computation,
8:819-842, 1996.
[10] V. N. Vapnik. Estimation of Dependencies Based on Empirical Data. Springer-Verlag,
New York, 1982.
203 | 1,185 | Learning Decision Theoretic Utilities Through
Reinforcement Learning
Magnus Stensmo
Terrence J. Sejnowski
Computer Science Division
University of California
Berkeley, CA 94720, U.S.A.
magnus@cs.berkeley.edu
Howard Hughes Medical Institute
The Salk Institute
10010 North Torrey Pines Road
La Jolla, CA 92037, U.S.A.
terry@salk.edu
Abstract
Probability models can be used to predict outcomes and compensate for
missing data, but even a perfect model cannot be used to make decisions
unless the utility of the outcomes, or preferences between them, are also
provided. This arises in many real-world problems, such as medical diagnosis, where the cost of the test as well as the expected improvement
in the outcome must be considered. Relatively little work has been done
on learning the utilities of outcomes for optimal decision making. In this
paper, we show how temporal-difference reinforcement learning (TD(λ))
can be used to determine decision theoretic utilities within the context of
a mixture model and apply this new approach to a problem in medical diagnosis. TD(λ) learning of utilities reduces the number of tests that have
to be done to achieve the same level of performance compared with the
probability model alone, which results in significant cost savings and increased efficiency.
1 INTRODUCTION
Decision theory is normative or prescriptive and can tell us how to be rational and behave
optimally in a situation [French, 1988]. Optimal here means to maximize the value of the
expected future outcome. This has been formalized as the maximum expected utility principle by [von Neumann and Morgenstern, 1947]. Decision theory can be used to make optimal choices based on probabilities and utilities. Probability theory tells us how probable
different future states are, and how to reason with and represent uncertainty information.
Utility theory provides values for these states so that they can be compared with each other.
A simple form of a utility function is a loss function. Decision theory is a combination of
probability and utility theory through expectation.
There has previously been a lot of work on learning probability models (neural networks,
mixture models, probabilistic networks, etc.) but relatively little on representing and reasoning about preference and learning utility models. This paper demonstrates how both linear utility functions (i.e., loss functions) and non-linear ones can be learned as an alternative
to specifying them manually.
Automated fault or medical diagnosis is an interesting and important application for decision theory. It is a sequential decision problem that includes complex decisions (What is
the most optimal test to do in a situation? When is it no longer effective to do more tests?),
and other important problems such as missing data (both during diagnosis, i.e., tests not yet
done, and in the database which learning is done from). We demonstrate the power of the
new approach by applying it to a real-world problem by learning a utility function to improve automated diagnosis of heart disease.
2 PROBABILITY, UTILITY AND DECISION THEORY MODELS
The system has separate probability and decision theory models. The probability model is
used to predict the probabilities for the different outcomes that can occur. By modeling the
joint probabilities these predictions are available no matter how many or few of the input
variables are available at any instant. Diagnosis is a missing data problem because of the
question-and-answer cycle that results from the sequential decision making process.
Our decision theoretic automated diagnosis system is based on hypotheses and deductions
according to the following steps:
1. Any number of observations are made. This means that the values of one or several
observation variables of the probability model are determined.
2. The system finds probabilities for the different possible outcomes using the joint
probability model to calculate the conditional probability for each of the possible
outcomes given the current observations.
3. Search for the next observation that is expected to be most useful for improving
the diagnosis according to the Maximum Expected Utility principle.
Each possible next variable is considered. The expected value of the system prediction with this variable observed minus the current maximum value before making the additional observation and the cost of the observation is computed and defined as the net value of information for this variable [Howard, 1966]. The variable
with the maximum of all of these is then the best next observation to make.
4. The steps 1-3 above are repeated until further improvements are not possible. This
happens when none of the net value of information values in step 3 is positive.
They can be negative since a positive cost has been subtracted.
Note that we only look ahead one step (called a myopic approximation [Gorry and Barnett,
1967]). This is in principle suboptimal; however, the reinforcement learning procedure described below can compensate for this. The optimal solution is to consider all possible sequences, but the search tree grows exponentially in the number of unobserved variables.
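A minimal sketch of the myopic step 3 above (ours; the toy diseases, test likelihoods, naive-Bayes posterior, and costs are all hypothetical simplifications, not the paper's mixture-model implementation):

import numpy as np

prior = np.array([0.6, 0.3, 0.1])            # P(disease)
lik = {"t1": np.array([0.9, 0.2, 0.5]),      # P(test positive | disease)
       "t2": np.array([0.3, 0.8, 0.5])}
cost = {"t1": 0.005, "t2": 0.010}            # observation costs in utility units
U = np.eye(3)                                # utility of choosing i when truth is j

def posterior(obs):                          # naive-Bayes style update
    p = prior.copy()
    for t, v in obs.items():
        p *= lik[t] if v else (1 - lik[t])
    return p / p.sum()

def best_eu(p):                              # max_a sum_j p_j U[a, j]
    return (U @ p).max()

def net_voi(t, obs):                         # myopic net value of information
    p = posterior(obs)
    p_pos = float(p @ lik[t])
    gain = p_pos * best_eu(posterior({**obs, t: True})) \
         + (1 - p_pos) * best_eu(posterior({**obs, t: False}))
    return gain - best_eu(p) - cost[t]       # can be negative (stop criterion)

print({t: net_voi(t, {}) for t in lik})      # ask the argmax; stop if all <= 0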
Joint probabilities are modeled using mixture models [McLachlan and Basford, 1988].
Such models can be efficiently trained using the Expectation-Maximization (EM) algorithm
[Dempster et al., 1977], which has the additional benefit that missing variable values in the
training data also can be handled correctly. This is important since most real-world data
sets are incomplete. More detail on the probability model can be found in [Stensmo and
Sejnowski, 1995; Stensmo, 1995]. This paper is concerned with the utility function part of
the decision theoretic model.
The utilities are values assigned to different states so that their usefulness can be compared
and actions are chosen to maximize the expected future utility. Utilities are represented as
preferences when a certain disease has been classified but the patient in reality has another
one [Howard, 1980; Heckerman et al., 1992]. For each pair of diseases there is a utility
value between 0 and 1, where a 0 means maximally bad and a 1 means maximally good.
This is a d x d matrix for d diseases, and the matrix can be interpreted as a kind of a loss
function. The notation is natural and helps for acquiring the values, which is a non-trivial
problem. Preferences are subjective contrary to probabilities which are objective (for the
purposes of this paper). For example, a doctor, a patient and the insurance company may
have different preferences, but the probabilities for the outcomes are the same.
Methods have been devised to convert perceived risk to monetary values [Howard, 1980].
Subjects were asked to answer questions such as: "How much would you have to be paid to
accept a one in a millionth chance of instant painless death r' The answers are recorded for
various low levels of risk. It has been empirically found that people are relatively consistent and that perceived risk is linear for low levels of probability. Howard defined the unit
micromort (mmt) to mean one in J millionth chance of instant painless death and [Heckerman et al., 1992] found that one subject valued 1 micromort to $20 (in 1988 US dollars)
linearly to within a factor of two. We use this to convert utilities in [0,1] units to dollar
values and vice versa.
Previous systems asked experts to supply the utility values, which can be very complicated,
or used some simple approximation. [Heckerman et al., 1992] used a utility value of 1 for
misclassification penalty when both diseases are malign or both are benign, and 0 otherwise
(see Figure 4, left). They claim that it worked in their system but this approximation should
reduce accuracy. We show how to adapt and learn utilities to find better ones.
3 REINFORCEMENT LEARNING OF UTILITIES
Utilities are adapted using a type of reinforcement learning, specifically the method of temporal differences [Sutton, 1988]. This method is capable of adjusting the utility values correctly even though a reinforcement signal is only received after each full sequence of questions leading to a diagnosis.
The temporal difference algorithm (TD(λ)) learns how to predict future values from past
experience. A sequence of observations is used; in our case they are the results of the medical tests that have been done. We used TD(λ) to learn how to predict the expected utility
of the final diagnosis.

Using the notation of Sutton, the function P_t predicts the expected utility at time t. P_t is a
vector of expected utilities, one for each outcome. In the linear form described above, P_t =
P(x_t, w_t) = w_t x_t, where w_t is a matrix of utility values and x_t is the vector of probabilities
of the outcomes, our state description. The objective is to learn the utility matrix w_t.
We use an intra-sequence version of the TD(λ) algorithm so that learning can occur during
normal operation of the system [Sutton, 1988]. The update equation is

w_{t+1} = w_t + \alpha [P(x_{t+1}, w_t) - P(x_t, w_t)] \sum_{k=1}^{t} \lambda^{t-k} \nabla_w P(x_k, w_t),    (1)

where \alpha is the learning rate and \lambda is a discount factor. With P_k = P(x_k, w_t) = w_t x_k and
e_t = \sum_{k=1}^{t} \lambda^{t-k} \nabla_w P(x_k, w_t) = \sum_{k=1}^{t} \lambda^{t-k} x_k, (1) becomes the two equations

w_{t+1} = w_t + \alpha \, w_t [x_{t+1} - x_t] \, e_t,
e_{t+1} = x_{t+1} + \lambda e_t,

starting with e_1 = x_1. These update equations were used after each question was answered.
When the diagnosis was done, the reinforcement signal z (considered to be observation
P_{t+1}) was obtained and the weights were updated: w_{t+1} = w_t + \alpha [z - w_t x_t] e_t. A final update of e_t was not necessary. Note that this method allows for the use of any differentiable
utility function, specifically a neural network, in the place of P(x_k, w_t).
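A compact sketch of one diagnosis episode under these updates (ours; the outer product between the prediction-difference vector and the eligibility trace is made explicit, and the episode data is hypothetical):

import numpy as np

def td_lambda_episode(W, xs, z, alpha=5e-4, lam=0.1):
    # W: utility matrix; xs: outcome-probability vectors, one per answered
    # question; z: terminal reinforcement vector (plays the role of P_{t+1})
    e = xs[0].copy()                              # e_1 = x_1
    for x_prev, x_next in zip(xs, xs[1:]):
        delta = W @ (x_next - x_prev)             # P_{t+1} - P_t for linear P = W x
        W = W + alpha * np.outer(delta, e)
        e = x_next + lam * e                      # e_{t+1} = x_{t+1} + lambda e_t
    return W + alpha * np.outer(z - W @ xs[-1], e)

rng = np.random.default_rng(1)
W = np.eye(5)                                     # 5 outcomes, as in Figure 4
xs = [rng.dirichlet(np.ones(5)) for _ in range(3)]
z = np.full(5, 0.40); z[2] = 0.47                 # classification value pair (.47, .40)
W = td_lambda_episode(W, xs, z)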
Preference is subjective. In this paper we investigated two examples of reinforcement. One
was to simply give the highest reinforcement (z = 1) on correct diagnosis and the lowest
(z = 0) for errors. This yielded a linear utility function or loss function that was the
unity matrix, which confirmed that the method works. When applied to a non-linear utility
function the result is non-trivial.
In the second example the reinforcement signal was modified by a penalty for the use of
a high number of questions, by multiplying each z above with (max_q - q)/(max_q - min_q),
where q is the number of questions used for the diagnostic sequence, and the minimum and
maximum numbers of questions are min_q and max_q, respectively. The results presented in
the next section used this reinforcement signal.
4 RESULTS
The publicly available Cleveland heart-disease database was used to test the method. It consists of 303 cases where the disorder is one of four types of heart-disease or its absence.
There are fourteen variables as shown in Figure 1. Continuous variables were converted into
a 1-of-N binary code based on their distributions among the cases in the database. Nominal
and categorical variables were coded with one unit per value. In total 96 binary variables
coded the 14 original variables.
To find the parameter values for the mixture model that was used for probability estimation, the EM algorithm was run until convergence [Stensmo and Sejnowski, 1995; Stensmo,
1995]. The classification error was 16.2%. To get this result all of the observation variables
were set to their correct values for each case. Note that all this information might not be
available in a real situation, and that the decision theory model was not needed in this case.
To evaluate how well the complete sequential decision process system does, we went
through each case in the database and answered the questions that came up according to
the correct values for the case. When the system completed the diagnosis sequence, the result was compared to the actual disease that was recorded in the database. The number of
questions that were answered for each case was also recorded (q above). After all of the
cases had been processed in this way, the average number of questions needed, its standard
 #   Observ.   Description                            Values                     Cost (mmt)   Cost ($)
 1   age       Age in years                           continuous                 0            0
 2   sex       Sex of subject                         male/female                0            0
 3   cp        Chest pain                             four types                 20           400
 4   trestbps  Resting blood pressure                 continuous                 40           800
 5   chol      Serum cholesterol                      continuous                 100          2000
 6   fbs       Fasting blood sugar                    < or > 120 mg/dl           100          2000
 7   restecg   Resting electrocardiographic result    five values                100          2000
 8   thalach   Maximum heart rate achieved            continuous                 100          2000
 9   exang     Exercise induced angina                yes/no                     100          2000
10   oldpeak   ST depression induced by exercise      continuous                 100          2000
               relative to rest
11   slope     Slope of peak exercise ST segment      up/flat/down               100          2000
12   ca        Number of major vessels colored        0-3                        100          2000
               by fluoroscopy
13   thal      Defect type                            normal/fixed/reversible    100          2000

 #   Disorder  Description                            Values
14   num       Heart disease                          No disease/four types

Figure 1: The Cleveland Heart Disease database. The database consists of 303 cases described by 14 variables. Observation costs are somewhat arbitrarily assigned and are given
in both dollars and converted to micromorts (mmt) in [0,1] units based on $20 per micromort
(one in 1 million chance of instant painless death).
deviation, and the number of errors were calculated. If the system had several best answers,
one was selected randomly.
Observation costs were assigned to the different variables according to Figure 1. Using the
full utility/decision model and the 0/1 approximation for the utility function (left part of
Figure 4), there were 29.4% errors. The results are summarized in Figure 2. Over the whole
data set an average of 4.42 questions was used, with a standard deviation of 2.02. Asking
about 4-5 questions instead of 13 is much quicker but unfortunately less accurate. This was
before the utilities were adapted.
With TD(λ) learning (Figure 3), the number of errors decreased to 16.2% after 85 repeated
presentations of all of the cases in random order. We varied λ from 0 to 1 in increments
of 0.1, and α over several orders of magnitude, to find the reported results. The resulting
average number of questions was 6.05 with a standard deviation of 2.08. The utility matrix
after 85 iterations is shown in Figure 4, with α=0.0005 and λ=0.1.
The price paid for increased robustness was an increase in the average number of questions
from 4.42 to 6.05, but the same accuracy was achieved using only less than half of them on
average. Many people intuitively think that half of the questions should be enough. There
is, however, no reason for this; furthermore there is no procedure to stop asking questions
if observations are chosen randomly.
In this paper a simple state description has been used, namely the predicted probabilities of
the outcomes. We have also tried other representations by including the test results in the
state description. On this data set similar results were obtained.
Model                                    Errors   # Questions   St. Dev.
Probability model only                   16.2%    13            -
0/1 approximation                        29.4%    4.42          2.02
After 85 iterations of TD(λ) learning    16.2%    6.05          2.08

Figure 2: Results on the Cleveland Heart Disease Database. The three methods are described in the text. The first method does not use a utility model. The 0/1 approximation
uses the matrix in Figure 4, left. The utility matrix that was learned by TD(λ) is shown in
Figure 4, right.
[Figure 3: two learning curves omitted. Left panel: average number of questions (roughly 4 to 7) versus Iteration (0-80). Right panel: errors in percent (roughly 10 to 30) versus Iteration (0-80).]

Figure 3: Learning graphs with discount-rate parameter λ=0.1, and learning rate α=0.0005
for the TD(λ) algorithm. One iteration is a presentation of all of the cases in random order.
5 SUMMARY AND DISCUSSION
We have shown how utilities or preferences can be learned for different expected outcomes
in a complex system for sequential decision making based on decision theory. Temporaldifferences reinforcement learning was efficient and effective.
This method can be extended in several directions. Utilities are usually modeled linearly in
decision theory (as with the misclassification utility matrix), since manual specification and
interpretation of the utility values then is quite straight-forward. There are advantages with
non-linear utility functions and, as indicated above, our method can be used for any utility
function that is differentiable.
Initial                    After 85 iterations
1 0 0 0 0                  0.8179  0.0698  0.0610  0.0435  0.0505
0 1 1 1 1                  0.0579  0.6397  0.2954  0.3331  0.6308
0 1 1 1 1                  0.0215  0.1799  0.6305  0.3269  0.6353
0 1 1 1 1                  0.0164  0.1430  0.2789  0.7210  0.6090
0 1 1 1 1                  0.0058  0.1352  0.2183  0.2742  0.8105

Figure 4: Misclassification utility matrices. The disorder no disease is listed in the first
row and column, followed by the four types of heart disease. Left: Initial utility matrix.
Right: After TD learning with discount-rate parameter λ=0.1 and learning rate α=0.0005.
Element u_ij (row i, column j) is the utility when outcome i has been chosen but when it
actually is j. Maximally good has value 1, and maximally bad has value 0.
An alternative to learning the utility or value function is to directly learn the optimal actions
to take in each state, as in Q-learning [Watkins and Dayan, 1992]. This would require one
to learn which question to ask in each situation instead of the utility values but would not
be directly analyzable in terms of maximum expected utility.
Acknowledgements
Financial support for M.S. was provided by the Wenner-Gren Foundations and the Foundation Blanceftor Boncompagni-Ludovisi, nee Bildt. The heart-disease database is from the
University of California, Irvine Repository of Machine Learning Databases and originates
from R. Detrano, Cleveland Clinic Foundation. Stuart Russell is thanked for discussions.
References
Dempster, A. P., Laird, N. M. and Rubin, D. B. (1977). Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B, 39,
1-38.
French, S. (1988). Decision Theory: An Introduction to the Mathematics of Rationality.
Ellis Horwood, Chichester, UK.
Gorry, G. A. and Barnett, G. O. (1967). Experience with a model of sequential diagnosis.
Computers and Biomedical Research, 1,490-507.
Heckerman, D. E., Horvitz, E. J. and Nathwani, B. N. (1992). Toward normative expert
systems: Part I. The Pathfinder project. Methods of Information in Medicine, 31, 90-105.
Howard, R. A. (1966). Information value theory. IEEE Transactions on Systems Science
and Cybernetics, SSC-2, 22-26.
Howard, R. A. (1980). On making life and death decisions. In Schwing, R. C. and Albers,
Jr., W. A., editors, Societal risk assessment: How safe is safe enough? Plenum Press,
New York, NY.
McLachlan, G. J. and Basford, K. E. (1988). Mixture Models: Inference and Applications
to Clustering. Marcel Dekker, Inc., New York, NY.
Stensmo, M. (1995). Adaptive Automated Diagnosis. PhD thesis, Royal Institute of Technology (Kungliga Tekniska Hogskolan), Stockholm, Sweden.
Stensmo, M. and Sejnowski, T. J. (1995). A mixture model system for medical and machine
diagnosis. In Tesauro, G., Touretzky, D. S. and Leen, T. K., editors, Advances in Neural
Information Processing Systems, vol. 7, pp 1077-1084. MIT Press, Cambridge, MA.
Sutton, R. S. (1988). Learning to predict by the method of temporal differences. Machine
Learning, 3,9-44.
von Neumann, J. and Morgenstern, O. (1947). Theory of Games and Economic Behavior.
Princeton University Press, Princeton, NJ.
Watkins, C. J. and Dayan, P. (1992). Q-learning. Machine Learning, 8, 279-292.
204 | 1,186 | Text-Based Information Retrieval Using
Exponentiated Gradient Descent
Ron Papka, James P. Callan, and Andrew G. Barto *
Department of Computer Science
University of Massachusetts
Amherst, MA 01003
papka@cs.umass.edu, callan@cs.umass.edu, barto@cs.umass.edu
Abstract
The following investigates the use of single-neuron learning algorithms to improve the performance of text-retrieval systems that
accept natural-language queries. A retrieval process is explained
that transforms the natural-language query into the query syntax
of a real retrieval system: the initial query is expanded using statistical and learning techniques and is then used for document ranking
and binary classification. The results of experiments suggest that
Kivinen and Warmuth's Exponentiated Gradient Descent learning
algorithm works significantly better than previous approaches.
1 Introduction
The following work explores two learning algorithms - Least Mean Squared (LMS)
[1] and Exponentiated Gradient Descent (EG) [2] - in the context of text-based
Information Retrieval (IR) systems. The experiments presented in [3] use connectionist learning models to improve the retrieval of relevant documents from a large
collection of text. Here, we present further analysis of those experiments. Previous
work in the area employs various techniques for improving retrieval [6, 7, 14]. The
experiments presented here show that EG works significantly better than widely
used ad hoc methods for finding a good set of query term weights.
The retrieval processes being considered operate on a collection of documents, a
natural-language query, and a training set of documents judged relevant or nonrelevant to the query. The query may be, for example, the information request
submitted through a web-search engine, or through the interface of a system with
* This material is based on work supported by the National Science Foundation, Library
of Congress, and Department of Commerce under cooperative agreement number EEC9209623. Any opinions, findings and conclusions or recommendations expressed in this
material are those of the author and do not necessarily reflect those of the sponsor.
domain-specific information such as legal, governmental, or news data maintained
as a collection of text. The query, expressed as complete or incomplete sentences, is
modified through a learning process that incorporates the terms in the test collection
that are important for improving retrieval performance. The resulting query can
then be used against collections similar in domain to the training collection.
Natural language query:
An insider-trading case.
IR system query using default weights:
#WSUM( 1.0 An 1.0 insider 1.0 trading 1.0 case );
After stop word and stemming process:
#WSUM( 1.0 insid 1.0 trade 1.0 case );
After Expansion and learning new weights:
#WSUM( 0.181284 insid 0.045721 trade 0.016127 case 0.088143 boesk
0.000001 ivan 0.026762 sec 0.052081 guilt 0.074493 drexel 0.000001 plead
0.003834 fraud 0.091436 takeov 0.018636 lawyer 0.000000 crimin 0.137799
alleg 0.057393 attorney 0.155781 charg 0.024237 scandal 0.000000 burnham
0.000000 lambert 0.026270 investig 0.000000 wall 0.000000 firm 0.000000
illeg 0.000000 indict 0.000000 prosecutor 0.000000 profit 0.000000 );
Figure 1: Query Transformation Process.
The query transformation process is illustrated in Figure 1. First, the natural-language query is transformed into one which can be used by the query-parsing
mechanism of the IR system. The weights associated with each term are assigned a
default value of 1.0, implying that each term is equally important in discriminating
relevant documents. The query then undergoes a stopping and stemming process,
by which morphological stemming and the elimination of very common words, called
stopwords, increases both the effectiveness and efficiency of a system [9]. The query
is subsequently expanded using a statistical term-expansion process producing terms
from the training set of documents. Finally, a learning algorithm is invoked to
produce new weights for the expanded query.
2 Retrieval Process
Text-based information retrieval systems allow the user to pose a query to a collection or a stream of documents. When a query q is presented to a collection
c, each document d \in c is examined and assigned a value relative to how well d
satisfies the semantics of the request posed by q. For any instance of the triple
<q, d, c>, the system determines an evaluation value attributed to d using the
function eval(q, d, c).

The evaluation function

eval(q, d, c) = \frac{\sum_i q_i d_i}{\sum_i q_i}

was used for this work, and is based on an implementation of INQUERY [8]. It is
collection statistics in addition to the current set of documents. Since the collection
may change over time, it may be necessary to change the query representation over
time; however, in what follows the training collection is assumed to be static, and
successful learning implies that the resulting query generalizes to similar collections.
An IR system can perform several kinds of retrieval tasks. This work is specifically concerned with two retrieval processes: document ranking and document
classification. A ranking of documents based on query q is achieved by sorting all
documents in a collection by evaluation value. Binary classification is achieved by
determining a threshold \theta such that for class R, eval(q, d, c) \ge \theta -> d \in R, and
eval(q, d, c) < \theta -> d \in \bar{R}, so that R is the set of documents from the collection that
are classified as relevant to the query, and \bar{R} is the set classified as non-relevant.
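A minimal sketch of ranking and thresholded classification with this evaluation function (ours; the toy query, documents, and threshold are hypothetical):

import numpy as np

def eval_qdc(q, d):
    # eval(q, d, c) = sum_i q_i d_i / sum_i q_i (normalized weighted sum)
    return float(q @ d) / float(q.sum())

q = np.array([0.18, 0.05, 0.02])                  # query term weights
docs = {"d1": np.array([0.60, 0.40, 0.00]),       # per-term belief values
        "d2": np.array([0.10, 0.90, 0.50])}

ranking = sorted(docs, key=lambda n: eval_qdc(q, docs[n]), reverse=True)
theta = 0.45                                      # threshold from training data
R = [n for n in docs if eval_qdc(q, docs[n]) >= theta]
print(ranking, R)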
Central to any IR system is a parsing process used for documents and queries,
which produces tokens called terms. The terms derived from a document are used
to build an inverted list structure which serves as an index to the collection.
The natural-language query is also parsed into a set of terms. Research-based IR
systems such as INQUERY, OKAPI [11], and SMART [5], assume that the co-occurrence of a term in a query and a document indicates that the document is
relevant to the query to some degree, and that a query with multiple terms requires
a mechanism by which to combine the evidence each co-occurrence contributes to
the document's degree of relevance to the query. The document representation for
such systems is a vector, each element of which is associated with a unique term in
the document. The values in the vector are produced by a term-evaluation function
comprised of a term frequency component, tf, and an inverse document frequency
component, idf, which are described in [8, 11]. The tf component causes the term-evaluation value to increase as a query-term's occurrence in the document increases,
and the idf component causes the term-evaluation value to decrease as the number
of documents in the collection in which the term occurs increases.
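A generic tf.idf-style term evaluation in this spirit (ours; INQUERY's exact formula includes smoothing and length-normalization constants, so the values below are only indicative):

import math

def term_eval(tf, doc_len, avg_doc_len, df, num_docs):
    # tf part grows with in-document frequency; idf part shrinks with the
    # number of documents (df) in which the term occurs
    tf_part = tf / (tf + 0.5 + 1.5 * doc_len / avg_doc_len)
    idf_part = math.log((num_docs + 0.5) / df) / math.log(num_docs + 1.0)
    return tf_part * idf_part

print(term_eval(tf=3, doc_len=900, avg_doc_len=806, df=120, num_docs=336310))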
3 Query Expansion
Though it is possible to learn weights for terms in the original query, better results are obtained by first expanding the query with additional terms that can
contribute to identifying relevant documents, and then learning the weights for
the expanded query. The optimal number of terms by which to expand a query is
domain-dependent, and query expansion can be performed using several techniques,
including thesaurus expansion and statistical methods [12]. The query expansion
process performed in this work is a two-step process: term selection followed by
weight assignment. The term selection process ranks all terms found in relevant
documents by an information metric described in [8]. The top n terms are used in
the expanded query. The experiments in this work used values of 50 and 1000 for n.
The most common technique for weight assignment is derived from a closed-form
function originally presented by Rocchio in [6], but our experiments show that a
single-neuron learning approach is more effective.
3.1 Rocchio Weights
We assume that the terms of the original query are stored in a vector t, and that
their associated weights are stored in q. Assuming that the new terms in the
expanded query are stored in t', the weights for q' can be determined using a method
originally developed by Rocchio that has been improved upon in [7, 8]. Using the
notation presented above, the weight assignment can be represented in the linear form:

q' = \alpha f(t) + \beta r(t', R_q, c) + \gamma \, nr(t', \bar{R}_q, c),

where f is a function operating on the terms in the original query, r is a function operating
on the term statistics available from the training set of relevant documents (R_q),
and nr is a function operating on the statistics from the non-relevant documents
(\bar{R}_q). The values for \alpha, \beta, and \gamma have been the focus of many IR
experiments, and 1.0, 2.0, and 0.5 have been found to work well with various
implementations of the functions f, r, and nr [7].
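A sketch of this assignment (ours; the centroid-style r and nr below are one common instantiation, with the sign convention for the non-relevant term made explicit, and are not necessarily INQUERY's exact functions):

import numpy as np

def rocchio_weights(q0, rel_docs, nonrel_docs, alpha=1.0, beta=2.0, gamma=0.5):
    # q0: original-term weights padded to the expanded vocabulary;
    # rel_docs / nonrel_docs: term-evaluation vectors for R_q and its complement
    r = rel_docs.mean(axis=0)
    nr = -nonrel_docs.mean(axis=0)     # non-relevant statistics enter negatively
    return alpha * q0 + beta * r + gamma * nr

q0 = np.array([1.0, 1.0, 1.0, 0.0, 0.0])          # 3 original + 2 expansion terms
rel = np.array([[0.5, 0.4, 0.4, 0.6, 0.5],
                [0.4, 0.5, 0.4, 0.5, 0.6]])
nonrel = np.array([[0.4, 0.1, 0.4, 0.1, 0.0]])
print(rocchio_weights(q0, rel, nonrel))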
3.2 LMS and EG
In the experiments that follow, LMS and EG were used to learn query term weights.
Both algorithms were used in a training process attempting to learn the association
between the set of training instances (documents) and their corresponding binary
classifications (relevant or non-relevant). A set of weights w is updated given an
input instance x and a target binary classification value y. The algorithms learn the
association between x and y perfectly if w \cdot x = y; otherwise the value (y - w \cdot x) is
the error or loss incurred. The task of the learning algorithm is to learn the values
of w for more than one instance of x.
The update rule for LMS is

w_{t+1} = w_t + r_t,  where  r_t = -2\eta_t (w_t \cdot x_t - y_t) x_t,

with the step-size \eta_t = 1/(x_t \cdot x_t). The update rule for EG is

w_{t+1,i} = \frac{w_{t,i} \, e^{r_{t,i}}}{\sum_{j=1}^{N} w_{t,j} \, e^{r_{t,j}}},

where

r_{t,i} = -2\eta_t (w_t \cdot x_t - y_t) x_{t,i}  and  \eta_t = \frac{2}{3 (\max_i(x_{t,i}) - \min_i(x_{t,i}))^2}.
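A sketch of both single-neuron updates exactly as written above (ours; the toy training stream is hypothetical):

import numpy as np

def lms_step(w, x, y):
    eta = 1.0 / (x @ x)                       # step-size 1/(x_t . x_t)
    return w - 2.0 * eta * (w @ x - y) * x    # additive update

def eg_step(w, x, y):
    eta = 2.0 / (3.0 * (x.max() - x.min()) ** 2)
    r = -2.0 * eta * (w @ x - y) * x
    w = w * np.exp(r)                         # multiplicative exponential update
    return w / w.sum()                        # renormalize the weights

rng = np.random.default_rng(0)
w_lms = np.full(4, 0.25); w_eg = np.full(4, 0.25)
for _ in range(1000):
    x = rng.random(4)
    y = 0.47 if x[0] > 0.5 else 0.40          # targets from the (.47, .40) pair
    w_lms, w_eg = lms_step(w_lms, x, y), eg_step(w_eg, x, y)
print(w_lms, w_eg)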
There are several fundamental differences between LMS and EG; the most salient
is that EG has a multiplicative exponential update rule, while LMS is additive.
A less obvious difference is the derivation of these two update rules. Kivinen and
Warmuth [2] show that both rules are approximately derivable from an optimization task that minimizes the linear combination of a distance and a loss function: distance(w_{t+1}, w_t) + \eta_t \, loss(y_t, w_t \cdot x_t). But the distance component for the
derivation leading to the LMS update rule uses the squared Euclidean distance
\| w_{t+1} - w_t \|_2^2, while the derivation leading to the EG update rule uses the relative entropy \sum_{i=1}^{N} w_{t+1,i} \ln(w_{t+1,i}/w_{t,i}). Entropy metrics had previously been used as the
loss component [4].
One purpose of Kivinen and Warmuth's work was to describe loss bounds for these
algorithms; however, they also observed that EG suffers significantly less from irrelevant attributes than does LMS. This hypothesis was tested in the experiments
conducted for this work.
4
Experiments
Experiments were conducted on 100 natural-language queries . The queries were
manually transformed into INQUERY syntax, expanded using a statistical technique described in [8], and then given a weight assignment as a result of a learning
process, One set of experiments expanded each query by 50 terms and another
set of experiments expanded each query by 1000 terms. The purpose of the latter
was to test the ability of each algorithm to learn in the presence of many irrelevant
attributes.
4.1 Data
The queries used are the description fields of information requests developed for
Text Retrieval Conferences (TREC) [10] . The first set of queries was taken from
TREC topics 51-100 and the second set from topics 101-150, for a total of 100
queries. After stopping and stemming, the average number of terms remaining
before expansion was 8.34 terms.
Training and testing for all queries was conducted on subsets of the Tipster collection, which currently contains 3.4 gigabytes of text, including 206,201 documents
whose relevance to the TREC topics has been evaluated. The collection is partitioned into 3 volumes. The judged documents from volumes 1 and 2 were used
for training, while the documents from volume 3 were used for testing. Volumes 1
and 2 contain 741,856 documents from the Associated Press(1988-9), Department
of Energy abstract, Federal Register(1988-9), Wall Street Journal(1987-91), and
Ziff-Davis Computer-select articles. Volume 3 contains 336,310 documents from
Associated Press(1990), San Jose Mercury News(1991), and Ziff-Davis articles.
Only a subset of the data for the TREC-Tipster environment has been judged.
Binary judgments are assessed by humans for the top few thousand documents that
were retrieved for each query by participating systems from various commercial and
research institutions. Based on the judged documents available for volumes 1 and
2, on average 280 relevant documents and 1236 non-relevant documents were used
to train each query.
4.2 Training Parameters
Rocchio weights were assigned based on coefficients described in Section 3.1. LMS
and EG update rules were applied using 100,000 random presentations of training
instances. It was empirically determined that this number of presentations was
sufficient to allow both learning algorithms to produce better query weights than
the Rocchio assignment based on performance metrics calculated using the training
instances.
In reality, of course, the number of documents that will be relevant to a particular query is much smaller than the number of documents that are non-relevant.
This property gives rise to the question of what is an appropriate sampling bias
of training instances, considering that the ratio of relevant to non-relevant documents approaches 0 in the limit. In the following experiments, LMS benefitted from
uniform random sampling from the set of training instances, while EG benefitted
from a balanced sampling, that is uniform random sampling from relevant training
instances on even iterations and from non-relevant instances on odd iterations.
A pocketing technique was applied to the learning algorithms [13]. The purpose
of this technique is to find a set of weights that optimize a specific user's utility
function. In the following experiments, weights were tested every 1000 iterations
using a recall and precision performance metric. If a set of weights produced a new
performance-metric maximum, it was saved. The last set saved was assumed to be
the result of the algorithm, and was used for testing.
A binary classification value pair (A, B) is supplied as the target for training, where
A is the classification value for relevant documents, and B is the classification
value for non-relevant documents. Using the standard classification value pair (1,
0), INQUERY's document representation inhibits learning due to the large error
caused by these unattainable values. Therefore, testing was done and resulted in
the observation that .4 was the lowest attainable evaluation value for a document,
and .47 appeared to be a good classification value for relevant documents. The
classification value pair used for both the LMS and EG algorithms was thus (.47,
.40).
4.3 Evaluation
In the experiments that follow, R-Precision (RP) was used to evaluate ranking performance, and a new metric, Lower Bound Accuracy (LBA) was used to evaluate
classification. Both metrics make use of recall and precision, which are defined as
follows: Assume there exists a set of documents sorted by evaluation value and a
process that has performed classification, and that a = number of relevant documents classified as relevant, b = number of non-relevant documents classified as
relevant, c = number of relevant documents classified as non-relevant, and d =
number of non-relevant documents classified as non-relevant; then Recall = a/(a + c),
and Precision = a/(a + b) [3].
Precision and recall can be calculated at any cut-off point in the sorted list of
documents. R-Precision is calculated using the top n documents, where n is the
number of relevant training documents available for a query.
Lower Bound Accuracy (LBA) is a metric that assumes the minimum of a classifier's
accuracy with respect to relevant documents and its accuracy with respect to non-relevant documents. It is defined as LBA = min(a/(a + c), d/(b + d)). An LBA value can be
interpreted as the lower bound of the percent of instances a classifier will correctly
classify, regardless of an imbalance between the actual number of relevant and non-relevant documents. This metric requires a threshold \theta. The threshold is taken
to be the evaluation value of the document at a cut-off point in the sorted list of
training documents where LBA is maximized. Hence, \theta = \max_i LBA(d_i, R_q, \bar{R}_q),
where d_i is the ith document in the sorted list.
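A sketch of computing R-Precision and LBA, with the threshold chosen to maximize LBA on training data (ours; the scores and relevance judgments are hypothetical):

import numpy as np

def lba(scores, labels, theta):
    pred = scores >= theta
    a = np.sum(pred & labels);  c = np.sum(~pred & labels)
    b = np.sum(pred & ~labels); d = np.sum(~pred & ~labels)
    return min(a / (a + c), d / (b + d))      # min of the per-class accuracies

def r_precision(scores, labels):
    n = int(labels.sum())                     # number of relevant documents
    return labels[np.argsort(-scores)[:n]].mean()

scores = np.array([0.46, 0.44, 0.43, 0.41, 0.40])
labels = np.array([True, True, False, True, False])
theta = max(scores, key=lambda t: lba(scores, labels, t))
print(r_precision(scores, labels), lba(scores, labels, theta))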
4.4 Results
Query type   RP     LBA
NL           22.0   88.6
EXP          28.7   92.0
ROC          33.4   94.0
LMS          32.5   89.8
EG           40.3   95.1

Table 1: Query expansion by 50 terms
The following results show the ability of a query weight assignment to generalize.
The weights are derived from a subset of the training collection, and the values
reported are based on performance on the test collection. The results of the 50-term-expansion experiments are listed in Table 1.¹ They indicate that the expanded
query has an advantage over the original query, and that the EG-trained query generalized better than the other algorithms, while Rocchio appears to be the next
best. In terms of ranking, EG gives rise to a 20% improvement over the Rocchio assignment, and realizes 1.2% improvement in terms of classification. This apparently
slight improvement in classification in fact implies that EG is correctly classifying
at least 3000 documents more than the other approaches.
Table 2 shows a cross-algorithm analysis in which any two algorithms can be compared. The analysis is calculated using both RP and LBA over all queries. An
entry for row i column j indicates the number of queries for which the performance
of algorithm i was better than algorithm j. Based on sign tests with α = .01, the
results confirm that EG significantly generalized better than the other algorithms.²
Query counts: RP - LBA
Query type   NL        EXP       ROC       LMS       EG
NL           -         30 - 37   18 - 13   24 - 53   12 - 13
EXP          60 - 62   -         9 - 17    35 - 66   11 - 19
ROC          71 - 86   72 - 79   -         53 - 73   17 - 37
LMS          66 - 46   54 - 34   38 - 26   -         13 - 15
EG           79 - 85   80 - 80   70 - 62   74 - 84   -

Table 2: Cross Algorithm Analysis over 100 queries expanded by 50 terms.
As explained in Section 4.3, the thresholds used to calculate the LBA performance
metric are determined by obtaining an evaluation value in the training data corresponding to the cut-off point where LBA was maximized. The threshold analysis
in Table 3 shows the best attainable classification performance against performance
actually achieved. The results indicate that there is still room for improvement;
however, they also indicate that this methodology is acceptable.
The results for queries expanded by 1000 terms are listed in Table 4. Since the
average document length in the Tipster collection is 806 terms (non-unique), at
least 20% of the terms in the expanded query are generally irrelevant to a particular
document. The results indicate that irrelevant attributes prevent all but EG from
generalizing well. Comparing the performance of EG and LMS adds evidence to
the Kivinen-Warmuth hypothesis that EG yields a smaller loss than LMS, given
many irrelevant attributes. Juxtaposing the results of the 50-term and 1000-term-expansion experiments suggests that using a statistical filter for selecting the top few
terms is better than expanding the query by many terms and having the learning
algorithm perform term selection.
5 Conclusion
The experiment results presented here provide evidence that single-neuron learning
algorithms can be used to improve retrieval performance in IR systems. Based on
performance metrics that test the quality of a classification process and a document ranking process, the weights produced by EG were consistently better than
previously available methods.
¹R-Precision (RP) and Lower Bound Accuracy (LBA) performance values are normalized to a 0-100 scale. Values are reported for: NL = original natural language query; EXP
= expanded query with weights set to 1.0; ROC = expanded query with weights based on
Rocchio assignment; LMS = expanded query with weights based on LMS learning; and
EG= expanded query with weights based on EG learning.
²Recent experiments using the optimization algorithm DFO (presented in [7]) suggest
that certain parameter settings make it competitive with EG.
Query type   Potential LBA   Actual LBA
NL           91.9            88.6
EXP          95.5            92.0
ROC          96.7            94.0
LMS          92.6            89.8
EG           97.1            95.1

Table 3: Threshold Analysis: Query expansion by 50 terms.
Query type   RP     LBA
NL           22.0   88.6
EXP          14.4   76.5
ROC          19.7   82.5
LMS          20.4   86.7
EG           35.0   93.2

Table 4: Query expansion by 1000 terms.
References
[1] B. Widrow and M. Hoff, "Adaptive switching circuits", In 1960 IRE WESCON
Convention Record, pp. 96-104, New York, 1960.
[2] J. Kivinen and Manfred Warmuth, "Exponentiated Gradient Versus Gradient
Descent for Linear Predictors", UCSC Tech report: UCSC-CRL-94-16, June
21, 1994.
[3] D. Lewis, R. Schapire, J. Callan, and R. Papka, "Training Algorithms for
Linear Text Classifiers", Proceedings of SIGIR 1996.
[4] B.S. Wittner and J.S. Denker, "Strategies for Teaching Layered Networks
Classification Tasks", NIPS proceedings, 1987.
[5] G. Salton, "Relevance feedback and optimization of retrieval effectiveness. In
The Smart system - experiments in automatic document processing", 324-336.
Englewood Cliffs, NJ: Prentice Hall Inc., 1971.
[6] J.J. Rocchio, "Relevance Feedback in Information Retrieval. In The Smart System - Experiments in Automatic Document Processing", 313-323. Englewood
Cliffs, NJ: Prentice Hall Inc., 1971.
[7] C. Buckley and G. Salton, "Optimization of Relevance Feedback Weights",
Proceedings of SIGIR 95, Seattle WA, 1995.
[8] J. Allan, L. Ballesteros, J. Callan, W.B. Croft, and Z. Lu, "Recent Experiments with INQUERY", TREC-4 Proceedings, 1995.
[9] M. Porter, "An Algorithm for Suffix Stripping", Program, Vol 14(3), pp. 130-137, 1980.
[10] D. Harman, Proceedings of Text REtrieval Conferences (TREC), 1993-5.
[11] S.E. Robertson, S. Walker, S. Jones, M.M. Hancock-Beaulieu, and
M. Gatford, "Okapi at TREC-3", TREC-3 Proceedings, 1994.
[12] G. Salton, Automatic Text Processing, Addison-Wesley Publishing Co, Massachusetts, 1989.
[13] S.I. Gallant, "Optimal Linear Discriminants", Proceedings of International Conference on Pattern Recognition, 1986.
[14] B.T. Bartell, "Optimizing Ranking Functions: A Connectionist Approach to
Adaptive Information Retrieval", Ph.D. Thesis, UCSD 1994.
205 | 1,187 | Support Vector Method for Function Approximation, Regression Estimation, and Signal Processing
Vladimir Vapnik
AT&T Research
101 Crawfords Corner
Holmdel, NJ 07733
vlad@research.att.com
Steven E. Golowich
Bell Laboratories
700 Mountain Ave.
Murray Hill, NJ 07974
golowich@bell-labs.com
Alex Smola*
GMD First
Rudower Chaussee 5
12489 Berlin
asm@big.att.com
Abstract
The Support Vector (SV) method was recently proposed for estimating regressions, constructing multidimensional splines, and
solving linear operator equations [Vapnik, 1995]. In this presentation we report results of applying the SV method to these problems.
1 Introduction
The Support Vector method is a universal tool for solving multidimensional function
estimation problems. Initially it was designed to solve pattern recognition problems,
where in order to find a decision rule with good generalization ability one selects
some (small) subset of the training data, called the Support Vectors (SVs). Optimal
separation of the SVs is equivalent to optimal separation of the entire data.
This led to a new method of representing decision functions where the decision
functions are a linear expansion on a basis whose elements are nonlinear functions
parameterized by the SVs (we need one SV for each element of the basis). This
type of function representation is especially useful for high dimensional input space:
the number of free parameters in this representation is equal to the number of SVs
but does not depend on the dimensionality of the space.
Later the SV method was extended to real-valued functions. This allows us to
expand high-dimensional functions using a small basis constructed from SVs. This
*smola@prosun.first.gmd.de
novel type of function representation opens new opportunities for solving various
problems of function approximation and estimation.
In this paper we demonstrate that using the SV technique one can solve problems that in classical techniques would require estimating a large number of free parameters. In particular we construct one and two dimensional splines with an arbitrary number of grid points. Using linear splines we approximate non-linear functions. We show that by reducing the requirements on the accuracy of approximation, one decreases the number of SVs, which leads to data compression. We also show that the SV technique is a useful tool for regression estimation. Lastly we demonstrate that using the SV function representation for solving inverse ill-posed problems provides an additional opportunity for regularization.
2 SV method for estimation of real functions

Let x \in R^n and y \in R^1. Consider the following set of real functions: a vector x is mapped into some a priori chosen Hilbert space, where we define functions that are linear in their parameters

    y = f(x, w) = \sum_{i=1}^{\infty} w_i \phi_i(x),  \quad  w = (w_1, \ldots, w_N, \ldots) \in \Omega    (1)
In [Vapnik, 1995] the following method for estimating functions in the set (1) based on training data (x_1, y_1), \ldots, (x_l, y_l) was suggested: find the function that minimizes the following functional:

    R(w) = \frac{1}{l} \sum_{i=1}^{l} |y_i - f(x_i, w)|_\epsilon + \gamma (w, w),    (2)

where

    |y - f(x, w)|_\epsilon = \begin{cases} 0 & \text{if } |y - f(x, w)| < \epsilon, \\ |y - f(x, w)| - \epsilon & \text{otherwise}, \end{cases}    (3)

(w, w) is the inner product of two vectors, and \gamma is some constant. It was shown that the function minimizing this functional has the form:

    f(x, \alpha, \alpha^*) = \sum_{i=1}^{l} (\alpha_i^* - \alpha_i) (\Phi(x_i), \Phi(x)) + b    (4)

where \alpha_i, \alpha_i^* \ge 0 with \alpha_i \alpha_i^* = 0, and (\Phi(x_i), \Phi(x)) is the inner product of two elements of Hilbert space.

To find the coefficients \alpha_i^* and \alpha_i one has to solve the following quadratic optimization problem: maximize the functional

    W(\alpha^*, \alpha) = -\epsilon \sum_{i=1}^{l} (\alpha_i^* + \alpha_i) + \sum_{i=1}^{l} y_i (\alpha_i^* - \alpha_i) - \frac{1}{2} \sum_{i,j=1}^{l} (\alpha_i^* - \alpha_i)(\alpha_j^* - \alpha_j) (\Phi(x_i), \Phi(x_j)),    (5)

subject to constraints

    \sum_{i=1}^{l} (\alpha_i - \alpha_i^*) = 0,  \quad  0 \le \alpha_i, \alpha_i^* \le C,  \quad  i = 1, \ldots, l.    (6)
The important feature of the solution (4) of this optimization problem is that only some of the coefficients (\alpha_i^* - \alpha_i) differ from zero. The corresponding vectors x_i are called Support Vectors (SVs). Therefore (4) describes an expansion on SVs.

It was shown in [Vapnik, 1995] that to evaluate the inner products (\Phi(x_i), \Phi(x)), both in the expansion (4) and in the objective function (5), one can use the general form of the inner product in Hilbert space. According to Hilbert space theory, to guarantee that a symmetric function K(u, v) has an expansion

    K(u, v) = \sum_{k=1}^{\infty} a_k \psi_k(u) \psi_k(v)

with positive coefficients a_k > 0, i.e. to guarantee that K(u, v) is an inner product in some feature space \Phi, it is necessary and sufficient that the condition

    \int K(u, v) g(u) g(v) \, du \, dv > 0    (7)

be valid for any non-zero function g on the Hilbert space (Mercer's theorem).
Therefore, in the SV method, one can replace (4) with

    f(x, \alpha, \alpha^*) = \sum_{i=1}^{l} (\alpha_i^* - \alpha_i) K(x, x_i) + b    (8)

where the inner product (\Phi(x_i), \Phi(x)) is defined through a kernel K(x_i, x). To find the coefficients \alpha_i^* and \alpha_i one has to maximize the function

    W(\alpha^*, \alpha) = -\epsilon \sum_{i=1}^{l} (\alpha_i^* + \alpha_i) + \sum_{i=1}^{l} y_i (\alpha_i^* - \alpha_i) - \frac{1}{2} \sum_{i,j=1}^{l} (\alpha_i^* - \alpha_i)(\alpha_j^* - \alpha_j) K(x_i, x_j)    (9)

subject to constraints (6).
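Assuming a precomputed kernel matrix, the dual objective (9) and the expansion (8) can be written down directly; the snippet below is a sketch of these two formulas only, not of the quadratic-programming solver needed to maximize (9) under the constraints (6).

    import numpy as np

    def dual_objective(a_star, a, y, K, eps):
        # W(a*, a) from Eq. (9); K[i, j] = K(x_i, x_j)
        beta = a_star - a
        return -eps * np.sum(a_star + a) + y @ beta - 0.5 * beta @ K @ beta

    def predict(x, sv_x, beta, b, kernel):
        # Eq. (8): f(x) = sum_i (a*_i - a_i) K(x, x_i) + b
        return sum(bi * kernel(x, xi) for bi, xi in zip(beta, sv_x)) + b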
3 Constructing kernels for inner products

To define a set of approximating functions one has to define a kernel K(x_i, x) that generates the inner product in some feature space and solve the corresponding quadratic optimization problem.

3.1 Kernels generating splines

We start with the spline functions. According to their definition, splines are piecewise polynomial functions, which we will consider on the set [0, 1]. Splines of order n have the following representation

    f_n(x) = \sum_{r=0}^{n} a_r x^r + \sum_{s=1}^{N} w_s (x - t_s)_+^n    (10)

where (x - t)_+ = \max\{(x - t), 0\}, t_1, \ldots, t_N \in [0, 1] are the nodes, and a_r, w_s are real values. One can consider the spline function (10) as a linear function in the n + N + 1 dimensional feature space spanned by

    1, x, \ldots, x^n, (x - t_1)_+^n, \ldots, (x - t_N)_+^n.
Therefore the inner product that generates splines of order n in one dimension is

    K(x_i, x_j) = \sum_{r=0}^{n} x_i^r x_j^r + \sum_{s=1}^{N} (x_i - t_s)_+^n (x_j - t_s)_+^n.    (11)

Two dimensional splines are linear functions in the (N + n + 1)^2 dimensional space

    1, x, \ldots, x^n, y, \ldots, y^n, \ldots, (x - t_s)_+^n (y - t_{s'})_+^n, \ldots, (x - t_N)_+^n (y - t_N)_+^n.    (12)

Let us denote by u_i = (x_i, y_i), u_j = (x_j, y_j) two two-dimensional vectors. Then the generating kernel for two dimensional spline functions of order n is the product K(u_i, u_j) = K(x_i, x_j) K(y_i, y_j). It is easy to check that the generating kernel for the m-dimensional splines is the product of m one-dimensional generating kernels.

In applications of the SV method the number of nodes does not play an important role. Therefore, we introduce splines of order d with an infinite number of nodes S_d^{(\infty)}. To do this in the R^1 case, we map any real value x_i to the element 1, x_i, \ldots, x_i^n, (x_i - t)_+^n of the Hilbert space. The inner product becomes

    K(x_i, x_j) = \sum_{r=0}^{n} x_i^r x_j^r + \int_0^1 (x_i - t)_+^n (x_j - t)_+^n \, dt.    (13)
For linear splines S_1^{(\infty)} we therefore have the following generating kernel, obtained by evaluating the integral in (13) for n = 1 and writing m = \min(x_i, x_j):

    K(x_i, x_j) = 1 + x_i x_j + x_i x_j m - \frac{x_i + x_j}{2} m^2 + \frac{m^3}{3}.    (14)
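The closed form (14) is easy to check numerically; the sketch below evaluates the kernel and compares it against a direct numerical integration of (13) for n = 1 (the function name and tolerance are our choices for illustration).

    import numpy as np

    def linear_spline_kernel(x, y):
        # Eq. (14): 1 + xy + xy*m - (x + y)/2 * m^2 + m^3/3, with m = min(x, y)
        m = min(x, y)
        return 1.0 + x * y + x * y * m - 0.5 * (x + y) * m ** 2 + m ** 3 / 3.0

    # numerical check of Eq. (13) with n = 1
    t = np.linspace(0.0, 1.0, 100001)
    x, y = 0.3, 0.7
    num = 1.0 + x * y + np.trapz(np.maximum(x - t, 0) * np.maximum(y - t, 0), t)
    assert abs(num - linear_spline_kernel(x, y)) < 1e-6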
In many applications expansions in B_n-splines [Unser & Aldroubi, 1992] are used, where

    B_n(x) = \sum_{r=0}^{n+1} \frac{(-1)^r}{n!} \binom{n+1}{r} \left( x + \frac{n+1}{2} - r \right)_+^n.

One may use B_n-splines to perform a construction similar to the above, yielding the kernel

    K(x_i, x_j) = B_{2n+1}(x_i - x_j).    (15)
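A direct evaluation of the B_n-spline above is straightforward; the following sketch implements the truncated-power sum as written (the function name is ours).

    from math import comb, factorial

    def bspline(x, n):
        # B_n(x) = sum_{r=0}^{n+1} (-1)^r / n! * C(n+1, r) * (x + (n+1)/2 - r)_+^n
        s = 0.0
        for r in range(n + 2):
            u = x + (n + 1) / 2.0 - r
            if u > 0.0:
                s += (-1) ** r * comb(n + 1, r) * u ** n
        return s / factorial(n)

    # With the kernel (15), K(x_i, x_j) would be bspline(x_i - x_j, 2 * n + 1).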
3.2 Kernels generating Fourier expansions

Lastly, Fourier expansion can be considered as a hyperplane in the following 2N + 1 dimensional feature space:

    \frac{1}{\sqrt{2}}, \cos x, \sin x, \ldots, \cos Nx, \sin Nx.

The inner product in this space is defined by the Dirichlet formula:

    K(x_i, x_j) = \frac{\sin\left( (N + \frac{1}{2})(x_i - x_j) \right)}{2 \sin\left( \frac{x_i - x_j}{2} \right)}.
4 Function estimation and data compression

In this section we approximate functions on the basis of observations at l points

    (x_1, y_1), \ldots, (x_l, y_l).    (16)

We demonstrate that to construct an approximation within an accuracy of \pm\epsilon at the data points, one can use only the subsequence of the data containing the SVs. We consider approximating the one and two dimensional functions

    f(x) = \mathrm{sinc}|x| = \frac{\sin|x|}{|x|}    (17)

on the basis of a sequence of measurements (without noise) on a uniform lattice (100 points for the one dimensional case and 2,500 for the two-dimensional case). For different \epsilon we approximate this function by linear splines from S_1^{(\infty)}.
Figure 1: Approximations with different levels of accuracy require different numbers of SVs: 31 SVs for \epsilon = 0.02 (left) and 9 SVs for \epsilon = 0.1 (right). Large dots indicate SVs.
Figure 2: Approximation of f(x, y) = \mathrm{sinc}\sqrt{x^2 + y^2} by two dimensional linear splines with accuracy \epsilon = 0.01 (left) required 157 SVs (right).
Figure 3: The sinc x function corrupted by different levels of noise (\sigma = 0.2 left, 0.5 right) and its regression. Black dots indicate SVs, circles non-SV data.
5 Solution of the linear operator equations

In this section we consider the problem of solving linear equations in the set of functions defined by SVs. Consider the problem of solving a linear operator equation

    A f(t) = F(x),    (18)

where f belongs to a given set of functions of t, F belongs to the corresponding image space, and we are given measurements of the right hand side

    (x_1, F_1), \ldots, (x_l, F_l).    (19)

Consider the set of functions f(t, w) linear in some feature space \{\Phi(t) = (\phi_0(t), \ldots, \phi_N(t), \ldots)\}:

    f(t, w) = \sum_{r=0}^{\infty} w_r \phi_r(t) = (W, \Phi(t)).    (20)
The operator A maps this set of functions into

    F(x, w) = A f(t, w) = \sum_{r=0}^{\infty} w_r A \phi_r(t) = \sum_{r=0}^{\infty} w_r \psi_r(x) = (W, \Psi(x)),    (21)

where \psi_r(x) = A \phi_r(t) and \Psi(x) = (\psi_1(x), \ldots, \psi_N(x), \ldots). Let us define the generating kernel in image space

    K(x_i, x_j) = \sum_{r=0}^{\infty} \psi_r(x_i) \psi_r(x_j) = (\Psi(x_i), \Psi(x_j))    (22)

and the corresponding cross-kernel function

    \mathcal{K}(x_i, t) = \sum_{r=0}^{\infty} \psi_r(x_i) \phi_r(t) = (\Psi(x_i), \Phi(t)).    (23)
The problem of solving (18) in the set of functions f(t, w) (finding the vector W) is equivalent to the problem of regression estimation (21) using data (19). To estimate the regression on the basis of the kernel K(x_i, x_j) one can use the methods described in Section 2. The obtained parameters (\alpha_i^* - \alpha_i), i = 1, \ldots, l, define the approximation to the solution of equation (18) based on data (19):

    f(t, \alpha) = \sum_{i=1}^{l} (\alpha_i^* - \alpha_i) \mathcal{K}(x_i, t).
We have applied this method to the solution of the Radon equation

    \int_{-a(m)}^{a(m)} f(m \cos\theta + u \sin\theta,\; m \sin\theta - u \cos\theta) \, du = p(m, \theta),    (24)

    -1 \le m \le 1,  \quad  0 \le \theta < \pi,  \quad  a(m) = \sqrt{1 - m^2},

using noisy observations (m_1, \theta_1, p_1), \ldots, (m_l, \theta_l, p_l), where p_i = p(m_i, \theta_i) + \xi_i and the \{\xi_i\} are independent with E\xi_i = 0, E\xi_i^2 < \infty.
For two-dimensional linear splines S_1^{(\infty)} we obtained analytical expressions for the kernel (22) and cross-kernel (23). We have used these kernels for solving the corresponding regression problem and reconstructing images based on data that is similar to what one might get from a Positron Emission Tomography scan [Shepp, Vardi & Kaufman, 1985].

A remarkable feature of this solution is that it avoids a pixel representation of the function, which would require the estimation of 10,000 to 60,000 parameters. The spline approximation shown here required only 172 SVs.
Figure 4: Original image (dashed line) and its reconstruction (solid line) from 2,048
observations (left). 172 SVs (support lines) were used in the reconstruction (right).
6 Conclusion

In this article we present a new method of function estimation that is especially useful for solving multi-dimensional problems. The complexity of the solution of the function estimation problem using the SV representation depends on the complexity of the desired solution (i.e. on the number of SVs required for a reasonable approximation of the desired function) rather than on the dimensionality of the space. Using the SV method one can solve various problems of function estimation both in statistics and in applied mathematics.
Acknowledgments

We would like to thank Chris Burges (Lucent Technologies) and Bernhard Schölkopf (MPIK Tübingen) for help with the code and useful discussions.

This work was supported in part by NSF grant PHY 95-12729 (Steven Golowich) and by ARPA grant N00014-94-C-0186 and the German National Scholarship Foundation (Alex Smola).

References

1. Vladimir Vapnik, "The Nature of Statistical Learning Theory", 1995, Springer Verlag N.Y., 189 p.
2. Michael Unser and Akram Aldroubi, "Polynomial Splines and Wavelets - A Signal Processing Perspective", in "Wavelets - A Tutorial in Theory and Applications", C.K. Chui (ed.), pp. 91-122, 1992, Academic Press, Inc.
3. L. Shepp, Y. Vardi, and L. Kaufman, "A statistical model for Positron Emission Tomography," J. Amer. Stat. Assoc. 80:389, pp. 8-37, 1985.
206 | 1,188 | Spatiotemporal Coupling and Scaling of
Natural Images and Human Visual
Sensitivities
Dawei W. Dong
California Institute of Technology
Mail Code 139-74
Pasadena, CA 91125
dawei@hope.caltech.edu
Abstract
We study the spatiotemporal correlation in natural time-varying
images and explore the hypothesis that the visual system is concerned with the optimal coding of visual representation through
spatiotemporal decorrelation of the input signal. Based on the
measured spatiotemporal power spectrum, the transform needed to
decorrelate input signal is derived analytically and then compared
with the actual processing observed in psychophysical experiments.
1 Introduction
The visual system is concerned with the perception of objects in a dynamic world.
A significant fact about natural time-varying images is that they do not change randomly over space-time; instead image intensities at different times and/or spatial
positions are highly correlated. We measured the spatiotemporal correlation function - equivalently the power spectrum - of natural images and we find that it is
non-separable, i.e., coupled in space and time, and exhibits a very interesting scaling
behaviour. When expressed as a function of an appropriately scaled frequency variable, the spatiotemporal power spectrum is given by a simple power-law. We point
out that the same kind of spatiotemporal coupling and scaling exists in human visual sensitivity measured in psychophysical experiments. This poses the intriguing
question of whether there is a quantitative relationship between the power spectrum
of natural images and visual sensitivity. We answer this question by showing that
the latter can be predicted from measurements of the power spectrum.
2 Spatiotemporal Coupling and Scaling
Interest in the properties of time-varying images dates back to the early days of the development of television [1]. But systematic studies have not been possible previously, primarily due to technical obstacles, and our knowledge of the regularities of time-varying images has so far been very limited.
Figure 1: Natural time-varying images are highly correlated in space and time. Shown on the top are two frames of a motion scene separated by thirty three milliseconds. These two frames are highly repetitive; in fact the light intensities of most corresponding pixels are similar. Shown on the bottom are light increase (on the left) and light decrease (on the right) between the above two snapshots, indicated by the greyscale of pixels (white means no change). One can immediately see that only a small portion of the image changes significantly over this time scale. Our methods have been described previously [3]. To summarize, more than one thousand segments of videos on 8mm video tape (NTSC format RGB) are digitized to 8 bit greyscale using a Silicon Graphics Video board with default factory settings. Two types of segments are analyzed. The first are segments from movies on video tapes (e.g. "Raiders of the Lost Ark", "Uncommon Valor"). The second type of segments that we analyzed are videos made by the authors. The scene of the moving egret shown here was taken at Central Park in New York City.
We have systematically measured the two point correlation matrix or covariance matrix of 10° x 10° x 2 s (horizontal x vertical x temporal, digitized to 64 x 64 x 64) segments of natural time-varying images by averaging over 1049 movie segments. An example of two consecutive frames from a typical segment is given in Figure 1. The Fourier transform of the correlation matrix, or the power spectrum, turns out to be a non-separable function of spatial and temporal frequencies and exhibits an interesting scaling behaviour. From our measurements (see Figure 2) we find

    R(f, w) = R(f_w)

where f_w is a scaled frequency, which is simply the spatial frequency f scaled by G(w/f), a function of the ratio of temporal and spatial frequencies, i.e., f_w = G(w/f) f. This behaviour is revealed most clearly by plotting the power spectrum as a function of f for fixed w/f ratio: the curves for different w/f ratios are just a horizontal shift from each other.
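As an illustration of the measurement, a spatiotemporal power spectrum can be estimated by averaging the squared 3D Fourier amplitudes over segments; the sketch below omits the windowing and calibration details of the actual procedure, and the function name is ours.

    import numpy as np

    def spatiotemporal_power_spectrum(segments):
        # segments: array of shape (n_segments, T, H, W) of image blocks
        spec = np.zeros(segments.shape[1:])
        for seg in segments:
            seg = seg - seg.mean()                 # remove the mean intensity
            spec += np.abs(np.fft.fftn(seg)) ** 2  # |FFT|^2 over (t, y, x)
        return spec / len(segments)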
Figure 2: Spatiotemporal power spectra of natural time-varying images. (A) plotted as a function of spatial frequency for three temporal frequencies (0.9, 3, 10) Hz; (B) plotted for three velocities - ratios of temporal and spatial frequencies - (0.8, 2.3, 7) degree/second. There are some important conclusions that can be drawn from this measurement. First, it is obvious that the power spectrum cannot be separated into pure spatial and pure temporal parts; space and time are coupled in a non-trivial way. The power spectrum at low temporal frequency decreases more rapidly with increasing spatial frequency. Second, underlying this data is an interesting scaling behaviour which can be easily seen from the curves for constant w/f ratios: each curve is simply shifted horizontally from the others in the log-log plot. Thus curves for constant w/f ratio overlap with each other when shifted by an amount of G(w/f), i.e., when plotted against a scaled frequency f_w = G(w/f) f. The similar spatio-temporal coupling and scaling for human visual sensitivity is shown in Figure 3.
Interestingly, the human visual system seems to be designed to take advantage of such regularity in natural images. The spatiotemporal contrast sensitivity of humans, K(f, w), i.e., the visual response to a sinewave grating of spatial frequency f modulated at temporal frequency w, exhibits the same kind of spatiotemporal coupling and scaling (see Figure 3),

    K(f, w) = K(f_w).

Again, when the contrast sensitivity curves are plotted as a function of f for fixed w/f ratios, the curves have the same shape and are only shifted from each other [2].
Figure 3: Spatiotemporal contrast sensitivities of human vision. (A) plotted as a function of spatial frequency for two temporal frequencies (2, 13) Hz; (B) plotted for two w/f ratios (0.15, 3) degree/second. The solid lines in both A and B are the empirical fits. The experimental data points and empirical fitting curves are from reference [2]. First, it can be seen that the human visual sensitivity curve is a band-pass filter at low temporal frequency and approaches a low-pass filter at higher temporal frequency; space and time are coupled. Second, it is clear that the curves for different w/f ratios have the same shape and are only shifted horizontally from each other in the log-log plot. Again, curves for constant w/f ratio overlap with each other when shifted by an amount of G(w/f), i.e., when plotted against a scaled frequency f_w = G(w/f) f. The similar behaviour of spatiotemporal coupling and scaling for the power spectra of natural images is shown in Figure 2.
3 Relative Motion of Visual Scene
Why does the human visual sensitivity have the same spatiotemporal coupling and scaling as natural images?

The intuition underlying the spatiotemporal coupling and scaling of natural images is that when viewing a real visual scene the natural eye and/or body movements translate the entire scene across the retina, and every spatial Fourier component of the scene moves at the same velocity. Thus it is reasonable to assume that for constant velocity, i.e., constant w/f ratio, the power spectrum shows the same universal behaviour. This assumption is tested quantitatively in the following.
Our measurements reveal that the spatiotemporal power spectrum has a simple form

    R(f_w) \sim f_w^{-3}

which is shown in Figure 6A. This behaviour can be accounted for if the dominant component in the temporal signal comes from motion of objects with static power spectra of R_s(f) \sim f^{-2}. The static power spectrum for the same collection of images was measured by treating frames as snapshots (Figure 4A); the measurement confirmed the above assumption and is in agreement with earlier works on the statistical properties of static natural images [5, 6, 7].

It is easy to derive that for a rotationally symmetric static spectrum R_s(f) = K/f^2 (K is a constant), the spatiotemporal power spectrum of moving images is

    R(f, w) = \frac{K}{f^3} P\left(\frac{w}{f}\right),    (1)

where P(v) is the velocity distribution, which is shown as the solid curve in Figure 4B (measured independently from the optical flows between frames).
Figure 4: Spatial power spectrum and velocity distribution. (A) the measured spatial power spectrum of snapshot images, which shows that R_s(f) \sim K/f^2 is a good approximation to the spectrum; (B) the measured velocity distribution P(v) (solid curve), in which the data of Figure 2 for the power spectrum were replotted as a function of w/f after multiplication by f^3 - all the data points fall on the P(v) curve.
In summary, the measured spatiotemporal power spectrum is dominated by images of spatial power spectrum \sim 1/f^2 moving with a velocity distribution P(v) \sim 1/(v + v_0)^2 (a similar velocity distribution has been proposed earlier [8, 3]). Thus R(f, w) = K / (f^3 (w/f + v_0)^2) and G(w/f) \sim (w/f + v_0)^{2/3}.
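A quick numerical check of this form (with K = 1 and v_0 = 1, our arbitrary choices for illustration) confirms that R(f, w) collapses onto the single power law f_w^{-3}:

    import numpy as np

    def model_spectrum(f, w, K=1.0, v0=1.0):
        # R(f, w) = (K / f^3) * P(w/f) with P(v) ~ 1 / (v + v0)^2
        return K / (f ** 3 * (w / f + v0) ** 2)

    f, w = 0.5, 2.0
    fw = (w / f + 1.0) ** (2.0 / 3.0) * f      # scaled frequency G(w/f) * f
    assert np.isclose(model_spectrum(f, w), fw ** -3)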
Based on the assumption that the visual system is optimized to transmit information from natural scenes, we have derived and pointed out in references [3, 4] that the spatiotemporal contrast sensitivity K is a function of the power spectrum R, and thus the spatiotemporal coupling and scaling of R of natural images translates directly to the spatiotemporal coupling and scaling of K of visual sensitivity; i.e., since R is a function of f_w only, so is K.
4 Spatiotemporal Decorrelation
The theory of spatiotemporal decorrelation is based on ideas of optimal coding from information theory: decorrelation of inputs to make statistically independent representations when the signal is strong, and smoothing where noise is significant. The end result is that by choosing the correct degree of decorrelation the signal is compressed by elimination of what is irrelevant, without significant loss of information.

The following relationship can be derived for the visual sensitivity K and the power spectrum R in the presence of noise power N:

    K = R^{-1/2} \left(1 + \frac{N}{R}\right)^{-3/2}.
The figure below illustrates the predicted filter for the case of white noise (constant
N).
Figure 5: Predicted optimal filter (curve I): in the low noise regime, it is given by the whitening filter R^{-1/2} (curve II), which achieves spatiotemporal decorrelation; in the high noise regime it asymptotes to the low-pass filter (curve III), which suppresses noise.
As shown in Figure 6, the relation between the contrast sensitivity and the power spectrum predicts

    K(f_w) \sim \left( \frac{f_w}{1 + N f_w^3} \right)^{3/2}

in which N is the power of the white noise. This prediction is compared with psychophysical data in Figure 6B, where we have used the scaling function G(w/f) = (w/f + v_0)^{2/3}, which has the same asymptotic behaviour as we have shown for the natural time-varying images [3]. We find that for v_0 = 1 degree/second, the human
contrast sensitivity curves for w/f from 0.1 to 4 degree/second, measured in reference [2], overlap very well with the theoretical prediction from the power spectrum of our measurements.
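For reference, the predicted sensitivity curve is a one-liner once the scaled frequency is formed; N = 0.01 and v_0 = 1 degree/second are the values quoted in the text, while the function name is ours.

    import numpy as np

    def contrast_sensitivity(f, w, N=0.01, v0=1.0):
        # K(f_w) = f_w^(3/2) * (1 + N * f_w^3)^(-3/2), f_w = (w/f + v0)^(2/3) * f
        fw = (w / f + v0) ** (2.0 / 3.0) * f
        return (fw / (1.0 + N * fw ** 3)) ** 1.5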
Figure 6: Relation between the power spectrum of natural images and human visual sensitivity. (A) the measured spatiotemporal power spectrum (Figure 2B), replotted as a function of the scaled frequency, can be fit very well by R \sim f_w^{-3} (solid line); (B) the spatiotemporal contrast sensitivities of human vision (Figure 3B), replotted as a function of the scaled frequency, can be fit very well by our theoretical prediction (solid line). Our theory on the relation between the visual sensitivity K and the power spectrum of natural time-varying images R in the presence of noise power N has been described in detail in reference [4]. To summarize, the visual sensitivity in Fourier space is simply K = R^{-1/2}(1 + N/R)^{-3/2}. In a linear system, this is proportional to the visual response to a sinewave of spatial frequency f modulated at temporal frequency w, i.e., the contrast sensitivity curves shown in Figure 3. In the case of white noise, i.e., N independent of f and w, K depends on f and w through the power spectrum R. Since R is a function of the scaled frequency f_w only, so is K. From our measurement R \sim f_w^{-3}, thus K \sim f_w^{3/2} (1 + N f_w^3)^{-3/2}. This curve is plotted in the figure as the solid line with N = 0.01. The agreement is very good.
5 Conclusions and Discussions

A simple relationship is revealed between the statistical structure of natural time-varying images and the spatiotemporal sensitivity of human vision. The existence of this relationship supports the hypothesis that visual processing is optimized to compress as much information as possible about the outside world into the limited dynamic range of the visual channels.
We should point out that this scaling behaviour is expected to break down for very high temporal and spatial frequencies, where the effect of the temporal and spatial modulation function of the eye [9, 10] cannot be ignored.
Finally, while our predictions show that, in general, the human visual sensitivity is strongly space-time coupled, we do predict a regime where decoupling is a good approximation. This is based on the fact that in the regime of relatively high temporal frequency and relatively low spatial frequency we find that the power spectrum of natural images is separable into spatial and temporal parts [3]. In a previous work we used this decoupling to model response properties of cat LGN cells, where we showed that these can be accounted for by the theoretical prediction based on the power spectrum in that regime [4].
Acknowledgements
The author gratefully acknowledges the discussions with Dr. Joseph Atick.
References

[1] Kretzmer ER, 1952. Statistics of television signals. The Bell System Technical Journal. 751-763.
[2] Kelly DH, 1979. Motion and vision. II. Stabilized spatio-temporal threshold surface. J. Opt. Soc. Am. 69, 1340-1349.
[3] Dong DW, Atick JJ, 1995. Statistics of natural time-varying images. Network: Computation in Neural Systems, 6, 345-358.
[4] Dong DW, Atick JJ, 1995. Temporal decorrelation: a theory of lagged and nonlagged responses in the lateral geniculate nucleus. Network: Computation in Neural Systems, 6, 159-178.
[5] Burton GJ, Moorhead IR, 1987. Color and spatial structure in natural scenes. Applied Optics. 26(1): 157-170.
[6] Field DJ, 1987. Relations between the statistics of natural images and the response properties of cortical cells. J. Opt. Soc. Am. A 4: 2379-2394.
[7] Ruderman DL, Bialek W, 1994. Statistics of natural images: scaling in the woods. Phys. Rev. Lett. 73(6): 814-817.
[8] van Hateren JH, 1993. Spatiotemporal contrast sensitivity of early vision. Vision Res. 33(2): 257-267.
[9] Campbell FW, Gubisch RW, 1966. Optical quality of the human eye. J. Physiol. 186: 558-578.
[10] Schnapf JL, Baylor DA, 1987. How photoreceptor cells respond to light. Scientific American 256(4): 40-47.
207 | 1,189 | Second-order Learning Algorithm with
Squared Penalty Term
Kazumi Saito
Ryohei Nakano
NTT Communication Science Laboratories
2 Hikaridai, Seika-cho, Soraku-gun, Kyoto 619-02 Japan
{saito,nakano}@cslab.kecl.ntt.jp
Abstract
This paper compares three penalty terms with respect to the efficiency of supervised learning, by using first- and second-order learning algorithms. Our experiments showed that for a reasonably adequate penalty factor, the combination of the squared penalty term
and the second-order learning algorithm drastically improves the
convergence performance more than 20 times over the other combinations, at the same time bringing about a better generalization
performance.
1 INTRODUCTION
It has been found empirically that adding some penalty term to an objective function in the learning of neural networks can lead to significant improvements in
network generalization. Such terms have been proposed on the basis of several
viewpoints such as weight-decay (Hinton, 1987), regularization (Poggio & Girosi,
1990), function-smoothing (Bishop, 1995), weight-pruning (Hanson & Pratt, 1989;
Ishikawa, 1990), and Bayesian priors (MacKay, 1992; Williams, 1995). Some are
calculated by using simple arithmetic operations, while others utilize higher-order
derivatives. The most important evaluation criterion for these terms is how the generalization performance improves, but the learning efficiency is also an important
criterion in large-scale practical problems; i.e., computationally demanding terms
are hardly applicable to such problems. Here, it is naturally conceivable that the
effects of penalty terms depend on learning algorithms; thus, we need comparative
evaluations.
This paper evaluates the efficiency of first- and second-order learning algorithms
with three penalty terms. Section 2 explains the framework of the present learning
and shows a second-order algorithm with the penalty terms. Section 3 shows experimental results for a regression problem, a graphical evaluation, and a penalty
factor determination using cross-validation.
2 LEARNING WITH PENALTY TERM

2.1 Framework
Let \{(x_1, y_1), \ldots, (x_m, y_m)\} be a set of examples, where x_t denotes an n-dimensional input vector and y_t a target value corresponding to x_t. In a three-layer neural network, let h be the number of hidden units, w_j (j = 1, \ldots, h) be the weight vector between all the input units and hidden unit j, and w_0 = (w_{00}, \ldots, w_{0h})^T be the weight vector between all the hidden units and the output unit; w_{j0} means a bias term and x_{t0} is set to 1. Note that a^T denotes the transpose of a vector a. Hereafter, a vector consisting of all parameters, (w_0^T, \ldots, w_h^T)^T, is simply expressed as \Phi = (\phi_1, \ldots, \phi_N)^T, where N (= nh + 2h + 1) denotes the dimension of \Phi. Then, the training error in the three-layer neural network can be defined as follows:

    f(\Phi) = \frac{1}{2} \sum_{t=1}^{m} \left( y_t - w_{00} - \sum_{j=1}^{h} w_{0j} \sigma(w_j^T x_t) \right)^2,    (1)

where \sigma(u) represents a sigmoidal function, \sigma(u) = 1/(1 + e^{-u}).
In this paper, we consider the following three penalty terms:

    \Omega_1(\Phi) = \frac{1}{2} \sum_{k=1}^{N} \phi_k^2,  \quad  \Omega_2(\Phi) = \sum_{k=1}^{N} |\phi_k|,  \quad  \Omega_3(\Phi) = \frac{1}{2} \sum_{k=1}^{N} \frac{\phi_k^2}{1 + \phi_k^2}.    (2)

Hereafter, \Omega_1, \Omega_2, and \Omega_3 are referred to as the squared (Hinton, 1987; MacKay, 1992), absolute (Ishikawa, 1990; Williams, 1995), and normalized (Hanson & Pratt, 1989) penalty terms, respectively. Then, learning with one of these terms can be defined as the problem of minimizing the following objective function

    F_i(\Phi) = f(\Phi) + \mu \Omega_i(\Phi),    (3)

where \mu is a penalty factor.
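A small numpy sketch of (2) and (3) is given below; note that the 1/2 factors on \Omega_1 and \Omega_3 are our reconstruction, chosen so that the quadratic forms in Eq. (5) come out exactly as printed there.

    import numpy as np

    def penalties(phi):
        # Eq. (2): squared, absolute, and normalized penalty terms
        omega1 = 0.5 * np.sum(phi ** 2)
        omega2 = np.sum(np.abs(phi))
        omega3 = 0.5 * np.sum(phi ** 2 / (1.0 + phi ** 2))
        return omega1, omega2, omega3

    def objective(train_error, phi, mu, i):
        # Eq. (3): F_i = f + mu * Omega_i, with i in {1, 2, 3}
        return train_error + mu * penalties(phi)[i - 1]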
2.2 Second-order Algorithm with Penalty Term

In order to minimize the objective function, we employ a newly invented second-order learning algorithm based on a quasi-Newton method, called BPQ (Saito & Nakano, 1997), where the descent direction, \Delta\Phi, is calculated on the basis of a partial BFGS update and a reasonably accurate step-length, \lambda, is efficiently calculated as the minimal point of a second-order approximation. Here, the partial BFGS update can be directly applied, while the step-length \lambda is evaluated as follows:

    \lambda = \frac{ -\nabla F_i(\Phi)^T \Delta\Phi }{ \Delta\Phi^T \nabla^2 F_i(\Phi) \Delta\Phi }.    (4)
The quadratic form for the training error term, \Delta\Phi^T \nabla^2 f(\Phi) \Delta\Phi, can be calculated efficiently with a computational complexity of Nm + O(hm) by using the procedure of BPQ, while those for the penalty terms are calculated as follows:

    \Delta\Phi^T \nabla^2 \Omega_1(\Phi) \Delta\Phi = \sum_{k=1}^{N} \Delta\phi_k^2,  \quad  \Delta\Phi^T \nabla^2 \Omega_2(\Phi) \Delta\Phi = 0,  \quad  \Delta\Phi^T \nabla^2 \Omega_3(\Phi) \Delta\Phi = \sum_{k=1}^{N} \frac{(1 - 3\phi_k^2) \Delta\phi_k^2}{(1 + \phi_k^2)^3}.    (5)
Note that, in the step-length calculation, \Delta\Phi^T \nabla^2 F_i(\Phi) \Delta\Phi is basically assumed to be positive. The three terms have a different effect on it, i.e., the squared penalty term always adds a non-negative value; the absolute penalty term has no effect; the normalized penalty term may add a negative value if many weight values are larger than 1/\sqrt{3}. This indicates that the squared penalty term has a desirable feature. Incidentally, we can employ other second-order learning algorithms such as SCG (Møller, 1993) or OSS (Battiti, 1992), but BPQ worked the most efficiently among them in our own experience (Saito & Nakano, 1997).
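The step-length computation can then be sketched as follows; `quad_f` stands for the training-error quadratic form that BPQ supplies, which we treat here as a given number rather than reimplementing BPQ.

    import numpy as np

    def penalty_quadratic_form(phi, dphi, i):
        # Eq. (5): dPhi^T grad^2 Omega_i dPhi for i in {1, 2, 3}
        if i == 1:
            return np.sum(dphi ** 2)
        if i == 2:
            return 0.0
        return np.sum((1.0 - 3.0 * phi ** 2) * dphi ** 2 / (1.0 + phi ** 2) ** 3)

    def step_length(grad_F, dphi, quad_f, phi, mu, i):
        # Eq. (4): lambda = -grad F^T dPhi / (dPhi^T grad^2 F_i dPhi)
        denom = quad_f + mu * penalty_quadratic_form(phi, dphi, i)
        return -np.dot(grad_F, dphi) / denom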
3 EVALUATION BY EXPERIMENTS

3.1 Regression Problem
By using a regression problem for the function y = (1 - x + 2x^2) e^{-0.5 x^2}, the learning performance of adding a penalty term was evaluated. In the experiment, a value of x was randomly generated in the range [-4, 4], and the corresponding value of y was calculated from x; each value of y was corrupted by adding Gaussian noise with a mean of 0 and a standard deviation of 0.2. The total number of training examples was set to 30. The number of hidden units was set to 5, where the initial values for the weights between the input and hidden units were independently generated according to a normal distribution with a mean of 0 and a standard deviation of 1; the initial values for the weights between the hidden and output units were set to 0, but the bias value at the output unit was initially set to the average output value of all training examples. The iteration was terminated when the gradient vector was sufficiently small (i.e., \|\nabla F_i(\Phi)\|^2 / N < 10^{-12}) or the total processing time exceeded 100 seconds. The penalty factor \mu was changed from 2^0 to 2^{-19} by multiplying by 2^{-1}; trials were performed 20 times for each penalty factor.
Figure 1 shows the training examples, the true function, and a function obtained after learning without a penalty term. We can see that such learning over-fitted the training examples to some degree.
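The training set can be reproduced (up to the random seed, which the paper does not specify) as follows:

    import numpy as np

    rng = np.random.default_rng(0)               # seed is our arbitrary choice
    x = rng.uniform(-4.0, 4.0, size=30)
    y = (1.0 - x + 2.0 * x ** 2) * np.exp(-0.5 * x ** 2)
    y = y + rng.normal(0.0, 0.2, size=30)        # Gaussian noise, sd 0.2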
3.2 Evaluation using Second-order Algorithm
By using BPQ, an evaluation was made after adding each penalty term. Figure 2(a) compares the generalization performance, which was evaluated by using the average RMSE (root mean squared error) for a set of 5,000 test examples. The best possible RMSE level is 0.2 because each test example includes the same amount of Gaussian noise as given to each training example. For each penalty term, the generalization performance was improved when \mu was set adequately, but the normalized
Figure 1: Learning problem. The plot shows the training examples, the true function, and the learning result.
Figure 2: Comparison using second-order algorithm BPQ. (a) Generalization performance (average RMSE); (b) CPU time until convergence.
penalty term was the most unstable among the three, because it frequently got stuck in undesirable local minima. Figure 2(b) compares the processing time¹ until convergence. In comparison to the learning without a penalty term, the squared penalty term drastically decreased the processing time, especially when \mu was large, while the absolute penalty term did not converge when \mu was large; the normalized penalty term generally required a larger processing time. Thus, only the squared penalty term improved the convergence performance, by a factor of 2 to 100, while keeping a better generalization performance for an adequate penalty factor.
3.3 Evaluation using First-order Algorithm
By using BP, a similar evaluation was made after adding each penalty term. Here, we adopted Silva and Almeida's learning rate adaptation rule (Silva & Almeida, 1990), i.e., the learning rate \eta_k for each weight \phi_k is adjusted according to the signs of two successive gradient values². Figure 3(a) compares the generalization performance and Figure 3(b) compares the processing time until convergence, where the average processing time for the trials without a penalty term is not displayed because no trial converged within 100 seconds. For each penalty term, the generalization

¹Our experiments were done on Sun S-4/20 computers.
²The increasing and decreasing parameters were set to 1.1 and 1/1.1, respectively, as recommended by (Silva & Almeida, 1990); if the value of the objective function increases, all learning rates are halved until the value decreases.
Figure 3: Comparison using first-order algorithm BP. (a) Generalization performance (average RMSE); (b) CPU time until convergence.
performance was improved when \mu was set adequately. Note that BP with the squared penalty term \Omega_1 required more processing time than BPQ with \Omega_1. As for the normalized penalty term \Omega_3, BP with \Omega_3 worked more stably than BPQ with \Omega_3. Incidentally, the generalization performance of BP without a penalty term was better than that of BPQ without it; we suspect that this is because the effect of early stopping (Bishop, 1995) worked for BP. Actually, for the training examples, the average RMSE of BP without a penalty term was 0.138, while that of BPQ without it was 0.133.
3.4 Graphical Evaluation
In order to graphically examine the reasons why the effect of the addition of each penalty term differed, we designed a simple problem: learning a function y = \sigma(w_1 x) + \sigma(w_2 x), where only two weights, w_1 and w_2, are adjustable. In the three-layer network, the input and output layers consist of only one unit, while the hidden layer consists of two units. Note that the weights between the hidden units and the output unit are fixed at 1, there is no bias, and the activation function of the hidden units is assumed to be \sigma(x) = 1/(1 + \exp(-x)). Each target value y_t was calculated from the corresponding input value x_t \in \{-0.2, -0.1, 0, 0.1, 0.2\} by setting (w_1, w_2) = (1, 3).

Figure 4 shows the learning trajectories on error contour maps with respect to w_1 and w_2 during 100 iterations starting at (w_1, w_2) = (-1, -3), where the penalty factor \mu was set to 0.1 or 0.01. Here, BPQ was used as the learning algorithm. The contours for the squared penalty term form ovals, making BPQ learn easily. When \mu = 0.1, the contours for the absolute penalty term form an almost square-like shape, and the learning trajectories oscillate near the origin (w_1 = w_2 = 0), due to the discontinuity of the gradient function. The contours for the normalized penalty term form a valley, making BPQ's learning more difficult.
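The contour maps of Figure 4 correspond to evaluating F_i on a grid of (w_1, w_2) values; a sketch follows, where the 1/2 factor in the training error is our assumption.

    import numpy as np

    def sigmoid(u):
        return 1.0 / (1.0 + np.exp(-u))

    x_t = np.array([-0.2, -0.1, 0.0, 0.1, 0.2])
    y_t = sigmoid(1.0 * x_t) + sigmoid(3.0 * x_t)    # targets from (w1, w2) = (1, 3)

    def F(w1, w2, mu, penalty):
        err = 0.5 * np.sum((y_t - sigmoid(w1 * x_t) - sigmoid(w2 * x_t)) ** 2)
        return err + mu * penalty(np.array([w1, w2]))

    ws = np.linspace(-5.0, 5.0, 101)
    grid = np.array([[F(a, b, 0.1, lambda p: np.sum(np.abs(p))) for a in ws]
                     for b in ws])                   # absolute penalty, mu = 0.1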
3.5 Determining Penalty Factor
In general, for a given problem, we cannot know an adequate penalty factor in
advance. Given a limited number of examples, we must find a reasonably adequate
Figure 4: Graphical evaluation
penalty factor. The procedure of cross-validation (Stone, 1978) is adopted for this purpose. Since we knew that the combination of the squared penalty term \Omega_1 and the second-order algorithm BPQ works very efficiently, we performed experiments using the above regression problem under exactly the same experimental conditions.

Figure 5 shows the experimental results, where the procedure of cross-validation was implemented as a leave-one-out method, and the initial weight values for evaluating the cross-validation error were set to the learning results of the entire training examples. Figure 5(a) compares the average generalization error and the average cross-validation error. Although the cross-validation error was a pessimistic estimator of the generalization error, it showed the same tendency and was minimized at almost the same penalty factor. Figure 5(b) shows the average processing time and its standard deviation; although the processing time includes the cross-validation evaluation, we can see that the learning was performed quite efficiently.
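The leave-one-out selection of \mu can be sketched as below; `train` is a placeholder for any learning routine (e.g. BPQ with \Omega_1) returning a fitted predictor, not an implementation of BPQ itself.

    import numpy as np

    def choose_penalty_factor(x, y, train, mus):
        best_mu, best_cv = None, np.inf
        for mu in mus:
            residuals = []
            for i in range(len(x)):
                keep = np.arange(len(x)) != i
                f = train(x[keep], y[keep], mu)          # fit without example i
                residuals.append(y[i] - f(x[i]))
            cv = np.sqrt(np.mean(np.square(residuals)))  # held-out RMSE
            if cv < best_cv:
                best_mu, best_cv = mu, cv
        return best_mu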
4 CONCLUSION

This paper investigated the efficiency of supervised learning with each of three penalty terms, by using first- and second-order learning algorithms, BP and BPQ. Our experiments showed that for a reasonably adequate penalty factor, the combination of the squared penalty term and the second-order algorithm drastically improves the convergence performance, about 20 times over the other combinations, together with an improvement in the generalization performance. In the case of other second-order learning algorithms such as SCG or OSS, similar results are expected because the main difference between BPQ and those other algorithms involves only the learning efficiency. In the future, we plan to do further evaluations using larger-scale problems.
Figure 5: Learning result. (a) Generalization performance (average cross-validation error and average generalization error); (b) CPU time until convergence (average and standard deviation).
References

Battiti, R. (1992) First- and second-order methods for learning: between steepest descent and Newton's method. Neural Computation 4(2):141-166.

Bishop, C.M. (1995) Neural networks for pattern recognition. Clarendon Press.

Hanson, S.J. & Pratt, L.Y. (1989) Comparing biases for minimal network construction with back-propagation. In D.S. Touretzky (ed.), Advances in Neural Information Processing Systems, Volume 1, pp. 177-185. San Mateo, CA: Morgan Kaufmann.

Hinton, G.E. (1987) Learning translation invariant recognition in massively parallel networks. In J.W. de Bakker, A.J. Nijman and P.C. Treleaven (eds.), Proceedings PARLE Conference on Parallel Architectures and Languages Europe, pp. 1-13. Berlin: Springer-Verlag.

Ishikawa, M. (1990) A structural learning algorithm with forgetting of link weights. Tech. Rep. TR-90-7, Electrotechnical Lab, Tsukuba-City, Japan.

MacKay, D.J.C. (1992) Bayesian interpolation. Neural Computation 4(3):415-447.

Møller, M.F. (1993) A scaled conjugate gradient algorithm for fast supervised learning. Neural Networks 6(4):525-533.

Poggio, T. & Girosi, F. (1990) Regularization algorithms for learning that are equivalent to multilayer networks. Science 247:978-982.

Saito, K. & Nakano, R. (1997) Partial BFGS update and efficient step-length calculation for three-layer neural networks. Neural Computation 9(1):239-257 (in press).

Silva, F.M. & Almeida, L.B. (1990) Speeding up backpropagation. In R. Eckmiller (ed.), Advanced Neural Computers, pp. 151-160. Amsterdam: North-Holland.

Stone, M. (1978) Cross-validation: A review. Operationsforsch. Statist. Ser. Statistics B 9(1):111-147.

Williams, P.M. (1995) Bayesian regularization and pruning using a Laplace prior. Neural Computation 7(1):117-143.
208 | 119 | SKELETONIZATION:
A TECHNIQUE FOR TRIMMING THE FAT
FROM A NETWORK VIA RELEVANCE ASSESSMENT
Michael C. Mozer
Paul Smolensky
Department of Computer Science &
Institute of Cognitive Science
University of Colorado
Boulder, CO 80309-0430
ABSTRACT
This paper proposes a means of using the knowledge in a network to
determine the functionality or relevance of individual units, both for
the purpose of understanding the network's behavior and improving its
performance. The basic idea is to iteratively train the network to a certain performance criterion, compute a measure of relevance that identifies which input or hidden units are most critical to performance, and
automatically trim the least relevant units. This skeletonization technique can be used to simplify networks by eliminating units that convey redundant information; to improve learning performance by first
learning with spare hidden units and then trimming the unnecessary
ones away, thereby constraining generalization; and to understand the
behavior of networks in terms of minimal "rules."
INTRODUCTION
One thing that connectionist networks have in common with brains is that if you open
them up and peer inside, all you can see is a big pile of goo. Internal organization is
obscured by the sheer number of units and connections. Although techniques such as
hierarchical cluster analysis (Sejnowski & Rosenberg, 1987) have been suggested as a
step in understanding network behavior, one would like a better handle on the role that
individual units play. This paper proposes one means of using the knowledge in a network to determine the functionality or relevance of individual units. Given a measure of
relevance for each unit, the least relevant units can be automatically trimmed from the
network to construct a skeleton version of the network.
Skeleton networks have several potential applications:
? Constraining generalization. By eliminating input and hidden units that serve no purpose, the number of parameters in the network is reduced and generalization will be
constrained (and hopefully improved).
? Speeding up learning. Learning is fast with many hidden units, but a large number of
hidden units allows for many possible generalizations. Learning is slower with few
108
Mozer and Smolensky
hidden units, but generalization tends to be better. One idea for speeding up learning is
to train a network with many hidden units and then eliminate the irrelevant ones. This
may lead to a rapid learning of the training set, and then gradually, an improvement in
generalization performance.
? Understanding the behavior of a network in terms of "rules". One often wishes to get a
handle on the behavior of a network by analyzing the network in terms of a small
number of rules instead of an enormous number of parameters. In such situations, one
may prefer a simple network that performed correctly on 95% of the cases over a complex network that performed correctly on 100%. The skeletonization process can discover such a simplified network.
Several researchers (Chauvin, 1989; Hanson & Pratt, 1989; David Rumelhart, personal
communication, 1988) have studied techniques for the closely related problem of reducing the number of free parameters in back propagation networks. Their approach
involves adding extra cost terms to the usual error function that cause nonessential
weights and units to decay away. We have opted for a different approach - the all-or-none removal of units - which is not a gradient descent procedure. The motivation for
our approach was twofold. First, our initial interest was in designing a procedure that
could serve to focus "attention" on the most important units, hence an explicit relevance
metric was needed. Second, our impression is that it is a tricky matter to balance a primary and secondary error term against one another. One must determine the relative
weighting of these terms, weightings that may have to be adjusted over the course of
learning. In our experience, it is often impossible to avoid local minima - compromise
solutions that partially satisfy each of the error terms. This conclusion is supported by
the experiments of Hanson and Pratt (1989).
DETERMINING THE RELEVANCE OF A UNIT
Consider a multi-layer feedforward network. How might we determine whether a given
unit serves an important function in the network? One obvious source of information is
its outgoing connections. If a unit in layer l has many large-weighted connections, then
one might expect its activity to have a big impact on higher layers. However, this need
not be. The effects of these connections may cancel each other out; even a large input to
units in layer l+1 will have little influence if these units are near saturation; outgoing
connections from the innervated units in l+1 may be small; and the unit in l may have a
more-or-less constant activity, in which case it could be replaced by a bias on units in
l+1. Thus, a more accurate measure of the relevance of a unit is needed.
What one really wants to know is: what will happen to the performance of the network
when a unit is removed? That is, how well does the network do with the unit versus
without it? For unit i, then, a straightforward measure of the relevance, ρ_i, is

    ρ_i = E_without unit i − E_with unit i ,

where E is the error of the network on the training set. The problem with this measure is
that to compute the error with a given unit removed, a complete pass must be made
through the training set. Thus, the cost of computing ρ is O(np) stimulus presentations,
where n is the number of units in the network and p is the number of patterns in the
training set. Further, if the training set is not fixed or is not known to the experimenter,
additional difficulties arise in computing ρ.
We therefore set out to find a good approximation to ρ. Before presenting this approximation, it is first necessary to introduce an additional bit of notation. Suppose that associated with each unit i is a coefficient α_i which represents the attentional strength of the
unit (see Figure 1). This coefficient can be thought of as gating the flow of activity from
the unit:

    o_j = f( Σ_i w_ji α_i o_i ),

where o_j is the activity of unit j, w_ji the connection strength to j from i, and f the sigmoid squashing function. If α_i = 0, unit i has no influence on the rest of the network; if
α_i = 1, unit i is a conventional unit. In terms of α, the relevance of unit i can then be
rewritten as

    ρ_i = E_{α_i = 0} − E_{α_i = 1} .
We can approximate ρ_i using the derivative of the error with respect to α:

    lim_{γ → 1} ( E_{α_i = γ} − E_{α_i = 1} ) / ( γ − 1 ) = ∂E/∂α_i |_{α_i = 1} .

Assuming that this equality holds approximately for γ = 0:

    E_{α_i = 0} − E_{α_i = 1} ≈ − ∂E/∂α_i |_{α_i = 1} .

Our approximation for ρ_i is then

    ρ̂_i = − ∂E/∂α_i .
This derivative can be computed using an error propagation procedure very similar to
that used in adjusting the weights with back propagation. Additionally, note that because
the approximation assumes that α_i is 1, the α_i never need be changed. Thus, the α_i are
not actual parameters of the system, just a bit of notational convenience used in
Figure 1. A 4-2-3 network with attentional coefficients on the input and hidden units.
estimating relevance.
In practice, we have found that ∂E/∂α_i fluctuates strongly in time, and a more stable estimate that yields better results is an exponentially-decaying time average of the derivative.
In the simulations reported below, we use the following measure:

    ρ̂_i(t+1) = 0.8 ρ̂_i(t) + 0.2 ∂E(t)/∂α_i .
One final detail of relevance assessment we need to mention is that relevance is computed based on a linear error function, E^l = Σ_{pj} | t_pj − o_pj | (where p is an index over patterns, j over output units; t_pj is the target output, o_pj the actual output). The usual quadratic error function, E^q = Σ_{pj} ( t_pj − o_pj )², provides a poor estimate of relevance if the output pattern is close to the target. This difficulty with E^q is further elaborated in Mozer
and Smolensky (1989). In the results reported below, while E^q is used as the error
metric in training the weights via conventional back propagation, ρ̂ is measured using E^l.
This involves separate back propagation phases for computing the weight updates and the
relevance measures.
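As a concrete illustration, the following minimal sketch (ours, not the paper's) computes ∂E^l/∂α for input and hidden units in a single backward pass through a small NumPy network with tanh units and one hidden layer, evaluated at α = 1 so the coefficients never appear explicitly; all function and variable names here are assumptions for illustration.

    import numpy as np

    def relevance_gradients(x, t, W1, W2):
        # Forward pass with every attentional coefficient fixed at alpha = 1.
        h = np.tanh(W1 @ x)                      # hidden activities
        o = np.tanh(W2 @ h)                      # output activities
        # Linear error E_l = sum_j |t_j - o_j|, so dE_l/do_j = -sgn(t_j - o_j).
        delta_o = -np.sign(t - o) * (1.0 - o**2)
        # dE/dalpha for hidden units: the gate multiplies the unit's outgoing activity.
        g_hidden = (W2.T @ delta_o) * h
        # Back-propagate through the hidden nonlinearity for the input gates.
        delta_h = (W2.T @ delta_o) * (1.0 - h**2)
        g_input = (W1.T @ delta_h) * x
        return g_input, g_hidden

    # Smoothed relevance, with rho_hat = -dE/dalpha and the 0.8/0.2 time average:
    #   rho = 0.8 * rho - 0.2 * g_input   (and likewise for the hidden units)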
A SIMPLE EXAMPLE: THE CUE SALIENCE PROBLEM
Consider a network with four inputs labeled A-D, one hidden unit, and one output. We
generated ten training patterns such that the correlations between each input unit and the
output are as shown in the first row of Table 1. (In this particular task, a hidden layer is
not necessary. The inclusion of the hidden unit simply allowed us to use a standard
three-layer architecture for all tasks.)
In this and subsequent simulations, unit activities range from −1 to 1, and input and target
output patterns are binary (−1 or 1) vectors. Training continues until all output activities
are within some acceptable margin of the target value. Additional details of the training
procedure and network parameters are described in Mozer and Smolensky (1989).
To perform perfectly, the network need only attend to input A. This is not what the
input-hidden connections do, however; their weights have the same qualitative profile as
the correlations (second row of Table 1).¹ In contrast, the relevance values for the input
Table 1

                                          Input Unit
                                        A       B       C       D
Correlation with Output Unit           1.0     0.6     0.2     0.0
Input-Hidden Connection Strengths      3.15    1.23    .83    -.01
ρ_i                                    5.36    0.11    0.06    0.00
ρ̂_i                                   0.46   -0.03    0.01   -0.02

¹The values reported in Table 1 are an average over 100 replications of the simulation with different initial random weights. Before averaging, however, the signs of the weights were flipped if the hidden-output connection was negative.
units show A to be highly relevant while B-D have negligible relevance. Further, the
qualitative picture presented by the profile of the ρ̂_i is identical to that of the ρ_i. Thus,
while the weights merely reflect the statistics of the training set, ρ̂_i
indicates the functionality of the units.
THE RULE-PLUS-EXCEPTION PROBLEM
Consider a network with four binary inputs labeled A-D and one binary output. The task
is to learn the function AB + ĀB̄C̄D̄; the output unit should be on whenever both A and B are
on, or in the special case that all inputs are off. With two hidden units, back propagation
arrives at a solution in which one unit responds to AB - the rule - and the other to
ĀB̄C̄D̄ - the exception. Clearly, the AB unit is more relevant to the solution; it accounts
for fifteen cases whereas the ĀB̄C̄D̄ unit accounts for only one. This fact is reflected in
the ρ̂_i: in 100 replications of the simulation, the mean value of ρ̂_AB was 1.49 whereas ρ̂_ĀB̄C̄D̄
was only .17. These values are extremely reliable; the standard errors are .003 and .005,
respectively.
Relevance was also measured using the quadratic error function. With this metric, the AB
unit is incorrectly judged as being less relevant than the ĀB̄C̄D̄ unit: ρ̂_AB is .029 and ρ̂_ĀB̄C̄D̄ is
.033. As mentioned above, the basis of the failure of the quadratic error function is that
E^q grossly underestimates the true relevance as the output error goes to zero. Because
the one exception pattern is invariably the last to be learned, the output error for the fifteen non-exception patterns is significantly lower, and consequently, the relevance values
computed on the basis of the non-exception patterns are much smaller than those computed on the basis of the one exception pattern. This results in the relevance assessment
derived from the exception pattern dominating the overall relevance measure, and in the
incorrect relevance assignments described above. However, this problem can be avoided
by assessing relevance using the linear error function.
If we attempted to "trim" the rule-plus-exception network by eliminating hidden units,
the logical first candidate would be the less relevant ĀB̄C̄D̄ unit. This trimming process
would leave us with a simpler network - a skeleton network - whose behavior is easily
characterized in terms of a simple rule, but which could only account for 15 of the 16
input cases.
CONSTRUCTING SKELETON NETWORKS
In the remaining examples we construct skeleton networks using the relevance metric.
The procedure is as follows: (1) train the network until all output unit activities are
within some specified margin around the target value (for details, see Mozer & Smolensky, 1989); (2) compute ρ̂ for each unit; (3) remove the unit with the smallest ρ̂; and (4)
repeat steps 1-3 a specified number of times (a sketch of this loop is given below). In the examples below, we have chosen to
trim either the input units or the hidden units, not both simultaneously, but there is no
reason why this could not be done.
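The loop itself is then a few lines. The sketch below is ours, not the paper's; it assumes a hypothetical `net` object exposing training, relevance estimation, and unit removal (these method names are placeholders):

    import numpy as np

    def skeletonize(net, data, n_trim, margin):
        for _ in range(n_trim):
            net.train_to_criterion(data, margin)      # step (1)
            rho = net.relevance_estimates(data)       # step (2): smoothed rho_hat
            net.remove_unit(int(np.argmin(rho)))      # step (3): least relevant unit
        net.train_to_criterion(data, margin)          # retrain after the final cut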
We have not yet addressed the crucial question of how much to trim away from the network. At present, we specify in advance when to stop trimming. However, the procedure described above makes use only of the ordinal values of the ρ̂. One untapped
source of information that may be quite informative is the magnitudes of the ρ̂. A large
increase in the minimum ρ̂ value as trimming progresses may indicate that further trimming will seriously disrupt performance in the network.
THE TRAIN PROBLEM
Consider the task of determining a rule that discriminates the "east" trains from the
"west" trains in Figure 2. There are two simple rules - simple in the sense that the rules
require a minimal number of input features: East trains have a long car and triangle load
in car or an open car or white wheels on car. Thus, of the seven features that describe
each train, only two are essential for making the east/west discrimination.
A 7-1-1 network trained on this task using back propagation learns quickly, but the final
solution takes into consideration nearly all the inputs because 6 of the 7 features are partially correlated with the east/west discrimination. When the skeletonization procedure is
applied to trim the number of inputs from 7 to 2, however, the network is successfully
trimmed to the minimal set of input features - either long car and triangle load, or open
car and white wheels on car - on each of 100 replications of the simulation we ran.
The trimming task is far from trivial. The expected success rate with random removal of
the inputs is only 9.5%. Other skeletonization procedures we experimented with resulted
in success rates of 50%-90%.
THE FOUR-BIT MULTIPLEXOR PROBLEM
Consider a network that learns to behave as a four-bit multiplexor. The task is, given 6
binary inputs labeled A-D, M₁ and M₂, and one binary output, to map one of the inputs A-D
to the output contingent on the values of M₁ and M₂. The logical function being computed
is M₁M₂A + M₁M̄₂B + M̄₁M₂C + M̄₁M̄₂D.
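For reference, this target function can be written directly. The sketch below (ours) uses booleans, whereas the simulations use −1/1 coding, so a caller would map between the two:

    def multiplexor_target(a, b, c, d, m1, m2):
        # M1 M2 A + M1 ~M2 B + ~M1 M2 C + ~M1 ~M2 D
        if m1:
            return a if m2 else b
        return c if m2 else d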
Figure 2. The train problem. Adapted from Medin, Wattenmaker, & Michalski, 1987.
Skeletonization
Table 2

architecture               failure rate    median epochs to          median epochs to
                                           criterion (with 8 hidden) criterion (with 4 hidden)
standard 4-hidden net          17%                  --                        52
8→4 skeleton net                0%                  25                        45
A standard 4-hidden unit back propagation network was tested against a skeletonized network that began with 8 hidden units initially and was trimmed to 4 (an 8→4 skeleton network). If the network did not reach the performance criterion within 1000 training
epochs, we assumed that the network was stuck in a local minimum and counted the run
as a failure.
Performance statistics for the two networks are shown in Table 2, averaged over 100
replications. The standard network fails to reach criterion on 17% of the runs, whereas
the skeleton network always obtains a solution with 8 hidden units and the solution is not
lost as the hidden layer is trimmed to 4.² The skeleton network with 8 hidden units
reaches criterion in about half the number of training epochs required by the standard
network. From this point, hidden units are trimmed one at a time from the skeleton network, and after each cut the network is retrained to criterion. Nonetheless, the total
number of epochs required to train the initial 8 hidden unit network and then trim it down
to 4 is still less than that required for the standard network with 4 units. Furthermore, as
hidden units are trimmed, the performance of the skeleton network remains close to criterion, so the improvement in learning is substantial.
THE RANDOM MAPPING PROBLEM
The problem here is to map a set of random 20-element input vectors to random 2-element output vectors. Twenty random input-output pairs were used as the training set.
Ten such training sets were generated and tested. A standard 2-hidden unit network was
tested against a 6→2 skeleton network. For each training set and architecture, 100 replications of the simulation were run. If criterion was not reached within 1000 training
epochs, we assumed that the network was stuck in a local minimum and counted the run
as a failure.
As Table 3 shows, the standard network failed to reach criterion with two hidden units on
17% of all runs. whereas the skeleton network failed with the hidden layer trimmed to
two units on only 8.3% of runs. In 9 of the 10 training sets, the failure rate of the skeleton network was lower than that of the standard network. Both networks required comparable amounts of training to reach criterion with two hidden units, but the skeleton network reaches criterion much sooner with six hidden units, and its performance does not
significantly decline as the network is trimmed. These results parallel those of the four-bit multiplexor.
²Here and below we report median epochs to criterion rather than mean epochs to avoid aberrations caused by
the large number of epochs consumed in failure runs.
Table 3

              standard network                  6→2 skeleton network
training   % failures   median epochs    median epochs    % failures   median epochs
set                     to criterion     to criterion                  to criterion
                        (2 hidden)       (6 hidden)                    (2 hidden)
  1            20            17               22               5             0
  2            14             9                7               0            21
  3            69             9               11               8            35
  4            16            28               47              16          <max>
  5            25             6               12               8            55
  6            34            13               13               0            17
  7            33             8                7              17             1
  8            38            12               14              17             9
  9            38            12                0               8            14
 10            96            12               10               2            43
SUMMARY AND CONCLUSIONS
We proposed a method of using the knowledge in a network to determine the relevance
of individual units. The relevance metric can identify which input or hidden units are
most critical to the performance of the network. The least relevant units can then be
trimmed to construct a skeleton version of the network.
Skeleton networks have application in two different scenarios, as our simulations demonstrated:
? Understanding the behavior of a network in terms of "rules"
- The cue salience problem. The relevance metric singled out the one input that was
sufficient to solve the problem. The other inputs conveyed redundant information.
- The rule-plus-exception problem. The relevance metric was able to distinguish the
hidden unit that was responsible for correctly handling most cases (the general rule)
from the hidden unit that dealt with an exceptional case.
- The train problem. The relevance metric correctly discovered the minimal set of
input features required to describe a category.
? Improving learning performance
- The four-bit multiplexor. Whereas a standard network was often unable to discover
a solution, the skeleton network never failed. Further, the skeleton network learned
the training set more quickly.
- The random mapping problem. As in the multiplexor problem, the skeleton network succeeded considerably more often with comparable overall learning speed,
and less training was required to reach criterion initially.
Basically, the skeletonization technique allows a network to use spare input and hidden
units to learn a set of training examples rapidly, and gradually, as units are trimmed, to
discover a more concise characterization of the underlying regularities of the task. In the
process, local minima seem to be avoided without increasing the overall learning time.
One somewhat surprising result is the ease with which a network is able to recover when
a unit is removed. Conventional wisdom has it that if, say, a network is given excess hidden units, it will memorize the training set, thereby making use of all the hidden units
available to it. However, in our simulations, the network does not seem to be distributing
the solution across all hidden units because even with no further training, removal of a
hidden unit often does not drop performance below the criterion. In any case, there generally appears to be an easy path from the solution with many units to the solution with
fewer.
Although we have presented skeletonization as a technique for trimming units from a network, there is no reason why a similar procedure could not operate on individual connections instead. Basically, an α coefficient would be required for each connection, allowing for the computation of ∂E/∂α. Yann le Cun (personal communication, 1989) has
independently developed a procedure quite similar to our skeletonization technique
which operates on individual connections.
Acknowledgements
Our thanks to Colleen Seifert for conversations that led to this work; to Dave Goldberg,
Geoff Hinton, and Yann le Cun for their feedback; and to Eric Jorgensen for saving us
from computer hell. This work was supported by grant 87-2-36 from the Sloan Foundation to Geoffrey Hinton, a grant from the James S. McDonnell Foundation to Michael
Mozer, and a Sloan Foundation grant and NSF grants IRI-8609599, ECE-8617947, and
CDR-8622236 to Paul Smolensky.
References
Chauvin, Y. (1989). A back-propagation algorithm with optimal use of hidden units. In
Advances in Neural Network Information Processing Systems. San Mateo, CA:
Morgan Kaufmann.
Hanson, S. J., & Pratt, L. Y. (1989). Some comparisons of constraints for minimal
network construction with back propagation. In Advances in Neural Network
Information Processing Systems. San Mateo, CA: Morgan Kaufmann.
Medin, D. L., Wattenmaker, W. D., & Michalski, R. S. (1987). Constraints and
preferences in inductive learning: An experimental study of human and machine
performance. Cognitive Science, 11, 299-339.
Mozer, M. C., & Smolensky, P. (1989). Skeletonization: A technique for trimming the
fat from a network via relevance assessment (Technical Report CU-CS-421-89).
Boulder: University of Colorado, Department of Computer Science.
Sejnowski, T. J., & Rosenberg, C. R. (1987). Parallel networks that learn to pronounce
English text. Complex Systems, 1, 145-168.
Estimation by Faithful Equivariant SOM
Juan K. Lin
Department of Physics
University of Chicago
Chicago, IL 60637
jk-lin@uchicago.edu
David G. Grier
Department of Physics
University of Chicago
Chicago, IL 60637
d-grier@uchicago.edu
Jack D. Cowan
Department of Math
University of Chicago
Chicago, IL 60637
j-cowan@uchicago.edu
Abstract
We couple the tasks of source separation and density estimation
by extracting the local geometrical structure of distributions obtained from mixtures of statistically independent sources. Our
modifications of the self-organizing map (SOM) algorithm results
in purely digital learning rules which perform non-parametric histogram density estimation. The non-parametric nature of the separation allows for source separation of non-linear mixtures. An
anisotropic coupling is introduced into our SOM with the role of
aligning the network locally with the independent component contours. This approach provides an exact verification condition for
source separation with no prior on the source distributions.
1  INTRODUCTION
Much of the current work on visual cortex modeling has focused on the generation of
coding which captures statistical independence and sparseness (Bell and Sejnowski
1996, Olshausen and Field 1996). The Bell and Sejnowski model suffers from the
parametric and intrinsically non-local nature of their source separation algorithm,
while the Olshausen and Field model does not achieve true sparse-distributed coding where each cell has the same response probability (Field 1994). In this paper, we
construct an extensively modified SOM with equipartition of activity as a steadystate for the task of local statistical independence processing and sparse-distributed
coding.
Ritter and Schulten (1986) demonstrated that the density of the Kohonen SOM
units is not proportional to the input density in the steady-state. In one dimension the Kohonen net under-represents high density and over-represents low density regions. Thus SOM's are generally not used for density estimation. Several
modifications for controlling the magnification of the representation have appeared.
Recently, Bauer et al. (1996) used an "adaptive step size", and Lin and Cowan
(1996) used an Lp-norm weighting to control the magnification. Here we concentrate on the latter's "faithful representation" algorithms for source separation and
density estimation.
2  SHARPLY PEAKED DISTRIBUTIONS
Mixtures of sharply peaked source distributions will contain high density contours
which correspond to the independent component contours. Blind separation can be
performed rapidly for this case in a net with one dimensional branched topology. A
digital learning rule where the updates only take on discrete values was used:¹

    Δw_i = κ A(ℓ) sgn(ξ − w_i),   (1)

where κ is the learning rate, A(ℓ) the neighborhood function, {w} the SOM unit
positions, and ξ the input.
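A minimal sketch of one update of rule (1) for a simple chain topology (our code; a branched net would only change how grid distances to the winner are measured, and the parameter values below are illustrative):

    import numpy as np

    def digital_som_step(w, xi, kappa=0.05, sigma=2.0):
        # w: (n_units, dim) unit positions; xi: (dim,) input pattern.
        i_star = np.argmin(np.linalg.norm(xi - w, axis=1))   # winning unit
        ell = np.abs(np.arange(len(w)) - i_star)             # grid distance to winner
        A = np.exp(-0.5 * (ell / sigma) ** 2)                # Gaussian neighborhood
        # Purely digital update: each component moves by 0 or +/- kappa * A.
        return w + kappa * A[:, None] * np.sign(xi - w)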
Figure 1: Left: linear source separation by branched net. Dashed lines correspond
to the independent component axes. Net configuration is shown every 200 points.
Dots denote the unit positions after 4000 points. Right: Voronoi partition of the
vector space by the SOM units.
We performed source separation and coding of two mixed signals in a net with the
topology of two cross-linked branches (see Fig. (1)). The neighborhood function
¹The sign function sgn(x) takes on a value of 1 for x > 0, 0 for x = 0 and −1 for x < 0.
Here the sign function acts component-wise on the vector.
A(ℓ) is taken to be Gaussian, where ℓ is the distance to the winning unit along the
branch structure. Two speech audio files were randomly mixed and pre-whitened
first to decorrelate the two mixtures. Since pre-whitening tends to orthogonalize
the independent component axes, much of the processing that remains is rotation
to find the independent component coordinate system. A typical simulation is
shown in Fig. (1). The branches of the net quickly zero in on the high density
directions. As seen from the nearest-neighbor Voronoi partition of the distribution
(Fig. 1b), the branched SOM essentially performs a one dimensional equipartition of
the mixture. The learning rule Eqn. 1 attempts to place each unit at the componentwise median of the distribution encompassed by its Voronoi partition. For sharply
peaked sources, the algorithm will place the units directly on top of the high density
ridges.
To demonstrate the generality of our non-parametric approach, we perform source
separation and density coding of a non-linear mixture. Because our network has
local dynamics, with enough units, the network can follow the curved "independent
component contours" of the input distribution. The result is shown in Fig. (2).
Figure 2: Source separation of non-linear mixture. The mixture is given by ξ₁ =
−2 sgn(s₁)·s₁² + 1.1 s₁ − s₂, ξ₂ = −2 sgn(s₂)·s₂² + s₁ + 1.1 s₂. Left: the SOM
configuration is shown periodically in the figure, with the configuration after 12000
points indicated by the dots. Dashed lines denote two independent component
contours. Right: the sources (s₁, s₂), mixtures (ξ₁, ξ₂) and pseudo-histogram-equalized representations (o₁, o₂).
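The mixture of Figure 2 is easy to reproduce from the stated equations; a minimal sketch (ours), assuming s1 and s2 are arrays of source samples:

    import numpy as np

    def nonlinear_mix(s1, s2):
        xi1 = -2.0 * np.sign(s1) * s1**2 + 1.1 * s1 - s2
        xi2 = -2.0 * np.sign(s2) * s2**2 + s1 + 1.1 * s2
        return xi1, xi2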
To unmix the input, a parametric separation approach can be taken where least
squares fit to the branch contours is used. For the source separation in Fig. CIa),
assuming linear mixing and inserting the branch coordinate system into an unmixing matrix, we find a reduction of the amplitudes of the mixtures to less than one
percent of the signal. This is typical of the quality of separation obtained in our
simulations. For the non-linear source separation in Fig. (2), parametric unmixing can similarly be accomplished by least squares fit to polynomial contours with
quadratic terms. Alternatively, taking full advantage of the non-parametric nature
of the SOM approach, an approximation of the independent sources can be constructed from the positions w_{i*} of the winning units. Or, as we show in Fig. (2b), the
cell labels i* can be used to give a pseudo-histogram-equalized source representation. This non-parametric approach is thus much more general in the sense that no
model is needed of the mixing transformation. Since there is only one winning unit
along one branch, only one output channel is active at any given time. For sharply
peaked source distributions such as speech, this does not significantly hinder the
fidelity of the source representation since the input sources hover around zero most
of the time. This property also has the potential for utilization in compression.
However, for a full rigorous histogram-equalized source representation, we must
turn to a network with a topology that matches the dimensionality of the input.
3  ARBITRARY DISTRIBUTIONS
For mixtures of sources with arbitrary distributions, we seek a full N dimensional
equipartition. We define an (M, N) partition of ℝ^N to be a partition of ℝ^N into
(M + 1)^N regions by M parallel cuts normal to each of N distinct directions. The
simplest equipartition of source mixtures is the trivial equipartition along the
independent component axes (ICA). Our goal is to achieve this trivial ICA aligned
equipartition using a hypercube architecture SOM with M + 1 units per dimension.
For an (M , N) equipartition, since the number of degrees of freedom to define the
M N hyperplanes grows quadratically in N, while the number of constraints grows
exponentially in N, for large enough M the desired trivial equipartition will be the
unique (M, N) equipartition. We postulate that M = 2 suffices for uniqueness.
Complementary to this claim, it is known that a (1, N) equipartition does not exist
for arbitrary distributions for N ≥ 5 (Ramos 1996). The uniqueness of the (M, N)
equipartition of source mixtures thus provides an exact verification condition for
noiseless source separation.
With ℓ = i* − i, the digital equipartition learning rule is given by:

    Δw_i = κ A(ℓ) sgn(ℓ),   (2)

    Δw_{i*} = − Σ_i Δw_i,   (3)

where

    A(ℓ) = A(−ℓ).   (4)
Equipartition of the input distribution can easily be shown to be a steady-state of
the dynamics. Let q_k be the probability measure of unit k. For the steady-state:

    ⟨Δw_k⟩ = Σ_i q_i A(i − k) sgn(i − k) + q_k Σ_i A(k − i) sgn(k − i)
           = Σ_i (q_i − q_k) A(i − k) sgn(i − k) = 0,

for all units k. By inspection, equipartition, where q_i = q_k for all units i and k, is a
solution to the equation above. It has been shown that equipartition is the only
steady-state of the learning rule in two dimensional rectangular SOM's (Lin and
Cowan 1996), though with the highly overconstrained steady-state equations, the
result should be much more general.
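For a one-dimensional grid of scalar units, one update of rules (2)-(3) can be sketched as follows (our code; note that sgn(0) = 0, so the winner's own neighborhood term vanishes before rule (3) reassigns it, and the parameter values are illustrative):

    import numpy as np

    def equipartition_step(w, xi, kappa=0.05, sigma=1.5):
        i_star = np.argmin(np.abs(xi - w))       # winning unit
        ell = i_star - np.arange(len(w))         # ell = i* - i for every unit
        dw = kappa * np.exp(-0.5 * (ell / sigma) ** 2) * np.sign(ell)   # rule (2)
        dw[i_star] = -dw.sum()                   # rule (3): winner balances the rest
        return w + dw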
One further modification of the SOM is required. The desired trivial ICA equipartition is not a proper Voronoi partition except when the independent component
axes are orthogonal. To obtain the desired equipartition, it is necessary to change
the definition of the winning unit i*. Let Ω(w_i) denote the winning region of the
unit at w_i (5). Since a histogram-equalized representation
independent of the mixing transformation A is desired, we require that

    {A Ω(w)} = {Ω(A w)},   (6)

i.e., Ω is equivariant under the action of A (see e.g. Golubitsky 1988).
Figure 3: Left: Voronoi and equivariant partitions of a primitive cell. Right:
configuration of the SOM after 4000 points. Initially the units of the SOM were
equally spaced and aligned along the two mixture coordinate directions.
In two dimensions, we modify the tessellation by dividing up a primitive cell amongst
its constituent units along lines joining the midpoints of the sides. For a primitive
cell composed of units at a, b, c and d, the region of the primitive cell represented
by a is the simply connected polygon defined by vertices at a, (a + b)/2, (a + d)/2
and (a + b + c + d)/4. The two partitions are contrasted in Fig. (3a). Our modified
equivariant partition satisfies Eqn. (6) for all non-singular linear transformations.
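Because each vertex is a fixed affine combination of the four unit positions, applying a linear map A to the units maps the sub-region by the same A, which is exactly condition (6). A sketch (our code, with the unit positions as 2-vectors):

    import numpy as np

    def equivariant_subcell(a, b, c, d):
        # Sub-region of the primitive cell (a, b, c, d) assigned to the unit at a.
        a, b, c, d = map(np.asarray, (a, b, c, d))
        return np.array([a, (a + b) / 2.0, (a + d) / 2.0, (a + b + c + d) / 4.0])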
The learning rule given above was shown to have an equipartition steady state. It
remains, however, to align the partitions so that it becomes a valid (M, N) partition.
The addition of a local anisotropic coupling which physically, in analogy to elastic
nets, might correspond to a bending modulus along the network's axes, will tend
to align the partitions and enhance convergence to the desired steady state. We
supplemented the digital learning rule (Eqs. (2)-(3)) with a movement of the units
towards the intersections of least squares line fits to the SOM grid.
Numerics are shown in Fig. 3b, where alignment with the independent component
coordinate system and density estimation in the form of equipartition can be seen.
The aligned equipartition representation formed by the network gives histogram-equalized representations of the independent sources, which, because of the equivariant nature of the SOM, will be independent of the mixing matrix.
4  DISCUSSION
Most source separation algorithms are parametric density estimation approaches
(e.g. Bell and Sejnowski 1995, Pearlmutter and Parra 1996). Alternatively, in
parallel with this work, the standard SOM was used for the separation of both
discrete and uniform sources (Herrmann and Yang 1996, Pajunen et al. 1996). The
source separation approach taken here is very general in the sense that no a priori
assumptions about the individual source distributions and mixing transformation
are made. Our approach's local non-parametric nature allows for source separation
of non-linear mixtures and also possibly the separation of more sharply peaked
sources from fewer mixtures. The low to high dimensional map required for the
latter task will be prohibitively difficult for parametric unmixing approaches.
For density estimation in the form of equipartition, we point out the importance
of a digital scale-invariant algorithm. Direct dependence on ξ and w_i has been
extracted out of the learning rule. Because the update depends only upon the
partition, the network learns from its own coarse response to stimuli. This along
with the equivariant partition modification underscore the dynamic partition nature
of the our algorithm. More direct computational geometry partitioning algorithms
are currently being pursued. It is also clear that a hybrid local parametric density
estimation approach will work for the separation of sharply peaked sources (Bishop
and Williams 1996, Utsugi 1996).
5  CONCLUSIONS
We have extracted the local geometrical structure of transformations of product distributions. By modifying the SOM algorithm we developed a network with the capability of non-parametrically separating out non-linear source mixtures. Sharply
peaked sources allow for quick separation via a branched SOM network. For arbitrary source distributions, we introduce the (M,N) equipartition, the uniqueness of
which provides an exact verification condition for source separation.
Fundamentally, equipartition of activity is a very sensible resource allocation principle. In this work, the local equipartition coding and source separation processing
proceed in tandem, resulting in optimal coding and processing of source mixtures.
We believe the digital "counting" aspect of the learning rule, the learning based on
the network's own coarse response to stimuli, the local nature of the dynamics, and
the coupling of coding and processing make this an attractive approach from both
computational and neural modeling perspectives.
References
Bauer, H.-U., Der, R., and Herrmann, M. 1996. Controlling the magnification factor
of self-organizing feature maps. Neural Comp. 8, 757-771.
Bell, A. J., and Sejnowski, T. J. 1995. An information-maximization approach to
blind separation and blind deconvolution. Neural Comp. 7, 1129-1159.
Bell, A. J., and Sejnowski, T. J. 1996. Edges are the "independent components" of
natural scenes. NIPS 9.
Bishop, C. M. and Williams, C. 1996. GTM: A principled alternative to the self-organizing map. NIPS 9.
Field, D. J. 1994. What is the goal of sensory coding? Neural Comp. 6, 559-601.
Golubitsky, M., Stewart, 1., and Schaeffer, D. G. 1988. Singularities and Groups in
Bifurcation Theory. Springer-Verlag, Berlin.
Herrmann, M. and Yang, H. H. 1996. Perspectives and limitations of self-organizing
maps in blind separation of source signals. Proc. ICONIP'96.
Hertz, J., Krogh A., and Palmer, R. G. 1991. Introduction to the Theory of Neural
Computation. Addison-Wesley, Redwood City.
Kohonen, T. 1995. Self-Organizing Maps. Springer-Verlag, Berlin.
Lin, J. K. and Cowan, J. D. 1996. Faithful representation of separable input distributions. To appear in Neural Computation.
Olshausen, B. A. and D. J. Field 1996. Emergence of simple-cell receptive field
properties by learning a sparse code for natural images. Nature 381, 607-609.
Pajunen, P., Hyvarinen, A. and Karhunen, J. 1996. Nonlinear blind source separation by self-organizing maps. Proc. ICONIP'96.
Pearlmutter, B. A. and Parra, L. 1996. Maximum likelihood blind source separation:
a context-sensitive generalization of ICA. NIPS 9.
Ramos, E. A. 1996. Equipartition of mass distributions by hyperplanes. Discrete
Comput. Geom. 15, 147-167.
Ritter, H., and Schulten, K. 1986. On the stationary state of Kohonen's self-organizing sensory mapping. Biol. Cybern. 54, 99-106.
Utsugi, A. 1996. Hyperparameter selection for self-organizing maps. To appear in
Neural Computation.
A Bayesian Treatment
Christopher M. Bishop
C.M.BishopGaston.ac.uk
Cazhaow S. Qazaz
qazazcsGaston.ac.uk
Neural Computing Research Group
Aston University, Birmingham, B4 7ET, U.K.
http://www.ncrg.aston.ac.uk/
Abstract
In most treatments of the regression problem it is assumed that
the distribution of target data can be described by a deterministic
function of the inputs, together with additive Gaussian noise having constant variance. The use of maximum likelihood to train such
models then corresponds to the minimization of a sum-of-squares
error function. In many applications a more realistic model would
allow the noise variance itself to depend on the input variables.
However, the use of maximum likelihood to train such models would
give highly biased results. In this paper we show how a Bayesian
treatment can allow for an input-dependent variance while overcoming the bias of maximum likelihood.
1  Introduction
In regression problems it is important not only to predict the output variables but
also to have some estimate of the error bars associated with those predictions. An
important contribution to the error bars arises from the intrinsic noise on the data.
In most conventional treatments of regression, it is assumed that the noise can be
modelled by a Gaussian distribution with a constant variance. However, in many
applications it will be more realistic to allow the noise variance itself to depend on
the input variables. A general framework for modelling the conditional probability
density function of the target data, given the input vector, has been introduced in
the form of mixture density networks by Bishop (1994, 1995). This uses a feedforward network to set the parameters of a mixture kernel distribution, following
Jacobs et al. (1991). The special case of a single isotropic Gaussian kernel function
was discussed by Nix and Weigend (1995), and its generalization to allow for an
arbitrary covariance matrix was given by Williams (1996).
These approaches, however, are all based on the use of maximum likelihood, which
can lead to the noise variance being systematically under-estimated. Here we adopt
an approximate hierarchical Bayesian treatment (MacKay, 1991) to find the most
probable interpolant and most probable input-dependent noise variance. We compare our results with maximum likelihood and show how this Bayesian approach
leads to a significantly reduced bias.
In order to gain some insight into the limitations of the maximum likelihood approach, and to see how these limitations can be overcome in a Bayesian treatment, it
is useful to consider first a much simpler problem involving a single random variable
(Bishop, 1995). Suppose that a variable Z is known to have a Gaussian distribution,
but with unknown mean μ and unknown variance σ². Given a sample D ≡ {z_n}
drawn from that distribution, where n = 1, ... , N, our goal is to infer values for the
mean and variance. The likelihood function is given by
    p(D | μ, σ²) = (2πσ²)^{−N/2} exp{ −(1/(2σ²)) Σ_{n=1}^{N} (z_n − μ)² }.   (1)
A non-Bayesian approach to finding the mean and variance is to maximize the
likelihood jointly over μ and σ², corresponding to the intuitive idea of finding the
parameter values which are most likely to have given rise to the observed data set.
This yields the standard result

    σ̂² = (1/N) Σ_{n=1}^{N} (z_n − μ̂)²,   (2)

where μ̂ is the maximum likelihood estimate of the mean, given by the sample average.
It is well known that the estimate σ̂² for the variance given in (2) is biased, since
the expectation of this estimate is not equal to the true value:

    E[σ̂²] = ((N − 1)/N) σ₀²,   (3)

where σ₀² is the true variance of the distribution which generated the data, and
E[·] denotes an average over data sets of size N. For large N this effect is small.
However, in the case of regression problems there is generally a much larger number
of degrees of freedom in relation to the number of available data points, in which
case the effect of this bias can be very substantial.
The problem of bias can be regarded as a symptom of the maximum likelihood
approach. Because the mean μ̂ has been estimated from the data, it has fitted some
of the noise on the data and this leads to an under-estimate of the variance. If the
true mean is used in the expression for σ̂² in (2) instead of the maximum likelihood
expression, then the estimate is unbiased.
By adopting a Bayesian viewpoint this bias can be removed. The marginal likelihood
of σ² should be computed by integrating over the mean μ. Assuming a 'flat' prior
p(μ) we obtain

    p(D | σ²) = ∫ p(D | μ, σ²) p(μ) dμ   (4)

              ∝ (σ²)^{−(N−1)/2} exp{ −(1/(2σ²)) Σ_{n=1}^{N} (z_n − z̄)² },   (5)

where z̄ denotes the sample mean.
Maximizing (5) with respect to σ² then gives

    σ̂² = (1/(N − 1)) Σ_{n=1}^{N} (z_n − μ̂)²,   (6)

which is unbiased.
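A small numerical check of (2), (3) and (6) (our sketch, in NumPy), using N = 4 as in Figure 1:

    import numpy as np

    rng = np.random.default_rng(0)
    N, trials = 4, 100000
    z = rng.normal(0.0, 1.0, size=(trials, N))            # true variance is 1
    mu_hat = z.mean(axis=1, keepdims=True)
    var_ml = ((z - mu_hat) ** 2).mean(axis=1)             # eq (2): divide by N
    var_marg = ((z - mu_hat) ** 2).sum(axis=1) / (N - 1)  # eq (6): divide by N - 1
    print(var_ml.mean(), var_marg.mean())                 # approx 0.75 and 1.0, as (3) predicts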
This result is illustrated in Figure 1, which shows contours of p(D | μ, σ²) together
with the marginal likelihood p(D | σ²) and the conditional likelihood p(D | μ̂, σ²) evaluated at μ = μ̂.
Figure 1: The left hand plot shows contours of the likelihood function p(D | μ, σ²) given
by (1) for 4 data points drawn from a Gaussian distribution having zero mean and unit
variance. The right hand plot shows the marginal likelihood function p(D | σ²) (dashed
curve) and the conditional likelihood function p(D | μ̂, σ²) (solid curve). It can be seen that,
as a result of the skewed contours, the value of σ² which maximizes p(D | μ̂, σ²) is smaller
than the value of σ² which maximizes p(D | σ²).
2  Bayesian Regression
Consider a regression problem involving the prediction of a noisy variable t given
the value of a vector x of input variables.¹ Our goal is to predict both a regression
function and an input-dependent noise variance. We shall therefore consider two
networks. The first network takes the input vector x and generates an output
¹For simplicity we consider a single output variable. The extension of this work to
multiple outputs is straightforward.
y(x; w) which represents the regression function, and is governed by a vector of
weight parameters w. The second network also takes the input vector x, and
generates an output function β(x; u) representing the inverse variance of the noise
distribution, and is governed by a vector of weight parameters u. The conditional
distribution of target data, given the input vector, is then modelled by a normal
distribution p(t | x, w, u) = N(t | y, β⁻¹). From this we obtain the likelihood function

    p(D | w, u) = (1/Z_D) exp{ − Σ_{n=1}^{N} β_n E_n },   (7)

where β_n = β(x_n; u), E_n = (1/2) ( y(x_n; w) − t_n )²,

    Z_D = Π_{n=1}^{N} (2π/β_n)^{1/2},   (8)

and D ≡ {x_n, t_n} is the data set.
Some simplification of the subsequent analysis is obtained by taking the regression
function, and ln β, to be given by linear combinations of fixed basis functions, as in
MacKay (1995), so that

    y(x; w) = w^T φ(x),   β(x; u) = exp( u^T ψ(x) ),   (9)

where we choose one basis function in each network to be a constant, φ₀ = ψ₀ = 1, so
that the corresponding weights w₀ and u₀ represent bias parameters.
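A sketch of the two networks in (9), with Gaussian basis functions of the kind used in Section 3 (our code; the centres and width are placeholder parameters, not the paper's values):

    import numpy as np

    def design_matrix(x, centres, width):
        # One constant basis function (the bias) plus Gaussians, as in (9).
        g = np.exp(-0.5 * ((x[:, None] - centres[None, :]) / width) ** 2)
        return np.hstack([np.ones((len(x), 1)), g])

    def y_of_x(Phi, w):        # regression function y = w^T phi(x)
        return Phi @ w

    def beta_of_x(Psi, u):     # inverse noise variance beta = exp(u^T psi(x))
        return np.exp(Psi @ u)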
The maximum likelihood procedure chooses values w and u by finding a joint maximum over w and u. As we have already indicated, this will give a biased result
since the regression function inevitably fits part of the noise on the data, leading
to an over-estimate of β(x). In extreme cases, where the regression curve passes
exactly through a data point, the corresponding estimate of β can go to infinity,
corresponding to an estimated noise variance of zero.
The solution to this problem has already been indicated in Section 1 and was first
suggested in this context by MacKay (1991, Chapter 6). In order to obtain an
unbiased estimate of β(x) we must find the marginal distribution of β, or equivalently of u, in which we have integrated out the dependence on w. This leads to a
hierarchical Bayesian analysis.
We begin by defining priors over the parameters w and u. Here we consider isotropic
Gaussian priors of the form

    p(w | α_w) ∝ exp( −(α_w/2) ‖w‖² ),   (10)

    p(u | α_u) ∝ exp( −(α_u/2) ‖u‖² ),   (11)

where α_w and α_u are hyper-parameters. At the first stage of the hierarchy, we
assume that u is fixed to its most probable value u_MP, which will be determined
shortly. The most probable value of w, denoted by w_MP, is then found by maximizing the posterior distribution²
    p(w | D, u_MP, α_w) = p(D | w, u_MP) p(w | α_w) / p(D | u_MP, α_w),   (12)

where the denominator in (12) is given by

    p(D | u_MP, α_w) = ∫ p(D | w, u_MP) p(w | α_w) dw.   (13)
Taking the negative log of (12), and dropping constant terms, we see that w_MP is
obtained by minimizing

    S(w) = Σ_{n=1}^{N} β_n E_n + (α_w/2) ‖w‖²,   (14)

where we have used (7) and (10). For the particular choice of model (9) this minimization represents a linear problem which is easily solved (for a given u) by
standard matrix techniques.
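For fixed u, minimizing (14) is weighted ridge regression; a sketch of the closed-form solution (our code), whose system matrix is the Hessian A of (17):

    import numpy as np

    def w_most_probable(Phi, t, beta, alpha_w):
        B = np.diag(beta)
        A = Phi.T @ B @ Phi + alpha_w * np.eye(Phi.shape[1])   # Hessian, eq (17)
        w_mp = np.linalg.solve(A, Phi.T @ B @ t)               # grad S(w) = 0
        return w_mp, A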
At the next level of the hierarchy, we find u_MP by maximizing the marginal posterior
distribution

    p(u | D, α_u, α_w) ∝ p(D | u, α_w) p(u | α_u).   (15)

The term p(D | u, α_w) is just the denominator from (12) and is found by integrating
over w as in (13). For the model (9) and prior (10) this integral is Gaussian and
can be performed analytically without approximation. Again taking logarithms and
discarding constants, we have to minimize
    M(u) = Σ_{n=1}^{N} β_n E_n + (α_u/2) ‖u‖² − (1/2) Σ_{n=1}^{N} ln β_n + (1/2) ln |A|,   (16)

where |A| denotes the determinant of the Hessian matrix A given by

    A = Σ_{n=1}^{N} β_n φ(x_n) φ(x_n)^T + α_w I,   (17)

and I is the unit matrix. The function M(u) in (16) can be minimized using
standard non-linear optimization algorithms. We use scaled conjugate gradients, in
which the necessary derivatives of ln |A| are easily found in terms of the eigenvalues
of A.
In summary, the algorithm requires an outer loop in which the most probable value u_MP is found by non-linear minimization of (16), using the scaled conjugate gradient algorithm. Each time the optimization code requires a value for M(u) or its gradient, for a new value of u, the optimum value for w_MP must be found by minimizing (14). In effect, w is evolving on a fast time-scale, and u on a slow time-scale. The corresponding maximum (penalized) likelihood approach consists of a joint non-linear optimization over u and w of the posterior distribution p(w, u | D) obtained from (7), (10) and (11). Finally, the hyper-parameters are given fixed values α_w = α_u = 0.1 as this allows the maximum likelihood and Bayesian approaches to be treated on an equal footing.
² Note that the result will be dependent on the choice of parametrization since the maximum of a distribution is not invariant under a change of variable.
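A minimal sketch of this two-loop structure is given below, reusing solve_w_mp from the earlier sketch. For brevity it evaluates M(u) exactly as in (16) but hands the outer minimization to a generic derivative-free optimizer instead of the scaled conjugate gradients (with analytic ln|A| derivatives) used in the paper.

```python
import numpy as np
from scipy.optimize import minimize

def M_of_u(u, Phi, Psi, t, alpha_w, alpha_u):
    """Evaluate M(u) of (16); each call re-solves the fast inner problem (14)."""
    beta = np.exp(Psi @ u)                        # beta_n = exp(u^T psi(x_n))
    w_mp = solve_w_mp(Phi, t, beta, alpha_w)      # inner loop, eq. (14)
    E = 0.5 * (Phi @ w_mp - t) ** 2
    A = Phi.T @ (beta[:, None] * Phi) + alpha_w * np.eye(Phi.shape[1])
    _, logdetA = np.linalg.slogdet(A)
    return (beta @ E + 0.5 * alpha_u * (u @ u)
            - 0.5 * np.log(beta).sum() + 0.5 * logdetA)

def find_u_mp(Phi, Psi, t, alpha_w=0.1, alpha_u=0.1):
    """Outer loop: slow minimization of M(u) over the noise-network weights."""
    u0 = np.zeros(Psi.shape[1])
    res = minimize(M_of_u, u0, args=(Phi, Psi, t, alpha_w, alpha_u),
                   method="Nelder-Mead")
    return res.x
```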
3 Results and Discussion
As an illustration of this algorithm, we consider a toy problem involving one input and one output, with a noise variance which has an x² dependence on the input variable. Since the estimated quantities are noisy, due to the finite data set, we consider an averaging procedure as follows. We generate 100 independent data sets each consisting of 10 data points. The model is trained on each of the data sets in turn and then tested on the remaining 99 data sets. Both the y(x; w) and β(x; u) networks have 4 Gaussian basis functions (plus a bias) with width parameters chosen to equal the spacing of the centres.
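For reference, one such synthetic data set can be generated as below; the exact sinusoid and the scale of the x²-dependent variance are not stated in the text, so the constants here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_dataset(n_points=10, x_lim=4.0):
    """One toy set: a sinusoid plus zero-mean noise whose variance grows as x^2."""
    x = rng.uniform(-x_lim, x_lim, size=n_points)
    mean = np.sin(0.5 * np.pi * x)        # assumed sinusoidal target function
    var = 0.05 + 0.02 * x ** 2            # assumed x^2-dependent noise variance
    t = mean + rng.normal(0.0, np.sqrt(var))
    return x, t

datasets = [make_dataset() for _ in range(100)]  # 100 independent sets of 10 points
```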
[Figure 2 appears here: four panels over x ∈ [-4, 4]; the top row is labelled 'Maximum likelihood' and the bottom row 'Bayesian'; the left column shows the regression function and the right column the noise variance.]
Figure 2: The left hand plots show the sinusoidal function (dashed curve) from which the data were generated, together with the regression function averaged over 100 training sets. The right hand plots show the true noise variance (dashed curve) together with the estimated noise variance, again averaged over 100 data sets.

Results are shown in Figure 2. It is clear that the maximum likelihood results are biased and that the noise variance is systematically underestimated. By contrast,
the Bayesian results show an improved estimate of the noise variance. This is borne out by evaluating the log likelihood for the test data under the corresponding predictive distributions. The Bayesian approach gives a log likelihood per data point, averaged over the 100 runs, of -1.38. Due to the over-fitting problem, maximum likelihood occasionally gives extremely large negative values for the log likelihood (when β has been estimated to be very large, corresponding to a regression curve which passes close to an individual data point). Even omitting these extreme values, the maximum likelihood still gives an average log likelihood per data point of -17.1, which is substantially smaller than the Bayesian result.
We are currently exploring the use of Markov chain Monte Carlo methods (Neal,
1993) to perform the integrations required by the Bayesian analysis numerically,
without the need to introduce the Gaussian approximation or the evidence framework. Recently, MacKay (1995) has proposed an alternative technique based on
Gibbs sampling. It will be interesting to compare these various approaches.
Acknowledgements: This work was supported by EPSRC grant GR/K51792,
Validation and Verification of Neural Network Systems.
References
Bishop, C. M. (1994). Mixture density networks. Technical Report NCRG/94/001, Neural Computing Research Group, Aston University, Birmingham, UK.
Bishop, C. M. (1995). Neural Networks for Pattern Recognition. Oxford University Press.
Jacobs, R. A., M. I. Jordan, S. J. Nowlan, and G. E. Hinton (1991). Adaptive mixtures of local experts. Neural Computation 3 (1), 79-87.
MacKay, D. J. C. (1991). Bayesian Methods for Adaptive Models. Ph.D. thesis, California Institute of Technology.
MacKay, D. J. C. (1995). Probabilistic networks: new models and new methods. In F. Fogelman-Soulie and P. Gallinari (Eds.), Proceedings ICANN'95 International Conference on Artificial Neural Networks, pp. 331-337. Paris: EC2 & Cie.
Neal, R. M. (1993). Probabilistic inference using Markov chain Monte Carlo methods. Technical Report CRG-TR-93-1, Department of Computer Science, University of Toronto, Canada.
Nix, A. D. and A. S. Weigend (1995). Learning local error bars for nonlinear regression. In G. Tesauro, D. S. Touretzky, and T. K. Leen (Eds.), Advances in Neural Information Processing Systems, Volume 7, pp. 489-496. Cambridge, MA: MIT Press.
Williams, P. M. (1996). Using neural networks to model conditional multivariate densities. Neural Computation 8 (4), 843-854.
211 | 1,192 | Bangs, Clicks, Snaps, Thuds and Whacks:
an Architecture for Acoustic Transient
Processing
Fernando J. Pineda(1)
fernando.pineda@jhuapl.edu
Gert Cauwenberghs(2)
gert@jhunix.hcf.jhu.edu
R. Timothy Edwards(2)
tim@bach.ece.jhu.edu
(1) The Applied Physics Laboratory, The Johns Hopkins University, Laurel, Maryland 20723-6099
(2) Dept. of Electrical and Computer Engineering, The Johns Hopkins University, 34th and Charles Streets, Baltimore, Maryland 21218
ABSTRACT
We propose a neuromorphic architecture for real-time processing of
acoustic transients in analog VLSI. We show how judicious normalization
of a time-frequency signal allows an elegant and robust implementation
of a correlation algorithm. The algorithm uses binary multiplexing instead
of analog-analog multiplication. This removes the need for analog
storage and analog-multiplication. Simulations show that the resulting
algorithm has the same out-of-sample classification performance (~93%
correct) as a baseline template-matching algorithm.
1 INTRODUCTION
We report progress towards our long-term goal of developing low-cost, low-power, low-complexity analog-VLSI processors for real-time applications. We propose a neuromorphic
architecture for acoustic processing in analog VLSI. The characteristics of the architecture
are explored by using simulations and real-world acoustic transients. We use acoustic
transients in our experiments because information in the form of acoustic transients
pervades the natural world. Insects, birds, and mammals (especially marine mammals)
all employ acoustic signals with rich transient structure. Human speech is largely composed
of transients and speech recognizers based on transients can perform as well as recognizers
based on phonemes (Morgan, Bourlard, Greenberg, Hermansky, and Wu, 1995). Machines
also generate transients as they change state and as they wear down. Transients can be
used to diagnose wear and abnormal conditions in machines.
In this paper, we consider how algorithmic choices that do not influence classification
performance make an initially difficult-to-implement algorithm practical to implement.
In particular, we present a practical architecture for performing real-time recognition of
acoustic transients via a correlation-based algorithm. Correlation in analog VLSI poses
two fundamental implementation challenges. First, there is the problem of template storage,
second, there is the problem of accurate analog multiplication. Both problems can be
solved by building sufficiently complex circuits. This solution is generally unsatisfactory
because the resulting processors must have less area and consume less power than their
digital counterparts in order to be competitive. Another solution to the storage problem is
to employ novel floating gate devices. At present such devices can store analog values
for years without significant degradation. Moreover, this approach can result in very
compact, yet computationally complex devices. On the other hand, programming floating
gate devices is not so straight-forward. It is relatively slow, it requires high voltage and it
degrades the floating gate each time it is reprogrammed. Our "solution" is to side-step the
problem completely and to develop an algorithmic solution that requires neither analog
storage nor analog multiplication. Such an approach is attractive because it is both
biologically plausible and electronically efficient. We demonstrate that a high level of
classification performance on a real-world data set is achievable with no measurable loss
of performance, compared to a baseline correlation algorithm.
The acoustic transients used in our experiments were collected by K. Ryals and D. Steigerwald and are described in (Pineda, Ryals, Steigerwald and Furth, 1995). These transients consist of isolated Bangs, Claps, Clicks, Cracks, Oinks, Pings, Pops, Slaps, Smacks, Snaps, Thuds and Whacks that were recorded on DAT tape in an office environment.
The ambient noise level was uncontrolled, but typical of a single-occupant office.
Approximately 221 transients comprising 10 classes were collected. Most of the energy
in one of our typical transients is dissipated in the first 10 ms. The remaining energy is
dissipated over the course of approximately 100 ms. The transients had durations of
approximately 20-100 ms. There was considerable in-class and extra-class variability in
duration. The duration of a transient was determined automatically by a segmentation
algorithm described below. The segmentation algorithm was also used to align the templates
in the correlation calculations.
2 THE BASELINE ALGORITHM
The baseline classification algorithm and its performance is described in Pineda, et al.
(1995). Here we summarize only its most salient features. Like many biologically motivated
acoustic processing algorithms, the preprocessing steps include time-frequency analysis,
rectification, smoothing and compression via a nonlinearity (e.g. Yang, Wang and Shamma,
1992). Classification is performed by correlation against a template that represents a
particular class. In addition, there is a "training" step which is required to create the
templates. This step is described in the "correlation" section below. We turn now to a
more detailed description of each processing step.
A. Time-frequency Analysis: Time-frequency analysis for the baseline algorithm and the
simulations performed in this work, was performed by an ultra-low power (5.5 mW)
analog VLSI filter bank intended to mimic the processing performed by the mammalian
cochlea (Furth, Kumar, Andreou and Goldstein, 1994). This real-time device creates a
time-frequency representation that would ordinarily require hours of computation on a
736
F J. Pineda. G. Cauwenberghs and R. T. Edwards
high-speed workstation. More complete descriptions can be found in the references. The
time-frequency representation produced by the filter bank is qualitatively similar to that
produced by a wavelet transformation. The center frequencies and Q-factors of each
channel are uniformly spaced in log space. The low frequency channel is tuned to a
center frequency of 100 Hz and Q-factor of 1.0, while the high frequency channel is
tuned to a center frequency of 6000 Hz and Q-factor 3.5. There are 31 output channels.
The 31-channel cochlear output was digitized and stored on disk at a raw rate of 256K
samples per second. This raw rate was distributed over 32 channels, at rates appropriate
for each channel (six rates were used, 1 kHz for the lowest frequency channels up to 32
kHz for the highest-frequency channels and the unfiltered channel).
B. Segmentation: Both the template calculation and the classification algorithm rely on
having a reliable segmenter. In our experiments, the transients are isolated and the noise
level is low, therefore a simple segmenter is all that is needed. Figure 2 shows a segmenter that we implemented in software and which consists of a three-layer neural network.

[Figure 2 appears here: schematic of the segmenter network, with an intermediate 'noisy segmentation bit' and a final 'clean segmentation bit' output.]
Figure 2: Schematic diagram showing the segmenter network
The input layer receives mean subtracted and rectified signals from the cochlear filters.
The first layer simply thresholds these signals. The second layer consists of a single unit
that accumulates and rethresholds the thresholded signals. The second layer outputs a
noisy segmentation signal that is nonzero if two or more channels in the input layer
exceed the input threshold. Finally, the output neuron cleans up the segmentation signal
by low-pass filtering it with a time-scale of 10 ms (to fill in drop outs) and by low-pass
filtering it with a time-scale of 1 ms (to catch the onset of a transient). The outputs of the
two low-pass filters are OR'ed by the output neuron to produce a clean segmentation bit.
The four adjustable thresholds in the network were determined empirically so as to
maximize the number of true transients that were properly segmented while minimizing
the number of transients that were missed or cut in half.
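A software rendering of this segmenter might look as follows. The filter implementation and the four threshold values here are illustrative assumptions (in the paper they were tuned empirically).

```python
import numpy as np

def one_pole_lowpass(x, tau_ms, dt_ms=1.0):
    """First-order low-pass filter with time constant tau_ms."""
    a = np.exp(-dt_ms / tau_ms)
    y = np.zeros(len(x))
    for i in range(1, len(x)):
        y[i] = a * y[i - 1] + (1.0 - a) * x[i]
    return y

def segment(channels, chan_thresh=0.1, count_thresh=2,
            slow_thresh=0.2, fast_thresh=0.5):
    """channels: (T, n_channels) mean-subtracted, rectified filter-bank output."""
    above = channels > chan_thresh                      # first layer: thresholding
    noisy_bit = (above.sum(axis=1) >= count_thresh)     # second layer: >= 2 channels
    slow = one_pole_lowpass(noisy_bit.astype(float), tau_ms=10.0)  # fill drop-outs
    fast = one_pole_lowpass(noisy_bit.astype(float), tau_ms=1.0)   # catch onsets
    return (slow > slow_thresh) | (fast > fast_thresh)  # OR -> clean segmentation bit
```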
C. Smoothing & Normalization: The raw output of the filter bank is rectified and smoothed
with a single pole filter and subsequently normalized. Smoothing was done with the same time-scale (1 ms) in all frequency channels. Let X(t) be the instantaneous vector of rectified and smoothed channel data; then the instantaneous output of the normalizer is

\hat{X}(t) = \frac{X(t)}{\theta + \|X(t)\|_1}

where θ is a positive constant whose purpose is to prevent the normalization stage from amplifying noise in the absence of a transient signal. With this normalization we have ‖X̂(t)‖₁ ≈ 0 if ‖X(t)‖₁ ≪ θ, and ‖X̂(t)‖₁ ≈ 1 if ‖X(t)‖₁ ≫ θ. Thus θ effectively determines a soft input threshold that transients must exceed if they are to be normalized and passed on to higher level processing.
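In software the normalization is a one-liner; the value of θ below is an arbitrary illustration.

```python
import numpy as np

def normalize(X, theta=0.05):
    """X_hat(t) = X(t) / (theta + ||X(t)||_1), applied frame by frame.
    X: (T, n_channels) rectified, smoothed channel data."""
    return X / (theta + np.abs(X).sum(axis=1, keepdims=True))
```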
A sequence of normalized vectors over a time-window of length T is used as the feature
vector for the correlation and classification stages of the algorithm. Figure 3 shows four normalized feature vectors from one class of transients (concatenated together).

[Figure 3 appears here: a grey-scale time-frequency image of the four concatenated exemplars, spanning 0-300 ms.]
Figure 3: Normalized representation of the first 4 exemplars from one class of transients.
D. Correlation: The feature-vectors are correlated in the time-frequency domain against a set of K time-frequency templates. The k-th feature-vector-template is precalculated by averaging over a corpus of vectors from the k-th class. Thus, if C_k represents the k-th transient class, and if ⟨·⟩_k represents an average over the elements in a class, e.g. ⟨X̂(t)⟩_k = E{X̂(t) | X̂(t) ∈ C_k}, then the template is of the form b_k(t) = ⟨X̂(t)⟩_k. The instantaneous output of the correlation stage is a K-dimensional vector ĉ(t) whose k-th component is

c_k(t) = \sum_{t'=t-T}^{t} \hat{X}(t') \cdot b_k(t').

The time-frequency window over which the correlations are performed is of length T and is advanced by one time-step between correlation calculations.
E. Classification: The classification stage is a simple winner-take-all algorithm that assigns a class to the feature vector by picking the component of c_k(t) that has the largest value at the appropriate time, i.e. class = argmax_k {c_k(t_valid)}.
The segmenter is used to determine the time t_valid when the output of the winner-take-all is to be used for classification. This corresponds to properly aligning the feature vector
and the template. Leave-one-out cross-validation was used to estimate the out-of-sample
classification performance of all the algorithms described in this paper. The rate of
correct classification for the baseline algorithm was 92.8%. Out of a total of 221 events
that were detected and segmented, 16 were misclassified.
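Stages D and E translate directly into a few lines of code. The sketch below is illustrative; b stacks the K class templates, and in the full system the winner is read out at the time t_valid supplied by the segmenter.

```python
import numpy as np

def correlations(X_hat, b):
    """c_k at the end of a length-T window: sum over time and frequency of
    X_hat(t') . b_k(t') for the last T normalized frames."""
    K, T, n_ch = b.shape
    window = X_hat[-T:]                        # (T, n_ch) most recent frames
    return np.einsum('tc,ktc->k', window, b)   # one correlation value per class

def classify(X_hat, b):
    """Winner-take-all over the K correlator outputs."""
    return int(np.argmax(correlations(X_hat, b)))
```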
3 A CORRELATION ALGORITHM FOR ANALOG VLSI
We now address the question of how to perform classification without performing analog-analog multiplication and without having to store analog templates. To provide a better
understanding of the algorithm, we present it as a set of incremental modifications to the
baseline algorithm. This will serve to make clear the role played by each modification.
Examination of the normalized representation in figure 3 suggests that the information
content of any one time-frequency bin cannot be very high. Accordingly, we seek a
highly compressed representation that is both easy to form and with which it is easy to
compute. As a preliminary step to forming this compressed representation, consider
correlating the time-derivative of the feature vector with the time-derivative of the template,
\dot{c}_k(t) = \sum_{t'=t-T}^{t} \dot{\hat{X}}(t') \cdot \dot{b}_k(t'),  where  \dot{b}_k(t) = ⟨\dot{\hat{X}}(t)⟩_k.
This modification has no effect on the out-of-sample performance of the winner-take-all
classification algorithm. The above representation, by itself, has very few implementation
advantages. It can, in principle, mitigate the effect of any systematic offsets that might
emerge from the normalization circuit. Unfortunately, the price for this small advantage
would be a very complex multiplier. This is evident since the time-derivative of a positive
quantity can have either sign, both the feature vector and the template are now bipolar.
Accordingly the correlation hardware would now require 4-quadrant analog-analog
multipliers. Moreover the storage circuits must handle bipolar quantities as well.
The next step in forming a compressed representation is to replace the time-differentiated
template with just a sign that indicates whether the template value in a particular channel
is increasing or decreasing with time. This template is b'_k(t) = sign(⟨\dot{\hat{X}}(t)⟩_k). We denote this template as the [-1,+1]-representation template. The resulting classification algorithm
yields exactly the same out-of-sample performance as the baseline algorithm. The 4-quadrant
analog-analog multiply of the differentiated representation is reduced to a "4-quadrant
analog-binary" multiply. The storage requirements are reduced to a single bit per timefrequency bin. To simplify the hardware yet further, we exploit the fact that the time
derivative of a random unit vector net) (with respect to the I-norm) satisfies
E{ ~Sign?(Uv))iv} = 2E{ ~e?(uv))iv}
where
e
is a step function. Accordingly, if we use a template whose elements are in
[0,1] instead of [-1, +1], i.e.
b'
I
k(t) =
E{ ~ b'vXv }= 2E{b' vXv} =IlxlI"
I
e((X(t)} k)'
we expect
provided the feature vector
X(t) is drawn from the
Architecture/or Acoustic Transient Processing
739
same class as is used to calculate the template. Furthermore, if the feature vector and the template are statistically independent, then we expect that either representation will produce a zero correlation,

E\{ \sum_{\nu} b'_{\nu} \dot{\hat{X}}_{\nu} \} = E\{ \sum_{\nu} b''_{\nu} \dot{\hat{X}}_{\nu} \} = 0.

In practice, we find that the difference
in correlation values between using the [0,1] and the [-1,+1] representations is simply a
scale factor (approximately equal to 2 to several digits of precision). This holds even
when the feature vectors and the templates do not correspond to the same class. Thus the
difference between the two representations is quantitatively minor and qualitatively
nonexistent, as evidenced by our classification experiments, which show that the out-of-sample performance of the [0,1] representation is identical to that of the [-1,+1] representation. Furthermore, changing to the [0,1] representation has no impact on the storage requirements since both representations require the storage of a single bit per time-frequency bin. On the other hand, consider that by using the [0,1] representation we now have a "2-quadrant analog-binary" multiply instead of a "4-quadrant analog-binary" multiply. Finally, we observe that differentiation and correlation are commuting operations, thus rather than differentiating X̂(t) before correlation, we can differentiate after the correlation without changing the result. This reduces the complexity of the correlation operation still further, since the fact that both X̂(t) and b''_k(t) are positive means that we need only implement a correlator with 1-quadrant analog-binary multiplies.
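The chain of simplifications can be summarized in a short sketch: keep only the sign bits of the time-differentiated class average, accumulate the masked (all-positive) normalized inputs, and take the time difference of successive accumulated values after correlating. The code below is an illustration, not the chip's logic.

```python
import numpy as np

def make_binary_template(class_examples):
    """b''_k: 1 where the class-average envelope is increasing in time, else 0.
    class_examples: (n_examples, T, n_channels) normalized exemplars."""
    mean = class_examples.mean(axis=0)
    return np.diff(mean, axis=0) > 0           # one bit per time-frequency bin

def binary_correlation_derivative(X_hat, bits):
    """1-quadrant correlation followed by differentiation: accumulate the
    normalized input only where the template bit is set, then difference the
    last two accumulator values (differentiate after correlating)."""
    T = bits.shape[0]
    window = X_hat[-T:]
    running = np.cumsum((window * bits).sum(axis=1))
    return running[-1] - running[-2]
```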
The result of the above evolution is a correlation algorithm that empirically performs as
well as a baseline correlation algorithm, but only requires binary-multiplexing to perform
the correlation. We find that with only 16 frequency channels and 64 time bins (1024 bits/template), we are able to achieve the desired level of performance. We have undertaken the design and fabrication of a prototype chip. This chip has been fabricated and we will report on its performance in the near future. Figure 4 illustrates the key architectural features of the correlator/memory implementation.

[Figure 4 appears here: block diagram in which the input currents pass through a 1-norm normalizer and into the correlator/memory array.]
Figure 4: Schematic architecture of the k-th correlator/memory.

The rectified and
smoothed frequency-analyzed signals are input from the left as currents. The currents are
normalized before being fed into the correlator. A binary time-frequency template is
stored as a bit pattern in the correlator/memory. A single bit is stored at each time and
frequency bin. If this bit is set, current is mirrored from the horizontal (frequency) lines
onto vertical (aggregation) lines. Current from the aggregation lines is integrated and
shifted in a bucket-brigade analog shift register. The last two stages of the shift register
are differenced to estimate a time-derivative.
4 DISCUSSION AND CONCLUSIONS
The correlation algorithm described in the previous section is related to the zero-crossing
representation analyzed by Yang, Wang, and Shamma (1992). This is because bit flips
in the templates correspond to the zero crossings of the expected time-derivative of the
normalized "energy-envelope." Note that we do not encode the incoming acoustic signal
with a zero-crossing representation. Interestingly enough, if both the analog signal and
the template are reduced to a binary representation, then the classification performance
drops dramatically. It appears that maintaining some analog information in the processing
path is significant.
The frequency-domain normalization approach presented above throws away absolute
intensity information. Thus, low intensity resonances that remain excited after the initial
burst of acoustic energy are as important in the feature vector as the initial burst of
energy. These resonances can contain significant information about the nature of the
transient but would have less weight in an algorithm with a different normalization
scheme. Another consequence of the normalization is that even a transient whose spectrum
is highly concentrated in just a few frequency channels will spread its information over
the entire spectrum through the normalization denominator. The use of a normalized
representation thus distributes the correlation calculation over very many frequency
channels and serves to mitigate the effect of device mismatch.
We consider the proposed correlator/memory as a potential component in more sophisticated
acoustic processing systems. For example, the continuously generated output of the
correlators , c(t), is itself a feature vector that could be used in more sophisticated
segmentation and/or classification algorithms such as the time-delayed neural network
approach of Unnikrishnan, Hopfield and Tank (1991).
The work reported in this report was supported by a Whiting School of Engineering/Applied
Physics Laboratory Collaborative Grant. Preliminary work was supported by an APL
Internal Research & Development Budget.
REFERENCES
Furth, P.M., Kumar, N.G., Andreou, A.G. and Goldstein, M.H., "Experiments with the Hopkins Electronic EAR", 14th Speech Research Symposium, Baltimore, MD, pp. 183-189, (1994).
Pineda, F.J., Ryals, K., Steigerwald, D. and Furth, P., (1995). "Acoustic Transient Processing using the Hopkins Electronic Ear", World Conference on Neural Networks 1995, Washington DC.
Yang, X., Wang, K. and Shamma, S.A. (1992). "Auditory Representations of Acoustic Signals", IEEE Trans. on Information Theory, 38, pp. 824-839.
Morgan, N., Bourlard, H., Greenberg, S., Hermansky, H. and Wu, S. L., (1996). "Stochastic Perceptual Models of Speech", IEEE Proc. Intl. Conference on Acoustics, Speech and Signal Processing, Detroit, MI, pp. 397-400.
Unnikrishnan, K.P., Hopfield, J.J., and Tank, D.W. (1991). "Connected-Digit Speaker-Dependent Speech Recognition Using a Neural Network with Time-Delayed Connections", IEEE Transactions on Signal Processing, 39, pp. 698-713.
212 | 1,193 | A New Approach to Hybrid HMM/ANN Speech
Recognition Using Mutual Information Neural
Networks
G. Rigoll, C. Neukirchen
Gerhard-Mercator-University Duisburg
Faculty of Electrical Engineering
Department of Computer Science
Bismarckstr. 90, Duisburg, Germany
ABSTRACT
This paper presents a new approach to speech recognition with hybrid
HMM/ANN technology. While the standard approach to hybrid
HMM/ANN systems is based on the use of neural networks as
posterior probability estimators, the new approach is based on the use
of mutual information neural networks trained with a special learning
algorithm in order to maximize the mutual information between the
input classes of the network and its resulting sequence of firing output
neurons during training. It is shown in this paper that such a neural
network is an optimal neural vector quantizer for a discrete hidden
Markov model system trained on Maximum Likelihood principles.
One of the main advantages of this approach is the fact, that such
neural networks can be easily combined with HMM's of any
complexity with context-dependent capabilities. It is shown that the
resulting hybrid system achieves very high recognition rates, which
are now already on the same level as the best conventional HMM
systems with continuous parameters, and the capabilities of the
mutual information neural networks are not yet entirely exploited.
1 INTRODUCTION
Hybrid HMM/ANN systems deal with the optimal combination of artificial neural
networks (ANN) and hidden Markov models (HMM). Especially in the area of automatic
speech recognition, it has been shown that hybrid approaches can lead to very powerful
and efficient systems, combining the discriminative capabilities of neural networks and
the superior dynamic time warping abilities of HMM's. The most popular hybrid
approach is described in (Hochberg, 1995) and replaces the component modeling the
emission probabilities of the HMM by a neural net. This is possible, because it is shown
in (Bourlard, 1994) that neural networks can be trained so that the output of the m-th
neuron approximates the posterior probability p(Ω_m|x). In this paper, an alternative
method for constructing a hybrid system is presented. It is based on the use of discrete
HMM's which are combined with a neural vector quantizer (VQ) in order to form a hybrid
system. Each speech feature vector is presented to the neural network, which generates a
firing neuron in its output layer. This neuron is processed as VQ label by the HMM's.
There are the following arguments for this alternative hybrid approach:
• The neural vector quantizer has to be trained on a special information theory criterion,
based on the mutual information between network input and resulting neuron firing
sequence. It will be shown that such a network is the optimal acoustic processor for a
discrete HMM system, resulting in a profound mathematical theory for this approach.
• Resulting from this theory, a formula can be derived which jointly describes the
behavior of the HMM and the neural acoustic processor. In that way, both systems can
be described in a unified manner and both major components of the hybrid system can
be trained using a unified learning criterion.
• The above mentioned theoretical background leads to the development of new neural
network paradigms using novel training algorithms that have not been used before in
other areas of neurocomputing, and therefore represent major challenges and issues in
learning and training for neural systems.
• The neural networks can be easily combined with any HMM system of arbitrary
complexity. This leads to the combination of optimally trained neural networks with
very powerful HMM's, having all features useful for speech recognition, e.g. triphones,
function words, crossword triphones, etc .. Context-dependency, which is very desirable
but relatively difficult to realize with a pure neural approach, can be left to the HMM's.
• The resulting hybrid system has still the basic structure of a discrete system, and
therefore has all the effective features associated with discrete systems, e.g. quick and
easy training as well as recognition procedures, real-time capabilities, etc ..
• The work presented in this paper has been also successfully implemented for a
demanding speech recognition problem, the 1000 word speaker-independent continuous
Resource Management speech recognition task. For this task, the hybrid system
produces one of the best recognition results obtained by any speech recognition system.
In the following section, the theoretical foundations of the hybrid approach are briefly
explained. A unified probabilistic model for the combined HMM/ANN system is derived,
describing the interaction of the neural and the HMM component. Furthermore, it is
shown that the optimal neural acoustic processor can be obtained from a special
information theoretic network training algorithm.
2 INFORMATION THEORY PRINCIPLES FOR NEURAL
NETWORK TRAINING
We are considering now a neural network of arbitrary topology used as neural vector quantizer for a discrete HMM system. If K patterns are presented to the hybrid system during training, the feature vectors resulting from these patterns using any feature extraction method can be denoted as x(k), k = 1...K. If these feature vectors are presented to the input layer of a neural network, the network will generate one firing neuron for each presentation. Hence, all K presentations will generate a stream of firing neurons with length K resulting from the output layer of the neural net. This label stream is denoted as Y = y(1)...y(K). The label stream Y will be presented to the HMM's, which calculate the probability that this stream has been observed while a pattern of a certain class has been presented to the system. It is assumed that M different classes Ω_m are active in the
system, e.g. the words or phonemes in speech recognition. Each feature vector x(k) will belong to one of these classes. The class Ω_m to which feature vector x(k) belongs is denoted as Ω(k). The major training issue for the neural network can be now formulated as follows: How should the weights of the network be trained, so that the network produces a stream of firing neurons that can be used by the discrete HMM's in an optimal way? It is known that HMM's are usually trained with information theory methods which mostly rely on the Maximum Likelihood (ML) principle. If the parameters of the hybrid system (i.e. transition and emission probabilities and network weights) are summarized in the vector θ, the probability p_θ(x(k)|Ω(k)) denotes the probability of the pattern x at discrete time k, under the assumption that it has been generated by the model representing class Ω(k), with parameter set θ. The ML principle will then try to maximize the joint probability of all presented training patterns x(k), according to the following Maximum Likelihood function:

\theta^{*} = \arg\max_{\theta} \left\{ \sum_{k=1}^{K} \log p_{\theta}(x(k) \,|\, \Omega(k)) \right\}    (1)
where θ* is the optimal parameter vector maximizing this equation. Our goal is to feed the feature vector x into a neural network and to present the neural network output to the Markov model. Therefore, one has to introduce the neural network output in a suitable manner into the above formula. If the vector x is presented to the network input layer, and we assume that there is a chance that any neuron y_n, n = 1...N (with network output layer size N) can fire with a certain probability, then the output probability p(x|Ω) in (1) can be written as:

p(x \,|\, \Omega) = \sum_{n=1}^{N} p(x, y_n \,|\, \Omega) = \sum_{n=1}^{N} p(y_n \,|\, \Omega) \cdot p(x \,|\, y_n, \Omega)    (2)
Now, the combination of the neural component with the HMM can be made more obvious: In (2), typically the probability p(y_n|Ω) will be described by the Markov model, in terms of the emission probabilities of the HMM. For instance, in continuous parameter HMM's, these probabilities are interpreted as weights for Gaussian mixtures. In the case of semi-continuous systems or discrete HMM's, these probabilities will serve as discrete emission probabilities of the codebook labels. The probability p(x|y_n,Ω) describes the acoustic processor of the system and is characterizing the relation between the vector x as input to the acoustic processor and the label y_n, which can be considered as the n-th output component of the acoustic processor. This n-th output component may characterize e.g. the n-th Gaussian mixture component in continuous parameter HMM's, or the generation of the n-th label of a vector quantizer in a discrete system. This probability is often considered as independent of the class Ω and can then be expressed as p(x|y_n). It is exactly this probability that can be modeled efficiently by our neural network. In this case, the vector x serves as input to the neural network and y_n characterizes the n-th neuron in the output layer of the network. Using Bayes law, this probability can be written as:
p(x \,|\, y_n) = \frac{p(y_n \,|\, x) \cdot p(x)}{p(y_n)}    (3)

yielding for (2):

p(x \,|\, \Omega) = \sum_{n=1}^{N} p(y_n \,|\, \Omega) \cdot \frac{p(y_n \,|\, x) \cdot p(x)}{p(y_n)}    (4)
Using again Bayes law to express

p(y_n \,|\, \Omega) = \frac{p(\Omega \,|\, y_n) \cdot p(y_n)}{p(\Omega)}    (5)

one obtains from (4):

p(x \,|\, \Omega) = \frac{p(x)}{p(\Omega)} \cdot \sum_{n=1}^{N} p(\Omega \,|\, y_n) \cdot p(y_n \,|\, x)    (6)
We have now modified the class-dependent probability of the feature vector x in a way that allows the incorporation of the probability p(y_n|x). This probability allows a better characterization of the behavior of the neural network, because it describes the probability of the various neurons y_n, if the vector x is presented to the network input. Therefore, these probabilities give a good description of the input/output behavior of the neural network. Eq. (6) can therefore be considered as probabilistic model for the hybrid system, where the neural acoustic processor is characterized by its input/output behavior. Two cases can be now distinguished: In the first case, the neural network is assumed to be a probabilistic paradigm, where each neuron fires with a certain probability, if an input vector is presented. In this case all neurons contribute to the information forwarded to the HMM's. As already mentioned, in this paper, the second possible case is considered, namely that only one neuron in the output layer fires and will be fed as observed label to the HMM. In this case, we have a deterministic decision, and the probability p(y_n|x) describes what neuron y_n* fires if vector x is presented to the input layer. Therefore, this probability reduces to

p(y_n \,|\, x) = \delta_{n, n^{*}}  (i.e. 1 if n = n*, 0 otherwise)    (7)

Then, (6) yields:

p(x \,|\, \Omega) = \frac{p(x)}{p(\Omega)} \cdot p(\Omega \,|\, y_{n^{*}})    (8)
Now, the class-dependent probability p(x|Ω) is expressed through the probability p(Ω|y_n*), involving directly the firing neuron y_n*, when feature vector x is presented. One has now to turn back to (1), recalling the fact that this equation describes the fact that the Markov models are trained with the ML criterion. It should also be recalled that the entire sequence of feature vectors, x(k), k = 1...K, results in a label stream of firing neurons y_n*(k), k = 1...K, where y_n*(k) is the firing neuron if the k-th vector x(k) is presented to the neural network. Now, (8) can be substituted into (1) for each presentation k, yielding the modified ML criterion:
\theta^{*} = \arg\max_{\theta} \left\{ \sum_{k=1}^{K} \log \left[ \frac{p(x(k))}{p(\Omega(k))} \cdot p(\Omega(k) \,|\, y_{n^{*}(k)}) \right] \right\}
    = \arg\max_{\theta} \left\{ \sum_{k=1}^{K} \log p(x(k)) - \sum_{k=1}^{K} \log p(\Omega(k)) + \sum_{k=1}^{K} \log p(\Omega(k) \,|\, y_{n^{*}(k)}) \right\}    (9)
Usually, in a continuous parameter system, the probability p(x) can be expressed as:

p(x) = \sum_{n=1}^{N} p(x \,|\, y_n) \cdot p(y_n)    (10)
and is therefore dependent on the parameter vector θ, because in this case, p(x|y_n) can be interpreted as the probability provided by the Gaussian distributions, and the parameters of
the Gaussians will depend on θ. As just mentioned before, in a discrete system, only one firing neuron y_n* survives, resulting in the fact that only the n*-th member remains in the sum in (10). This would correspond to only one "firing Gaussian" in the continuous case, leading to the following expression for p(x):

p(x) = p(x \,|\, y_{n^{*}}) \cdot p(y_{n^{*}}) = p(x, y_{n^{*}}) = p(y_{n^{*}} \,|\, x) \cdot p(x)    (11)
Considering now the fact that the acoustic processor is not represented by a Gaussian but instead by a vector quantizer, where the probability p(y_n*|x) of the firing neuron is equal to 1, then (11) reduces to p(x) = p(x) and it becomes obvious that this probability is not affected by any distribution that depends on the parameter vector θ. This would be different if p(y_n*|x) in (11) did not have binary characteristics as in (7), but were computed by a continuous function which in this case would depend on the parameter vector θ. Thus, without consideration of p(x), the remaining expression to be maximized in (9) reduces to:
\theta^{*} = \arg\max_{\theta} \left[ -\sum_{k=1}^{K} \log p(\Omega(k)) + \sum_{k=1}^{K} \log p(\Omega(k) \,|\, y_{n^{*}(k)}) \right]
    = \arg\max_{\theta} \left[ -E\{\log p(\Omega)\} + E\{\log p(\Omega \,|\, y_{n^{*}})\} \right]    (12)
These expectations of logarithmic probabilities are also defined as entropies. Therefore, (9) can be also written as

\theta^{*} = \arg\max_{\theta} \{ H(\Omega) - H(\Omega \,|\, Y) \}    (13)
This equation can be interpreted as follows: The term on the right side of (13) is also known as the mutual information I(Ω,Y) between the probabilistic variables Ω and Y, i.e.:

I(\Omega, Y) = H(\Omega) - H(\Omega \,|\, Y) = H(Y) - H(Y \,|\, \Omega)    (14)
Therefore, the final information theory-based training criterion for the neural network can
be formulated as follows: The synaptic weights of the neural network should be chosen so as
to maximize the mutual information between the string representing the classes of the
vectors presented to the network input layer during training and the string representing the
resulting sequence of firing neurons in the output layer of the neural network. This can be
also expressed as the Maximum Mutual Information (MMI) criterion for neural network
training. This concludes the proof that MMI neural networks are indeed optimal acoustic
processors for HMM's trained with maximum likelihood principles.
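As a concrete rendering of the criterion, the empirical mutual information between the class string and the firing-neuron string can be estimated from co-occurrence counts; the sketch below is illustrative and not part of the paper.

```python
import numpy as np

def empirical_mmi(classes, neurons, M, N):
    """I(Omega; Y) = H(Omega) - H(Omega|Y) from paired (class, firing neuron)
    observations over the training set.

    classes[k] in {0..M-1}: class of pattern k; neurons[k] in {0..N-1}: index
    of the neuron that fired when pattern k was presented."""
    joint = np.zeros((M, N))
    for m, n in zip(classes, neurons):
        joint[m, n] += 1.0
    joint /= joint.sum()
    p_class = joint.sum(axis=1)
    p_neuron = joint.sum(axis=0)
    rows, cols = np.nonzero(joint)
    h_class = -np.sum(p_class[p_class > 0] * np.log(p_class[p_class > 0]))
    h_cond = -np.sum(joint[rows, cols]
                     * np.log(joint[rows, cols] / p_neuron[cols]))
    return h_class - h_cond
```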
3 REALIZATION OF MMI TRAINING ALGORITHMS FOR
NEURAL NETWORKS
Training the synaptic weights of a neural network in order to achieve mutual information
maximization is not easy. Two different algorithms have been developed for this task and
can only be briefly outlined in this paper. A detailed description can be found in (Rigoll,
1994) and (Neukirchen, 1996). The first experiments used a single-layer neural network
with Euclidean distance as propagation function. The first implementation of the MMI
training paradigm has been realized in (Rigoll, 1994) and is based on a self-organizing
procedure, starting with initial weights derived from k-means clustering of the training
vectors, followed by an iterative procedure to modify the weights. The mutual
information increases in a self-organizing way from a low value at the start to a much
higher value after several iteration cycles. The second implementation has been realized
recently and is described in detail in (Neukirchen, 1996). It is based on the idea of using
gradient methods for finding the MMI value. This technique has not been used before,
because the maximum search for finding the firing neuron in the output layer has
prevented the calculation of derivatives. This maximum search can be approximated using
the softmax function, denoted as s_n for the n-th neuron. It can be computed from the activations z_l of all neurons as:

s_n = \frac{e^{z_n / T}}{\sum_{l=1}^{N} e^{z_l / T}}    (15)

where a small value for parameter T approximates a crisp maximum selection. Since the string Ω in (14) is always fixed during training and independent of the parameters in θ,
only the function H(Ω|Y) has to be minimized. This function can also be expressed as

H(\Omega \,|\, Y) = -\sum_{m=1}^{M} \sum_{n=1}^{N} p(y_n, \Omega_m) \cdot \log p(\Omega_m \,|\, y_n)    (16)
A derivative with respect to a weight w_lj of the neural network yields:

\frac{\partial H(\Omega \,|\, Y)}{\partial w_{lj}} = \cdots    (17)

As shown in (Neukirchen, 1996), all the required terms in (17) can be computed effectively and it is possible to realize a gradient descent method in order to maximize the mutual information of the training data. The great advantage of this method is the fact that it is now possible to generalize this algorithm for use in all popular neural network architectures, including multilayer and recurrent neural networks.
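To illustrate the quantity being differentiated, the sketch below evaluates the soft-count version of H(Ω|Y) from the network activations using the softmax of (15); the gradient of this scalar with respect to the weights is what the paper computes analytically. The temperature value is a placeholder.

```python
import numpy as np

def softmax(Z, T=0.1):
    """Row-wise softmax with temperature T, eq. (15)."""
    e = np.exp((Z - Z.max(axis=1, keepdims=True)) / T)
    return e / e.sum(axis=1, keepdims=True)

def conditional_entropy(Z, classes, M, T=0.1):
    """Soft-count estimate of H(Omega|Y) from (16).

    Z: (K, N) output activations for the K training vectors;
    classes: (K,) integer class labels in {0..M-1}."""
    classes = np.asarray(classes)
    S = softmax(Z, T)                                  # s_n per pattern
    K = Z.shape[0]
    joint = np.stack([S[classes == m].sum(axis=0) / K  # p(y_n, Omega_m)
                      for m in range(M)])
    cond = joint / np.maximum(joint.sum(axis=0), 1e-12)  # p(Omega_m | y_n)
    mask = joint > 0
    return -np.sum(joint[mask] * np.log(cond[mask]))
```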
4 RESULTS FOR THE HYBRID SYSTEM
The new hybrid system has been developed and extensively tested using the Resource
Management 1000 word speaker-independent continuous speech recognition task. First, a
baseline discrete HMM system has been built up with all well-known features of a
context-dependent HMM system. The performance of that baseline system is shown in
column 2 of Table 1. The 1st column shows the performance of the hybrid system with
the neural vector quantizer. This network has some special features not mentioned in the
previous sections, e.g. it uses multiple frame input and has been trained on context-dependent classes. That means that the mutual information between the stream of firing
neurons and the corresponding input stream of triphones has been maximized. In this
way, the firing behavior of the network becomes sensitive to context-dependent units.
Therefore, this network may be the only existing context-dependent acoustic processor,
carrying the principle of triphone modeling from the HMM structure to the acoustic front
end. It can be seen, that a substantially higher recognition performance is obtained with
the hybrid system, that compares well with the leading continuous system (HTK, in
column 3). It is expected, that the system will be further improved in the near future
through various additional features, including full exploitation of multilayer neural VQ's
and several conventional HMM improvements, e.g. the use of crossword triphones.
Recent results on the larger Wall Street Journal (WSJ) database have shown a 10.5% error
rate for the hybrid system compared to a 13.4% error rate for a standard discrete system,
using the 5k vocabulary test with bigram language model of perplexity 110. This error
rate can be further reduced to 8.9% using crossword triphones and 6.6% with a trigram
language model. This rate compares already quite favorably with the best continuous
systems for the same task. It should be noted that this hybrid WSJ system is still in its
initial stage and the neural component is not yet as sophisticated as in the RM system.
5 CONCLUSION
A new neural network paradigm and the resulting hybrid HMM/ANN speech recognition
system have been presented in this paper. The new approach performs already very well
and is still perfectible. It gains its good performance from the following facts: (1) The use
of information theory-based training algorithms for the neural vector quantizer, which can
be shown to be optimal for the hybrid approach. (2) The possibility of introducing
context-dependency not only to the HMM's, but also to the neural quantizer. (3) The fact
that this hybrid approach allows the combination of an optimal neural acoustic processor
with the most advanced context-dependent HMM system. We will continue to further
implement various possible improvements for our hybrid speech recognition system.
REFERENCES
Rigoll, G. (1994) Maximum Mutual Information Neural Networks for Hybrid
Connectionist-HMM Speech Recognition Systems, IEEE Transactions on Speech and
Audio Processing, Vol. 2, No.1, Special Issue on Neural Networks for Speech
Processing, pp. 175-184
Neukirchen, C. & Rigoll, G. (1996) Training of MMI Neural Networks as Vector
Quantizers, Internal Report, Gerhard-Mercator-University Duisburg, Faculty of Electrical
Engineering, available via http://www.fb9-tLuni-duisburg.de/veroeffentl.html
Bourlard, H. & Morgan, N. (1994) Connectionist Speech Recognition: A Hybrid
Approach, Kluwer Academic Publishers
Hochberg, M., Renals, S., Robinson, A., Cook, G. (1995) Recent Improvements to the
ABBOT Large Vocabulary CSR System, in Proc. IEEE-ICASSP, Detroit, pp. 69-72
Rigoll, G., Neukirchen, C., Rottland, J. (1996) A New Hybrid System Based on MMI-Neural Networks for the RM Speech Recognition Task, in Proc. IEEE-ICASSP, Atlanta
Table 1: Comparison of recognition rates for different speech recognition systems.
RM SI word recognition rate with word pair grammar: correctness (accuracy)

test set | hybrid MMI-NN system | baseline k-means VQ system | continuous pdf system (HTK)
Feb.'89  | 96,3 % (95,6 %)      | 94,3 % (93,6 %)            | 96,0 % (95,5 %)
Oct.'89  | 95,4 % (94,5 %)      | 93,5 % (92,0 %)            | 95,4 % (94,9 %)
Feb.'91  | 96,7 % (95,9 %)      | 94,4 % (93,5 %)            | 96,6 % (96,0 %)
Sep.'92  | 93,9 % (92,5 %)      | 90,7 % (88,9 %)            | 93,6 % (92,6 %)
average  | 95,6 % (94,6 %)      | 93,2 % (92,0 %)            | 95,4 % (94,7 %)
213 | 1,194 | An Apobayesian Relative of Winnow
Nick Littlestone
NEC Research Institute
4 Independence Way
Princeton, NJ 08540
Chris Mesterharm
NEC Research Institute
4 Independence Way
Princeton, NJ 08540
Abstract
We study a mistake-driven variant of an on-line Bayesian learning algorithm (similar to one studied by Cesa-Bianchi, Helmbold,
and Panizza [CHP96]). This variant only updates its state (learns)
on trials in which it makes a mistake. The algorithm makes binary
classifications using a linear-threshold classifier and runs in time linear in the number of attributes seen by the learner. We have been
able to show, theoretically and in simulations, that this algorithm
performs well under assumptions quite different from those embodied in the prior of the original Bayesian algorithm. It can handle
situations that we do not know how to handle in linear time with
Bayesian algorithms. We expect our techniques to be useful in
deriving and analyzing other apobayesian algorithms.
1 Introduction
We consider two styles of on-line learning. In both cases, learning proceeds in a
sequence of trials. In each trial, a learner observes an instance to be classified,
makes a prediction of its classification, and then observes a label that gives the
correct classification. One style of on-line learning that we consider is Bayesian.
The learner uses probabilistic assumptions about the world (embodied in a prior
over some model class) and data observed in past trials to construct a probabilistic
model (embodied in a posterior distribution over the model class). The learner uses
this model to make a prediction in the current trial. When the learner is told the
correct classification of the instance, the learner uses this information to update the
model, generating a new posterior to be used in the next trial.
In the other style of learning that we consider, the attention is on the correctness
of the predictions rather than on the model of the world. The internal state of the
learner is only changed when the learner makes a mistake (when the prediction fails
to match the label). We call such an algorithm mistake-driven. (Such algorithms are
often called conservative in the computational learning theory literature.) There is a
simple way to derive a mistake-driven algorithm from anyon-line learning algorithm
(we restrict our attention in this paper to deterministic algorithms). The derived
algorithm is just like the original algorithm, except that before every trial, it makes
a record of its entire state, and after every trial in which its prediction is correct,
it resets its state to match the recorded state, entirely forgetting the intervening
trial. (Typically this is actually implemented not by making such a record, but by
merely omitting the step that updates the state.) For example, if some algorithm
keeps track of the number of trials it has seen, then the mistake-driven version of
this algorithm will end up keeping track of the number of mistakes it has made.
Whether the original or mistake-driven algorithm will do better depends on the task
and on how the algorithms are evaluated.
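As a concrete illustration, a minimal sketch of this conversion is given below; the learner interface (`predict`, `update`) is hypothetical and not taken from the paper or any particular library.

```python
import copy

class MistakeDriven:
    """Wrap a deterministic on-line learner so that it keeps an update
    only on trials where its prediction was wrong."""

    def __init__(self, learner):
        # `learner` is any hypothetical object with predict(x) and update(x, y)
        self.learner = learner

    def trial(self, x, y):
        snapshot = copy.deepcopy(self.learner)  # record the entire state
        prediction = self.learner.predict(x)
        self.learner.update(x, y)
        if prediction == y:
            self.learner = snapshot  # correct trial: forget the intervening update
        return prediction
```

In practice one would simply skip the update step on correct trials rather than copying the state, as the text notes.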
We will start with a Bayesian learning algorithm that we call SBSB and use this
procedure to derive a mistake-driven variant, SASB. Note that the variant cannot
be expected to be a Bayesian learning algorithm (at least in the ordinary sense)
since a Bayesian algorithm would make a prediction that minimizes the Bayes risk
based on all the available data, and the mistake-driven variant has forgotten quite
a bit. We call such algorithms apobayesian learning algorithms. This name is
intended to suggest that they are derived from Bayesian learning algorithms, but
are not themselves Bayesian. Our algorithm SASB is very close to an algorithm
of [CHP96]. We study its application to different tasks than they do, analyzing its
performance when it is applied to linearly separable data as described below.
In this paper instances will be chosen from the instance space $X = \{0,1\}^{n}$ for some
$n$. Thus instances are composed of $n$ boolean attributes. We consider only two-category
classification tasks, with predictions and labels chosen from $Y = \{0,1\}$.
We obtain a bound on the number of mistakes SASB makes that is comparable to
bounds for various Winnow family algorithms given in [Lit88,Lit89]. As for those
algorithms, the bound holds under the assumption that the points labeled 1 are
linearly separable from the points labeled 0, and the bound depends on the size $\delta$ of
the gap between the two classes. (See Section 3 for a definition of $\delta$.) The mistake
bound for SASB is $O\!\left(\frac{1}{\delta^{2}}\log\frac{n}{\delta}\right)$. While this bound has an extra factor of $\log\frac{1}{\delta}$ not
present in the bounds for the Winnow algorithms, SASB has the advantage of not
needing any parameters. The Winnow family algorithms have parameters, and the
algorithms' mistake bounds depend on setting the parameters to values that depend
on $\delta$. (Often, the value of $\delta$ will not be known by the learner.) We expect the
techniques used to obtain this bound to be useful in analyzing other apobayesian
learning algorithms.
A number of authors have done related research regarding worst-case on-line
loss bounds including [Fre96,KW95,Vov90]. Simulation experiments involving a
Bayesian algorithm and a mistake-driven variant are described in [Lit95]. That
paper provides useful background for this paper. Note that our present analysis
techniques do not apply to the apobayesian algorithm studied there. The closest of
the original Winnow family algorithms to SASB appears to be the Weighted Majority algorithm [LW94], which was analyzed for a case similar to that considered
in this paper in [Lit89]. One should get a roughly correct impression of SASB if
one thinks of it as a version of the Weighted Majority algorithm that learns its
parameters.
In the next section we describe the Bayesian algorithm that we start with. In
Section 3 we discuss its mistake-driven apobayesian variant. Section 4 mentions
some simulation experiments using these algorithms, and Section 5 is the conclusion.
2 A Bayesian Learning Algorithm
To describe the Bayesian learning algorithm we must specify a family of distributions over $X \times Y$ and a prior over this family of distributions. We parameterize
the distributions with parameters $(\theta_{1}, \ldots, \theta_{n+1})$ chosen from $\Theta = [0,1]^{n+1}$. The
parameter $\theta_{n+1}$ gives the probability that the label is 1, and the parameter $\theta_{i}$ gives
the probability that the $i$th attribute matches the label. Note that the probability
that the $i$th attribute is 1 given that the label is 1 equals the probability that the
$i$th attribute is 0 given that the label is 0. We speak of this linkage between the
probabilities for the two classes as a symmetry condition. With this linkage, the
observation of a point from either class will affect the posterior distribution for both
classes. It is perhaps more typical to choose priors that allow the two classes to be
treated separately, so that the posterior for each class (giving the probability of elements of X conditioned on the label) depends only on the prior and on observations
from that class. The symmetry condition that we impose appears to be important
to the success of our analysis of the apobayesian variant of this algorithm. (Though
we impose this condition to derive the algorithm, it turns out that the apobayesian
variant can actually handle tasks where this condition is not satisfied.)
We choose a prior on $\Theta$ that gives probability 1 to the set of all elements
$\theta = (\theta_{1}, \ldots, \theta_{n+1}) \in \Theta$ for which at most one of $\theta_{1}, \ldots, \theta_{n}$ does not equal $\frac{1}{2}$.
The prior is uniform on this set. Note that for any $\theta$ in this set only a single attribute has a probability other than $\frac{1}{2}$ of matching the label, and thus only a single
attribute is relevant. Concentrating on this set turns out to lead to an apobayesian
algorithm that can, in fact, handle more than one relevant attribute and that performs particularly well when only a small fraction of the attributes are relevant.
This prior is related to to the familiar Naive Bayes model, which also assumes
that the attributes are conditionally independent given the labels. However, in the
typical Naive Bayes model there is no restriction to a single relevant attribute and
the symmetry condition linking the two classes is not imposed.
Our prior leads to the following algorithm. (The name SBSB stands for "Symmetric
Bayesian Algorithm with Singly-variant prior for Bernoulli distribution.")
Algorithm SBSB. Algorithm SBSB maintains counts $S_{i}$ of the number of times
each attribute matches the label, a count $M$ of the number of times the label is 1,
and a count $t$ of the number of trials.

Initialization: $S_{i} \leftarrow 0$ for $i = 1, \ldots, n$; $M \leftarrow 0$; $t \leftarrow 0$.

Prediction: Predict 1 given instance $(x_{1}, \ldots, x_{n})$ if and only if
$(M+1)\prod_{i=1}^{n}\left[x_{i}(S_{i}+1) + (1-x_{i})(t-S_{i}+1)\right] \;>\; (t-M+1)\prod_{i=1}^{n}\left[(1-x_{i})(S_{i}+1) + x_{i}(t-S_{i}+1)\right].$

Update: $M \leftarrow M + y$, $t \leftarrow t + 1$, and for each $i$, if $x_{i} = y$ then $S_{i} \leftarrow S_{i} + 1$.
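The following sketch transcribes SBSB directly into Python. It assumes binary attribute vectors given as lists, and it breaks the (unspecified) tie at exact equality by predicting 0; both choices are assumptions for illustration.

```python
class SBSB:
    """Sketch of algorithm SBSB as stated above, for n binary attributes."""

    def __init__(self, n):
        self.S = [0] * n  # S[i]: trials in which attribute i matched the label
        self.M = 0        # trials with label 1
        self.t = 0        # total trials

    def predict(self, x):
        left = self.M + 1
        right = self.t - self.M + 1
        for xi, Si in zip(x, self.S):
            left *= xi * (Si + 1) + (1 - xi) * (self.t - Si + 1)
            right *= (1 - xi) * (Si + 1) + xi * (self.t - Si + 1)
        return 1 if left > right else 0

    def update(self, x, y):
        self.M += y
        self.t += 1
        for i, xi in enumerate(x):
            if xi == y:
                self.S[i] += 1
```

Before any updates both sides of the prediction inequality equal 1, so the tie rule applies on the very first trial.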
3 An Apobayesian Algorithm
We construct an apobayesian algorithm by converting algorithm SBSB into a
mistake-driven algorithm using the standard conversion given in the introduction.
We call the resulting learning algorithm SASB; we have replaced "Bayesian" with
"Apobayesian" in the acronym.
In the previous section we made assumptions about the generation of the
instances and labels that led to SBSB and thence to SASB. These assumptions
have served their purpose and we now abandon them. In analyzing the apobayesian
algorithm we do not assume that the instances and labels are generated by some
stochastic process. Instead we assume that the instance-label pairs in all of the
trials are linearly-separable, that is, that there exist some WI, ., . ,Wn , and c such
that for every instance-label pair (x, y) we have E~=I WiXi ;::: c when y = 1 and
2:~=1 WiXi ::; c when y = O. We actually make a somewhat stronger assumption,
given in the following theorem, which gives our bound for the apobayesian algorithm.
Theorem 1. Suppose that $\gamma_{i} \geq 0$ and $\bar{\gamma}_{i} \geq 0$ for $i = 1, \ldots, n$, and that $\sum_{i=1}^{n} (\gamma_{i} + \bar{\gamma}_{i}) = 1$. Suppose that $0 \leq b_{0} < b_{1} \leq 1$ and let $\delta = b_{1} - b_{0}$. Suppose that algorithm
SASB is run on a sequence of trials such that the instance $x$ and label $y$ in each
trial satisfy $\sum_{i=1}^{n} \gamma_{i}x_{i} + \bar{\gamma}_{i}(1 - x_{i}) \leq b_{0}$ if $y = 0$ and $\sum_{i=1}^{n} \gamma_{i}x_{i} + \bar{\gamma}_{i}(1 - x_{i}) \geq b_{1}$ if
$y = 1$. Then the number of mistakes made by SASB will be bounded by
$O\!\left(\frac{1}{\delta^{2}}\log\frac{n}{\delta}\right)$.
We have space to say only a little about how the derivation of this bound proceeds.
Details are given in [Lit96].
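As a sketch, the separation condition of Theorem 1 can be checked directly on a finite set of trials; the function name and calling convention below are illustrative only.

```python
def satisfies_margin(trials, gamma, gamma_bar, b0, b1):
    """Check Theorem 1's condition on (x, y) pairs, where gamma[i] and
    gamma_bar[i] are nonnegative and sum(gamma) + sum(gamma_bar) == 1."""
    for x, y in trials:
        s = sum(g * xi + gb * (1 - xi)
                for g, gb, xi in zip(gamma, gamma_bar, x))
        if (y == 0 and s > b0) or (y == 1 and s < b1):
            return False
    return True
```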
In analyzing SASB we work with an abstract description of the associated algorithm
SBSB. This algorithm starts with a prior on $\Theta$ as described above. We represent
this with a density $p_{0}$. Then after each trial it calculates a new posterior density
$p_{t}(\theta) = \frac{p_{t-1}(\theta)\,P(x, y \mid \theta)}{\int p_{t-1}(\theta')\,P(x, y \mid \theta')\,d\theta'},$
where $p_{t}$ is the density after trial $t$ and $P(x, y \mid \theta)$ is the
conditional probability of the instance $x$ and label $y$ observed in trial $t$ given $\theta$. Thus
we can think of the algorithm as maintaining a current distribution on $\Theta$ that is
initially the prior. SASB is similar, but it leaves the current distribution unchanged
when a mistake is not made. For there to exist a finite mistake bound there must
exist some possible choice for the current distribution for which SASB would make
perfect predictions, should it ever arrive at that distribution. We call any such
distribution leading to perfect predictions a possible target distribution. It turns out
that the separability condition given in Theorem 1 guarantees that a suitable target
distribution exists. The analysis proceeds by showing that for an appropriate choice
of a target density p the relative entropy of the current distribution with respect to
the target distribution, p( 0) log(p( 0) / Pt (0)), decreases by at least some amount
R > 0 whenever a mistake is made. Since the relative entropy is never negative, the
number of mistakes is bounded by the initial relative entropy divided by R. This
form of analysis is very similar to the analysis of the various members of the Winnow
family in [Lit89,Lit91].
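The counting argument of the preceding paragraph can be summarized in one line, in the notation above:

```latex
% Each mistake decreases the relative entropy by at least R > 0,
% and relative entropy is never negative, so
\text{mistakes} \;\le\; \frac{1}{R}\int p(\theta)\,
    \log\frac{p(\theta)}{p_{0}(\theta)}\,d\theta .
```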
The same technique can be applied to other apobayesian algorithms. The abstract
update of Pt given above is quite general. The success of the analysis depends on
conditions on $p_{0}$ and $P(x, y \mid \theta)$ that we do not have space here to discuss.
[Figure: two panels plotting cumulative mistakes (0-250) against trials (0-10000). Left panel: p = 0.01, k = 1, n = 20; right panel: p = 0.1, k = 5, n = 20. Curves: Optimal, SBSB, SASB, SASB + voting.]
Figure 1: Comparison of SASB with SBSB
4 Simulation Experiments
The bound of the previous section was for perfectly linearly-separable data. We
have also done some simulation experiments exploring the performance of SASB on
non-separable data and comparing it with SBSB and with various other mistake-driven algorithms. A sample comparison of SASB with SBSB is shown in Figure
1. In each experimental run we generated 10000 trials with the instances and labels
chosen randomly according to a distribution specified by $\theta_{1} = \cdots = \theta_{k} = 1 - p$,
$\theta_{k+1} = \cdots = \theta_{n+1} = 0.5$, where $\theta_{1}, \ldots, \theta_{n+1}$ are interpreted as specified in Section
2, n is the number of attributes, and n, p, and k are as specified at the top of each
plot. The line labeled "optimal" shows the performance obtained by an optimal
predictor that knows the distribution used to generate the data ahead of time, and
thus does not need to do any learning. The lines labeled "SBSB" and "SASB" show
the performance of the corresponding learning algorithms. The lines labeled "SASB
+ voting" show the performance of SASB with the addition of a voting procedure
described in [Lit95]. This procedure improves the asymptotic mistake rate of the
algorithms. Each line on the graph is the average of 30 runs. Each line plots the
cumulative number of mistakes made by the algorithm from the beginning of the run
as a function of the number of trials.
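A minimal sketch of the data generation used in these runs, under the interpretation of the θ-parameters from Section 2 (the helper name is illustrative):

```python
import random

def generate_trial(n, k, p):
    """One (x, y) pair: theta_1..theta_k = 1 - p and
    theta_{k+1}..theta_{n+1} = 0.5, so the label is a fair coin and
    attribute i matches the label with probability theta_i."""
    theta = [1 - p] * k + [0.5] * (n - k)
    y = 1 if random.random() < 0.5 else 0
    x = [y if random.random() < th else 1 - y for th in theta]
    return x, y
```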
In the left hand plot, there is only 1 relevant attribute. This is exactly the case that
SBSB is intended for, and it does better than SASB. In right hand plot, there are 5
relevant attributes; SBSB appears unable to take advantage of the extra information
present in the extra relevant attributes, but SASB successfully does.
Comparison of SASB and previous Winnow family algorithms is still in progress,
and we defer presenting details until a clearer picture has been obtained. SASB and
the Weighted Majority algorithm often perform similarly in simulations. Typically,
as one would expect, the Weighted Majority algorithm does somewhat better than
SASB when its parameters are chosen optimally for the particular learning task, and
worse for bad choices of parameters.
5 Conclusion
Our mistake bounds and simulations suggest that SASB may be a useful alternative
to the existing algorithms in the Winnow family. Based on the analysis style and the
bounds, SASB should perhaps itself be considered a Winnow family algorithm. Further experiments are in progress comparing SASB with Winnow family algorithms
run with a variety of parameter settings.
Perhaps of even greater interest is the potential application of our analytic techniques
to a variety of other apobayesian algorithms (though as we have observed earlier,
the techniques do not appear to apply to all such algorithms). We have already
obtained some preliminary results regarding an interpretation of the Perceptron
algorithm as an apobayesian algorithm. We are interested in looking for entirely
new algorithms that can be derived in this way and also in better understanding
the scope of applicability of our techniques. All of the analyses that we have looked
at depend on symmetry conditions relating the probabilities for the two classes. It
would be of interest to see what can be said when such symmetry conditions do not
hold. In simulation experiments [Lit95], a mistake-driven variant of the standard
Naive Bayes algorithm often does very well, despite the absence of such symmetry
in the prior that it is based on.
Our simulation experiments and also the analysis of the related algorithm Winnow
[Lit91] suggest that SASB can be expected to handle some instance-label pairs inside
of the separating gap or on the wrong side, especially if they are not too far on the
wrong side. In particular it appears to be able to handle data generated according
to the distributions on which SBSB is based, which do not in general yield perfectly
separable data.
It is of interest to compare the capabilities of the original Bayesian algorithm with
the derived apobayesian algorithm. When the data is stochastically generated in a
manner consistent with the assumptions behind the original algorithm, the original
Bayesian algorithm can be expected to do better (see, for example, Figure 1). On
the other hand, the apobayesian algorithm can handle data beyond the capabilities of the original Bayesian algorithm. For example, in the case we consider, the
apobayesian algorithm can take advantage of the presence of more than one relevant
attribute, even though the prior behind the original Bayesian algorithm assumes a
single relevant attribute. Furthermore, as for all of the Winnow family algorithms,
the mistake bound for the apobayesian algorithm does not depend on details of the
behavior of the irrelevant attributes (including redundant attributes).
Instead of using the apobayesian variant, one might try to construct a Bayesian
learning algorithm for a prior that reflects the actual dependencies among the attributes and the labels. However, it may not be clear what the appropriate prior is.
It may be particularly unclear how to model the behavior of the irrelevant attributes. Furthermore, such a Bayesian algorithm may end up being computationally
expensive. For example, attempting to keep track of correlations among all pairs
of attributes may lead to an algorithm that needs time and space quadratic in the
number of attributes. On the other hand, if we start with a Bayesian algorithm that
uses time and space linear in the number of attributes we can obtain an apobayesian
algorithm that still uses linear time and space but that can handle situations beyond
the capabilities of the original Bayesian algorithm.
Acknowledgments
This paper has benefited from discussions with Adam Grove.
References
[CHP96] Nicolo Cesa-Bianchi, David P. Helmbold, and Sandra Panizza. On Bayes
methods for on-line boolean prediction. In Proceedings of the Ninth Annual
Conference on Computational Learning Theory, pages 314-324, 1996.
[Fre96] Yoav Freund. Predicting a binary sequence almost as well as the optimal
biased coin. In Proceedings of the Ninth Annual Conference on Computational Learning Theory, pages 89-98, 1996.
[KW95] J. Kivinen and M. K. Warmuth. Additive versus exponentiated gradient
updates for linear prediction. In Proc. 27th ACM Symp. on Theory of
Computing, pages 209-218, 1995.
[Lit88] N. Littlestone. Learning quickly when irrelevant attributes abound: A new
linear-threshold algorithm. Machine Learning, 2:285-318, 1988.
[Lit89] N. Littlestone. Mistake Bounds and Logarithmic Linear-threshold Learning
Algorithms. PhD thesis, Tech. Rept. UCSC-CRL-89-11, Univ. of Calif.,
Santa Cruz, 1989.
[Lit91] N. Littlestone. Redundant noisy attributes, attribute errors, and linear-threshold learning using Winnow. In Proc. 4th Annu. Workshop on Comput. Learning Theory, pages 147-156. Morgan Kaufmann, San Mateo, CA,
1991.
[Lit95] N. Littlestone. Comparing several linear-threshold learning algorithms on
tasks involving superfluous attributes. In Proceedings of the XII International conference on Machine Learning, pages 353- 361, 1995.
[Lit96] N. Littlestone. Mistake-driven Bayes sports: Bounds for symmetric
apobayesian learning algorithms. Technical report, NEC Research Institute, Princeton, NJ, 1996.
[LW94] N. Littlestone and M. K. Warmuth. The weighted majority algorithm.
Information and Computation, 108:212-261, 1994.
[Vov90] Volodimir G. Vovk. Aggregating strategies. In Proceedings of the 1990
Workshop on Computational Learning Theory, pages 371-383, 1990.
214 | 1,195 | Hebb Learning of Features
based on their Information Content
Ferdinand Peper
Hideki Noda
Communications Research Laboratory
588-2, Iwaoka, Iwaoka-cho
Nishi-ku, Kobe 651-24
Japan
peper@crl.go.jp
Kyushu Institute of Technology
Dept. Electr., Electro., and Compo Eng.
1-1 Sensui-cho, Tobata-ku
Kita-Kyushu 804, Japan
noda@kawa.comp.kyutech.ac.jp
Abstract
This paper investigates the stationary points of a Hebb learning rule
with a sigmoid nonlinearity in it. We show mathematically that when
the input has a low information content, as measured by the input's
variance, this learning rule suppresses learning, that is, forces the weight
vector to converge to the zero vector. When the information content
exceeds a certain value, the rule will automatically begin to learn a
feature in the input. Our analysis suggests that under certain conditions
it is the first principal component that is learned. The weight vector
length remains bounded, provided the variance of the input is finite.
Simulations confirm the theoretical results derived.
1 Introduction
Hebb learning, one of the main mechanisms of synaptic strengthening, is induced
by cooccurrent activity of pre- and post-synaptic neurons. It is used in artificial
neural networks like perceptrons, associative memories, and unsupervised learning
neural networks. Unsupervised Hebb learning typically employs rules of the form:
$\mu\,\dot{w}(t) = x(t)\,y(t) - d\bigl(x(t), y(t), w(t)\bigr),$   (1)
where $w$ is the vector of a neuron's synaptic weights, $x$ is a stochastic input vector,
$y$ is the output expressed as a function of $x^{T}w$, and the vector function $d$ is a forgetting term forcing the weights to decay when there is little input. The integration
constant $\mu$ determines the learning speed and will be assumed 1 for convenience.
The dynamics of rule (1) determines which features are learned, and, with it, the
rule's stationary points and the boundedness of the weight vector. In some cases,
weight vectors grow to zero or grow unbounded. Either is biologically implausible.
Suppression and unbounded growth of weights is related to the characteristics of
the input x and to the choice for d. Understanding this relation is important to
enable a system, that employs Hebb learning , to learn the right features and avoid
implausible weight vectors.
Unbounded or zero length of weight vectors is avoided in [5] by keeping the total
synaptic strength $\sum_{i} w_{i}$ constant. Other studies, like [7], conserve the sum-squared
synaptic strength. Another way to keep the weight vector length bounded is to
limit the range of each of the individual weights [4]. The effect of these constraints
on the learning dynamics of a linear Hebb rule is studied in [6].
This paper constrains the weight vector length by a nonlinearity in a Hebb rule. It
uses a rule of the form (1) with $y = S(x^{T}w - h)$ and $d(x, y, w) = c\,w$, the function
$S$ being a smooth sigmoid, $h$ being a constant, and $c$ being a positive constant
(see [1] for a similar rule). We prove that the weight vector $w$ assumes a bounded
nonzero solution if the largest eigenvalue $\lambda_{1}$ of the input covariance matrix satisfies
$\lambda_{1} > c/S'(-h)$. Furthermore, if $\lambda_{1} \leq c/S'(-h)$ the weight vector converges to the
vector 0. Since $\lambda_{1}$ equals the variance of the input's first principal component, that
is, $\lambda_{1}$ is a measure for the amount of information in the input, learning is enabled
by a high information content and suppressed by a low information content.
The next section describes the Hebb neuron and its input in more detail. After
characterizing the stationary points of the Hebb learning rule in section 3, we analyze their stability in section 4. Simulations in section 5 confirm that convergence
towards a nonzero bounded solution occurs only when the information content of
the input is sufficiently high. We finish this paper with a discussion.
2 The Hebb Neuron and its Input
Assume that the $n$-dimensional input vectors $x$ presented to the neuron are generated by a stationary white stochastic process with mean 0. The process's covariance
matrix $\Sigma = E[xx^{T}]$ has eigenvalues $\lambda_{1}, \ldots, \lambda_{n}$ (in order of decreasing size) and corresponding eigenvectors $u_{1}, \ldots, u_{n}$. Furthermore, $E[\|x\|^{2}]$ is finite. This implies that
the eigenvalues are finite because $E[\|x\|^{2}] = E[\mathrm{tr}[xx^{T}]] = \mathrm{tr}[E[xx^{T}]] = \sum_{i=1}^{n}\lambda_{i}$. It
is assumed that the probability density function of $x$ is continuous. Given an input
$x$ and a synaptic weight vector $w$, the neuron produces an output $y = S(x^{T}w - h)$,
where $S : \mathbb{R} \to \mathbb{R}$ is a function that satisfies the conditions:
C1. $S$ is smooth, i.e., $S$ is continuous and differentiable and $S'$ is continuous.
C2. $S$ is sublinear, i.e., $\lim_{z\to\infty} S(z)/z = \lim_{z\to-\infty} S(z)/z = 0$.
C3. $S$ is monotonically nondecreasing.
C4. $S'$ has one maximum, which is at the point $z = -h$.
Typically, these conditions are satisfied by smooth sigmoidal functions. This includes sigmoids with infinite saturation values, like $S(z) = \mathrm{sign}(z)|z|^{1/2}$ (see [9]).
The point at which a sigmoid achieves maximal steepness (condition C4) is called its
base. Though the step function is discontinuous at its base, thus violating condition
C1, the results in this paper apply to the step function too, because it is the limit
of a sequence of continuous sigmoids, and the input density function is continuous
and thus Lebesgue-integrable. The learning rule of the neuron is given by
$\dot{w} = x\,y - c\,w,$   (2)
$c$ being a positive constant. Use of a linear $S(z) = az$ in this rule gives unstable
dynamics: if $a > c/\lambda_{1}$, then the length of the weight vector $w$ grows out of bound
though ultimately $w$ becomes collinear with $u_{1}$. It is proven in the next section
that a sublinear S prevents unbounded growth of w.
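A minimal numerical sketch of rule (2), assuming the sigmoid $S(z) = \tanh(az)$ that is also used in the simulations of Section 5; the Euler step size is an illustrative choice.

```python
import numpy as np

def hebb_step(w, x, a=0.3, h=0.0, c=1.0, dt=0.01):
    """One Euler step of w_dot = x*y - c*w with y = S(x^T w - h)."""
    y = np.tanh(a * (x @ w - h))
    return w + dt * (x * y - c * w)
```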
3 Stationary Points of the Learning Rule
To get insight into what stationary points the weight vector w ultimately converges
to, we average the stochastic equation (2) over the input patterns and obtain
$\langle\dot{w}\rangle = E\left[x\,S\!\left(x^{T}\langle w\rangle - h\right)\right] - c\,\langle w\rangle,$   (3)
where $\langle w\rangle$ is the averaged weight vector and the expectation is taken over $x$, as
with all expectations in this paper. Since the solutions of (2) correspond with the
solutions of (3) under conditions described in [2], the averaged $\langle w\rangle$ will be referred
to as $w$. Learning in accordance to (2) can then be interpreted [1] as a gradient
descent process on an averaged energy function $J$ associated with (3):
$J(w) = \frac{c}{2}\,\|w\|^{2} - E\left[T\!\left(x^{T}w - h\right)\right], \qquad \text{with} \quad T(z) = \int_{0}^{z} S(v)\,dv.$
To characterize the solutions of (3) we use the following lemma.
Lemma 1. Given a unit-length vector $u$, the function $f_{u} : \mathbb{R} \to \mathbb{R}$ is defined by
$f_{u}(z) = \frac{1}{c}\,E\left[u^{T}x\,S\!\left(z\,x^{T}u - h\right)\right]$
and the constant $\lambda_{u}$ by $\lambda_{u} = E[u^{T}xx^{T}u]$. The fixed points of $f_{u}$ are as follows.
1. If $\lambda_{u}S'(-h) \leq c$ then $f_{u}$ has one fixed point, i.e., $z = 0$.
2. If $\lambda_{u}S'(-h) > c$ then $f_{u}$ has three fixed points, i.e., $z = 0$, $z = \alpha_{u}^{+}$, and
$z = \alpha_{u}^{-}$, where $\alpha_{u}^{+}$ ($\alpha_{u}^{-}$) is a positive (negative) value depending on $u$.
Proof: (Sketch; for a detailed proof see [11].) Function $f_{u}$ is a smooth sigmoid, since
conditions C1 to C4 carry over from $S$ to $f_{u}$. The steepness of $f_{u}$ in its base at $z = 0$
depends on vector $u$. If $\lambda_{u}S'(-h) \leq c$, function $f_{u}$ intersects the line $h(z) = z$ only
at the origin, giving $z = 0$ as the only fixed point. If $\lambda_{u}S'(-h) > c$, the steepness
of $f_{u}$ is so large as to yield two more intersections: $z = \alpha_{u}^{+}$ and $z = \alpha_{u}^{-}$. □
Thus characterizing the fixed points of $f_{u}$, the lemma allows us to find the fixed
points of a vector function $g : \mathbb{R}^{n} \to \mathbb{R}^{n}$ that is closely related to (3). Defining
$g(w) = \frac{1}{c}\,E\left[x\,S\!\left(x^{T}w - h\right)\right],$
we find that a fixed point $z = \alpha_{u}$ of $f_{u}$ corresponds to the fixed point $w = \alpha_{u}u$ of
$g$. Then, since (3) can be written as $\dot{w} = c\,g(w) - c\,w$, its stationary points are
the fixed points of $g$, that is, $w = 0$ is a stationary point and for each $u$ for which
$\lambda_{u}S'(-h) > c$ there exists one bounded stationary point associated with $\alpha_{u}^{+}$ and
one associated with $\alpha_{u}^{-}$. Consequently, if $\lambda_{1} \leq c/S'(-h)$ then the only fixed point
of $g$ is $w = 0$, because $\lambda_{1} \geq \lambda_{u}$ for all $u$.
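The fixed points of $f_{u}$ can be located numerically. The sketch below assumes $u^{T}x$ is Gaussian with variance $\lambda_{u}$ and uses $S(z) = \tanh(az)$ with $h = 0$ and $c = 1$; with $\lambda_{u}S'(0) = \lambda_{u}a > c$ the iteration settles on the positive fixed point $\alpha_{u}^{+}$, as Lemma 1 predicts.

```python
import numpy as np

def f_u(z, lam, a=0.3, h=0.0, c=1.0, samples=200_000, seed=0):
    """Monte Carlo estimate of f_u(z) = (1/c) E[xi * S(z*xi - h)]
    for xi = u^T x ~ N(0, lam); Gaussianity is an assumption here."""
    xi = np.random.default_rng(seed).normal(0.0, np.sqrt(lam), samples)
    return float(np.mean(xi * np.tanh(a * (z * xi - h)))) / c

lam, a = 4.0, 0.3          # lam * S'(0) = lam * a = 1.2 > c = 1
z = 1.0
for _ in range(50):        # fixed-point iteration z <- f_u(z)
    z = f_u(z, lam, a=a)
print(z)                   # approximately alpha_u^+
```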
What is the implication of this result? The relation $\lambda_{1} \leq c/S'(-h)$ indicates a low
information content of the input, because $\lambda_{1}$, equaling the variance of the input's
first principal component, is a measure for the input's information content. A low
information content thus results in a zero $w$, suppressing learning. Section 4 shows
that a high information content results in a nonzero w. The turnover point of what
is considered high/low information is adjusted by changing the steepness of the
sigmoid in its base or changing constant c in the forgetting term.
To show the boundedness of $w$, we consider an arbitrary point $P$: $w = \beta u$ sufficiently far away from the origin $O$ (but at finite distance) and calculate the component of $\dot{w}$ along the line $OP$ as well as the components orthogonal to $OP$. Vector
$u$ has unit length, and $\beta$ may be assumed positive since its sign can be absorbed
by $u$. Then, the component along $OP$ is given by the projection of $\dot{w}$ on $u$:
$u^{T}\dot{w} = -c\,[\beta - f_{u}(\beta)].$
This is negative for all $\beta$ exceeding the fixed points of $f_{u}$ because of the sigmoidal
shape of $f_{u}$. So, for any point $P$ in $\mathbb{R}^{n}$ lying far enough from $O$ the vector component of $\dot{w}$ in $P$ along the line $OP$ is directed towards $O$ and not away from it. This
component decreases as we move away from $O$, because the value of $[\beta - f_{u}(\beta)]$
increases as $\beta$ increases ($f_{u}$ is sublinear). Orthogonal to this is a component given
by the projection of $\dot{w}$ on a unit-length vector $v$ that is orthogonal to $u$:
$v^{T}\dot{w} = c\,v^{T}g(\beta u).$
This component increases as we move away from $O$; however, it changes at a slower
pace than the component along $OP$, witness the quotient of both components:
$\lim_{\beta\to\infty}\left.\frac{v^{T}\dot{w}}{u^{T}\dot{w}}\right|_{w=\beta u} = \lim_{\beta\to\infty}\frac{c\,v^{T}g(\beta u)}{-c\,[\beta - f_{u}(\beta)]} = \lim_{\beta\to\infty}\frac{v^{T}g(\beta u)/\beta}{f_{u}(\beta)/\beta - 1} = 0.$
Vector $\dot{w}$ thus becomes increasingly dominated by the component along $OP$ as $\beta$
increases. So, the origin acts as an attractor if we are sufficiently far away from it,
implying that $w$ remains bounded during learning.
4 Stability of the Stationary Points
To investigate the stability of the stationary points, we use the Hessian of the
averaged energy function $J$ described in the last section. The Hessian at point $w$
equals $H(w) = cI - E\left[xx^{T}S'(x^{T}w - h)\right]$. A stationary point $w = \bar{w}$ is stable
iff $H(\bar{w})$ is a positive definite matrix. The latter is satisfied if for every unit-length
vector $v$,
$v^{T}E\left[xx^{T}S'(x^{T}\bar{w} - h)\right]v < c,$   (4)
that is, if all eigenvalues of the matrix $E[xx^{T}S'(x^{T}\bar{w} - h)]$ are less than $c$. First
consider the stationary point $\bar{w} = 0$. The eigenvalues of $E[xx^{T}S'(-h)]$ in decreasing
order are $\lambda_{1}S'(-h), \ldots, \lambda_{n}S'(-h)$. The Hessian $H(0)$ is thus positive definite iff
$\lambda_{1}S'(-h) < c$. In this case $w = 0$ is stable. It is also stable in the case $\lambda_{1} =
c/S'(-h)$, because then (4) holds for all $v \neq u_{1}$, preventing growth of $w$ in directions
other than $u_{1}$. Moreover, $w$ will not grow in the direction of $u_{1}$, because $|f_{u_{1}}(\beta)| <
|\beta|$ for all $\beta \neq 0$. Combined with the results of the last section this implies:
Corollary 1. If $\lambda_{1} \leq c/S'(-h)$ then the averaged learning equation (3) will have
as its only stationary point $w = 0$, and this point is stable. If $\lambda_{1} > c/S'(-h)$ the
stationary point $w = 0$ is not stable, and there will be other stationary points.
We now investigate the other stationary points. Let $\bar{w} = \alpha_{u}u$ be such a point, $u$
being a unit-length vector and $\alpha_{u}$ a nonzero constant. To check whether the Hessian
$H(\alpha_{u}u)$ is positive definite, we apply the relation $E[XY] = E[X]E[Y] + \mathrm{Cov}[X, Y]$
to the expression $E\left[u^{T}xx^{T}u\,S'(\alpha_{u}x^{T}u - h)\right]$ and obtain after rewriting:
$E\left[u^{T}xx^{T}u\,S'(\alpha_{u}x^{T}u - h)\right] = \lambda_{u}E\left[S'(\alpha_{u}x^{T}u - h)\right] + \mathrm{Cov}\left[u^{T}xx^{T}u,\, S'(\alpha_{u}x^{T}u - h)\right].$
The sigmoidal shape of the function $f_{u}$ implies that $f_{u}$ is less steep than the line
$h(z) = z$ at the intersection at $z = \alpha_{u}$, that is, $f_{u}'(\alpha_{u}) < 1$. It then follows that
$E\left[u^{T}xx^{T}u\,S'(\alpha_{u}x^{T}u - h)\right] = c\,f_{u}'(\alpha_{u}) < c$, giving:
$E\left[S'(\alpha_{u}x^{T}u - h)\right] < \frac{1}{\lambda_{u}}\left\{c - \mathrm{Cov}\left[u^{T}xx^{T}u,\, S'(\alpha_{u}x^{T}u - h)\right]\right\}.$
Then, $v^{T}E\left[xx^{T}S'(\alpha_{u}x^{T}u - h)\right]v =$
$\lambda_{v}E\left[S'(\alpha_{u}x^{T}u - h)\right] + \mathrm{Cov}\left[v^{T}xx^{T}v,\, S'(\alpha_{u}x^{T}u - h)\right] <$
$\frac{\lambda_{v}}{\lambda_{u}}\,c - \frac{\lambda_{v}}{\lambda_{u}}\,\mathrm{Cov}\left[u^{T}xx^{T}u,\, S'(\alpha_{u}x^{T}u - h)\right] + \mathrm{Cov}\left[v^{T}xx^{T}v,\, S'(\alpha_{u}x^{T}u - h)\right].$
The probability distribution of $x$ being unspecified, it is hard to evaluate this upper bound.
For certain distributions the upper bound is minimized when $\lambda_{u}$ is maximized,
that is, when $u = u_{1}$ and $\lambda_{u} = \lambda_{1}$, implying the Hebb neuron to be a nonlinear
principal component analyzer. Distributions that are symmetric with respect to
the eigenvectors of $\Sigma$ are probably examples of such distributions, as suggested
by [11, 12]. For other distributions vector $w$ may assume a solution not collinear
with $u_{1}$ or may periodically traverse (part of) the nonzero fixed-point set of $g$.
5 Simulations
We carry out simulations to test whether learning behaves in accordance with Corollary 1. The following difference equation is used as the learning rule:
$\Delta w(t) = \gamma(t)\left[x(t)\,S\!\left(x(t)^{T}w(t) - h\right) - c\,w(t)\right],$   (5)
where $\gamma$ is the learning rate and $a$ a constant. The use of a difference $\Delta$ in (5) rather
than the differential in (2) is computationally easier, and gives identical results if
$\gamma$ decreases over training time in accordance with conditions described in [3]. We
use $\gamma(t) = 1/(0.01t + 20)$. It satisfies these conditions and gives fast convergence
without disrupting stability [10]. Its precise choice is not very critical here, though.
The neuron is trained on multivariate normally distributed random input samples of
dimension 6 with mean 0 and a covariance matrix $\Sigma$ that has the eigenvalues 4.00,
2.25, 1.00, 0.09, 0.04, and 0.01. The degree to which the weight vector and $\Sigma$'s first
eigenvector $u_{1}$ are collinear is measured by the match coefficient [10], defined by:
$m = \cos^{2}\angle(u_{1}, w)$. In every experiment the neuron is trained for 10000 iterations
by (5) with the value of parameter $a$ set to 0.20, 0.25, and 0.30, respectively. This
corresponds to the situations in which $\lambda_{1} < c/S'(-h)$, $\lambda_{1} = c/S'(-h)$, and $\lambda_{1} >
c/S'(-h)$, respectively, since $c = 1$ and the steepness of the sigmoid $S(z) = \tanh(az)$
in its base $z = -h = 0$ is $S'(0) = a$. We perform each experiment 2000 times, which
allows us to obtain the match coefficients beyond iteration 100 within $\pm 0.02$ with
a confidence coefficient of 95% (and a smaller confidence coefficient on the first 100
iterations). The random initialization of the weight vector (its initial elements are
uniformly distributed in the interval $(-1, 1)$) is different in each experiment.
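A sketch of one such run is given below. It takes $\Sigma$ to be diagonal (so the eigenvectors are the standard basis, an assumption made here only for convenience) and uses rule (5) with $S(z) = \tanh(az)$, $h = 0$, $c = 1$.

```python
import numpy as np

def run(a, iterations=10_000, seed=0):
    rng = np.random.default_rng(seed)
    eig = np.array([4.00, 2.25, 1.00, 0.09, 0.04, 0.01])
    w = rng.uniform(-1.0, 1.0, 6)          # initial weights in (-1, 1)
    for t in range(iterations):
        x = rng.normal(0.0, np.sqrt(eig))  # N(0, Sigma) with diagonal Sigma
        gamma = 1.0 / (0.01 * t + 20.0)
        w += gamma * (x * np.tanh(a * (x @ w)) - w)
    match = w[0] ** 2 / (w @ w)            # m = cos^2 angle(u1, w)
    return match, np.linalg.norm(w)

for a in (0.20, 0.25, 0.30):
    print(a, run(a))
```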
[Figure: two plots against iterations (1 to 10^4, log scale): the match coefficient m (left) and the weight vector length ||w|| (right), each for a = 0.20, 0.25, and 0.30.]
Figure 1: Match coefficients averaged over 2000 experiments for parameter values a = 0.20, 0.25, and 0.30.
Figure 2: Lengths of the weight vector averaged over 2000 experiments. The curve types are similar to those in Fig. 1.
Fig. 1 shows that for all tested values of parameter $a$ the weight vector gradually
becomes collinear with $u_{1}$ over 10000 iterations. The length of the weight vector
converges to 0 when $a = 0.20$ or $a = 0.25$ (see Fig. 2). In the case $a = 0.30$,
corresponding to $\lambda_{1} > c/S'(-h)$, the length converges to a nonzero bounded value.
In conclusion, convergence is as predicted by corollary 1: the weight vector converges
to 0 if the information content in the input is too low for climbing the slope of the
sigmoid in its base, and otherwise the weight vector becomes nonzero.
6 Discussion
Learning by the Hebb rule discussed in this paper is enabled if the input's information content as measured by the variance is sufficiently high, and only then. The
results, though valid for a single neuron, have implications for systems consisting of
multiple neurons connected by inhibitory connections. A neuron in such a system
would have as output $y = S(x^{T}w - h - \mathbf{y}^{T}y')$, where the inhibitory signal $\mathbf{y}^{T}y'$
would consist of the vector of output signals $y'$ of the other neurons, weighted by the
vector $\mathbf{y}$ (see also [1]). Function $f_{u}$ in Lemma 1 would, when extended to contain the
signal $\mathbf{y}^{T}y'$, still pass through the origin because of the zero-meanness of the input,
but would have a reduced steepness at the origin caused by the shift in S's argument
away from the base. The reduced steepness would make an intersection of f u with
the line h(z) = z in a point other than the origin less likely. Consequently, an inhibitory signal would bias the neuron towards suppressing its weights. In a system
of neurons this would reduce the emergence of neurons with correlated outputs, because of the mutual presence of their outputs in each other's inhibitory signals. The
neurons, then, would extract different features, while suppressing information-poor
features.
In conclusion, the Hebb learning rule in this paper combines well with inhibitory
connections, and can potentially be used to build a system of nonredundant feature
extractors, each of which is optimized to extract only information-rich features.
252
F. Peper and H. Noda
Moreover, the suppression of weights with a low information content suggests a
straightforward way [8] to adaptively control the number of neurons, thus minimizing the necessary neural resources.
Acknowledgments
We thank Dr. Mahdad N. Shirazi at Communications Research Laboratory (CRL)
for the helpful discussions, Prof. Dr. S.-I. Amari for his encouragement, and Dr.
Hidefumi Sawai at CRL for providing financial support to present this paper at
NIPS'96 from the Council for the Promotion of Advanced Information and Communications Technology. This work was financed by the Japan Ministry of Posts
and Telecommunications as part of their Frontier Research Project in Telecommunications.
References
[1] S. Amari, "Mathematical Foundations of Neurocomputing," Proceedings of the
IEEE, vol. 78, no. 9, pp. 1443-1463, 1990.
[2] S. Geman, "Some Averaging and Stability Results for Random Differential Equations," SIAM J. Appl. Math., vol. 36, no. 1, pp. 86-105, 1979.
[3] H.J. Kushner and D.S. Clark, "Stochastic Approximation Methods for Constrained
and Unconstrained Systems," Applied Mathematical Sciences, vol. 26, New York:
Springer-Verlag, 1978.
[4] R. Linsker, "Self-Organization in a Perceptual Network," Computer, vol. 21, pp. 105117, 1988.
[5] C. von der Malsburg, "Self-Organization of Orientation Sensitive Cells in the Striate
Cortex," Kybernetik, vol. 14, pp. 85-100, 1973.
[6] K.D. Miller and D.J.C. MacKay, "The Role of Constraints in Hebbian Learning,"
Neural Computation, vol. 6, pp. 100-126, 1994.
[7] E. Oja, "A simplified neuron model as a principal component analyzer," Journal of
Mathematical Biology, vol. 15, pp. 267-273, 1982.
[8] F. Peper and H. Noda, "A Mechanism for the Development of Feature Detecting
Neurons," Proc. Second New-Zealand Int. Two-Stream Conf. on Artificial Neural
Networks and Expert Systems, ANNES'95, Dunedin, New-Zealand, pp. 59-62, 20-23
Nov. 1995.
[9] F. Peper and H. Noda, "A Class of Simple Nonlinear 1-unit PCA Neural Networks,"
1995 IEEE Int. Conf. on Neural Networks, ICNN'95, Perth, Australia, pp. 285-289,
27 Nov.-l Dec. 1995.
[10] F. Peper and H. Noda, "A Symmetric Linear Neural Network that Learns Principal
Components and their Variances," IEEE Trans. on Neural Networks, vol. 7, pp. 1042-1047, 1996.
[11] F. Peper and H. Noda, "Stationary Points of a Hebb Learning Rule for a Nonlinear
Neural Network," Proc. 1996 Int. Symp. Nonlinear Theory and Appl. (NOLTA '96),
Kochi, Japan, pp. 241-244, 7-9 Oct 1996.
[12] F. Peper and M.N. Shirazi, "On the Eigenstructure of Nonlinearized Covariance
Matrices," Proc. 1996 Int. Symp. Nonlinear Theory and Appl. (NOLTA '96), Kochi,
Japan, pp. 491-493, 7-9 Oct 1996.
215 | 1,196 | Competition Among Networks
Improves Committee Performance
Paul W. Munro
Department of Information Science
and Telecommunications
University of Pittsburgh
Pittsburgh PA 15260
Bambang Parmanto
Department of Health Information
Management
University of Pittsburgh
Pittsburgh PA 15260
munro@sis.pitt.edu
parmanto+@pitt.edu
ABSTRACT
The separation of generalization error into two types, bias and variance
(Geman, Bienenstock, Doursat, 1992), leads to the notion of error
reduction by averaging over a "committee" of classifiers (Perrone,
1993). Committee perfonnance decreases with both the average error of
the constituent classifiers and increases with the degree to which the
misclassifications are correlated across the committee. Here, a method
for reducing correlations is introduced, that uses a winner-take-all
procedure similar to competitive learning to drive the individual
networks to different minima in weight space with respect to the
training set, such that correlations in generalization perfonnance will be
reduced, thereby reducing committee error.
1 INTRODUCTION
The problem of constructing a predictor can generally be viewed as finding the right
combination of bias and variance (Geman, Bienenstock, Doursat, 1992) to reduce the
expected error. Since a neural network predictor inherently has an excessive number of
parameters, reducing the prediction error is usually done by reducing variance. Methods
for reducing neural network complexity can be viewed as a regularization technique to
reduce this variance. Examples of such methods are Optimal Brain Damage (Le Cun et
al., 1991), weight decay (Chauvin, 1989), and early stopping (Morgan & Bourlard, 1990).
The idea of combining several predictors to form a single, better predictor (Bates &
Granger, 1969) has been applied using neural networks in recent years (Wolpert, 1992;
Perrone, 1993; Hashem, 1994).
2 REDUCING MISCLASSIFICATION CORRELATION
Since committee errors occur when too many individual predictors are in error, committee
performance improves as the correlation of network misclassifications decreases. Error
correlations can be handled by using a weighted sum to generate a committee prediction;
the weights can be estimated by using ordinary least squares (OLS) estimators (Hashem,
1994) or by using Lagrange multipliers (Perrone, 1993).
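For instance, OLS combination weights can be obtained with a least-squares solve; this sketch ignores details such as an intercept term or constraints on the weights, which vary between the cited methods.

```python
import numpy as np

def ols_committee_weights(P, y):
    """P is (samples, K), one column of validation outputs per predictor;
    y holds the targets. Returns w minimizing ||P @ w - y||^2."""
    w, *_ = np.linalg.lstsq(P, y, rcond=None)
    return w
```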
Another approach (Parmanto et al., 1994) is to reduce error correlation directly by
attempting to drive the networks to different minima in weight space, that will
presumably have different generalization syndromes, or patterns of error with respect to a
test set (or better yet, the entire stimulus space).
2.1 Data Manipulations
Training the networks using nonidentical data has been shown to improve committee
performance, both when the data sets are from mutually exclusive continuous regions (e.g.,
Jacobs et al., 1991), or when the training subsets are arbitrarily chosen (Breiman, 1992;
Parmanto, Munro, and Doyle, 1995). Networks tend to converge to different weight
states, because the error surface itself depends on the training set; hence changing the data
changes the error surface.
2.2 Auxiliary tasks
Another way to influence the networks to disagree is to introduce a second output unit
with a different task to each network in the committee. Thus, each network has two
outputs, a primary unit which is trained to predict the class of the input, and a secondary
unit, with some other task that is different than the tasks assigned to the secondary units
of the other committee members. The success of this approach rests on the assumption
that the decorrelation of the network errors will more than compensate for any degradation
of performance induced on the primary task by the auxiliary task. The presence of a
hidden layer in each network guarantees that the two output response functions share
some weight parameters (i.e., the input-hidden weights), and so the learning of the
secondary task influences the function learned by the primary output unit.
Parmanto et al. (1994) achieved significant decorrelation and improved performance on a
variety of tasks using one of the input variables as the training signal for the secondary
unit. Interestingly, the secondary task does not necessarily degrade performance on the
primary task. Our studies, as well as those of Caruana (1995), show that extra tasks can
facilitate learning time and generalization performance on an individual network. On the
other hand, certain auxiliary tasks interfere with the primary task. We have found
however, that even when the individual performance is degraded, committee performance
is nevertheless enhanced (relative to a committee of single output networks) due to the
magnitude of error decorrelation.
3 THE COMPETITIVE COMMITTEE
An alternative to using a stationary task per se, such as replicating an input variable or
projecting onto principal components (as was done in Parmanto et ai, 1994), is to use a
signal that depends on the other networks, in such a manner that the functions computed
by the secondary units are negatively correlated after training. This notion is reminiscent
of competitive learning (Rumelhart and Zipser, 1986); that is, the functions computed by
the secondary units will partition the stimulus space.
Thus, a Competitive Committee Machine (CCM) is defined as a committee of neural
network classifiers, each with two output units: a primary unit trained according to the
classification task, and a secondary unit participating in a competitive process with
secondary units of the other networks in the committee; let the outputs of network i be
denoted Pi and Si, respectively (see Figure 1). The network weights are modified
according to the following variant of the back propagation procedure.
When data item a from the training set is presented to the committee during training,
with input vector XCl and known output classification value yCl (binary), the networks
each process XCl simultaneously, and the P and S output units of each network respond.
Each P-unit receives the identical training signal, yCl, that corresponds to the input item;
the training signal to the S-units is zero for all networks except the network with the
greatest S-unit response among the committee; the maximum Si among the networks in
the committee receives a training signal of 1, and the others receive a training signal of O.
The weight updates use

δ_i^P = y^α − P_i   and   δ_i^S = T_i − S_i,

where T_i = 1 if S_i = max_j S_j and T_i = 0 otherwise; these
are the errors attributed to the primary and secondary units respectively, used
to adjust network weights with back propagation¹. During the course of training, the S-unit's response is explicitly trained to become sensitive to a unique region (relative to the
other networks' S-units) of the stimulus space. This training signal is different from
typical "tasks" that are used to train neural networks in that it is not a static function of
the input; instead, since it depends on the other networks in the committee, it has a
dynamic quality.
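To make the competitive update concrete, the following is a minimal sketch of how the per-pattern error signals of a CCM could be computed. The function name, the NumPy interface, and the 0.1 weighting of the secondary error (the value used in the simulations reported below) are illustrative assumptions, not code from the paper.

```python
import numpy as np

def ccm_errors(y, P, S, scale=0.1):
    """Per-pattern error signals for a Competitive Committee Machine.

    y     : scalar class label (0 or 1) for the current pattern
    P, S  : float arrays of primary/secondary outputs, one entry per network
    scale : assumed weighting of the secondary error (0.1 in the simulations)

    Returns (delta_P, delta_S), the errors fed back through each network
    by backpropagation (derivative factors omitted, as in the text).
    """
    # Every primary unit receives the same training signal y.
    delta_P = y - P
    # Only the network with the largest S response gets target 1;
    # all other S-units get target 0 (winner-take-all competition).
    targets_S = np.zeros_like(S)
    targets_S[np.argmax(S)] = 1.0
    delta_S = scale * (targets_S - S)
    return delta_P, delta_S
```

The winner-take-all target assignment is what makes the signal dynamic: it is recomputed from the committee's own S responses on every presentation.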
4 RESULTS
Some experiments have been run using the sine wave classification task (Figure 2) of
Geman and Bienenstock (1992).
Comparisons of CCM performance versus the baseline performance of a committee with
a simple average over a range of architectures (as indicated by the number of hidden units)
are favorable (Figure 3). Also, note that the improvement is primarily attributable to
decreased correlation, since the average individual performance is not significantly
affected.
Visualization of the response of the individual networks to the entire stimulus space
gives a complete picture of how the networks generalize and shows the effect of the
competition (Figure 4). For this particular data set, the classes are easily separated in the
central region (note that all the networks do well here). But at the edges, there is much
more variance in the networks trained with competitive secondary units (Figure 5).
5 DISCUSSION
Caruana (1995) has demonstrated significant improvement on "target" classification tasks
in individual networks by adding one or more supplementary output units trained to
compute tasks related to the target task. The additional output unit added to each network
¹For notational convenience, the derivative factor sometimes included in the definition of δ is not included in this description of δ^P and δ^S.
[Figure 1 schematic: the committee output (average or vote) is formed from the P-unit responses; the training signal y^α is delivered to every P-unit; the S-units compete with one another; the input variables are presented simultaneously to all networks.]

Figure 1: A Competitive Committee Machine. Each of the K networks receives the
same input and produces two outputs, P and S. The P responses of all the networks are
compared to a common training signal to compute an error value for backpropagation
(dark dashed arrows); the P responses are combined (by vote or by sum) to determine the
committee response. The S-unit responses are compared with each other, with the
"winner" (highest response) receiving a training signal of 1, and the others receiving a
training signal of 0. Thus the training signal for network i is computed by comparing all
S-unit responses, and then fed back to the S-units, hence the two-way arrows (gray).
in the CCM merges a variant of Rumelhart and Zipser's (1986) competitive learning
procedure with backpropagation, to form a novel hybrid of a supervised training technique
with an unsupervised method. The training signal delivered to the secondary unit under
CCM is more direct than an arbitrary task, in that it is defined explicitly in terms of
dissociating response properties.
Note that the training signals for the S-units differ from the P-unit training signals in
two important respects:
1. Not static: The signal depends on the S-unit responses from the other networks and
hence changes during the course of training.
2. Not uniform: It is not constant across the committee (whereas the P-unit training
signal is).
Figure 2. A classification task. Training data (bottom) is sampled from a classification
task defined by a sinusoid (top) corrupted by noise (middle).
[Figure 3 panels: individual performance, committee performance, percent improvement, and correlation, each plotted against the number of hidden units for CCM versus baseline.]

Figure 3. Performance of CCM. Committees of 5 networks were trained with competitive learning (CCM) and without (baseline). Each data point is an average over 5
simulations with different initial weights.
[Figure 4 panels: Network #1 (Error: 10.20%), Network #2 (Error: 15.59%), Network #3 (Error: 15.25%), Network #4 (Error: 15.54%), Network #5 (Error: 12.65%), Committee Output, thresholded (Error: 11.64%).]
Figure 4. Generalization plots for a committee. The level of gray indicates the response
for each network of a committee trained without competition. The panel on the lower
right shows the (thresholded) committee output. The average pairwise correlation of the
committee is 0.91.
[Figure 5 panels: Network #1 (Error: 10.21%), Network #2 (Error: 9.83%), Network #3 (Error: 16.83%), Network #4 (Error: 14.88%).]
Figure 5. Generalization plots for a CCM committee. Comparison with Figure 4 shows
much more variance among the committee at the edges. Note that the committee
performs much better near the right and left ends of the stimulus space than does any
individual network. This committee had an error rate of 8.11 % (cf 11.64% in the
baseline case).
The weighting of δ^S relative to δ^P is an important consideration; in the simulations
above, the signal from the secondary unit was arbitrarily multiplied by a factor of 0.1.
While we have not yet examined this systematically, it is assumed that this factor will
modulate the tradeoff between degradation of the primary task and reduction of error
correlation.
References
Bates, J.M., and Granger, C.W. (1969) "The combination of forecasts," Operational
Research Quarterly, 20(4), 451-468.
Breiman, L. (1992) "Stacked Regressions", TR 367, Dept. of Statistics, Univ. of Cal.
Berkeley.
Caruana, R (1995) "Learning many related tasks at the same time with backpropagation,"
In: Advances in Neural Information Processing Systems 7. D. S. Touretzky, ed. Morgan
Kaufmann.
Chauvin, Y. (1989) "A backpropagation algorithm with optimal use of hidden units." In
Touretzky D., (ed.), Advances in Neural Information Processing 1, Denver, 1988,
Morgan Kaufmann.
Geman, S ., Bienenstock, E., and Doursat, R. (1992) "Neural networks and the
bias/variance dilemma," Neural Computation 4, 1-58.
Hashem, S. (1994). Optimal Linear Combinations of Neural Networks., PhD Thesis,
Purdue University.
Jacobs, R.A., Jordan, M.I., Nowlan, SJ., and Hinton, G.E. (1991) "Adaptive mixtures
of local experts," Neural Computation, 3, 79-87
Le Cun, Y., Denker J. and Solla, S. (1990). Optimal Brain Damage. In D. Touretzky
(Ed.) Advances in Neural Information Processing Systems 2, San Mateo: Morgan
Kaufmann. 598-605.
Morgan, N. & Bourlard, H. (1990) Generalization and parameter estimation in feedforward
nets: some experiments. In D. Touretzky (Ed.) Advances in Neural Information
Processing Systems 2 San Mateo: Morgan Kaufmann.
Parmanto, B., Munro, P.W., Doyle, H.R., Doria, C., Aldrighetti, L., Marino, I.R. ,
Mitchel, S., and Fung, JJ. (1994) "Neural network classifier for hepatoma detection,"
Proceedings of the World Congress of Neural Networks
Parmanto, B., Munro, P.W., Doyle, H.R. (1996) "Improving committee diagnosis with
resampling techniques," In: D. S. Touretzky, M. C. Mozer, M. E. Hasselmo, eds.
Advances in Neural Information Processing Systems 8. MIT Press: Cambridge, MA.
Perrone, M.P. (1993) "Improving Regression Estimation: Averaging Methods for
Variance Reduction with Extension to General Convex Measure Optimization," PhD
Thesis, Department of Physics, Brown University.
Rumelhart, D.E. and Zipser, D. (1986) "Feature discovery by competitive learning," In:
Rumelhart, D.E. and McClelland, J.L. (Eds.), Parallel Distributed Processing:
Explorations in the Microstructure of Cognition. MIT Press, Cambridge, MA.
Wolpert, D. (1992). Stacked generalization, Neural Networks, 5, 241-259.
216 | 1,197 | Computing with infinite networks
Christopher K. I. Williams
Neural Computing Research Group
Department of Computer Science and Applied Mathematics
Aston University, Birmingham B4 7ET, UK
c.k.i.williams@aston.ac.uk
Abstract
For neural networks with a wide class of weight-priors, it can be
shown that in the limit of an infinite number of hidden units the
prior over functions tends to a Gaussian process. In this paper analytic forms are derived for the covariance function of the Gaussian
processes corresponding to networks with sigmoidal and Gaussian
hidden units. This allows predictions to be made efficiently using
networks with an infinite number of hidden units, and shows that,
somewhat paradoxically, it may be easier to compute with infinite
networks than finite ones.
1 Introduction
To someone training a neural network by maximizing the likelihood of a finite
amount of data it makes no sense to use a network with an infinite number of hidden
units; the network will "overfit" the data and so will be expected to generalize
poorly. However, the idea of selecting the network size depending on the amount
of training data makes little sense to a Bayesian; a model should be chosen that
reflects the understanding of the problem, and then application of Bayes' theorem
allows inference to be carried out (at least in theory) after the data is observed.
In the Bayesian treatment of neural networks, a question immediately arises as to
how many hidden units are believed to be appropriate for a task. Neal (1996) has
argued compellingly that for real-world problems, there is no reason to believe that
neural network models should be limited to nets containing only a "small" number
of hidden units. He has shown that it is sensible to consider a limit where the
number of hidden units in a net tends to infinity, and that good predictions can be
obtained from such models using the Bayesian machinery. He has also shown that
for fixed hyperparameters, a large class of neural network models will converge to
a Gaussian process prior over functions in the limit of an infinite number of hidden
units.
C. K. I. Williams
296
Neal's argument is an existence proof-it states that an infinite neural net will
converge to a Gaussian process, but does not give the covariance function needed
to actually specify the particular Gaussian process. In this paper I show that
for certain weight priors and transfer functions in the neural network model, the
covariance function which describes the behaviour of the corresponding Gaussian
process can be calculated analytically. This allows predictions to be made using
neural networks with an infinite number of hidden units in time O(n³), where n
is the number of training examples¹. The only alternative currently available is to
use Markov Chain Monte Carlo (MCMC) methods (e.g. Neal, 1996) for networks
with a large (but finite) number of hidden units. However, this is likely to be
computationally expensive, and we note possible concerns over the time needed for
the Markov chain to reach equilibrium. The availability of an analytic form for
the covariance function also facilitates the comparison of the properties of neural
networks with an infinite number of hidden units as compared to other Gaussian
process priors that may be considered.
The Gaussian process analysis applies for fixed hyperparameters θ. If it were desired to make predictions based on a hyperprior P(θ) then the necessary θ-space
integration could be achieved by MCMC methods. The great advantage of integrating out the weights analytically is that it dramatically reduces the dimensionality
of the MCMC integrals, and thus improves their speed of convergence.
1.1 From priors on weights to priors on functions
Bayesian neural networks are usually specified in a hierarchical manner, so that the
weights w are regarded as being drawn from a distribution P(w|θ). For example,
the weights might be drawn from a zero-mean Gaussian distribution, where θ specifies the variance of groups of weights. A full description of the prior is given by
specifying P(θ) as well as P(w|θ). The hyperprior can be integrated out to give
P(w) = ∫ P(w|θ) P(θ) dθ, but in our case it will be advantageous not to do this as
it introduces weight correlations which prevent convergence to a Gaussian process.
In the Bayesian view of neural networks, predictions for the output value y* corresponding to a new input value x* are made by integrating over the posterior in
weight space. Let D = ((x₁, t₁), (x₂, t₂), ..., (x_n, t_n)) denote the n training data
pairs, t = (t₁, ..., t_n)^T and f*(w) denote the mapping carried out by the network
on input x* given weights w. P(w|t, θ) is the weight posterior given the training
data². Then the predictive distribution for y* given the training data and hyperparameters θ is

P(y*|t, θ) = ∫ δ(y* − f*(w)) P(w|t, θ) dw    (1)
We will now show how this can also be viewed as making the prediction using priors
over functions rather than weights. Let f(w) denote the vector of outputs corresponding to inputs (x₁, ..., x_n) given weights w. Then, using Bayes' theorem we
have P(w|t, θ) = P(t|w)P(w|θ)/P(t|θ), and P(t|w) = ∫ P(t|y) δ(y − f(w)) dy.
Hence equation 1 can be rewritten as

P(y*|t, θ) = (1/P(t|θ)) ∫∫ P(t|y) δ(y* − f*(w)) δ(y − f(w)) P(w|θ) dw dy    (2)

However, the prior over (y*, y₁, ..., y_n) is given by P(y*, y|θ) = P(y*|y, θ)P(y|θ) =
∫ δ(y* − f*(w)) δ(y − f(w)) P(w|θ) dw, and thus the predictive distribution can be
¹For large n, various approximations to the exact solution which avoid the inversion of
an n × n matrix are available.
²For notational convenience we suppress the x-dependence of the posterior.
written as

P(y*|t, θ) = (1/P(t|θ)) ∫ P(t|y) P(y*|y, θ) P(y|θ) dy = ∫ P(y*|y, θ) P(y|t, θ) dy    (3)
Hence in a Bayesian view it is the prior over function values P(y*, y|θ) which is
important; specifying this prior by using weight distributions is one valid way to
achieve this goal. In general we can use the weight space or function space view,
which ever is more convenient, and for infinite neural networks the function space
view is more useful.
2 Gaussian processes
A stochastic process is a collection of random variables {Y(z) | z ∈ X} indexed by
a set X. In our case X will be ℝ^d, where d is the number of inputs. The stochastic
process is specified by giving the probability distribution for every finite subset
of variables Y(z₁), ..., Y(z_k) in a consistent manner. A Gaussian process (GP)
is a stochastic process which can be fully specified by its mean function μ(z) =
E[Y(z)] and its covariance function C(z, z') = E[(Y(z) − μ(z))(Y(z') − μ(z'))];
any finite set of Y-variables will have a joint multivariate Gaussian distribution. For
a multidimensional input space a Gaussian process may also be called a Gaussian
random field.

Below we consider Gaussian processes which have μ(z) = 0, as is the case for the
neural network priors discussed in section 3. A non-zero μ(z) can be incorporated
into the framework at the expense of a little extra complexity.
A widely used class of covariance functions is the stationary covariance functions,
whereby C(z, z') = C(z - z') . These are related to the spectral density (or power
spectrum) of the process by the Wiener-Khinchine theorem, and are particularly
amenable to Fourier analysis as the eigenfunctions of a stationary covariance kernel
are exp(ik·z). Many commonly used covariance functions are also isotropic, so that
C(h) = C(h) where h = z − z' and h = |h|. For example C(h) = exp(−(h/σ)^ν)
is a valid covariance function for all d and for 0 < ν ≤ 2. Note that in this case
σ sets the correlation length-scale of the random field, although other covariance
functions (e.g. those corresponding to power-law spectral densities) may have no
preferred length scale.
2.1 Prediction with Gaussian processes
The model for the observed data is that it was generated from the prior stochastic
process, and that independent Gaussian noise (of variance σ_ν²) was then added.
Given a prior covariance function C_P(z_i, z_j), a noise process C_N(z_i, z_j) = σ_ν² δ_ij
(i.e. independent noise of variance σ_ν² at each data point) and the training data,
the prediction for the distribution of y* corresponding to a test point z* is obtained
simply by applying equation 3. As the prior and noise model are both Gaussian the
integral can be done analytically and P(y*|t, θ) is Gaussian with mean and variance
ŷ(z*) = k_P^T(z*) (K_P + K_N)^{−1} t    (4)

σ̂²(z*) = C_P(z*, z*) − k_P^T(z*) (K_P + K_N)^{−1} k_P(z*)    (5)

where [K_α]_ij = C_α(z_i, z_j) for α = P, N and k_P(z*) = (C_P(z*, z₁), ...,
C_P(z*, z_n))^T. σ̂²(z*) gives the "error bars" of the prediction.
Equations 4 and 5 are the analogue for spatial processes of Wiener-Kolmogorov
prediction theory. They have appeared in a wide variety of contexts including
geostatistics where the method is known as "kriging" (Journel and Huijbregts, 1978;
Cressie 1993), multidimensional spline smoothing (Wahba, 1990), in the derivation
of radial basis function neural networks (Poggio and Girosi, 1990) and in the work
of Whittle (1963).
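As an illustration, equations 4 and 5 can be computed with a few linear-algebra calls. The sketch below assumes a generic covariance function passed in as a Python callable; the function and variable names are ours, not from the paper.

```python
import numpy as np

def gp_predict(C, X, t, z_star, noise_var):
    """Gaussian-process prediction (equations 4 and 5).

    C         : prior covariance function, C(z, z') -> float
    X         : list of n training inputs
    t         : array of n training targets
    z_star    : test input
    noise_var : variance of the independent Gaussian noise
    """
    n = len(X)
    K = np.array([[C(X[i], X[j]) for j in range(n)] for i in range(n)])
    K += noise_var * np.eye(n)                 # K_P + K_N
    k = np.array([C(z_star, X[i]) for i in range(n)])
    mean = k @ np.linalg.solve(K, t)           # equation 4
    var = C(z_star, z_star) - k @ np.linalg.solve(K, k)  # equation 5
    return mean, var
```

The O(n³) cost quoted above comes from the solve against the n × n matrix K_P + K_N.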
3 Covariance functions for Neural Networks
Consider a network which takes an input z, has one hidden layer with H units and
then linearly combines the outputs of the hidden units with a bias to obtain f(z).
The mapping can be written

f(z) = b + Σ_{j=1}^{H} v_j h(z; u_j)    (6)
where h(z; u) is the hidden unit transfer function (which we shall assume is
bounded) which depends on the input-to-hidden weights u. This architecture is
important because it has been shown by Hornik (1993) that networks with one
hidden layer are universal approximators as the number of hidden units tends to
infinity, for a wide class of transfer functions (but excluding polynomials). Let b
and the v's have independent zero-mean distributions of variance σ_b² and σ_v² respectively, and let the weights u_j for each hidden unit be independently and identically
distributed. Denoting all weights by w, we obtain (following Neal, 1996)
E_w[f(z)] = 0    (7)

E_w[f(z) f(z')] = σ_b² + Σ_j σ_v² E_u[h_j(z; u) h_j(z'; u)]    (8)

               = σ_b² + H σ_v² E_u[h(z; u) h(z'; u)]    (9)

where equation 9 follows because all of the hidden units are identically distributed.
The final term in equation 9 becomes ω² E_u[h(z; u) h(z'; u)] by letting σ_v² scale as
ω²/H.
As the transfer function is bounded, all moments of the distribution will be bounded
and hence the Central Limit Theorem can be applied, showing that the stochastic
process will become a Gaussian process in the limit as H → ∞.
By evaluating E_u[h(z) h(z')] for all z and z' in the training and testing sets we can
obtain the covariance function needed to describe the neural network as a Gaussian
process. These expectations are, of course, integrals over the relevant probability
distributions of the biases and input weights. In the following sections two specific
choices for the transfer functions are considered, (1) a sigmoidal function and (2) a
Gaussian. Gaussian weight priors are used in both cases.
It is interesting to note why this analysis cannot be taken a stage further to integrate
out any hyperparameters as well. For example, the variance σ_v² of the v weights
might be drawn from an inverse Gamma distribution. In this case the distribution
P(v) = ∫ P(v|σ_v²) P(σ_v²) dσ_v² is no longer the product of the marginal distributions
for each v weight (in fact it will be a multivariate t-distribution). A similar analysis
can be applied to the u weights with a hyperprior. The effect is to make the hidden
units non-independent, so that the Central Limit Theorem can no longer be applied.
3.1 Sigmoidal transfer function
A sigmoidal transfer function is a very common choice in neural networks research;
nets with this architecture are usually called multi-layer perceptrons.
Below we consider the transfer function h(z; u) = Φ(u₀ + Σ_{i=1}^{d} u_i z_i), where
Φ(z) = (2/√π) ∫₀^z e^{−t²} dt is the error function, closely related to the cumulative distribution
function for the Gaussian distribution. Appropriately scaled, the graph of this
function is very similar to the tanh function which is more commonly used in the
neural networks literature.
In calculating V_erf(z, z') := E_u[h(z; u) h(z'; u)] we make the usual assumptions (e.g.
MacKay, 1992) that u is drawn from a zero-mean Gaussian distribution with covariance matrix Σ, i.e. u ~ N(0, Σ). Let z̃ = (1, z₁, ..., z_d) be an augmented input
vector whose first entry corresponds to the bias. Then V_erf(z, z') can be written as

V_erf(z, z') = (2π)^{−(d+1)/2} |Σ|^{−1/2} ∫ Φ(u^T z̃) Φ(u^T z̃') exp(−u^T Σ^{−1} u / 2) du    (10)
This integral can be evaluated analytically³ to give

V_erf(z, z') = (2/π) sin⁻¹( 2 z̃^T Σ z̃' / √((1 + 2 z̃^T Σ z̃)(1 + 2 z̃'^T Σ z̃')) )    (11)
We observe that this covariance function is not stationary, which makes sense as
the distributions for the weights are centered about zero, and hence translational
symmetry is not present.
Consider a diagonal weight prior so that Σ = diag(σ₀², σ₁², ..., σ₁²), so that the inputs
i = 1, ..., d have a different weight variance to the bias σ₀². Then for |z|², |z'|| ² ≫
(1 + 2σ₀²)/(2σ₁²), we find that V_erf(z, z') ≈ 1 − 2θ/π, where θ is the angle between z and
z'. Again this makes sense intuitively; if the model is made up of a large number of
sigmoidal functions in random directions (in z space), then we would expect points
that lie diametrically opposite (i.e. at z and −z) to be anti-correlated, because
they will lie in the +1 and −1 regions of the sigmoid function for most directions.
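A minimal numerical sketch of equation 11, assuming NumPy and an arbitrary weight covariance Σ over the augmented input (the names are illustrative):

```python
import numpy as np

def v_erf(z, z_prime, Sigma):
    """Covariance of an infinite sigmoidal (erf) network, equation 11.

    z, z_prime : input vectors of length d
    Sigma      : (d+1) x (d+1) covariance of the input-to-hidden weights,
                 with the bias variance in the first row/column
    """
    zt = np.concatenate(([1.0], z))            # augmented input (bias first)
    zpt = np.concatenate(([1.0], z_prime))
    num = 2.0 * zt @ Sigma @ zpt
    den = np.sqrt((1 + 2 * zt @ Sigma @ zt) * (1 + 2 * zpt @ Sigma @ zpt))
    return (2.0 / np.pi) * np.arcsin(num / den)
```

The argument of arcsin is bounded in [−1, 1] for any positive semi-definite Σ, so the expression is always well defined.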
3.2 Gaussian transfer function
One other very common transfer function used in neural networks research is the
Gaussian, so that h(z; u) = exp[−(z − u)^T(z − u)/(2σ_g²)], where σ_g² is the width
parameter of the Gaussian. Gaussian basis functions are often used in Radial Basis
Function (RBF) networks (e.g. Poggio and Girosi, 1990).
For a Gaussian prior over the distribution of u so that u ~ N(0, σ_u² I),

V_G(z, z') = (2πσ_u²)^{−d/2} ∫ exp(−(z − u)^T(z − u)/(2σ_g²)) exp(−(z' − u)^T(z' − u)/(2σ_g²)) exp(−u^T u/(2σ_u²)) du    (12)

By completing the square and integrating out u we obtain
V_G(z, z') = (σ_e/σ_u)^d exp(−z^T z/(2σ_m²)) exp(−(z − z')^T(z − z')/(2σ_s²)) exp(−z'^T z'/(2σ_m²))    (13)

where 1/σ_e² = 2/σ_g² + 1/σ_u², σ_s² = 2σ_g² + σ_g⁴/σ_u² and σ_m² = 2σ_u² + σ_g². This formula
can be generalized by allowing covariance matrices Σ_b and Σ_u in place of σ_g² I and
σ_u² I; rescaling each input variable z_i independently is a simple example.
³Introduce a dummy parameter λ to make the first term in the integrand Φ(λ u^T z̃).
Differentiate the integral with respect to λ and then use integration by parts. Finally
recognize that dV_erf/dλ is of the form (1 − θ²)^{−1/2} dθ/dλ and hence obtain the sin⁻¹ form
of the result, and evaluate it at λ = 1.
Again this is a non-stationary covariance function, although it is interesting to note that if σ_u² → ∞ (while scaling ω² appropriately) we find that
V_G(z, z') ∝ exp(−(z − z')^T(z − z')/(4σ_g²))⁴. For a finite value of σ_u², V_G(z, z')
is a stationary covariance function "modulated" by the Gaussian decay function
exp(−z^T z/(2σ_m²)) exp(−z'^T z'/(2σ_m²)). Clearly if σ_m² is much larger than the largest
distance in z-space then the predictions made with V_G and a Gaussian process with
only the stationary part of V_G will be very similar.
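Equation 13 is likewise straightforward to evaluate numerically. The sketch below assumes the isotropic case σ_g² I, σ_u² I; variable names are ours.

```python
import numpy as np

def v_gauss(z, z_prime, sigma_g2, sigma_u2):
    """Covariance of an infinite Gaussian-RBF network, equation 13.

    sigma_g2 : width parameter sigma_g^2 of the Gaussian hidden units
    sigma_u2 : variance sigma_u^2 of the Gaussian prior on the centres u
    """
    z, z_prime = np.asarray(z, float), np.asarray(z_prime, float)
    sigma_e2 = 1.0 / (2.0 / sigma_g2 + 1.0 / sigma_u2)
    sigma_s2 = 2.0 * sigma_g2 + sigma_g2**2 / sigma_u2
    sigma_m2 = 2.0 * sigma_u2 + sigma_g2
    pre = np.sqrt(sigma_e2 / sigma_u2) ** len(z)   # (sigma_e / sigma_u)^d
    diff = z - z_prime
    return (pre
            * np.exp(-z @ z / (2 * sigma_m2))       # modulating decay at z
            * np.exp(-diff @ diff / (2 * sigma_s2)) # stationary part
            * np.exp(-z_prime @ z_prime / (2 * sigma_m2)))
```

Letting sigma_u2 grow while holding sigma_g2 fixed makes the two decay factors flatten out, recovering the purely stationary limit noted above.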
It is also possible to view the infinite network with Gaussian transfer functions as
an example of a shot-noise process based on an inhomogeneous Poisson process
(see Parzen (1962) §4.5 for details). Points are generated from an inhomogeneous
Poisson process with rate function ∝ exp(−z^T z/(2σ_u²)), and Gaussian kernels of
height v are centered on each of the points, where v is chosen iid from a distribution
with mean zero and variance σ_v².
3.3 Comparing covariance functions
The priors over functions specified by sigmoidal and Gaussian neural networks differ
from covariance functions that are usually employed in the literature, e.g. splines
(Wahba, 1990). How might we characterize the different covariance functions and
compare the kinds of priors that they imply?
The complex exponential exp(ik·z) is an eigenfunction of a stationary and isotropic
covariance function, and hence the spectral density (or power spectrum) S(k)
(k = |k|) nicely characterizes the corresponding stochastic process. Roughly speaking the spectral density describes the "power" at a given spatial frequency k; for
example, splines have S(k) ∝ k^{−β}. The decay of S(k) as k increases is essential,
as it provides a smoothing or damping out of high frequencies. Unfortunately non-stationary processes cannot be analyzed in exactly this fashion because the complex
exponentials are not (in general) eigenfunctions of a non-stationary kernel. Instead,
we must consider the eigenfunctions defined by ∫ C(z, z') φ(z') dz' = λ φ(z). However, it may be possible to get some feel for the effect of a non-stationary covariance
function by looking at the diagonal elements in its 2d-dimensional Fourier transform, which correspond to the entries in the power spectrum for stationary covariance
functions.
3.4 Convergence of finite network priors to GPs
From general Central Limit Theorem results one would expect a rate of convergence
of H^{−1/2} towards a Gaussian process prior. How many units will be required
in practice would seem to depend on the particular values of the weight-variance
parameters. For example, for Gaussian transfer functions, σ_m defines the radius
over which we expect the process to be significantly different from zero. If this
radius is increased (while keeping the variance of the basis functions σ_g² fixed) then
naturally one would expect to need more hidden units in order to achieve the same
level of approximation as before. Similar comments can be made for the sigmoidal
case, depending on (1 + 2σ₀²)/(2σ₁²).

I have conducted some experiments for the sigmoidal transfer function, comparing
the predictive performance of a finite neural network with one input unit to the
equivalent Gaussian process on data generated from the GP. The finite network
simulations were carried out using a slightly modified version of Neal's MCMC
Bayesian neural networks code (Neal, 1996) and the inputs were drawn from a
⁴Note that this would require ω² → ∞ and hence the Central Limit Theorem would no
longer hold, i.e. the process would be non-Gaussian.
N(0,1) distribution. The hyperparameter settings were σ₁ = 10.0, σ₀ = 2.0, σ_v =
1.189 and σ_b = 1.0. Roughly speaking the results are that 100's of hidden units
are required before similar performance is achieved by the two methods, although
there is considerable variability depending on the particular sample drawn from the
prior; sometimes 10 hidden units appears sufficient for good agreement.
4 Discussion
The work described above shows how to calculate the covariance function for sigmoidal and Gaussian basis functions networks. It is probable similar techniques will
allow covariance functions to be derived analytically for networks with other kinds
of basis functions as well; these may turn out to be similar in form to covariance
functions already used in the Gaussian process literature.
In the derivations above the hyperparameters θ were fixed. However, in a real data
analysis problem it would be unlikely that appropriate values of these parameters
would be known. Given a prior distribution P(θ), predictions should be made by
integrating over the posterior distribution P(θ|t) ∝ P(θ)P(t|θ), where P(t|θ) is
the likelihood of the training data t under the model; P(t|θ) is easily computed for
a Gaussian process. The prediction ȳ(z) for test input z is then given by

ȳ(z) = ∫ ȳ_θ(z) P(θ|D) dθ    (14)

where ȳ_θ(z) is the predicted mean (as given by equation 4) for a particular value
of θ. This integration is not tractable analytically but Markov Chain Monte Carlo
methods such as Hybrid Monte Carlo can be used to approximate it. This strategy
was used in Williams and Rasmussen (1996), but for stationary covariance functions,
not ones derived from neural networks; it would be interesting to compare results.
Acknowledgements
I thank David Saad and David Barber for help in obtaining the result in equation 11, and
Chris Bishop, Peter Dayan, Ian Nabney, Radford Neal, David Saad and Huaiyu Zhu for
comments on an earlier draft of the paper. This work was partially supported by EPSRC
grant GR/J75425, "Novel Developments in Learning Theory for Neural Networks".
References
Cressie, N. A. C. (1993). Statistics for Spatial Data. Wiley.
Hornik, K. (1993). Some new results on neural network approximation. Neural Networks 6 (8), 1069-1072.
Journel, A. G. and C. J. Huijbregts (1978). Mining Geostatistics. Academic Press.
MacKay, D. J. C. (1992). A Practical Bayesian Framework for Backpropagation Networks. Neural Computation 4(3), 448-472.
Neal, R. M. (1996). Bayesian Learning for Neural Networks. Springer. Lecture Notes in
Statistics 118.
Parzen, E. (1962). Stochastic Processes. Holden-Day.
Poggio, T. and F. Girosi (1990). Networks for approximation and learning. Proceedings
of IEEE 78, 1481-1497.
Wahba, G. (1990). Spline Models for Observational Data. Society for Industrial and Applied Mathematics. CBMS-NSF Regional Conference series in applied mathematics.
Whittle, P. (1963). Prediction and regulation by linear least-square methods. English
Universities Press.
Williams, C. K. I. and C. E. Rasmussen (1996). Gaussian processes for regression. In
D. S. Touretzky, M. C. Mozer, and M. E. Hasselmo (Eds.), Advances in Neural
Information Processing Systems 8, pp. 514-520. MIT Press.
217 | 1,198 | ARC-LH: A New Adaptive Resampling
Algorithm for Improving ANN Classifiers
Friedrich Leisch
Friedrich.Leisch@ci.tuwien.ac.at
Kurt Hornik
Kurt.Hornik@ci.tuwien.ac.at
Institut fiir Statistik und Wahrscheinlichkeitstheorie
Technische UniversWit Wien
A-I040 Wien, Austria
Abstract
We introduce arc-lh, a new algorithm for improvement of ANN classifier performance, which measures the importance of patterns by
aggregated network output errors. On several artificial benchmark
problems, this algorithm compares favorably with other resample
and combine techniques.
1 Introduction
The training of artificial neural networks (ANNs) is usually a stochastic and unstable process. As the weights of the network are initialized at random and training
patterns are presented in random order, ANNs trained on the same data will typically be different in value and performance. In addition, small changes in the
training set can lead to two completely different trained networks with different
performance even if the nets had the same initial weights.
Roughly speaking, ANNs have a low bias because of their approximation capabilities, but a rather high variance because of the instability. Recently, several resample
and combine techniques for improving ANN performance have been proposed. In
this paper we introduce a new arcing ("adaptive resample and combine") method
called arc-lh. Contrary to the arc-fs method by Freund & Schapire (1995), which
uses misclassification rates for adapting the resampling probabilities, arc-lh uses the
aggregated network output error. The performance of arc-lh is compared with other
techniques on several popular artificial benchmark problems.
2 Bias-Variance Decomposition of 0-1 Loss
Consider the task of classifying a random vector ξ taking values in X into one of c
classes C₁, ..., C_c, and let g(·) be a classification function mapping the input space
on the finite set {1, ..., c}.

The classification task is to find an optimal function g minimizing the risk

R_g = E L_g(ξ) = ∫_X L_g(x) dF(x)    (1)

where F denotes the (typically unknown) distribution function of ξ, and L is a
loss function. In this paper, we consider 0-1 loss only, i.e., the loss is 1 for all
misclassified patterns and zero otherwise.
It is well known that the optimal classifier, i.e., the classifier with minimum risk, is
the Bayes classifier g* assigning to each input x the class with maximum posterior
probability P(C_n|x). These posterior probabilities are typically unknown, hence
the Bayes classifier cannot be used directly. Note that R_{g*} = 0 for disjoint classes
and R_{g*} > 0 otherwise.
Let X_N = {x₁, ..., x_N} be a set of independent input vectors for which the true
class is known, available for training the classifier. Further, let g_{X_N}(·) denote a
classifier trained using set X_N. The risk R_{g_{X_N}} ≥ R_{g*} of classifier g_{X_N} is a random
variable depending on the training sample X_N. In the case of ANN classifiers it
also depends on the network training, i.e., even for fixed X_N the performance of a
trained ANN is a random variable depending on the initialization of weights and
the (often random) presentation of the patterns x_n during training.
Following Breiman (1996a) we decompose the risk of a classifier into the (minimum
possible) Bayes error, a systematic bias term of the model class and the variance of
the classifier within its model class. We call a classifier model unbiased for input x
if, over replications of all possible training sets X_N of size N, network initializations
and pattern presentations, g picks the correct class more often than any other class.
Let U = U(g) denote the set of all x ∈ X where g is unbiased, and B = B(g) = X\U
the set of all points where g is biased. The risk of classifier g can be decomposed as
R_g = R_{g*} + Bias(g) + Var(g)    (2)

where R_{g*} is the risk of the Bayes classifier,

Bias(g) = R_B(g) − R_B(g*)
Var(g) = R_U(g) − R_U(g*)

and R_B and R_U denote the risk on sets B and U, respectively, i.e., the integration in
Equation 1 is over B or U instead of X, respectively.
A simpler bias-variance decomposition has been proposed by Kong & Dietterich
(1995):
Bias(g) = P{ξ ∈ B}
Var(g) = R_g − Bias(g)
The size of the bias set is seen as the bias of the model (i.e., the error the model
class "typically" makes) . The variance is simply the difference between the actual
risk and this bias term. This decompostion yields negative variance if the current
classifier performs better than the average classifier.
In both decompositions, the bias gives the systematic risk of the model, whereas
the variance measures how good the current realization is compared to the best
possible realization of the model. Neural networks are very powerful but rather
unstable approximators, hence their bias should be low, but the variance may be
high.
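Both decompositions can be estimated by Monte Carlo over replications. The following sketch estimates the Kong & Dietterich version from a matrix of predictions; the plurality vote over replications stands in for the "more often than any other class" definition of unbiasedness, and all names are illustrative.

```python
import numpy as np

def kd_bias_variance(preds, y_true):
    """Monte-Carlo estimate of the Kong & Dietterich decomposition.

    preds  : int array (R, M) of predicted classes from R replications
             (different training sets / initializations) on M test points
    y_true : int array (M,) of true classes
    """
    # The class the model "typically" predicts at each test point.
    typical = np.apply_along_axis(lambda col: np.bincount(col).argmax(),
                                  0, preds.astype(int))
    biased = typical != y_true            # estimate of the bias set B
    bias = np.mean(biased)                # Bias(g) = P{xi in B}
    risk = np.mean(preds != y_true)       # average risk over replications
    var = risk - bias                     # Var(g) = R_g - Bias(g)
    return risk, bias, var
```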
3 Resample and Combine
Suppose we had k independent training sets X_{N₁}, ..., X_{N_k} and corresponding classifiers g₁, ..., g_k trained using these sets, respectively. We can then combine these
single classifiers into a joint voting classifier g_v^k by assigning to each input x the class
the majority of the g_j vote for. If the g_j have low bias, then g_v^k should have low
bias, too. If the model is unbiased for an input x, then the variance of g_v^k vanishes
as k → ∞, and g_v = lim_{k→∞} g_v^k is optimal for x. Hence, by resampling training sets
from the original training set and combining the resulting classifiers into a voting
classifier it might be possible to reduce the high variance of unstable classification
algorithms.
[Diagram: training sets X_{N₁}, ..., X_{N_k} are resampled from X_N; ANN classifiers g₁, ..., g_k are trained on them and combined into a voting classifier; in arcing, the resampling is adapted after each step.]
3.1 Bagging
Breiman (1994, 1996a) introduced a procedure called bagging ("bootstrap aggregating") for tree classifiers that may also be used for ANNs. The bagging algorithm
starts with a training set X_N of size N. Several bootstrap replica X_N^1, ..., X_N^k are
constructed and a neural network is trained on each. These networks are finally
combined by majority voting. The bootstrap sets X_N^i consist of N patterns drawn
with replacement from the original training set (see Efron & Tibshirani (1993) for
more information on the bootstrap).
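A minimal bagging sketch, assuming a generic `train` routine that returns an object with a `predict` method; this interface is our assumption, and any ANN trainer with these semantics would do.

```python
import numpy as np

def bagging(train, X, y, k=10, rng=None):
    """Train k networks on bootstrap replicates; combine by majority vote."""
    rng = rng or np.random.default_rng()
    N = len(X)
    nets = []
    for _ in range(k):
        idx = rng.integers(0, N, size=N)   # N patterns drawn with replacement
        nets.append(train(X[idx], y[idx]))

    def vote(X_new):
        # Stack the k prediction vectors and take a per-pattern majority.
        preds = np.stack([net.predict(X_new) for net in nets]).astype(int)
        return np.apply_along_axis(lambda col: np.bincount(col).argmax(),
                                   0, preds)
    return vote
```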
3.2 Arcing

3.2.1 Arcing Based on Misclassification Rates
Arcing, which is a more sophisticated version of bagging, was first introduced by
Freund & Schapire (1995) and called boosting. The new training sets are not constructed by uniformly sampling from the empirical distribution of the training set
XN " but from a distribution over X N that includes information about previous
misclassifications.
Let p_n^i denote the probability that pattern x_n is included into the i-th training set
X_N^i and initialize with p_n^1 = 1/N. Freund and Schapire's arcing algorithm, called
arc-fs as in Breiman (1996a), works as follows:

1. Construct a pattern set X_N^i by sampling with replacement with probabilities p_n^i from X_N and train a classifier g_i using set X_N^i.

2. Set d_n = 1 for all patterns that are misclassified by g_i and zero otherwise.
With ε_i = Σ_{n=1}^N p_n^i d_n and β_i = (1 − ε_i)/ε_i update the probabilities by

   p_n^{i+1} = p_n^i β_i^{d_n} / Σ_{m=1}^N p_m^i β_i^{d_m}

3. Set i := i + 1 and repeat.

After k steps, g₁, ..., g_k are combined with weighted voting where each g_i's vote has
weight log β_i. Breiman (1996a) and Quinlan (1996) compare bagging and arcing for
CART and C4.5 classifiers, respectively. Both bagging and arc-fs are very effective
CART and C4.5 classifiers, respectively. Both bagging and arc-fs are very effective
in reducing the high variance component of tree classifiers, with adaptive resampling
being a bit better than simple bagging.
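A sketch of the arc-fs resampling loop described above; the `train`/`predict` interface is again assumed, and a small numerical guard on ε_i (our addition, not part of the original algorithm) keeps the update defined when a classifier makes no errors.

```python
import numpy as np

def arc_fs(train, X, y, k=10, rng=None):
    """Arcing with misclassification-based resampling (arc-fs sketch)."""
    rng = rng or np.random.default_rng()
    N = len(X)
    p = np.full(N, 1.0 / N)                    # resampling probabilities
    nets, log_betas = [], []
    for _ in range(k):
        idx = rng.choice(N, size=N, replace=True, p=p)
        net = train(X[idx], y[idx])
        d = (net.predict(X) != y).astype(float)  # 1 iff pattern misclassified
        eps = np.clip(np.sum(p * d), 1e-12, 1 - 1e-12)  # guard (our addition)
        beta = (1.0 - eps) / eps
        p = p * beta**d
        p /= p.sum()                           # renormalize
        nets.append(net)
        log_betas.append(np.log(beta))         # voting weight of this net
    return nets, log_betas                     # combine by weighted voting
```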
3.2.2 Arcing Based on Network Error
Independently from the arcing and bagging procedures described above, adaptive
resampling has been introduced for active pattern selection in leave-k-out cross-validation (CV/APS; Leisch & Jain, 1996; Leisch et al., 1995). Whereas arc-fs (or
Breiman's arc-x4) uses only the information whether a pattern is misclassified or
not, in CV/APS the fact that MLPs approximate the posterior probabilities of the
classes (Kanaya & Miyake, 1991) is utilized, too. We introduce a simple new arcing
method based on the main idea of CV/APS that the "importance" of a pattern for
the learning process can be measured by the aggregated output error of an MLP
for the pattern over several training runs.

Let the classifier g be an ANN using 1-of-c coding, i.e., one output node per class;
the target t(x) for each input x is one at the node corresponding to the class of
x and zero at the remaining output nodes. Let e(x) = ||t(x) − g(x)||² be the
squared error of the network for input x. Patterns that repeatedly have high output
errors are somewhat harder to learn for the network and therefore their resampling
probabilities are increased proportionally to the error. Error-dependent resampling
introduces a "grey-scale" of pattern-importance as opposed to the "black and white"
paradigm of misclassification dependent resampling.
Again let p_n^i denote the probability that pattern x_n is included into the i-th training
set X_N^i and initialize with p_n^1 = 1/N. Our new arcing algorithm, called arc-lh, works
as follows:

1. Construct a pattern set X_N^i by sampling with replacement with probabilities p_n^i from X_N and train a classifier g_i using set X_N^i.

2. Add the network output error of each pattern to the resampling probabilities, i.e., set p_n^{i+1} ∝ p_n^i + e(x_n), renormalized such that Σ_n p_n^{i+1} = 1.

3. Set i := i + 1 and repeat.

After k steps, g₁, ..., g_k are combined by majority voting.
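The corresponding arc-lh loop differs from arc-fs only in step 2, where the aggregated squared output error replaces the misclassification indicator. In the sketch below, the `output` method returning the raw 1-of-c network outputs is an assumed interface.

```python
import numpy as np

def arc_lh(train, X, T, k=10, rng=None):
    """Arcing with error-based resampling (arc-lh sketch).

    T : 1-of-c target matrix, one row per pattern
    """
    rng = rng or np.random.default_rng()
    N = len(X)
    p = np.full(N, 1.0 / N)
    nets = []
    for _ in range(k):
        idx = rng.choice(N, size=N, replace=True, p=p)
        net = train(X[idx], T[idx])
        out = net.output(X)                  # raw outputs, shape (N, c)
        e = np.sum((T - out) ** 2, axis=1)   # squared output error e(x_n)
        p = p + e                            # add errors to the probabilities
        p /= p.sum()                         # renormalize to sum to one
        nets.append(net)
    return nets                              # combine by majority voting
```

Because e(x) is continuous, hard patterns accumulate probability gradually, the "grey-scale" of importance mentioned below, instead of the all-or-nothing reweighting of arc-fs.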
3.3 Jittering
In our experiments, we also compare the above resample and combine methods with
jittering, which resamples the training set by contaminating the inputs by artificial
noise. No voting is done, but the size of the training set is increased by creation of
artificial inputs "around" the original inputs, see Koistinen & Holmstrom (1992).
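For completeness, a jittering sketch: the training set is enlarged with noise-contaminated copies of the inputs. X and y are assumed to be NumPy arrays, and the noise level is a free parameter we introduce for illustration; Koistinen & Holmstrom (1992) discuss how it should be chosen.

```python
import numpy as np

def jitter(X, y, copies=5, noise_std=0.1, rng=None):
    """Enlarge a training set with noisy copies of the inputs (jittering)."""
    rng = rng or np.random.default_rng()
    X_new = [X] + [X + rng.normal(0.0, noise_std, size=X.shape)
                   for _ in range(copies)]          # artificial neighbours
    y_new = np.concatenate([y] * (copies + 1))      # labels are unchanged
    return np.concatenate(X_new), y_new
```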
4 Experiments
We demonstrate the effects of bagging and arcing on several well known artificial
benchmark problems. For all problems, i - h - c single hidden layer perceptrons
(SHLPs) with i input, h hidden and c output nodes were used. The number of hidden nodes h was chosen in a way that the corresponding networks have reasonably
low bias.
2 Spirals with noise: 2-dimensional input, 2 classes. Inputs with uniform noise
around two spirals. N = 300. R_{g*} = 0%. 2-14-2 SHLP.

Continuous XOR: 2-dimensional input, 2 classes. Uniform inputs on the 2-dimensional square −1 ≤ x, y ≤ 1 classified in the two classes x·y ≥ 0 and
x·y < 0. N = 300. R_{g*} = 0%. 2-4-2 SHLP.

Ringnorm: 20-dimensional input, 2 classes. Class 1 is normal with mean zero and
covariance 4 times the identity matrix. Class 2 is a unit normal with mean
(a, a, ..., a), a = 2/√20. N = 300. R_{g*} = 1.2%. 20-4-2 SHLP.
The first two problems are standard benchmark problems (note however that we
use a noisy variant of the standard spirals problem); the last one is, e.g., used in
Breiman (1994, 1996a).
All experiments were replicated 50 times; in each bagging and arcing replication
10 classifiers were combined to build a voting classifier. Generalization errors were
computed using Monte Carlo techniques on test sets of size 10000.

Table 1 gives the average risk over the 50 replications for a standard single SHLP,
an SHLP trained on a jittered training set, and for voting classifiers using ten votes
constructed with bagging, arc-lh and arc-fs, respectively. The Bayes risk of the spiral
and xor examples is zero, hence the risk of a network equals the sum of its bias and
variance. The Bayes risk of the ringnorm example is 1.2%.
                         Breiman                Kong & Dietterich
            R_g     Bias(g)   Var(g)      Bias(g)   Var(g)
2 Spirals
standard    7.75     0.32      7.43         0.82     6.93
jitter      6.53     0.26      6.27         0.52     6.02
bagging     4.39     0.35      4.04         0.68     3.71
arc-fs      4.31     0.35      3.96         0.60     3.71
arc-lh      4.32     0.31      4.01         0.72     3.60
XOR
standard    6.54     0.53      6.01         1.32     5.22
jitter      6.29     0.37      5.92         1.08     5.21
bagging     3.69     0.59      3.09         1.22     2.47
arc-fs      3.73     0.58      3.15         1.12     2.61
arc-lh      3.58     0.50      3.08         1.20     2.38
Ringnorm
standard   18.64     9.19      8.26        13.84     4.80
jitter     18.56     9.03      8.34        13.72     4.84
bagging    15.72     9.61      4.91        13.54     2.18
arc-fs     15.71     9.70      4.81        13.58     2.13
arc-lh     15.63     9.30      5.13        13.20     2.43

Table 1: Bias-variance decompositions.
The variance part was drastically reduced by the resample & combine methods, with
only a negligible change in bias. Note the low bias in the spiral and xor problems.
ANNs obviously can solve these classification tasks (one could create appropriate
nets by hand), but of course training cannot find the exact boundaries between the
classes. Averaging over several nets helps to overcome this problem. The bias in
the ringnorm example is rather high, indicating that a change of network topology
(bigger net, etc.) or training algorithm (learning rate, etc.) may lower the overall
risk.
5 Summary
Comparison of the resample and combine algorithms shows slight advantages
for adaptive resampling, but no algorithm dominates the other two. Further improvements should be possible based on a better understanding of the theoretical
properties of resample and combine techniques. These issues are currently being
investigated.
References
Breiman, L. (1994). Bagging predictors. Tech. Rep. 421, Department of Statistics, University of California, Berkeley, California, USA.
Breiman, 1. (1996a). Bias, variance, and arcing classifiers. Tech. Rep. 460, Statistics
Department, University of California, Berkeley, CA, USA.
Breiman, L. (1996b). Stacked regressions. Machine Learning, 24,49.
Drucker, H. & Cortes, C. (1996) . Boosting decision trees. In Touretzky, S., Mozer, M. C.,
& Hasselmo, M. E. (eds.), Advances in Neural Information Processing Systems, vol. 8.
MIT Press.
Efron, B. & Tibshirani, R. J. (1993). An introduction to the bootstrap. Monographs on
Statistics and Applied Probability. New York: Chapman & Hall.
Freund, Y. & Schapire, R. E. (1995). A decision-theoretic generalization of on-line learning
and an application to boosting. Tech. rep., AT&T Bell Laboratories, 600 Mountain Ave,
Murray Hill, NJ, USA.
Kanaya, F. & Miyake, S. (1991). Bayes statistical behavior and valid generalization of
pattern classifying neural networks. IEEE Transactions on Neural Networks, 2(4), 471475.
Kohavi, R. & Wolpert, D. H. (1996). Bias plus variance decomposition for zero-one loss.
In Machine Learning: Proceedings of the 19th International Conference.
Koistinen, P. & Holmstrom, L. (1992). Kernel regression and backpropagation training
with noise. In Moody, J. E., Hanson, S. J., & Lippmann, R. P. (eds.), Advances in Neural
Information Processing Systems, vol. 4, pp. 1033-1039. Morgan Kaufmann Publishers,
Inc.
Kong, E. B. & Dietterich, T. G. (1995). Error-correcting output coding corrects bias and
variance. In Machine Learning: Proceedings of the 12th International Conference, pp.
313-321. Morgan-Kaufmann.
Leisch, F. & Jain, L. C. (1996). Cross-validation with active pattern selection for neural
network classifiers. Submitted to IEEE Transactions on Neural Networks, in Review.
Leisch, F., Jain, L. C., & Hornik, K. (1995). NN classifiers: Reducing the computational
cost of cross-validation by active pattern selection. In Artificial Neural Networks and
Expert Systems, vol. 2. Los Alamitos, CA, USA: IEEE Computer Society Press.
Quinlan, J. R. (1996). Bagging, boosting and C4.5. University of Sydney, Australia.
Ripley, B. D. (1996). Pattern recognition and neural networks. Cambridge, UK: Cambridge
University Press.
Tibshirani, R. (1996a). Bias, variance and prediction error for classification rules. University of Toronto, Canada.
Tibshirani, R. (1996b). A comparison of some error estimates for neural network models.
Neural Computation, 8(1), 152-163.
218 | 1,199 | ?
Neural network models of chemotaxis in the nematode Caenorhabditis elegans
Thomas C. Ferree, Ben A. Marcotte, Shawn R. Lockery
Institute of Neuroscience, University of Oregon, Eugene, Oregon 97403
Abstract
We train recurrent networks to control chemotaxis in a computer
model of the nematode C. elegans. The model presented is based
closely on the body mechanics, behavioral analyses, neuroanatomy
and neurophysiology of C. elegans, each imposing constraints relevant for information processing. Simulated worms moving autonomously in simulated chemical environments display a variety
of chemotaxis strategies similar to those of biological worms.
1 INTRODUCTION
The nematode C. elegans provides a unique opportunity to study the neuronal basis of neural computation in an animal capable of complex goal-oriented behaviors.
The adult hermaphrodite is only 1 mm long, and has exactly 302 neurons and 95
muscle cells. The morphology of every cell and the location of most electrical and
chemical synapses are known precisely (White et al., 1986), making C. elegans especially attractive for study. Whole-cell recordings are now being made on identified
neurons in the nerve ring of C. elegans to determine electrophysiological properties
which underlie information processing in this animal (Lockery and Goodman, unpublished). However, the strengths and polarities of synaptic connections are not
known, so we use neural network optimization to find sets of synaptic strengths
which reproduce actual nematode behavior in a simulated worm.
We focus on chemotaxis, the ability to move up (or down) a gradient of chemical
attractants (or repellants). In the laboratory, flat Petri dishes (radius = 4.25 cm)
are prepared with a Gaussian-shaped field of attractant at the center, and worms
are allowed to move freely about. Worms propel themselves forward by generating
an undulatory body wave, which produces sinusoidal movement. In chemotaxis, the
nervous system generates motor commands which bias this movement and direct
the animal toward higher attractant concentration.
Anatomical constraints pose important problems for C. elegans during chemotaxis.
In particular, the animal detects the presence of chemicals with a pair of sensory
organs (amphids) at the tip of the nose, each containing the processes of multiple
chemosensory neurons. During normal locomotion, however, the animal moves on
its side so that the two amphids are perpendicular to the Petri dish. C. elegans cannot, therefore, sense the gradient directly. One possible strategy for chemotaxis,
which has been suggested previously (Ward, 1973), is that the animal computes a
temporal derivative of the local concentration during a single head sweep, and combines this with some form of proprioceptive feedback indicating muscle contraction
and the direction of head sweep, to compute the spatial gradient for chemotaxis.
The existence of this and other strategies is discussed later.
In Section 2, we derive a simple model of the nematode body which produces realistic sinusoidal trajectories in response to motor commands from the nervous system.
In Section 3, we give a simple model of the C. elegans nervous system based on
preliminary physiological data. In Section 4, we use a stochastic optimization algorithm to determine sets of synaptic weights which control chemotaxis, and discuss
solutions.
2 BIOMECHANICS OF NEMATODE ORIENTATION
Nematode locomotion has been studied in detail (Niebur and Erdos, 1991; Niebur
and Erdos, 1993). These authors derived Newtonian force equations for each muscular segment of the body, which can be solved numerically to generate forward
sinusoidal movement. Unfortunately, such a thorough treatment is computationally intensive and not practical to use with network optimization. To simplify the
problem we first recognize that chemotaxis is a behavior more of orientation than of
locomotion. We therefore derive a set of biomechanical equations which direct the
head to generate sinusoidal movement, which can be biased by the network toward
higher chemical concentrations.
We focus our attention on the point (x, y) at the tip of the nose, since that is where the animal senses the chemical environment. As shown in Figure 1(a), we assign a velocity vector $\vec{v}$ directed along the midline of the first body segment, i.e., the head. Assuming that the worm moves forward at constant speed v, we can write the velocity vector as

$$\vec{v}(t) = \left(\frac{dx}{dt}, \frac{dy}{dt}\right) = \big(v\cos\theta(t),\ v\sin\theta(t)\big) \qquad (1)$$
where x, y and $\theta$ are measured relative to fixed coordinates in the Petri dish.
Assuming that the worm moves without lateral slipping and that the undulatory
wave of muscular contraction initiated in the neck travels posteriorly without
modification, then each body segment simply follows the one previous (anterior) to
it. In this way, the head directs the movement and the rest of the body simply
follows.
Figure 1(b) shows an expanded view of the neck segment. As the worm moves
forward, the posterior boundary of that segment assumes the position held by its
anterior neighbor at a slightly earlier time. If L is the total body length and N is
the number of body segments, then this time delay is $\delta t \approx L/Nv$. (For L = 1 mm, v = 0.22 mm/s and N = 10 we have $\delta t \approx 0.45$ s, roughly an order of magnitude smaller than the relevant behavioral time scale: the head-sweep period $T \approx 4.2$ s.)
If we define the neck angle $\alpha(t) \equiv \theta_1(t) - \theta_2(t)$, then the above arguments imply

$$\alpha(t) = \theta_1(t) - \theta_1(t - \delta t) \approx \frac{d\theta_1}{dt}\,\delta t \qquad (2)$$
where the second relation is essentially a backward-Euler algorithm for $d\theta_1/dt$. Since $\theta \equiv \theta_1$, we have reached the intuitive result that the neck angle $\alpha$ determines the rate of turning $d\theta/dt$. Note that while $\theta_1$ and $\theta_2$ are defined relative to the fixed laboratory coordinates, their difference $\alpha$ is invariant under rotations of these coordinates, and can therefore be viewed as intrinsic to the body. This allows us to derive an expression for $\alpha$ in terms of muscle cell contraction, or motor neuron depolarization, as follows.
Figure 1: Nematode body mechanics. (a) Segmented model of the nematode body,
showing the direction of motion v. (b) Expanded view of the neck segment, showing
dorsal (D) and ventral (V) neck muscles.
Nematodes maintain nearly constant volume during movement. To incorporate this
constraint, albeit approximately, we assume that at all times the geometry of each
segment is such that $(l_D - l_0) = -(l_V - l_0)$, where $l_0 \equiv L/N$ is the equilibrium length of a relaxed segment. For small angles $\alpha$, we have $\alpha \approx (l_V - l_D)/d$, where d is the body diameter. The dashed lines in Figure 1(b) indicate dorsal and ventral muscles, which are believed to develop tension nearly independent of length (Toida et al., 1975). When contracting, these muscles must work against the elasticity of the cuticle, internal fluid pressure, and the elasticity and developed tension of the opposing muscles. If these elastic forces act linearly, then $T_D - T_V \approx k\,(l_V - l_D)$, where $T_D$ and $T_V$ are dorsal and ventral muscle tensions, and k is an effective force constant. For simplicity, we further assume that each muscle develops tension linearly in response to the voltage of its corresponding motor neuron, i.e., $T_{D,V} = E\,V_{D,V}$, where E is a positive constant, and $V_D$ and $V_V$ are dorsal and ventral motor neuron voltages.
Combining these results, we have finally
$$\frac{d\alpha}{dt} = \gamma\,\big(V_D(t) - V_V(t)\big) \qquad (3)$$
where"Y = (Nv/L)? (E/kd). With appropriate motor commands, equations (I) and
(3) can be integrated numerically to generate sinusoidal worm trajectories like those
of biological worms. This model embodies the main anatomical features that are
likely to be important in C. elegans chemotaxis, yet is sufficiently compact to be
embedded in a network optimization procedure.
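To make the body model concrete, equations (1)-(3) can be integrated directly. The sketch below uses forward Euler steps; the speed and segment delay follow the values quoted in this paper, but the value of $\gamma$ and the motor-drive function are illustrative assumptions:

```python
import numpy as np

def simulate_body(motor, v=0.022, gamma=0.4, delta_t=0.45, t_end=60.0, h=0.01):
    """Euler integration of eqs. (1)-(3) for the nose point (x, y).

    motor(t) -> (V_D, V_V): dorsal and ventral motor neuron voltages.
    v: forward speed (cm/s); delta_t: segment time delay (s).
    """
    n = int(t_end / h)
    x, y, theta = np.zeros(n), np.zeros(n), np.zeros(n)
    alpha = 0.0
    for k in range(n - 1):
        V_D, V_V = motor(k * h)
        alpha += h * gamma * (V_D - V_V)               # eq. (3)
        theta[k + 1] = theta[k] + h * alpha / delta_t  # eq. (2): alpha ~ (dtheta/dt) delta_t
        x[k + 1] = x[k] + h * v * np.cos(theta[k])     # eq. (1)
        y[k + 1] = y[k] + h * v * np.sin(theta[k])
    return x, y, theta

# Antiphase dorsal/ventral drive at the head-sweep period T = 4.2 s
T = 4.2
x, y, _ = simulate_body(lambda t: (np.sin(2 * np.pi * t / T),
                                   -np.sin(2 * np.pi * t / T)))
```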
3 CHEMOTAXIS CONTROL CIRCUIT
C. elegans neurons are tiny and have very simple morphologies: a typical neuron in the head has a spherical soma 1-2 μm in diameter, and a single cylindrical process 60-80 μm in length and 0.1-0.2 μm in diameter. Compartmental models, based on this morphology and preliminary physiological recordings, indicate
that C. elegans neurons are effectively isopotential (Lockery, 1995). Furthermore,
C. elegans neurons do not fire classical all-or-none action potentials, but appear to
rely primarily on graded signal propagation (Lockery and Goodman, unpublished).
Thus, a reasonable starting point for a network model is to represent each neuron
by a single isopotential compartment, in which voltage is the state variable, and the membrane conductance is purely ohmic.
Anatomical data indicate that the C. elegans nervous system has both electrical and
chemical synapses, but the synaptic transfer functions are not known. However,
steady-state synaptic transfer functions for chemical synapses have been measured
in Ascaris suum, a related species of nematode, where it was found that postsynaptic
voltage is a graded function of presynaptic voltage, due to tonic neurotransmitter
release (Davis and Stretton, 1989). This voltage dependence is sigmoidal, i.e.,
$V_{post} \sim \tanh(V_{pre})$. A simple network model which captures all of these features is

$$\tau\,\frac{dV_i}{dt} = -V_i + V_{max}\tanh\!\left(\beta \sum_{j=1}^{n} w_{ij}\,(V_j - \bar{V}_j)\right) + V_i^{stim}(t) \qquad (4)$$
where $V_i$ is the voltage of the ith neuron. Here all voltages are measured relative to a common resting potential, $V_{max}$ is an arbitrary voltage scale which sets the operational range of the neurons, and $\beta$ sets the voltage sensitivity of the synaptic transfer function. The weight $w_{ij}$ represents the net strength and polarity of all synaptic connections from neuron j to neuron i, and the constants $\bar{V}_j$ determine the center of each transfer function. The membrane time constant $\tau$ is assumed to be the same for all cells, and will be discussed further later. Note that in (4), synaptic transmission occurs instantaneously: the time constant $\tau$ arises from capacitive current through the cell membrane, and is unrelated to synaptic transmission. Note also that the way in which (4) sums multiple inputs is not unique, i.e., other sigmoidal models which sum inputs differently are equally plausible, since no data on synaptic summation exists for either C. elegans or Ascaris suum.
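A minimal sketch of one integration step of equation (4), vectorized over all n neurons (the default parameter values are placeholders; the actual $w_{ij}$, $\beta$ and related constants are found by the optimization described below):

```python
import numpy as np

def network_step(V, W, Vbar, stim, tau=0.5, Vmax=1.0, beta=1.0, h=0.01):
    """One forward-Euler step of eq. (4) for all n neurons at once.

    V: voltages (n,); W: weights w_ij (n, n); Vbar: transfer-function
    centers (n,); stim: stimulus term V_i^stim(t) (n,).
    """
    dV = (-V + Vmax * np.tanh(beta * W.dot(V - Vbar)) + stim) / tau
    return V + h * dV
```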
The stimulus term $V_i^{stim}(t)$ is used to introduce chemosensation and sinusoidal locomotion to the network in (4). We use i = 1 to label a single chemosensory neuron at the tip of the nose, and $i = n-1 \equiv D$ and $i = n \equiv V$ to label dorsal and ventral motor neurons. For simplicity we assume that the chemosensory neuron voltage responds linearly to the local chemical concentration:

$$V_1^{stim}(t) = V_{chem}\,C(x, y) \qquad (5)$$
where $V_{chem}$ is a positive constant, and the local concentration C(x, y) is always
evaluated at the instantaneous nose position.
In the previous section, we emphasized that locomotion is effectively independent of
orientation. We therefore assume the existence of a central pattern generator (CPG)
which is outside the chemotaxis control circuit (4). Thus, in addition to synaptic
input from other neurons, each motor neuron receives a sinusoidal stimulus

$$V_{D,V}^{stim}(t) = \pm V_{CPG}\sin(\omega t) \qquad (6)$$

where $V_{CPG}$ and $\omega = 2\pi/T$ are positive constants.

4 RESULTS AND DISCUSSION
Equations (1), (3) and (4), together with (5) and (6), comprise a set of n + 3
first-order nonlinear differential equations, which can be solved numerically given
initial conditions and a specification of the chemical environment. We use a fourth-order Runge-Kutta algorithm and find favorable stability and convergence. The necessary body parameters have been measured by observing actual worms (Pierce and Lockery, unpublished): v = 0.022 cm/s, T = 4.2 s and $\gamma = 0.8/(2V_{CPG})$. The chemical environment is also chosen to agree roughly with experimental values: $C(x,y) = C_0 \exp\!\big(-(x^2 + y^2)/\lambda_C\big)$, with $C_0 = 0.052\ \mu$mol/cm$^3$ and $\lambda_C = 2.3$ cm.
To optimize networks to control chemotaxis, we use a simple simulated annealing
algorithm which searches over the $(n^2 + 3)$-dimensional space of parameters $w_{ij}$, $\beta$, $V_{chem}$ and $V_{CPG}$. In the results shown here, we used n = 12, and set $\bar{V}_j = 0$. Each set of the resulting parameters represents a different nervous system for the model worm. At the beginning of each run, the worm is initialized by choosing an initial position $(x_0, y_0)$, an initial angle $\theta_0$, and by setting $V_i = 0$. Upon numerically
integrating, simulated worms move autonomously in their environment for a predetermined amount of time, typically the real-time equivalent of 10-15 minutes. We
quantify the performance, or fitness, of each worm during chemotaxis by computing
the average chemical concentration at the tip of its nose over the duration of each
run. To avoid lucky scores, the actual score for each worm is obtained by averaging
over several initial conditions.
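In code, this scoring could look like the following sketch. The field constants follow the values quoted above; the `step` callable, the number of restarts and the initial-position range are assumptions standing in for the coupled body/network update:

```python
import numpy as np

C0, lam = 0.052, 2.3   # field constants quoted above (umol/cm^3, cm)

def concentration(x, y):
    return C0 * np.exp(-(x**2 + y**2) / lam)

def fitness(step, n_starts=4, n_steps=60000):
    """Average nose concentration over a run, averaged over several
    random initial conditions to avoid lucky scores.

    step: callable advancing the coupled body/network state by one
    time step and returning the new (x, y, state).
    """
    scores = []
    for _ in range(n_starts):
        x, y = np.random.uniform(-2.0, 2.0, size=2)
        state = {"theta": np.random.uniform(0.0, 2.0 * np.pi)}
        total = 0.0
        for _ in range(n_steps):
            x, y, state = step(x, y, state)
            total += concentration(x, y)
        scores.append(total / n_steps)
    return float(np.mean(scores))
```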
In Figure 2, we show a comparison of tracks produced by (a) biological and (b)
simulated worms during chemotaxis. In each case, three worms were placed in a
dish with a radial gradient and allowed to move freely for the real-time equivalent
of 15 minutes. In (b), the three worms have the same neural parameters ($w_{ij}$, $\beta$, $V_{chem}$, $V_{CPG}$), but different initial angles $\theta_0$. In both (a) and (b), all three worms
make initial movements, then move toward the center of the dish and remain there.
In other optimizations, rather than orbit the center, the simulated worms may
approach the center asymptotically from one side, make simple geometric patterns
which pass through the center, or exhibit a variety of other distinct strategies for
chemotaxis. This is similar to the situation with biological worms, which also have
considerable variation in the details of their tracks.
The behavior shown in Figure 2 was produced using $\tau$ = 500 ms. However, preliminary electrophysiological recordings from C. elegans neurons suggest that the actual
value may be as much as an order of magnitude smaller, but not bigger (Lockery and
Goodman, unpublished). This presents a potential problem for chemotaxis computation, since shorter time constants require greater sensitivity to small changes in C(x, y) in order to compute a temporal derivative, which is believed to be required. During optimization, we have seen that for a fixed number of neurons n,
finding optimal solutions becomes more difficult as $\tau$ is decreased. This observation
is very difficult to quantify, however, due to the existence of local maxima in the
fitness function. Nevertheless, this suggests that additional mechanisms may need
to be included to understand neural computation in C. elegans. First, time- and
voltage-dependent conductances will modify the effective membrane time constant,
and may increase the effective time scale for computation by individual neurons.
Second, more neurons and synaptic delays will also move the effective neuronal time
scale closer to that of the behavior. Either of these will allow comparisons of C(x, y) across greater distances, thereby requiring less sensitivity to compute the gradient,
and potentially improving the ability of these networks to control chemotaxis.
Figure 2: Nematodes performing chemotaxis: (a) biological (Pierce and Lockery, unpublished), and (b) simulated. Scale bar: 2 cm.
We also note, based on a variety of other results not shown here, that the head-sweep strategy, described in the introduction, is by no means the only strategy for chemotaxis in this system. In particular, we have optimized networks without a CPG, i.e., with $V_{CPG} = 0$ in (6), and found parameter sets that successfully control chemotaxis. This presents the possibility that even worms with a CPG do not
necessarily compute the gradient based on lateral movement of the head, but may
instead respond only to changes in concentration along their mean trajectory. Similar results have been reported previously, although based on a somewhat different
biomechanical model (Beer and Gallagher, 1992).
Finally, we have also optimized discrete-time networks, obtained by setting $\tau = 0$ in (4) and updating all units synchronously. As is well known, on relatively short time scales ($\sim T$) such a system tends to "overshoot" at each successive time step,
leading to sporadic behavior of the network and the body. Knowing this, it is
interesting that simulated worms with such a nervous system are capable of reliable
behavior over longer time scales, i.e., they successfully perform chemotaxis.
5 CONCLUSIONS AND FUTURE WORK
The main result of this paper is that a small nervous system, based on graded-potential neurons, is capable of controlling chemotaxis in a worm-like physical body
with the dimensions of C. elegans. The model presented is based closely on the body
mechanics, behavioral analyses, neuroanatomy and neurophysiology of C. elegans,
and is a reliable starting point for more realistic models to follow. Furthermore, we
have established the existence of chemotaxis strategies that had not been anticipated
based on behavioral experiments with real worms.
Future work will involve both improvement of the model and analysis of the resulting solutions. Improvements will include introducing voltage- and time-dependent
membrane conductances, as this data becomes available, and more realistic models
of synaptic transmission. Also, laser ablation experiments have been performed that
suggest which interneurons and motor neurons in C. elegans may be important for
chemotaxis (Bargmann, unpublished), and these data can be used to constrain the
synaptic connections during optimization. Analyses will be aimed at determining
the role of individual physiological and anatomical features, and how they function together to govern the collective properties of the network as a whole during
chemotaxis.
Acknowledgements
The authors would like to thank Miriam Goodman and Jon Pierce for helpful discussions. This work has been supported by NIMH MH11373, NIMH MH51383,
NSF IBN 9458102, ONR N00014-94-1-0642, the Sloan Foundation, and The Searle
Scholars Program.
References
Beer, R. D. and J. C. Gallagher (1992). Evolving dynamical neural networks for
adaptive behavior, Adaptive Behavior 1(1):91-122.
Davis, R. E. and A. O. W. Stretton (1989). Signaling properties of Ascaris motorneurons: Graded active responses, graded synaptic transmission, and tonic transmitter release, J. Neurosci. 9:415-425.
Lockery, S. R. (1995). Signal propagation in the nerve ring of C. elegans, Soc. Neurosci. Abstr. 569.1:1454.
Niebur, E. and P. Erdos (1991). Theory of the locomotion of nematodes: Dynamics
of undulatory progression on a surface, Biophys. J. 60:1132-1146.
Niebur, E. and P. Erdos (1993). Theory of the locomotion of nematodes: Control
of the somatic motor neurons by interneurons, Math. Biosci. 118:51-82.
Toida, N., H. Kuriyama, N. Tashiro and Y. Ito (1975). Obliquely striated muscle,
Physiol. Rev. 55:700-756.
Ward, S. (1973). Chemotaxis by the nematode Caenorhabditis elegans: Identification of attractants and analysis of the response by use of mutants,
Proc. Nat. Acad. Sci. USA 70:817-821.
White, J. G., E. Southgate, J. N. Thompson and S. Brenner (1986). The structure
of the nervous system of C. elegans, Phil. Trans. R. Soc. London 314:1-340.
USING NEURAL NETWORKS TO IMPROVE
COCHLEAR IMPLANT SPEECH PERCEPTION
Manoel F. Tenorio
School of Electrical Engineering
Purdue University
West Lafayette, IN 47907
ABSTRACT
-
An increasing number of profoundly deaf patients suffering from sensorineural deafness are using cochlear implants as prostheses. Mter the
implant, sound can be detected through the electrical stimulation of the
remaining peripheral auditory nervous system. Although great progress has
been achieved in this area, no useful speech recognition has been attained
with either single or multiple channel cochlear implants.
Coding evidence suggests that it is necessary for any implant which
would effectively couple with the natural speech perception system to simulate the temporal dispersion and other phenomena found in the natural
receptors, and currently not implemented in any cochlear implants. To this
end, it is presented here a computational model using artificial neural networks (ANN) to incorporate the natural phenomena in the artificial
cochlear.
The ANN model presents a series of advantages to the implementation
of such systems. First, the hardware requirements, with constraints on
power, size, and processing speeds, can be taken into account together with
the development of the underlining software, before the actual neural structures are totally defined. Second, the ANN model, since it is an abstraction
of natural neurons, carries the necessary ingredients and is a close mapping
for implementing the necessary functions. Third, some of the processing,
like sorting and majority functions, could be implemented more efficiently,
requiring only local decisions. Fourth, the ANN model allows function
modifications through parametric modification (no software recoding), which
permits a variety of fine-tuning experiments, with the opinion of the
patients, to be conceived. Some of those will permit the user some freedom
in system modification at real-time, allowing finer and more subjective
adjustments to fit differences on the condition and operation of individual's
remaining peripheral auditory system.
1. INTRODUCTION
The study of the model of sensory receptors can be carried out either
via trying to understand how the natural receptors process incoming signals
and build a representation code, or via the construction of artificial replacements. In the second case, we are interested in to what extent those
artificial counterparts have the ability to replace the natural receptors.
Several groups are now carrying out the design of artificial sensors.
Artificial cochleas seem to have a number of different designs and a tradition
of experiments. These make them now available for widespread use as
prostheses for patients who have sensorineural deafness caused by hair cell
damage.
© American Institute of Physics 1988
Although surgery is required for such implants, their performance has
reached a level of maturity to induce patients to seek out these devices
voluntarily. Unfortunately, only partial acoustic information is obtained by
severely deaf patients with cochlear prosthesis. Useful patterns for speech
communication are not yet fully recognizable through auditory prostheses.
This problem with artificial receptors is true for both single implants, that
stimulate large sections of the cochlea with signals that cover a large portion
of the spectrum [4,5], and multichannel implants, that stimulate specific
regions of the cochlea with specific portions of the auditory spectrum [3,13].
In this paper, we tackle the problem of artificial cochlear implants
through the use of neurocomputing tools. The receptor model used here
was developed by Gerald Wasserman of the Sensory Coding Laboratory,
Department of Psychological Sciences, Purdue University [20], and the
implants were performed by Richard Miyamoto of the Department of Otolaryngology, Indiana University Medical School [11].
The idea is to introduce, with the cochlear implant, the computation
that would be performed otherwise by the natural receptors. It would therefore be possible to experimentally manipulate the properties of the implant
and measure the effect of coding variations on behavior. The model was
constrained to be portable, simple to implant, fast enough computationally
for on-line use, and built with a flexible paradigm, which would allow for
modification of the different parts of the model, without having to reconstruct it entirely. In the next section, we review parts of the receptor model,
and discuss the block diagram of the implant. Section 3 covers the limitations associated with the technique, and discusses the results obtained with
a single neuron and one feedback loop. Section 4 discusses the implementations of these models using feedforward neural networks, and the computational advantages for doing so.
2. COCHLEAR IMPLANTS AND THE NEURON MODEL
Although patients cannot reliably recognize randomly chosen words spoken to them (when implanted with either multichannel or single channel
devices), this is not to say that no information is extracted from speech. If
the vocabulary is reduced to a limited set of words, patients perform
significantly better than chance, at associating the word with a member of
the set.
For these types of experiments, single channel implants correspond to
reported performance of 14% to 20% better than chance, with 62% performance being the highest reported. For multiple channels, performances of
95% were reported. So far no one has investigated the differences in performance between the two types of implants. Since the two implants have so
many differences, it is difficult to point out the cause for the better performance in the multiple channel case.
The results of such experiments are encouraging, and point to the fact
that cochlear implants need only minor improvement to be able to mediate
ad-lib speech perception successfully. Sensory coding studies have suggested
a solution to the implant problem, by showing that the representation code
generated by the sensory system is task dependent. This evidence came
from comparison of intracellular recordings taken from a single receptor of
intact subjects.
This coding evidence suggests that the temporal dispersion (time
integration) found in natural receptors would be a necessary part of any
cochlear implant. Present cochlear implants have no dispersion at all. Figure 2 shows the block diagram for a representative cochlear implant, the
House-Urban stimulator. The acoustic signal is picked up by the microphone, which sends it to an AM oscillator. This modulation step is necessary to induce an electro-magnetic coupling between the external and internal coil. The internal coil has been surgically implanted, and it is connected
to a pair of wires implanted inside and outside the cochlea.
Just incorporating the temporal dispersion model to an existing device
would not replicate the fact that in natural receptors, temporal dispersion
appears in conjunction to other operations which are strongly non linear.
There are operations like selection of a portion of the spectrum, rectification,
compression, and time-dispersion to be considered.
In Figure 3, a modified implant is shown, which takes into consideration
some of these operations. It is depicted as a single-channel implant,
although the ultimate goal is to make it multichannel. Details of the operation of this device can be found elsewhere [21]. Here, it is important to mention that the implant would also have a compression/rectification function,
and it would receive a feedback from the integrator stage in order to control
its gain.
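A rough software sketch of that processing chain on a sampled signal is given below. The compression exponent, the feedback law and the default dispersion value are illustrative assumptions; finding the clinically useful settings is exactly what the trials described next are meant to do:

```python
import numpy as np

def receptor_model(signal, fs, disp_ms=0.3, p=0.5, g0=1.0, k_fb=0.5):
    """Rectify, compress and time-disperse a signal, with feedback from
    the integrator stage controlling the front-end gain (cf. Fig. 3)."""
    alpha = np.exp(-1.0 / (fs * disp_ms * 1e-3))   # leaky-integrator coefficient
    out = np.empty_like(np.asarray(signal, dtype=float))
    acc = 0.0
    for i, s in enumerate(signal):
        gain = g0 / (1.0 + k_fb * acc)             # integrator feedback on gain
        r = max(gain * s, 0.0) ** p                # rectification + compression
        acc = alpha * acc + (1.0 - alpha) * r      # temporal dispersion
        out[i] = acc
    return out
```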
3. CHARACTERISTICS AND RESULTS OF THE IMPLANTS
The above model has been implemented as an off-line process, and then
the patients were exposed to a preprocessed signal which emulated the
operation of the device. It is not easy to define the amount of feedback
needed in the system or the amount of time dispersion. It could also be that
these parameters are variable across different conditions. Another variance
in the experiment is the amount of damage (and type) among different individuals. So, these parameters have to be determined clinically.
The coupling between the artificial receptor and the natural system also
presents problems. If a physical connection is used, it increases the risk of
infections. When inductive methods are used, the coupling is never ideal. If
portability and limited power is of concern in the implementation, then the
limited energy available for coupling has to be used very effectively.
The computation of the receptor model has to be made in a way to
allow for fast implementation. The signal transformation is to be computed
on-line. Also, the results from clinical studies should be able to be incorporated fairly easily without having to reengineer the implant.
Now we present the results of the implementation of the transfer function of Figure 4. Patients, drawn from a population described elsewhere
[11,12,14], were given spoken sentences processed off-line, and simultaneously
presented with a couple of words related to the context. Only one of them
was the correct answer. The patient had two buttons, one for each alternative; he/she was to press the button which corresponded to the correct alternative. The results are shown in the tables below.
Patient 1 (Average of the population)

  Dispersion                            No disp.   0.1 msec   0.3 msec   1 msec   3 msec
  Percentage of correct alternatives    67%        78%        85%        76%      72%

  (Best performance: 85%, at 0.3 msec dispersion.)

Table I: Phoneme discrimination in a two-alternative task.
Patient 2

  Dispersion                            No disp.   1.0 msec
  Percentage of correct alternatives    50%        76%

  (Best performance: 76%, at 1.0 msec dispersion.)

Table II: Sentence comprehension in a two-alternative task.
There were quite a lot of variations in the performance of the different
patients, some being able to perform better at different dispersion and
compression amounts than the average of the population. Since one cannot
control the amount of damage in the system of each patient or differences in
individuals, it is hard to predict the ideal values for a given patient.
Nevertheless, the improvements observed are of undeniable value in improving speech perception.
4. THE NEUROCOMPUTING MODEL
In studying the implementation of such a system for on-line use, yet
flexible enough to produce a carry-on device, we look at feedforward neurocomputer models as a possible answer. First, we wanted a model that easily
produced a parallel implementation, so that the model could be expanded in
a multichannel environment without compromising the speed of the system.
Figure 5 shows the initial idea for the implementation of the device as a Single Instruction Multiple Data (SIMD) architecture.
The implant would be similar to the one described in Figure 4, except
that the transfer function of the receptor would be performed by a two-layer feedforward network (Figure 6). Since there is no way of finding out the values of compression and dispersion apart from clinical trials, or even if
these values do change in certain conditions, we need to create a structure
that is flexible enough to modify the program structure by simple manipulation of parameters. This is also the same problem we would face when trying to expand the system to a multichannel implant. Again, neuromorphic
models provided a nice paradigm in which the dataflow and the function of
the program could be altered by simple parameter (weight) change.
For this first implementation we chose to use the no-contact inductive
coupling method. The drawback of this method is that all the information
has to be compressed in a single channel for reliable transmission and cross
talk elimination.
Since the inductive coupling of the implant is critical at every cycle, the
most relevant information must be picked out of the processed signal. This
information is then given all the available energy, and after all the coupling
loss, it should be sufficient to provide for speech pattern discrimination. In a
multichannel setting, this corresponds to doing a sorting of all the n signals
in the channels, selecting the m highest signals, and adding them up for
modulation. In a naive single-processor implementation, this could correspond to $n^2$ comparisons, and in a multiprocessor implementation,
log(n) comparisons. Both are dependent on the number of signals to be
sorted.
We needed a scheme in which the sorting time would be constant with
the number of channels, and would be easily implementable in analog circuitry, in case this became a future route. Our scheme is shown in Figure 7.
Each channel is connected to a threshold element, whose threshold can be
varied externally. A monotonically decreasing function scans the threshold
values, from the highest possible value of the output to the lowest. The outputs of these elements go high for the highest channel values first. These outputs are summed by a quasi-integrator with threshold set to m. This element, when high, disables the scanning function, which corresponds to having found the m highest signals. This sorting time is
independent of the number of channels.
The outputs of the threshold units are fed into sigma-pi units which gate the signals to be modulated. The outputs of these units are summed
and correspond to the final processed signal (Figure 8).
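In software terms, the selection and gating stages behave like the sketch below (the explicit scan loop is an artifact of the discrete emulation; in the analog circuit the scan takes a fixed time regardless of the number of channels):

```python
def select_and_sum(channels, m, v_max, v_min, step=0.01):
    """Emulate Figs. 7-8: a common threshold scans down from v_max;
    channels latch high as it passes their values, and the quasi-integrator
    stops the scan once m channels have latched. Sigma-pi units then gate
    and sum the selected channels into the output signal."""
    latched = [False] * len(channels)
    thr = v_max
    while thr >= v_min and sum(latched) < m:
        for i, v in enumerate(channels):
            if v >= thr:
                latched[i] = True
        thr -= step
    return sum(v for v, keep in zip(channels, latched) if keep)

# Example: keep the 2 strongest of 4 channels
print(select_and_sum([0.2, 0.9, 0.5, 0.7], m=2, v_max=1.0, v_min=0.0))
```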
The user has full control of the characteristics of this device. The
number of channels can be easily altered; the number of components allowed
in the modulation can be changed; the amount of gain, rectification-compression, and dispersion of each channel can also be individually controlled. The entire system is easily implementable in analog integrated circuits, once the clinical tests have determined the optimum operational
characteristics.
5. CONCLUSION
We have shown that the study of sensory implants can enhance our
understanding of the representation schemes used for natural sensory receptors. In particular, implants can be enhanced significantly if the effects of
the sensory processing and transfer functions are incorporated in the model.
We have also shown that the neuromorphic computing paradigm provides a
parallel and easily modifiable framework for signal processing structures,
with advantages that perhaps cannot be offered by other technology.
We will soon start the use of the first on-line portable model, using a
single processor. This model will provide a testbed for more extensive clinical trials of the implant. We will then move to the parallel implementation,
and from there, possibly move toward analog circuitry implementation.
Another route for the use of neuromorphic computing in this domain is
possibly the use of sensory recordings from healthy animals to train self-organizing adaptive learning networks, in order to design the implant
transfer functions.
REFERENCES
[1] Bilger, R.C.; Black, F.O.; Hopkinson, N.T.; and Myers, E.N., "Implanted auditory prosthesis: An evaluation of subjects presently fitted with cochlear implants," Otolaryngology, 1977, Vol. 84, pp. 677-682.
[2] Bilger, R.C.; Black, F.O.; Hopkinson, N.T.; Myers, E.N.; Payne, J.L.; Stenson, N.R.; Vega, A.; and Wolf, R.V., "Evaluation of subjects presently fitted with implanted auditory prostheses," Annals of Otology, Rhinology, and Laryngology, 1977, Vol. 86 (Supp. 38), pp. 1-176.
[3] Eddington, D.K.; Dobelle, W.H.; Brackmann, D.E.; Mladejovsky, M.G.; and Parkin, J., "Place and periodicity pitch by stimulation of multiple scala tympani electrodes in deaf volunteers," American Society for Artificial Internal Organs, Transactions, 1978, Vol. 24, pp. 1-5.
[4] House, W.F.; Berliner, K.; Crary, W.; Graham, M.; Luckey, R.; Norton, N.; Selters, W.; Tobin, H.; Urban, J.; and Wexler, M., "Cochlear implants," Annals of Otology, Rhinology and Laryngology, 1976, Vol. 85 (Supp. 27), pp. 1-93.
[5] House, W.F. and Urban, J., "Long term results of electrode implantation and electronic stimulation of the cochlea in man," Annals of Otology, Rhinology and Laryngology, 1973, Vol. 82, No. 2, pp. 504-517.
[6] Ifukube, T. and White, R.L., "A speech processor with lateral inhibition for an eight channel cochlear implant and its evaluation," IEEE Trans. on Biomedical Engineering, November 1987, Vol. BME-34, No. 11.
[7] Kong, K.-L., and Wasserman, G.S., "Changing response measures alters temporal summation in the receptor and spike potentials of the Limulus lateral eye," Sensory Processes, 1978, Vol. 2, pp. 21-31. (a)
[8] Kong, K.-L., and Wasserman, G.S., "Temporal summation in the receptor potential of the Limulus lateral eye: Comparison between retinula and eccentric cells," Sensory Processes, 1978, Vol. 2, pp. 9-20. (b)
[9] Michelson, R.P., "The results of electrical stimulation of the cochlea in human sensory deafness," Annals of Otology, Rhinology and Laryngology, 1971, Vol. 80, pp. 914-919.
[10] Mladejovsky, M.G.; Eddington, D.K.; Dobelle, W.H.; and Brackmann, D.E., "Artificial hearing for the deaf by cochlear stimulation: Pitch modulation and some parametric thresholds," American Society for Artificial Internal Organs, Transactions, 1974, Vol. 21, pp. 1-7.
[11] Miyamoto, R.T.; Gossett, S.K.; Groom, G.L.; Kienle, M.L.; Pope, M.L.; and Shallop, J.K., "Cochlear implants: An auditory prosthesis for the deaf," Journal of the Indiana State Medical Association, 1982, Vol. 75, pp. 174-177.
[12] Miyamoto, R.T.; Myres, W.A.; Pope, M.L.; and Carotta, C.A., "Cochlear implants for deaf children," Laryngoscope, 1986, Vol. 96, pp. 990-996.
[13] Pialoux, P.; Chouard, C.H.; Meyer, B.; and Fugain, C., "Indications and results of the multichannel cochlear implant," Acta Otolaryngology, 1979, Vol. 87, pp. 185-189.
[14] Robbins, A.M.; Osberger, M.J.; Miyamoto, R.T.; Kienle, M.J.; and Myres, W.A., "Speech-tracking performance in single-channel cochlear implant subjects," Journal of Speech and Hearing Research, 1985, Vol. 28, pp. 565-578.
[15] Russell, I.J. and Sellick, P.M., "The tuning properties of cochlear hair cells," in E.F. Evans and J.P. Wilson (eds.), Psychophysics and Physiology of Hearing, London: Academic Press, 1977.
[16] Wasserman, G.S., "Limulus psychophysics: Temporal summation in the ventral eye," Journal of Experimental Psychology: General, 1978, Vol. 107, pp. 276-286.
[17] Wasserman, G.S., "Limulus psychophysics: Increment threshold," Perception & Psychophysics, 1981, Vol. 29, pp. 251-260.
[18] Wasserman, G.S.; Felsten, G.; and Easland, G.S., "Receptor saturation and the psychophysical function," Investigative Ophthalmology and Visual Science, 1978, Vol. 17, p. 155 (Abstract).
[19] Wasserman, G.S.; Felsten, G.; and Easland, G.S., "The psychophysical function: Harmonizing Fechner and Stevens," Science, 1979, Vol. 204, pp. 85-87.
[20] Wasserman, G.S., "Cochlear implant codes and speech perception in the profoundly deaf," Bulletin of the Psychonomic Society, 1987, Vol. 18(3).
[21] Wasserman, G.S.; Wang-Bennett, L.T.; and Miyamoto, R.T., "Temporal dispersion in natural receptors and pattern discrimination mediated by artificial receptors," Proc. of the Fechner Centennial Symposium, Hans Buffart (Ed.), Elsevier/North Holland, Amsterdam, 1987.
[Figure: signal path from stimulus through receptor signal and central analysis to behavior, with a prosthetic signal substituting for the natural receptor signal.]
Fig. 1. Path of Natural and Prosthetic Signals.
[Figure: sound input, implant electronics, and the central nervous system.]
Fig. 2. The House-Urban Cochlear Implant.
[Figure: amplification, followed by a compressive rectifier, dispersion, and an integrator.]
Fig. 3. Receptor Model.

[Figure: sound input processed by the receptor model before reaching the central nervous system.]
Fig. 4. Modified Implant Model.
[Figure: portable parallel neurocomputer producing a 16 kHz AM modulated output, with externally user-controlled parameters.]
Fig. 5. Initial Concept for a SIMD Architecture.
[Figure: externally controlled amplification and dispersion feeding neuron models, whose outputs pass through a sorter of N signals.]
Fig. 6. Feedforward Neuron Model Implant.
[Figure: sorter of n signals in O(1); the input signals feed threshold elements whose common threshold is swept by a scanning function from I_max down to I_min, with a reset line stopping the scan.]
Fig. 7. Signal Sorting Circuit.
[Figure: signal selectors gating inputs I_1 ... I_n into a summed output signal.]
Fig. 8. Sigma-Pi Units for Signal Composition.
[Figure: user-controlled parameters (best matches, dispersion, gain, filter bypass, processor bypass) applied to single-neuron processing of the microphone signal.]
Fig. 9. Parameter Controls for Clinical Studies.
A MODEL OF NEURAL OSCILLATOR FOR A UNIFIED SUBMODULE
A.B.Kirillov, G.N.Borisyuk, R.M.Borisyuk,
Ye.I.Kovalenko, V.I.Makarenko,V.A.Chulaevsky,
V.I.Kryukov
Research Computer Center
USSR Academy of Sciences
Pushchino, Moscow Region
142292 USSR
ABSTRACT
A new model of a controlled neuron oscillator, proposed earlier {Kryukov et al, 1986} for the
interpretation of the neural activity in various
parts of the central nervous system, may have
important applications in engineering and in the
theory of brain functions. The oscillator has a
good stability of the oscillation period, its
frequency is regulated linearly in a wide range
and it can exhibit arbitrarily long oscillation
periods without changing the time constants of
its elements. The latter is achieved by using
the critical slowdown in the dynamics arising in
a network of nonformal excitatory neurons
{Kovalenko et al, 1984, Kryukov, 1984}. By
changing the parameters of the oscillator one
can obtain various functional modes which are
necessary to develop a model of higher brain
function.
THE OSCILLATOR
Our oscillator comprises several hundred modelled excitatory neurons (located at the sites of a plane lattice) and one inhibitory neuron. The latter receives output signals from all the excitatory neurons, and its own output is transmitted via feedback to every excitatory neuron (Fig. 1). Each excitatory neuron is connected bilaterally with its four nearest neighbours.
Each neuron has a threshold r(t) decaying exponentially to a value $r^e_\infty$ or $r^i_\infty$ (for an excitatory or inhibitory neuron). Gaussian noise with zero mean and standard deviation $\sigma$ is added to the threshold. The membrane potential of a neuron is the sum of input impulses, decaying exponentially when there is no input. If the membrane potential exceeds the threshold, the neuron fires and sends impulses to the neighbouring neurons. An impulse from an excitatory neuron to an excitatory one increases the membrane potential of the latter by $a_{ee}$, from an excitatory neuron to the inhibitory one by $a_{ei}$, and from the inhibitory neuron to an excitatory one decreases the membrane potential by $a_{ie}$. We consider a discrete time model, the time step being equal to the absolute refractory period.
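A compact sketch of one such time step for the excitatory lattice is given below (array shapes follow the 20x20 network described later; the post-spike threshold value and the exact decay constants are assumptions, since the paper does not list them):

```python
import numpy as np

def neighbour_input(fired):
    """Summed firing of the four nearest neighbours on the lattice."""
    f = fired.astype(float)
    s = np.zeros_like(f)
    s[1:, :] += f[:-1, :]; s[:-1, :] += f[1:, :]
    s[:, 1:] += f[:, :-1]; s[:, :-1] += f[:, 1:]
    return s

def lattice_step(u, r, inh_fired, p):
    """One discrete-time step: noisy thresholds, exponential decays,
    nearest-neighbour excitation and global inhibitory feedback.
    u, r: membrane potentials and thresholds, shape (20, 20)."""
    fired = u > r + np.random.normal(0.0, p["sigma"], size=r.shape)
    u = p["mu_u"] * u + p["a_ee"] * neighbour_input(fired)
    if inh_fired:
        u -= p["a_ie"]                                 # feedback impulse
    r = p["r_inf"] + p["mu_r"] * (r - p["r_inf"])      # decay toward r_inf
    r[fired] = p["r_spike"]                            # post-spike reset (assumed)
    return u, r, fired
```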
We associate a variable $x_i(t)$ with each excitatory neuron. If the i-th neuron fires at step t, we take $x_i(t) = 1$; if it does not, then $x_i(t) = 0$. The mean $E(t) = (1/N)\sum_i x_i(t)$ will be referred to as the network activity, where N is the number of excitatory neurons.
Figure 1. A - neuron, B - scheme of interconnections.
Let us consider a situation when inhibitory feedback is cut
off. Then such a model exhibits a critical slowdown of the
dynamics {Kovalenko et al, 1984, Kryukov, 1984}. Namely, if
the interconnections and parameters of neurons are chosen
appropriately, an initial pattern of activated neurons has an unusually long lifetime as compared with the time of membrane potential decay. In this mode E(t) is slowly increasing and
causes the inhibitory neuron to fire.
Now if we turn on the negative feedback, an output impulse from the inhibitory neuron sharply decreases the membrane potentials of excitatory neurons. As a consequence, E(t) falls down and the process starts from the beginning.
We studied this oscillator by means of a simulation model.
There are 400 excitatory neurons (20*20 lattice) and one
inhibitory neuron in our model.
THE MAIN PROPERTIES OF THE OSCILLATOR
a. When the thresholds of excitatory neurons are high enough, the inhibitory neuron does not fire and there are no oscillations.
[Figure: raster of excitatory neuron spike trains together with the network activity trace.]
Figure 2. Oscillatory mode. A - network activity, B - neuron spike trains.
b. For smaller values of $r^e_\infty$ the network activity E(t) changes periodically and excitatory neurons generate bursts of spikes (Fig. 2). The inhibitory neuron generates regular periodical spike trains.
c. If the parameters are chosen appropriately, the mean oscillation period is much greater than the mean interspike interval of a network neuron. The frequency of oscillations is regulated by $r^e_\infty$ (Fig. 3A) or, which is the same, by the intensity of the input flow. The minimum period is determined by the decay rate of the inhibitory input, the maximum by the lifetime of the metastable state.
[Figure 3. A - oscillation frequency 1/T vs. threshold r∞^e; B - coefficient of variation of the period K vs. period]
d. The coefficient of variation of the period is of the
order of several percent, but it increases at low
frequencies (Fig. 3B). The stability of oscillations can be
increased by introducing some inhomogeneity in the network,
for example, when a part of the excitatory neurons receives
no inhibitory signals.
OSCILLATOR UNDER IMPULSE STIMULATION
In this section we consider first the neural network without
the inhibitory neuron. But we imitate a periodic input to
the network by slowly varying the thresholds r(t) of the
excitatory neurons. Namely, we add to r(t) a value
Δr = A·sin(ωt) and fire a part of the network at some phase of
the sine wave. Then we look at the time needed for the
network to restore its background activity. There are
specific values of the phase for which this time is rather big
(Fig. 4A). Now consider the full oscillator with an
oscillation period T (in this section T = 35 ± 2.5 time steps).
We stimulate the oscillator by a periodic (with period
t_st < 35) sharp increase of the membrane potential of each
excitatory neuron by a value S_st. As the stimulation
proceeds, the oscillation period gradually decreases from
T ≈ 35 to some value T_st, remaining then equal to T_st. The
value of T_st depends on the stimulation intensity S_st: as S_st
gets greater, T_st tends to the stimulation period t_st.
[Figure 4. A - threshold modulation; B - duration of the network response vs. phase of threshold modulation; C - critical stimulation intensity vs. stimulation period]
For every stimulation period t_st there is a characteristic
value S_0 of the stimulation intensity S_st such that with
S_st > S_0 the value of T_st is equal to the stimulation period
t_st. The dependence between S_0 and t_st is close to a linear
one (Fig. 4C). The usual relaxation oscillator also exhibits
a linear dependence between S_0 and t_st. At the same time, we
did not find in our oscillator any resonance phenomena
essential to a linear oscillator.
THE NETWORK WITH INTERNAL NOISE
In a further development of the neural oscillator we tried
to build a model that will be more adequate to the
biological counterpart. To this end, we changed the
structure of interconnections and tried to define more
correctly the noise component of the input signal coming to
an excitatory neuron. In the model described above we
imitated the sum of inputs from distant neurons by
independent Gaussian noise. Here we used real noise produced
by the network.
In order to simulate this internal noise, we randomly choose
16 distant neighbours for every excitatory neuron. Then we
assume that the network elements are adjusted to work in a
certain noise environment. This means that a 'mean' internal
noise would provide conditions for the neuron to be most
sensitive to the information coming from its nearest
neighbours.
So, for every neuron i we calculate the sum k_i = Σ_j x_j(t), where
the summation is over all distant neighbours of this neuron,
and compare it with the mean internal noise k̄ = (1/N) Σ_i k_i. The
internal noise for the neuron i now is n_i = C(k_i − k̄), where C > 0
is a constant.
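A short sketch of this internal-noise computation (the value of C and the way the 16 distant neighbours are drawn are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 400
x = rng.integers(0, 2, N)                    # spikes x_j(t) at the current step
distant = rng.integers(0, N, size=(N, 16))   # 16 fixed random distant neighbours

k = x[distant].sum(axis=1)      # k_i: summed activity of neuron i's distant neighbours
k_bar = k.mean()                # mean internal noise, (1/N) * sum_i k_i
C = 0.05                        # C > 0, scaled so the noise is a few percent
n = C * (k - k_bar)             # internal noise n_i = C * (k_i - k_bar)
```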
We choose model parameters in such a way that the noise
component is of the order of several percent of the membrane
potential. Nevertheless, the network exhibits in this case a
dramatic increase of the lifetime of the initial pattern of
activated neurons, as compared with the network with
independent Gaussian noise. A range of parameters, for which
this slowdown of the dynamics is observed, is also
considerably increased. Hence, longer periods and better
period stability could be obtained for our generator if we
use internal noise.
THE CHAIN OF THREE SUBMODULES: A MODEL OF COLUMN OSCILLATOR
Now we consider a small system consisting of three
oscillator submodules, A, B and C, connected consecutively
so that submodule A can transmit excitation to submodule B,
B to C, and C to A. The excitation can only be transmitted
when the total activity of the submodule reaches its
threshold level, i.e. when the corresponding inhibitory
neuron fires. After the inhibitory neuron has fired, the
activity of its submodule is set to be small enough for the
submodule not to be active with large probability until the
excitation from another submodule comes. Therefore, we
expect A, B and C to work consecutively. In fact, in our
simulation experiments we observed such behavior of the
closed chain of three basic submodules.
[Figure 5. Chain of three submodules. Period of oscillations (A) and its standard deviation (B) vs. noise amplitude]
The activity of the
whole system is nearly periodic. Figure 5A displays the
period T vs. the noise amplitude a. The scale of a is chosen
so that 0.5 corresponds approximately to the resting
potential. An interesting feature of the chain is that the
standard deviation S(T) of the period (Fig. 5B) is small
enough, even for an oscillator of relatively small size.
The upper lines in Fig. 5 correspond to a square 10×10
network, the middle to a 9×9 one, and the lower to an 8×8 one. One can see
that the loss of 36 percent of the elements only causes a
reduction of the working range without the loss of
stability.
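The hand-off logic of the three-submodule ring can be sketched as a toy state machine (the threshold value and the reduction of each submodule to a single counter are, of course, drastic simplifications):

```python
def chain_demo(steps=30, threshold=3):
    """A -> B -> C -> A: each submodule charges until its inhibitory
    neuron 'fires', then resets and hands excitation to the next one."""
    active, charge, log = 0, [0, 0, 0], []
    for _ in range(steps):
        charge[active] += 1
        if charge[active] >= threshold:       # inhibitory neuron fires
            log.append("ABC"[active])
            charge[active] = 0                # activity reset to a small value
            active = (active + 1) % 3         # excitation passed along the ring
    return "".join(log)

print(chain_demo())   # -> 'ABCABCABCA': the submodules work consecutively
```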
CONCLUSION
Though we have not considered all the interesting modes of
the oscillator, we believe that, owing to the phenomenon of
metastability, the same oscillator exhibits different
behaviour under slightly different threshold parameters and
the same and/or different inputs.
Let us enumerate the most interesting functional
possibilities of the oscillator, which can be easily
obtained from our results.
1. Pacemaker with the frequency regulated in a wide range and
with a high period stability, as compared with the neuron
(Fig. 3B).
2. Integrator (input = threshold, output = phase) with a wide
range of linear regulation (see Fig. 3A).
3. Generator of damped oscillations (for discontinuous input).
4. Delay device controlled by an external signal.
5. Phase comparator (see Fig. 4A).
We have already used these functions for the interpretation
of electrical activity of several functionally different
neural structures {Kryukov et al, 1986}. The other functions
will be used in a system model of attention {Kryukov, 1989}
presented in this volume. All these considerations justify
the name of our neural oscillator - a unified submodule for
a 'resonance' neurocomputer.
References
E. I. Kovalenko, G. N. Borisyuk, R. M. Borisyuk, A. B. Kirillov, V. I. Kryukov. Short-term memory as a metastable state. II. Simulation model, Cybernetics and Systems Research 2, R. Trappl (ed.), Elsevier, pp. 266-270 (1984)
V. I. Kryukov. Short-term memory as a metastable state. I. Master equation approach, Cybernetics and Systems Research 2, R. Trappl (ed.), Elsevier, pp. 261-265 (1984)
V. I. Kryukov. "Neurolocator", a model of attention (1989) (in this volume).
V. I. Kryukov, G. N. Borisyuk, R. M. Borisyuk, A. B. Kirillov, Ye. I. Kovalenko. The Metastable and Unstable States in the Brain (in Russian), Pushchino, Acad. Sci. USSR (1986) (to appear in Stochastic Cellular Systems: Ergodicity, Memory, Morphogenesis, Manchester University Press, 1989).
221 | 1,200 | Visual Cortex Circuitry and Orientation Tuning
Trevor Mundel
Department of Neurology
University of Chicago
Chicago, IL 60637
mundel@math.uchicago.edu
Alexander Dimitrov
Department of Mathematics
University of Chicago
Chicago, IL 60637
a-dimitrov@uchicago.edu
Jack D. Cowan
Departments of Mathematics and Neurology
University of Chicago
Chicago, IL 60637
cowan@math.uchicago.edu
Abstract
A simple mathematical model for the large-scale circuitry of primary visual cortex is introduced. It is shown that a basic cortical architecture of recurrent local excitation and lateral inhibition can account quantitatively for such properties as orientation tuning. The model can also account for such local effects as cross-orientation suppression. It is also shown that nonlocal state-dependent coupling between similar orientation patches,
when added to the model, can satisfactorily reproduce such effects as non-local iso-orientation suppression and non-local cross-orientation enhancement. Following this, an account is given of perceptual phenomena involving object segmentation, such as "popout", and the direct and indirect tilt illusions.
1
INTRODUCTION
The edge detection mechanism in the primate visual cortex (V1) involves at least
two fairly well characterized circuits. There is a local circuit operating at sub-hypercolumn dimensions comprising strong orientation-specific recurrent excitation
and weakly orientation-specific inhibition. The other circuit operates between hypercolumns, connecting cells with similar orientation preferences separated by several millimetres of cortical tissue. The horizontal connections which mediate this
circuit have been extensively studied. These connections are ideally structured to
provide local cortical processes with information about the global nature of stimuli. Thus they have been invoked to explain a wide variety of context dependent
visual processing. A good example of this is the tilt illusion (TI), where surround
stimulation causes a misperception of the angle of tilt of a grating.
The interaction between such local and long-range circuits has also been investigated. Typically these experiments involve the separate stimulation of a cell's
receptive field (the classical receptive field or "center") and the immediate region
outside the receptive field (the non-classical receptive field or "surround"). In the
first part of this work we present a simple model of cortical center-surround interaction. Despite the simplicity of the model we are able to quantitatively reproduce
many experimental findings. We then apply the model to the TI. We are able to
reproduce the principle features of both the direct and indirect TI with the model.
2
PRINCIPLES OF CORTICAL OPERATION
Recent work with voltage-sensitive dyes (Blasdel, 1992) augments the early work of
Hubel & Wiesel (1962) which indicated that clusters of cortical neurons corresponding to cortical columns have similar orientation preferences. An examination of local
field potentials (Victor et al., 1994) which represent potentials averaged over cortical volumes containing many hundreds of cells show orientation preferences. These
considerations suggest that the appropriate units for an analysis of orientation selectivity are the localized clusters of neurons preferring the same orientation. This
choice of a population model immediately simplifies both analysis and computation
with the model. For brevity we will refer to elements or edge detectors, however
these are to be understood as referring to localized populations of neurons with a
common orientation preference. We view the cortex as a lattice of hypercolumns,
in which each hypercolumn comprises a continuum of iso-orientation patches distinguished by their preferred orientation φ. All space coordinates refer to distances
between hypercolumn centers. The population model we adopt throughout this
work is a simplified form of the Wilson-Cowan equations.
2.1
LOCAL MODEL
Our local model is a ring (φ = −90° to +90°) of coupled iso-orientation patches and
inhibitors with the following characteristics (a simulation sketch follows the list):
• Weakly tuned orientation-biased inputs to V1. These may arise either from
slight orientation biases of lateral geniculate nucleus (LGN) neurons or from
converging thalamocortical afferents
• Sharply tuned (space constant ≈7.5°) recurrent excitation between iso-orientation populations
• Broadly tuned inhibition to all iso-orientation populations with a cut-off
of inhibition interactions at between 45° and 60° separation
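A minimal numerical sketch of such a ring, assuming a Gaussian excitatory kernel with the ≈7.5° space constant, a boxcar inhibitory kernel cut off at 60°, and illustrative gains and rate dynamics (the paper's reduced Wilson-Cowan equations and parameter values are not reproduced here):

```python
import numpy as np

phi = np.linspace(-90, 90, 180, endpoint=False)        # iso-orientation patches (deg)

def wrap(d):
    """Smallest difference on the orientation circle (period 180 deg)."""
    return (d + 90) % 180 - 90

d = wrap(phi[:, None] - phi[None, :])
W_exc = np.exp(-d ** 2 / (2 * 7.5 ** 2))               # sharply tuned excitation
W_inh = (np.abs(d) < 60).astype(float)                 # broad inhibition, 60 deg cut-off

g_e, g_i, dt = 0.9, 0.85, 0.1                          # illustrative gains and step
h = 1.0 + 0.1 * np.cos(2 * np.deg2rad(phi))            # weakly tuned input, biased to 0 deg
a = np.zeros_like(phi)
for _ in range(500):
    drive = h + g_e * (W_exc @ a) / W_exc.sum(1) - g_i * (W_inh @ a) / W_inh.sum(1)
    a += dt * (-a + np.maximum(drive, 0.0))            # rectified rate dynamics
# 'a' is now sharply peaked near 0 deg although the input bias is weak.
```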
The principle constraint is that of a critical balance between excitatory and inhibitory currents. Recent theoretical studies (Tsodyks & Sejnowski 1995; Vreeswijk
& Sompolinsky 1996) have focused on this condition as an explanation for certain
features of the dynamics of natural neuronal assemblies. These features include the
irregular temporal firing patterns of cortical neurons, the sensitivity of neuronal
assemblies in vivo to small fluctuations in total synaptic input and the distribution of firing rates in cortical networks which is markedly skewed towards low mean
rates. Vreeswijk & Sompolinsky demonstrate that such a balance emerges naturally
in certain large networks of excitatory and inhibitory populations. We implement
this critical balance by explicitly tuning the strength of connection weights between
excitatory and inhibitory populations so that the system state is subcritical to a
bifurcation point with respect to the relative strength of excitation/inhibition.
2.2
HORIZONTAL CONNECTIONS
We distinguish three potential patterns of horizontal connectivity:
• connections between edge detectors along an axis parallel to the detector's
preferred orientation (visuotopic connection)
• connections along an axis orthogonal to the detector's preferred orientation,
with or without visuotopic connections
• radially symmetric connections to all detectors of the same orientation in
surrounding hypercolumns
Recent experimental work in the tree shrew (Fitzpatrick et al., 1996) and preliminary work in the macaque (Blasdel, personal communication) indicate that visuotopic connection is the predominant pattern of long-range connectivity. This
connectivity pattern allows for the following reduction in dimension of the problem
for certain experimental conditions.
Consider the following experiment. A particular hypercolumn designated the "center" is stimulated with a grating at orientation φ resulting in a response from the
φ-edge detector. The region outside the receptive area of this hypercolumn (in the
"surround") is also stimulated with a grating at some uniform orientation φ′ resulting in responses from φ′-edge detectors at each hypercolumn in the surround. In
order to study the interactions between center and surround, then to first order approximation only the center hypercolumn and interaction with the surround along
the φ visuotopic axis (defined by the center) and the φ′ visuotopic axis (once again
defined by the center) need be considered. In fact, except when φ = φ′ the effect
of the center on the surround will be negligible in view of the modulatory nature
of the horizontal connections detailed above. Thus we can reduce the problem (a
priori three dimensional - one angle and two space dimensions) to two dimensions
(one angle and one space dimension) with respect to a fixed center. This reduction is the key to providing a simple analysis of complex neurophysiological and
psychophysical data.
3
RESULTS
3.1
CENTER-SURROUND INTERACTIONS
Although we have modeled the state-dependence of the horizontal connections,
many of the center-surround experiments we wish to model have not taken this
dependence explicitly into account. In general the surround has been found to
be suppressive on the center, which accords with the fact that the center is usually
stimulated with high contrast stimuli. A typical example of the surround suppressive
effect is shown in figure 1.
The basic finding is that stimulation in the surround of a visual cortical cell's receptive field generally results in a suppression of the cell's tuning response that
is maximal for surround stimulation at the orientation of the cell's peak tuning
response and falls off with stimulation at other orientations in a characteristic manner.
[Figure 1: Non-local effects on orientation tuning - experimental data. Response to constant center stimulation at 15° and surround stimulation at angles [−90°, 90°] (open circles); local tuning curve (filled circles); horizontal axes: orientation of bar / orientation of grating (deg). Redrawn from Blakemore and Tobin (1972).]
Further examples of surround suppression can be found in the paper of Sillito
et al. (1995). Figure 2 depicts simulations in which long-range connections to local inhibitory populations are strong compared to connections to local excitatory
populations.
These experiments and simulations appear to conflict with the consistent experimental finding that stimulating a hypercolumn with an orthogonal stimulus suppresses
the response to the original stimulus.
The relevant results can be summarised as follows: cross-orientation suppression
(with orthogonal gratings) originates within the receptive field of most cells examined and is a consistent finding in both complex and simple cells. The degree of
suppression depends linearly on the size of the orthogonal grating up to a critical
dimension which is smaller than the classical receptive field dimension. It is possible
to suppress a response to the baseline firing rate by either increasing the size or the
contrast of the orthogonal grating.
The model outlined earlier can account for all these observations, and similar measurements recently described by Sillito et al. (1995), in a strikingly simple fashion in the setting of single mode bifurcations. Orthogonal inputs are of the form
(a/2)[1 + cos 2s(φ − φ₀)] + (b/2)[1 + cos 2s(φ − φ₀ + 90°)], where a and b are amplitudes with a > b and φ ∈ [−90°, 90°]. By simple trigonometry this simplifies to
(a + b)/2 + ((a − b)/2) cos 2s(φ − φ₀). Thus the input of amplitude b reduces the amplitude of the orthogonal input and hence gives rise to a smaller response. This is then
the mechanism by which local double orthogonal stimulation leads to suppression.
The center-surround case is different in that the orthogonal input originates from
the horizontal connections and (in the suppressive setting) is input primarily to
the orthogonal inhibitory population. It can be shown rigorously that for small
amplitude stimuli this is equivalent to an orthogonal input to the excitatory population with opposite sign. Thus we have a total input (a + b)/2[1 + cos(2s(cP - cPo)J
where b arises from the horizontal input and hence increases the amplitude of the
fundamental component of the input.
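The identity used above is easy to verify numerically (s = 1 here; a, b and φ₀ are arbitrary):

```python
import numpy as np

s, a, b, phi0 = 1, 1.0, 0.4, 15.0
phi = np.linspace(-90, 90, 361)
theta = 2 * s * np.deg2rad(phi - phi0)
lhs = a / 2 * (1 + np.cos(theta)) + b / 2 * (1 + np.cos(theta + s * np.pi))
rhs = (a + b) / 2 + (a - b) / 2 * np.cos(theta)
assert np.allclose(lhs, rhs)   # the orthogonal input only rescales the fundamental
```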
[Figure 2: Non-local effects on orientation tuning. (o-o-o) = center response to preferred orientation at different surround orientations; (x-x-x) = center orientation tuning without surround stimulation; (-) = center response to surround stimulation alone. A - response of population with 20° orientation preference; B, C and D - response of populations with 5° orientation preference.]
More realistically, multiple modes may bifurcate simultaneously, though in the small
amplitude linear regime where orientation preference is determined these can all be
treated separately in the manner detailed above and lead to the same result.
3.2
POPOUT AND GAIN CONTROL
"Stimulus" dependent horizontal connections have recently been used to model a
possible physiological correlate for the psychophysical phenomena of "pop-out"
and enhancement (Stemmler, Usher & Niebur, 1995). In this model the horizontal
input to excitatory neurons is via sodium channels which are dependent in the
conventional manner on the differential between the membrane voltage and the
sodium equilibrium potential. For such sodium channels the synaptic currents are
attenuated as the membrane depolarizes towards the sodium equilibrium potential.
This effect is opposite to that observed for the sodium channels mediating the
horizontal input (Hirsch & Gilbert, 1991) which show increased synaptic currents as
the membrane is depolarized. Thus although this model reproduces the phenomena
described above it does not do so on the basis of the known physiology. We have
confirmed with our model that weak stimulus enhancement and strong stimulus
pop-out can be modelled with a variety of formulations for the excitatory neuron
horizontal input, including a formulation which attenuates with increasing activity
as in the model of Stemmler et al.
It is interesting to note that one overall effect of the horizontal connections is to act
as a type of gain control uniformizing the local response over a range of stimulus
strengths. This gain control function has been discussed by Somers, Nelson & Sur
(1994).
3.3
THE TILT ILLUSION
The tilt illusion (TI) is one of the basic orientation-based visual illusions. A TI
occurs when viewing a test line against an inducing grating of uniformly oriented
lines with an angle of θ between the orientation of the test line and the inducing
grating. Two components of the TI have been described (Wenderoth & Johnstone,
1988): the direct TI, where the test line appears to be repelled by the grating - the orientation differential appears increased, and the indirect TI, where the test
line appears attracted to the orientation of the grating-the orientation differential
appears decreased. Figure 3 depicts a typical plot of magnitude of the tilt effect
versus the angle differential between inducing grating and test line reproduced from
Wenderoth & Beh (1977).
[Figure 3: Direct (positive) and indirect (negative) tilt effects]
The TI thus provides compelling evidence that local
detection of edges is dependent on information from more distant points in the
visual field. It is generally believed that the direct TI is due to lateral inhibition
between cortical neurones (Wenderoth & Johnstone, 1988; Carpenter & Blakemore,
1973). It has been postulated that the indirect TI occurs at a higher level of visual
processing. We show here that both the direct and indirect TI are a consequence
of the lateral and local connections in our model.
[Figure 4: Model simulations of the tilt effect (A and C); B and D show the corresponding kernels mediating long-range interactions. Solid lines indicate the absolute kernel and dashed lines the effective kernel.]
In figure 4 we give examples of the TI obtained from the model system. The
effective kernels for long-range interactions are obtained by filtering the absolute
kernels with the local filter which has a band-pass characteristic. It is this effective
kernel which determines the tilt effect in keeping with our simulations and analysis
which show that orientation preference is determined at the small amplitude linear
stage of system development.
4
SUMMARY AND DISCUSSION
We have shown that a very simple center-surround organization, operating in the
orientation domain can successfully account for a wide range of neurophysiological
and psychophysical phenomena, all involving the effects of visual context on the
responses of assemblies of spiking neurons. We expect to be able to show that
such an organization can be seen in many parts of the cortex, and that it plays an
important role in many forms of information processing in the brain.
Acknowledgements
Supported in part by Grant # 96-24 from the James S. McDonnell Foundation.
References
C. Blakemore and E.A. Tobin, Lateral Inhibition Between Orientation Detectors in
the Cat 's Visual Cortex, Exp. Brain Res., 15, 439-440, (1972)
G.G . Blasdel, Orientation selectivity, preference, and continuity in monkey striate
cortex, J. Neurosci. 12 No 8, 3139-3161 (1992)
R.H.S. Carpenter and C. Blakemore, Interactions between orientations in human
vision, Expl. Brain. Res. , 18, 287- 303, (1973)
G.C. DeAngelis, J.G. Robson, I. Ohzawa and R.D. Freeman, Organization of Suppression in Receptive Fields of Neurons in Cat Visual Cortex, J . Neurophysiol. , 68
No 1, 144-163, (1992)
D. Fitzpatrick, The Functional Organization of Local Circuits in Visual Cortex:
Insights from the Study of Tree Shrew Striate Cortex, Cerebral Cortex 6, 329-341 ,
(1996)
J.D . Hirsch and C.D. Gilbert, Synaptic physiology of horizontal connections in the
cat's visual cortex, J. Neurosci., 11 , 1800-1809, (1991)
A.M. Sillito, K.L. Grieve, H.E. Jones, J. Cudeiro & J. Davis, Visual cortical mechanisms detecting focal orientation discontinuities, Nature, 378, 492-496, (1995)
D.C. Somers, S. Nelson and M. Sur, Effects of long-range connections on gain
control in an emergent model of visual cortical orientation selectivity, Soc. Neurosci. ,20, 646.7, (1994)
M. Stemmler, M. Usher and E. Niebur, Lateral Interactions in Primary Visual
Cortex: A Model Bridging Physiology and Psychophysics, Science, 269, 1877-1880,
(1995)
M. V. Tsodyks and T. Sejnowski, Rapid state switching in balanced cortical network
models, Network, 6 No 2, 111-124, (1995)
J.D . Victor, K. Purpura, E. Katz and B. Mao, Population encoding of spatial
frequency, orientation and color in macaque VI , J . Neurophysiol., 72 No 5, (1994)
C. Vreeswijk and H. Sompolinsky, Chaos in neuronal networks with balanced excitatory and inhibitory activity. Science 274, 1724-1726, (1996)
P. Wenderoth and H. Beh, Component analysis of orientation illusions, Perception,
6 57-75, (1977)
P. Wenderoth and S. Johnstone, The different mechanisms of the direct and indirect
tilt illusions, Vision Res., 28 No 2, 301-312, (1988)
222 | 1,201 | Combining Neural Network Regression Estimates with Regularized Linear Weights
Christopher J. Merz and Michael J. Pazzani
Dept. of Information and Computer Science
University of California, Irvine, CA 92717-3425 U.S.A.
{cmerz,pazzani }@ics.uci.edu
Category: Algorithms and Architectures.
Abstract
When combining a set of learned models to form an improved estimator, the issue of redundancy or multicollinearity in the set of
models must be addressed. A progression of existing approaches
and their limitations with respect to the redundancy is discussed.
A new approach, PCR *, based on principal components regression is proposed to address these limitations. An evaluation of the
new approach on a collection of domains reveals that: 1) PCR*
was the most robust combination method as the redundancy of the
learned models increased, 2) redundancy could be handled without
eliminating any of the learned models, and 3) the principal components of the learned models provided a continuum of "regularized"
weights from which PCR * could choose.
1
INTRODUCTION
Combining a set of learned models1 to improve classification and regression estimates has been an area of much research in machine learning
and neural networks [Wolpert, 1992, Merz, 1995, Perrone and Cooper, 1992, Breiman, 1992, Meir, 1995, Leblanc and Tibshirani, 1993, Krogh and Vedelsby, 1995, Tresp, 1995, Chan and Stolfo, 1995]. The challenge of
this problem is to decide which models to rely on for prediction and how much
weight to give each.
1A learned model may be anything from a decision/regression tree to a neural network.
The goal of combining learned models is to obtain a more accurate prediction than
can be obtained from any single source alone. One major issue in combining a set
of learned models is redundancy. Redundancy refers to the amount of agreement or
linear dependence between models when making a set of predictions. The more the
set agrees, the more redundancy is present. In statistical terms, this is referred to
as the multicollinearity problem.
The focus of this paper is to explore and evaluate the properties of existing methods for combining regression estimates (Section 2), and to motivate the need for
more advanced methods which deal with multicollinearity in the set of learned models (Section 3). In particular, a method based on principal components regression
(PCR, [Draper and Smith, 1981]) is described, and is evaluated empirically, demonstrating that it is a robust and efficient method for finding a set of combining weights
with low prediction error (Section 4). Finally, Section 5 draws some conclusions.
2
MOTIVATION
The problem of combining a set of learned models is defined using the terminology
of [Perrone and Cooper, 1992]. Suppose two sets of data are given: a training set
D_Train = (x_m, y_m) and a test set D_Test = (x_l, y_l). Now suppose D_Train is used to
build a set of functions, F = {f_i(x)}, each element of which approximates f(x). The
goal is to find the best approximation of f(x) using F.
To date, most approaches to this problem limit the space of approximations of f(x)
to linear combinations of the elements of F, i.e.,
f̂(x) = Σ_{i=1}^N α_i f_i(x),
where α_i is the coefficient or weight of f_i(x).
The focus of this paper is to evaluate and address the limitations of these approaches. To do so, a brief summary of these approaches is now provided progressing from simpler to more complex methods pointing out their limitations along the
way.
The simplest method for combining the members of F is by taking the unweighted
average (i.e., α_i = 1/N). Perrone and Cooper refer to this as the Basic Ensemble
Method (BEM), written as
f_BEM = (1/N) Σ_{i=1}^N f_i(x).
This equation can also be written in terms of the misfit function for each fi(X).
These functions describe the deviations of the elements of :F from the true solution
and are written as
m_i(x) = f(x) − f_i(x).
Thus,
f_BEM = f(x) − (1/N) Σ_{i=1}^N m_i(x).
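A quick numerical check of this identity, with synthetic predictions standing in for trained networks:

```python
import numpy as np

rng = np.random.default_rng(0)
P, N = 200, 5                                    # patterns, learned models
f_true = np.sin(np.linspace(0, 6, P))            # the target f(x)
preds = f_true + 0.3 * rng.normal(size=(N, P))   # stand-ins for the f_i(x)

f_bem = preds.mean(axis=0)                       # f_BEM = (1/N) sum_i f_i(x)
misfits = f_true - preds                         # m_i(x) = f(x) - f_i(x)
assert np.allclose(f_bem, f_true - misfits.mean(axis=0))
```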
Perrone and Cooper show that as long as the mi (x) are mutually independent
with zero mean, the error in estimating f(x) can be made arbitrarily small by
increasing the population size of F. Since these assumptions break down in practice,
they developed a more general approach which finds the "optimal"2 weights while
allowing the m_i(x)'s to be correlated and have non-zero means. This Generalized
Ensemble Method (GEM) is written as
f_GEM = Σ_{i=1}^N α_i f_i(x) = f(x) − Σ_{i=1}^N α_i m_i(x),
where
α_i = Σ_j (C⁻¹)_ij / Σ_k Σ_j (C⁻¹)_kj,
C is the symmetric sample covariance matrix for the misfit functions, and the goal is to
minimize Σ_{i,j} α_i α_j C_ij. Note that the misfit functions are calculated on the training
data and f(x) is not required. The main disadvantage to this approach is that it
involves taking the inverse of C which can be "unstable". That is, redundancy in
the members of F leads to linear dependence in the rows and columns of C which
in turn leads to unreliable estimates of C⁻¹.
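The GEM weights can be computed directly from the misfit covariance; a numpy sketch (the inversion of C is exactly the step that becomes unreliable when the models are redundant):

```python
import numpy as np

def gem_weights(misfits):
    """alpha_i = sum_j Cinv[i, j] / sum_kj Cinv[k, j].

    `misfits` has shape (N_models, P_patterns); the weights sum to 1.
    """
    C = np.cov(misfits)            # symmetric sample covariance of the m_i
    Cinv = np.linalg.inv(C)        # unstable when C is near-singular
    return Cinv.sum(axis=1) / Cinv.sum()

# e.g. with the synthetic `preds` above:
# alpha = gem_weights(f_true - preds); f_gem = alpha @ preds
```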
To circumvent this sensitivity to redundancy, Perrone and Cooper propose a method
for discarding member(s) of F when the strength of its agreement with another
member exceeds a certain threshold. Unfortunately, this approach only checks for
linear dependence (or redundancy) between pairs f_i(x) and f_j(x) for i ≠ j.
In fact, f_i(x) could be a linear combination of several other members of F and the
instability problem would be manifest. Also, depending on how high the threshold is
set, a member of :F could be discarded while still having some degree of uniqueness
and utility. An ideal method for weighting the members of :F would neither discard
any models nor suffer when there is redundancy in the model set.
The next approach reviewed is linear regression (LR)3 which also finds the "optimal"
weights for the f_i(x) with respect to the training data. In fact, GEM and LR are
both considered "optimal" because they are closely related in that GEM is a form
of linear regression with the added constraint that Σ_{i=1}^N α_i = 1. The weights for
LR are found as follows4:
f_LR = Σ_{i=1}^N α_i f_i(x),
where the α_i are the least squares solution α = (FᵀF)⁻¹Fᵀy, with F the matrix of the
learned models' predictions on the training patterns and y the vector of training targets.
Like GEM, LR and LRC are subject to the multicollinearity problem because finding
the α_i's involves taking the inverse of a matrix. That is, if the F matrix is composed
of f_i(x) which strongly agree with other members of F, some linear dependence will
be present.
2Optimal here refers to weights which minimize mean square error for the training data.
3Actually, it is a form of linear regression without the intercept term. The more general
form, denoted by LRC, would be formulated the same way but with a member f_0 which
always predicts 1. According to [Leblanc and Tibshirani, 1993] having the extra constant
term will not be necessary (i.e., it will equal zero) because in practice, E[f_i(x)] = E[f(x)].
4Note that the constraint, Σ_{i=1}^N α_i = 1, for GEM is a form of regularization
[Leblanc and Tibshirani, 1993]. The purpose of regularizing the weights is to provide an
estimate which is less biased by the training sample. Thus, one would not expect GEM
and LR to produce identical weights.
Given the limitations of these methods, the goal of this research was to find a method
which finds weights for the learned models with low prediction error without discarding any of the original models, and without being subject to the multicollinearity
problem.
3
METHODS FOR HANDLING MULTICOLLINEARITY
In the abovementioned methods, multicollinearity leads to inflation of the variance of the estimated weights, α. Consequently, the weights obtained from fitting the model to a particular sample may be far from their true values. To
circumvent this problem, approaches have been developed which: 1) constrain
the estimated regression coefficients so as to improve prediction performance (i.e.,
ridge regression, RIDGE [Montgomery and Friedman 1993], and principal components regression), 2) search for the coefficients via gradient descent procedures (i.e.,
Widrow-Hoff learning, GD and EG± [Kivinen and Warmuth, 1994]), or build models which make decorrelated errors by adjusting the bias of the learning algorithm
[Opitz and Shavlik, 1995] or the data which it sees [Meir, 1995]. The third approach
ameliorates, but does not solve, the problem because redundancy is an inherent part
of the task of combining estimators.
The focus of this paper is on the first approach.
Leblanc and Tibshirani
[Leblanc and Tibshirani, 1993] have proposed several ways of constraining or regularizing the weights to help produce estimators with lower prediction error:
1. Shrink α towards (1/K, 1/K, ..., 1/K)ᵀ where K is the number of learned
models.
2. Σ_{i=1}^K α_i = 1
3. α_i ≥ 0, i = 1, 2, ..., K
Breiman [Breiman, 1992] provides an intuitive justification for these constraints by
pointing out that the more strongly they are satisfied, the more interpolative the
weighting scheme is. In the extreme case, a uniformly weighted set of learned models
is likely to produce a prediction between the maximum and minimum predicted
values of the learned models. Without these constraints, there is no guarantee that
the resulting predictor will stay near that range and generalization may be poor.
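Ridge regression, mentioned above as one instance of the first approach, is the simplest concrete way to constrain the coefficients; a sketch (the penalty strength lam is illustrative):

```python
import numpy as np

def ridge_weights(F, y, lam=1e-2):
    """argmin_a ||y - F a||^2 + lam * ||a||^2.

    F has shape (P, N): column i holds f_i(x) on the P training patterns.
    The lam * I term keeps F.T @ F invertible even for collinear models.
    """
    return np.linalg.solve(F.T @ F + lam * np.eye(F.shape[1]), F.T @ y)
```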
The next subsection describes a variant of principal components regression and
explains how it provides a continuum of regularized weights for the original learned
models.
3.1
PRINCIPAL COMPONENTS REGRESSION
When dealing with the above mentioned multicollinearity problem, principal components regression [Draper and Smith, 1981] may be used to summarize and extract
the "relevant" information from the learned models. The main idea of PCR is to
map the original learned models to a set of (independent) principal components in
which each component is a linear combination of the original learned models, and
then to build a regression equation using the best subset of the principal components
to predict f(x).
The advantage of this representation is that the components are sorted according to
how much information (or variance) of the original learned models they account
for. Given this representation, the goal is to choose the number of principal
components to include in the final regression by retaining the first k which meet a
preselected stopping criterion. The basic approach is summarized as follows:
1. Do a principal components analysis (PCA) on the covariance matrix of the
learned models' predictions on the training data (i.e., do a PCA on the
covariance matrix of M, where M_i,j is the j-th model's response for the
i-th training example) to produce a set of principal components, PC =
{PC_1, ..., PC_N}.
2. Use a stopping criterion to decide on k, the number of principal components
to use.
3. Do a least squares regression on the selected components (i.e., include PC_i
for i ≤ k).
4. Derive the weights, α_i, for the original learned models by expanding
f_PCR* = β_1 PC_1 + ... + β_k PC_k
according to
PC_i = γ_i,1 f_1 + ... + γ_i,N f_N,
and simplifying for the coefficients of the f_j. Note that γ_i,j is the j-th coefficient of the i-th principal component.
The second step is very important because choosing too few or too many principal
components may result in underfitting or overfitting, respectively. Ten-fold crossvalidation is used to select k here.
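Putting the four steps together, here is a compact numpy sketch of PCR* (the treatment of centering and the intercept, and the exact cross-validation bookkeeping, are assumptions; the paper's conventions may differ in detail):

```python
import numpy as np

def pcr_weights(F, y, k):
    """PCR with the first k principal components of the model predictions.

    F has shape (P, N); returns weights alpha (length N) for the original
    learned models, obtained by regressing y on PC_1..PC_k and expanding
    the components back into the f_i.
    """
    C = np.cov(F, rowvar=False)              # N x N covariance of the models
    eigval, eigvec = np.linalg.eigh(C)
    order = np.argsort(eigval)[::-1]         # components sorted by variance
    G = eigvec[:, order[:k]]                 # gamma coefficients, N x k
    PC = F @ G                               # principal-component predictors
    beta, *_ = np.linalg.lstsq(PC, y, rcond=None)
    return G @ beta                          # alpha_j = sum_i beta_i * gamma_ij

def pcr_star(F, y, n_folds=10):
    """Pick k by 10-fold cross-validation, then refit on all the data."""
    P, N = F.shape
    folds = np.arange(P) % n_folds
    errs = []
    for k in range(1, N + 1):
        sse = 0.0
        for f in range(n_folds):
            tr, te = folds != f, folds == f
            a = pcr_weights(F[tr], y[tr], k)
            sse += ((y[te] - F[te] @ a) ** 2).sum()
        errs.append(sse)
    k_best = int(np.argmin(errs)) + 1
    return pcr_weights(F, y, k_best), k_best
```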
Examining the spectrum of (N) weight sets derived in step four reveals that PCR*
provides a continuum of weight sets spanning from highly constrained (i.e., weights
generated from PCR1 satisfy all three regularization constraints) to completely unconstrained (i.e., PCRN is equivalent to unconstrained linear regression). To see
that the weights, α, derived from PCR1 are (nearly) uniform, recall that the first
principal component accounts for where the learned models agree. Because the
learned models are all fairly accurate they agree quite often, so their first principal
component weights, γ_1,*, will be similar. The γ-weights are in turn multiplied by a
constant when PCR1 is regressed upon. Thus, the resulting α_i's will be fairly uniform. The later principal components serve as refinements to those already included
producing less constrained weight sets until finally PCRN is included resulting in
an unconstrained estimator much like LR, LRC and GEM.
4
EXPERIMENTAL RESULTS
The set of learned models, F, was generated using backpropagation
[Rumelhart, 1986]. For each dataset, a network topology was developed which gave
good performance. The collection of networks built differed only in their initial
weights5.
Three data sets were chosen: cpu and housing (from the UCI repository), and
bodyfat (from the Statistics Library at Carnegie Mellon University). Due to space
limitation, the data sets reported on were chosen because they were representative
of the basic trends found in a larger collection of datasets. The combining methods evaluated consist of all the methods discussed in Sections 2 and 3, as well as
PCR1 and PCRN (to demonstrate PCR*'s most and least regularized weight sets, respectively).
5There was no extreme effort to produce networks with more decorrelated errors.
Even with such networks, the issue of extreme multicollinearity would still exist because
E[f_i(x)] = E[f_j(x)] for all i and j.
Table 1: Average residual errors for each method; columns give the size N of F.

Data      bodyfat          cpu               housing
N         10      50       10      50        10      50
BEM       1.03    1.04     38.57   38.62     2.79    2.77
GEM       1.02    0.86     46.59   227.54    2.72    2.57
LR        1.02    3.09     44.9    238.0     2.72    6.44
RIDGE     1.02    0.826    44.8    191.0     2.72    2.55
GD        1.03    1.04     38.9    38.8      2.79    2.77
EG±       1.03    1.07     38.4    38.0      2.77    2.75
PCR1      1.04    1.05     39.0    39.0      2.78    2.76
PCRN      1.02    0.848    44.8    249.9     2.72    2.57
PCR*      0.99    0.786    40.3    40.8      2.70    2.56
The more computationally intense procedures based on stacking and
bootstrapping proposed by [Leblanc and Tibshirani, 1993, Breiman, 1992] were not
evaluated here because they required many more models (i.e., neural networks) to
be generated for each of the elements of F.
There were 20 trials run for each of the datasets. On each trial the data was
randomly divided into 70% training data and 30% test data. These trials were rerun
for varying sizes of F (i.e., 10 and 50, respectively). As more models are included
the linear dependence amongst them goes up showing how well the multicollinearity
problem is handled6. Table 1 shows the average residual errors for each of the
methods on the three data sets. Each row is a particular method and each column
is the size of F for a given data set. Bold-faced entries indicate methods which were
not significantly different from the method with the lowest error (via two-tailed
paired t-tests with p ≤ 0.05).
PCR* is the only approach which is among the leaders for all three data sets. For
the bodyfat and housing data sets the weights produced by BEM, PCR1, GD, and
EG± tended to be too constrained, while the weights for LR tended to be too
unconstrained for the larger collection of models . The less constrained weights of
GEM, LR, RIDGE, and PCRN severely harmed performance in the cpu domain
where uniform weighting performed better.
The biggest demonstration of PCR*'s robustness is its ability to gravitate towards
the more constrained weights produced by the earlier principal components when
appropriate (i.e., in the cpu dataset). Similarly, it uses the less constrained principal
components closer to PCRN when it is preferable, as in the bodyfat and housing
domains.
5
CONCLUSION
This investigation suggests that the principal components of a set of learned models can be useful when combining the models to form an improved estimator. It
was demonstrated that the principal components provide a continuum of weight
sets from highly regularized to unconstrained. An algorithm, PCR*, was developed which attempts to automatically select the subset of these components which
provides the lowest prediction error. Experiments on a collection of domains demonstrated PCR*'s ability to robustly handle redundancy in the set of learned models.
Future work will be to improve upon PCR* and expand it to the classification task.
6This is verified by observing the eigenvalues of the principal components and the values
in the covariance matrix of the models in F.
References
[Breiman et al., 1984] Breiman, L., Friedman, J.H., Olshen, R.A. & Stone, C.J.
(1984). Classification and Regression Trees. Belmont, CA: Wadsworth.
[Breiman, 1992] Breiman, L. (1992). Stacked Regression. Dept of Statistics, Berkeley, TR No. 367.
[Chan and Stolfo, 1995] Chan, P.K., Stolfo, S.J. (1995). A Comparative Evaluation
of Voting and Meta-Learning on Partitioned Data Proceedings of the Twelvth
International Machine Learning Conference (90-98) . San Mateo, CA: Morgan
Kaufmann.
[Draper and Smith, 1981] Draper, N.R., Smith, H. (1981). Applied Regression
Analysis. New York, NY: John Wiley and Sons.
[Kivinen and Warmuth, 1994] Kivinen, J., and Warmuth, M. (1994) . Exponentiated Gradient Descent Versus Gradient Descent for Linear Predictors. Dept. of
Computer Science, UC-Santa Cruz, TR No. ucsc-crl-94-16 .
[Krogh and Vedelsby, 1995] Krogh, A. , and Vedelsby, J. (1995). Neural Network
Ensembles, Cross Validation, and Active Learning. In Advances in Neural Information Processing Systems 7. San Mateo, CA: Morgan Kaufmann.
[Hansen and Salamon, 1990] Hansen, L.K., and Salamon, P. (1990). Neural Network Ensembles. IEEE Transactions on Pattern Analysis and Machine Intelligence, 12 (993-1001).
[Leblanc and Tibshirani, 1993] Leblanc, M., Tibshirani, R. (1993) Combining estimates in regression and classification Dept. of Statistics, University of Toronto,
TR.
[Meir, 1995] Meir, R. (1995) . Bias, variance and the combination of estimators. In
Advances in Neural Information Processing Systems 7. San Mateo, CA: Morgan
Kaufmann.
[Merz, 1995] Merz, C.J . (1995) Dynamical Selection of Learning Algorithms. In
Fisher, C. and Lenz, H. (Eds.) Learning from Data: Artificial Intelligence and
Statistics V. Springer Verlag.
[Montgomery and Friedman 1993] Montgomery, D.C., and Friedman, D.J. (1993).
Prediction Using Regression Models with Multicollinear Predictor Variables. lIE
Transactions, vol. 25, no. 3 73-85.
[Opitz and Shavlik, 1995] Opitz, D.W ., Shavlik, J .W . (1996) . Generating Accurate
and Diverse Members of a Neural-Network Ensemble. Advances in Neural and
Information Processing Systems 8. Touretzky, D.S., Mozer, M.C., and Hasselmo,
M.E., eds. Cambridge MA : MIT Press.
[Perrone and Cooper, 1992] Perrone, M. P., Cooper, L. N., (1993) . When Networks
Disagree: Ensemble Methods for Hybrid Neural Networks. Neural Networks for
Speech and Image Processing, edited by Mammone, R. J .. New York: Chapman
and Hall.
[Rumelhart, 1986] Rumelhart, D. E., Hinton, G. E., & Williams, R. J . (1986).
Learning Interior Representation by Error Propagation. Parallel Distributed Processing, 1 318-362. Cambridge, MASS.: MIT Press.
[Tresp, 1995] Tresp, V., Taniguchi, M. (1995). Combining Estimators Using NonConstant Weighting Functions. In Advances in Neural Information Processing
Systems 7. San Mateo, CA: Morgan Kaufmann.
[Wolpert, 1992] Wolpert, D. H. (1992). Stacked Generalization. Neural Networks,
5, 241-259.
223 | 1,202 | Dual Kalman Filtering Methods for
Nonlinear Prediction, Smoothing, and
Estimation
Eric A. Wan
ericwan@ee.ogi.edu
Alex T. Nelson
atnelson@ee.ogi.edu
Department of Electrical Engineering
Oregon Graduate Institute
P.O. Box 91000 Portland, OR 97291
Abstract
Prediction, estimation, and smoothing are fundamental to signal
processing. To perform these interrelated tasks given noisy data,
we form a time series model of the process that generates the
data. Taking noise in the system explicitly into account, maximum-likelihood and Kalman frameworks are discussed which involve the
dual process of estimating both the model parameters and the underlying state of the system. We review several established methods in the linear case, and propose several extensions utilizing dual
Kalman filters (DKF) and forward-backward (FB) filters that are
applicable to neural networks. Methods are compared on several
simulations of noisy time series. We also include an example of
nonlinear noise reduction in speech.
1 INTRODUCTION
Consider the general autoregressive model of a noisy time series with both process
and additive observation noise:

x(k) = f(x(k-1), ..., x(k-M), w) + v(k-1)    (1)
y(k) = x(k) + r(k),                          (2)

where x(k) corresponds to the true underlying time series driven by process noise
v(k), and f(.) is a nonlinear function of past values of x(k) parameterized by w.
The only available observation is y(k), which contains additional additive noise r(k).
Prediction refers to estimating an x(k) given past observations. (For purposes of
this paper we will restrict ourselves to univariate time series.) In estimation, x(k)
is determined given observations up to and including time k. Finally, smoothing
refers to estimating x(k) given all observations, past and future.
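To make the setup concrete, here is a minimal simulation sketch of Equations (1)-(2); the particular nonlinear map f, the lag M = 2, and the noise levels are illustrative choices, not taken from the paper:

```python
# Simulate Equations (1)-(2): a noisy nonlinear AR(M) process x(k) driven by
# process noise v(k-1), observed through additive noise r(k).
import numpy as np

rng = np.random.default_rng(0)

def f(past):
    # hypothetical nonlinear AR map on the last M = 2 values
    return 0.8 * past[0] - 0.5 * np.tanh(past[1])

M, N = 2, 500
sigma_v, sigma_r = 0.2, 0.5               # process / observation noise std. devs.
x = np.zeros(N)
for k in range(M, N):
    past = x[k - M:k][::-1]               # [x(k-1), ..., x(k-M)]
    x[k] = f(past) + sigma_v * rng.standard_normal()
y = x + sigma_r * rng.standard_normal(N)  # only y(k) is available to the learner
```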
The minimum mean square nonlinear prediction of x(k) (or of y(k)) can be written as the conditional expectation E[x(k)|x(k-1)], where x(k) = [x(k), x(k-1), ..., x(0)]. If the time series x(k) were directly available, we could use this data
to generate an approximation of the optimal predictor. However, when x(k) is not
available (as is generally the case), the common approach is to use the noisy data
directly, leading to an approximation of E[y(k)|y(k-1)]. However, this results in a
biased predictor: E[y(k)|y(k-1)] = E[x(k)|x(k-1) + R(k-1)] ≠ E[x(k)|x(k-1)].
We may reduce the above bias in the predictor by exploiting the knowledge that
the observations y(k) are measurements arising from a time series. Estimates x̂(k)
are found (either through estimation or smoothing) such that ||x̂(k) - x(k)|| <
||x(k) - y(k)||. These estimates are then used to form a predictor that approximates
E[x(k)|x(k-1)].¹
In the remainder of this paper, we will develop methods for the dual estimation of
both states x and weights w. We show how a maximum-likelihood framework can
be used to relate several existing algorithms and how established linear methods
can be extended to a nonlinear framework. New methods involving the use of dual
Kalman filters are also proposed and experiments are provided to compare results.
2 DUAL ESTIMATION
Given only noisy observations y(k), the dual estimation problem requires consideration of both the standard prediction (or output) errors e_p(k) = y(k) - f(x̂(k-1), w)
as well as the observation (or input) errors e_o(k) = y(k) - x̂(k). The minimum observation error variance equals the noise variance σ_r². The prediction error, however,
is correlated with the observation error since y(k) - f(x(k-1)) = r(k) + v(k-1),
and thus has a minimum variance of σ_r² + σ_v². Assuming the errors are Gaussian,
we may construct a log-likelihood function which is proportional to e^T Σ^{-1} e, where
e^T = [e_o(0), e_o(1), ..., e_o(N), e_p(M), e_p(M+1), ..., e_p(N)], a vector of all errors up to
time N, and

Σ = E[e e^T],    (3)

the error covariance matrix, whose diagonal entries are σ_r² for the observation errors
and σ_r² + σ_v² for the prediction errors, with off-diagonal entries given by the
cross-correlation between the two error types.

Minimization of the log-likelihood function leads to the maximum-likelihood estimates for both x(k) and w. (Although we may also estimate the noise variances σ_v²
and σ_r², we will assume in this paper that they are known.) Two general frameworks
for optimization are available:
¹Because models are trained on estimated data x̂(k), it is important that estimated
data still be used for prediction of out-of-training-set (on-line) data. In other words, if our
model was formed as an approximation of E[x(k)|x(k-1)], then we should not provide it
with y(k-1) as an input, in order to avoid a model mismatch.
2.1 Errors-In-Variables (EIV) Methods
This method comes from the statistics literature for nonlinear regression (see Seber
and Wild, 1989), and involves batch optimization of the cost function in Equation
3. Only minor modifications are made to account for the time series model. These
methods, however, are memory intensive (Σ is approximately 2N × 2N) and also do not
accommodate new data in an efficient manner. Retraining is necessary on all the
data in order to produce estimates for the new data points.
If we ignore the cross correlation between the prediction and observation error, then
E becomes a diagonal matrix and the cost function may be expressed as simply
2::=1 "Ye~(k) + e~(k), with "Y (J';/((J'; + (J';). This is equivalent to the Gleaming
(CLRN) cost function (Weigend, 1995), developed as a heuristic method for cleaning
the inputs in neural network modelling problems. While this allows for stochastic
optimization, the assumption in the time series formulation may lead to severely
biased results. Note also that no estimate is provided for the last point x(N).
=
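Here is a minimal sketch (our own illustration, not the authors' code) of this decoupled cost for a linear AR(M) expert; the weighting γ follows the expression above, and both the weights w and the cleaned estimates x̂ are treated as unknowns:

```python
# Decoupled (CLRN-style) cost: sum_k e_o(k)^2 + gamma * e_p(k)^2,
# with gamma = sigma_r^2 / (sigma_r^2 + sigma_v^2).
import numpy as np

def clrn_cost(w, x_hat, y, sigma_v2, sigma_r2):
    M = len(w)
    gamma = sigma_r2 / (sigma_r2 + sigma_v2)
    e_o = y - x_hat                                 # observation errors
    preds = np.array([w @ x_hat[k - M:k][::-1]      # f(x_hat(k-1), ...; w)
                      for k in range(M, len(y))])
    e_p = y[M:] - preds                             # prediction errors
    return np.sum(e_o ** 2) + gamma * np.sum(e_p ** 2)
```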
When the model f = w^T x is known and linear, EIV reduces to a standard (batch)
weighted least squares procedure which can be solved in closed form to generate
a maximum-likelihood estimate of the noise free time series. However, when the
linear model is unknown, the problem is far more complicated. The inner product
of the parameter vector w with the vector x( k - 1) indicates a bilinear relationship
between these unknown quantities. Solving for x( k) requires knowledge of w, while
solving for w requires x(k). Iterative methods are necessary to solve the nonlinear optimization, and a Newton's-type batch method is typically employed. An
EIV method for nonlinear models is also readily developed, but the computational
expense makes it less practical in the context of neural networks.
2.2 Kalman Methods
Kalman methods involve reformulation of the problem into a state-space framework
in order to efficiently optimize the cost function in a recursive manner. At each time
point, an optimal estimation is achieved by combining both a prior prediction and
new observation. Connor (1994) proposed using an extended Kalman filter with a
neural network to perform state estimation alone. Puskorious and Feldkamp (1994)
and others have posed the weight estimation in a state-space framework to allow
Kalman training of a neural network. Here we extend these ideas to include the
dual Kalman estimation of both states and weights for efficient maximum-likelihood
optimization. We also introduce the use of forward-backward information filters and
further explicate relationships to the EIV methods.
A state-space formulation of Equations 1 and 2 is as follows:

x(k) = F[x(k-1)] + B v(k-1)    (4)
y(k) = C x(k) + r(k)           (5)

where

x(k) = [x(k), x(k-1), ..., x(k-M+1)]^T,
F[x(k)] = [f(x(k), ..., x(k-M+1), w), x(k), ..., x(k-M+2)]^T,
B = [1, 0, ..., 0]^T,    (6)

and C = B^T. If the model is linear, then f(x(k)) takes the form w^T x(k), and
F[x(k)] can be written as Ax(k), where A is in controllable canonical form.
If the model is linear and the parameters w are known, the Kalman filter (KF)
algorithm can be readily used to estimate the states (see Lewis, 1986). At each
time step, the filter computes the linear least squares estimate x̂(k) and prediction
x̂⁻(k), as well as their error covariances, P_x(k) and P_x⁻(k). In the linear case with
Gaussian statistics, the estimates are the minimum mean square estimates. With
no prior information on x, they reduce to the maximum-likelihood estimates.
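For reference, a minimal sketch of this recursion for the linear known-model case of Equations (4)-(5) (the initialization and the scalar-observation form are illustrative assumptions):

```python
import numpy as np

def kalman_filter(y, A, b, c, sigma_v2, sigma_r2):
    """KF for x(k) = A x(k-1) + b v(k-1), y(k) = c . x(k) + r(k)."""
    M = A.shape[0]
    x_hat, P = np.zeros(M), np.eye(M)           # illustrative initialization
    Q = sigma_v2 * np.outer(b, b)               # process noise covariance
    estimates = []
    for yk in y:
        x_pred = A @ x_hat                      # time update: x^-(k)
        P_pred = A @ P @ A.T + Q                #              P^-(k)
        s = c @ P_pred @ c + sigma_r2           # innovation variance (scalar)
        K = P_pred @ c / s                      # Kalman gain
        x_hat = x_pred + K * (yk - c @ x_pred)  # measurement update: x^(k)
        P = P_pred - np.outer(K, c @ P_pred)
        estimates.append(x_hat[0])              # estimate of x(k)
    return np.array(estimates)
```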
Note, however, that while the Kalman filter provides the maximum-likelihood estimate at each instant in time given all past data, the EIV approach is a batch
method that gives a smoothed estimate given all data. Hence, only the estimates
x(N) at the final time step will match. An exact equivalence for all time is achieved
by combining the Kalman filter with a backwards information filter to produce a
forward-backward (FB) smoothing filter (Lewis, 1986).2 Effectively, an inverse covariance is propagated backwards in time to form backwards state estimates that
are combined with the forward estimates. When the data set is large, the FB filter
offers Significant computational advantages over the batch form .
When the model is nonlinear, the Kalman filter cannot be applied directly, but
requires a linearization of the nonlinear model at each time step. The resulting
algorithm is known as the extended Kalman filter (EKF) and effectively approximates the nonlinear function with a time-varying linear one.
2.2.1 Batch Iteration for Unknown Models
Again, when the linear model is unknown, the bilinear relationship between the time
series estimates, x̂, and the weight estimates, ŵ, requires an iterative optimization.
One approach (referred to as LS-KF) is to use a Kalman filter to estimate x̂(k) with
ŵ fixed, followed by least-squares optimization to find ŵ using the current x̂(k).
Specifically, the parameters are estimated as ŵ = (X_KF^T X_KF)^{-1} X_KF^T Y, where X_KF
is a matrix of KF state estimates, and Y is an N × 1 vector of observations.
For nonlinear models, we use a feedforward neural network to approximate f(·), and
replace the LS and KF procedures by backpropagation and extended Kalman filtering, respectively (referred to here as BP-EKF, see Connor 1994). A disadvantage
of this approach is slow convergence, due to keeping a set of inaccurate estimates
fixed at each batch optimization stage.
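A minimal sketch of the LS-KF alternation, reusing the kalman_filter sketch above (the companion-form construction of A and the fixed iteration count are illustrative assumptions):

```python
import numpy as np

def ls_kf(y, M, sigma_v2, sigma_r2, n_iter=10):
    w = 0.1 * np.ones(M)                 # initial AR weights (illustrative)
    b = np.eye(M)[:, 0]                  # B selects the first component; C = B^T
    for _ in range(n_iter):
        A = np.eye(M, k=-1)              # companion (controllable canonical) form
        A[0, :] = w
        x_hat = kalman_filter(y, A, b, b, sigma_v2, sigma_r2)
        # least squares on the KF estimates: w = (X^T X)^{-1} X^T Y
        X = np.array([x_hat[k - M:k][::-1] for k in range(M, len(y))])
        w = np.linalg.lstsq(X, y[M:], rcond=None)[0]
    return w
```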
2.2.2 Dual Kalman Filter
Another approach for unknown models is to concatenate both wand x into a joint
state vector. The model and time series are then estimated simultaneously by
applying an EKF to the nonlinear joint state equations (see Goodwin and Sin, 1994
for the linear case). This algorithm, however, has been known to have convergence
problems.
An alternative is to construct a separate state-space formulation for the underlying
weights as follows:
w(k) = w(k-1)                       (7)
y(k) = f(x̂(k-1), w(k)) + n(k),      (8)
2 A slight modification of the cost in Equation 3 is necessary to account for initial
conditions in the Kalman form.
where the state transition is simply an identity matrix, and f(x(k-1), w(k)) plays
the role of a time-varying nonlinear observation on w.
When the unknown model is linear, the observation takes the form x̂(k-1)^T w(k).
Then a pair of dual Kalman filters (DKF) can be run in parallel, one for state
estimation, and one for weight estimation (see Nelson, 1976) . At each time step,
all current estimates are used. The dual approach essentially allows us to separate
the non-linear optimization into two linear ones. Assumptions are that x and w
remain uncorrelated and that statistics remain Gaussian. Note, however, that the
error in each filter should be accounted for by the other. We have developed several
approaches to address this coupling, but only present one here for the sake of brevity.
In short, we write the variance of the noise n(k) as C P_x⁻(k) C^T + σ_r² in Equation
8, and replace v(k-1) by v(k-1) + (w^T(k) - ŵ^T(k)) x̂(k-1) in Equation 4 for
estimation of x(k). Note that the ability to couple statistics in this manner is not
possible in the batch approaches.
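A single measurement update of this weight filter, for the linear case, might be sketched as follows (the coupling correction just described is omitted for brevity, so this is a simplified illustration rather than the full method):

```python
import numpy as np

def weight_filter_step(w, P_w, x_lags, yk, sigma_n2):
    # Weight filter of Equations (7)-(8): identity state transition, and the
    # lag vector x_lags = [x_hat(k-1), ..., x_hat(k-M)] acts as the
    # time-varying observation row; yk = y(k).
    s = x_lags @ P_w @ x_lags + sigma_n2     # innovation variance
    K = P_w @ x_lags / s                     # Kalman gain for the weights
    w = w + K * (yk - x_lags @ w)            # measurement update
    P_w = P_w - np.outer(K, x_lags @ P_w)
    return w, P_w
```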
We further extend the DKF method to nonlinear neural network models by introducing a dual extended Kalman filtering method (DEKF) . This simply requires
that Jacobians of the neural network be computed for both filters at each time step.
Note, by feeding x(k) into the network, we are implicitly using a recurrent network.
2.2.3 Forward-Backward Methods
All of the Kalman methods can be reformulated by using forward-backward (FB)
Kalman filtering to further improve state smoothing. However, the dual Kalman
methods require an interleaving of the forward and backward state estimates in
order to generate a smooth update at each time step. In addition, using the FB
estimates requires caution because their noncausal nature can lead to a biased w
if they are used improperly. Specifically, for LS-FB the weights are computed as:
w = (XRFXFB)-lXKFY ,where XFB is a matrix of FB (smooth) state estimates.
Equivalent adjustments are made to the dual Kalman methods. Furthermore, a
model of the time-reversed system is required for the nonlinear case. The explication
and results of these algorithms will appear in a future publication.
3 EXPERIMENTS
Table 1 compares the different approaches on two linear time series, both when
the linear model is known and when it is unknown. The least square (LS) estimation for the weights in the bottom row represents a baseline performance wherein
no noise model is used. In-sample training set predictions must be interpreted
carefully as all training set data is being used to optimize for the weights. We
see that the Kalman-based methods perform better out of training set (recall the
model-mismatch issue¹). Further, only the Kalman methods allow for on-line estimations (on the test set, the state-estimation Kalman filters continue to operate
with the weight estimates fixed). The forward-backward method further improves
performance over KF methods. Meanwhile, the clearning-equivalent cost function
sacrifices both state and weight estimation MSE for improved in-sample prediction;
the resulting test set performance is significantly worse.
Several time series were used to compare the nonlinear methods, with the results
summarized in Table 2. Conclusions parallel those for the linear case. Note, the
DEKF method performed better than the baseline provided by standard backpropagation (wherein no model of the noise is used).
Table 1: Comparison of methods for two linear models.

[Table entries: MSE values for estimation (Est.), prediction (Pred.) and weights (w),
normalized to signal variance, on the training and test sets of both models, for the
model-known methods (MLE, CLRN, KF, FB) and the model-unknown methods
(EIV, CLRN, LS-KF, LS-FB, DKF, DFB, LS).]

1 - AR(11) model, σ_v² = 4, σ_r² = 1; 2000 training samples, 1000 testing samples.
EIV and CLRN were not computed for the unknown model due to memory constraints.
2 - AR(5) model, σ_v² = .7, σ_r² = .5; 375 training, 125 testing samples.
Table 2: Comparison of methods on nonlinear time series

          NNet 1                NNet 2                NNet 3
          Train      Test       Train      Test       Train      Test
          Es.  Pr.   Es.  Pr.   Es.  Pr.   Es.  Pr.   Es.  Pr.   Es.  Pr.
BP-EKF    .17  .58   .15  .63   .08  .31   .08  .33   .16  .59   .17  .59
DEKF      .14  .57   .13  .59   .06  .32   .07  .30   .14  .55   .14  .56
BP        .95  .57   .95  .69   .29  .36   .22  .30   .92  .68   .92  .68

The series NNet 1, 2, 3 are generated by autoregressive neural networks which exhibit limit
cycle and chaotic behavior. σ_v² = .16, σ_r² = .81, 2700 training samples, 1300 testing
samples. All network models fit using 10 inputs and 5 hidden units. Cross-validation
was not used in any of the methods.
The DEKF method exhibited fast
convergence, requiring only 10-20 epochs for training. A DEFB method is under
development.
The DEKF was tested on a speech signal corrupted with simulated bursting white
noise (Figure 1). The method was applied to successive 64ms (512 point) windows
of the signal, with a new window starting every 8ms (64 points). The results in
the figure were computed assuming both σ_v² and σ_r² were known. The average
SNR is improved by 9.94 dB. We also ran the experiment when σ_v² and σ_r² were
estimated using only the noisy signal (Nelson and Wan, 1997), and achieved an SNR
improvement of 8.50 dB. In comparison, available "state-of-the-art" techniques of
spectral subtraction (Boll, 1979) and RASTA processing (Hermansky et al., 1995),
achieve SNR improvements of only .65 and 1.26 dB, respectively. We extend the
algorithms to the colored noise case in a second paper (Nelson and Wan, 1997).
4 CONCLUSIONS
We have described various methods under a Kalman framework for the dual estimation of both states and weights of a noisy time series. These methods utilize both
process and observation noise models to improve estimation performance.

[Figure 1: Cleaning Noisy Speech With The DEKF. Panels (top to bottom): clean
speech, noise, noisy speech, cleaned speech. 33,000 pts (5 sec.) shown.]

Work
in progress includes extensions for colored noise, blind signal separation, forward-backward
extended Kalman filter methods for neural network prediction, estimation, and
smoothing offer potentially powerful new tools for signal processing applications.
Acknowledgements
This work was sponsored in part by NSF under grant ECS-9410823 and by
ARPA/ AASERT Grant DAAH04-95-1-0485.
References
S.F. Boll. Suppression of acoustic noise in speech using spectral subtraction. IEEE
ASSP-27, pp. 113-120. April 1979.
J. Connor, R. Martin, L. Atlas. Recurrent neural networks and robust time series
prediction. IEEE Tr. on Neural Networks. March 1994.
F. Lewis. Optimal Estimation John Wiley & Sons, Inc. New York. 1986.
G. Goodwin, K.S. Sin. Adaptive Filtering Prediction and Control. Prentice-Hall,
Inc., Englewood Cliffs, NJ. 1994.
H. Hermansky, E. Wan, C. Avendano. Speech enhancement based on temporal
processing. ICASSP Proceedings. 1995.
A. Nelson, E. Wan. Neural speech enhancement using dual extended Kalman filtering. Submitted to ICNN'97.
L. Nelson, E. Stear. The simultaneous on-line estimation of parameters and states
in linear systems. IEEE Tr. on Automatic Control. February, 1976.
G. Puskorious, L. Feldkamp. Neural control of nonlinear dynamic systems with
Kalman filter trained recurrent networks. IEEE Tr. on NN, vol. 5, no. 2. 1994.
G. Seber, C. Wild. Nonlinear Regression. John Wiley & Sons. 1989.
A. Weigend, H.G. Zimmerman. Clearning. University of Colorado Computer Science Technical Report CU-CS-772-95. May, 1995.
exhibit:1 reversed:1 separate:2 simulated:1 nelson:9 assuming:2 kalman:34 relationship:3 potentially:1 relate:1 expense:1 unknown:9 perform:3 rasta:1 observation:15 ericwan:1 extended:7 assp:1 smoothed:1 pred:9 boll:2 goodwin:2 pair:1 required:1 cleaned:1 acoustic:1 established:2 address:1 mismatch:2 including:1 memory:2 improve:2 review:1 literature:1 prior:2 epoch:1 kf:7 acknowledgement:1 filtering:10 proportional:1 var:1 validation:1 uncorrelated:1 row:1 accounted:1 last:1 free:1 keeping:1 bias:1 allow:2 institute:1 taking:1 transition:1 fb:9 autoregressive:2 forward:7 made:2 computes:1 adaptive:1 itf:1 far:1 ec:1 approximate:1 ignore:1 implicitly:1 iterative:2 table:4 nature:1 robust:1 controllable:1 mse:2 meanwhile:1 noise:18 referred:2 slow:1 wiley:2 interleaving:1 effectively:2 linearization:1 ilx:1 cx:1 interrelated:1 simply:3 univariate:1 expressed:1 adjustment:1 corresponds:1 lewis:3 avendano:1 conditional:1 identity:1 replace:2 determined:1 specifically:2 wt:1 e:6 est:7 maximumlikelihood:1 brevity:1 tested:1 correlated:1 |
224 | 1,203 | Time Series Prediction Using Mixtures of
Experts
Assaf J. Zeevi
Information Systems Lab
Department of Electrical Engineering
Stanford University
Stanford, CA. 94305
Ron Meir
Department of Electrical Engineering
Technion
Haifa 32000, Israel
rmeir@ee.technion.ac.il
azeevi@isl.stanford.edu
Robert J. Adler
Department of Statistics
University of North Carolina
Chapel Hill, NC. 27599
adler@stat.unc.edu
Abstract
We consider the problem of prediction of stationary time series,
using the architecture known as mixtures of experts (MEM). Here
we suggest a mixture which blends several autoregressive models.
This study focuses on some theoretical foundations of the prediction problem in this context. More precisely, it is demonstrated
that this model is a universal approximator, with respect to learning the unknown prediction function . This statement is strengthened as upper bounds on the mean squared error are established.
Based on these results it is possible to compare the MEM to other
families of models (e.g., neural networks and state dependent models). It is shown that a degenerate version of the MEM is in fact
equivalent to a neural network, and the number of experts in the
architecture plays a similar role to the number of hidden units in
the latter model.
1 Introduction
In this work we pursue a new family of models for time series, substantially extending, but strongly related to and based on the classic linear autoregressive moving
average (ARMA) family. We wish to exploit the linear autoregressive technique in
a manner that will enable a substantial increase in modeling power, in a framework
which is non-linear and yet mathematically tractable.
The novel model, whose main building blocks are linear AR models, deviates from
linearity in the integration process, that is, the way these blocks are combined. This
model was first formulated in the context of a regression problem, and an extension
to a hierarchical structure was also given [2]. It was termed the mixture of experts
model (MEM).
Variants of this model have recently been used in prediction problems both in
economics and engineering. Recently, some theoretical aspects of the MEM , in the
context of non-linear regression, were studied by Zeevi et al. [8], and an equivalence
to a class of neural network models has been noted.
The purpose of this paper is to extend the previous work regarding the MEM
in the context of regression, to the problem of prediction of time series. We shall
demonstrate that the MEM is a universal approximator, and establish upper bounds
on the approximation error, as well as the mean squared error, in the setting of
estimation of the predictor function.
It is shown that the MEM is intimately related to several existing, state of the art,
statistical non-linear models encompassing Tong's TAR (threshold autoregressive)
model [7], and a certain version of Priestley's [6] state dependent models (SDM).
In addition, it is demonstrated that the MEM is equivalent (in a sense that will be
made precise) to the class of feedforward, sigmoidal, neural networks.
2 Model Description
The MEM [2] is an architecture composed of n expert networks, each being an AR(d)
linear model. The experts are combined via a gating network, which partitions the
input space accordingly. Considering a scalar time series {x_t}, we associate with
each expert a probabilistic model (density function) relating input vectors x_{t-d}^{t-1} ≡
[x_{t-1}, x_{t-2}, ..., x_{t-d}] to an output scalar x_t ∈ ℝ, and denote these probabilistic
models by p(x_t | x_{t-d}^{t-1}; θ_j, σ_j), j = 1, 2, ..., n, where (θ_j, σ_j) is the expert parameter
vector, taking values in a compact subset of ℝ^{d+1}. In what follows we will use
upper case X_t to denote random variables, and lower case x_t to denote values taken
by those r.v.'s.

Letting the parameters of each expert network be denoted by (θ_j, σ_j), j =
1, 2, ..., n, those of the gating network by θ_g, and letting Θ = ({θ_j, σ_j}_{j=1}^{n}, θ_g)
represent the complete set of parameters specifying the model, we may express the
conditional distribution of the model, p(x_t | x_{t-d}^{t-1}; Θ), as

p(x_t | x_{t-d}^{t-1}; Θ) = Σ_{j=1}^{n} g_j(x_{t-d}^{t-1}; θ_g) p(x_t | x_{t-d}^{t-1}; θ_j, σ_j),    (1)
for all x_{t-d}^{t-1}. We assume that the parameter vector Θ ∈ Ω, a compact subset of
ℝ^{2n(d+1)}.
Following the work of Jordan and Jacobs [2] we take the probability density functions to be Gaussian with mean θ_j^T x_{t-d}^{t-1} + θ_{j,0} and variance σ_j² (representative
of the underlying, local AR(d) model). The function g_j(x; θ_g) ≡ exp{θ_{g_j}^T x +
θ_{g_j,0}} / Σ_{i=1}^{n} exp{θ_{g_i}^T x + θ_{g_i,0}}, thus implementing a multiple output logistic regression function.

The underlying non-linear mapping (i.e., the conditional expectation, or L2 prediction function) characterizing the MEM is described by using (1) to obtain the
conditional expectation of X_t,

f_n^θ = E[X_t | X_{t-d}^{t-1}; M_n] = Σ_{j=1}^{n} g_j(X_{t-d}^{t-1}; θ_g) [θ_j^T X_{t-d}^{t-1} + θ_{j,0}],    (2)
where M_n denotes the MEM model. Here the subscript n stands for the number
of experts. Thus, we have X̂_t = f_n^θ = f_n(X_{t-d}^{t-1}; Θ), where f_n : ℝ^d × Ω → ℝ, and X̂_t
denotes the projection of X_t on the 'relevant past', given the model, thus defining
the model predictor function.
We will use the notation MEM(n, d) where n is the number of experts in the model
(proportional to the complexity, or number of parameters in the model), and d the
lag size. In this work we assume that d is known and given.
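As a concrete illustration, here is a minimal sketch of the MEM(n, d) predictor of Equation (2); the parameter shapes are an assumed packing for illustration, not notation from the paper:

```python
import numpy as np

def mem_predict(x_lags, theta, theta0, theta_g, theta_g0):
    """x_lags: length-d lag vector [x_{t-1}, ..., x_{t-d}].
    theta, theta_g: (n, d) arrays; theta0, theta_g0: (n,) offsets."""
    scores = theta_g @ x_lags + theta_g0
    g = np.exp(scores - scores.max())      # numerically stable softmax gate
    g /= g.sum()
    experts = theta @ x_lags + theta0      # linear AR(d) expert outputs
    return g @ experts                     # Equation (2): gated mixture mean
```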
3 Main results

3.1 Background
We consider a stationary time series, more precisely a discrete time stochastic process {X_t} which is assumed to be strictly stationary. We define the L2 predictor
function

f = E[X_t | X_{-∞}^{t-1}] = E[X_t | X_{t-d}^{t-1}]    a.s.

for some fixed lag size d. Markov chains are perhaps the most widely encountered
class of probability models exhibiting this dependence. The NAR(d), that is nonlinear AR(d), model is another example, widely studied in the context of time series
(see [4] for details). Assuming additive noise, the NAR(d) model may be expressed
as

X_t = f(X_{t-d}^{t-1}) + ε_t.    (3)
We note that in this formulation {ε_t} plays the role of the innovation process for
X_t, and the function f(·) describes the information on X_t contained within its past
history.
In what follows, we restrict the discussion to stochastic processes satisfying certain
constraints on the memory decay, more precisely we are assuming that {X_t} is an
exponentially α-mixing process. Loosely stated, this assumption enables the process
to have a law of large numbers associated with it, as well as a certain version of
the central limit theorem. These results are the basis for analyzing the asymptotic
behavior of certain parameter estimators (see, [9] for further details), but other than
that this assumption is merely stated here for the sake of completeness. We note in
passing that this assumption may be substantially weakened, and still allow similar
results to hold, but requires more background and notation to be introduced, and
therefore is not pursued in what follows (the reader is referred to [1] for further
details).
3.2 Objectives
Knowing the L2 predictor function, f, allows optimal prediction of future samples,
where optimal is meant in the sense that the predicted value is the closest to the true
value of the next sample point, in the mean squared error sense. It therefore seems a
reasonable strategy to try and learn the optimal predictor function, based on some
finite realization of the stochastic process, which we will denote D_N = {X_t}_{t=1}^{N+d}.
Note that for N ≫ d, the number of sample points is approximately N.
We therefore define our objective as follows. Based on the data D_N, we seek the
'best' approximation to f, the L2 predictor function, using the MEM(n, d) predictor
f_n^θ ∈ M_n as the approximator model.
More precisely, define the least squares (LS) parameter estimator for the MEM(n, d)
as

θ̂_{n,N} = argmin_{θ∈Ω} Σ_{t=d+1}^{N} [X_t - f_n(X_{t-d}^{t-1}; θ)]²,

where f_n(X_{t-d}^{t-1}; θ) is f_n^θ evaluated at the point X_{t-d}^{t-1}, and define the LS functional
estimator as

f̂_{n,N} = f_n |_{θ = θ̂_{n,N}},

where θ̂_{n,N} is the LS parameter estimator.
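A minimal sketch of this fit, reusing the mem_predict sketch above; the flat parameter packing and the use of a generic quasi-Newton optimizer are illustrative assumptions (the EM algorithm of [2] would be the natural alternative):

```python
import numpy as np
from scipy.optimize import minimize

def ls_fit(x, n, d, seed=0):
    def unpack(p):
        i = 0
        theta = p[i:i + n * d].reshape(n, d);    i += n * d
        theta0 = p[i:i + n];                     i += n
        theta_g = p[i:i + n * d].reshape(n, d);  i += n * d
        theta_g0 = p[i:i + n]
        return theta, theta0, theta_g, theta_g0

    def cost(p):
        params = unpack(p)
        # summed squared one-step prediction error over the sample
        return sum((x[t] - mem_predict(x[t - d:t][::-1], *params)) ** 2
                   for t in range(d, len(x)))

    # 2n(d+1) parameters, matching Omega in R^{2n(d+1)}
    p0 = 0.1 * np.random.default_rng(seed).standard_normal(2 * n * (d + 1))
    return unpack(minimize(cost, p0, method="L-BFGS-B").x)
```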
Now, define the functional estimator risk as

MSE[f, f̂_{n,N}] ≡ E_{D_N} [ ∫ |f - f̂_{n,N}|² dν ],

where ν is the d-fold probability measure of the process {X_t}. In this work we
maintain that the integration is over some compact domain I^d ⊂ ℝ^d, though recent
work [3] has shown that the results can be extended to ℝ^d, at the price of slightly
slower convergence rates.
It is reasonable, and quite customary, to expect a 'good' estimator to be one that
is asymptotically unbiased. However, growth of the sample size itself need not, and
in general does not, mean that the estimator is 'becoming' unbiased. Consequently,
as a figure of merit, we may restrict attention to the approximation capacity of the
model. That is we ask, what is the error in approximating a given class of predictor
functions, using the MEM(n, d) (i.e., {M_n}) as the approximator class.
To measure this figure, we define the optimal risk as

MSE[f, f_n*] ≡ ∫ |f - f_n*|² dν,

where f_n* ≡ f_n^θ |_{θ=θ_n*} and

θ_n* = argmin_{θ∈Ω} ∫ |f - f_n^θ|² dν,
that is, θ_n* is the parameter minimizing the expected L2 loss function. One may
think of f_n* as the 'best' predictor function in the class of approximators, i.e., the
closest approximation to the optimal predictor, given the finite complexity, n, (Le.,
finite number of parameters) of the model. Here n is simply the number of experts
(AR models) in the architecture.
3.3 Upper Bounds on the Mean Squared Error and Universal Approximation Results
Consider first the case where we are simply interested in approximating the function
f, assuming it belongs to some class of functions. The question then arises as to
how well one may approximate f by a MEM architecture comprising n experts. The
answer to this question is given in the following proposition, the proof of which can
be found in [8].
Proposition 3.1 (Optimal risk bounds) Consider the class of functions M_n defined in (2) and assume that the optimal predictor f belongs to a Sobolev class
containing r continuous derivatives in L2. Then the following bound holds:

MSE[f, f_n*] ≤ c / n^{2r/d},    (4)

where c is a constant independent of n.
PROOF SKETCH: The proof proceeds by first approximating the normalized gating
function g_j(·) by polynomials of finite degree, and then using the fact that polynomials can approximate functions in Sobolev space to within a known degree of
approximation.
The following main theorem, establishing upper bounds on the functional estimator
risk, constitutes the main result of this paper. The proof is given in [9].
Theorem 3.1 (Upper bounds on the estimator risk)
Suppose the stochastic process obeys the conditions set forth in the previous section.
Assume also that the optimal predictor function, f, possesses r smooth derivatives
in L2. Then for N sufficiently large we have

MSE[f, f̂_{n,N}] ≤ c / n^{2r/d} + 2 m_n* / N + o(1/N),    (5)

where r is the number of continuous derivatives in L2 that f is assumed to possess,
d is the lag size, and N is the size of the data set D_N.
PROOF SKETCH: The proof proceeds by a standard stochastic Taylor expansion of
the loss around the point θ_n*. Making common regularity assumptions [1] and using
the assumption on the α-mixing nature of the process allows one to establish the
usual asymptotic normality results, from which the result follows.

We use the notation m_n* to denote the effective number of parameters. More precisely, m_n* = Tr{B_n*(A_n*)^{-1}}, and the matrices A_n* and B_n* are related to the Fisher
information matrix in the case of misspecified estimation (see [1] for further discussion). The upper bound presented in Theorem 3.1 is related to the classic bias-variance decomposition in statistics and the obvious tradeoffs are evident by inspection.
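As a numeric illustration of this tradeoff (all constants hypothetical, with m_n* crudely proxied by the parameter count 2n(d+1)):

```python
import numpy as np

c, r, d, N = 1.0, 2, 4, 10_000        # hypothetical constants
n = np.arange(1, 200)
m_star = 2 * n * (d + 1)              # crude proxy for the effective parameter count
bound = c / n ** (2 * r / d) + 2 * m_star / N
print("bound-minimizing number of experts:", n[np.argmin(bound)])
```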
3.4 Comments
It follows from Proposition 3.1 that the class of mixtures of experts is a universal
approximator, w.r.t. the class of target functions defined for the optimal predictor.
Moreover, Proposition 3.1 establishes the rate of convergence of the approximator,
and therefore relates the approximation error to the number of experts used in the
architecture (n) .
Theorem 3.1 enhances this result, as it relates the sample complexity and model
complexity, for this class of models. The upper bounds may be used in defining
model selection criteria, based on upper bound minimization. In this setting, we
may use an estimator of the stochastic error bound (i.e., the estimation error), to
penalize the complexity of the model, in the spirit of AIC, MDL, etc. (see [8] for
further discussion).
At a first glance it may seem surprising to find that a combination of linear models
is a universal function approximator. However, one must keep in mind that the
global model is nonlinear, due to the gating network. Nevertheless, this result does
imply, at least on a theoretical ground, that one may restrict the MEM(n, d) to
be locally linear, without loss of generality. Thus, taking a simple local model,
enabling efficient and tractable learning algorithms (see [2]), still results in a rich
global model.
3.5 Comparison
Recently, Mhaskar [5] proved upper bounds on a feedforward sigmoidal neural network, for target functions in the same class as we consider herein, i.e., the Sobolev
class. The bound we have obtained in Proposition 3.1, and its extension in [8],
demonstrate that w.r.t. to this particular target class, neural networks and mixtures of experts are equivalent. That is, both models attain optimal precision in
the degree of approximation results (see [5] for details of this argument). Keeping
in mind the advantages of the MEM with respect to learning and generalization
[2], we believe that our results lend further credence to the emerging view as to
the superiority of modular architectures over the more standard feed forward neural
networks.
Moreover, the detailed proof of Proposition 3.1 (see [8]) actually takes the
MEM(n, d) to be made up of local constants. That is, the linear experts are degenerated to constant functions. Thus, one may conjecture that mixtures of experts
are in fact a more general class than feedforward neural networks, though we have
no proof of this as of yet.
Two nonlinear alternatives, generalizing standard statistical linear models, have
been pointed out in the introductory section. These are Tong's TAR (threshold
autoregressive) model [7], and the more general SDM (state dependent models)
introduced by Priestley. The latter models can be reduced to a TAR model by
imposing a more restrictive structure (for further details see [6]) . We have shown,
based on the results described above (see [9]), that the MEM may be viewed as a
generalization of the SDM (and consequently of the TAR model). The relation to
the state dependent models is of particular interest, as the mixtures of experts is
structured on state dependence as well. Exact statement and proofs of these facts
can be found in [9] .
We should also note that we have conducted several numerical experiments, comparing the performance of the MEM with other approaches. We tested the model
on both synthetic as well as real-world data. Without any fine-tuning of parameters
we found the performance of the MEM, with linear expert functions, to compare
very favorably with other approaches (such as TAR, ARMA and neural networks).
Details of the numerical results may be found in [9]. Moreover, the model also
provided a very natural and intuitive segmentation of the process.
4 Discussion
In this work we have pursued a novel non-linear model for prediction in stationary
time series. The mixture of experts model (MEM) has been demonstrated to be a
rich model, endowed with a sound theoretical basis, and compares favorably with
other, state of the art, nonlinear models .
We hope that the results of this study will aid in establishing the MEM as, yet another, powerful tool for the study of time-series applicable to the fields of statistics,
economics, and signal processing.
References
[1] Domowitz, I. and White, H. "Misspecified Models with Dependent Observations", Journal of Econometrics, vol. 20: 35-58, 1982.
[2] Jordan, M. and Jacobs, R. "Hierarchical Mixtures of Experts and the EM
Algorithm" , Neural Computation, vol. 6, pp. 181-214, 1994.
[3] Maiorov, V. and Meir. V. "Approximation Bounds for Smooth Functions in
C(1Rd) by Neural and Mixture Networks", submitted for publication, December
1996.
[4] Meyn, S.P. and Tweedie, R.L. (1993) Markov Chains and Stochastic Stability,
Springer-Verlag, London.
[5] Mhaskar, H. (1996) "Neural Networks for Optimal Approximation of Smooth
and Analytic Functions", Neural Computation vol. 8(1), pp. 164-177.
[6] Priestley M.B. Non-linear and Non-stationary Time Series Analysis, Academic
Press, New York, 1988.
[7] Tong, H. Threshold Models in Non-linear Time Series Analysis, Springer Verlag, New York, 1983.
[8] Zeevi, A.J., Meir, R. and Maiorov, V. "Error Bounds for Functional Approximation and Estimation Using Mixtures of Experts", EE Pub. CC-132, Electrical Engineering Department, Technion, 1995.
[9] Zeevi, A.J., Meir, R. and Adler, R.J. "Non-linear Models for Time Series Using
Mixtures of Experts", EE Pub. CC-150, Electrical Engineering Department,
Technion, 1996.
PART IV: ALGORITHMS AND ARCHITECTURE
225 | 1,204 | For valid generalization, the size of the
weights is more important than the size
of the network
Peter L. Bartlett
Department of Systems Engineering
Research School of Information Sciences and Engineering
Australian National University
Canberra, 0200 Australia
Peter.Bartlett@anu.edu.au
Abstract
This paper shows that if a large neural network is used for a pattern
classification problem, and the learning algorithm finds a network
with small weights that has small squared error on the training
patterns, then the generalization performance depends on the size
of the weights rather than the number of weights. More specifically, consider an ℓ-layer feed-forward network of sigmoid units, in
which the sum of the magnitudes of the weights associated with
each unit is bounded by A. The misclassification probability converges to an error estimate (that is closely related to squared error
on the training set) at rate O((cA)^{ℓ(ℓ+1)/2} √((log n)/m)), ignoring
log factors, where m is the number of training patterns, n is the
input dimension, and c is a constant. This may explain the generalization performance of neural networks, particularly when the
number of training examples is considerably smaller than the number of weights. It also supports heuristics (such as weight decay
and early stopping) that attempt to keep the weights small during
training.
1
Introduction
Results from statistical learning theory give bounds on the number of training examples that are necessary for satisfactory generalization performance in classification
problems, in terms of the Vapnik-Chervonenkis dimension of the class of functions
used by the learning system (see, for example, [13, 5]). Baum and Haussler [4]
used these results to give sample size bounds for multi-layer threshold networks
that grow at least as quickly as the number of weights (see also [7]). However,
for pattern classification applications the VC-bounds seem loose; neural networks
often perform successfully with training sets that are considerably smaller than the
number of weights. This paper shows that for classification problems on which neural networks perform well, if the weights are not too big, the size of the weights
determines the generalization performance.
In contrast with the function classes and algorithms considered in the VC-theory,
neural networks used for binary classification problems have real-valued outputs,
and learning algorithms typically attempt to minimize the squared error of the
network output over a training set. As well as encouraging the correct classification,
this tends to push the output away from zero and towards the target values of
{-1, 1}. It is easy to see that if the total squared error of a hypothesis on m
examples is no more than mε, then on no more than mε/(1-α)² of these examples
can the hypothesis have either the incorrect sign or magnitude less than α, since
each such example contributes at least (1-α)² to the total squared error.
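A small synthetic check of this counting argument (the assert holds for any data, by Markov's inequality):

```python
import numpy as np

rng = np.random.default_rng(1)
m, alpha = 1000, 0.5
y = rng.choice([-1.0, 1.0], size=m)
h = y + 0.4 * rng.standard_normal(m)    # hypothesis outputs near the targets
eps = np.mean((h - y) ** 2)             # total squared error is m * eps
bad = np.sum((np.sign(h) != y) | (np.abs(h) < alpha))
# each "bad" example contributes at least (1 - alpha)^2 to the squared error
assert bad <= m * eps / (1 - alpha) ** 2
print(bad, m * eps / (1 - alpha) ** 2)
```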
The next section gives misclassification probability bounds for hypotheses that are
"distinctly correct" in this way on most examples. These bounds are in terms of
a scale-sensitive version of the VC-dimension, called the fat-shattering dimension.
Section 3 gives bounds on this dimension for feedforward sigmoid networks, which
imply the main results. The proofs are sketched in Section 4. Full proofs can be
found in the full version [2].
2 Notation and bounds on misclassification probability
Denote the space of input patterns by X. The space of labels is {-1, 1}. We
assume that there is a probability distribution P on the product space X × {-1, 1}
that reflects both the relative frequency of different input patterns and the relative
frequency of an expert's classification of those patterns. The learning algorithm
uses a class of real-valued functions, called the hypothesis class H. An hypothesis h
is correct on an example (x, y) if sgn(h(x)) = y, where sgn(a) : ℝ → {-1, 1} takes
value 1 iff a ≥ 0, so the misclassification probability (or error) of h is defined as

er_P(h) = P{(x, y) ∈ X × {-1, 1} : sgn(h(x)) ≠ y}.
The crucial quantity determining misclassification probability is the fat-shattering
dimension of the hypothesis class H. We say that a sequence x_1, ..., x_d of d points
from X is shattered by H if functions in H can give all classifications of the sequence.
That is, for all b = (b_1, ..., b_d) ∈ {-1, 1}^d there is an h in H satisfying sgn(h(x_i)) =
b_i. The VC-dimension of H is defined as the size of the largest shattered sequence.¹
For a given scale parameter γ > 0, we say that a sequence x_1, ..., x_d of d points
from X is γ-shattered by H if there is a sequence r_1, ..., r_d of real values such that
for all b = (b_1, ..., b_d) ∈ {-1, 1}^d there is an h in H satisfying (h(x_i) - r_i) b_i ≥ γ.
The fat-shattering dimension of H at γ, denoted fat_H(γ), is the size of the largest
γ-shattered sequence. This dimension reflects the complexity of the functions in the
class H, when examined at scale γ. Notice that fat_H(γ) is a nonincreasing function
of γ. The following theorem gives generalization error bounds in terms of fat_H(γ).
A related result, that applies to the case of no errors on the training set, will appear
in [12].
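The fat-shattering definition can be made concrete with a brute-force check; the following sketch (purely illustrative) tests γ-shattering for a finite pool of hypotheses using one natural choice of witness values r_i, so it yields a sufficient certificate only, since real hypothesis classes are infinite:

```python
import itertools
import numpy as np

def gamma_shattered(points, hypotheses, gamma):
    # vals[j, i] = h_j(x_i); try the per-point midrange as witness values r_i
    vals = np.array([[h(x) for x in points] for h in hypotheses])
    r = 0.5 * (vals.min(axis=0) + vals.max(axis=0))
    for bits in itertools.product([-1.0, 1.0], repeat=len(points)):
        b = np.array(bits)
        # need some h with b_i * (h(x_i) - r_i) >= gamma for every i
        if not any(np.all(b * (v - r) >= gamma) for v in vals):
            return False
    return True
```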
Theorem 1 Define the input space X, hypothesis class H, and probability distribution P on X × {-1, 1} as above. Let 0 < δ < 1/2, and 0 < γ < 1. Then,
with probability 1 - δ over the training sequence (x_1, y_1), ..., (x_m, y_m) of m labelled
examples, every hypothesis h in H satisfies

er_P(h) < (1/m) |{i : |h(x_i)| < γ or sgn(h(x_i)) ≠ y_i}| + ε(γ, m, δ),

where

ε²(γ, m, δ) = (2/m) (d ln(50em/d) log₂(1250m) + ln(4/δ)),    (1)

and d = fat_H(γ/16).

¹In fact, according to the usual definition, this is the VC-dimension of the class of
thresholded versions of functions in H.
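The error term ε(γ, m, δ) above is easy to evaluate numerically; a minimal sketch, taking the fat-shattering value d as given:

```python
import numpy as np

def theorem1_eps(d, m, delta):
    eps2 = (2.0 / m) * (d * np.log(50 * np.e * m / d) * np.log2(1250 * m)
                        + np.log(4.0 / delta))
    return np.sqrt(eps2)

print(theorem1_eps(d=20, m=100_000, delta=0.05))
```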
2.1 Comments
It is informative to compare this result with the standard VC-bound. In that case,
the bound on misclassification probability is

er_P(h) < (1/m) |{i : sgn(h(x_i)) ≠ y_i}| + ( (c/m) (d log(m/d) + log(1/δ)) )^{1/2},
where d = VCdim(H) and c is a constant. We shall see in the next section that
there are function classes H for which VCdim(H) is infinite but fat_H(γ) is finite
for all γ > 0; an example is the class of functions computed by any two-layer neural network with an arbitrary number of parameters but constraints on the size of
the parameters. It is known that if the learning algorithm and error estimates are
constrained to make use of the sample only by considering the proportion of training examples that hypotheses misclassify, there are distributions P for which the
second term in the VC-bound above cannot be improved by more than log factors.
Theorem 1 shows that it can be improved if the learning algorithm makes use of
the sample by considering the proportion of training examples that are correctly
classified and have |h(x_i)| < γ. It is possible to give a lower bound (see the full
paper [2]) which, for the function classes considered here, shows that Theorem 1
also cannot be improved by more than log factors.
The idea of using the magnitudes of the values of h(x_i) to give a more precise
estimate of the generalization performance was first proposed by Vapnik in [13]
(and was further developed by Vapnik and co-workers). There it was used only for
the case of linear hypothesis classes. Results in [13] give bounds on misclassification
probability for a test sample, in terms of values of h on the training and test data.
This result is extended in [11], to give bounds on misclassification probability (that
is, for unseen data) in terms of the values of h on the training examples. This is
further extended in [12] to more general function classes, to give error bounds that
are applicable when there is a hypothesis with no errors on the training examples.
Lugosi and Pinter [9] have also obtained bounds on misclassification probability in
terms of similar properties of the class of functions containing the true regression
function (conditional expectation of Y given x). However, their results do not extend
to the case when the true regression function is not in the class of real-valued
functions used by the estimator.
It seems unnatural that the quantity γ is specified in advance in Theorem 1, since
it depends on the examples. The full paper [2] gives a similar result in which the
statement is made uniform over all values of this quantity.
3 The fat-shattering dimension of neural networks
Bounds on the VC-dimension of various neural network classes have been established
(see [10] for a review), but these are all at least linear in the number of parameters.
In this section, we give bounds on the fat-shattering dimension for several neural
network classes.
We assume that the input space X is some subset of ℝ^n. Define a sigmoid unit
as a function from ℝ^k to ℝ, parametrized by a vector of weights w ∈ ℝ^k. The
unit computes x ↦ σ(x · w), where σ is a fixed bounded function satisfying a
Lipschitz condition. (For simplicity, we ignore the offset parameter. It is equivalent
to including an extra input with a constant value.) A multi-layer feed-forward
sigmoid network of depth ℓ is a network of sigmoid units with a single output unit,
which can be arranged in a layered structure with ℓ layers, so that the output of
a unit passes only to the inputs of units in later layers. We will consider networks
in which the weights are bounded. The relevant norm is the ℓ1 norm: for a vector
w ∈ ℝ^k, define ||w||_1 = Σ_{i=1}^{k} |w_i|. The following result gives a bound on the fatshattering dimension of a (bounded) linear combination of real-valued functions, in
terms of the fat-shattering dimension of the basis function class. We can apply this
result in a recursive fashion to give bounds for single output feed-forward networks.
Theorem 2 Let F be a class of functions that map from X to [-M/2, M/2], such
that 0 ∈ F and, for all f in F, -f ∈ F. For A > 0, define the class H of
weight-bounded linear combinations of functions from F as

H = { Σ_{i=1}^{k} w_i f_i : k ∈ ℕ, f_i ∈ F, ||w||_1 ≤ A }.

Suppose γ > 0 is such that d = fat_F(γ/(32A)) ≥ 1. Then fat_H(γ) ≤
(cM²A²d/γ²) log₂(MAd/γ), for some constant c.
Gurvits and Koiran [6] have shown that the fat-shattering dimension of the class of two-layer networks with bounded output weights and linear threshold hidden units is $O\!\left((A^2 n^2/\gamma^2)\log(n/\gamma)\right)$ when $X = \mathbb{R}^n$. As a special case, Theorem 2 improves this result.
Notice that the fat-shattering dimension of a function class is not changed by more than a constant factor if we compose the functions with a fixed function satisfying a Lipschitz condition (like the standard sigmoid function). Also, for $X = \mathbb{R}^n$ and $H = \{x \mapsto x_i\}$ we have $\mathrm{fat}_H(\gamma) \le \log n$ for all $\gamma$. Finally, for $H = \{x \mapsto w \cdot x : w \in \mathbb{R}^n\}$ we have $\mathrm{fat}_H(\gamma) \le n$ for all $\gamma$. These observations, together with Theorem 2, give the following corollary. The $\tilde{O}(\cdot)$ notation suppresses log factors. (Formally, $f = \tilde{O}(g)$ if $f = o(g^{1+\alpha})$ for all $\alpha > 0$.)
Corollary 3  If $X \subset \mathbb{R}^n$ and $H$ is the class of two-layer sigmoid networks with the weights in the output unit satisfying $\|w\|_1 \le A$, then $\mathrm{fat}_H(\gamma) = \tilde{O}(A^2 n/\gamma^2)$.
If $X = \{x \in \mathbb{R}^n : \|x\|_\infty \le B\}$ and the hidden unit weights are also bounded, then $\mathrm{fat}_H(\gamma) = \tilde{O}(B^2 A^6 (\log n)/\gamma^4)$.
Applying Theorem 2 to this result gives the following result for deeper networks.
Notice that there is no constraint on the number of hidden units in any layer, only
on the total magnitude of the weights associated with a processing unit.
Corollary 4  For some constant $c$, if $X \subset \mathbb{R}^n$ and $H$ is the class of depth-$\ell$ sigmoid networks in which the weight vector $w$ associated with each unit beyond the first layer satisfies $\|w\|_1 \le A$, then $\mathrm{fat}_H(\gamma) = \tilde{O}\!\left(n(cA)^{\ell(\ell-1)}/\gamma^{2(\ell-1)}\right)$.
If $X = \{x \in \mathbb{R}^n : \|x\|_\infty \le B\}$ and the weights in the first layer units also satisfy $\|w\|_1 \le A$, then $\mathrm{fat}_H(\gamma) = \tilde{O}\!\left(B^2(cA)^{\ell(\ell+1)}/\gamma^{2\ell}\,\log n\right)$.
In the first part of this corollary, the network has fat-shattering dimension similar
to the VC-dimension of a linear network. This formalizes the intuition that when
the weights are small, the network operates in the "linear part" of the sigmoid, and
so behaves like a linear network.
3.1 Comments
Consider a depth-$\ell$ sigmoid network with bounded weights. The last corollary and Theorem 1 imply that if the training sample size grows roughly as $B^2 A^{\ell^2}/\epsilon^2$, then the misclassification probability of a network is within $\epsilon$ of the proportion of training examples that the network classifies as "distinctly correct."
These results give a plausible explanation for the generalization performance of
neural networks. If, in applications, networks with many units have small weights
and small squared error on the training examples, then the VC-dimension (and
hence number of parameters) is not as important as the magnitude of the weights
for generalization performance.
It is possible to give a version of Theorem 1 in which the probability bound is
uniform over all values of a complexity parameter indexing the function classes
(using the same technique mentioned at the end of Section 2.1). For the case of
sigmoid network classes, indexed by a weight bound, minimizing the resulting bound
on misclassification probability is equivalent to minimizing the sum of a sample error
term and a penalty term involving the weight bound. This supports the use of two
popular heuristic techniques, weight decay and early stopping (see, for example, [8]),
which aim to minimize squared error while maintaining small weights.
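As a hedged illustration of the weight-decay side of this observation, the sketch below minimizes squared error plus an $\ell_1$ penalty on the weights by subgradient descent; the data, penalty strength, and step size are arbitrary choices of ours, not values from the paper.

```python
import numpy as np

def train_weight_decay(X, y, lam=0.01, lr=0.1, steps=500):
    """Subgradient descent on (1/m)*||Xw - y||^2 + lam*||w||_1.
    The l1 penalty keeps ||w||_1 small, which is the quantity that
    the fat-shattering bounds above depend on."""
    m, n = X.shape
    w = np.zeros(n)
    for _ in range(steps):
        grad = (2.0 / m) * X.T @ (X @ w - y) + lam * np.sign(w)
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
y = np.sign(X @ rng.normal(size=10))
w = train_weight_decay(X, y)
print("l1 norm of learned weights:", np.abs(w).sum())
```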
These techniques give bounds on the fat-shattering dimension and hence generalization performance for any function class that can be expressed as a bounded number
of compositions of either bounded-weight linear combinations or scalar Lipschitz
functions with functions in a class that has finite fat-shattering dimension. This
includes, for example, radial basis function networks.
4 Proofs

4.1 Proof sketch of Theorem 1
For $A \subseteq S$, where $(S, \rho)$ is a pseudometric space, a set $T \subseteq S$ is an $\epsilon$-cover of $A$ if for all $a$ in $A$ there is a $t$ in $T$ with $\rho(t, a) < \epsilon$. We define $N(A, \epsilon, \rho)$ as the size of the smallest $\epsilon$-cover of $A$. For $x = (x_1, \ldots, x_m) \in X^m$, define the pseudometric $d_{\ell_\infty(x)}$ on the set $S$ of functions defined on $X$ by $d_{\ell_\infty(x)}(f, g) = \max_i |f(x_i) - g(x_i)|$. For a set $A$ of functions, denote $\max_{x \in X^m} N(A, \epsilon, d_{\ell_\infty(x)})$ by $N_\infty(A, \epsilon, m)$. Alon et al. [1] have obtained the following bound on $N_\infty$ in terms of the fat-shattering dimension.
Lemma 5  For a class $F$ of functions that map from $\{1, \ldots, n\}$ to $\{1, \ldots, b\}$ with $\mathrm{fat}_F(1) \le d$,
$$\log_2 N_\infty(F, 2, n) < 1 + \log_2(nb^2)\,\log_2\!\left(\sum_{i=0}^{d} \binom{n}{i} b^i\right),$$
provided that $n \ge 1 + \log_2\!\left(\sum_{i=0}^{d} \binom{n}{i} b^i\right)$.
For $\gamma > 0$ define $\pi_\gamma : \mathbb{R} \to [-\gamma, \gamma]$ as the piecewise-linear squashing function satisfying $\pi_\gamma(a) = \gamma$ if $a \ge \gamma$, $\pi_\gamma(a) = -\gamma$ if $a \le -\gamma$, and $\pi_\gamma(a) = a$ otherwise. For a class $H$ of real-valued functions, define $\pi_\gamma(H)$ as the set of compositions of $\pi_\gamma$ with functions in $H$.
Lemma 6  For $X$, $H$, $P$, $\delta$, and $\gamma$ as in Theorem 1,
$$P^m\!\left\{ z : \exists h \in H,\ \mathrm{er}_P(h) \ge \left(\frac{2}{m}\ln\frac{4\,N_\infty(\pi_\gamma(H), \gamma/2, 2m)}{\delta}\right)^{1/2} + \frac{1}{m}\bigl|\{i : |h(x_i)| < \gamma \text{ or } \mathrm{sgn}(h(x_i)) \ne y_i\}\bigr| \right\} < \delta.$$
The proof of the lemma relies on the observation that
$$P^m\left\{z : \exists h \in H,\ \mathrm{er}_P(h) \ge \epsilon + \tfrac{1}{m}|\{i : |h(x_i)| < \gamma \text{ or } \mathrm{sgn}(h(x_i)) \ne y_i\}|\right\}$$
$$\le\ P^m\left\{z : \exists h \in H,\ P\bigl(|\pi_\gamma(h(x)) - \gamma y| \ge \gamma\bigr) \ge \epsilon + \tfrac{1}{m}|\{i : \pi_\gamma(h(x_i)) \ne \gamma y_i\}|\right\}.$$
We then use a standard symmetrization argument and the permutation argument introduced by Vapnik and Chervonenkis to bound this probability by the probability under a random permutation of a double-length sample that a related property holds. For any fixed sample, we can then use Pollard's approach of approximating the hypothesis class using a $(\gamma/2)$-cover, except that in this case the appropriate cover is with respect to the $\ell_\infty$ pseudometric. Applying Hoeffding's inequality gives the lemma.
To prove Theorem 1, we need to bound the covering numbers in terms of the fat-shattering dimension. It is easy to apply Lemma 5 to a quantized version of the function class to get such a bound, taking advantage of the range constraint imposed by the squashing function $\pi_\gamma$.
4.2 Proof sketch of Theorem 2
For $x = (x_1, \ldots, x_m) \in X^m$, define the pseudometric $d_{\ell_1(x)}$ on the class of functions defined on $X$ by $d_{\ell_1(x)}(f, g) = \frac{1}{m}\sum_{i=1}^{m} |f(x_i) - g(x_i)|$. Similarly, define $d_{\ell_2(x)}(f, g) = \bigl(\frac{1}{m}\sum_{i=1}^{m}(f(x_i) - g(x_i))^2\bigr)^{1/2}$. If $A$ is a set of functions defined on $X$, denote $\max_{x \in X^m} N(A, \gamma, d_{\ell_1(x)})$ by $N_1(A, \gamma, m)$, and similarly for $N_2(A, \gamma, m)$. The idea of the proof of Theorem 2 is to first derive a general upper bound on an $\ell_1$ covering number of the class $H$, and then apply the following result (which is implicit in the proof of Theorem 2 in [3]) to give a bound on the fat-shattering dimension.
Lemma 7  For a class $F$ of $[0, 1]$-valued functions on $X$ satisfying $\mathrm{fat}_F(4\gamma) \ge d$, we have $\log_2 N_1(F, \gamma, d) \ge d/32$.
To derive an upper bound on $N_1(\gamma, H, m)$, we start with the bound that Lemma 5 implies on the $\ell_\infty$ covering number $N_\infty(F, \gamma, m)$ for the class $F$ of hidden unit functions. Since $d_{\ell_2}(f, g) \le d_{\ell_\infty}(f, g)$, this implies the following bound on the $\ell_2$ covering number for $F$ (provided $m$ satisfies the condition required by Lemma 5; it turns out that the theorem is trivial otherwise):
$$\log_2 N_2(F, \gamma, m) < 1 + d\log_2\!\left(\frac{4emM}{d\gamma}\right)\log_2\!\left(\frac{9mM^2}{\gamma^2}\right). \qquad (2)$$
Next, we use the following result on approximation in $\ell_2$, which A. Barron attributes to Maurey.

Lemma 8 (Maurey)  Suppose $G$ is a Hilbert space and $F \subseteq G$ has $\|f\| \le b$ for all $f$ in $F$. Let $f$ be an element from the convex closure of $F$. Then for all $k \ge 1$ and all $c \ge b^2 - \|f\|^2$, there are functions $\{f_1, \ldots, f_k\} \subseteq F$ such that
$$\left\| f - \frac{1}{k}\sum_{i=1}^{k} f_i \right\|^2 \le \frac{c}{k}.$$
This implies that any element of $H$ can be approximated to a particular accuracy (with respect to $\ell_2$) using a fixed linear combination of a small number of elements of $F$. It follows that we can construct an $\ell_2$ cover of $H$ from the $\ell_2$ cover of $F$; using Lemma 8 and Inequality 2 shows that
$$\log_2 N_2(H, \gamma, m) < \frac{2M^2A^2}{\gamma^2}\left(1 + d\log_2\!\left(\frac{8emMA}{d\gamma}\right)\log_2\!\left(\frac{36mM^2A^2}{\gamma^2}\right)\right).$$
Now, Jensen's inequality implies that $d_{\ell_1(x)}(f, g) \le d_{\ell_2(x)}(f, g)$, which gives a bound on $N_1(H, \gamma, m)$. Comparing with the lower bound given by Lemma 7 and solving for $m$ gives the result. A more refined analysis for the neural network case involves bounding $N_2$ for successive layers, and solving to give a bound on the fat-shattering dimension of the network.
Acknowledgements
Thanks to Andrew Barron, Jonathan Baxter, Mike Jordan, Adam Kowalczyk, Wee
Sun Lee, Phil Long, John Shawe-Taylor, and Robert Slaviero for helpful discussions
and comments.
References

[1] N. Alon, S. Ben-David, N. Cesa-Bianchi, and D. Haussler. Scale-sensitive dimensions, uniform convergence, and learnability. In Proceedings of the 1993 IEEE Symposium on Foundations of Computer Science. IEEE Press, 1993.

[2] P. L. Bartlett. The sample complexity of pattern classification with neural networks: the size of the weights is more important than the size of the network. Technical report, Department of Systems Engineering, Australian National University, 1996. (Available by anonymous ftp from syseng.anu.edu.au:pub/peter/TR96d.ps.)

[3] P. L. Bartlett, S. R. Kulkarni, and S. E. Posner. Covering numbers for real-valued function classes. Technical report, Australian National University and Princeton University, 1996.

[4] E. Baum and D. Haussler. What size net gives valid generalization? Neural Computation, 1(1):151-160, 1989.

[5] A. Blumer, A. Ehrenfeucht, D. Haussler, and M. K. Warmuth. Learnability and the Vapnik-Chervonenkis dimension. J. ACM, 36(4):929-965, 1989.

[6] L. Gurvits and P. Koiran. Approximation and learning of convex superpositions. In Computational Learning Theory: EuroCOLT'95, 1995.

[7] D. Haussler. Decision theoretic generalizations of the PAC model for neural net and other learning applications. Inform. Comput., 100(1):78-150, 1992.

[8] J. Hertz, A. Krogh, and R. G. Palmer. Introduction to the Theory of Neural Computation. Addison-Wesley, 1991.

[9] G. Lugosi and M. Pinter. A data-dependent skeleton estimate for learning. In Proc. 9th Annu. Conference on Comput. Learning Theory. ACM Press, New York, NY, 1996.

[10] W. Maass. Vapnik-Chervonenkis dimension of neural nets. In M. A. Arbib, editor, The Handbook of Brain Theory and Neural Networks, pages 1000-1003. MIT Press, Cambridge, 1995.

[11] J. Shawe-Taylor, P. L. Bartlett, R. C. Williamson, and M. Anthony. A framework for structural risk minimisation. In Proc. 9th Annu. Conference on Comput. Learning Theory. ACM Press, New York, NY, 1996.

[12] J. Shawe-Taylor, P. L. Bartlett, R. C. Williamson, and M. Anthony. Structural risk minimization over data-dependent hierarchies. Technical report, 1996.

[13] V. N. Vapnik. Estimation of Dependences Based on Empirical Data. Springer-Verlag, New York, 1982.
An Adaptive WTA using Floating Gate Technology
W. Fritz Kruger, Paul Hasler, Bradley A. Minch, and Christof Koch
California Institute of Technology
Pasadena, CA 91125
(818) 395 - 2812
stretch@klab.caltech.edu
Abstract
We have designed, fabricated, and tested an adaptive WinnerTake-All (WTA) circuit based upon the classic WTA of Lazzaro,
et al [IJ. We have added a time dimension (adaptation) to this
circuit to make the input derivative an important factor in winner
selection. To accomplish this, we have modified the classic WTA
circuit by adding floating gate transistors which slowly null their
inputs over time. We present a simplified analysis and experimental data of this adaptive WTA fabricated in a standard CMOS 2f.tm
process.
1 Winner-Take-All Circuits
In a WTA network, each cell has one input and one output. For any set of inputs, the outputs will all be at zero except for the one which is from the cell with the maximum input. One way to accomplish this is by a global nonlinear inhibition coupled with a self-excitation term [2]. Each cell inhibits all others while exciting itself; thus a cell with even a slightly greater input than the others will excite itself up to its maximal state and inhibit the others down to their minimal states. The WTA function is important for many classical neural nets that involve competitive learning, vector quantization and feature mapping. The classic WTA network characterized by Lazzaro et al. [1] is an elegant, simple circuit that shares just one common line among all cells of the network to propagate the inhibition.
Figure 1: The circuit diagram of a two-input winner-take-all circuit.

Our motivation to add adaptation comes from the idea of saliency maps. Picture a saliency map as a large number of cells, each of which encodes an analog value reflecting some measure of the importance (saliency) of its input. We would like
to pay attention to the most salient cell, so we employ a WTA function to tell us
where to look. But if the input doesn't change, we never look away from that one
cell. We would like to introduce some concept of fatigue and refraction to each cell
such that after winning for some time, it tires, allowing other cells to win, and then
it must wait some time before it can win again. We call this circuit an adaptive
WTA.
In this paper, we present an adaptive WTA based upon the classic WTA; Figure 1 shows a two-input, adaptive WTA circuit. The difference between the classic and adaptive WTA is that M4 and M5 are pFET single transistor synapses. A single transistor synapse [3] is either an nFET or pFET transistor with a floating gate and a tunneling junction. This enhancement results in the ability of each transistor to adapt to its input bias current. The adaptation is a result of the electron tunneling and hot-electron injection modifying the charge on the floating gate; equilibrium is established when the tunneling current equals the injection current. The circuit is devised in such a way that these are negative feedback mechanisms; consequently the output voltage will always return to the same steady state voltage determined by its bias current, regardless of the DC input level. Like the autozeroing amplifier [4], the adaptive WTA is an example of a circuit where the adaptation occurs as a natural part of the circuit operation.
2 pFET hot-electron injection and electron tunneling
Before considering the behavior of the adaptive WTA, we will review the processes of electron tunneling and hot-electron injection in pFETs. In subthreshold operation, we can describe the channel current of a pFET ($I_p$) for a differential change in gate voltage, $\Delta V_g$, around a fixed bias current $I_{so}$, as $I_p = I_{so}\exp(-\kappa_p \Delta V_g / U_T)$, where $\kappa_p$ is the amount by which $\Delta V_g$ affects the surface potential of the pFET, and $U_T$ is $kT/q$. We will assume for this paper that all transistors are identical.
Figure 2: pFET hot-electron injection. (a) Band diagram of a subthreshold pFET transistor under conditions favorable for hot-electron injection. (b) Measured pFET injection efficiency versus the drain-to-channel voltage for four source currents. Injection efficiency is the ratio of injection current to source current. At $\Phi_{dc}$ equal to 8.2 V, the injection efficiency increases by a factor of e for an increase in $\Phi_{dc}$ of 250 mV.

First, we consider electron tunneling. We start with the classic model of electron tunneling through a silicon-SiO$_2$ system [5]. As in the autozeroing amplifier [4], the tunneling current is only a weak function of the floating-gate voltage swing through the region of subthreshold currents; therefore we will approximate the tunneling junction as a current source supplying a current $I_{tun0}$ to the floating gate.
Second, we derive a simple model of pFET hot-electron injection. Figure 2a shows the band diagram of a pFET operating at bias conditions which are favorable for hot-electron injection. Hot-hole impact ionization creates electrons at the drain edge of the depletion region. These secondary electrons travel back into the channel region, gaining energy as they go. When their energy exceeds that of the SiO$_2$ barrier, they can be injected through the oxide to the floating gate. The hole impact ionization current is proportional to the source current, and is an exponential function of the voltage drop from channel to drain ($\Phi_{dc}$). The injection current is proportional to the hole impact ionization current and is an exponential function of $\Phi_{dc}$. We will neglect the dependence on the floating-gate voltage for a given source current and $\Phi_{dc}$, as we did in [4]. Figure 2b shows measured injection efficiency for several source currents, where injection efficiency is the ratio of the injection current to the source current. The injection efficiency is independent of source current and is approximately linear over a 1-2 V swing in $\Phi_{dc}$; therefore we model the injection efficiency as proportional to $\exp(\Delta\Phi_{dc}/V_{inj})$ within that 1 to 2 V swing, where $V_{inj}$ is a measured device parameter, which for our process is 250 mV at a bias $\Phi_{dc} = 8.2$ V, and $\Delta\Phi_{dc}$ is the change in $\Phi_{dc}$ from the bias level. An increasing input voltage will increase the pFET surface potential by capacitive coupling to the floating gate. Increasing the pFET surface potential will increase the source current, thereby decreasing $\Phi_{dc}$ for a fixed output voltage and lowering the injection efficiency.
Figure 3: Illustration of the dynamics for the winning and losing input voltages. (a) Measured $V_i$ versus time for an upgoing and a downgoing input current step. The initial input voltage change due to the input step is much smaller than the voltage change due to the adaptation. (b) Adaptation time of a losing input voltage for several tunneling voltages. The adaptation time is the time from the start of the input current step to the time the input voltage is within 10% of its steady state voltage. A larger tunneling current decreases the adaptation time by increasing the tunneling current supplied to the floating gate.
3 Two-input Adaptive WTA
We will outline the general procedure used to derive the equations that describe the two-input WTA shown in Fig. 1. We first observe that transistors M1, M2, and M3 make up a differential pair. Regardless of any adaptation, the middle node $V$ and the output currents are set by the input voltages ($V_1$ and $V_2$), which are set by the input currents, as in the classic WTA [1]. The dynamics for high frequency operation are also similar to the classic WTA circuit. Next, we can write the two Kirchhoff Current Law (KCL) equations at $V_1$ and $V_2$, which relate the changes in $V_1$ and $V_2$ to the two input currents and the floating gate voltages. Finally, we can write the two KCL equations at the two floating gates $V_{fg1}$ and $V_{fg2}$, which relate the changes in the floating gate voltages to $V_1$ and $V_2$. This procedure is directly extendable to multiple inputs. A full analysis of these equations is very difficult and will be described in another paper.
For this discussion , we present a simplified analysis to develop the intuition of the
circuit operation. At sufficiently high frequencies, the tunneling and injection currents do not adapt the floating gate voltages sufficiently fast to keep the input
voltages at their steady state levels. At these frequencies, the adaptive WTA acts
like the classic WTA circuit with one small difference. A change in the input voltages, Vl or V2 is linearly related to V by the capacitive coupling (~Vl = - ?; ~ V),
where this relationship is exponential in the classic WTA. There is always some capacitance C2 , even if not explicitly drawn due to the overlap capacitance from the
floating gate to drain. This property gives the designer the added freedom to modify the gain. We will assume the circuit operates in its intended operating regime
where the floating gate transistors settle sufficiently fast such that their channel
Figure 4: Measured change in steady state input voltages as a function of bias current.
(a) Change in the two steady state output voltages as a function of the bias current of the
second input. The bias current of the first input was held fixed at 8.14nA. (b) Change in
the RMS noise of the two output voltages as a function of the bias current of the second
input. The RMS noise is much higher for the losing input than for the winning input.
Note that where the two bias currents crOSS roughly corresponds to the location where the
RMS noise on the two input voltages is equal.
We will assume the circuit operates in its intended operating regime, where the floating gate transistors settle sufficiently fast that their channel current equals the input currents:
$$I_i = I_{so}\exp\!\left(-\frac{\kappa\,\Delta V_{fgi}}{U_T}\right) \quad\Longrightarrow\quad \frac{dI_i}{dt} = -\frac{\kappa I_i}{U_T}\,\frac{dV_{fgi}}{dt} \qquad (1)$$
for all inputs indexed by i, but not necessarily fast enough for the floating gates to
settle to their final steady state levels.
To develop some initial intuition, we shall begin by considering one half of the two-input WTA: transistors M1, M2 and M4 of Figure 1. First, we notice that $I_{out1}$ is equal to $I_b$ (the current through transistor M1); note that this is not true for the multiple-input case. By equating these two currents we get an equation for $V$ as $V = \kappa V_1 - \kappa V_b$, where we will assume that $V_b$ is a fixed bias voltage. Assuming the input current equals the current through M4, $V_1$ obeys the equation
$$(\kappa C_1 + C_2)\,\frac{dV_1}{dt} = \frac{C_T U_T}{\kappa I_1}\,\frac{dI_1}{dt} + I_{tun0}\left(\frac{I_1}{I_{so}}\exp\!\left(-\frac{\Delta V_1}{V_{inj}}\right) - 1\right) \qquad (2)$$
where $C_T$ is the total capacitance connected to the floating gate. The steady state of (2) is
$$\Delta V_1^{ss} = V_{inj}\,\ln\!\left(\frac{I_1}{I_{so}}\right) \qquad (3)$$
which is exactly the same expression for each input in a multiple-input WTA. We get a linear differential equation by making the substitution $X = \exp(\Delta V_1 / V_{inj})$ [4], and we get solutions similar to the behavior of the autozeroing amplifier. Figure 3a shows measured data for an upgoing and a downgoing current step. The input current change results in an initial fast change in the input voltage, and the input voltage then adapts to its steady state voltage, which is a much greater voltage change. From the voltage difference between the steady states, we get that $V_{inj}$ is roughly 500 mV.
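To illustrate the single-input adaptation dynamics, the sketch below integrates Equation 2 with forward Euler and checks the steady state against Equation 3. All device constants are assumed values chosen for illustration, so the time scale is only qualitative.

```python
import numpy as np

UT, kappa = 0.025, 0.7
C1, C2, CT = 1e-12, 0.1e-12, 1.2e-12     # assumed capacitances, farads
Itun0, Iso, Vinj = 1e-15, 10e-9, 0.5     # assumed currents (A) and Vinj (V)

def simulate(I_of_t, dt=0.01, T=2000.0):
    """Forward-Euler integration of Equation 2:
    (kappa*C1 + C2) dV1/dt = (CT*UT/(kappa*I1)) dI1/dt
                             + Itun0*((I1/Iso)*exp(-V1/Vinj) - 1)."""
    steps = int(T / dt)
    V1 = np.zeros(steps)
    I_prev = I_of_t(0.0)
    for n in range(1, steps):
        I = I_of_t(n * dt)
        dV1dt = ((CT * UT / (kappa * I)) * (I - I_prev) / dt
                 + Itun0 * ((I / Iso) * np.exp(-V1[n-1] / Vinj) - 1.0)
                 ) / (kappa * C1 + C2)
        V1[n] = V1[n-1] + dt * dV1dt
        I_prev = I
    return V1

step = lambda t: 14.12e-9 if t > 500.0 else 10.77e-9  # step as in Fig. 3a
V1 = simulate(step)
print(V1[-1], "vs Eq. 3:", Vinj * np.log(14.12e-9 / Iso))
```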
Figure 5: Experimental time-trace measurements of the output current and voltage for small differential input current steps. (a) Time traces for small differential current steps around nearly identical bias currents of 8.6 nA. (b) Time traces for small differential current steps around two different bias currents of 8.7 nA and 0.88 nA. In the classic WTA, the output currents would show no response to the input current steps.
Returning to the two-input case, we get two floating gate equations by assuming that the currents through M4 and M5 are equal to their respective input currents and writing the KCL equations at each floating gate. If $V_1$ and $V_2$ do not cross each other in the circuit operation, then one can easily solve these KCL equations. Assume without loss of generality that $V_1$ is the winning voltage, which implies that $\Delta V = \kappa\,\Delta V_1$. The initial input voltage change before the floating gate adaptation, due to a step in the two input currents of $I_1 \to I_1'$ and $I_2 \to I_2'$, is
$$\Delta V_1 = \frac{C_T U_T}{\kappa C_1}\,\ln\!\left(\frac{I_1'}{I_1}\right), \qquad \Delta V_2 \approx \frac{C_T U_T}{C_2}\,\ln\!\left(\frac{I_1 I_2'}{I_1' I_2}\right) \qquad (4)$$
for $C_2$ much less than $\kappa C_1$. In this case, $V_1$ moves on the order of the floating gate voltage change, but $V_2$ moves on the order of the floating gate change amplified up by $\kappa C_1/C_2$. The response of $\Delta V_1$ is governed by an equation identical to (2) of the earlier half-analysis, and therefore results in a small change in $V_1$. Also, any perturbation of $V$ is only slightly amplified at $V_1$ due to the feedback; therefore any noise at $V$ will only be slightly amplified into $V_1$. The restoration of $V_2$ is much quicker than that of the $V_1$ node if $C_2$ is much less than $\kappa C_1$; therefore after the initial input step, one can safely assume that $V$ is nearly constant. The voltage at $V$ is amplified by about $-\kappa C_1/C_2$ at $V_2$; therefore any noise at $V$ is amplified at the losing voltage, but not at the winning voltage, as the data in Fig. 4b show. The losing dynamics are identical to the step response of an autozeroing amplifier [4]. Figure 3b shows the variation of the adaptation time versus the percent input current change for several values of the tunneling voltage.
through the crossover point. If we get more than a sufficient Vi decrease to reach
the starting V2 equilibrium, then the rest of the input change is manifested by an
increase in V2 ? If the voltage V2 crosses the voltage Vi, then V will be set by the
new steady state, and Vi is governed by losing dynamics until Vi :::::l V2 ? At this
point Vi is nearly constant and V2 is governed by losing dynamics. This analysis is
directly extendible to arbitrary number of inputs.
Figure 5 shows some characteristic traces from the two-input circuit. Recall that the
winning node is that with the lowest voltage, which is reflected in its corresponding
high output current. In Fig. 5a, we see that as an input step is applied, the output
current jumps and then begins to adapt to a steady state value. When the inputs
are nearly equal, the steady state outputs are nearly equal; but when the inputs
are different, the steady state output is greater for the cell with the lesser input.
In general, the input current change that is the largest after reaching the previous equilibrium becomes the new equilibrium. This additional decrease in $V_1$ would lead to an amplified increase in the other voltage, since the losing stage roughly looks like an autozeroing amplifier with the common node as the input terminal. The extent to which the inputs do not equal this largest input is manifested as a proportionally larger input voltage. The other voltage would return to equilibrium by slowly, linearly decreasing in voltage due to the tunneling current. This process will continue until $V_1$ equals $V_2$. Note in general that the inputs with lower bias
currents have a slight starting advantage over the inputs with higher bias currents.
Figure 5b illustrates the advantage of the adaptive WTA over the classic WTA. In
the classic WTA, the output voltage and current would not change throughout the
experiment, but the adaptive WTA responds to changes in the input. The second
input step does not evoke a response because there was not enough time to adapt
to steady state after the previous step; but the next step immediately causes it to
win. Also note in both of these traces that the noise is very large in the losing
node and small in the winner because of the gain differences (see Figure 4b).
References

[1] J. Lazzaro, S. Ryckebusch, M. A. Mahowald, and C. A. Mead, "Winner-Take-All Networks of O(N) Complexity", NIPS 1, Morgan Kaufmann Publishers, San Mateo, CA, 1989, pp. 703-711.

[2] S. Grossberg, "Adaptive Pattern Classification and Universal Recoding: I. Parallel Development and Coding of Neural Feature Detectors." Biological Cybernetics, vol. 23, 121-134, 1988.

[3] P. Hasler, C. Diorio, B. A. Minch, and C. Mead, "Single Transistor Learning Synapses", NIPS 7, MIT Press, 1995, 817-824. Also at http://www.pcmp.caltech.edu/anaprose/paul.

[4] P. Hasler, B. A. Minch, C. Diorio, and C. Mead, "An autozeroing amplifier using pFET Hot-Electron Injection", ISCAS, Atlanta, 1996, III-325 - III-328. Also at http://www.pcmp.caltech.edu/anaprose/paul.

[5] M. Lenzlinger and E. H. Snow, "Fowler-Nordheim tunneling into thermally grown SiO2," J. Appl. Phys., vol. 40, pp. 278-283, 1969.
A Micropower Analog VLSI HMM State Decoder for Wordspotting
John Lazzaro and John Wawrzynek
CS Division, UC Berkeley
Berkeley, CA 94720-1776
lazzaro@cs.berkeley.edu, johnw@cs.berkeley.edu
Richard Lippmann
MIT Lincoln Laboratory
Room S4-121, 244 Wood Street
Lexington, MA 02173-0073
rpl@sst.ll.mit.edu
Abstract
We describe the implementation of a hidden Markov model state
decoding system, a component for a wordspotting speech recognition system. The key specification for this state decoder design is
microwatt power dissipation; this requirement led to a continuoustime, analog circuit implementation. We characterize the operation
of a 10-word (81 state) state decoder test chip.
1. INTRODUCTION
In this paper, we describe an analog implementation of a common signal processing
block in pattern recognition systems: a hidden Markov model (HMM) state decoder.
The design is intended for applications such as voice interfaces for portable devices
that require micropower operation. In this section, we review HMM state decoding
in speech recognition systems.
An HMM speech recognition system consists of a probabilistic state machine, and
a method for tracing the state transitions of the machine for an input speech waveform. Figure 1 shows a state machine for a simple recognition problem: detecting the presence of keywords ("Yes," "No") in conversational speech (non-keyword
speech is captured by the "Filler" state). This type of recognition where keywords
are detected in unconstrained speech is called wordspotting (Lippmann et al., 1994).
Figure 1. A two-keyword ("Yes," states 1-10, "No," states 11-20) HMM.
Our goal during speech recognition is to trace out the most likely path through this
state machine that could have produced the input speech waveform. This problem
can be partially solved in a local fashion, by examining short (80 ms. window)
overlapping (15 ms. frame spacing) segments of the speech waveform. We estimate
the probability bi(n) that the signal in frame n was produced by state i, using static
pattern recognition techniques.
To improve the accuracy of these local estimates, we need to integrate information
over the entire word. We do this by creating a set of state variables for the machine,
called likelihoods, that are incrementally updated at every frame. Each state $i$ has a real-valued likelihood $\phi_i(n)$ associated with it. Most states in Figure 1 have a stereotypical form: a state $i$ that has a self-loop input, an input from state $i-1$, and an output to state $i+1$, with the self-loop and exit transitions being equally probable. For states in this topology, the update rule
$$\log(\phi_i(n)) = \log(b_i(n)) + \log(\phi_i(n-1) + \phi_{i-1}(n-1)) \qquad (1)$$
lets us estimate the "log likelihood" value $\log(\phi_i(n))$ for the state $i$; a log encoding is used to cope with the large operating range of $\phi_i(n)$ values.
negative numbers, whose magnitudes increase with each frame. We limit the range
of log likelihood values by using a renormalization technique: if any log likelihood
in the system falls below a minimum value, a positive constant is added to all log
likelihoods in the machine.
Figure 2 shows a complete system which uses HMM state decoding to perform
wordspotting. The "Feature Generation" and "Probability Generation" blocks comprise the static pattern recognition system, producing the probabilities bi(n) at each
frame. The "State Decode" block updates the log likelihood variables $\log(\phi_i(n))$. The "Word Detect" block uses a simple online algorithm to flag the occurrence of a word. The filler log likelihood is subtracted from each keyword end-state log likelihood, and when this difference exceeds a fixed threshold a keyword is detected.
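A minimal software sketch of one frame of this decoding loop, including the renormalization rule described above and the word-detect test. This is our own illustration of the algorithm the chip implements, not the chip's circuitry; the range limit and threshold values are assumptions.

```python
import numpy as np

L_MIN = -35.0  # assumed lower limit of the supported log likelihood range

def decode_frame(log_phi, log_b, left):
    """One frame of Equation 1. log_phi[i] is log(phi_i(n-1)), log_b[i] is
    log(b_i(n)), and left[i] indexes the state feeding state i (use i itself
    where there is no distinct predecessor)."""
    new = log_b + np.logaddexp(log_phi, log_phi[left])
    # Renormalize: if any state falls below the minimum, add a positive
    # constant to every log likelihood in the machine.
    if new.min() < L_MIN:
        new += L_MIN - new.min()
    return new

def word_detected(log_phi, end_state, filler_state, threshold=5.0):
    """Flag a keyword when its end-state log likelihood exceeds the filler
    log likelihood by a fixed threshold."""
    return log_phi[end_state] - log_phi[filler_state] > threshold
```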
Figure 2. Block diagram for the two-keyword spotting system.
2. ANALOG CIRCUITS FOR STATE DECODING
Figure 3a shows an analog discrete-time implementation of Equation 1. The delay element (labeled $z^{-1}$) acts as an edge-triggered sampled analog delay, with full-scale voltage input and output.
state decoder (15 ms. clock period). The "combinatorial" analog circuits must
settle within the clock period. A clock period of 15 ms. allows a relatively long
settling time, which enables us to make extensive use of submicroampere currents
in the circuit design. The microwatt power consumption design specification drives
us to use such small currents. As a result of submicroampere circuit operation, the
MOS transistors in Figure 3a are operating in the weak-inversion regime.
Figure 3. (a) Analog discrete-time single-state decoder. (b) Enhanced version of
(a), includes the renormalization system. (c) Continuous-time extension of (b).
Equation 1 uses two types of variables: probabilities and log likelihoods. In the implementation shown in Figure 3, we choose unidirectional current as the signal type for probability, and large-signal voltage as the signal type for log likelihood. We can understand the dimensional scaling of these signal types by analyzing the floating-well transistor labeled (4) in Figure 3a. The equation
$$V_m \log(\phi_i(n)) = V_m \log(b_i(n)) + g_i(n-1) + V_m \log\!\left(\frac{I_h}{I_0}\right) \qquad (2)$$
describes the behavior of this transistor, where $V_m = (V_o/\kappa_p)\ln(10)$, $g_i(n-1)$ is the output of the delay element, and $I_0$, $\kappa_p$ and $V_o$ are MOS parameters. Both $I_0$ and $\kappa_p$ in Equation 2 are functions of $V_{sb}$. However, the floating-well topology of the transistor ensures $V_{sb} = 0$ for this device.
The input probability $b_i(n)$ is scaled by the unidirectional current $I_h$, defining the current flowing through the transistor. The current $I_h$ is the largest current that keeps the transistor in the weak-inversion regime. We define $I_l$ to be the smallest value for $I_h b_i(n)$ that allows the circuit to settle within the clock period. The ratio $I_h/I_l$ sets the supported range of $b_i(n)$. In the test-chip fabrication process, $I_h/I_l \approx 10{,}000$ is feasible, which is sufficient for accurate wordspotting. Likewise, the unitless $\log(\phi_i(n))$ is scaled by the voltage $V_m$ to form a large-signal voltage encoding of log likelihood. A nominal value for $V_m$ is 85 mV in the test-chip process. To support a log likelihood range of 35 (the necessary range for accurate wordspotting), a large-signal voltage range of 3 volts (i.e. $35V_m$) is required.
The term $g_i(n-1)$ in Equation 2 is shown as the output of the circuit labeled (1) in Figure 3a. This circuit computes a function that approximates the desired expression $V_m\log(\phi_i(n-1) + \phi_{i-1}(n-1))$, if the transistors in the circuit operate in the weak-inversion regime.
The computed log likelihood $\log(\phi_i(n))$ in Equation 1 decreases every frame. The circuit shown in Figure 3a does not behave in this way: the voltage $V_m\log(\phi_i(n))$ increases every frame. This difference in behavior is attributable to the constant term $V_m\log(I_h/I_0)$ in Equation 2, which is not present in Equation 1, and is always larger than the negative contribution from $V_m\log(b_i(n))$. Figure 3b adds a new circuit (labeled (2)) to Figure 3a, that allows the constant term in Equation 2 to be altered under control of the binary input $V$. If $V$ is $V_{dd}$, the circuit in Figure 3b is described by
$$V_m \log(\phi_i(n)) = V_m \log(b_i(n)) + g_i(n-1) + V_m \log\!\left(\frac{I_v}{I_0}\right), \qquad \text{(3a)}$$
where the term $V_m\log(I_v/I_0)$ should be less than or equal to zero. If $V$ is grounded, the circuit is described by
$$V_m \log(\phi_i(n)) = V_m \log(b_i(n)) + g_i(n-1) + V_m \log\!\left(\frac{I_h}{I_v}\right), \qquad \text{(3b)}$$
where the term $V_m\log(I_h/I_v)$ should have a positive value of at least several hundred millivolts. The goal of this design is to create two different operational modes for
the system. One mode, described by Equation 3a, corresponds to the normal state
decoder operation described in Equation 1. The other mode, described by Equation
3b, corresponds to the renormalization procedure, where a positive constant is added
to all likelihoods in the system. During operation, a control system alternates
between these two modes, to manage the dynamic range of the system.
Section 1 formulated HMMs as discrete-time systems. However, there are significant
advantages in replacing the $z^{-1}$ element in Figure 3b with a continuous-time delay
circuit. The switching noise of a sampled delay is eliminated. The power consumption and cell area specifications also benefit from continuous-time implementation.
Fundamentally, a change from discrete-time to continuous-time is not only an implementation change, but also an algorithmic change. Figure 3c shows a continuoustime state decoder whose observed behavior is qualitatively similar to a discrete-time
decoder. The delay circuit, labeled (3), uses a linear transconductance amplifier in
a follower-integrator configuration. The time constant of this delay circuit should
be set to the frame rate of the corresponding discrete-time state decoder.
For correct decoder behavior over the full range of input probability values, the
transconductance amplifer in the delay circuit must have a wide differential-inputvoltage linear range. In the test chip presented in this paper, an amplifier with a
small linear range was used. To work around the problem, we restricted the input
probability currents in our experiments to a small multiple of II.
Figure 4 shows a state decoding system that corresponds to the grammar shown
in Figure 1. Each numbered circle corresponds to the circuit shown in Figure 3c.
The signal flows of this architecture support a dense layout: a rectangular array of
single-state decoding circuits, with input current signal entering from the top edge
of the array, and end-state log likelihood outputs exiting from the right edge of the
array. States connect to their neighbors via the Vi-l(t) and Vi(t) signals shown in
Figure 3c. For notational convenience, in this figure we define the unidirectional
current Pi(t) to be Ihbi{t).
In addition to the single-state decoder circuit, several other circuits are required.
The "Recurrent Connection" block in Figure 4 implements the loopback connecting
the filled circles in Figure 1. We implement this block using a 3-input version of
the voltage follower circuit labeled (1) in Figure 3c. A simple arithmetic circuit
implements the "Word Detect" block. To complete the system, a high fan-in/fanout control circuit implements the renormalization algorithm. The circuit takes
as input the log likelihood signals from all states in the system, and returns the
binary signal V to the control input of all states. This control signal determines
whether the single-state decoding circuits exhibit normal behavior (Equation 3a) or
renormalization behavior (Equation 3b).
Figure 4. State decoder system for grammar shown in Figure 1.
3. STATE DECODER TEST CHIP
We fabricated a state decoder test chip in the 2 µm, n-well process of Orbit Semiconductor, via MOSIS. The chip has been fully tested and is functional. The chip decodes a grammar consisting of eight ten-state word models and a filler state. The state decoding and word detection sections of the chip contain 2000 transistors, and measure 586 x 2807 µm (586 x 2807 λ, λ = 1.0 µm). In this section, we show test results from the chip, in which we apply a temporal pattern of probability currents to the ten states of one word in the model (numbered 1 through 10) and observe the log likelihood voltage of the final state of the word (state 10).
Figure 5 contains simulated results, allowing us to show internal signals in the
system. Figure 5a shows the temporal pattern of input probability currents $p_1 \ldots p_{10}$ that correspond to a simulated input word. Figure 5b shows the log likelihood voltage waveform for the end-state of the word (state 10). The waveform plateaus at $L_h$, the limit of the operating range of the state decoder system. During this
plateau this state has the largest log likelihood in the system. Figure 5c is an
expanded version of Figure 5b, showing in detail the renormalization cycles. Figure
5d shows the output computed by the "Word Detect" block in Figure 4. Note
the smoothness of the waveform, unlike Figure 5c. By subtracting the filler-state
log likelihood from the end-state log likelihood, the Word Detect block cancels the
common-mode renormalization waveform.
Figure 6 shows a series of four experiments that confirm the qualitative behavior
of the state decoder system. This figure shows experimental data recorded from
the fabricated test chip. Each experiment consists of playing a particular pattern
of input probability currents $p_1 \ldots p_{10}$ to the state decoder many times; for each
repetition, a certain aspect of the playback is systematically varied. We measure the
peak value of the end state log likelihood during each repetition, and plot this value
as a function of the varied input parameter. For each experiment shown in Figure
6, the left plot describes the input pattern, while the right plot is the measured end-state log likelihood data. The experiment shown in Figure 6a involves presenting complete word patterns of varying durations to the decoder. As expected, words with unrealistically short durations have end-state responses below $L_h$, and would
not produce successful word detection.
Figure 5. Simulation of the state decoder: (a) input patterns, (b), (c) end-state response, (d) word-detection response.
Figure 6. Measured chip data for end-state likelihoods for long, short, and incomplete pattern sequences.
The experiment shown in Figure 6b also involves presenting patterns of varying durations to the decoder, but the word patterns are presented "backwards," with input current $p_{10}$ peaking first, and input current $p_1$ peaking last. The end-state response never reaches $L_h$, even at long word durations, and (correctly) would not trigger a word detection.
The experiments shown in Figure 6c and 6d involve presenting partially complete
word patterns to the decoder. In both experiments, the duration of the complete
word pattern is 250 ms. Figure 6c shows words with truncated endings, while Figure
6d shows words with truncated beginnings. In Figure 6c, end-state log likelihood is
plotted as a function of the last excited state in the pattern; in Figure 6d, end-state
log likelihood is plotted as a function of the first excited state in the pattern. In
both plots the end-state log likelihood falls below Lh as significant information is
removed from the word pattern.
While performing the experiments shown in Figure 6, the state-decoder and word-detection sections of the chip had a measured average power consumption of 141 nW ($V_{dd}$ = 5 V). More generally, however, the power consumption, input probability
Acknowledgments
We thank Herve Bourlard, Dan Hammerstrom, Brian Kingsbury, Alan Kramer,
Nelson Morgan, Stylianos Perissakis, Su-lin Wu, and the anonymous reviewers for
comments on this work. Sponsored by the Office of Naval Research (URI-NOOOI492-J-1672) and the Department of Defense Advanced Research Projects Agency.
Opinions, interpretations, conclusions, and recommendations are those of the authors and are not necessarily endorsed by the United States Air Force.
Reference
Lippmann, R. P., Chang, E. I., and Jankowski, C. R. (1994). "Wordspotter training
using figure-of-merit back-propagation," Proceedings International Conference on
Acoustics, Speech, and Signal Processing, Vol. 1, pp. 389-392.
GTM: A Principled Alternative
to the Self-Organizing Map
Christopher M. Bishop
Markus Svensen
Christopher K. I. Williams
C.M.Bishop@aston.ac.uk
svensjfm@aston.ac.uk
C.K.I.Williams@aston.ac.uk
Neural Computing Research Group
Aston University, Birmingham, B4 7ET, UK
http://www.ncrg.aston.ac.uk/
Abstract
The Self-Organizing Map (SOM) algorithm has been extensively
studied and has been applied with considerable success to a wide
variety of problems. However, the algorithm is derived from heuristic ideas and this leads to a number of significant limitations. In
this paper, we consider the problem of modelling the probability density of data in a space of several dimensions in terms of
a smaller number of latent, or hidden, variables. We introduce a
novel form of latent variable model, which we call the GTM algorithm (for Generative Topographic Mapping), which allows general
non-linear transformations from latent space to data space, and
which is trained using the EM (expectation-maximization) algorithm. Our approach overcomes the limitations of the SOM, while
introducing no significant disadvantages. We demonstrate the performance of the GTM algorithm on simulated data from flow diagnostics for a multi-phase oil pipeline.
1
Introduction
The Self-Organizing Map (SOM) algorithm of Kohonen (1982) represents a form of
unsupervised learning in which a set of unlabelled data vectors t_n (n = 1, ..., N)
in a D-dimensional data space is summarized in terms of a set of reference vectors
having a spatial organization corresponding (generally) to a two-dimensional sheet¹.
¹Biological metaphor is sometimes invoked when motivating the SOM procedure. It
should be stressed that our goal here is not neuro-biological modelling, but rather the
development of effective algorithms for data analysis.
While this algorithm has achieved many successes in practical applications, it also
suffers from some major deficiencies, many of which are highlighted in Kohonen
(1995) and reviewed in this paper.
From the perspective of statistical pattern recognition, a fundamental goal in unsupervised learning is to develop a representation of the distribution p(t) from which
the data were generated. In this paper we consider the problem of modelling p(t)
in terms of a number (usually two) of latent or hidden variables. By considering a
particular class of such models we arrive at a formulation in terms of a constrained
Gaussian mixture which can be trained using the EM (expectation-maximization)
algorithm. The topographic nature of the representation is an intrinsic feature of
the model and is not dependent on the details of the learning process. Our model
defines a generative distribution p(t) and will be referred to as the GTM (Generative
Topographic Mapping) algorithm (Bishop et al., 1996a).
2
Latent Variables
The goal of a latent variable model is to find a representation for the distribution
p(t) of data in a D-dimensional space t = (t_1, ..., t_D) in terms of a number L of latent variables x = (x_1, ..., x_L). This is achieved by first considering a non-linear
function y(x; W), governed by a set of parameters W, which maps points x in the
latent space into corresponding points y(x; W) in the data space. Typically we
are interested in the situation in which the dimensionality L of the latent space
is less than the dimensionality D of the data space, since our premise is that the
data itself has an intrinsic dimensionality which is less than D. The transformation
y(x; W) then maps the latent space into an L-dimensional non-Euclidean manifold
embedded within the data space.
If we define a probability distribution p(x) on the latent space, this will induce a corresponding distribution p(y|W) in the data space. We shall refer to p(x) as the
prior distribution of x for reasons which will become clear shortly. Since L < D,
the distribution in t-space would be confined to a manifold of dimension L and
hence would be singular. Since in reality the data will only approximately live on
a lower-dimensional manifold, it is appropriate to include a noise model for the
t vector. We therefore define the distribution of t, for given x and W, to be a
spherical Gaussian centred on y(x; W) having variance β⁻¹, so that p(t|x, W, β) ~ N(t|y(x; W), β⁻¹I). The distribution in t-space, for a given value of W, is then obtained by integration over the x-distribution

p(t|W, β) = ∫ p(t|x, W, β) p(x) dx.    (1)
For a given data set D = (t_1, ..., t_N) of N data points, we can determine the parameter matrix W, and the inverse variance β, using maximum likelihood, where the log likelihood function is given by

L(W, β) = Σ_{n=1}^{N} ln p(t_n|W, β).    (2)
In principle we can now seek the maximum likelihood solution for the weight matrix,
once we have specified the prior distribution p(x) and the functional form of the
C. M. Bishop, M. Svensen and C. K. I. Williams
356
Figure 1: We consider a prior distribution p(x) consisting of a superposition of delta
functions, located at the nodes of a regular grid in latent space. Each node x_l is mapped to a point y(x_l; W) in data space, which forms the centre of the corresponding Gaussian
distribution.
mapping y(x; W), by maximizing L(W, β). The latent variable model can be related to the Kohonen SOM algorithm by choosing p(x) to be a sum of delta functions centred on the nodes of a regular grid in latent space, p(x) = (1/K) Σ_{l=1}^{K} δ(x − x_l). This form of p(x) allows the integral in (1) to be performed analytically. Each point x_l is then mapped to a corresponding point y(x_l; W) in data space, which forms the centre of a Gaussian density function, as illustrated in Figure 1. Thus the distribution function in data space takes the form of a Gaussian mixture model p(t|W, β) = (1/K) Σ_{l=1}^{K} p(t|x_l, W, β), and the log likelihood function (2) becomes

L(W, β) = Σ_{n=1}^{N} ln { (1/K) Σ_{l=1}^{K} p(t_n|x_l, W, β) }.    (3)
This distribution is a constrained Gaussian mixture since the centres of the Gaussians cannot move independently but are related through the function y(x; W).
Note that, provided the mapping function y(x; W) is smooth and continuous, the
projected points y(x_l; W) will necessarily have a topographic ordering.
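As a concrete point of reference, the following sketch builds the pieces defined so far, the latent grid {x_l}, the basis matrix Φ, and the induced centres y(x_l; W) = Wφ(x_l), in NumPy. It is a minimal illustration, not the authors' code: the grid sizes, the Gaussian form and width of the basis functions, and the random initial W are all assumptions.

```python
import numpy as np

# Regular grid of K latent points in L = 2 dimensions (sizes are illustrative).
K_side = 10                                    # 10 x 10 grid -> K = 100
gx, gy = np.meshgrid(np.linspace(-1, 1, K_side), np.linspace(-1, 1, K_side))
X_latent = np.column_stack([gx.ravel(), gy.ravel()])     # (K, 2) grid points

# M fixed Gaussian basis functions phi_j(x) on a coarser grid (assumed form).
bx, by = np.meshgrid(np.linspace(-1, 1, 4), np.linspace(-1, 1, 4))
centres = np.column_stack([bx.ravel(), by.ravel()])      # (M, 2) basis centres
sigma = 0.5                                              # basis width (assumed)

def basis(X):
    """Basis matrix Phi with Phi[l, j] = phi_j(x_l)."""
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))              # (K, M)

Phi = basis(X_latent)                                    # (K, M)
D = 3                                                    # data dimension (example)
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(D, Phi.shape[1]))        # (D, M) weight matrix
Y = Phi @ W.T          # (K, D): mixture centres y(x_l; W) = W phi(x_l)
```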
2.1
The EM Algorithm
If we choose a particular parametrized form for y(x; W) which is a differentiable
function of W we can use standard techniques for non-linear optimization, such as
conjugate gradients or quasi-Newton methods, to find a weight matrix W*, and
inverse variance β*, which maximize L(W, β). However, our model consists of a
mixture distribution which suggests that we might seek an EM algorithm (Dempster
et al., 1977). By making a careful choice of model y(x; W) we will see that the
M-step can be solved exactly. In particular we shall choose y(x; W) to be given by
a generalized linear network model of the form
y(x; W) = W φ(x)    (4)

where the elements of φ(x) consist of M fixed basis functions φ_j(x), and W is a D × M matrix with elements w_kj. Generalized linear networks possess the same universal approximation capabilities as multi-layer adaptive networks, provided the basis functions φ_j(x) are chosen appropriately.
By setting the derivatives of (3) with respect to w_kj to zero, we obtain

Φᵀ G Φ Wᵀ = Φᵀ R T    (5)

where Φ is a K × M matrix with elements Φ_lj = φ_j(x_l), T is an N × D matrix with elements t_nk, and R is a K × N matrix with elements R_ln given by

R_ln = p(t_n|x_l, W, β) / Σ_{l'=1}^{K} p(t_n|x_{l'}, W, β),    (6)

which represents the posterior probability, or responsibility, of the mixture component l for the data point n. Finally, G is a K × K diagonal matrix with elements G_ll = Σ_{n=1}^{N} R_ln(W, β). Equation (5) can be solved for W using standard matrix inversion techniques. Similarly, optimizing with respect to β we obtain

1/β = (1/(ND)) Σ_{n=1}^{N} Σ_{l=1}^{K} R_ln ||y(x_l; W) − t_n||².    (7)
Here (6) corresponds to the E-step, while (5) and (7) correspond to the M-step.
Typically the EM algorithm gives satisfactory convergence after a few tens of cycles.
An on-line version of this algorithm can be obtained by using the Robbins-Monro
procedure to find a zero of the objective function gradient, or by using an on-line
version of the EM algorithm.
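The full cycle can be written compactly. The sketch below implements one batch EM iteration following (5)-(7); the variable names, the unregularized matrix solve, and the softmax stabilization are implementation assumptions rather than details taken from the paper.

```python
import numpy as np

def gtm_em_step(T, Phi, W, beta):
    """One EM cycle for GTM. T: (N, D) data; Phi: (K, M) basis matrix;
    W: (D, M) weights; beta: inverse noise variance."""
    N, D = T.shape
    Y = Phi @ W.T                                        # (K, D) centres

    # E-step: responsibilities R[l, n] of component l for point n (eq. 6).
    d2 = ((Y[:, None, :] - T[None, :, :]) ** 2).sum(-1)  # (K, N) squared dists
    logp = -0.5 * beta * d2
    logp -= logp.max(axis=0)                             # stabilize the softmax
    R = np.exp(logp)
    R /= R.sum(axis=0)

    # M-step for W (eq. 5): (Phi^T G Phi) W^T = Phi^T R T.
    G = np.diag(R.sum(axis=1))
    A = Phi.T @ G @ Phi
    W_new = np.linalg.solve(A, Phi.T @ R @ T).T          # (D, M)

    # M-step for beta (eq. 7), using the updated centres.
    Y_new = Phi @ W_new.T
    d2_new = ((Y_new[:, None, :] - T[None, :, :]) ** 2).sum(-1)
    beta_new = N * D / (R * d2_new).sum()
    return W_new, beta_new, R
```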
3
Relation to the Self-Organizing Map
The list below describes some of the problems with the SOM procedure and how
the GTM algorithm solves them.
1. The SOM algorithm is not derived by optimizing an objective function,
unlike GTM. Indeed it has been proven (Erwin et al., 1992) that such an
objective function cannot exist for the SOM algorithm.
2. In GTM the neighbourhood-preserving nature of the mapping is an automatic consequence of the choice of a smooth, continuous function y(x; W).
Neighbourhood-preservation is not guaranteed by the SOM procedure.
3. There is no assurance that the code-book vectors will converge using SOM.
Convergence of the batch GTM algorithm is guaranteed by the EM algorithm, and the Robbins-Monro theorem provides a convergence proof for
the on-line version.
4. GTM defines an explicit probability density function in data space. In contrast, SOM does not define a density model. Attempts have been made to
interpret the density of codebook vectors as a model of the data distribution but with limited success. The advantages of having a density model
include the ability to deal with missing data in a principled way, and the
straightforward possibility of using a mixture of such models, again trained
using EM.
Figure 2: Examples of the posterior probabilities (responsibilities) of the latent space
points at an early stage (left) and late stage (right) during the convergence of the GTM
algorithm, evaluated for a single data point from the training set in the oil-flow problem
discussed in Section 4. Note how the probabilities form a localized 'bubble' whose size
shrinks automatically during training, in contrast to the hand-crafted shrinkage of the
neighbourhood function in the SOM.
5. For SOM the choice of how the neighbourhood function should shrink over
time during training is arbitrary, and so this must be optimized empirically.
There is no neighbourhood function to select for GTM.
6. It is difficult to know by what criteria to compare different runs of the SOM
procedure. For GTM one simply compares the likelihood of the data under
the model, and standard statistical tests can be used for model comparison.
Notwithstanding these key differences, there are very close similarities between the
SOM and GTM techniques. Figure 2 shows the posterior probabilities (responsibilities) corresponding to the oil flow problem considered in Section 4. At an early
stage of training the responsibility for representing a particular data point is spread
over a relatively large region of the map. As the EM algorithm proceeds so this
responsibility 'bubble' shrinks automatically. The responsibilities (computed in the
E-step) govern the updating of Wand {3 in the M-step and, together with the
smoothing effect of the basis functions φ_j(x), play an analogous role to the neighbourhood function in the SOM procedure. While the SOM neighbourhood function
is arbitrary, however, the shrinking responsibility bubble in GTM arises directly
from the EM algorithm.
4
Experimental Results
We present results from the application of this algorithm to a problem involving
12-dimensional data arising from diagnostic measurements of oil flows along multiphase pipelines (Bishop and James, 1993). The three phases in the pipe (oil, water
and gas) can belong to one of three different geometrical configurations, corresponding to stratified, homogeneous, and annular flows, and the data set consists of 1000
points drawn with equal probability from the 3 classes. We take the latent variable
space to be two-dimensional, since our goal in this application is data visualization.
Each data point tn induces a posterior distribution p(xltn, W,(3) in x-space. However, it is often convenient to project each data point down to a unique point in
x-space, which can be done by finding the mean of the posterior distribution.
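In code, this projection is one line, given the latent grid X_latent and the responsibilities R returned by the sketch above (the data layout is again an assumption):

```python
# Posterior mean of x given t_n: a responsibility-weighted average of the grid.
# X_latent: (K, 2) latent points; R: (K, N) responsibilities from gtm_em_step.
x_mean = R.T @ X_latent      # (N, 2) projected points for visualization
```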
Figure 3: The left plot shows the posterior-mean projection of the oil flow data in the
latent space of the non-linear model. The plot on the right shows the same data set
visualized using the batch SOM procedure, in which each data point is assigned to the point
on the feature map corresponding to the codebook vector to which it is nearest. In both
plots, crosses, circles and plus-signs represent the three different oil-flow configurations.
Figure 3 shows the oil data visualized with GTM and SOM. The CPU times taken for the GTM, SOM with a Gaussian neighbourhood, and SOM with a 'top-hat'
neighbourhood were 644, 1116 and 355 seconds respectively. In each case the algorithms were run for 25 complete passes through the data set.
5
Discussion
In the fifteen years since the SOM procedure was first proposed, it has been used
with great success in a wide variety of applications. It is, however, based on heuristic concepts, rather than statistical principles, and this leads to a number of serious
deficiencies in the algorithm. There have been several attempts to provide algorithms which are similar in spirit to the SOM but which overcome its limitations,
and it is useful to compare these to the GTM algorithm.
The formulation of the elastic net algorithm described by Durbin et al. (1989)
also constitutes a Gaussian mixture model in which the Gaussian centres acquire a
spatial ordering during training. The principal difference compared with GTM is
that in the elastic net the centres are independent parameters but are encouraged
to be spatially close by the use of a quadratic regularization term, whereas in GTM
there is no regularizer on the centres and instead the centres are constrained to lie
on a manifold given by the non-linear projection of the latent-variable space. The
existence of a well-defined manifold means that the local magnification factors can
be evaluated explicitly as continuous functions of the latent variables (Bishop et al.,
1996b). By contrast, in algorithms such as SOM and the elastic net, the embedded
manifold is defined only indirectly as a discrete approximation by the locations of
the code-book vectors or Gaussian centres.
One version of the principal curves algorithm (Tibshirani, 1992) introduces a generative distribution based on a mixture of Gaussians, with a well-defined likelihood
function, which is trained by the EM algorithm. However, the number of Gaussian
components is equal to the number of data points, and the algorithm has been
formulated for one-dimensional manifolds, making this algorithm further removed
from the SOM.
MacKay (1995) considers convolutional models of the form (1) using multi-layer
network models in which a discrete sample from the latent space is interpreted
as a Monte Carlo approximation to the integration over a continuous distribution.
Although an EM approach could be applied to such a model, the M-step of the
corresponding EM algorithm would itself require a non-linear optimization.
In conclusion, we have provided an alternative algorithm to the SOM which overcomes its principal deficiencies while retaining its general characteristics. We know
of no significant disadvantage in using the GTM algorithm in place of the SOM.
While we believe the SOM procedure is superseded by the GTM algorithm, it should
be noted that the SOM has provided much of the inspiration for developing GTM.
A web site for GTM is provided at:
http://www.ncrg.aston.ac.uk/GTM/
which includes postscript files of relevant papers, software implementations in Matlab and C, and example data sets used in the development of the GTM algorithm.
Acknowledgements
This work was supported by EPSRC grant GR/K51808: Neural Networks for Visualisation
of High-Dimensional Data. Markus Svensen would like to thank the staff of the SANS
group in Stockholm for their hospitality during part of this project.
References
Bishop, C. M. and G. D. James (1993). Analysis of multiphase flows using dual-energy
gamma densitometry and neural networks. Nuclear Instruments and Methods in
Physics Research A327, 580-593.
Bishop, C. M., M. Svensen, and C. K. I. Williams (1996a). GTM: The generative topographic mapping. Technical Report NCRG/96/015, Neural Computing Research
Group, Aston University, Birmingham, UK. Submitted to Neural Computation.
Bishop, C. M., M. Svensen, and C. K. I. Williams (1996b). Magnification factors for the
GTM algorithm. In preparation.
Dempster, A. P., N. M. Laird, and D. B. Rubin (1977). Maximum likelihood from
incomplete data via the EM algorithm. Journal of the Royal Statistical Society,
B 39 (1), 1-38.
Durbin, R., R. Szeliski, and A. Yuille (1989). An analysis of the elastic net approach to the travelling salesman problem. Neural Computation 1 (3), 348-358.
Erwin, E., K. Obermayer, and K. Schulten (1992). Self-organizing maps: ordering,
convergence properties and energy functions. Biological Cybernetics 67,47-55.
Kohonen, T. (1982). Self-organized formation of topologically correct feature maps.
Biological Cybernetics 43, 59-69.
Kohonen, T. (1995). Self-Organizing Maps. Berlin: Springer-Verlag.
MacKay, D. J. C. (1995). Bayesian neural networks and density networks. Nuclear
Instruments and Methods in Physics Research, A 354 (1), 73-80.
Tibshirani, R. (1992). Principal curves revisited. Statistics and Computing 2, 183-190.
A Mixture of Experts Classifier with
Learning Based on Both Labelled and
Unlabelled Data
David J. Miller and Hasan S. Uyar
Department of Electrical Engineering
The Pennsylvania State University
University Park, Pa. 16802
miller@perseus.ee.psu.edu
Abstract
We address statistical classifier design given a mixed training set consisting of a small labelled feature set and a (generally larger) set of
unlabelled features. This situation arises, e.g., for medical images, where
although training features may be plentiful, expensive expertise is required to extract their class labels. We propose a classifier structure
and learning algorithm that make effective use of unlabelled data to improve performance. The learning is based on maximization of the total
data likelihood, i.e. over both the labelled and unlabelled data subsets. Two distinct EM learning algorithms are proposed, differing in the
EM formalism applied for unlabelled data. The classifier, based on a
joint probability model for features and labels, is a "mixture of experts"
structure that is equivalent to the radial basis function (RBF) classifier,
but unlike RBFs, is amenable to likelihood-based training. The scope of
application for the new method is greatly extended by the observation
that test data, or any new data to classify, is in fact additional, unlabelled
data - thus, a combined learning/classification operation - much akin to
what is done in image segmentation - can be invoked whenever there
is new data to classify. Experiments with data sets from the UC Irvine
database demonstrate that the new learning algorithms and structure
achieve substantial performance gains over alternative approaches.
1
Introduction
Statistical classifier design is fundamentally a supervised learning problem, wherein
a decision function, mapping an input feature vector to an output class label, is
learned based on representative (feature,class label) training pairs. While a variety
of classifier structures and associated learning algorithms have been developed, a
common element of nearly all approaches is the assumption that class labels are
known for each feature vector used for training. This is certainly true of neural networks such as multilayer perceptrons and radial basis functions (RBFs), for
which classification is usually viewed as function approximation, with the networks
trained to minimize the squared distance to target class values. Knowledge of class
labels is also required for parametric classifiers such as mixture of Gaussian classifiers, for which learning typically involves dividing the training data into subsets
by class and then using maximum likelihood estimation (MLE) to separately learn
each class density. While labelled training data may be plentiful for some applications, for others, such as remote sensing and medical imaging, the training set is in
principle vast but the size of the labelled subset may be inadequate. The difficulty
in obtaining class labels may arise due to limited knowledge or limited resources,
as expensive expertise is often required to derive class labels for features. In this
work, we address classifier design under these conditions, i.e. the training set X
is assumed to consist of two subsets, X = {X_l, X_u}, where X_l = {(x_1, c_1), (x_2, c_2), ..., (x_{N_l}, c_{N_l})} is the labelled subset and X_u = {x_{N_l+1}, ..., x_N} is the unlabelled subset¹. Here, x_i ∈ R^k is the feature vector and c_i ∈ I is the class label from the label set I = {1, 2, ..., N_c}.
The practical significance of this mixed training problem was recognized in (Lippmann 1989). However, despite this realization, there has been surprisingly little
work done on this problem. One likely reason is that it does not appear possible to incorporate unlabelled data directly within conventional supervised learning
methods such as back propagation. For these methods, unlabelled features must
either be discarded or preprocessed in a suboptimal, heuristic fashion to obtain class
label estimates. We also note the existence of work which is less than optimistic
concerning the value of unlabelled data for classification (Castelli and Cover 1994).
However, (Shashahani and Landgrebe 1994) found that unlabelled data could be
used effectively in label-deficient situations. While we build on their work, as well
as on our own previous work (Miller and Uyar 1996), our approach differs from
(Shashahani and Landgrebe 1994) in several important respects. First, we suggest
a more powerful mixture-based probability model with an associated classifier structure that has been shown to be equivalent to the RBF classifier (Miller 1996). The
practical significance of this equivalence is that unlike RBFs, which are trained in
a conventional supervised fashion, the RBF-equivalent mixture model is naturally
suited for statistical training (MLE). The statistical framework is the key to incorporating unlabelled data in the learning. A second departure from prior work is
the choice of learning criterion. We maximize the joint data likelihood and suggest
two distinct EM algorithms for this purpose, whereas the conditional likelihood was
considered in (Shashahani and Landgrebe 1994). We have found that our approach
achieves superior results. A final novel contribution is a considerable expansion of
the range of situations for which the mixed training paradigm can be applied. This
is made possible by the realization that test data or new data to classify can also be
viewed as an unlabelled set, available for "training". This notion will be clarified
in the sequel.
2
Unlabelled Data and Classification
Here we briefly provide some intuitive motivation for the use of unlabelled data.
Suppose, not very restrictively, that the data is well-modelled by a mixture density,
¹This problem can be viewed as a type of "missing data" problem, wherein the missing items are class labels. As such, it is related to, albeit distinct from, supervised learning involving missing and/or noisy feature components, addressed in (Ghahramani and Jordan 1995), (Tresp et al. 1995).
in the following way. The feature vectors are generated according to the density

f(x|θ) = Σ_{l=1}^{L} α_l f(x|θ_l),

where f(x|θ_l) is one of L component densities, with non-negative mixing parameters α_l such that Σ_{l=1}^{L} α_l = 1. Here, θ_l is the set of parameters specifying the component density, with θ = {θ_l}. The class labels are also viewed as random quantities and are assumed chosen conditioned on the selected mixture component m_i ∈ {1, 2, ..., L} and possibly on the feature value, i.e. according to the probabilities P[c_i|x_i, m_i]².
by selecting, in order, the mixture component, the feature value, and the class
label, with each selection depending in general on preceding ones. The optimal
classification rule for this model is the maximum a posteriori rule:
S(x) = arg max_k Σ_j P[c_i = k | m_i = j, x_i] P[m_i = j | x_i],    (1)

where

P[m_i = j | x_i] = α_j f(x_i|θ_j) / Σ_{l=1}^{L} α_l f(x_i|θ_l),

and where S(x) is a selector function with range in I. Since this rule is based on the a posteriori class probabilities, one can
argue that learning should focus solely on estimating these probabilities. However,
if the classifier truly implements (1), then implicitly it has been assumed that the
estimated mixture density accurately models the feature vectors. If this is not true,
then presumably estimates of the a posteriori probabilities will also be affected. This
suggests that even in the absence of class labels, the feature vectors can be used to
better learn a posteriori probabilities via improved estimation of the mixture-based
feature density. A commonly used measure of mixture density accuracy is the data
likelihood.
3
Joint Likelihood Maximization for a Mixture of Experts
Classifier
The previous section basically argues for a learning approach that uses labelled data
to directly estimate a posteriori probabilities and unlabelled data to estimate the
feature density. A criterion which essentially fulfills these objectives is the joint data
likelihood, computed over both the labelled and unlabelled data subsets. Given our
model, the joint data log-likelihood is written in the form
log L = Σ_{x_i∈X_u} log Σ_{l=1}^{L} α_l f(x_i|θ_l) + Σ_{x_i∈X_l} log Σ_{l=1}^{L} α_l P[c_i|x_i, m_i = l] f(x_i|θ_l).    (2)
This objective function consists of a "supervised" term based on X_l and an "unsupervised" term based on X_u. The joint data likelihood was previously considered
in a learning context in (Xu et al. 1995). However, there the primary justification
was simplification of the learning algorithm in order to allow parameter estimation
based on fixed point iterations rather than gradient descent. Here, the joint likelihood allows the inclusion of unlabelled samples in the learning. We next consider
two special cases of the probability model described until now.
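For concreteness, a sketch of evaluating (2) is given below for the case, introduced shortly as the GM model, where P[c_i|x_i, m_i = l] reduces to a table β with β[c, l] = P[c|m = l]. It is a minimal NumPy/SciPy illustration; the parameter layout and function signature are assumptions.

```python
import numpy as np
from scipy.stats import multivariate_normal

def joint_log_likelihood(X_u, X_l, c_l, alpha, mus, Sigmas, beta):
    """Eq. (2): joint log-likelihood over unlabelled X_u (Nu, k) and labelled
    (X_l (Nl, k), c_l (Nl,)) data. alpha: (L,) mixing weights; mus, Sigmas:
    component Gaussian parameters; beta: (Nc, L) with beta[c, l] = P[c|m=l]."""
    f_u = np.stack([alpha[l] * multivariate_normal.pdf(X_u, mus[l], Sigmas[l])
                    for l in range(len(alpha))], axis=1)      # (Nu, L)
    f_l = np.stack([alpha[l] * beta[c_l, l] *
                    multivariate_normal.pdf(X_l, mus[l], Sigmas[l])
                    for l in range(len(alpha))], axis=1)      # (Nl, L)
    return np.log(f_u.sum(axis=1)).sum() + np.log(f_l.sum(axis=1)).sum()
```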
²The usual assumption made is that components are "hard-partitioned", in a deterministic fashion, to classes. Our random model includes the "partitioned" one as a special
case. We have generally found this model to be more powerful than the "partitioned" one
(Miller and Uyar 1996).
The "partitioned" mixture (PM) model: This is the previously mentioned
case where mixture components are "hard-partitioned" to classes (Shashahani and
Landgrebe 1994). This is written M_j ∈ C_k, where M_j denotes mixture component j and C_k is the subset of components owned by class k. The posterior probabilities have the form

P[c_i = k | x] = Σ_{j: M_j∈C_k} α_j f(x|θ_j) / Σ_{l=1}^{L} α_l f(x|θ_l).    (3)
The generalized mixture (GM) model: The form of the posterior for each mixture component is now P[c_i|m_i, x_i] = P[c_i|m_i] ≡ β_{c_i|m_i}, i.e., it is independent of the feature value. The overall posterior probability takes the form

P[c_i|x_i] = Σ_j ( α_j f(x_i|θ_j) / Σ_{l=1}^{L} α_l f(x_i|θ_l) ) β_{c_i|j}.    (4)
This model was introduced in (Miller and Uyar 1996) and was shown there to lead
to performance improvement over the PM model. Note that the probabilities have
a "mixture of experts" structure, where the "gating units" are the probabilities
P[m_i = j | x_i] (in parentheses), and with the "expert" for component j just the probability β_{c_i|j}. Elsewhere (Miller 1996), it has been shown that the associated
classifier decision function is in fact equivalent to that of an RBF classifier (Moody
and Darken 1989) . Thus, we suggest a probability model equivalent to a widely
used neural network classifier, but with the advantage that, unlike the standard
RBF, the RBF-equivalent probability model is amenable to statistical training, and
hence to the incorporation of unlabelled data in the learning. Note that more powerful models P[c_i|m_i, x_i] that do condition on x_i are also possible. However, such
models will require many more parameters which will likely hurt generalization
performance, especially in a label-deficient learning context. Interestingly, for the
mixed training problem, there are two Expectation-Maximization (EM) (Dempster
et al. 1977) formulations that can be applied to maximize the likelihood associated
with a given probability model. These two formulations lead to distinct methods
that take different learning "trajectories", although both ascend in the data likelihood. The difference between the formulations lies in how the "incomplete" and
"complete" data elements are defined within the EM framework. We will develop
these two approaches for the suggested GM model.
EM-I (No class labels assumed): Distinct data interpretations are given for X_l and X_u. In this case, for X_u, the incomplete data consists of the features {x_i} and the complete data consists of {(x_i, m_i)}. For X_l, the incomplete data consists of {(x_i, c_i)}, with the complete data now the triple {(x_i, c_i, m_i)}. To clarify, in this case mixture labels are viewed as the sole missing data elements, for X_u as well as for X_l. Thus, in effect class labels are not even postulated to exist for X_u.
EM-II (Class labels assumed): The definitions for X_l are the same as before. However, for X_u, the complete data now consists of the triple {(x_i, c_i, m_i)}, i.e. class labels are also assumed missing for X_u.
For Gaussian components, we have θ_l = {μ_l, Σ_l}, with μ_l the mean vector and Σ_l
the covariance matrix. For EM-I, the resulting fixed point iterations for updating
the parameters are:
α_j^{(t+1)} = (1/N) ( Σ_{x_i∈X_l} P[m_i = j | x_i, c_i, θ^{(t)}] + Σ_{x_i∈X_u} P[m_i = j | x_i, θ^{(t)}] )    ∀j

μ_j^{(t+1)} = ( Σ_{x_i∈X_l} x_i P[m_i = j | x_i, c_i, θ^{(t)}] + Σ_{x_i∈X_u} x_i P[m_i = j | x_i, θ^{(t)}] ) / ( Σ_{x_i∈X_l} P[m_i = j | x_i, c_i, θ^{(t)}] + Σ_{x_i∈X_u} P[m_i = j | x_i, θ^{(t)}] )    ∀j

Σ_j^{(t+1)} = ( Σ_{x_i∈X_l} S_ij^{(t)} P[m_i = j | x_i, c_i, θ^{(t)}] + Σ_{x_i∈X_u} S_ij^{(t)} P[m_i = j | x_i, θ^{(t)}] ) / ( Σ_{x_i∈X_l} P[m_i = j | x_i, c_i, θ^{(t)}] + Σ_{x_i∈X_u} P[m_i = j | x_i, θ^{(t)}] )    ∀j

β_{k|j}^{(t+1)} = ( Σ_{x_i∈X_l, c_i=k} P[m_i = j | x_i, c_i, θ^{(t)}] ) / ( Σ_{x_i∈X_l} P[m_i = j | x_i, c_i, θ^{(t)}] )    ∀k, j    (5)

Here, S_ij^{(t)} ≡ (x_i − μ_j^{(t)})(x_i − μ_j^{(t)})ᵀ. New parameters are computed at iteration t+1 based on their values at iteration t. In these equations,

P[m_i = j | x_i, c_i, θ^{(t)}] = α_j^{(t)} β_{c_i|j}^{(t)} f(x_i|θ_j^{(t)}) / Σ_{m=1}^{L} α_m^{(t)} β_{c_i|m}^{(t)} f(x_i|θ_m^{(t)})

and

P[m_i = j | x_i, θ^{(t)}] = α_j^{(t)} f(x_i|θ_j^{(t)}) / Σ_{m=1}^{L} α_m^{(t)} f(x_i|θ_m^{(t)}).

For EM-II, it can be
shown that the resulting re-estimation equations are identical to those in (5) except
regarding the parameters {β_{k|j}}. The updates for these parameters now take the
form
β_{k|j}^{(t+1)} = (1/(N α_j^{(t+1)})) ( Σ_{x_i∈X_l, c_i=k} P[m_i = j | x_i, c_i, θ^{(t)}] + Σ_{x_i∈X_u} P[m_i = j, c_i = k | x_i, θ^{(t)}] ).

Here, we identify P[m_i = j, c_i = k | x_i, θ^{(t)}] = α_j^{(t)} β_{k|j}^{(t)} f(x_i|θ_j^{(t)}) / Σ_{m=1}^{L} α_m^{(t)} f(x_i|θ_m^{(t)}). In this formulation,
joint probabilities for class and mixture labels are computed for data in X_u and used in the estimation of {β_{k|j}}, whereas in the previous formulation {β_{k|j}} are updated solely on the basis of X_l. While this does appear to be a significant qualitative
difference between the two methods, both do ascend in log L, and in practice we
have found that they achieve comparable performance.
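A compact sketch of one EM-I iteration for the Gaussian GM model is given below (NumPy/SciPy; the data layout follows the earlier snippets, and no covariance regularization is included, which a practical implementation would likely need):

```python
import numpy as np
from scipy.stats import multivariate_normal

def em1_step(X_l, c_l, X_u, alpha, mus, Sigmas, beta):
    """One EM-I fixed-point iteration (5). beta: (Nc, L) class tables."""
    Nc, L = beta.shape
    f_l = np.stack([multivariate_normal.pdf(X_l, mus[j], Sigmas[j])
                    for j in range(L)], axis=1)              # (Nl, L)
    f_u = np.stack([multivariate_normal.pdf(X_u, mus[j], Sigmas[j])
                    for j in range(L)], axis=1)              # (Nu, L)
    # Posteriors P[m=j | x, c] (labelled) and P[m=j | x] (unlabelled).
    p_l = alpha * beta[c_l, :] * f_l
    p_l /= p_l.sum(axis=1, keepdims=True)
    p_u = alpha * f_u
    p_u /= p_u.sum(axis=1, keepdims=True)

    w = p_l.sum(axis=0) + p_u.sum(axis=0)                    # (L,) soft counts
    alpha_new = w / (len(X_l) + len(X_u))
    mus_new = (p_l.T @ X_l + p_u.T @ X_u) / w[:, None]
    Sigmas_new = []
    for j in range(L):
        dl, du = X_l - mus[j], X_u - mus[j]     # S_ij uses iteration-t means
        S = (p_l[:, j, None] * dl).T @ dl + (p_u[:, j, None] * du).T @ du
        Sigmas_new.append(S / w[j])
    # EM-I: the beta update uses the labelled data only.
    beta_new = np.stack([p_l[c_l == k].sum(axis=0) for k in range(Nc)])
    beta_new /= p_l.sum(axis=0)
    return alpha_new, mus_new, np.stack(Sigmas_new), beta_new
```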
4
Combined Learning and Classification
The range of application for mixed training is greatly extended by the following
observation: test data (with labels withheld), or for that matter, any new batch of data to be classified, can be viewed as a new, unlabelled data set. Hence, this new data can be taken to be X_u and used for learning (based on EM-I or EM-II) prior to its classification. What we are suggesting is a combined learning/classification
operation that can be applied whenever there is a new batch of data to classify. In
the usual supervised learning setting, there is a clear division between the learning
and classification (use) phases. In this setting, modification of the classifier for new
data is not possible (because the data is unlabelled), while for test data such modification is a form of "cheating". However, in our suggested scheme, this learning
for unlabelled data is viewed simply as part of the classification operation. This
is analogous to image segmentation, wherein we have a common energy function
that is minimized for each new image to be segmented. Each such minimization
determines a model local to the image and a segmentation for the image, Our "segmentation" is just classification, with log L playing the role of the energy function.
It may consist of one term which is always fixed (based on a given labelled training
set) and one term which is modified based on each new batch of unlabelled data to
classify. We can envision several distinct learning contexts where this scheme can
be used, as well as different ways of realizing the combined learning/classification
operation.³ One use is in classification of an image/speech archive, where each image/speaker segment is a separate data "batch". Each batch to classify can be used
as an unlabelled "training" set, either in concert with a representative labelled data
set, or to modify a design based on such a set⁴. Effectively, this scheme would
adapt the classifier to each new data batch. A second application is supervised
learning wherein the total amount of data is fixed. Here, we need to divide the data
into training and test sets with the conflicting goals of i) achieving a good design
and ii) accurately measuring generalization performance. Combined learning and
classification can be used here to mitigate the loss in performance associated with
the choice of a large test set. More generally, our scheme can be used effectively
in any setting where the new data to classify is either a) sizable or b) innovative
relative to the existing training set.
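In pseudo-usage terms, building on the em1_step and classify_gm sketches above (the 20-iteration budget is an arbitrary assumption, not a rule from the paper), combined learning and classification amounts to:

```python
# Treat the new batch to classify as unlabelled data X_u, adapt the model,
# and only then label the batch.
params = (alpha, mus, Sigmas, beta)
for _ in range(20):
    params = em1_step(X_train, c_train, X_new_batch, *params)
labels = classify_gm(X_new_batch, *params)
```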
5
Experimental Results
Figure 1a shows results for the 40-dimensional, 3-class waveform+noise data set
from the UC Irvine database. The 5000 data pairs were split into equal-size training
and test sets. Performance curves were obtained by varying the amount of labelled
training data. For each choice of N_l, various learning approaches produced 6 solutions based on random parameter initialization, for each of 7 different labelled
subset realizations. The test set performance was then averaged over these 42 "trials". All schemes used L = 12 components. DA-RBF (Miller et al. 1996) is a
deterministic annealing method for RBF classifiers that has been found to achieve
very good results, when given adequate training data⁵. However, this supervised
learning method is forced to discard unlabelled data, which severely handicaps its
performance relative to EM-I, especially for small N_l, where the difference is substantial. TEM-I and TEM-II are results for the EM methods (both I and II) in
combined learning and classification mode, i.e., where the 2500 test vectors were
also used as part of X_u. As seen in the figure, this leads to additional, significant performance gains for small N_l. Note also that performance of the two EM methods
is comparable. Figure 1b shows results of similar experiments performed on 6-class
satellite imagery data (sat), also from the UC Irvine database. For this set, the
feature dimension is 36, and we chose L = 18 components. Here we compared EM-I
with the method suggested in (Shashahani and Landgrebe 1994) (SL), based on the
PM model. EM-I is seen to achieve substantial performance gains over this alternative learning approach. Note also that the EM-I performance is nearly constant,
over the entire range of N_l.
Future work will investigate practical applications of combined learning and classification, as well as variations on this scheme which we have only briefly outlined.
Moreover, we will investigate possible extensions of the methods described here for
the regression problem.
³The image segmentation analogy in fact suggests an alternative scheme where we
perform joint likelihood maximization over both the model parameters and the "hard",
missing class labels. This approach, which is analogous to segmentation methods such as
ICM, would encapsulate the classification operation directly within the learning. Such a
scheme will be investigated in future work.
⁴Note that if the classifier is simply modified based on X_u, EM-I will not need to update {β_{k|j}}, while EM-II must update the entire model.
⁵We assumed the same number of basis functions as mixture components. Also, for the
DA design, there was only one initialization, since DA is roughly insensitive to this choice.
[Figure 1: test set classification error versus the number of labelled training samples N_l, for (a) the waveform+noise data and (b) the satellite imagery data.]
Acknowledgements
This work was supported in part by National Science Foundation Career Award IRI-9624870.
References
V. Castelli and T. M. Cover. On the exponential value of labeled samples. Pattern
Recognition Letters, 16:105-111, 1995.
A.P. Dempster, N.M. Laird, and D.B. Rubin. Maximum-likelihood from incomplete
data via the EM algorithm. Journal of the Roy. Stat. Soc., Ser. B, 39:1-38, 1977.
Z. Ghahramani and M. I. Jordan. Supervised learning from incomplete data via an
EM approach. In Neural Information Processing Systems 6, 120-127, 1994.
M. 1. Jordan and R. A. Jacobs. Hierarchical mixtures of experts and the EM
algorithm. Neural Computation, 6:181-214, 1994.
R. P. Lippmann. Pattern classification using neural networks. IEEE Communications Magazine, 27, 47-64, 1989.
D. J. Miller, A. Rao, K. Rose, and A. Gersho. A global optimization method for
statistical classifier design. IEEE Transactions on Signal Processing, Dec. 1996.
D. J. Miller and H. S. Uyar. A generalized Gaussian mixture classifier with learning
based on both labelled and unlabelled data. Conf. on Info. Sci. and Sys., 1996.
D. J. Miller. A mixture model equivalent to the radial basis function classifier.
Submitted to Neural Computation, 1996.
J. Moody and C. J. Darken. Fast learning in locally-tuned processing units. Neural
Computation, 1:281-294, 1989.
B. Shashahani and D. Landgrebe. The effect of unlabeled samples in reducing
the small sample size problem and mitigating the Hughes phenomenon. IEEE
Transactions on Geoscience and Remote Sensing, 32:1087-1095, 1994.
V. Tresp, R. Neuneier, and S. Ahmad. Efficient methods for dealing with missing data in supervised learning. In Neural Information Processing Systems 7, 689-696, 1995.
L. Xu, M. I. Jordan, and G. E. Hinton. An alternative model for mixtures of
experts. In Neural Information Processing Systems 7, 633-640, 1995.
Complex-Cell Responses Derived from
Center-Surround Inputs: The Surprising
Power of Intradendritic Computation
Bartlett W. Mel and Daniel L. Ruderman
Department of Biomedical Engineering
University of Southern California
Los Angeles, CA 90089
Kevin A. Archie
Neuroscience Program
University of Southern California
Los Angeles, CA 90089
Abstract
Biophysical modeling studies have previously shown that cortical
pyramidal cells driven by strong NMDA-type synaptic currents
and/or containing dendritic voltage-dependent Ca++ or Na+ channels, respond more strongly when synapses are activated in several
spatially clustered groups of optimal size, in comparison to the same number of synapses activated diffusely about the dendritic arbor [8]. The nonlinear intradendritic interactions giving rise to this "cluster sensitivity" property are akin to a layer of virtual nonlinear "hidden units" in the dendrites, with implications for the cellular basis of learning and memory [7, 6], and for certain classes of nonlinear sensory processing [8]. In the present study, we show that a single neuron, with access only to excitatory inputs from unoriented ON- and OFF-center cells in the LGN, exhibits the principal nonlinear response properties of a "complex" cell in primary visual cortex, namely orientation tuning coupled with translation invariance and contrast insensitivity. We conjecture that this type of intradendritic processing could explain how complex cell responses can persist in the absence of oriented simple cell input [13].
1
INTRODUCTION
Simple and complex cells were first described in visual cortex by Hubel and Wiesel
[4]. Simple cell receptive fields could be subdivided into ON and OFF subregions,
with spatial summation within a subregion and antagonism between subregions;
cells of this type have historically been modeled as linear filters followed by a thresholding nonlinearity (see [13]). In contrast, complex cell receptive fields cannot generally be subdivided into distinct ON and OFF subfields, and as a group exhibit
a number of fundamentally nonlinear behaviors, including (1) orientation tuning
across a receptive field much wider than an optimal bar, (2) larger responses to
thin bars than thick bars, in direct violation of the superposition principle, and (3)
sensitivity to both light and dark bars across the receptive field.
The traditional Hubel-Wiesel model for complex cell responses involves a hierarchy,
consisting of center-surround inputs that drive simple cells, which in turn provide
oriented, phase-dependent input to the complex cell. By pooling over a set of simple
cells with different positions and phases, the complex cell could respond selectively
to stimulus orientation, while generalizing over stimulus position and contrast. A
pure hierarchy involving simple cells is challenged, however, by a variety of more
recent experimental results indicating many complex cells receive monosynaptic input from LGN cells [3], or do not depend on simple cell input [10, 5, 1]. It remains
unknown how complex cell responses might derive from intracortical network computations that do not depend on simple cells, or whether they could originate directly
from intracellular computations.
Previous biophysical modeling studies have indicated that the input-output function
of a dendritic tree containing excitatory voltage-dependent membrane mechanisms
can be abstracted as a low-order polynomial function, i.e. a big sum of little products
(see [9] for review). The close match between this type of computation and "energy"
models for complex cells [12, 11, 2] suggested that a single-cell origin of complex
cell responses was possible.
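To make the abstraction concrete, the following sketch, which is entirely illustrative and not fitted to any biophysics, writes down a "sum of little products" unit next to a classical energy-model pair; the weights and input size are arbitrary assumptions:

```python
import numpy as np

def dendritic_unit(x, branch_weights):
    """'Sum of little products': each (w @ x)**2 expands into a sum of
    pairwise products of the inputs feeding one branch."""
    return sum(float(w @ x) ** 2 for w in branch_weights)

def energy_unit(x, w_even, w_odd):
    """Classical energy model: squared outputs of a quadrature filter pair."""
    return float(w_even @ x) ** 2 + float(w_odd @ x) ** 2

# The energy model is the two-branch special case of dendritic_unit,
# which is the sense in which energy models suggested a single-cell origin.
x = np.random.default_rng(1).normal(size=16)
w_e, w_o = np.sin(np.arange(16)), np.cos(np.arange(16))
assert np.isclose(energy_unit(x, w_e, w_o), dendritic_unit(x, [w_e, w_o]))
```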
In the present study, we tested the hypothesis that local nonlinear processing in
the dendritic tree of a single neuron, which receives only excitatory synaptic input
from unoriented center-surround LGN cells, could in and of itself generate nonlinear complex cell response properties, including orientation selectivity, coupled with
position and contrast invariance.
2
2.1
METHODS
BIOPHYSICAL MODELING
Simulations of a layer 5 pyramidal cell from cat visual cortex (fig. 1) were carried
out in NEURON¹. Biophysical parameters and other implementation details were as in [8] and/or shown in Table 1; dendritic spines were not modeled here. The soma contained modified Hodgkin-Huxley channels with peak somatic conductances of g_Na and g_DR of 0.20 S/cm² and 0.12 S/cm², respectively; dendritic membrane
was electrically passive. Each synapse included both an NMDA and AMPA-type
¹NEURON simulation environment courtesy Michael Hines and John Moore; synaptic
channel implementations courtesy Alan Destexhe and Zach Mainen.
Figure 1: Layer 5 pyramidal neuron used in the simulations, showing 100 synaptic
contacts. Morphology courtesy Rodney Douglas and Kevan Martin.
excitatory conductances (see Table 1). Conductances were scaled by an estimate
of the local input resistance, to keep local EPSP size approximately uniform across
the dendritic tree. Inhibitory synapses were not modeled.
2.2
MAPPING VISUAL STIMULI ONTO THE DENDRITIC TREE
A stimulus image consisted of a 64 x 64 pixel array containing a light or dark
bar (pixel value ±1 against a background of 0). Bars of length 45 and width 7
were presented at various orientations and positions within the image. Images were
linearly filtered through difference-of-Gaussian receptive fields (center width: 0.6,
surround width: 1.2, with no DC response). Filtered images were then mapped
onto 64 x 64 arrays of ON-center and OFF-center LGN cells, whose outputs were
thresholded at ?0.02 respectively. In a crude model of gain control, only a random
subset of 100 of the LGN neurons remained active to drive the modeled cortical
cell.
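The following sketch illustrates this stimulus-to-LGN mapping (an added illustration, not the authors' code; building the difference-of-Gaussians from scipy's gaussian_filter, and the function names, are my own assumptions):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def lgn_responses(image, center_sigma=0.6, surround_sigma=1.2,
                      threshold=0.02, n_active=100, rng=None):
        # Map a 64x64 bar image onto thresholded ON/OFF LGN outputs.
        rng = np.random.default_rng() if rng is None else rng
        # Difference-of-Gaussians; equal unit-volume kernels give no DC response.
        dog = gaussian_filter(image, center_sigma) - gaussian_filter(image, surround_sigma)
        on = dog.ravel() > threshold        # ON-center cells signal light regions
        off = dog.ravel() < -threshold      # OFF-center cells signal dark regions
        active = np.flatnonzero(np.concatenate([on, off]))
        # Crude gain control: keep only a random subset of the active cells.
        return rng.choice(active, size=min(n_active, active.size), replace=False)

    # Example: a vertical light bar of width 7 and length 45.
    img = np.zeros((64, 64))
    img[10:55, 29:36] = 1.0
    print(lgn_responses(img).shape)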
Each LGN neuron gave rise to a single synapse onto the cortical cell's dendritic
tree. In a given run, excitatory synapses originating from the 100 active LGN cells
were activated asynchronously at 40 Hz, while all other synapses remained silent.
The spatial arrangement of connections from LGN cells onto the pyramidal cell
dendrites was generated automatically, such that pairs of LGN cells which are coactive during presentations of optimally oriented bars formed synapses at nearby
sites in the dendritic tree. The activity of the LGN cell array to an optimally
oriented bar is shown in fig. 3. Frequently co-activated pairs of LGN neurons are
hereafter referred to as "friend-pairs", and lie in a geometric arrangement as shown
in fig. 4. Correlation-based clustering of friend-pairs was achieved by (1) choosing
a random LGN cell and placing it at the next available dendritic site, (2) randomly
    Parameter                Value
    Rm                       10 kΩ·cm^2
    Ra                       200 Ω·cm
    Cm                       1.0 µF/cm^2
    Vrest                    -70 mV
    Somatic gNa              0.20 S/cm^2
    Somatic gDR              0.12 S/cm^2
    Synapse count            100
    Stimulus frequency       40 Hz
    tau_AMPA (on, off)       0.5 ms, 3 ms
    g_AMPA                   0.27 nS - 2.95 nS
    tau_NMDA (on, off)       0.5 ms, 50 ms
    g_NMDA                   0.027 nS - 0.295 nS
    E_syn                    0 mV
Figure 2: Table 1. Simulation Parameters.
choosing one of its friends and placing it at the next available dendritic site, and so on, until either all of the cell's friends had already been deployed, in which case a new cell was chosen at random to restart the sequence, or all cells had been chosen, meaning that all of the 8192 (= 64 x 64 x 2) LGN synapses had been successfully
mapped onto the dendritic tree. In previous modeling work it was shown that this
type of clustering of correlated inputs on dendrites is the natural outcome of a
balance between activity-independent synapse formation, and activity dependent
synapse stabilization [6].
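A minimal sketch of this correlation-based clustering procedure (the friends() callback and the data structures are assumptions for illustration, not the authors' implementation):

    import random

    def cluster_onto_dendrite(all_cells, friends, seed=0):
        # Order LGN cells along dendritic sites so that friend-pairs land nearby.
        rng = random.Random(seed)
        cells = list(all_cells)
        placed, order = set(), []
        while len(order) < len(cells):
            # Start (or restart) the sequence from a random unplaced cell.
            current = rng.choice([c for c in cells if c not in placed])
            placed.add(current); order.append(current)
            while True:
                candidates = [f for f in friends(current) if f not in placed]
                if not candidates:
                    break      # friends exhausted: restart from a new random cell
                current = rng.choice(candidates)
                placed.add(current); order.append(current)
        return order           # order[k] = LGN cell mapped to the k-th dendritic site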
This method guaranteed that an optimally oriented bar stimulus activated a larger
number of friend-pairs on average than did bars at non-optimal orientations. This
led in turn to relatively clustery distributions of activated synapses in the dendrites
in response to optimal bar orientations, in comparison to non-optimal orientations.
In previous work, it was shown that synapses activated in clusters about a dendritic
arbor could produce significantly larger cell responses than the same number of
synapses activated diffusely about the dendritic tree [7, 8].
3 Results
Results for two series of runs are shown in fig. 5. For each bar stimulus, average
spike rate was measured over a 250 ms period, beginning with the first spike initiated
after stimulus onset (if any). This measure de-emphasized the initial transient climb
off the resting potential, and provided a rough steady-state measure of stimulus
effectiveness. Spike rates for 30 runs were averaged for each input condition.
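For illustration, this rate measure can be sketched as follows (my own helper; spike times are assumed to be in ms relative to stimulus onset):

    import numpy as np

    def steady_state_rate(spike_times_ms, window_ms=250.0):
        spikes = np.sort(np.asarray(spike_times_ms))
        if spikes.size == 0:
            return 0.0
        t0 = spikes[0]                      # first spike after stimulus onset
        n = np.count_nonzero((spikes >= t0) & (spikes < t0 + window_ms))
        return n / (window_ms / 1000.0)     # spikes per second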
Orientation tuning curves for a thin bar (7 x 45 pixels) are shown in fig. 5. The
orientation tuning peaks sharply within about 10° of vertical, and then decays
slowly for larger angles. Tuning is apparent both for dark and light bars, and
remains independent of location within the receptive field.
4 Discussion
The results of fig. 5 indicate that a pyramidal cell driven exclusively by excitatory
inputs from ON- and OFF-center LGN cells is, at a biophysical level, capable of producing the hallmark nonlinear response property of visual complex cells. Furthermore, the cell's translation-invariant preference for light or dark vertical bars was
established by manipulating only the spatial arrangement of connections from LGN
cells onto the pyramidal cell dendrites. Since exactly 100 synapses were activated in
every tested condition, the significantly larger responses to optimal bar orientations
could not be explained by a simple elevation in the total synaptic activity impinging on the neuron in that condition. The origin of the cell's orientation-selective
response resulted from nonlinear pooling of a large number of minimally-oriented
subunits, i.e., consisting of pairs of ON and OFF cells that were jointly consistent with
an optimally oriented bar. We have achieved similar results in other experiments
with a variety of different friend-neighborhood structures including ones both simpler and more complex than were used here, for LGN arrays with substantially
different degrees of receptive field overlap, with random subsampling of the LGN
array, with graded LGN activity levels, and for dendritic trees containing active
sodium channels in addition to NMDA channels.
Thus far we have not attempted to relate physiologically-measured orientation and
width tuning curves, and other detailed aspects of complex cell physiology, to our
model cell, as we have been principally interested in establishing whether the most
salient nonlinear features of complex cell physiology were biophysically feasible at
the single cell level. Detailed comparisons between our results and empirical tuning
curves, etc., must be made with caution, since our model cell has been "explanted"
from the network in which it normally exists, and is therefore absent the normal
recurrent excitatory and inhibitory influences the cortical network provides.
[Figure 4 diagram: friend layouts for an ON-center cell (top) and an OFF-center cell (bottom), for a bar of width w.]
Figure 4: Layout of friends for an ON-center LGN cell for vertically oriented thin
bars (top). The linear friendship linkage for a given ideal vertical bar of width w
was determined as follows. Suppose an LGN cell is chosen at random, e.g., an ON-center cell at location (i,j) within the cell array. When a vertical bar is presented,
LGN cells along the two vertical edges of the bar become active. The ON-center
cell at position (i,j) is active to a light bar when it is in a column of cells just inside
either edge of the bar. Those cells which are co-active under this circumstance are:
(a) other on-center cells in the same vertical column, (b) on-center cells in vertical
columns a distance w - 1 to the right and left (depending on the bar position),
(c) off-center cells in columns a distance ±1 away (due to the adjacent negative-going edge), and (d) off-center cells a distance w to the right and left (due to the opposite edge). As "friend-pairs" we take only those LGN cells a distance ±(w−1) and ±w away. Those in the same and neighboring columns are not included. The
friends of an off-center cell are shown in the bottom figure. It and its friends are
optimally stimulated by bars of width w placed as shown. The width selected for
our friend-pairs was w = 7, the same width as all bars presented as stimuli.
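This geometry can be sketched as a hypothetical helper (assuming, for simplicity, that friends are taken in the same row; the (kind, row, col) encoding is mine):

    def friends_of_on_cell(row, col, w=7, size=64):
        out = []
        for d in (-(w - 1), w - 1):          # ON-center friends at distance w-1
            if 0 <= col + d < size:
                out.append(("ON", row, col + d))
        for d in (-w, w):                    # OFF-center friends at distance w
            if 0 <= col + d < size:
                out.append(("OFF", row, col + d))
        return out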
Experimental validation of these simulation results would imply a significant change
in our conception of the role of the single neuron in neocortical processing.
Acknowledgments
This work was funded by grants from the National Science Foundation and the
Office of Naval Research.
References
[1] G.M. Ghose, R.D. Freeman, and I. Ohzawa. Local intracortical connections in the cat's visual cortex: postnatal development and plasticity. J. Neurophysiol., 72:1290-1303, 1994.
[2] D.J. Heeger. Normalization of cell responses in cat striate cortex. Visual Neurosci., 9:181-197, 1992.
[3] K.P. Hoffman and J. Stone. Conduction velocity of afferents to cat visual cortex: a correlation with cortical receptive field properties. Brain Res., 32:460-466, 1971.
[4] D.H. Hubel and T.N. Wiesel. Receptive fields, binocular interaction and functional architecture in the cat's visual cortex. J. Physiol., 160:106-154, 1962.
[Figure 5 plot: average spike rate (spikes/sec) vs. orientation (degrees).]
Figure 5: Orientation tuning curves for the model neurons. 'X': light bars centered in the receptive field; diamonds: light bars displaced by 6 pixels horizontally;
squares: dark bars centered in the receptive field; '+': dark bars displaced by 6
pixels. Standard errors on the data are about 5 spikes/sec.
[5] J.G. Malpeli, C. Lee, H.D. Schwark, and T.G. Weyand. Cat area 17. I. Pattern
of thalamic control of cortical layers. J. Neurophysiol., 46:1102-1119, 1981.
[6] B.W. Mel. The clusteron: Toward a simple abstraction for a complex neuron. In J. Moody, S. Hanson, and R. Lippmann, editors, Advances in Neural
Information Processing Systems, vol. 4, pages 35-42. Morgan Kaufmann, San
Mateo, CA, 1992.
[7] B.W. Mel. NMDA-based pattern discrimination in a modeled cortical neuron.
Neural Computation, 4:502-516, 1992.
[8] B.W. Mel. Synaptic integration in an excitable dendritic tree. J. Neurophysiol.,
70(3):1086-1101, 1993.
[9] B.W. Mel. Information processing in dendritic trees. Neural Computation,
6:1031-1085, 1994.
[10] J.A. Movshon. The velocity tuning of single units in cat striate cortex. J.
Physiol. (Lond), 249:445-468, 1975.
[11] I. Ohzawa, G.C. DeAngelis, and R.D. Freeman. Stereoscopic depth discrimination in the visual cortex: Neurons ideally suited as disparity detectors. Science, 249:1037-1041, 1990.
[12] D. Pollen and S. Ronner. Visual cortical neurons as localized spatial frequency
filters. IEEE Trans. Sys. Man Cybern., 13:907-916, 1983.
[13] H.R. Wilson, D. Levi, L. Maffei, J. Rovamo, and R. DeValois. The perception
of form: retina to striate cortex. In L. Spillman and J.S. Werner, editors, Visual
perception: the neurophysiological foundations, pages 231-272. Academic Press,
San Diego, 1990.
LEARNING THE SOLUTION TO THE
APERTURE PROBLEM FOR PATTERN
MOTION WITH A HEBB RULE
Martin I. Sereno
Cognitive Science C-015
University of California, San Diego
La Jolla, CA 92093-0115
ABSTRACT
The primate visual system learns to recognize the true direction of
pattern motion using local detectors only capable of detecting the
component of motion perpendicular to the orientation of the
moving edge. A multilayer feedforward network model similar to
Linsker's model was presented with input patterns each consisting
of randomly oriented contours moving in a particular direction.
Input layer units are granted component direction and speed tuning
curves similar to those recorded from neurons in primate visual
area V1 that project to area MT. The network is trained on many
such patterns until most weights saturate. A proportion of the
units in the second layer solve the aperture problem (e.g., show the
same direction-tuning curve peak to plaids as to gratings),
resembling pattern-direction selective neurons, which first appear in area MT.
INTRODUCTION
Supervised learning schemes have been successfully used to learn a variety of input-output mappings. Explicit neuron-by-neuron error signals and the apparatus for
propagating them across layers, however, are not realistic in a neurobiological
context. On the other hand, there is ample evidence in real neural networks for
conductances sensitive to correlation of pre- and post-synaptic activity, as well as
multiple areas connected by topographic, somewhat divergent feedforward
projections. The present project was to try to learn the solution to the aperture
problem for pattern motion using a simple Hebb rule and a layered feedforward
network.
Some of the connections responsible for the selectivity of cortical neurons to local stimulus features develop in the absence of patterned visual experience. For example, newborn cats and primates already have orientation-selective neurons in primary visual cortex (area 17 or V1), before they open their eyes. The prenatally generated orientation selectivity is sharpened by subsequent visual experience. Linsker (1986) has shown that feedforward networks with somewhat divergent, topographic interlayer connections, linear summation, and simple Hebb rules develop units in tertiary and higher layers that have parallel, elongated excitatory and inhibitory subfields when trained solely on random inputs to the first layer.
By contrast, the development of the circuitry in secondary and tertiary visual cortical areas necessary for processing more complex, non-local features of visual arrays--e.g., orientation gradients, shape from shading, pattern translation, dilation, rotation--is probably much more dependent on patterned visual experience. Parietal visual cortical areas, for example, are almost totally unresponsive in dark-reared monkeys, despite the fact that these monkeys have a normal-appearing V1 (Hyvarinen, 1984). Behavioral indices suggest that development of some perceptual abilities may require months of experience. Human babies, for example, only evidence seeing the transition between randomly moving dots and circular 2-D motion at 6 months, while the transition from horizontally moving dots with random x-axis velocities to dots with sinusoidally varying x-axis velocities (the latter gives the percept of a rotating 3-D cylinder) is only detected after 7 months (Spitz, Stiles-Davis, & Siegel, 1988) (see Fig. 1).
Figure 1. Motion field transitions (panels: direction noise to 2-D rotation; horizontal direction noise to 3-D cylinder).
During the first 6 months of its life,
a human baby typically makes
approximately 30 million saccades, experiencing in the process many views which
contain large moving fields and smaller moving objects. The importance of these
millions of glances for the development of the ability to recognize complex visual
objects has often been acknowledged. Brute visual experience may, however, be just
as important in developing a solution to the simpler problem of detecting pattern
motion using local cues.
NETWORK ARCHITECTURE
Moving visual stimuli are processed in several stages in the primate visual system.
The first cortical stage is layer 4C-alpha of area V1, which receives its main
ascending input from the magnocellular layers of the lateral geniculate nucleus.
Layer 4C-alpha projects to layer 4B, which contains many tightly tuned direction-selective neurons (Movshon et al., 1985). These neurons, however, respond to
moving contours as if these contours were moving perpendicular to their local orientation--i.e., they fire in proportion to the difference between the orthogonal component of motion and their best direction (for a bar). An orientation series run for a layer 4B neuron using a plaid (2 orthogonal moving gratings) thus results in two peaks in the direction tuning curve, displaced 45 degrees to either side of the peak for a single grating (Movshon et al., 1985). The aperture problem for pattern motion (see, e.g., Horn & Schunck, 1981) thus exists for cells in area V1 of the adult (and presumably infant) primate.
Layer 4B neurons project topographically via direct and indirect pathways to area MT, a small extrastriate area specialized for processing moving stimuli. A subset
Figure 2. Network Architecture (input layer = V1, layer 4B; second layer = MT).
of neurons in MT show a single peak in their direction tuning curves for a plaid that
is lined up with the peak for a single grating--i.e., they fire in proportion to the
difference between the true pattern direction and their best direction (for a bar).
These neurons therefore solve the aperture problem presented to them by the local
translational motion detectors in layer 4B of V1. The excitatory receptive fields of all MT neurons are much larger than those in V1 as a result of divergence in the V1-MT projection as well as the smaller areal extent of MT compared to V1.
M.E. Sereno (1987) showed using a supervised learning rule that a linear, two-layer network can satisfactorily solve the aperture problem characterized above. The present task was to see if unsupervised learning might suffice. A simple caricature of the V1-to-MT projection was constructed. At each x-y location in the first layer of the network, there are a set of units tuned to a range of local directions and speeds. The input layer thus has four dimensions. The sample network illustrated above (Fig. 2) has 5 different directions and 3 speeds at each x-y location. Input units are granted tuning curves resembling those found for neurons in layer 4B of
[Figure 3 plots: triangular tuning curves against the velocity component orthogonal to the contour (0 to 2X) and the speed component orthogonal to the contour (0 to 1).]
Figure 3. Excitatory Tuning Curves (1st layer)
area V1. The tuning curves are linear, with half-height overlap for both direction and speed (see Fig. 3--for 12 directions and 4 speeds), and direction and speed tuning interact linearly. Inhibition is either tuned or untuned (see Fig. 4), and scaled to balance excitation. Since direction tuning wraps around, there is a trough in the tuned inhibition condition. Speed tuning does not wrap around. The relative effect of direction and speed tuning on the output of first-layer units is set by a parameter. As with Linsker, the probability that a unit in the first layer will connect with a unit in the second layer falls off as a gaussian centered on the retinotopically
[Figure 4 plots: response vs. velocity component orthogonal to contour (0 to 2X) and vs. speed component orthogonal to contour (0 to 1, no wrap-around), for tuned and untuned inhibition.]
Figure 4. Tuned vs. Untuned Inhibition
equivalent point in the second layer (see Fig. 2). New random numbers are drawn to generate the divergent gaussian projection pattern for each first-layer unit (i.e., all of the units at a single x-y location have different, overlapping projection patterns). There are no local connections within a layer.
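As a sketch of the granted tuning just described (my own parameterization of the triangular curves with half-height overlap and the multiplicative direction x speed interaction; not the original simulator):

    import numpy as np

    def triangular(x, center, half_width):
        return np.maximum(0.0, 1.0 - np.abs(x - center) / half_width)

    def unit_response(direction, speed, pref_dir, pref_speed,
                      n_dirs=12, n_speeds=4):
        dir_spacing = 360.0 / n_dirs
        d = (direction - pref_dir + 180.0) % 360.0 - 180.0  # direction wraps around
        dir_tuning = triangular(d, 0.0, dir_spacing)        # half-height overlap
        speed_tuning = triangular(speed, pref_speed, 1.0 / n_speeds)  # no wrap
        return dir_tuning * speed_tuning                    # linear interaction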
The network update rule is similar to that of Linsker, except that there is no term like a decay of synaptic weights (k1) and no offset parameter for the correlations (k2). Also, all of the units in each layer are modeled explicitly. The activation y_j for each unit is a linear weighted sum of its u_i inputs, scaled by a, and clipped to a maximum or minimum value:

    y_j = a * sum_i (u_i * w_ij),   clipped at y_max or y_min.
Weights are also clipped to maximum and minimum values. The change in each weight, Δw_ij, is a simple fraction, α, of the product of the pre- and post-synaptic values:

    Δw_ij = α * u_i * y_j.
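A minimal sketch of this update (variable names and the particular constants are mine, not the original simulator's):

    import numpy as np

    def step(W, u, a=0.05, alpha=1e-4, y_min=-1.0, y_max=1.0,
             w_min=-0.1, w_max=0.1):
        y = np.clip(a * (W @ u), y_min, y_max)    # clipped linear activation
        W += alpha * np.outer(y, u)               # simple Hebb rule: dw_ij = a*u_i*y_j
        np.clip(W, w_min, w_max, out=W)           # weights are also clipped
        return y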
RESULTS
The network is trained with a set of full-field texture movements. Each stimulus consists of a set of randomly oriented contours--one at each x-y point--all moving in the same, randomly chosen pattern direction. A typical stimulus is drawn in Figure 5 as the set of component motions visible to neurons in V1 (i.e., direction components perpendicular to the local contour); the local speed component varies as the cosine of the angle between the pattern direction and the perpendicular to the local contour. The single component motion at each point is run through the first-layer tuning curves. The response of the input layer to such a pattern is shown in Figure 6. Each rectangular box represents a single x-y location, containing 48 units
[Figure 5 diagram: a field of local component-motion vectors, each perpendicular to its randomly oriented contour.]
Figure 5. Local Component Motions from a Training Stimulus (pattern direction is toward right)
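Generating one such training stimulus can be sketched as follows (a paraphrase under my own conventions: angles in degrees, unit pattern speed; not the original code):

    import numpy as np

    def training_stimulus(grid=8, rng=None):
        rng = np.random.default_rng() if rng is None else rng
        pattern_dir = rng.uniform(0.0, 360.0)
        contour_normal = rng.uniform(0.0, 360.0, size=(grid, grid))
        delta = np.deg2rad(contour_normal - pattern_dir)
        comp_speed = np.abs(np.cos(delta))        # cosine of the angle, as in the text
        comp_dir = np.where(np.cos(delta) >= 0,   # component motion along the normal
                            contour_normal, (contour_normal + 180.0) % 360.0)
        return pattern_dir, comp_dir, comp_speed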
[Figure 6 diagram: input-layer activity; open and filled squares mark positive and negative outputs at each x-y location.]
Figure 6. Output of Portion of First Layer to a Training Stimulus (untuned inhibition)
tuned to different combinations of direction and speed (12 directions run horizontally and 4 speeds run vertically). Open and filled squares indicate positive and negative outputs. Inhibition is untuned here. The Hebb sensitivity, α, was set so that 1,000 such patterns could be presented before most weights saturated at maximum values. Weights initially had small random values drawn from a flat distribution centered around zero. The scale parameter for the weighted sum, a, was set low enough to prevent second-layer units from saturating all the time. In Figure 6, direction tuning is 2.5 times as important as speed tuning in determining the output of a unit.
Selectivity of second layer units for pattern direction was examined both before and
after training using four stimulus conditions: 1) grating--contours perpendicular to
pattern direction, 2) random grating--contours randomly oriented with respect to
pattern direction (same as the training condition), 3) plaid--contours oriented 45 or
67 degrees from perpendicular to pattern direction, 4) random plaid--contours
randomly oriented, but avoiding angles nearly perpendicular to pattern direction.
The pre-training direction tuning curves for the grating conditions usually showed
some weak direction selectivity. Pre-training direction tuning curves for the plaid
conditions, however, were often twin-peaked, exhibiting pattern component
responses displaced to either side of the grating peak. After training, by contrast,
the direction tuning peaks in all test conditions were single and sharp, and the plaid
condition peaks were usually aligned with the grating peaks.
An example of the weights onto a mature pattern direction selective unit is shown
in Figure 7. As before, each rectangular box contains 48 units representing one
point in x-y space of the input layer (the tails of the 2-D gaussian are cropped in
this illustration), except that the black and white boxes now represent negative and
positive weights onto a single second layer unit. Within each box, 12 directions run
horizontally and 4 speeds run vertically. The peaks in the direction tuning curves
for gratings and 135 degree plaids for this unit were sharp and aligned.
[Figure 7 diagram: mature weight pattern; black and white boxes mark negative and positive weights in direction-speed subspace at each x-y location.]
Figure 7. Mature Weights Onto Pattern Direction-Selective Unit
Pattern direction selective units such as this comprised a significant fraction of the
second layer when direction tuning was set to be 2 to 4 times as important as speed
tuning in determining the output of first-layer units. Post-training weight structures under these conditions actually formed a continuum--from units with component direction selectivity, to units with pattern direction selectivity, to units with component speed selectivity. Not surprisingly, varying the relative effects of direction and speed in the V1 tuning curves generated more direction-tuned-only or speed-tuned-only units. In all conditions, units showed clear boundaries between maximum and minimum weights in the direction-speed subspace at each x-y point, and a single best direction. The location of these boundaries was always correlated across different x-y input points. Most units showing unambiguous pattern direction selectivity were characterized by two oppositely sloping diagonal boundaries between maximum and minimum weights in direction-speed subspace (see, e.g., Fig. 7).
The stimuli used to train the network above--full-field movements of a rigid texture
field of randomly oriented contours--are unnatural; generally, there may be one or
more objects in the field moving in different directions and at different speeds than
the surround. Weight distributions needed to solve the aperture problem appear
when the network. is trained on occluding moving objects against moving
backgrounds (object and background velocities chosen randomly on each trial), as
long as the object is made small or large relative to the receptive field size. The
solution breaks down when the moving objects occupy a significant fraction of the
area of a second layer receptive field.
For comparison, the network was also trained using two different kinds of noise
stimuli. In the first condition (unit noise), each new stimulus consisted of random input values on each input unit. With other network parameters held the same, the
typical mature weight pattern onto a second layer unit showed an intimate
intermixture of maximum and minimum weights in the direction-speed subspace at
each x-y location. In the second condition (direction noise), each new stimulus
consisted of a random direction at each x-y location. The mature weight patterns
now showed continuous regions of all-maximum or all-minimum weights in the
speed-direction subspace at each x-y point. In contrast to the situation with full-field texture movement stimuli, however, the best directions at each of the x-y
points providing input to a given unit were uncorrelated. In addition, multiple best
directions at a single x-y point sometimes appeared.
DISCUSSION
This simple model suggests that it may be possible to learn the solution to the
aperture problem for pattern motion using only biologically realistic unsupervised
learning and minimally structured motion fields.
Using a similar network
architecture, M.E. Sereno had previously shown that supervised learning on the
problem of detecting pattern motion direction from local cues leads to the
emergence of chevron-shaped weight structures in direction-speed space (M.E. Sereno, 1987). The weight structures generated here are similar except that the
inside or outside of the chevron is filled in, and upside-down chevrons are more
common. This results in decreased selectivity to pattern speed in the second layer.
The model needs to be extended to more complex motion correlations in the input--e.g., rotation, dilation, shear, multiple objects, flexible objects. MT in primates
does not respond selectively to rotation or dilation, while its target area MST does.
Thus, biological estimates of rotation and dilation are made in two stages--rotation
and dilation are not detected locally, but instead constructed from estimates of local
translation. Higher layers in the present model may be able to learn interesting
'second-order' things about rotation, dilation, segmentation, and transparency.
The real primate visual system, of course, has a great many more parts than this
model. There are a large number of interconnected cortical visual areas--perhaps as
many as 25. A substantial portion of the 600 possible between-area connections may
be present (for review, see M.I. Sereno, 1988). There are at least 6 map-like visual
structures, and several more non-retinotopic visual structures in the thalamus
(beyond the dLGN) that interconnect with the cortical visual areas. Each visual
cortical area then has its own set of layers and interlayer connections. The most
unbiological aspect of this model is the lack of time and the crude methods of gain
control (clipped synaptic weights and input/output functions).
Future models
should employ within-area connections and time-dependent Hebb rules.
Making a biologically realistic model of intermediate and higher level visual
processing is difficult since it ostensibly requires making a biologically realistic
model of earlier, yet often not less complex stations in the system--e.g., the retina,
dLGN, and layer 4C of primary visual cortex in the present case. One way to avoid
having to model all of the stations up to the one of interest is to use physiological
data about how the earlier stations respond to various stimuli, as was done in the
present model. This shortcut is applicable to many other problems in modeling the
visual system. In order for this to be most effective, physiologists and modelers
need to cooperate in generating useful libraries of response profiles to arbitrary
stimuli. Many stimulus parameters interact, often nonlinearly, to produce the final
output of a cell. In the case of simple moving stimuli in VI and MT, we
minimally need to know the interaction between stimulus size, stimulus speed,
stimulus direction, surround speed, surround direction, and x-y starting point of the
movement relative to the classical excitatory receptive field. Collecting this many
response combinations from single cells requires faster serial presentation of stimuli than is customary in visual physiology experiments.
There is no obvious reason,
however, why the rate of stimulus presentation need be any less than the rate at
which the visual system normally operates--namely, 3-5 new views per second.
Also, we need to get a better understanding of the 'stimulus set'. The very large set
of stimuli on which the real visual system is trained (millions of views) is still
very poorly characterized. It would be worthwhile and practical, nevertheless, to
collect a naturalistic corpus of perhaps 1000 views (several hours of viewing).
Acknowledgements
I thank M.E. Sereno and U. Wehmeier for discussions and comments. Supported by
NIH grant F32 EY05887.
Networks and displays were constructed on the
Rochester Connectionist Simulator.
References
B.K.P. Hom & B.G. Schunck. Determining optical flow. Artiflntell., 17, 185-203
(1981).
J. Hyvarinen. The Parietal Cortex. Springer-Verlag (1984).
R. Linsker. From basic network principles to neural architecture: emergence of
orientation-selective cells. Proc. Nat. Acad. Sci. 83, 8390-8394 (1986).
J.A. Movshon, E.H. Adelson, M.S. Gizzi & W.T. Newsome. Analysis of moving
visual patterns. In Pattern Recognition Mechanisms. Springer-Verlag, pp. 117-151 (1985).
M.E. Sereno. Modeling stages of motion processing in neural networks. Proc. 9th
Ann. Conf. Cog. Sci. Soc., pp. 405-416 (1987).
M.I. Sereno. The visual system. In W.v. Seelen, U.M. Leinhos, & G. Shaw (eds.), Organization of Neural Networks. VCH, pp. 176-184 (1988).
R.V. Spitz, J. Stiles-Davis & R.M. Siegel. Infant perception of rotation from rigid
structure-from-motion displays. Neurosci. Abstr. 14, 1244 (1988).
Self-Organizing and Adaptive Algorithms for
Generalized Eigen-Decomposition
Chanchal Chatterjee
Vwani P. Roychowdhury
Newport Corporation
1791 Deere Avenue, Irvine, CA 92606
Electrical Engineering Department
UCLA, Los Angeles, CA 90095
ABSTRACT
The paper is developed in two parts where we discuss a new approach
to self-organization in a single-layer linear feed-forward network. First,
two novel algorithms for self-organization are derived from a two-layer
linear hetero-associative network performing a one-of-m classification,
and trained with the constrained least-mean-squared classification error
criterion. Second, two adaptive algorithms are derived from these self-organizing procedures to compute the principal generalized
eigenvectors of two correlation matrices from two sequences of
random vectors. These novel adaptive algorithms can be implemented
in a single-layer linear feed-forward network. We give a rigorous
convergence analysis of the adaptive algorithms by using stochastic
approximation theory. As an example, we consider a problem of online
signal detection in digital mobile communications.
1. INTRODUCTION
We study the problems of hetero-associative training, linear discriminant analysis,
generalized eigen-decomposition and their theoretical connections. The paper is divided
into two parts. In the first part, we study the relations between hetero-associative training
with a linear feed-forward network, and feature extraction by the linear discriminant
analysis (LDA) criterion. Here we derive two novel algorithms that unify the two problems. In the second part, we generalize the self-organizing algorithm for LDA to
obtain adaptive algorithms for generalized eigen-decomposition, for which we provide a
rigorous proof of convergence by using stochastic approximation theory.
1.1 HETERO-ASSOCIATION AND LINEAR DISCRIMINANT ANALYSIS
In this discussion, we consider a special case of hetero-association that deals with the
classification problems. Here the inputs belong to a finite m-set of pattern classes, and the
outputs indicate the classes to which the inputs belong. Usually, the ith standard basis
vector ei is chosen to indicate that a particular input vector x belongs to class i.
The LDA problem, on the other hand, aims at projecting a multi-class data in a lower
dimensional subspace such that it is grouped into well-separated clusters for the m
classes. The method is based upon a set of scatter matrices commonly known as the
mixture scatter Sm and between-class scatter Sb (Fukunaga, 1990). These matrices are used to formulate criteria such as tr(Sm^-1 Sb) and det(Sb)/det(Sm), which yield a linear transform Φ that satisfies the generalized eigenvector problem SbΦ = SmΦΛ, where Λ is the generalized eigenvalue matrix. If Sm is positive definite, we obtain a Φ such that Φ^T Sm Φ = I and Φ^T Sb Φ = Λ. Furthermore, the significance of each eigenvector (for class separability) is determined by the corresponding generalized eigenvalue.
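As a point of reference (an added illustration, not from the original), the batch LDA transform can be computed directly with a symmetric-definite generalized eigensolver; scipy.linalg.eigh returns Sm-orthonormal eigenvectors, matching the normalization Φ^T Sm Φ = I above:

    import numpy as np
    from scipy.linalg import eigh

    def lda_transform(Sb, Sm, p):
        evals, evecs = eigh(Sb, Sm)          # ascending generalized eigenvalues
        idx = np.argsort(evals)[::-1][:p]    # keep the p most significant
        return evecs[:, idx], evals[idx]     # Phi (n x p) and its eigenvalues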
A relation between hetero-association and LDA was demonstrated by Gallinari et al.
(1991). Their work made explicit that for a linear multi-layer perceptron performing a
one-from-m classification that minimized the total mean square error (MSE) at the
network output, also maximized a criterion det(Sb)/det(Sm) for LDA at the final hidden
layer. This study was generalized by Webb and Lowe (1990) by using a nonlinear
transform from the input data to the final hidden units, and a linear transform in the final
layer. This has been further generalized by Chatterjee and Roychowdhury (1996) by
including the Bayes cost for misclassification into the criterion tr(Sm^-1 Sb).
Although the above studies offer useful insights into the relations between hetero-association and LDA, they do not suggest an algorithm to extract the optimal LDA transform Φ. Since the criteria for class separability are insensitive to multiplication by nonsingular matrices, the above studies suggest that any training procedure that minimizes the MSE at the network output will yield a nonsingular transformation of Φ; i.e., we obtain QΦ where Q is a nonsingular matrix. Since QΦ does not satisfy the generalized eigenvector problem SbΦ = SmΦΛ for an arbitrary nonsingular matrix Q, we need to determine an algorithm that will yield Q = I.
In order to obtain the optimum linear transform Φ, we constrain the training of a two-layer linear feed-forward network, such that at convergence, the weights of the first layer simultaneously diagonalize Sm and Sb. Thus, the hetero-associative network is
trained by minimizing a constrained MSE at the network output. This training procedure
yields two novel algorithms for LDA.
1.2 LDA AND GENERALIZED EIGEN-DECOMPOSITION
Since the LDA problem is a generalized eigen-decomposition problem for the
symmetric-definite case, the self-organizing algorithms derived from the hetero-associative networks lead us to construct adaptive algorithms for generalized eigen-decomposition. Such adaptive algorithms are required in several applications of image
and signal processing. As an example, we consider the problem of online interference
cancellation in digital mobile communications.
Similar to the LDA problem SbΦ = SmΦΛ, the generalized eigen-decomposition problem AΦ = BΦΛ involves the matrix pencil (A, B), where A and B are assumed to be real,
symmetric and positive definite. Although a solution to the problem can be obtained by a
conventional method, there are several applications in image and signal processing where
an online solution of generalized eigen-decomposition is desired. In these real-time
situations, the matrices A and B are themselves unknown. Instead, there are available two
sequences of random vectors {x_k} and {y_k} with lim_{k→∞} E[x_k x_k^T] = A and lim_{k→∞} E[y_k y_k^T] = B, where x_k and y_k represent the online observations of the application. For every sample (x_k, y_k), we need to obtain the current estimates Φ_k and Λ_k of Φ and Λ, respectively, such that Φ_k and Λ_k converge strongly to their true values.
The conventional approach for evaluating Φ and Λ requires the computation of (A, B)
after collecting all of the samples, and then the application of a numerical procedure; i.e.,
the approach works in a batch fashion. There are two problems with this approach.
Firstly, the dimension of the samples may be large so that even if all of the samples are
available, performing the generalized eigen-decomposition may take prohibitively large
amount of computational time. Secondly, the conventional schemes can not adapt to slow
or small changes in the data. So the approach is not suitable for real-time applications
where the samples come in an online fashion.
Although the adaptive generalized eigen-decomposition algorithms are natural
generalizations of the self-organizing algorithms for LDA, their derivations do not
constitute a proof of convergence. We, therefore, give a rigorous proof of convergence
by stochastic approximation theory, that shows that the estimates obtained from our
adaptive algorithms converge with probability one to the generalized eigenvectors.
In summary, the study offers the following contributions: (1) we present two novel
algorithms that unify the problems of hetero-associative training and LDA feature
extraction; and (2) we discuss two single-stage adaptive algorithms for generalized eigen-decomposition from two sequences of random vectors.
In our experiments, we consider an example of online interference cancellation in digital
mobile communications. In this problem, the signal from a desired user at a far distance
from the receiver is corrupted by another user very near to the base. The optimum linear
transform w for weighting the signal is the first principal generalized eigenvector of the
signal correlation matrix with respect to the interference correlation matrix. Experiments
with our algorithm suggest a rapid convergence within four bits of transmitted signal, and
provides a significant advantage over many current methods.
2. HETERO-ASSOCIATIVE TRAINING AND LDA
We consider a two-layer linear network performing a one-from-m classification. Let x ∈ R^n be an input to the network to be classified into one out of m classes ω_1, ..., ω_m. If x ∈ ω_i, then the desired output is d = e_i (the ith standard basis vector). Without loss of generality, we assume the inputs to be a zero-mean stationary process with a nonsingular covariance matrix.
2.1 EXTRACTING THE PRINCIPAL LDA COMPONENTS
In the two-layer linear hetero-associative network, let there be p neurons in the hidden layer, and m output units. The aim is to develop an algorithm so that individual weight vectors for the first layer converge to the first p ≤ m generalized eigenvectors corresponding to the p significant generalized eigenvalues arranged in decreasing order. Let w_i ∈ R^n (i = 1, ..., n) be the weight vectors for the input layer, and v_i ∈ R^m (i = 1, ..., m) be the weight vectors for the output layer.
The neurons are trained sequentially; i.e., the training of the jth neuron is started only after the weight vector of the (j−1)th neuron has converged. Assume that all the j−1 previous neurons have already been trained and their weights have converged to the
optimal weight vectors w_i* for i ∈ [1, j−1]. To extract the jth generalized eigenvector in the output of the jth neuron, the updating model for this neuron should be constructed by subtracting the results from all previously computed j−1 generalized eigenvectors from the desired output d_j as below:
    d̃_j = d_j − Σ_{i=1}^{j−1} v_i w_i^T x.   (1)
This process is equivalent to the deflation of the desired output.
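A minimal sketch of this deflation step (the helper and its variable names are mine, not the paper's):

    import numpy as np

    def deflate(d, x, converged_w, converged_v):
        d_tilde = d.astype(float).copy()
        for w_i, v_i in zip(converged_w, converged_v):
            d_tilde -= v_i * (w_i @ x)       # subtract v_i w_i^T x
        return d_tilde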
The scatter matrices Sm and Sb can be obtained from x and d as Sm = E[x x^T] and Sb = M M^T, where M = E[x d^T]. We need to extract the jth LDA transform w_j that satisfies the generalized eigenvector equation Sb w_j = λ_j Sm w_j, such that λ_j is the jth largest generalized eigenvalue. The constrained MSE criterion at the network output is
    J(w_j, v_j) = E[ || d̃_j − v_j w_j^T x ||^2 ] + μ (w_j^T Sm w_j − 1).   (2)
Using (2), we obtain the update equation for w_j as

    w_{k+1}^{(j)} = w_k^{(j)} + η [ M v_k^{(j)} − Sm w_k^{(j)} (w_k^{(j)T} M v_k^{(j)}) − Sm Σ_{i=1}^{j−1} w_k^{(i)} v_k^{(i)T} v_k^{(j)} ].   (3)
Differentiating (2) with respect to v_j and equating it to zero, we obtain the optimum value of v_j as M^T w_j. Substituting this v_j in (3), we obtain

    w_{k+1}^{(j)} = w_k^{(j)} + η [ Sb w_k^{(j)} − Sm w_k^{(j)} (w_k^{(j)T} Sb w_k^{(j)}) − Sm Σ_{i=1}^{j−1} w_k^{(i)} w_k^{(i)T} Sb w_k^{(j)} ].   (4)
Let W_k be the matrix whose ith column is w_k^{(i)}. Then (4) can be written in matrix form as

    W_{k+1} = W_k + η ( Sb W_k − Sm W_k UT[ W_k^T Sb W_k ] ),   (5)

where UT[·] sets all elements below the diagonal of its matrix argument to zero, thereby making it upper triangular.
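For illustration, one step of (5) can be sketched as follows; UT[·] maps to np.triu, and the constant gain eta is an assumption made for the sketch:

    import numpy as np

    def ut(M):
        return np.triu(M)                    # keep diagonal and above, zero below

    def step_lda(W, Sb, Sm, eta=1e-3):
        return W + eta * (Sb @ W - Sm @ W @ ut(W.T @ Sb @ W))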
2.2 ANOTHER SELF-ORGANIZING ALGORITHM FOR LDA
In the previous analysis for a two-layer linear hetero-associative network, we observed that the optimal value for V is W^T M, where the ith column of W and the ith row of V are formed by w_i and v_i^T respectively. It is, therefore, worthwhile to explore the gradient descent procedure on the error function below instead of (2):
    J(W) = E[ || d − M^T W W^T x ||^2 ].   (6)
By differentiating this error function with respect to W, and including the deflation process, we obtain the following update procedure for W instead of (5):

    W_{k+1} = W_k + η ( 2 Sb W_k − Sm W_k UT[ W_k^T Sb W_k ] − Sb W_k UT[ W_k^T Sm W_k ] ).   (7)
3. LDA AND GENERALIZED EIGEN-DECOMPOSITION
Since LOA consists of solving the generalized eigenvector problem Sb<P=Sm<PA, we can
naturally generalize algorithms (5) and (7) to obtain adaptive algorithms for the
generalized eigen-decomposition problem A<P=B<PA, where A and B are assumed to be
symmetric and positive definite. Here, we do not have the matrices A and B. Instead,
there are available two sequences of random vectors {x_k} and {y_k} with lim_{k→∞} E[x_k x_k^T] = A and lim_{k→∞} E[y_k y_k^T] = B, where x_k and y_k represent the online observations.
From (5), we obtain the following adaptive algorithm for generalized eigen-decomposition:

    W_{k+1} = W_k + η_k ( A_k W_k − B_k W_k UT[ W_k^T A_k W_k ] ).   (8)
Here {η_k} is a sequence of scalar gains, whose properties are described in Section 4. The sequences {A_k} and {B_k} are instantaneous values of the matrices A and B respectively. Although the A_k and B_k values can be obtained from x_k and y_k as x_k x_k^T and y_k y_k^T respectively, our algorithm requires that at least one of the {A_k} or {B_k} sequences have a dominated convergence property. Thus, the {A_k} and {B_k} sequences may be obtained from x_k x_k^T and y_k y_k^T by the following algorithms:
    A_k = A_{k−1} + γ_k ( x_k x_k^T − A_{k−1} )   and   B_k = B_{k−1} + γ_k ( y_k y_k^T − B_{k−1} ),   (9)

where A_0 and B_0 are symmetric, and {γ_k} is a scalar gain sequence.
As done before, we can generalize (7) to obtain the following adaptive algorithm for generalized eigen-decomposition from a sequence of samples {A_k} and {B_k}:

    W_{k+1} = W_k + η_k ( 2 A_k W_k − B_k W_k UT[ W_k^T A_k W_k ] − A_k W_k UT[ W_k^T B_k W_k ] ).   (10)
Although algorithms (8) and (10) were derived from the network MSE by the gradient
descent approach, this derivation does not guarantee their convergence. In order to prove
their convergence, we use stochastic approximation theory. We give the convergence
results only for algorithm (10).
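The following sketch combines the running averages (9) with update (10); the gain schedules γ_k = 1/k and η_k = c/k, and the identity initialization of B_k (to keep it positive definite early on), are illustrative choices, not prescribed by the paper:

    import numpy as np

    def ut(M):
        return np.triu(M)

    def adaptive_ged(samples_x, samples_y, p, c=0.5):
        n = samples_x[0].shape[0]
        A = np.zeros((n, n)); B = np.eye(n)      # B starts positive definite
        W = 0.01 * np.random.default_rng(0).standard_normal((n, p))
        for k, (x, y) in enumerate(zip(samples_x, samples_y), start=1):
            gamma = 1.0 / k                      # averaging gain in (9)
            A += gamma * (np.outer(x, x) - A)
            B += gamma * (np.outer(y, y) - B)
            eta = c / k                          # decreasing gain, cf. (A2)
            W += eta * (2 * A @ W - B @ W @ ut(W.T @ A @ W)
                        - A @ W @ ut(W.T @ B @ W))
        return W  # columns estimate the first p generalized eigenvectors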
4. STOCHASTIC APPROXIMATION CONVERGENCE PROOF FOR ALGORITHM (10)
In order to prove the con vergence of (10), we use stochastic approximation theory due to
Ljung (1977). In stochastic approximation theory, we study the asymptotic properties of
(10) in terms of the ordinary differential equation (ODE)
    dW(t)/dt = lim_{k→∞} E[ 2 A_k W − B_k W UT[ W^T A_k W ] − A_k W UT[ W^T B_k W ] ],
BkWUT[ W T AkW]- AkWUT[ W T BkW]],
where W(t) is the continuous time counterpart of Wk with t denoting continuous time. The
method of proof requires the following steps: (1) establishing a set of conditions to be
imposed on A, B, A", B", and 17", (2) finding the stable stationary points of the ODE; and
(3) demonstrating that Wk visits a compact subset of the domain of attraction of a stable
stationary point infinitely often.
We use Theorem 1 of Ljung (1977) for the convergence proof. The following is a general
set of assumptions for the convergence proof of (10):
Assumption (A1). Each x_k and y_k is bounded with probability one, and lim_{k→∞} E[x_k x_k^T] = A and lim_{k→∞} E[y_k y_k^T] = B, where A and B are positive definite.
Assumption (A2). {η_k ∈ R+} satisfies η_k ↓ 0, Σ_{k=0}^∞ η_k = ∞, Σ_{k=0}^∞ η_k^r < ∞ for some r > 1, and lim_{k→∞} sup (η_k^{−1} − η_{k−1}^{−1}) < ∞; for example, η_k = 1/(k+1) satisfies all of these conditions.
Assumption (A3). The p largest generalized eigenvalues of A with respect to B are each of unit multiplicity.
Lemma 1. Let A1 and A2 hold. Let W* be a locally asymptotically stable (in the sense of Liapunov) solution to the ordinary differential equation (ODE)

    dW(t)/dt = 2 A W(t) − B W(t) UT[ W(t)^T A W(t) ] − A W(t) UT[ W(t)^T B W(t) ],   (11)

with domain of attraction D(W*). Then, if there is a compact subset S of D(W*) such that W_k ∈ S infinitely often, we have W_k → W* with probability one as k → ∞.
We denote λ_1 > λ_2 > ... > λ_p ≥ ... ≥ λ_n > 0 as the generalized eigenvalues of A with respect to B, and φ_i as the generalized eigenvector corresponding to λ_i, such that φ_1, ..., φ_n are orthonormal with respect to B. Let Φ = [φ_1 ... φ_n] and Λ = diag(λ_1, ..., λ_n) denote the matrices of generalized eigenvectors and eigenvalues of A with respect to B. Note that if φ_i is a generalized eigenvector, then d_i φ_i (|d_i| = 1) is also a generalized eigenvector.
In the next two lemmas, we first prove that all the possible equilibrium points of the ODE (11) are, up to an arbitrary permutation, the p generalized eigenvectors of A with respect to B corresponding to the p largest generalized eigenvalues. We next prove that all these equilibrium points of the ODE (11) are unstable equilibrium points, except for [d_1 φ_1 ... d_p φ_p], where |d_i| = 1 for i = 1, ..., p.

Lemma 2. For the ordinary differential equation (11), let A1 and A3 hold. Then W = Φ D P are equilibrium points of (11), where D = [D_1 | 0]^T is an n×p matrix with D_1 being a p×p diagonal matrix with diagonal elements d_i such that |d_i| = 1 or d_i = 0, and P is an arbitrary p×p permutation matrix. ∎
Lemma 3. Let A1 and A3 hold. Then W = Φ D (where D = [D_1 | 0]^T, D_1 = diag(d_1, ..., d_p), |d_i| = 1) are stable equilibrium points of the ODE (11). In addition, W = Φ D P (d_i = 0 for some i ≤ p, or P ≠ I) are unstable equilibrium points of the ODE (11). ∎

Lemma 4. For the ordinary differential equation (11), let A1 and A3 hold. Then the points W = Φ D (where D = [D_1 | 0]^T, D_1 = diag(d_1, ..., d_p), |d_i| = 1 for i = 1, ..., p) are asymptotically stable. ∎

Lemma 5. Let A1-A3 hold. Then there exists a uniform upper bound for η_k such that W_k is uniformly bounded with probability one. ∎
The convergence of algorithm (10) can now be established by referring to Theorem 1 of Ljung.

Theorem 1. Let A1-A3 hold. Assume that with probability one the process {W_k} visits infinitely often a compact subset of the domain of attraction of one of the asymptotically stable points Φ D. Then with probability one

lim_{k→∞} W_k = Φ D.

Proof. By Lemma 4, Φ D (|d_i| = 1) are asymptotically stable points of the ODE (11). Since we assume that {W_k} visits a compact subset of the domain of attraction of Φ D infinitely often, Lemma 1 then implies the theorem. ∎
5. EXPERIMENTAL RESULTS
We describe the performance of algorithms (8) and (10) with an example of online
interference cancellation in a high-dimensional signal, in a digital mobile communication
problem. The problem occurs when the desired user transmits a signal from a far distance
to the receiver, while another user simultaneously transmits very near to the base. For
common receivers, the quality of the received signal from the desired user is dominated
by interference from the user close to the base. Due to the high rate and large dimension
of the data, the system demands an accurate detection method for just a few data samples.
If we use conventional (numerical analysis) methods, signal detection will require a significant part of the time slot allotted to a receiver, accordingly reducing the effective communication rate. Adaptive generalized eigen-decomposition algorithms, on the other hand, allow the tracking of slow changes and perform signal detection directly.
The details of the data model can be found in Zoltowski et al. (1996). In this application, the duration of each transmitted code is 127 μs, within which we have 10 μs of signal and 117 μs of interference. We take 10 frequency samples equi-spaced between -0.4 MHz and +0.4 MHz. Using 6 antennas, the signal and interference correlation matrices are of dimension 60×60 in the complex domain.
We use both algorithms (8) and (10) for the cancellation of the interference. Figure 1
shows the convergence of the principal generalized eigenvector and eigenvalue. The
closed form solution is obtained after collecting all of the signal and interference
samples. In order to measure the accuracy of the algorithms, we compute the direction cosine between the estimated principal generalized eigenvector and the generalized eigenvector computed by the conventional method. The optimum value is one. We also show the
estimated principal generalized eigenvalue in Figure 1b. The results show that both
algorithms converge after the 4th bit of signal.
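For concreteness, the direction-cosine accuracy measure described above can be computed as in the sketch below; whether the cosine should be taken in the plain Euclidean inner product or in the B-weighted one is not stated, so the Euclidean version here is an assumption.

```python
import numpy as np

def direction_cosine(w_est, w_ref):
    """|cos| of the angle between the estimated and reference
    principal generalized eigenvectors; the optimum value is 1."""
    num = abs(np.vdot(w_ref, w_est))          # vdot conjugates, so complex data is handled
    den = np.linalg.norm(w_est) * np.linalg.norm(w_ref)
    return num / den
```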
[Figure 1 here: two panels, (a) and (b), each comparing Algorithm (10) and Algorithm (8) against the closed-form solution; horizontal axes show the number of samples.]
Figure 1. (a) Direction Cosine of Estimated First Principal Generalized Eigenvector, and
(b) Estimated First Principal Generalized Eigenvalue.
References

C. Chatterjee and V. Roychowdhury (1996), "Statistical Risk Analysis for Classification and Feature Extraction by Multilayer Perceptrons", Proceedings IEEE Int'l Conference on Neural Networks, Washington D.C.

K. Fukunaga (1990), Introduction to Statistical Pattern Recognition, 2nd Edition, New York: Academic Press.

P. Gallinari, S. Thiria, F. Badran, F. Fogelman-Soulie (1991), "On the Relations Between Discriminant Analysis and Multilayer Perceptrons", Neural Networks, Vol. 4, pp. 349-360.

L. Ljung (1977), "Analysis of Recursive Stochastic Algorithms", IEEE Transactions on Automatic Control, Vol. AC-22, No. 4, pp. 551-575.

A. R. Webb and D. Lowe (1990), "The Optimised Internal Representation of Multilayer Classifier Networks Performs Nonlinear Discriminant Analysis", Neural Networks, Vol. 3, pp. 367-375.

M. D. Zoltowski, C. Chatterjee, V. Roychowdhury and J. Ramos (1996), "Blind Adaptive 2D RAKE Receiver for CDMA Based on Space-Time MVDR Processing", submitted to IEEE Transactions on Signal Processing.
233 | 1,211 | Learning Bayesian belief networks with
neural network estimators
Stefano Monti*
Gregory F. Cooper*,†
*Intelligent Systems Program
University of Pittsburgh
901M CL, Pittsburgh, PA - 15260
†Center for Biomedical Informatics
University of Pittsburgh
8084 Forbes Tower, Pittsburgh, PA - 15261
smonti@isp.pitt.edu
gfc@cbmi.upmc.edu
Abstract
In this paper we propose a method for learning Bayesian belief
networks from data. The method uses artificial neural networks
as probability estimators, thus avoiding the need for making prior
assumptions on the nature of the probability distributions governing the relationships among the participating variables. This new
method has the potential for being applied to domains containing
both discrete and continuous variables arbitrarily distributed. We
compare the learning performance of this new method with the
performance of the method proposed by Cooper and Herskovits
in [7]. The experimental results show that, although the learning
scheme based on the use of ANN estimators is slower, the learning
accuracy of the two methods is comparable.
Category: Algorithms and Architectures.
1 Introduction
Bayesian belief networks (BBN) are a powerful formalism for representing and reasoning under uncertainty. This representation has a solid theoretical foundation
[13], and its practical value is suggested by the rapidly growing number of areas to
which it is being applied. BBNs concisely represent the joint probability distribution
over a set of random variables, by explicitly identifying the probabilistic dependencies and independencies between these variables. Their clear semantics make BBNs
particularly. suitable for being used in tasks such as diagnosis, planning, and control.
The task of learning a BBN from data can usually be formulated as a search
over the space of network structures, and as the subsequent search for an optimal parametrization of the discovered structure or structures. The task can be
further complicated by extending the search to account for hidden variables and for
the presence of data points with missing values. Different approaches have been
successfully applied to the task of learning probabilistic networks from data [5].
In all these approaches, simplifying assumptions are made to circumvent practical problems in the implementation of the theory. One common assumption that
is made is that all variables are discrete, or that all variables are continuous and
normally distributed.
In this paper, we propose a novel method for learning BBNs from data that makes
use of artificial neural networks (ANN) as probability distribution estimators, thus
avoiding the need for making prior assumptions on the nature of the probability
distribution governing the relationships among the participating variables. The use
of ANNs as probability distribution estimators is not new [3], and its application to
the task of learning Bayesian belief networks from data has been recently explored
in [11] . However, in [11] the ANN estimators were used in the parametrization of the
BBN structure only, and cross validation was the method of choice for comparing
different network structures. In our approach, the ANN estimators are an essential
component of the scoring metric used to search over the BBN structure space. We
ran several simulations to compare the performance of this new method with the
learning method described in [7]. The results show that, although the learning
scheme based on the use of ANN estimators is slower, the learning accuracy of the
two methods is comparable.
The rest of the paper is organized as follows. In Section 2 we briefly introduce the Bayesian belief network formalism and some basics of how to learn BBNs from
data. In Section 3, we describe our learning method, and detail the use of artificial
neural networks as probability distribution estimators. In Section 4 we present some
experimental results comparing the performance of this new method with the one
proposed in [7]. We conclude the paper with some suggestions for further research.
2 Background
A Bayesian belief network is defined by a triple (G, Ω, P), where G = (X, E) is a directed acyclic graph with a set of nodes X = {x_1, ..., x_n} representing domain variables, and with a set of arcs E representing probabilistic dependencies among domain variables; Ω is the space of possible instantiations of the domain variables¹; and P is a probability distribution over the instantiations in Ω. Given a node x ∈ X, we use π_x to denote the set of parents of x in X. The essential property of BBNs is summarized by the Markov condition, which asserts that each variable is independent of its non-descendants given its parents. This property allows for the representation of the multivariate joint probability distribution over X in terms of the univariate conditional distributions P(x_i | π_i, θ_i) of each variable x_i given its parents π_i, with θ_i the set of parameters needed to fully characterize the conditional probability. Application of the chain rule, together with the Markov condition, yields the following factorization of the joint probability of any particular instantiation of all n variables:
P(x'_1, ..., x'_n) = ∏_{i=1}^{n} P(x'_i | π'_i, θ_i).   (1)

¹An instantiation ω of all n variables in X is an n-tuple of values {x'_1, ..., x'_n} such that x_i = x'_i for i = 1, ..., n.
2.1 Learning Bayesian belief networks
The task of learning BBNs involves learning the network structure and learning
the parameters of the conditional probability distributions. A well established set
of learning methods is based on the definition of a scoring metric measuring the
fitness of a network structure to the data, and on the search for high-scoring network
structures based on the defined scoring metric [7, 10]. We focus on these methods,
and in particular on the definition of Bayesian scoring metrics.
In a Bayesian framework, ideally classification and prediction would be performed
by taking a weighted average over the inferences of every possible belief network
containing the domain variables. Since this approach is in general computationally
infeasible, often an attempt has been made to use a high scoring belief network for
classification. We will assume this approach in the remainder of this paper.
The basic idea of the Bayesian approach is to maximize the probability P(B_S | D) = P(B_S, D)/P(D) of a network structure B_S given a database of cases D. Because for all network structures the term P(D) is the same, for the purpose of model selection it suffices to calculate P(B_S, D) for all B_S. The Bayesian metrics developed so far all rely on the following assumptions: 1) given a BBN structure, all cases in D are drawn independently from the same distribution (multinomial sample); 2) there are no cases with missing values (complete database); 3) the parameters of the conditional probability distribution of each variable are independent (global parameter independence); and 4) the parameters associated with each instantiation of the parents of a variable are independent (local parameter independence).
The application of these assumptions allows for the following factorization of the probability P(B_S, D):

P(B_S, D) = P(B_S) P(D | B_S) = P(B_S) ∏_{i=1}^{n} s(x_i, π_i, D),   (2)

where n is the number of nodes in the network, and each s(x_i, π_i, D) is a term measuring the contribution of x_i and its parents π_i to the overall score of the network B_S. The exact form of the terms s(x_i, π_i, D) slightly differs in the Bayesian scoring metrics defined so far, and for lack of space we refer the interested reader to the relevant literature [7, 10].
to the relevant literature [7, 10].
By looking at Equation (2), it is clear that if we assume a uniform prior distribution
over all network structures, the scoring metric is decomposable, in that it is just
the product of the S(Xi, 71'i, V) over all Xi times a constant P(Bs). Suppose that
two network structures Bs and BSI differ only for the presence or absence of a
given arc into Xi. To compare their metrics, it suffices to compute s( Xi, 71'i, V)
for both structures, since the other terms are the same. Likewise, if we assume
a decomposable prior distribution over network structures, of the form P(Bs) =
11 Pi, as suggested in [10], the scoring metric is still decomposable, since we can
include each Pi into the corresponding s( Xi, 71'i, V).
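To make the decomposability argument concrete, the sketch below compares two structures that differ only in one arc by recomputing a single local term; `local_score` stands in for whichever s(x_i, π_i, D) a particular metric defines, and is a placeholder introduced for the illustration.

```python
def score_ratio(local_score, data, x_i, parents_without, parents_with):
    """Score ratio between two structures differing in one arc into x_i.

    Decomposability means only the local term of x_i changes, so the
    ratio of the two full network scores reduces to a ratio of local terms.
    """
    return (local_score(x_i, parents_with, data)
            / local_score(x_i, parents_without, data))

# A ratio greater than 1 favors the structure containing the extra arc.
```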
Once a scoring metric is defined, a search for a high-scoring network structure can
be carried out. This search task (in several forms) has been shown to be NP-hard
[4,6]. Various heuristics have been proposed to find network structures with a high
score. One such heuristic is known as K2 [7], and it implements a greedy search over
the space of network structures. The algorithm assumes a given ordering on the
variables. For simplicity, it also assumes that no prior information on the network is
available, so the prior probability distribution over the network structures is uniform
and can be ignored in comparing network structures.
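A schematic rendering of the K2-style greedy search just described is given below; the `max_parents` cap and the generic `local_score` callable are simplifying assumptions made for the illustration, not an exact transcription of [7].

```python
def k2_search(variables, data, local_score, max_parents=3):
    """Greedy K2-style structure search.

    variables:   list of variable names in a fixed ancestral order
    local_score: local_score(x, parents, data) -> s(x, parents, D)
    Returns a dict mapping each variable to its chosen parent set.
    """
    structure = {}
    for i, x in enumerate(variables):
        parents = set()
        best = local_score(x, parents, data)
        improved = True
        while improved and len(parents) < max_parents:
            improved = False
            candidates = [z for z in variables[:i] if z not in parents]
            scored = [(local_score(x, parents | {z}, data), z)
                      for z in candidates]
            if scored:
                s_new, z_best = max(scored)
                if s_new > best:          # greedy: keep only improving arcs
                    best, improved = s_new, True
                    parents.add(z_best)
        structure[x] = parents
    return structure
```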
The Bayesian scoring metrics developed so far either assume discrete variables
[7, 10], or continuous variables normally distributed [9]. In the next section, we
propose a possible generalization which allows for the inclusion of both discrete and
continuous variables with arbitrary probability distributions.
3 An ANN-based scoring metric
The main idea of this work is to use artificial neural networks as probability estimators, to define a decomposable scoring metric for which no informative priors on the class, or classes, of the probability distributions of the participating variables are needed. The first three of the four assumptions described in the previous section are still needed, namely, the assumption of a multinomial sample, the assumption of a complete database, and the assumption of global parameter independence. However, the use of ANN estimators allows for the elimination of the assumption of local parameter independence. In fact, the conditional probabilities corresponding to the different instantiations of the parents of a variable are represented by the same ANN, and they share the same network weights and the same training data.
Let us denote with D_l ≡ {C_1, ..., C_{l-1}} the set of the first l - 1 cases in the database, and with x_i^{(l)} and π_i^{(l)} the instantiations of x_i and π_i in the l-th case respectively. The joint probability P(B_S, D) can be written as:

P(B_S) P(D | B_S) = P(B_S) ∏_{l=1}^{m} P(C_l | D_l, B_S)
                  = P(B_S) ∏_{l=1}^{m} ∏_{i=1}^{n} P(x_i^{(l)} | π_i^{(l)}, D_l, B_S).   (3)
If we assume uninformative priors, or decomposable priors on network structures, of the form P(B_S) = ∏_i p_i, the probability P(B_S, D) is decomposable. In fact, we can interchange the two products in Equation 3, so as to obtain

P(B_S, D) = ∏_{i=1}^{n} [ p_i ∏_{l=1}^{m} P(x_i^{(l)} | π_i^{(l)}, D_l, B_S) ] = ∏_{i=1}^{n} s(x_i, π_i, D),   (4)

where s(x_i, π_i, D) is the term between square brackets, and it is only a function of x_i and its parents in the network structure B_S (p_i can be neglected if we assume a uniform prior over the network structures). The computation of Equation 4 corresponds to the application of the prequential method discussed by Dawid [8].
The estimation of each term P(x_i | π_i, D_l, B_S) can be done by means of a neural network. Several schemes are available for training a neural network to approximate a given probability distribution, or density. Notice that the calculation of each term s(x_i, π_i, D) can be computationally very expensive. For each node x_i, computing s(x_i, π_i, D) requires the training of m ANNs, where m is the size of the database. To reduce this computational cost, we use the following approximation, which we call the t-invariance approximation: for any l ∈ {1, ..., m-1}, given the probability P(x_i | π_i, D_l, B_S), at least t (1 ≤ t ≤ m - l) new cases are needed in order to alter such probability. That is, for each positive integer h, such that h < t, we assume P(x_i | π_i, D_{l+h}, B_S) = P(x_i | π_i, D_l, B_S). Intuitively, this approximation implies the assumption that, given our present belief about the value of each P(x_i | π_i, D_l, B_S), at least t new cases are needed to revise this belief. By making this approximation, we achieve a t-fold reduction in the computation needed, since we now need to build and train only m/t ANNs for each x_i, instead of the original m. In fact, application of the t-invariance approximation to the computation of a given s(x_i, π_i, D) yields:

s(x_i, π_i, D) ≈ p_i ∏_{j=0}^{m/t - 1} ∏_{h=1}^{t} P(x_i^{(jt+h)} | π_i^{(jt+h)}, D_{jt+1}, B_S).   (5)
Rather than selecting a constant value for t, we can choose to increment t as the size of the training database D_l increases. This approach seems preferable. When estimating P(x_i | π_i, D_l, B_S), this estimate will be very sensitive to the addition of new cases when l is small, but will become increasingly insensitive to the addition of new cases as l grows. A scheme for the incremental updating of t can be summarized in the equation t = ⌈λl⌉, where l is the number of cases already seen (i.e., the cardinality of D_l), and 0 < λ ≤ 1. For example, given a data set of 50 cases, the updating scheme t = ⌈0.5 l⌉ would require the training of the ANN estimators P(x_i | π_i, D_l, B_S) for l = 1, 2, 3, 5, 8, 12, 18, 27, 41.
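The retraining schedule implied by t = ⌈λl⌉ can be reproduced with the short sketch below, which recovers the sequence 1, 2, 3, 5, 8, 12, 18, 27, 41 quoted in the text for λ = 0.5 and 50 cases.

```python
import math

def retraining_points(m, lam):
    """Indices l at which the ANN estimators are retrained,
    under the incremental schedule t = ceil(lam * l)."""
    points, l = [], 1
    while l <= m:
        points.append(l)
        l += max(1, math.ceil(lam * l))   # advance by t cases
    return points

print(retraining_points(50, 0.5))   # [1, 2, 3, 5, 8, 12, 18, 27, 41]
```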
4 Evaluation
In this section, we describe the experimental evaluation we conducted to test the
feasibility of use of the ANN-based scoring metric developed in the previous section. All the experiments are performed on the belief network Alarm, a multiply-connected network originally developed to model anesthesiology problems that may
occur during surgery [2]. It contains 37 nodes/variables and 46 arcs. The variables
are all discrete, and take between 2 and 4 distinct values. The database used in the
experiments was generated from Alarm, and it is the same database used in [7].
In the experiments, we use a modification of the algorithm K2 [7]. The modified
algorithm, which we call ANN-K2, replaces the closed-form scoring metric developed
in [7] with the ANN-based scoring metric of Equation (5). The performance of ANN-K2 is measured in terms of accuracy of the recovered network structure, by counting
the number of edges added and omitted with respect to the Alarm network; and in
terms of the accuracy of the learned joint probability distribution, by computing
its cross entropy with respect to the joint probability distribution of Alarm. The
learning performance of ANN-K2 is also compared with the performance of K2. To
train the ANNs, we used the conjugate-gradient search algorithm [12].
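The cross-entropy accuracy measure mentioned above is computed in the sketch below as the Kullback-Leibler divergence between the true and learned joint distributions, the usual reading of "cross entropy" in this literature; that identification, and the explicit joint table, are assumptions made for the illustration.

```python
import math

def cross_entropy(p_true, p_learned):
    """KL divergence D(p_true || p_learned) over a shared support.

    p_true, p_learned: dicts mapping instantiations to probabilities
    """
    return sum(p * math.log(p / p_learned[w])
               for w, p in p_true.items() if p > 0.0)
```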
Since all the variables in the Alarm network are discrete, the ANN estimators are defined based on the softmax model, with normalized exponential output units, and with cross entropy as the cost function. As a regularization technique, we augment the training set so as to induce a uniform conditional probability over the unseen instantiations of the ANN input. Given the probability P(x_i | π_i, D_l) to be estimated, and assuming x_i is a k-valued variable, for each instantiation π'_i that does not appear in the database D_l, we generate k new cases, with π_i instantiated to π'_i, and x_i taking each of its k values. As a result, the neural network estimates P(x_i | π'_i, D_l) to be uniform, with P(x_i | π'_i, D_l) = 1/k for each of x_i's values x_{i1}, ..., x_{ik}.
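A minimal sketch of this augmentation step is given below; the case representation (parent tuple plus child value) is our own encoding for the illustration.

```python
from itertools import product

def augment_training_set(cases, parent_domains, child_values):
    """Add k uniform pseudo-cases for every parent instantiation
    that never occurs in the data, so the net predicts 1/k there.

    cases:          list of (parent_tuple, child_value) pairs
    parent_domains: list of value lists, one per parent variable
    child_values:   the k possible values of the child variable
    """
    seen = {p for p, _ in cases}
    augmented = list(cases)
    for p in product(*parent_domains):
        if p not in seen:
            augmented.extend((p, v) for v in child_values)
    return augmented
```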
We ran simulations where we varied the size of the training data set (100, 500, 1000, 2000, and 3000 cases), and the value of λ in the updating scheme t = ⌈λl⌉ described in Section 3. We used the settings λ = 0.35 and λ = 0.5. For each run, we measured the number of arcs added, the number of arcs omitted, the cross entropy, and the computation time, for each variable in the network. That is, we considered each node, together with its parents, as a simple BBN, and collected the measures of interest for each of these BBNs. Table 1 reports the mean and standard deviation of each measure over the 37 variables of Alarm, for both ANN-K2 and K2. The results for ANN-K2 shown in Table 1 correspond to the setting λ = 0.5,
Data  Algo.    arcs +        arcs -        cross entropy           time (secs)
set            m     s.d.    m     s.d.    m      med    s.d.     m      med    s.d.
100   ANN-K2   0.19  0.40    0.62  0.86    0.23   .051   0.52     130    88     159
100   K2       0.75  1.28    0.22  0.48    0.08   .070   0.10     0.44   .06    1.48
500   ANN-K2   0.19  0.40    0.22  0.48    0.04   .010   0.11     1077   480    1312
500   K2       0.22  0.42    0.11  0.31    0.02   .010   0.02     0.13   .06    0.22
1000  ANN-K2   0.24  0.49    0.22  0.48    0.05   .005   0.15     6909   4866   6718
1000  K2       0.11  0.31    0.03  0.16    0.01   .006   0.01     0.34   .23    0.46
2000  ANN-K2   0.19  0.40    0.11  0.31    0.02   .002   0.06     6458   4155   7864
2000  K2       0.05  0.23    0.03  0.16    0.005  .002   0.007    0.46   .44    0.65
3000  ANN-K2   0.16  0.37    0.05  0.23    0.01   .001   0.017    11155  4672   2136
3000  K2       0.00  0.00    0.03  0.16    0.004  .001   0.005    1.02   .84    1.11
Table 1: Comparison of the performance of ANN-K2 and of K2 in terms of number of arcs wrongly added (+), number of arcs wrongly omitted (-), cross entropy, and computation time. Each column reports the mean m, the median med, and the standard deviation s.d. of the corresponding measure over the 37 nodes/variables of Alarm. The median for the number of arcs added and omitted is always 0, and is not reported.
since their difference from the results corresponding to the setting λ = 0.35 was not statistically significant.

Standard t-tests were performed to assess the significance of the difference between the measures for K2 and the measures for ANN-K2, for each data set cardinality. No technique to correct for multiple testing was applied. Most measures show no statistically significant difference, either at the 0.05 level or at the 0.01 level (most p values are well above 0.2). In the simulation with 100 cases, both the difference between the mean number of arcs added and the difference between the mean number of arcs omitted are statistically significant (p ≤ 0.01). However, these differences cancel out, in that ANN-K2 adds fewer extra arcs than K2, and K2 omits fewer arcs than ANN-K2. This is reflected in the corresponding cross entropies, whose difference is not statistically significant (p = 0.08). In the simulation with 1000 cases, only the difference in the number of arcs omitted is statistically significant (p ≤ .03). Finally, in the simulation with 3000 cases, only the difference in the number of arcs added is statistically significant (p ≤ .02). K2 misses a single arc, and does not add any extra arc, and this is the best result to date. By comparison, ANN-K2 omits 2 arcs, and adds 5 extra arcs. For the simulation with 3000 cases, we also computed Wilcoxon rank sum tests. The results were consistent with the t-test results, showing a statistically significant difference only in the number of arcs added. Finally, as can be noted in Table 1, the difference in computation time is of several orders of magnitude, thus making a statistical analysis superfluous.
A natural question to ask is how sensitive the learning procedure is to the order of the cases in the training set. Ideally, the procedure would be insensitive to
this order. Since we are using ANN estimators, however, which perform a greedy
search in the solution space, particular permutations of the training cases might
cause the ANN estimators to be more susceptible to getting stuck in local maxima.
We performed some preliminary experiments to test the sensitivity of the learning
procedure to the order of the cases in the training set. We ran few simulations in
which we randomly changed the order of the cases. The recovered structure was
identical in all simulations. Moreover, the difference in cross entropy for different
orderings of the cases in the training set showed not to be statistically significant.
5 Conclusions
In this paper we presented a novel method for learning BBNs from data based on the use of artificial neural networks as probability distribution estimators. As a preliminary evaluation, we have compared the performance of the new algorithm with the
performance of K2, a well established learning algorithm for discrete domains, for
which extensive empirical evaluation is available [1,7]. With regard to the learning
accuracy of the new method, the results are encouraging, being comparable to stateof-the-art results for the chosen domain. The next step is the application of this
method to domains where current techniques for learning BBNs from data are not
applicable, namely domains with continuous variables not normally distributed, and
domains with mixtures of continuous and discrete variables. The main drawback of
the new algorithm is its time requirements. However, in this preliminary evaluation,
our main concern was the learning accuracy of the algorithm, and little effort was
spent in trying to optimize its time requirements. We believe there is ample room
for improvement in the time performance of the algorithm. More importantly, the
scoring metric of Section 3 provides a general framework for experimenting with
different classes of probability estimators. In this paper we used ANN estimators,
but more efficient estimators can easily be adopted, especially if we assume the
availability of prior information on the class of probability distributions to be used.
Acknowledgments
This work was funded by grant IRI-9509792 from the National Science Foundation.
References
[1] C. Aliferis and G. F. Cooper. An evaluation of an algorithm for inductive learning
of Bayesian belief networks using simulated data sets. In Proceedings of the 10th
Conference of Uncertainty in AI, pages 8-14, San Francisco, California, 1994.
[2] I. Beinlich, H. Suermondt, H. Chavez, and G. Cooper. The ALARM monitoring
system: A case study with two probabilistic inference techniques for belief networks.
In 2nd Conference of AI in Medicine Europe, pages 247- 256, London, England, 1989.
[3] C. Bishop. Neural Networks for Pattern Recognition. Oxford University Press, 1995.
[4] R. Bouckaert. Properties of learning algorithms for Bayesian belief networks. In Proceedings of the 10th Conference of Uncertainty in AI, pages 102-109, 1994.
[5] W. Buntine. A guide to the literature on learning probabilistic networks from data.
IEEE Transactions on Knowledge and Data Engineering, 1996. To appear.
[6] D. Chickering, D. Geiger, and D. Heckerman. Learning Bayesian networks: search
methods and experimental results. Proc. 5th Workshop on AI and Statistics, 1995 .
[7] G. Cooper and E. Herskovits. A Bayesian method for the induction of probabilistic
networks from data. Machine Learning, 9:309-347, 1992.
[8] A. Dawid. Present position and potential developments: Some personal views. Statistical theory. The prequential approach. Journal of Royal Statistical Society A,
147:278-292, 1984.
[9] D. Geiger and D. Heckerman. Learning Gaussian networks. Technical Report MSRTR-94-10, Microsoft Research, One Microsoft Way, Redmond, WA 98052, 1994.
[10] D. Heckerman, D. Geiger, and D. Chickering. Learning Bayesian networks: the combination of knowledge and statistical data. Machine Learning, 1995 .
[11] R. Hofmann and V. Tresp. Discovering structure in continuous variables using
Bayesian networks. In Advances in NIPS 8. MIT Press, 1995.
[12] M. Moller. A scaled conjugate gradient algorithm for fast supervised learning. Neural
Networks, 6:525-533, 1993.
[13] J. Pearl. Probabilistic Reasoning in Intelligent Systems: networks of plausible inference. Morgan Kaufman Publishers, Inc., 1988.
234 | 1,212 | Selective Integration: A Model for
Disparity Estimation
Michael S. Gray, Alexandre Pouget, Richard S. Zemel,
Steven J. Nowlan, Terrence J. Sejnowski
Departments of Biology and Cognitive Science
University of California, San Diego
La Jolla, CA 92093
and
Howard Hughes Medical Institute
Computational Neurobiology Lab
The Salk Institute, P. O. Box 85800
San Diego, CA 92186-5800
Email: michael, alex, zemel, nowlan, terry@salk.edu
Abstract
Local disparity information is often sparse and noisy, which creates
two conflicting demands when estimating disparity in an image region: the need to spatially average to get an accurate estimate, and
the problem of not averaging over discontinuities. We have developed a network model of disparity estimation based on disparityselective neurons, such as those found in the early stages of processing in visual cortex. The model can accurately estimate multiple
disparities in a region, which may be caused by transparency or occlusion, in real images and random-dot stereograms. The use of a
selection mechanism to selectively integrate reliable local disparity
estimates results in superior performance compared to standard
back-propagation and cross-correlation approaches. In addition,
the representations learned with this selection mechanism are consistent with recent neurophysiological results of von der Heydt,
Zhou, Friedman, and Poggio [8] for cells in cortical visual area V2.
Combining multi-scale biologically-plausible image processing with
the power of the mixture-of-experts learning algorithm represents
a promising approach that yields both high performance and new
insights into visual system function.
1 INTRODUCTION
In many stereo algorithms, the local correlation between images from the two eyes
is used to estimate relative depth (Jain, Kasturi, & Schunck [5]). Local correlation
measures, however, convey no information about the reliability of a particular disparity measurement. In the model presented here, we introduce a separate selection
mechanism to determine which locations of the visual input have consistent disparity information. The focus was on several challenging viewing situations in which
disparity estimation is not straightforward. For example, can the model estimate
the disparity of more than one object in a scene? Does occlusion lead to poorer
disparity estimation? Can the model determine the disparities of two transparent
surfaces? Does the model estimate accurately the disparities present in real world
images? Datasets corresponding to these different conditions were generated and
used to test the model.
Our goal is to develop a neurobiologically plausible model of stereopsis that accurately estimates disparity. Compared to traditional cross-correlation approaches
that try to compute a depth map for all locations in space, the mixture-of-experts
model used here searches for sparse, reliable patterns or configurations of disparity
stimuli that provide evidence for objects at different depths. This allows partial
segmentation of the image to obtain a more compact representation of disparities.
Local disparity estimates are sufficient in this case, as long as we selectively segment
those regions of the image with reliable disparity information.
The rest of the paper is organized as follows. First, we describe the architecture of
the mixture-of-experts model. Second, we provide a brief qualitative description of
the model's performance followed by quantitative results on a variety of datasets.
In the third section, we compare the activity of units in the model to recent neurophysiological data. Finally, we discuss these findings, and consider remaining open
questions.
2 MIXTURE-OF-EXPERTS MODEL
The model of stereopsis that we have explored is based on the filter model for motion
detection devised by Nowlan and Sejnowski [6]. The motion problem was readily
adapted to stereopsis by changing the time domain of motion to the left/right
image domain for stereopsis. Our model (Figure 1) consisted of several stages
and computed its output using only feed-forward processing, as described below
(see also Gray, Pouget, Zemel, Nowlan, and Sejnowski [2] for more detail). The
output of the first stage (disparity energy filters) became the input to two different
primary pathways: (1) the local disparity networks, and (2) the selection networks.
The activation of each of the four disparity-tuned output units in the model was
the product of the outputs of the two primary pathways (summed across space).
An objective function based on the mixture-of-experts framework (Jacobs, Jordan,
Nowlan, & Hinton [4]) was used to optimize the weights from the disparity energy
units to the local disparity networks and to the selection networks. The weights to
the output units from the local disparity and selection pathways were fixed at 1.0.
Once the model was trained, we obtained a scalar disparity estimate from the model
by computing a nonlinear least squares Gaussian fit to the four output values. The
mean of the Gaussian was our disparity estimate. When two objects were present
[Figure 1 here: schematic of the model; the left- and right-eye retinae feed disparity energy filters at low, medium, and high spatial frequencies, whose outputs drive the local disparity networks and the selection networks, with competition among units.]
Figure 1: The mixture-of-experts architecture.
in the input, we fit the sum of two Gaussians to the four output values.
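A sketch of the single-object readout step (nonlinear least-squares Gaussian fit to the four disparity-tuned outputs, with the fitted mean taken as the disparity estimate) is given below; the disparity tunings and initial guesses are our own choices for the example.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(d, amp, mean, sigma):
    return amp * np.exp(-0.5 * ((d - mean) / sigma) ** 2)

def disparity_estimate(outputs, tuned_disparities=(0.0, 1.0, 2.0, 3.0)):
    """Fit a Gaussian to the four disparity-tuned output activations
    and return its mean as the scalar disparity estimate."""
    d = np.asarray(tuned_disparities, dtype=float)
    y = np.asarray(outputs, dtype=float)
    p0 = (y.max(), d[np.argmax(y)], 1.0)      # amplitude, mean, width
    (amp, mean, sigma), _ = curve_fit(gaussian, d, y, p0=p0)
    return mean
```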
2.1
DISPARITY ENERGY FILTERS
The retinal layer in the model consisted of one-dimensional right eye and left eye
images, each 82 pixels in length. These images were the input to disparity energy
filters, as developed by Ohzawa, DeAngelis, and Freeman [7]. At the energy filter
layer, there were 51 receptive field (RF) locations which received input from partially
overlapping regions of the retinae. At each of these RF locations, there were 30
energy units corresponding to 10 phase differences at 3 spatial frequencies. These
phase differences were proportional to disparity. An energy unit consisted of 4
energy filter pairs, each of which was a Gabor filter. The outputs of the disparity
energy units were normalized at each RF location and within each spatial frequency
using a soft-max nonlinearity.
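A simplified one-dimensional binocular energy unit in the spirit of Ohzawa, DeAngelis, and Freeman [7] could be sketched as below; the quadrature-pair construction and the particular Gabor parameters are standard textbook choices assumed for the illustration, not taken from the paper.

```python
import numpy as np

def gabor(x, freq, phase, sigma):
    return np.exp(-0.5 * (x / sigma) ** 2) * np.cos(2 * np.pi * freq * x + phase)

def binocular_energy(left_patch, right_patch, freq, dphase, sigma):
    """Disparity energy for one phase difference dphase between the
    left- and right-eye Gabor receptive fields (quadrature pair)."""
    x = np.linspace(-1, 1, left_patch.size)
    resp = 0.0
    for base in (0.0, np.pi / 2):             # quadrature pair
        l = np.dot(gabor(x, freq, base, sigma), left_patch)
        r = np.dot(gabor(x, freq, base + dphase, sigma), right_patch)
        resp += (l + r) ** 2                  # squared binocular simple-cell response
    return resp
```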
2.2
LOCAL DISPARITY NETWORKS
In the local disparity pathway, there were 8 RF locations, and each received a
weighted input from 9 disparity energy locations. Each RF location corresponded
to a local disparity network and contained a pool of 4 disparity-tuned units. Neighboring locations received input from overlapping sets of disparity energy units.
Weights were shared across all RF locations for each disparity. Soft-max competition occurred within each local disparity network (across disparity), and insured
that only one disparity was strongly activated at each RF location.
2.3
SELECTION NETWORKS
Like the local disparity networks, the selection networks were organized into a grid
of 8 RF locations with a pool of 4 disparity-tuned units at each location. These 4
units represented the local support for each of the different disparity hypotheses. It
is more useful to think of the selection networks, however, as 4 separate layers each
of which responded to a specific disparity across all regions of the image. Like the
local disparity pathway, neighboring RF locations received input from overlapping
disparity energy units, and weights were shared across space for each disparity.
In addition, the outputs of the selection network were normalized with the softmax operation. This competition, however, occurred separately for each of the 4
disparities in a global fashion across space.
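The two normalizations differ only in the axis over which the soft-max is taken, and the final gating step reproduces the product-and-sum combination described for the output units; a sketch over a (locations × disparities) activation array might look as follows (the unit soft-max temperature is an assumption).

```python
import numpy as np

def softmax(z, axis):
    z = z - z.max(axis=axis, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def combine_pathways(local_act, select_act):
    """local_act, select_act: arrays of shape (n_locations, n_disparities) = (8, 4)."""
    # Local disparity networks compete across disparity at each location.
    local_out = softmax(local_act, axis=1)
    # Selection networks compete across space, separately per disparity.
    select_out = softmax(select_act, axis=0)
    # Output units: selection-gated local estimates, summed over space.
    return (local_out * select_out).sum(axis=0)   # shape (n_disparities,)
```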
3 RESULTS
Figure 2 shows the pattern of activations in the model when presented with a single
object at a disparity of 2.1 pixels. The visual layout of the model in this figure
is identical to the layout in Figure 1. The stimulus appears at bottom, with the
3 disparity energy filter banks directly above it. On the left above the disparity
energy filters are the local disparity networks. The selection networks are on the
right. The summed output (across space) appears in the upper right corner of the
figure. Note that the selection network for a 2 pixel disparity (2nd row from the
bottom in the selection pathway) is active for the spatial location at far left. The
corresponding location is also highly active in the local disparity pathway, and this
combination leads to strong activation for a 2 pixel disparity in the output of the
model.
The mixture-of-experts model was optimized individually on a variety of different
datasets and then tested on novel stimuli from the same datasets. The model's
ability to discriminate among different disparities was quantified as the disparity
threshold - the disparity difference at which one can correctly see a difference in
depth 75% of the time. Disparity thresholds for the test stimuli were computed using signal-detection theory (Green & Swets [3]). Sample stimuli and their disparity
thresholds are shown in Table 1. The model performed best on single object stimuli
(top row). This disparity threshold (0.23 pixels) was substantially less than the
input resolution of the model (1 pixel) and was thus exhibiting stereo hyperacuity.
The model also performed well when there were multiple, occluding objects (2nd
row). When both the stimulus and the background were generated from a uniform random distribution, the disparity threshold rose to 0.55 pixels. The model
estimated disparity accurately in random-dot stereograms and real world images.
Binary stereograms containing two transparent surfaces, however, were a challenging stimulus, and the threshold rose to 0.83 pixels. Part of the difficulty with this
stimulus (containing two objects) was fitting the sum of 2 Gaussians to 4 data
points.
We have compared our mixture-of-experts model (containing both a selection pathway and a local disparity pathway) with standard back-propagation and cross-correlation techniques (Gray et al [2]). The primary difference is that the back-propagation and cross-correlation models have no separate selection mechanism. In
essence, one mechanism must compute both the segmentation and the disparity
estimation. In our tests with the back-propagation model, we found that disparity
thresholds for single object stimuli had risen by a factor of 3 (to 0.74 pixels) compared to the mixture-of-experts model. Disparity estimation of the cross-correlation
model was similarly poor. Thresholds rose by a factor of 2 (compared to the mixture-of-experts model) for both single object stimuli and the noise stimuli (threshold = 0.46, 1.28 pixels, respectively).
[Figure 2 here: panels showing, from bottom to top, the input stimulus; the low-, medium-, and high-spatial-frequency disparity energy outputs; the local disparity networks (left) and selection networks (right); and the desired, per-location, and summed actual outputs, each annotated with its minimum and maximum activation.]
Figure 2: The activity in the mixture-of-experts model in response to an input
stimulus containing a single object at a disparity of 2.10 pixels. At bottom is the
input stimulus. The 3 regions in the middle represent the output of the disparity
energy filters. Above the disparity energy output are the two pathways of the model.
The local disparity networks appear to the left and the selection networks are to
the right. Both the local disparity networks and the selection networks receive
topographically organized input from the disparity energy filters. The selection and
local disparity networks are displayed so that the top row represents a disparity
of 0 pixels, the next row a 1 pixel disparity, then 2 and 3 pixel disparities in the
remaining rows. At the top left part of the figure is the desired output for the given
input stimulus. In the top middle is the output for each local region of space. On
the top right is the actual output of the model collapsed across space. The numbers
at the bottom left of each part of the network indicate the maximum and minimum
activation values within that part. White indicates maximum activation level, black
is minimum.
[Sample stimulus images not reproduced.]

Stimulus Type   Threshold
Single          0.23
Double          0.41
Noise           0.55
Random-Dot      0.36
Transparent     0.83
Real            0.30
Table 1: Sample stimuli for each of the datasets, and corresponding disparity thresholds (in pixels) for the mixture-of-experts model.
4 COMPARISON WITH NEUROPHYSIOLOGICAL DATA
To gain insight into the response properties of the selection units in our model, we mapped their activations as a function of space and disparity. Specifically, we measured the activation of a unit as a single high-contrast edge was moved across the spatial extent of the receptive field. At each spatial location, we tested all possible disparities. An example of this mapping is shown in Figure 3. This selection unit is sensitive to changes in disparity as we move across space. We refer to this property as disparity contrast. In other words, the selection unit learned that a reliable indicator for a given disparity is a change in disparity across space.

[Figure 3 here: activation of selection unit 1 plotted against spatial position; vertical axis from 0 to 0.8.]
Figure 3: Selection unit activity.

This type of detector can be behaviorally significant, because disparity contrast may play a role in signaling object boundaries. These selection units could thus provide valuable information in the construction of a 3-D model of the world. Recent neurophysiological studies by von der Heydt, Zhou, Friedman, and Poggio [8] are consistent with this interpretation. They found that neurons of awake, behaving monkeys in area V2 responded to edges of 4° by 4° random-dot stereograms. Because random-dot stereograms have no monocular form cues, these neurons must be responding to edges in depth. This sensitivity to edges in a depth map corresponds directly to the response profile of the selection units.
5 DISCUSSION
A major difficulty in estimating the disparities of objects in a visual scene in realistic circumstances (i.e., with clutter, transparency, occlusion, noise) is knowing
which cues are most reliable and should be integrated, and which regions have ambiguous or unreliable information. Nowlan and Sejnowski [6] found that selection
units learned to respond strongly to image regions that contained motion energy
in several different directions. The role of those selection units is similar to layered
analysis techniques for computing support maps in the motion domain (Darrell &
Pentland [1]). The operation of the dual pathways in our model bears some similarities to the pathways developed in the motion model of Nowlan and Sejnowski [6].
In the stereo domain, we have found that our selection units develop into edge
detectors on a disparity map. They thus responded to regions rich in disparity information, analogous to the salient motion information captured in the motion [6]
selection units.
We have also found that the model matches psychophysical data recorded by Westheimer and McKee [9] on the effects of spatial frequency filtering on disparity
thresholds (Gray et al [2]). They found, in human psychophysical experiments,
that disparity thresholds increased for any kind of spatial frequency filtering of line
targets. In particular, disparity sensitivity was more adversely affected by high-pass
filtering than by low-pass filtering.
In summary, we propose that the functional division into local response and selection
represents a general principle for image interpretation and analysis that may be
applicable to many different visual cues, and also to other sensory domains. In our
approach to this problem, we utilized a multi-scale neurophysiologically-realistic
implementation of binocular cells for the input, and then combined it with a neural
network model to learn reliable cues for disparity estimation.
References
[1] T. Darrell and A.P. Pentland. Cooperative robust estimation using layers of
support. IEEE Transactions on Pattern Analysis and Machine Intelligence,
17(5):474-87,1995.
[2] M.S. Gray, A. Pouget, R.S. Zemel, S.J. Nowlan, and T .J. Sejnowski. Reliable
disparity estimation through selective integration. INC Technical Report 9602,
Institute for Neural Computation, University of California, San Diego, 1996.
[3] D.M. Green and J.A. Swets. Signal Detection Theory and Psychophysics. John
Wiley and Sons, New York, 1966.
[4] R.A. Jacobs, M.I. Jordan, S.J. Nowlan, and G.E. Hinton. Adaptive mixtures of
local experts. Neural Computation, 3:79-87, 1991.
[5] R. Jain, R. Kasturi, and B.G. Schunck. Machine Vision. McGraw-Hill, New
York, 1995.
[6] S.J. Nowlan and T.J. Sejnowski. Filter selection model for motion segmentation and velocity integration. Journal of the Optical Society of America A,
11(12):3177-3200, 1994.
[7] I. Ohzawa, G.C. DeAngelis, and R.D. Freeman. Stereoscopic depth discrimination in the visual cortex: Neurons ideally suited as disparity detectors. Science, 249:1037-1041, 1990.
[8] R. von der Heydt, H. Zhou, H. Friedman, and G.F. Poggio. Neurons of area V2
of visual cortex detect edges in random-dot stereograms. Soc. Neurosci. Abs.,
21:18, 1995.
[9] G. Westheimer and S.P. McKee. Stereoscopic acuity with defocus and spatially
filtered retinal images. Journal of the Optical Society of America, 70:772-777,
1980.
235 | 1,213 | On the Effect of Analog Noise in
Discrete-Time Analog Computations
Pekka Orponen
Department of Mathematics
University of Jyväskylä†
Wolfgang Maass
Institute for Theoretical Computer Science
Technische Universität Graz*
Abstract
We introduce a model for noise-robust analog computations with
discrete time that is flexible enough to cover the most important
concrete cases, such as computations in noisy analog neural nets
and networks of noisy spiking neurons. We show that the presence
of arbitrarily small amounts of analog noise reduces the power of
analog computational models to that of finite automata, and we
also prove a new type of upper bound for the VC-dimension of
computational models with analog noise.
1
Introduction
Analog noise is a serious issue in practical analog computation. However there exists
no formal model for reliable computations by noisy analog systems which allows us
to address this issue in an adequate manner. The investigation of noise-tolerant
digital computations in the presence of stochastic failures of gates or wires had been
initiated by [von Neumann, 1956]. We refer to [Cowan, 1966] and [Pippenger, 1989]
for a small sample of the numerous results that have been achieved in this direction.
In all these articles one considers computations which produce a correct output not
with perfect reliability, but with probability ≥ 1/2 + ρ (for some parameter ρ ∈ (0, 1/2]).
The same framework (with stochastic failures of gates or wires) has been applied
to analog neural nets in [Siegelmann, 1994].
The abovementioned approaches are insufficient for the investigation of noise in
analog computations, because in analog computations one has to be concerned not
only with occasional total failures of gates or wires, but also with "imprecision", i.e.
with omnipresent smaller (and occasionally larger) perturbations of analog outputs
* Klosterwiesgasse 32/2, A-8010 Graz, Austria. E-mail: maass@igi.tu-graz.ac.at.
† P. O. Box 35, FIN-40351 Jyväskylä, Finland. E-mail: orponen@math.jyu.fi. Part of
this work was done while this author was at the University of Helsinki, and during visits
to the Technische Universitat Graz and the University of Chile in Santiago.
of internal computational units. These perturbations may for example be given
by Gaussian distributions. Therefore we introduce and investigate in this article
a notion of noise-robust computation by noisy analog systems where we assume
that the values of intermediate analog values are moved according to some quite
arbitrary probability distribution. We consider - as in the traditional framework for
noisy digital computations - arbitrary computations whose output is correct with
some given probability ≥ 1/2 + ρ (for ρ ∈ (0, 1/2]). We will restrict our attention to
analog computation with digital output. Since we impose no restriction (such as
continuity) on the type of operations that can be performed by computational units
in an analog computational system, an output unit of such system can convert an
analog value into a binary output via "thresholding".
Our model and our Theorem 3.1 are somewhat related to the analysis of probabilistic
finite automata in [Rabin, 1963]. However there the finiteness of the state space
simplifies the setup considerably. [Casey, 1996] addresses the special case of analog
computations on recurrent neural nets (for those types of analog noise that can
move an internal state at most over a distance c) whose digital output is perfectly
reliable (i.e. ρ = 1/2 in the preceding notation).¹
The restriction to perfect reliability in [Casey, 1996] has immediate consequences
for the types of analog noise processes that can be considered, and for the types of
mathematical arguments that are needed for their investigation. In a computational
model with perfect reliability of the output it cannot happen that an intermediate
state ξ occurs at some step t both in a computation for an input x that leads to
output "0", and at step t in a computation for the same input x that leads to
output "1". Hence an analysis of perfectly reliable computations can focus on partitions of intermediate states ξ according to the computations and the computation
steps where they may occur.
Apparently many important concrete cases of noisy analog computations require a
different type of analysis. Consider for example the special case of a sigmoidal neural
net (with thresholding at the output), where for each input the output of an internal
noisy sigmoidal gate is distributed according to some Gaussian distribution (perhaps
restricted to the range of all possible output values which this sigmoidal gate can
actually produce). In this case an intermediate state ξ of the computational system
is a vector of values which have been produced by these Gaussian distributions.
Obviously each such intermediate state ξ can occur at any fixed step t in any
computation (in particular in computations with different network output for the
same network input). Hence perfect reliability of the network output is unattainable
in this case. For an investigation of the actual computational power of a sigmoidal
neural net with Gaussian noise one has to drop the requirement of perfect reliability
of the output, and one has to analyze how probable it is that a particular network
output is given, and how probable it is that a certain intermediate state is assumed.
Hence one has to analyze for each network input and each step t the different
¹There are relatively few examples for nontrivial computations on common digital or
analog computational models that can achieve perfect reliability of the output in spite of
noisy internal components. Most constructions of noise-robust computational models rely
on the replication of noisy computational units (see [von Neumann, 1956], [Cowan, 1966]).
The idea of this method is that the average of the outputs of k identical noisy computational
units (with stochastically independent noise processes) is with high probability close to the
expected value of their output, if k is sufficiently large. However for any value of k there
exists in general a small but nonzero probability that this average deviates strongly from
its expected value. In addition, if one assumes that the computational unit that produces
the output of the computations is also noisy, one cannot expect that the reliability of the
output of the computation is larger than the reliability of this last computational unit.
Consequently there exist many methods for reducing the error-probability of the output
to a small value, but these methods cannot achieve error probability 0 at the output.
probability distributions over intermediate states ξ that are induced by computations
of the noisy analog computational system. In fact, one may view the set of these
probability distributions over intermediate states ξ as a generalized set of "states" of
a noisy analog computational system. In general the mathematical structure of this
generalized set of "states" is substantially more complex than that of the original
set of intermediate states ξ. We have introduced in [Maass, Orponen, 1996] some
basic methods for analyzing this generalized set of "states", and the proofs of the
main results in this article rely on this analysis.
The preceding remarks may illustrate that if one drops the assumption of perfect
reliability of the output, it becomes more difficult to prove upper bounds for the
power of noisy analog computations. We prove such upper bounds even for the case
of stochastic dependencies among noises for different internal units, and for the case
of nonlinear dependencies of the noise on the current internal state. Our results also
cover noisy computations in hybrid analog/digital computational models, such as for
example a neural net combined with a binary register, or a network of noisy spiking
neurons where a neuron may temporarily assume the discrete state "not-firing".
Obviously it becomes quite difficult to analyze the computational effect of such
complex (but practically occurring) types of noise without a rigorous mathematical
framework. We introduce in section 2 a mathematical framework that is general
enough to subsume all these cases. The traditional case of noisy digital computations
is captured as a special case of our definition.
One goal of our investigation of the effect of analog noise is to find out which features
of analog noise have the most detrimental effect on the computational power of an
analog computational system. This turns out to be a nontrivial question.² As a
first step towards characterizing those aspects and parameters of analog noise that
have a strong impact on the computational power of a noisy analog system, the
proof of Theorem 3.1 (see [Maass, Orponen, 1996]) provides an explicit bound on
the number of states of any finite automaton that can be implemented by an analog
computational system with a given type of analog noise. It is quite surprising to
see on which specific parameters of the analog noise the bound depends. Similarly
the proofs of Theorem 3.4 and Theorem 3.5 provide explicit (although very large)
upper bounds for the VC-dimension of noisy analog neural nets with batch input,
which depend on specific parameters of the analog noise.
2
Preliminaries: Definitions and Examples
An analog discrete-time computational system (briefly: computational system) M
is defined in a general way as a 5-tuple (Ω, p⁰, F, Σ, s), where Ω, the set of states,
is a bounded subset of R^d, p⁰ ∈ Ω is a distinguished initial state, F ⊆ Ω is the
set of accepting states, Σ is the input domain, and s : Ω × Σ → Ω is the transition
function. To avoid unnecessary pathologies, we impose the conditions that Ω and
F are Borel subsets of R^d, and for each a ∈ Σ, s(p, a) is a measurable function of
p. We also assume that Σ contains a distinguished null value u, which may be used
to pad the actual input to arbitrary length. The nonnull input domain is denoted
by Σ₀ = Σ − {u}.
²For example, one might think that analog noise which is likely to move an internal
state over a large distance is more harmful than another type of analog noise which keeps
an internal state within its neighborhood. However this intuition is deceptive. Consider
the extreme case of analog noise in a sigmoidal neural net which moves a gate output
x ∈ [−1, 1] to a value in the ε-neighborhood of −x. This type of noise moves some values
x over large distances, but it appears to be less harmful for noise-robust computing than
noise which moves x to an arbitrary value in the 10ε-neighborhood of x.
On the Effect ofAnalog Noise in Discrete-Time Analog Computations
221
The intended noise-free dynamics of such a system M is as follows. The system
starts its computation in state p⁰, and on each single computation step on input
element a ∈ Σ₀ moves from its current state p to its next state s(p, a). After
the actual input sequence has been exhausted, M may still continue to make pure
computation steps. Each pure computation step leads it from a state p to the state
s(p, u). The system accepts its input if it enters a state in the class F at some point
after the input has finished.
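The following is a minimal Python sketch of this noise-free dynamics; the function names and the concrete representation of states and inputs are illustrative assumptions, not part of the formal model.

from typing import Any, Callable, Sequence

def run_noise_free(s: Callable[[Any, Any], Any], p0: Any, inputs: Sequence[Any],
                   in_F: Callable[[Any], bool], null_value: Any = None,
                   pure_steps: int = 0) -> bool:
    # One transition per input element a in Sigma_0.
    p = p0
    for a in inputs:
        p = s(p, a)
    # Optional pure computation steps on the null value u.
    for _ in range(pure_steps):
        p = s(p, null_value)
    # Accept iff the final state lies in the accepting set F.
    return in_F(p)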
For instance, the recurrent analog neural net model of [Siegelmann, Sontag, 1991]
(also known as the "Brain State in a Box" model) is obtained from this general
framework as follows. For a network N with d neurons and activation values between −1 and 1, the state space is Ω = [−1, 1]^d. The input domain may be chosen
as either Σ = R or Σ = {−1, 0, 1} (for "online" input) or Σ = R^n (for "batch"
input).
Feedforward analog neural nets may also be modeled in the same manner, except
that in this case one may wish to select as the state set Ω := ([−1, 1] ∪ {dormant})^d,
where dormant is a distinguished value not in [-1, 1]. This special value is used
to indicate the state of a unit whose inputs have not all yet been available at the
beginning of a given computation step (e.g. for units on the l-th layer of a net at
computation steps t < l).
The completely different model of a network of m stochastic spiking neurons (see
e.g. [Maass, 1997]) is also a special case of our general framework.³
Let us then consider the effect of noise in a computational system M. Let Z(p, B)
be a function that for each state p ∈ Ω and Borel set B ⊆ Ω indicates the probability
of noise moving state p to some state in B. The function Z is called the noise process
affecting M, and it should satisfy the mild conditions of being a stochastic kernel,
i.e., for each p ∈ Ω, Z(p, ·) should be a probability distribution, and for each Borel
set B, Z(·, B) should be a measurable function.
We assume that there is some measure μ over Ω so that Z(p, ·) is absolutely continuous with respect to μ for each p ∈ Ω, i.e. μ(B) = 0 implies Z(p, B) = 0 for every
measurable B ⊆ Ω. By the Radon-Nikodym theorem, Z then possesses a density
kernel with respect to μ, i.e. there exists a function z(·, ·) such that for any state
p ∈ Ω and Borel set B ⊆ Ω, Z(p, B) = ∫_{q∈B} z(p, q) dμ.
We assume that this function z(·, ·) has values in [0, ∞) and is measurable. (Actually, in view of our other conditions this can be assumed without loss of generality.)
The dynamics of a computational system M affected by a noise process Z is now
defined as follows.⁴ If the system starts in a state p, the distribution of states q
obtained after a single computation step on input a ∈ Σ is given by the density
kernel π_a(p, q) = z(s(p, a), q). Note that as a composition of two measurable functions, π_a is again a measurable function. The long-term dynamics of the system
is given by a Markov process, where the distribution π_{xa}(p, q) of states after |xa|
computation steps with input xa ∈ Σ* starting in state p is defined recursively by
π_{xa}(p, q) = ∫_{r∈Ω} π_x(p, r) · π_a(r, q) dμ.
³In this case one wants to set Ω_sp := (⋃_{j=1}^{l} [0, T)^j ∪ {not-firing})^m, where T > 0 is
a sufficiently large constant so that it suffices to consider only the firing history of the
network during a preceding time interval of length T in order to determine whether a
neuron fires (e.g. T = 30 ms for a biological neural system). If one partitions the time
axis into discrete time windows [0, T), [T, 2T), ..., then in the noise-free case the firing
events during each time window are completely determined by those in the preceding one.
A component p_i ∈ [0, T)^j of a state in this set Ω_sp indicates that the corresponding neuron
i has fired exactly j times during the considered time interval, and it also specifies the j
firing times of this interval. Due to refractory effects one can choose
l < ∞ for biological neural systems, e.g. l = 15 for T = 30 ms. With some straightforward
formal operations one can also write this state set Ω_sp as a bounded subset of R^d for
d := l · m.
⁴We would like to thank Peter Auer for helpful conversations on this topic.
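Since this density recursion is rarely tractable in closed form, a natural sanity check is a Monte Carlo sketch: sample the noise kernel Z after each transition and estimate the acceptance probability mass that π_x places on F. The callables sample_noise and in_F below are illustrative assumptions.

import numpy as np

def estimate_acceptance(s, sample_noise, p0, inputs, in_F,
                        n_runs=10000, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    hits = 0
    for _ in range(n_runs):
        p = p0
        for a in inputs:
            p = sample_noise(s(p, a), rng)  # one noisy computation step
        hits += bool(in_F(p))
    # Approximates the integral of pi_x over F; the system accepts with
    # reliability rho if this exceeds 1/2 + rho.
    return hits / n_runs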
Let us denote by π_x(q) the distribution π_x(p⁰, q), i.e. the distribution of states of
M after it has processed string x, starting from the initial state p⁰. Let ρ > 0
be the required reliability level. In the most basic version the system M accepts
(rejects) some input x ∈ Σ₀* if ∫_F π_x(q) dμ ≥ 1/2 + ρ (respectively ≤ 1/2 − ρ). In less
trivial cases the system may also perform pure computation steps after it has read
all of the input. Thus, we define more generally that the system M recognizes a set
L ⊆ Σ₀* with reliability ρ if for any x ∈ Σ₀*:
x ∈ L ⇔ ∫_F π_{xu}(q) dμ ≥ 1/2 + ρ for some u ∈ {u}*
x ∉ L ⇔ ∫_F π_{xu}(q) dμ ≤ 1/2 − ρ for all u ∈ {u}*.
This covers also the case of batch input, where |x| = 1 and Σ₀ is typically quite
large (e.g. Σ₀ = R^n).
3
Results
The proofs of Theorems 3.1, 3.4, 3.5 require a mild continuity assumption for the
density functions z(r, ·), which is satisfied in all concrete cases that we have examined. We do not require any global continuity property over Ω for the density functions z(r, ·) because there are important special cases (see [Maass, Orponen, 1996]),
where the state space Ω is a disjoint union of subspaces Ω₁, ..., Ω_k with different
measures on each subspace. We only assume that for some arbitrary partition of
Ω into Borel sets Ω₁, ..., Ω_k the density functions z(r, ·) are uniformly continuous
over each Ω_j, with moduli of continuity that can be bounded independently of r.
In other words, we require that z(·, ·) satisfies the following condition:
We call a function π(·, ·) from Ω² into R piecewise uniformly continuous if for every
ε > 0 there is a δ > 0 such that for every r ∈ Ω, and for all p, q ∈ Ω_j, j = 1, ..., k:
||p − q|| ≤ δ implies |π(r, p) − π(r, q)| ≤ ε.   (1)
If z(·, ·) satisfies this condition, we say that the resulting noise process Z is piecewise
uniformly continuous.
Theorem 3.1 Let L ⊆ Σ₀* be a set of sequences over an arbitrary input domain
Σ₀. Assume that some computational system M, affected by a piecewise uniformly
continuous noise process Z, recognizes L with reliability ρ, for some arbitrary ρ > 0.
Then L is regular.
The proof of Theorem 3.1 relies on an analysis of the space of probability density
functions over the state set Ω. An upper bound on the number of states of a deterministic finite automaton that simulates M can be given in terms of the number
k of components Ω_j of the state set Ω, the dimension and diameter of Ω, a bound
on the values of the noise density function z, and the value of δ for ε = ρ/4μ(Ω) in
condition (1). For details we refer to [Maass, Orponen, 1996].⁵ ∎
⁵A corresponding result is claimed in Corollary 3.1 of [Casey, 1996] for the special case
of recurrent neural nets with bounded noise and ρ = 1/2, i.e. for certain computations
with perfect reliability. This case may not require the consideration of probability density
functions. However it turns out that the proof for this special case in [Casey, 1996] is wrong.
The proof of Corollary 3.1 in [Casey, 1996] relies on the argument that a compact set "can
contain only a finite number of disjoint sets with nonempty interior". This argument is
wrong, as the counterexample of the intervals [1/(2i + 1), 1/2i] for i = 1, 2, ... shows.
These infinitely many disjoint intervals are all contained in the compact set [0, 1]. In
addition, there is an independent problem with the structure of the proof of Corollary 3.1
in [Casey, 1996]. It is derived as a consequence of the proof of Theorem 3.1 in [Casey, 1996].
However that proof relies on the assumption that the recurrent neural net accepts a regular
language. Hence the proof via probability density functions in [Maass, Orponen, 1996]
provides the first valid proof for the claim of Corollary 3.1 in [Casey, 1996].
Remark 3.2 In stark contrast to the results of [Siegelmann, Sontag, 1991] and
[Maass, 1996] for the noise-free case, the preceding Theorem implies that both recurrent analog neural nets and recurrent networks of spiking neurons with online
input from Σ₀ can only recognize regular languages in the presence of any reasonable
type of analog noise, even if their computation time is unlimited and if they employ
arbitrary real-valued parameters.
Let us say that a noise process Z defined on a set Ω ⊆ R^d is bounded by η if it can
move a state p only to other states q that have a distance ≤ η from p in the L¹-norm
over R^d, i.e. if its density kernel z has the property that for any p = (p₁, ..., p_d)
and q = (q₁, ..., q_d) ∈ Ω, z(p, q) > 0 implies that |q_i − p_i| ≤ η for i = 1, ..., d.
Obviously η-bounded noise processes are a very special class. However they provide
an example which shows that the general upper bound of Theorem 3.1 is in a sense
optimal:
Theorem 3.3 For every regular language L ⊆ {−1, 1}* there is a constant η > 0
such that L can be recognized with perfect reliability (i.e. ρ = 1/2) by a recurrent
analog neural net in spite of any noise process Z bounded by η. ∎
We now consider the effect of analog noise on discrete time analog computations
with batch-input. The proofs of Theorems 3.4 and 3.5 are quite complex (see
[Maass, Orponen, 1996]).
Theorem 3.4 There exists a finite upper bound for the VC-dimension of layered feedforward sigmoidal neural nets and feedforward networks of spiking neurons
with piecewise uniformly continuous analog noise (for arbitrary real-valued inputs,
Boolean output computed with some arbitrary reliability ρ > 0, and arbitrary real-valued "programmable parameters") which does not depend on the size or structure
of the network beyond its first hidden layer. ∎
Theorem 3.5 There exists a finite upper bound for the VC-dimension of recurrent
sigmoidal neural nets and networks of spiking neurons with piecewise uniformly continuous analog noise (for arbitrary real-valued inputs, Boolean output computed with
some arbitrary reliability ρ > 0, and arbitrary real-valued "programmable parameters") which does not depend on the computation time of the network, even if the
computation time is allowed to vary for different inputs. ∎
4
Conclusions
We have introduced a new framework for the analysis of analog noise in discrete-time analog computations that is better suited for "real-world" applications and
more flexible than previous models. In contrast to preceding models it also covers
important concrete cases such as analog neural nets with a Gaussian distribution of
noise on analog gate outputs, noisy computations with less than perfect reliability,
and computations in networks of noisy spiking neurons.
Furthermore we have introduced adequate mathematical tools for analyzing the
effect of analog noise in this new framework. These tools differ quite strongly from
those that have previously been used for the investigation of noisy computations.
We show that they provide new bounds for the computational power and VC-dimension of analog neural nets and networks of spiking neurons in the presence of
analog noise.
Finally we would like to point out that our model for noisy analog computations
can also be applied to completely different types of models for discrete time analog
computation than neural nets, such as arithmetical circuits, the random access
machine (RAM) with analog inputs, the parallel random access machine (PRAM)
with analog inputs, various computational discrete-time dynamical systems and
(with some minor adjustments) also the BSS model [Blum, Shub, Smale, 1989]. Our
framework provides for each of these models an adequate definition of noise-robust
computation in the presence of analog noise, and our results provide upper bounds
for their computational power and VC-dimension in terms of characteristics of their
analog noise.
References
[Blum, Shub, Smale, 1989] L. Blum, M. Shub, S. Smale, On a theory of computation
over the real numbers: NP-completeness, recursive functions and universal machines.
Bulletin of the Amer. Math. Soc. 21 (1989), 1-46.
[Casey, 1996] M. Casey, The dynamics of discrete-time computation, with application to
recurrent neural networks and finite state machine extraction. Neural Computation 8
(1996), 1135-1178.
[Cowan, 1966] J. D. Cowan, Synthesis of reliable automata from unreliable components.
Automata Theory (E. R. Caianiello, ed.), 131-145. Academic Press, New York, 1966.
[Maass, 1996] W . Maass, Lower bounds for the computational power of networks of spiking
neurons. Neural Computation 8 (1996), 1-40.
[Maass, 1997] W. Maass, Fast sigmoidal networks via spiking neurons, to appear in
Neural Computation 9, 1997. FTP-host: archive.cis.ohio-state.edu, FTP-filename:
/pub/neuroprose/maass.sigmoidal-spiking.ps.Z.
[Maass, Orponen, 1996] W. Maass, P. Orponen, On the effect of analog noise in
discrete-time analog computations (journal version), submitted for publication; see
http://www.math.jyu.fi/~orponen/papers/noisyac.ps.
[Pippenger, 1989] N. Pippenger, Invariance of complexity measures for networks with unreliable gates. J. Assoc. Comput. Mach. 36 (1989), 531-539.
[Rabin, 1963] M. Rabin, Probabilistic automata. Information and Control 6 (1963), 230-245.
[Siegelmann, 1994] H. T. Siegelmann, On the computational power of probabilistic and
faulty networks. Proc. 21st International Colloquium on Automata, Languages, and
Programming, 23-34. Lecture Notes in Computer Science 820, Springer-Verlag, Berlin,
1994.
[Siegelmann, Sontag, 1991] H. T. Siegelmann, E. D. Sontag, Turing computability with
neural nets. Appl. Math. Letters 4(6) (1991), 77-80.
[von Neumann, 1956] J. von Neumann, Probabilistic logics and the synthesis of reliable
organisms from unreliable components. Automata Studies (C. E. Shannon, J. E. McCarthy, eds.), 329-378. Annals of Mathematics Studies 34, Princeton University Press,
Princeton, NJ, 1956.
236 | 1,214 | A Convergence Proof for the Softassign
Quadratic Assignment Algorithm
Anand Rangarajan
Department of Diagnostic Radiology
Yale University School of Medicine
New Haven, CT 06520-8042
e-mail: anand@noodle.med.yale.edu
Steven Gold
CuraGen Corporation
322 East Main Street
Branford, CT 06405
e-mail: gold-steven@cs.yale.edu
Alan Yuille
Smith-Kettlewell Eye Institute
2232 Webster Street
San Francisco, CA 94115
e-mail: yuille@skivs.ski.org
Eric Mjolsness
Dept. of Compo Sci. and Engg.
Univ. of California San Diego (UCSD)
La Jolla, CA 92093-0114
e-mail: emj@cs.ucsd.edu
Abstract
The softassign quadratic assignment algorithm has recently
emerged as an effective strategy for a variety of optimization problems in pattern recognition and combinatorial optimization. While
the effectiveness of the algorithm was demonstrated in thousands
of simulations, there was no known proof of convergence. Here,
we provide a proof of convergence for the most general form of the
algorithm.
1
Introduction
Recently, a new neural optimization algorithm has emerged for solving quadratic
assignment like problems [4, 2]. Quadratic assignment problems (QAP) are characterized by quadratic objectives with the variables obeying permutation matrix
constraints. Problems that roughly fall into this class are TSP, graph partitioning
(GP) and graph matching. The new algorithm is based on the softassign procedure
which guarantees the satisfaction of the doubly stochastic matrix constraints (resulting from a "neural" style relaxation of the permutation matrix constraints). While
the effectiveness of the softassign procedure has been demonstrated via thousands
of simulations, no proof of convergence was ever shown.
Here, we show a proof of convergence for the soft assign quadratic assignment algorithm. The proof is based on algebraic transformations of the original objective and
on the non-negativity of the Kullback-Leibler measure. A central requirement of
the proof is that the softassign procedure always returns a doubly stochastic matrix.
After providing a general criterion for convergence, we separately analyze the cases
of TSP and graph matching.
2
Convergence proof
The deterministic annealing quadratic assignment objective function is written as
[4, 5]:
E_qap(M, μ, ν) = −(1/2) Σ_{aibj} C_{ai;bj} M_{ai} M_{bj} + Σ_a μ_a (Σ_i M_{ai} − 1) + Σ_i ν_i (Σ_a M_{ai} − 1)
                 − (γ/2) Σ_{ai} M_{ai}² + (1/β) Σ_{ai} M_{ai} log M_{ai}   (1)
Here M is the desired N × N permutation matrix. This form of the energy function
has a self-amplification term with a parameter γ, two Lagrange parameters μ and
ν for constraint satisfaction, an x log x barrier function which ensures positivity of
M_{ai}, and a deterministic annealing control parameter β. The QAP benefit matrix
C_{ai;bj} is preset based on the chosen problem, for example, graph matching or TSP.
In the following deterministic annealing pseudocode β₀ and β_f are the initial and
final values of β, β_r is the rate at which β is increased, I_E is an iteration cap and
ε is an N × N matrix of small positive-valued random numbers.
Initialize β to β₀, M_{ai} to 1/N + ε_{ai}
Begin A: Deterministic annealing. Do A until β ≥ β_f
    Begin B: Relaxation. Do B until all M_{ai} converge or number of
    iterations > I_E
        Q_{ai} ← Σ_{bj} C_{ai;bj} M_{bj} + γ M_{ai}
        Begin Softassign:
            M_{ai} ← exp(β Q_{ai})
            Begin C: Sinkhorn. Do C until all M_{ai} converge
                Update M_{ai} by normalizing the rows:
                    M_{ai} ← M_{ai} / Σ_i M_{ai}
                Update M_{ai} by normalizing the columns:
                    M_{ai} ← M_{ai} / Σ_a M_{ai}
            End C
        End Softassign
    End B
    β ← β_r β
End A
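The following is a minimal NumPy sketch of this annealing loop; it is not the authors' code, and the default values of β_f, β_r, the convergence tolerances and the max-shift for numerical stability are assumptions.

import numpy as np

def sinkhorn(M, n_iter=100, tol=1e-9):
    # Alternate row and column normalizations (Sinkhorn's theorem) until M
    # is close to doubly stochastic.
    for _ in range(n_iter):
        M = M / M.sum(axis=1, keepdims=True)   # normalize rows
        M = M / M.sum(axis=0, keepdims=True)   # normalize columns
        if np.abs(M.sum(axis=1) - 1.0).max() < tol:
            break
    return M

def softassign_qap(C, gamma, beta0=1.0, beta_f=100.0, beta_r=1.075,
                   max_inner=50, tol=1e-4, rng=None):
    # C is the 4-index benefit tensor with entries C[a, i, b, j].
    rng = np.random.default_rng() if rng is None else rng
    N = C.shape[0]
    M = 1.0 / N + 1e-3 * rng.random((N, N))    # M_ai <- 1/N + eps_ai
    beta = beta0
    while beta <= beta_f:                      # loop A: annealing
        for _ in range(max_inner):             # loop B: relaxation
            M_old = M.copy()
            Q = np.einsum('aibj,bj->ai', C, M) + gamma * M
            # Softassign: exponentiate (shifted by the max for numerical
            # stability; the shift cancels under the row normalization).
            M = sinkhorn(np.exp(beta * (Q - Q.max())))   # loop C
            if np.abs(M - M_old).max() < tol:
                break
        beta *= beta_r                         # increase control parameter
    return M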
The softassign is used for constraint satisfaction. The softassign is based on
Sinkhorn's theorem [4] but can be independently derived as coordinate ascent on the
Lagrange parameters μ and ν. Sinkhorn's theorem ensures that we obtain a doubly
stochastic matrix by the simple process of alternating row and column normalizations. The QAP algorithm above was developed using the graduated assignment
heuristic [1] with no proof of convergence until now.
We simplify the objective function in (1) by collecting together all terms quadratic
in M_{ai}. This is achieved by defining
C^{(γ)}_{ai;bj} ≝ C_{ai;bj} + γ δ_{ab} δ_{ij}   (2)
Then we use an algebraic transformation [3] to transform the quadratic form into
a more manageable linear form:
−x²/2 → min_σ ( −x σ + σ²/2 )   (3)
Application of the algebraic transformation (in a vectorized form) to the quadratic
term in (1) yields:
E_qap(M, σ, μ, ν) = −Σ_{aibj} C^{(γ)}_{ai;bj} M_{ai} σ_{bj} + (1/2) Σ_{aibj} C^{(γ)}_{ai;bj} σ_{ai} σ_{bj}
                    + Σ_a μ_a (Σ_i M_{ai} − 1) + Σ_i ν_i (Σ_a M_{ai} − 1) + (1/β) Σ_{ai} M_{ai} log M_{ai}   (4)
Extremizing (4) w.r.t. σ, we get
Σ_{bj} C^{(γ)}_{ai;bj} M_{bj} = Σ_{bj} C^{(γ)}_{ai;bj} σ_{bj}  ⇒  σ_{ai} = M_{ai},   (5)
which is a minimum, provided certain conditions hold which we specify below.
In the first part of the proof, we show that setting σ_{ai} = M_{ai} is guaranteed to
decrease the energy function. Restated, we require that
σ_{ai} = M_{ai} = argmin_σ ( −Σ_{aibj} C^{(γ)}_{ai;bj} M_{ai} σ_{bj} + (1/2) Σ_{aibj} C^{(γ)}_{ai;bj} σ_{ai} σ_{bj} )   (6)
If C^{(γ)}_{ai;bj} is positive definite in the subspace spanned by M, then σ_{ai} = M_{ai} is a
minimum of the energy function −Σ_{aibj} C^{(γ)}_{ai;bj} M_{ai} σ_{bj} + (1/2) Σ_{aibj} C^{(γ)}_{ai;bj} σ_{ai} σ_{bj}.
At this juncture, we make a crucial assumption that considerably simplifies the
proof. Since this assumption is central, we formally state it here: "M is always
constrained to be a doubly stochastic matrix." In other words, for our proof of convergence, we require the softassign algorithm to return a doubly stochastic matrix
(as Sinkhorn's theorem guarantees that it will) instead of a matrix which is merely
close to being doubly stochastic (based on some reasonable metric). We also require
the variable σ to be a doubly stochastic matrix.
Since M is always constrained to be a doubly stochastic matrix, C^{(γ)}_{ai;bj} is required
to be positive definite in the linear subspace of rows and columns of M summing to
one. The value of γ should be set high enough such that C^{(γ)}_{ai;bj} does not have any
negative eigenvalues in the subspace spanned by the row and column constraints.
This is the same requirement imposed in [5] to ensure that we obtain a permutation
matrix at zero temperature.
To derive a more explicit criterion for γ, we first define a matrix r in the following
manner:
r ≝ I_N − (1/N) e e′   (7)
where I_N is the N × N identity matrix, e is the vector of all ones and the "prime"
indicates a transpose operation. The matrix r has the property that any vector
rs with s arbitrary will sum to zero. We would like to extend such a property to
cover matrices whose row and column sums stay fixed. To achieve this, take the
Kronecker product of r with itself:
R ≝ r ⊗ r   (8)
R has the property that it will annihilate all row and column sums. Form a vector
m by concatenating all the columns of the matrix M together into a single column
[m = vec(M)]. Then the vector Rm has the equivalent property of the "rows" and
"columns" summing to zero. Hence the matrix RC(-Y) R (where C("'() is the matrix
equivalent of ciJ;~j) satisfies the criterion of annihilated row and column sums in
any quadratic form; m'RC(-Y)Rm = (Rm)'C("'()(Rm).
The parameter γ is chosen such that all eigenvalues of RC^{(γ)}R are positive:
γ = −min_λ λ(RCR) + ε   (9)
where ε > 0 is a small quantity. Note that C is the original QAP benefit matrix
whereas C^{(γ)} is the augmented matrix of (2). We cannot always efficiently compute
the largest negative eigenvalue of the matrix RCR. Since the original C_{ai;bj} is
four dimensional, the dimensions of RCR are N² × N² where N is the number of
elements in one set. Fortunately, as we show later, for specific problems it is possible
to break up RCR into its constituents, thereby making the calculation of the largest
negative eigenvalue of RCR more efficient. We return to this point in Section 3.
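The following is a sketch of this recipe (Eqs. 7-9) in NumPy; it treats the 4-index benefit tensor as an N² × N² matrix, and the assumption that this matrix is symmetric (so that eigvalsh applies) and the vec-ordering convention are both illustrative.

import numpy as np

def gamma_for_convergence(C4, eps=1e-3):
    # C4: benefit matrix of shape (N^2, N^2), the matrix equivalent of C_{ai;bj}.
    N = int(round(np.sqrt(C4.shape[0])))
    r = np.eye(N) - np.ones((N, N)) / N        # r = I_N - (1/N) e e'   (Eq. 7)
    R = np.kron(r, r)                          # R annihilates row/column sums (Eq. 8)
    lam_min = np.linalg.eigvalsh(R @ C4 @ R).min()
    return -lam_min + eps                      # gamma = -min lambda(RCR) + eps (Eq. 9)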
The second part of the proof involves demonstrating that the softassign operation
also decreases the objective in (4). (Note that the two Lagrange parameters μ and
ν are specified by the softassign algorithm [4]).
M = Softassign(Q, β)  where  Q_{ai} = Σ_{bj} C^{(γ)}_{ai;bj} σ_{bj}   (10)
Recall that the step immediately preceding the softassign operation sets σ_{ai} =
M_{ai}. We are therefore justified in referring to σ_{ai} as the "old" value of M_{ai}. For
convergence, we have to show that E_qap(σ, σ) ≥ E_qap(M, σ) in (4).
Minimizing (4) w.r.t. M_{ai}, we get
M_{ai} = exp( β ( Σ_{bj} C^{(γ)}_{ai;bj} σ_{bj} − μ_a − ν_i ) − 1 ).   (11)
From (11), we see that
(1/β) Σ_{ai} M_{ai} log M_{ai} = Σ_{aibj} C^{(γ)}_{ai;bj} M_{ai} σ_{bj} − Σ_a μ_a Σ_i M_{ai} − Σ_i ν_i Σ_a M_{ai} − (1/β) Σ_{ai} M_{ai}   (12)
and
(1/β) Σ_{ai} σ_{ai} log M_{ai} = Σ_{aibj} C^{(γ)}_{ai;bj} σ_{ai} σ_{bj} − Σ_a μ_a Σ_i σ_{ai} − Σ_i ν_i Σ_a σ_{ai} − (1/β) Σ_{ai} σ_{ai}.   (13)
From (12) and (13), we get (after some algebraic manipulations)
E_qap(σ, σ) − E_qap(M, σ) = −Σ_{aibj} C^{(γ)}_{ai;bj} σ_{ai} σ_{bj} + Σ_{aibj} C^{(γ)}_{ai;bj} M_{ai} σ_{bj}
    + (1/β) Σ_{ai} σ_{ai} log σ_{ai} − (1/β) Σ_{ai} M_{ai} log M_{ai} = (1/β) Σ_{ai} σ_{ai} log( σ_{ai} / M_{ai} ) ≥ 0
by the non-negativity of the Kullback-Leibler measure. We have shown that the
change in energy after σ has been initialized with the "old" value of M is non-negative. We require that σ and M are always doubly stochastic via the action of the
softassign operation. Consequently, the terms involving the Lagrange parameters
μ and ν can be eliminated from the energy function (4). Setting σ = M followed
by the softassign operation decreases the objective in (4) after excluding the terms
involving the Lagrange parameters.
We summarize the essence of the proof to bring out the salient points. At each
temperature, the quadratic assignment algorithm executes the following steps until
convergence is established.
Step 1: σ_{ai} ← M_{ai}.
Step 2:
Step 2a: Q_{ai} ← Σ_{bj} C^{(γ)}_{ai;bj} σ_{bj}.
Step 2b: M ← Softassign(Q, β).
Return to Step 1 until convergence.
Our proof is based on demonstrating that an appropriately designed energy function
decreases in both Step 1 and Step 2 (at fixed temperature). This energy function
is Equation (4) after excluding the Lagrange parameter terms.
Step 1: Energy decreases due to the positive definiteness of C^{(γ)}_{ai;bj} in the
linear subspace spanned by the row and column constraints. γ has to be
set high enough for this statement to be true.
Step 2: Energy decreases due to the non-negativity of the Kullback-Leibler
measure and due to the restriction that M (and σ) are doubly stochastic.
3
Applications
3.1
Quadratic Assignment
The QAP benefit matrix is chosen such that the softassign algorithm will not converge without adding the γ term in (1). To achieve this, we randomly picked a unit
vector v of dimension N². The benefit matrix C is set to −vv′. Since C has only one
negative eigenvalue, the softassign algorithm cannot possibly converge. We ran the
softassign algorithm with β₀ = 1, β_r = 0.9 and γ = 0. The energy difference plot
on the left in Figure 1 shows the energy never decreasing with increasing iteration
number. Next, we followed the recipe for setting γ exactly as in Section 2. After
projecting C into the subspace of the row and column constraints, we calculated
the largest negative eigenvalue of the matrix RCR which turned out to be −0.8152.
We set γ to 0.8162 (ε = 0.001) and reran the softassign algorithm. The energy
difference plot shows (Figure 1) that the energy never increases. We have shown
that a proper choice of γ leads to a convergent algorithm.
[Figure 1: two plots of energy difference vs. iterations; left: γ = 0, right: γ = 0.8162. See caption below.]
Figure 1: Energy difference plot. Left: γ = 0 and Right: γ = 0.8162. While
the change in energy is always negative when γ = 0, it is always non-negative when
γ = 0.8162. The negative energy difference (on the left) implies that the energy
function increases whereas the non-negative energy difference (on the right) implies
that the energy function never increases.
3.2
TSP
The TSP objective function is written as follows: Given N cities,
E_tsp(M) = Σ_{aij} d_{ij} M_{ai} M_{(a⊕1)j} = trace(D M′ T M)   (14)
where the symbol ⊕ is used to indicate that the summation in (14) is taken modulo
N, d_{ij} (D) is the inter-city distance matrix and M is the desired permutation
matrix. T is a matrix whose (i, j)th entry is δ_{(i⊕1)j} (δ_{ij} is the Kronecker delta
function). Equation (14) is transformed into the m′Cm form:
E_tsp(m) = trace(m′ (D ⊗ T) m)   (15)
where m = vec(M). We identify our general matrix C with −2 D ⊗ T.
For convergence, we require the largest eigenvalue of
−RCR = 2 (r ⊗ r)(D ⊗ T)(r ⊗ r) = 2 (rDr) ⊗ (rTr) = 2 (rDr) ⊗ (rT)   (16)
The eigenvalues of rT are bounded by unity. The eigenvalues of rDr will depend
on the form of D. Even in Euclidean TSP the values will depend on whether the
Euclidean distance or the distance squared between the cities is used.
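The following is a small NumPy sketch of (14)-(15): it builds the cyclic-shift matrix T, evaluates the tour energy in its trace form (which matches the double sum when D is symmetric), and assembles the 4-index benefit tensor for C = −2 D ⊗ T. The pairing of indices under vec(M) is an assumed convention.

import numpy as np

def cyclic_T(N):
    # T[a, b] = 1 iff b = a (+) 1 (mod N), i.e. delta_{(a+1 mod N) b}.
    return np.roll(np.eye(N), 1, axis=1)

def tsp_energy(D, M):
    # E_tsp = sum_{aij} d_ij M_ai M_{(a+1)j}; trace form assumes symmetric D.
    return np.trace(D @ M.T @ cyclic_T(D.shape[0]) @ M)

def tsp_benefit_tensor(D):
    # C[a, i, b, j] = -2 d_ij delta_{(a+1)b}, so that -(1/2) sum C M M
    # reproduces the tour energy above.
    return -2.0 * np.einsum('ij,ab->aibj', D, cyclic_T(D.shape[0]))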
3.3
Graph Matching
The graph matching objective function is written as follows: Given N₁ and N₂ node
graphs with adjacency matrices G and g respectively,
E_gm(M) = −(1/2) Σ_{aibj} C_{ai;bj} M_{ai} M_{bj}   (17)
where C_{ai;bj} = 1 − 3|G_{ab} − g_{ij}| is the compatibility matrix [1]. The matching
constraints are somewhat different from TSP due to the presence of slack variables
[1]. This makes no difference however to our projection operators. We add an extra
row and column of zeros to g and G in order to handle the slack variable case. Now
G is (N₁ + 1) × (N₁ + 1) and g is (N₂ + 1) × (N₂ + 1). Equation (17) can be readily
transformed into the m′Cm form. Our projection apparatus remains unchanged.
For convergence, we require the largest negative eigenvalue of RCR.
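The following is a sketch of the compatibility term (17) with the zero slack row and column appended; whether slack entries should enter the |G_ab − g_ij| formula literally or be zeroed out is not specified above, so this literal reading is an assumption.

import numpy as np

def gm_benefit_tensor(G, g):
    # C[a, i, b, j] = 1 - 3 |G_ab - g_ij|, after padding each adjacency
    # matrix with an extra row and column of zeros for the slack variables.
    Gs = np.pad(G.astype(float), ((0, 1), (0, 1)))   # (N1+1) x (N1+1)
    gs = np.pad(g.astype(float), ((0, 1), (0, 1)))   # (N2+1) x (N2+1)
    return 1.0 - 3.0 * np.abs(Gs[:, None, :, None] - gs[None, :, None, :])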
4
Conclusion
We have derived a convergence proof for the softassign quadratic assignment algorithm and specialized to the cases of TSP and graph matching. An extension
to graph partitioning follows along the same lines as graph matching. Central to
our proof is the requirement that the QAP matrix M is always doubly stochastic. As a by-product, the convergence proof yields a criterion by which the free
self-amplification parameter γ is set. We believe that the combination of good theoretical properties and experimental success of the softassign algorithm make it the
technique of choice for quadratic assignment neural optimization.
References
[1] S. Gold and A. Rangarajan. A graduated assignment algorithm for graph
matching. IEEE Transactions on Pattern Analysis and Machine Intelligence,
18(4):377-388, 1996.
[2] S. Gold and A. Rangarajan. Softassign versus softmax: Benchmarks in combinatorial optimization. In Advances in Neural Information Processing Systems
8, pages 626-632. MIT Press, 1996.
[3] E. Mjolsness and C. Garrett. Algebraic transformations of objective functions.
Neural Networks, 3:651-669, 1990.
[4] A. Rangarajan, S. Gold, and E. Mjolsness. A novel optimizing network architecture with applications. Neural Computation, 8(5):1041-1060, 1996.
[5] A. L. Yuille and J. J. Kosowsky. Statistical physics algorithms that converge.
Neural Computation, 6(3):341-356, May 1994.
237 | 1,215 | LSTM CAN SOLVE HARD
LONG TIME LAG PROBLEMS
Sepp Hochreiter
Fakultät für Informatik
Technische Universität München
80290 München, Germany
Jürgen Schmidhuber
IDSIA
Corso Elvezia 36
6900 Lugano, Switzerland
Abstract
Standard recurrent nets cannot deal with long minimal time lags
between relevant signals. Several recent NIPS papers propose alternative methods. We first show: problems used to promote various
previous algorithms can be solved more quickly by random weight
guessing than by the proposed algorithms. We then use LSTM,
our own recent algorithm, to solve a hard problem that can neither
be quickly solved by random search nor by any other recurrent net
algorithm we are aware of.
1
TRIVIAL PREVIOUS LONG TIME LAG PROBLEMS
Traditional recurrent nets fail in case of long minimal time lags between input signals and corresponding error signals [7, 3]. Many recent papers propose alternative
methods, e.g., [16, 12, 1, 5, 9]. For instance, Bengio et al. investigate methods such
as simulated annealing, multi-grid random search, time-weighted pseudo-Newton
optimization, and discrete error propagation [3]. They also propose an EM approach [1]. Quite a few papers use variants of the "2-sequence problem" (and "latch
problem") to show the proposed algorithm's superiority, e.g. [3, 1, 5, 9]. Some papers also use the "parity problem", e.g., [3, 1]. Some of Tomita's [18] grammars are
also often used as benchmark problems for recurrent nets [2, 19, 14, 11].
Trivial versus non-trivial tasks. By our definition, a "trivial" task is one that
can be solved quickly by random search (RS) in weight space. RS works as follows:
REPEAT randomly initialize the weights and test the resulting net on a training set
UNTIL solution found.
Random search (RS) details. In all our RS experiments, we randomly initialize
weights in [-100.0,100.0]. Binary inputs are -1.0 (for 0) and 1.0 (for 1). Targets are
either 1.0 or 0.0. All activation functions are logistic sigmoid in [0.0,1.0]. We use two
architectures (A1, A2) suitable for many widely used "benchmark" problems: A1 is
a fully connected net with 1 input, 1 output, and n biased hidden units. A2 is like
A1 with n = 10, but less densely connected: each hidden unit sees the input unit,
the output unit, and itself; the output unit sees all other units; all units are biased.
All activations are set to 0 at each sequence begin. We will indicate where we
also use different architectures of other authors. All sequence lengths are randomly
chosen between 500 and 600 (most other authors facilitate their problems by using
much shorter training/test sequences). The "benchmark" problems always require
to classify two types of sequences. Our training set consists of 100 sequences, 50
from class 1 (target 0) and 50 from class 2 (target 1). Correct sequence classification
is defined as "absolute error at sequence end below 0.1". We stop the search once
a random weight matrix correctly classifies all training sequences. Then we test on
the test set (100 sequences). All results below are averages of 10 trials. In all our
simulations below, RS finally classified all test set sequences correctly;
average final absolute test set errors were always below 0.001 - in most
cases below 0.0001.
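The following is a minimal Python sketch of RS with architecture A1; the data-generating callable, the treatment of biases as a separate random vector, and the synchronous update order within a step are assumptions.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-np.clip(x, -500, 500)))  # clipped for stability

def run_a1(W, b, seq):
    # Units: [input, hidden_1..hidden_n, output]; logistic, biased, fully connected.
    act = np.zeros(W.shape[0])
    for x in seq:
        act[0] = x
        act[1:] = sigmoid(W[1:] @ act + b[1:])
    return act[-1]                                        # output unit activation

def random_search(make_training_set, n_hidden=1, max_trials=1_000_000, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    units = 2 + n_hidden
    seqs, targets = make_training_set()                   # e.g. 100 labeled sequences
    for trial in range(1, max_trials + 1):
        W = rng.uniform(-100.0, 100.0, (units, units))    # weights in [-100, 100]
        b = rng.uniform(-100.0, 100.0, units)
        outs = np.array([run_a1(W, b, s) for s in seqs])
        if np.all(np.abs(outs - targets) < 0.1):          # all correctly classified
            return trial, W, b
    return None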
"2-sequence p~oblem" (and "latch problem") [3, 1, 9]. The task is to observe and
classify input sequences. There are two classes. There is only one input unit or input
line. Only the first N real-valued sequence elements convey relevant information
about the class. Sequence elements at positions t > N (we use N = 1) are generated
by a Gaussian with mean zero and variance 0.2. The first sequence element is 1.0
(−1.0) for class 1 (2). Target at sequence end is 1.0 (0.0) for class 1 (2) (the latch
problem is a simple version of the 2-sequence problem that allows for input tuning
instead of weight tuning).
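The following sketch generates this data; note that the class-to-target pairing is stated differently in two passages above, so the convention used here (first element 1.0 maps to target 1.0) is one reading.

import numpy as np

def two_sequence_example(rng, N=1, min_len=500, max_len=600):
    length = int(rng.integers(min_len, max_len + 1))
    seq = rng.normal(0.0, np.sqrt(0.2), size=length)  # variance 0.2 noise
    label = int(rng.integers(0, 2))
    seq[:N] = 1.0 if label == 1 else -1.0             # only the first N elements are informative
    return seq, float(label)                          # assumed target convention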
Bengio et al.'s results. For the 2-sequence problem, the best method among the
six tested by Bengio et al. [3] was multigrid random search (sequence lengths 50
- 100; N and stopping criterion undefined), which solved the problem after 6,400
sequence presentations, with final classification error 0.06. In more recent work,
Bengio and Frasconi reported that an EM-approach [1] solved the problem within
2,900 trials.
RS results. RS with architecture A2 (A1, n = 1) solves the problem within only
718 (1247) trials on average. Using an architecture with only 3 parameters (as
in Bengio et al.'s architecture for the latch problem [3]), the problem was solved
within only 22 trials on average, due to the tiny parameter space. According to our
definition above, the problem is trivial. RS outperforms Bengio et al.'s methods in
every respect: (1) many fewer trials required, (2) much less computation time per
trial. Also, in most cases (3) the solution quality is better (less error).
It should be mentioned, however, that different input representations and different types of noise may lead to worse RS performance (Yoshua Bengio, personal
communication, 1996).
"Parity problem". The parity task [3, 1] requires to classify sequences with
several 100 elements (only l's or -1's) according to whether the number of l's is
even or odd. The target at sequence end is 1.0 for odd and 0.0 for even.
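A corresponding data generator, under the target convention just stated, might look as follows (a sketch, not the authors' code):

import numpy as np

def parity_example(rng, min_len=500, max_len=600):
    length = int(rng.integers(min_len, max_len + 1))
    seq = rng.choice([-1.0, 1.0], size=length)
    target = float(int((seq == 1.0).sum()) % 2)       # 1.0 iff the number of 1's is odd
    return seq, target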
Bengio et al.'s results. For sequences with only 25-50 steps, among the six
methods tested in [3] only simulated annealing was reported to achieve final classification error of 0.000 (within about 810,000 trials; the authors did not mention
54,000 trials to achieve final classification error 0.05. In more recent work [1], for
sequences with 250-500 steps, their EM-approach took about 3,400 trials to achieve
final classification error 0.12.
RS results. RS with A1 (n = 1) solves the problem within only 2906 trials on
average. RS with A2 solves it within 2797 trials. We also ran another experiment
with architecture A2, but without self-connections for hidden units. RS solved the
problem within 250 trials on average.
Again it should be mentioned that different input representations and noise types
may lead to worse RS performance (Yoshua Bengio, personal communication, 1996).
Tomita grammars. Many authors also use Tomita's grammars [18] to test their
algorithms. See, e.g., [2, 19, -14, 11, 10]. Since we already tested parity problems
above, we now focus on a few "parity-free" Tomita grammars (nr.s #1, #2, #4).
Previous work facilitated the problems by restricting sequence length. E.g., in
[11], maximal test (training) sequence length is 15 (10). Reference [11] reports the
number of sequences required for convergence (for various first and second order
nets with 3 to 9 units): Tomita #1: 23,000 - 46,000; Tomita #2: 77,000 - 200,000;
Tomita #4: 46,000 - 210,000. RS, however, clearly outperforms the methods in
[11]. The average results are: Tomita #1: 182 (A1, n = 1) and 288 (A2), Tomita
#2: 1,511 (A1, n = 3) and 17,953 (A2), Tomita #4: 13,833 (A1, n = 2) and 35,610
(A2).
Non-trivial tasks / Outline of remainder. Solutions of non-trivial tasks are
sparse in weight space. They require either many free parameters (e.g., input
weights) or high weight precision, such that RS becomes infeasible. To solve such
tasks we need a novel method called "Long Short-Term Memory", or LSTM for
short [8]. Section 2 will briefly review LSTM. Section 3 will show results on a task
that cannot be solved at all by any other recurrent net learning algorithm we are
aware of. The task involves distributed, high-precision, continuous-valued representations and long minimal time lags: there are no short time lag training exemplars
facilitating learning.
2 LONG SHORT-TERM MEMORY
Memory cells and gate units: basic ideas. LSTM's basic unit is called a
memory cell. Within each memory cell, there is a linear unit with a fixed-weight
self-connection (compare Mozer's time constants [12]). This enforces constant, nonexploding, non-vanishing error flow within the memory cell. A multiplicative input
gate unit learns to protect the constant error flow within the memory cell from
perturbation by irrelevant inputs. Likewise, a multiplicative output gate unit learns
to protect other units from perturbation by currently irrelevant memory contents
stored in the memory cell. The gates learn to open and close access to constant error
flow. Why is constant error flow important? For instance, with conventional "backprop through time" (BPTT, e.g., [20]) or RTRL (e.g., [15]), error signals "flowing
backwards in time" tend to vanish: the temporal evolution of the backpropagated
error exponentially depends on the size of the weights. For the first theoretical error
flow analysis see [7]. See [3] for a more recent, independent, essentially identical
analysis.
LSTM details. In what follows, w_{uv} denotes the weight on the connection from
unit v to unit u. net_u(t) and y^u(t) are the net input and activation of unit u (with activation
function f_u) at time t. For all non-input units that aren't memory cells (e.g. output
units), we have y^u(t) = f_u(net_u(t)), where net_u(t) = Σ_v w_{uv} y^v(t-1). The jth memory cell is denoted c_j. Each memory cell is built around a central linear
unit with a fixed self-connection (weight 1.0) and identity function as activation
function (see definition of s_{c_j} below). In addition to net_{c_j}(t) = Σ_u w_{c_j u} y^u(t-1),
c_j also gets input from a special unit out_j (the "output gate"), and from another
special unit in_j (the "input gate"). in_j's activation at time t is denoted by y^{in_j}(t),
out_j's activation at time t is denoted by y^{out_j}(t). in_j and out_j are viewed as ordinary
hidden units. We have y^{out_j}(t) = f_{out_j}(net_{out_j}(t)) and y^{in_j}(t) = f_{in_j}(net_{in_j}(t)), where
net_{out_j}(t) = Σ_u w_{out_j u} y^u(t-1) and net_{in_j}(t) = Σ_u w_{in_j u} y^u(t-1). The summation
indices u may stand for input units, gate units, memory cells, or even conventional
hidden units if there are any (see also paragraph on "network topology" below).
All these different types of units may convey useful information about the current
state of the net. For instance, an input gate (output gate) may use inputs from
other memory cells to decide whether to store (access) certain information in its
memory cell. There even may be recurrent self-connections like w_{c_j c_j}. It is up to
the user to define the network topology. At time t, c_j's output y^{c_j}(t) is computed
in a sigma-pi-like fashion:

    y^{c_j}(t) = y^{out_j}(t) h(s_{c_j}(t)),

where s_{c_j}(0) = 0 and s_{c_j}(t) = s_{c_j}(t-1) + y^{in_j}(t) g(net_{c_j}(t)) for t > 0.
The differentiable function g scales net_{c_j}. The differentiable function h scales memory cell outputs computed from the internal state s_{c_j}.
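To make the update concrete, here is a minimal sketch of one forward step for a single memory cell, following the equations above (the squashing ranges for g and h are taken from the experiment in Section 3; all names and data structures are ours):

    import math

    def sigmoid(x, lo=0.0, hi=1.0):
        # logistic function rescaled to the interval [lo, hi]
        return lo + (hi - lo) / (1.0 + math.exp(-x))

    def memory_cell_step(s_prev, y_prev, w_in, w_out, w_c):
        """One time step for memory cell c_j.
        y_prev: activations y^u(t-1) of all units feeding the cell;
        w_in, w_out, w_c: incoming weights of in_j, out_j and the cell."""
        net = lambda w: sum(wi * yi for wi, yi in zip(w, y_prev))
        y_in = sigmoid(net(w_in))             # input gate, in [0, 1]
        y_out = sigmoid(net(w_out))           # output gate, in [0, 1]
        g_val = sigmoid(net(w_c), -2.0, 2.0)  # scaled cell input g
        s = s_prev + y_in * g_val             # fixed self-connection of weight 1.0
        h_val = sigmoid(s, -1.0, 1.0)         # scaled cell output h
        return s, y_out * h_val               # new state s_cj(t), output y^cj(t)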
Why gate units? in_j controls the error flow to memory cell c_j's input connections
w_{c_j u}; out_j controls the error flow from unit j's output connections. Error signals
trapped within a memory cell cannot change, but different error signals flowing into
the cell (at different times) via its output gate may get superimposed. The output
gate will have to learn which errors to trap in its memory cell, by appropriately
scaling them. Likewise, the input gate will have to learn when to release errors.
Gates open and close access to constant error flow.
Network topology. There is one input, one hidden, and one output layer. The
fully self-connected hidden layer contains memory cells and corresponding gate units
(for convenience, we refer to both memory cells and gate units as hidden units
located in the hidden layer). The hidden layer may also contain "conventional"
hidden units providing inputs to gate units and memory cells. All units (except for
gate units) in all layers have directed connections (serve as inputs) to all units in
higher layers.
Memory cell blocks. S memory cells sharing one input gate and one output gate
form a "memory cell block of size S". They can facilitate information storage.
Learning with excellent computational complexity (see details in the appendix
of [8]). We use a variant of RTRL which properly takes into account the altered
(sigma-pi-like) dynamics caused by input and output gates. However, to ensure
constant error backprop, like with truncated BPTT [20], errors arriving at "memory
cell net inputs" (for cell c_j, this includes net_{c_j}, net_{in_j}, and net_{out_j}) do not get propagated
back further in time (although they do serve to change the incoming weights). Only
within memory cells, errors are propagated back through previous internal states s_{c_j}.
This enforces constant error flow within memory cells. Thus only the derivatives
∂s_{c_j}/∂w_{il} need to be stored and updated. Hence, the algorithm is very efficient,
and LSTM's update complexity per time step is excellent in comparison to other
approaches such as RTRL: given n units and a fixed number of output units, LSTM's
update complexity per time step is at most O(n^2), just like BPTT's.
3 EXPERIMENT: ADDING PROBLEM
Our previous experimental comparisons (on widely used benchmark problems)
with RTRL (e.g., [15]; results compared to the ones in [17]), Recurrent Cascade-Correlation [6], Elman nets (results compared to the ones in [4]), and Neural Sequence Chunking [16], demonstrated that LSTM leads to many more successful
runs than its competitors, and learns much faster [8]. The following task, though,
is more difficult than the above benchmark problems: it cannot be solved at all in
reasonable time by RS (we tried various architectures) nor any other recurrent net
learning algorithm we are aware of (see [13] for an overview). The experiment will
show that LSTM can solve non-trivial, complex long time lag problems involving
distributed, high-precision, continuous-valued representations.
Task. Each element of each input sequence is a pair consisting of two components.
The first component is a real value randomly chosen from the interval [-1,1]. The
second component is either 1.0, 0.0, or -1.0, and is used as a marker: at the end
of each sequence, the task is to output the sum of the first components of those
pairs that are marked by second components .equal to 1.0. The value T is used to
determine average sequence length, which is a randomly chosen integer between T
and T + T/10. With a given sequence, exactly two pairs are marked as follows: we first
randomly select and mark one of the first ten pairs (whose first component is called
X1). Then we randomly select and mark one of the first T/2 - 1 still unmarked pairs
(whose first component is called X2). The second components of the remaining
pairs are zero except for the first and final pair, whose second components are -1
(X1 is set to zero in the rare case where the first pair of the sequence got marked).
An error signal is generated only at the sequence end: the target is 0.5 + (X1 + X2)/4.0
(the sum X1 + X2 scaled to the interval [0,1]). A sequence was processed correctly
if the absolute error at the sequence end is below 0.04.
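A sketch of a data generator for this task (marker bookkeeping follows the description above; the handling of the rare edge cases is our reading of it):

    import random

    def adding_example(T, seed=None):
        rng = random.Random(seed)
        length = rng.randint(T, T + T // 10)           # average length about T
        values = [rng.uniform(-1.0, 1.0) for _ in range(length)]
        markers = [0.0] * length
        i = rng.randrange(10)                          # first marked pair (X1)
        j = i
        while j == i:                                  # second marked pair (X2)
            j = rng.randrange(T // 2 - 1)
        markers[i] = markers[j] = 1.0
        if markers[0] == 0.0:                          # first and last pairs
            markers[0] = -1.0                          # carry marker -1
        if markers[-1] == 0.0:
            markers[-1] = -1.0
        x1 = 0.0 if i == 0 else values[i]              # X1 zeroed if pair 0 marked
        x2 = values[j]
        target = 0.5 + (x1 + x2) / 4.0                 # scaled into [0, 1]
        return list(zip(values, markers)), target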
Architecture. We use a 3-layer net with 2 input units, 1 output unit, and 2
memory cell blocks of size 2 (a cell block size of 1 works well, too). The output
layer receives connections only from memory cells. Memory cells/gate units receive
inputs from memory cells/gate units (fully connected hidden layer). Gate units
(f_{in_j}, f_{out_j}) and output units are sigmoid in [0,1]. h is sigmoid in [-1,1], and g is
sigmoid in [-2,2].
State drift versus initial bias. Note that the task requires storing the precise
values of real numbers for long durations; the system must learn to protect memory cell contents against even minor "internal state drifts". Our simple but highly
effective way of solving drift problems at the beginning of learning is to initially bias
the input gate in_j towards zero. There is no need for fine tuning initial bias: with
sigmoid logistic activation functions, the precise initial bias hardly matters because
vastly different initial bias values produce almost the same near-zero activations. In
fact, the system itself learns to generate the most appropriate input gate bias. To
study the significance of the drift problem, we bias all non-input units, thus artificially inducing internal state drifts. Weights (including bias weights) are randomly
initialized in the range [-0.1,0.1]. The first (second) input gate bias is initialized
with -3.0 (-6.0) (recall that the precise initialization values hardly matter, as
confirmed by additional experiments).
Training / Testing. The learning rate is 0.5. Training examples are generated
on-line. Training is stopped if the average training error is below 0.01, and the 2000
most recent sequences were processed correctly (see definition above).
Results. With a test set consisting of 2560 randomly chosen sequences, the average
test set error was always below 0.01, and there were never more than 3 incorrectly
processed sequences. The following results are means of 10 trials: For T = 100
(T = 500, T = 1000), training was stopped after 74,000 (209,000; 853,000) training
sequences, and then only 1 (0, 1) of the test sequences was not processed correctly.
For T = 1000, the number of required training examples varied between 370,000
and 2,020,000, exceeding 700,000 in only 3 cases.
The experiment demonstrates even for very long minimal time lags: (1) LSTM is
able to work well with distributed representations. (2) LSTM is able to perform
calculations involving high-precision, continuous values. Such tasks are impossible
to solve within reasonable time by other algorithms: the main problem of gradient-based approaches (including TDNN, pseudo Newton) is their inability to deal with
very long minimal time lags (vanishing gradient). A main problem of "global" and
"discrete" approaches (RS, Bengio's and Frasconi's EM-approach, discrete error
propagation) is their inability to deal with high-precision, continuous values.
Other experiments. In [8] LSTM is used to solve numerous additional tasks that
cannot be solved by any other recurrent net learning algorithm we are aware of. For
instance, LSTM can extract information conveyed by the temporal order of widely
separated inputs. LSTM also can learn real-valued, conditional expectations of
strongly delayed, noisy targets, given the inputs.
Conclusion. For non-trivial tasks (where RS is infeasible), we recommend LSTM.
4 ACKNOWLEDGMENTS
This work was supported by DFG grant SCHM 942/3-1 from the "Deutsche Forschungsgemeinschaft".
References
[1] Y. Bengio and P. Frasconi. Credit assignment through time: Alternatives to backpropagation. In J. D. Cowan, G. Tesauro, and J. Alspector, editors, Advances in Neural
Information Processing Systems 6, pages 75-82. San Mateo, CA: Morgan Kaufmann,
1994.
[2] Y. Bengio and P. Frasconi. An input output HMM architecture. In G. Tesauro,
D. S. Touretzky, and T. K. Leen, editors, Advances in Neural Information Processing
Systems 7, pages 427-434. MIT Press, Cambridge MA, 1995.
[3] Y. Bengio, P. Simard, and P. Frasconi. Learning long-term dependencies with gradient
descent is difficult. IEEE Transactions on Neural Networks, 5(2):157-166, 1994.
[4] A. Cleeremans, D. Servan-Schreiber, and J. L. McClelland. Finite-state automata
and simple recurrent networks. Neural Computation, 1:372-381, 1989.
[5] S. El Hihi and Y. Bengio. Hierarchical recurrent neural networks for long-term dependencies. In Advances in Neural Information Processing Systems 8, 1995, to appear.
[6] S. E. Fahlman. The recurrent cascade-correlation learning algorithm. In R. P. Lippmann, J. E. Moody, and D. S. Touretzky, editors, Advances in Neural Information
Processing Systems 3, pages 190-196. San Mateo, CA: Morgan Kaufmann, 1991.
[7] J. Hochreiter. Untersuchungen zu dynamischen neuronalen Netzen. Diploma thesis,
Institut für Informatik, Lehrstuhl Prof. Brauer, Technische Universität München,
1991. See www7.informatik.tu-muenchen.de/~hochreit.
[8] S. Hochreiter and J. Schmidhuber. Long short-term memory. Technical Report FKI-207-95, Fakultät für Informatik, Technische Universität München, 1995. Revised 1996
(see www.idsia.ch/~juergen, www7.informatik.tu-muenchen.de/~hochreit).
[9] T. Lin, B. G. Horne, P. Tino, and C. L. Giles. Learning long-term dependencies is
not as difficult with NARX recurrent neural networks. Technical Report UMIACS-TR-95-78 and CS-TR-3500, Institute for Advanced Computer Studies, University of
Maryland, College Park, MD 20742, 1995.
[10] P. Manolios and R. Fanelli. First-order recurrent neural networks and deterministic
finite state automata. Neural Computation, 6:1155-1173, 1994.
[11] C. B. Miller and C. L. Giles. Experimental comparison of the effect of order in
recurrent neural networks. International Journal of Pattern Recognition and Artificial
Intelligence, 7(4):849-872, 1993.
[12] M. C. Mozer. Induction of multiscale temporal structure. In J. E. Moody, S. J.
Hanson, and R. P. Lippman, editors, Advances in Neural Information Processing
Systems 4, pages 275-282. San Mateo, CA: Morgan Kaufmann, 1992.
[13] B. A. Pearlmutter. Gradient calculations for dynamic recurrent neural networks: A
survey. IEEE Transactions on Neural Networks, 6(5):1212-1228, 1995.
[14] J. B. Pollack. The induction of dynamical recognizers. Machine Learning, 7:227-252,
1991.
[15] A. J. Robinson and F. Fallside. The utility driven dynamic error propagation network. Technical Report CUED/F-INFENG/TR.1, Cambridge University Engineering
Department, 1987.
[16] J. H. Schmidhuber. Learning complex, extended sequences using the principle of
history compression. Neural Computation, 4(2):234-242, 1992.
[17] A. W. Smith and D. Zipser. Learning sequential structures with the real-time recurrent learning algorithm. International Journal of Neural Systems, 1(2):125-131,
1989.
[18] M. Tomita. Dynamic construction of finite automata from examples using hill-climbing. In Proceedings of the Fourth Annual Cognitive Science Conference, pages
105-108. Ann Arbor, MI, 1982.
[19] R. L. Watrous and G. M. Kuhn. Induction of finite-state automata using secondorder recurrent networks. In J. E. Moody, S. J. Hanson, and R. P. Lippman, editors,
Advances in Neural Information Processing Systems 4, pages 309-316. San Mateo,
CA: Morgan Kaufmann, 1992.
[20] R. J. Williams and J. Peng. An efficient gradient-based algorithm for on-line training
of recurrent network trajectories. Neural Computation, 4:491-501, 1990.
Reinforcement Learning for Dynamic
Channel Allocation in Cellular Telephone
Systems
Satinder Singh
Department of Computer Science
University of Colorado
Boulder, CO 80309-0430
baveja@cs.colorado.edu
Dimitri Bertsekas
Lab. for Info. and Decision Sciences
MIT
Cambridge, MA 02139
bertsekas@lids.mit.edu
Abstract
In cellular telephone systems, an important problem is to dynamically allocate the communication resource (channels) so as to maximize service in a stochastic caller environment. This problem is
naturally formulated as a dynamic programming problem and we
use a reinforcement learning (RL) method to find dynamic channel
allocation policies that are better than previous heuristic solutions.
The policies obtained perform well for a broad variety of call traffic patterns. We present results on a large cellular system with
approximately 70^49 states.
In cellular communication systems, an important problem is to allocate the communication resource (bandwidth) so as to maximize the service provided to a set of
mobile callers whose demand for service changes stochastically. A given geographical area is divided into mutually disjoint cells, and each cell serves the calls that
are within its boundaries (see Figure 1a). The total system bandwidth is divided
into channels, with each channel centered around a frequency. Each channel can be
used simultaneously at different cells, provided these cells are sufficiently separated
spatially, so that there is no interference between them. The minimum separation
distance between simultaneous reuse of the same channel is called the channel reuse
constraint.
When a call requests service in a given cell either a free channel (one that does not
violate the channel reuse constraint) may be assigned to the call, or else the call is
blocked from the system; this will happen if no free channel can be found. Also,
when a mobile caller crosses from one cell to another, the call is "handed off" to the
cell of entry; that is, a new free channel is provided to the call at the new cell. If no
such channel is available, the call must be dropped/disconnected from the system.
One objective of a channel allocation policy is to allocate the available channels
to calls so that the number of blocked calls is minimized. An additional objective
is to minimize the number of calls that are dropped when they are handed off to
a busy cell. These two objectives must be weighted appropriately to reflect their
relative importance, since dropping existing calls is generally more undesirable than
blocking new calls.
To illustrate the qualitative nature of the channel assignment decisions, suppose
that there are only two channels and three cells arranged in a line. Assume a
channel reuse constraint of 2, i.e., a channel may be used simultaneously in cells
1 and 3, but may not be used in channel 2 if it is already used in cell 1 or in cell
3. Suppose that the system is serving one call in cell 1 and another call in cell
3. Then serving both calls on the same channel results in a better channel usage
pattern than serving them on different channels, since in the former case the other
channel is free to be used in cell 2. The purpose of the channel assignment and
channel rearrangement strategy is, roughly speaking, to create such favorable usage
patterns that minimize the likelihood of calls being blocked.
We formulate the channel assignment problem as a dynamic programming problem,
which, however, is too complex to be solved exactly. We introduce approximations
based on the methodology of reinforcement learning (RL) (e.g., Barto, Bradtke and
Singh, 1995, or the recent textbook by Bertsekas and Tsitsiklis, 1996). Our method
learns channel allocation policies that outperform not only the most commonly used
policy in cellular systems, but also the best heuristic policy we could find in the
literature.
1 CHANNEL ASSIGNMENT POLICIES
Many cellular systems are based on a fixed assignment (FA) channel allocation; that
is, the set of channels is partitioned, and the partitions are permanently assigned
to cells so that all cells can use all the channels assigned to them simultaneously
without interference (see Figure 1a). When a call arrives in a cell, if any preassigned channel is unused, it is assigned, else the call is blocked. No rearrangement
is done when a call terminates. Such a policy is static and cannot take advantage of
temporary stochastic variations in demand for service. More efficient are dynamic
channel allocation policies, which assign channels to different cells, so that every
channel is available to every cell on a need basis, unless the channel is used in a
nearby cell and the reuse constraint is violated.
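For concreteness, a minimal sketch of the FA assignment rule (the data structures and names are ours; the 7-lot partition of Figure 1a would populate "nominal"):

    def fa_assign(cell, in_use, nominal):
        """Fixed assignment: serve the call on any unused preassigned channel,
        otherwise block it. in_use[cell] is the set of channels currently
        serving calls in the cell; nominal[cell] is its fixed lot."""
        free_nominal = nominal[cell] - in_use[cell]
        if free_nominal:
            return min(free_nominal)   # any free nominal channel will do
        return None                    # no free nominal channel: block the call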
The best existing dynamic channel allocation policy we found in the literature is
Borrowing with Directional Channel Locking (BDCL) of Zhang & Yum (1989). It
numbers the channels from 1 to N, partitions and assigns them to cells as in FA.
The channels assigned to a cell are its nominal channels. If a nominal channel
is available when a call arrives in a cell, the smallest numbered such channel is
assigned to the call. If no nominal channel is available, then the largest numbered
free channel is borrowed from the neighbour with the most free channels. When a
channel is borrowed, careful accounting of the directional effect of which cells can
no longer use that channel because of interference is done. The call is blocked if
there are no free channels at all. When a call terminates in a cell and the channel
so freed is a nominal channel, say numbered i, of that cell, then if there is a call
in that cell on a borrowed channel, the call on the smallest numbered borrowed
channel is reassigned to i and the borrowed channel is returned to the appropriate
cell. If there is no call on a borrowed channel, then if there is a call on a nominal
channel numbered larger than i, the call on the highest numbered nominal channel
is reassigned to i. If the call just terminated was itself on a borrowed channel, the
call on the smallest numbered borrowed channel is reassigned to it and that channel
is returned to the cell from which it was borrowed. Notice that when a borrowed
channel is returned to its original cell, a nominal channel becomes free in that cell
and triggers a reassignment. Thus, in the worst case a call termination in one cell
can sequentially cause reassignments in arbitrarily far away cells, making BDCL
somewhat impractical.
BDCL is quite sophisticated and combines the notions of channel-ordering, nominal
channels, and channel borrowing. Zhang and Yum (1989) show that BDCL is
superior to its competitors, including FA. Generally, BDCL has continued to be
highly regarded in the literature as a powerful heuristic (Del Re et al., 1996). In
this paper, we compare the performance of dynamic channel allocation policies
learned by RL with both FA and BDCL.
1.1 DYNAMIC PROGRAMMING FORMULATION
We can formulate the dynamic channel allocation problem using dynamic programming (e.g., Bertsekas, 1995). State transitions occur when channels become free due
to call departures, or when a call arrives at a given cell and wishes to be assigned
a channel, or when there is a handoff, which can be viewed as a simultaneous call
departure from one cell and a call arrival at another cell. The state at each time
consists of two components:
(1) The list of occupied and unoccupied channels at each cell. We call this the
configuration of the cellular system. It is exponential in the number of cells.
(2) The event that causes the state transition (arrival, departure, or handoff). This
component of the state is uncontrollable.
The decision/control applied at the time of a call departure is the rearrangement
of the channels in use with the aim of creating a more favorable channel packing
pattern among the cells (one that will leave more channels free for future assignments) . Unlike BDCL, our RL solution will restrict this rearrangement to the cell
with the current call departure. The control exercised at the time of a call arrival
is the assignment of a free channel, or the blocking of the call if no free channel is
currently available. In general, it may also be useful to do admission control, i.e.,
to allow the possibility of not accepting a new call even when there exists a free
channel to minimize the dropping of ongoing calls during handoff in the future. We
address admission control in a separate paper and here restrict ourselves to always
accepting a call if a free channel is available. The objective is to learn a policy that
assigns decisions (assignment or rearrangement depending on event) to each state
so as to maximize
    J = E{ ∫_0^∞ e^{-βt} c(t) dt },

where E{·} is the expectation operator, c(t) is the number of ongoing calls at time
t, and β is a discount factor that makes immediate profit more valuable than future
profit. Maximizing J is equivalent to minimizing the expected (discounted) number
of blocked calls over an infinite horizon.
2 REINFORCEMENT LEARNING SOLUTION
RL methods solve optimal control (or dynamic programming) problems by learning
good approximations to the optimal value function, J*, given by the solution to
the Bellman optimality equation which takes the following form for the dynamic
channel allocation problem:
    J(x) = E_e{ max_{a ∈ A(x,e)} [ E_{Δt}{ c(x, a, Δt) + γ(Δt) J(y) } ] },        (1)

where x is a configuration, e is the random event (a call arrival or departure), A(x, e)
is the set of actions available in the current state (x, e), Δt is the random time until
the next event, c(x, a, Δt) is the effective immediate payoff with the discounting,
and γ(Δt) is the effective discount for the next configuration y.
RL learns approximations to J* using Sutton's (1988) temporal difference (TD(0))
algorithm. A fixed feature extractor is used to form an approximate compact representation of the exponential configuration of the cellular array. This approximate representation forms the input to a function approximator (see Figure 1) that
learns/stores estimates of J*. No partitioning of channels is done; all channels are
available in each cell. On each event, the estimates of J* are used both to make
decisions and to update the estimates themselves as follows:
Call Arrival: When a call arrives, evaluate the next configuration for each free
channel and assign the channel that leads to the configuration with the largest
estimated value. If there is no free channel at all, no decision has to be made.
Call Termination: When a call terminates, one by one each ongoing call in that
cell is considered for reassignment to the just freed channel; the resulting configurations are evaluated and compared to the value of not doing any reassignment at
all. The action that leads to the highest value configuration is then executed.
On call arrival, as long as there is a free channel, the number of ongoing calls and the
time to next event do not depend on the free channel assigned. Similarly, the number
of ongoing calls and the time to next event do not depend on the rearrangement done
on call departure. Therefore, both the sample immediate payoff which depends on
the number of ongoing calls and the time to next event, and the effective discount
factor which depends only on the time to next event are independent of the choice
of action. Thus one can choose the current best action by simply considering the
estimated values of the next configurations. The next configuration for each action
is deterministic and trivial to compute.
When the next random event occurs, the sample payoff and the discount factor become available and are used to update the value function as follows: on a transition
from configuration x to y on action a in time Δt,

    J_new(x̃) = (1 - α) J_old(x̃) + α ( c(x, a, Δt) + γ(Δt) J_old(ỹ) ),        (2)

where x̃ is used to indicate the approximate feature-based representation of x. The
parameters of the function approximator are then updated to best represent J_new(x̃)
using gradient descent on the mean-squared error (J_new(x̃) - J_old(x̃))^2.
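For a linear approximator, gradient descent on this squared error reduces to the familiar TD(0) step; a minimal sketch (all names are ours, and the per-event bookkeeping is simplified):

    def td0_update(theta, phi_x, phi_y, payoff, gamma_dt, alpha=0.01):
        """One TD(0) step for a linear value function J(x) = theta . phi(x).
        phi_x, phi_y: feature vectors of the current and next configuration;
        payoff: effective immediate payoff c(x, a, dt);
        gamma_dt: effective discount for the elapsed time dt."""
        j_x = sum(t * f for t, f in zip(theta, phi_x))
        j_y = sum(t * f for t, f in zip(theta, phi_y))
        td_error = (payoff + gamma_dt * j_y) - j_x   # sample target from Eq. 2
        return [t + alpha * td_error * f for t, f in zip(theta, phi_x)]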
3 SIMULATION RESULTS
Call arrivals are modeled as Poisson processes with a separate mean for each cell,
and call durations are modeled with an exponential distribution. The first set of
results are on the 7 by 7 cellular array of Figure 1a with 70 channels (roughly
70^49 configurations) and a channel reuse constraint of 3 (this problem is borrowed
from Zhang and Yum's (1989) paper on an empirical comparison of BDCL and its
competitors) . Figures 2a, b & c are for uniform call arrival rates of 150, 200, and
350 calls/hr respectively in each cell. The mean call duration for all the experiments
reported here is 3 minutes. Figure 2d is for non-uniform call arrival rates. Each
curve plots the cumulative empirical blocking probability as a function of simulated
time. Each data point is therefore the percentage of system-wide calls that were
blocked up until that point in time. All simulations start with no ongoing calls.
Figure 1: a) Cellular Array. The market area is divided up into cells, shown here as
hexagons. The available bandwidth is divided into channels. Each cell has a base station responsible for calls within its area. Calls arrive randomly, have random durations
and callers may move around in the market area creating handoffs. The channel reuse
constraint requires that there be a minimum distance between simultaneous reuse of the
same channel. In a fixed assignment channel allocation policy (assuming a channel reuse
constraint of 3), the channels are partitioned into 7 lots labeled 1 to 7 and assigned to the
cells in the compact pattern shown here. Note that the minimum distance between cells
with the same number is at least three. b) A block diagram of the RL system. The exponential configuration is mapped into a feature-based approximate representation which
forms the input to a function approximation system that learns values for configurations.
The parameters of the function approximator are trained using gradient descent on the
squared TD(0) error in value function estimates (cf. Equation 2).
The RL system uses a linear neural network and two sets of features as input: one
availability feature for each cell and one packing feature for each cell-channel pair.
The availability feature for a cell is the number of free channels in that cell, while the
packing feature for a cell-channel pair is the number of times that channel is used
in a 4 cell radius. Other packing features were tried but are not reported because
they were insignificant.
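A sketch of these two feature types (the adjacency helper cells_within_radius is a placeholder of ours):

    def availability_feature(cell, in_use, num_channels):
        # number of channels not currently used in this cell
        return num_channels - len(in_use[cell])

    def packing_feature(cell, channel, in_use, cells_within_radius):
        # number of times `channel` is used within a 4-cell radius of `cell`
        return sum(channel in in_use[c] for c in cells_within_radius(cell, 4))

The RL curves in Figure 2 show the empirical blocking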
probability whilst learning. Note that learning is quite rapid. As the mean call
arrival rate is increased the relative difference between the 3 algorithms decreases.
In fact, FA can be shown to be optimal in the limit of infinite call arrival rates
(see McEliece and Sivarajan, 1994). With so many customers in every cell there
are no short-term fluctuations to exploit. However, as demonstrated in Figure 2,
for practical traffic rates RL consistently gives a big win over FA and a smaller win
over BDCL.
One difference between RL and BDCL is that while the BDCL policy is independent
of call traffic, RL adapts its policy to the particulars of the call traffic it is trained
on and should therefore be less sensitive to different patterns of non-uniformity of
call traffic across cells. Figure 3b presents multiple sets of bar-graphs of asymptotic
blocking probabilities for the three algorithms on a 20 by 1 cellular array with 24
channels and a channel reuse constraint of 3. For each set, the average per-cell call
arrival rate is the same (120 calls/hr; mean duration of 3 minutes), but the pattern
of call arrival rates across the 20 cells is varied . The patterns are shown on the left
of the bar-graphs and are explained in the caption of Figure 3b. From Figure 3b
it is apparent that RL is much less sensitive to varied patterns of non-uniformity
than both BDCL and FA.
We have showed that RL with a linear function approximator is able to find better
dynamic channel allocation policies than the BnCL and FA policies. Using nonlinear neural networks as function approximators for RL did in some cases improve
Figure 2: a), b), c) & d) These figures compare performance of RL, FA, and BDCL on the 7
by 7 cellular array of Figure 1a. The means of the call arrival (Poisson) processes are shown
in the graph titles. Each curve presents the cumulative empirical blocking probability as
a function of time elapsed in minutes. All simulations start with no ongoing calls and
therefore the blocking probabilities are low in the early minutes of the performance curves.
The RL curves presented here are for a linear function approximator and show performance
while learning. Note that learning is quite rapid.
performance over linear networks by a small amount but at the cost of a big increase in training time. We chose to present results for linear networks because they
have the advantage that even though training is centralized, the policy so learned
is decentralized because the features are local and therefore just the weights from
the local features in the trained linear network can be used to choose actions in
each cell. For large cellular arrays, training itself could be decentralized because
the choice of action in a particular cell has a minor effect on far away cells. We will
explore the effect of decentralized training in future work.
4 CONCLUSION
The dynamic channel allocation problem is naturally formulated as an optimal control or dynamic programming problem, albeit one with very large state spaces. Traditional dynamic programming techniques are computationally infeasible for such
large-scale problems. Therefore, knowledge-intensive heuristic solutions that ignore the optimal control framework have been developed. Recent approximations
to dynamic programming introduced in the reinforcement learning (RL) community make it possible to go back to the channel assignment problem and solve it
as an optimal control problem, in the process finding better solutions than previously available. We presented such a solution using Sutton's (1988) TD(0) with a
feature-based linear network and demonstrated its superiority on a problem with
approximately 70^49 states. Other recent examples of similar successes are the game
Figure 3: a) Screen dump of a Java demonstration available publicly at
http://www.cs.colorado.edu/~baveja/Demo.html b) Sensitivity of channel assignment methods to
non-uniform traffic patterns. This figure plots asymptotic empirical blocking probability
for RL, BDCL, and FA for a linear array of cells with different patterns (shown at left) of
mean call arrival rates - chosen so that the average per cell call arrival rate is the same
across patterns. The symbol I is for low, m for medium, and h for high. The numeric
values of I, h, and m are chosen separately for each pattern to ensure that the average per
cell rate of arrival is 120 calls/hr. The results show that RL is able to adapt its allocation
strategy and thereby is better able to exploit the non-uniform call arrival rates.
of backgammon (Tesauro, 1992), elevator-scheduling (Crites & Barto, 1995), and
job-shop scheduling (Zhang & Dietterich, 1995). The neuro-dynamic programming
textbook (Bertsekas and Tsitsiklis, 1996) presents a variety of related case studies.
References
Barto, A.G., Bradtke, S.J. & Singh, S. (1995) Learning to act using real-time dynamic
programming. Artificial Intelligence, 72:81-138.
Bertsekas, D.P. (1995) Dynamic Programming and Optimal Control: Vols 1 and 2. Athena Scientific, Belmont, MA.
Bertsekas, D.P. & Tsitsiklis, J. (1996) Neuro-Dynamic Programming Athena-Scientific,
Belmont, MA.
Crites, R.H. & Barto, A.G. (1996) Improving elevator performance using reinforcement
learning. In Advances in Neural Information Processing Systems 8.
Del Re, E., Fantacci, R. & Ronga, L. (1996) A dynamic channel allocation technique
based on Hopfield Neural Networks. IEEE Transactions on Vehicular Technology, 45:1.
McEliece, R.J. & Sivarajan, K.N. (1994), Performance limits for channelized cellular telephone systems. IEEE Trans. Inform. Theory, pp. 21-34, Jan.
Sutton, R.S. (1988) Learning to predict by the methods of temporal differences. Machine
Learning, 3:9-44.
Tesauro, G.J. (1992) Practical issues in temporal difference learning. Machine Learning,
8(3/4):257-277.
Zhang, M. & Yum, T.P. (1989) Comparisons of Channel-Assignment Strategies in Cellular
Mobile Telephone Systems. IEEE Transactions on Vehicular Technology Vol. 38, No.4.
Zhang, W. & Dietterich, T .G. (1996) High-performance job-shop scheduling with a timedelay TD{lambda) network. In Advances is Neural Information Processing Systems 8.
Clustering Sequences with Hidden
Markov Models
Padhraic Smyth
Information and Computer Science
University of California, Irvine
CA 92697-3425
smyth@ics.uci.edu
Abstract
This paper discusses a probabilistic model-based approach to clustering sequences, using hidden Markov models (HMMs) . The problem can be framed as a generalization of the standard mixture
model approach to clustering in feature space. Two primary issues
are addressed. First, a novel parameter initialization procedure is
proposed, and second, the more difficult problem of determining
the number of clusters K, from the data, is investigated. Experimental results indicate that the proposed techniques are useful for
revealing hidden cluster structure in data sets of sequences.
1 Introduction
Consider a data set D consisting of N sequences, D = {S_1, ..., S_N}. S_i =
(x_1, ..., x_{L_i}) is a sequence of length L_i composed of potentially multivariate feature vectors x. The problem addressed in this paper is the discovery from data of a
natural grouping of the sequences into K clusters. This is analogous to clustering in
multivariate feature space, which is normally handled by methods such as k-means
and Gaussian mixtures. Here, however, one is trying to cluster the sequences S
rather than the feature vectors x. As an example, Figure 1 shows four sequences
which were generated by two different models (hidden Markov models in this case) .
The first and third came from a model with "slower" dynamics than the second and
fourth (details will be provided later). The sequence clustering problem consists
of being given sample sequences such as those in Figure 1 and inferring from the
data what the underlying clusters are. This is non-trivial since the sequences can
be of different lengths and it is not clear what a meaningful distance metric is for
sequence comparison.
The use of hidden Markov models for clustering sequences appears to have first
Figure 1: Which sequences came from which hidden Markov model?
been mentioned in Juang and Rabiner (1985) and subsequently used in the context
of discovering subfamilies of protein sequences in Krogh et al. (1994). This present
paper contains two new contributions in this context: a cluster-based method for
initializing the model parameters and a novel method based on cross-validated likelihood for determining automatically how many clusters to fit to the data.
A natural probabilistic model for this problem is that of a finite mixture model:
    f_K(S) = Σ_{j=1}^{K} f_j(S|θ_j) p_j        (1)
where S denotes a sequence, p_j is the weight of the jth model, and f_j(S|θ_j) is
the density function for the sequence data S given the component model f_j with
parameters θ_j. Here we will assume that the f_j's are HMMs: thus, the θ_j's are the
transition matrices, observation density parameters, and initial state probabilities,
all for the jth component. f_j(S|θ_j) can be computed via the forward part of the
forward-backward procedure. More generally, the component models could be any
probabilistic model for S such as linear autoregressive models, graphical models,
non-linear networks with probabilistic semantics, and so forth .
It is important to note that the motivation for this problem comes from the goal
of building a descriptive model for the data, rather than prediction per se. For the
prediction problem there is a clearly defined metric for performance, namely average
prediction error on out-of-sample data (cf. Rabiner et al. (1989) in a speech context
with clusters of HMMs and Zeevi, Meir, and Adler (1997) in a general time-series
context). In contrast, for descriptive modeling it is not always clear what the
appropriate metric for evaluation is, particularly when K, the number of clusters,
is unknown . In this paper a density estimation viewpoint is taken and the likelihood
of out-of-sample data is used as the measure of the quality of a particular model.
2 An Algorithm for Clustering Sequences into K Clusters
Assume first that K, the number of clusters, is known. Our model is that of a
mixture of HMMs as in Equation 1. We can immediately observe that this mixture
can itself be viewed as a single "composite" HMM where the transition matrix A of
the model is block-diagonal, e.g., if the mixture model consists of two components
with transition matrices A_1 and A_2 we can represent the overall mixture model as
a single HMM (in effect, a hierarchical mixture) with transition matrix
    A = [ A_1   0
           0   A_2 ]        (2)
where the initial state probabilities are chosen appropriately to reflect the relative
weights of the mixture components (the p_k in Equation 1). Intuitively, a sequence is
generated from this model by initially randomly choosing either the "upper" matrix
Al (with probability PI) or the "lower" matrix with probability A2 (with probability
1 - PI) and then generating data according to the appropriate Ai . There is no
"crossover" in this mixture model: data are assumed to come from one component
or the other. Given this composite HMM a natural approach is to try to learn
the parameters of the model using standard HMM estimation techniques, i.e., some
form of initialization followed by Baum-Welch to maximize the likelihood . Note
that unlike predictive modelling (where likelihood is not necessarily an appropriate
metric to evaluate model quality), likelihood maximization is exactly what we want
to do here since we seek a generative (descriptive) model for the data. We will
assume throughout that the number of states per component is known a priori, i.e.,
that we are looking for K HMM components each of which has m states and m
is known. An obvious extension is to address the problem of learning K and m
simultaneously but this is not dealt with here.
2.1 Initialization using Clustering in "Log-Likelihood Space"
Since the EM algorithm is effectively hill-climbing the likelihood surface, the quality
of the final solution can depend critically on the initial conditions. Thus, using as
much prior information as possible about the problem to seed the initialization is
potentially worthwhile. This motivates the following scheme for initializing the A
matrix of the composite HMM:
1. Fit N m-state HMMs, one to each individual sequence S_i, 1 ≤ i ≤ N.
These HMMs can be initialized in a "default" manner: set the transition
matrices uniformly and set the means and covariances using the k-means
algorithm, where here k = m, not to be confused with K, the number of
HMM components. For discrete observation alphabets modify accordingly.
2. For each fitted model Mi, evaluate the log-likelihood of each of the N
sequences given model M_i, i.e., calculate L_ij = log L(S_j | M_i), 1 ≤ i, j ≤ N.
3. Use the log-likelihood distance matrix to cluster the sequences into K
groups (details of the clustering are discussed below).
4. Having pooled the sequences into K groups, fit K HMMs, one to each group,
using the default initialization described above. From the K HMMs we get
K sets of parameters: initialize the composite HMM in the obvious way,
i.e., the m x m "block-diagonal" component Aj of A (where A is mK x mK)
is set to the estimated transition matrix from the jth group and the means
and covariances of the jth set of states are set accordingly. Initialize the Pj
in Equation 1 to Nj / N where N j is the number of sequences which belong
to cluster j.
After this initialization step is complete, learning proceeds directly on the composite
HMM (with matrix A) in the usual Baum-Welch fashion using all of the sequences.
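The whole initialization can be sketched as follows (fit_hmm, log_likelihood and hier_cluster stand for the routines described in the text; turning likelihoods into dissimilarities by negation is our convention):

    def cluster_init(sequences, K, m, fit_hmm, log_likelihood, hier_cluster):
        N = len(sequences)
        models = [fit_hmm([s], m) for s in sequences]            # step 1
        L = [[log_likelihood(sequences[j], models[i])            # step 2
              for j in range(N)] for i in range(N)]
        # symmetrized dissimilarity between models i and j
        D = [[-0.5 * (L[i][j] + L[j][i]) for j in range(N)]
             for i in range(N)]
        labels = hier_cluster(D, K)                              # step 3
        groups = [[s for s, lab in zip(sequences, labels) if lab == k]
                  for k in range(K)]
        return [fit_hmm(g, m) for g in groups]                   # step 4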
The intuition behind this initialization procedure is as follows. The hypothesis is
that the data are being generated by K models. Thus, if we fit models to each
individual sequence, we will get noisier estimates of the model parameters (than
if we used all of the sequences from that cluster) but the parameters should be
clustered in some manner into K groups about their true values (assuming the
model is correct). Clustering directly in parameter space would be inappropriate
(how does one define distance?): however, the log-likelihoods are a natural way to
define pairwise distances.
Note that step 1 above requires the training of N sequences individually and step 2
requires the evaluation of N 2 distances. For large N this may be impractical. Suitable modifications which train only on a small random sample of the N sequences
and randomly sample the distance matrix could help reduce the computational burden, but this is not pursued here. A variety of possible clustering methods can be
used in step 3 above. The "symmetrized distance" L'_ij = (1/2)(L(S_i | M_j) + L(S_j | M_i))
can be shown to be an appropriate measure of dissimilarity between models M_i and
M_j (Juang and Rabiner 1985). For the results described in this paper, hierarchical
clustering was used to generate K clusters from the symmetrized distance matrix.
The "furthest-neighbor" merging heuristic was used to encourage compact clusters
and worked well empirically, although there is no particular reason to use only this
method.
We will refer to the above clustering-based initialization followed by Baum-Welch
training on the composite model as the "HMM-Clustering" algorithm in the rest of
the paper.
2.2 Experimental Results
Consider a deceptively simple "toy" problem. 1-dimensional feature data are generated from a 2-component HMM mixture (K = 2), each with 2 states. We have

$$A_1 = \begin{pmatrix} 0.6 & 0.4 \\ 0.4 & 0.6 \end{pmatrix}, \qquad A_2 = \begin{pmatrix} 0.4 & 0.6 \\ 0.6 & 0.4 \end{pmatrix}$$

and the observable feature data obey a Gaussian density in each state with σ_1 = σ_2 = 1 for each state in each component, and μ_1 = 0, μ_2 = 3 for the respective mean of each state of each component. 4 sample sequences are shown in
Figure 1. The top, and third from top, sequences are from the "slower" component
A_1 (more likely to stay in any state than to switch). In total the training data
contain 20 sample sequences from each component, each of length 200. The problem is
non-trivial both because the data have exactly the same marginal statistics if the
temporal sequence information is removed and because the Markov dynamics (as
governed by A_1 and A_2) are relatively similar for each component, making identification somewhat difficult.
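For concreteness, the toy data can be generated as follows; this is an illustrative sketch using only the parameters stated above (the uniform initial-state distribution is an assumption, as the text does not specify it).

```python
import numpy as np

rng = np.random.default_rng(0)
A1 = np.array([[0.6, 0.4], [0.4, 0.6]])   # the "slower" component
A2 = np.array([[0.4, 0.6], [0.6, 0.4]])
means = np.array([0.0, 3.0])              # per-state emission means, sigma = 1

def sample_hmm(A, T):
    states = np.zeros(T, dtype=int)
    states[0] = rng.integers(2)           # uniform initial state (assumed)
    for t in range(1, T):
        states[t] = rng.choice(2, p=A[states[t - 1]])
    return means[states] + rng.normal(size=T)

# 20 sequences of length 200 from each component, as in the text.
data = [sample_hmm(A, 200) for A in (A1, A2) for _ in range(20)]
```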
The HMM clustering algorithm was applied to these sequences. The symmetrized
likelihood distance matrix is shown as a grey-scale image in Figure 2. The axes
have been ordered so that the sequences from the same clusters are adjacent. The
difference in distances between the two clusters is apparent and the hierarchical
clustering algorithm (with K = 2) easily separates the two groups. This initial
clustering, followed by training separately the two clusters on the set of sequences
assigned to each cluster, yielded:
$$\hat{A}_1 = \begin{pmatrix} 0.580 & 0.420 \\ 0.402 & 0.598 \end{pmatrix}, \quad \hat{\mu}_1 = \begin{pmatrix} 2.892 \\ 0.040 \end{pmatrix}, \quad \hat{\sigma}_1 = \begin{pmatrix} 1.353 \\ 1.219 \end{pmatrix}$$

$$\hat{A}_2 = \begin{pmatrix} 0.392 & 0.608 \\ 0.611 & 0.389 \end{pmatrix}, \quad \hat{\mu}_2 = \begin{pmatrix} 2.911 \\ 0.138 \end{pmatrix}, \quad \hat{\sigma}_2 = \begin{pmatrix} 1.239 \\ 1.339 \end{pmatrix}$$
Subsequent training of the composite model on all of the sequences produced more
refined parameter estimates, although the basic cluster structure of the model remained the same (i.e., the initial clustering was robust).
modeLo."
to
mod,LO."
15
20
26
30
3S
40
modelO.e
Figure 2: Symmetrized log-likelihood distance matrix.
For comparative purposes two alternative initialization procedures were used to
initialize the training of the composite HMM. The "unstructured" method uniformly
initializes the A matrix without any knowledge of the fact that the off-block-diagonal
terms are zero (this is the "standard" way of fitting a HMM). The "block-uniform"
method uniformly initializes the K block-diagonal matrices within A and sets the
off-block-diagonal terms to zero. Random initialization gave poorer results overall
compared to uniform.
Table 1: Differences in log-likelihood for different initialization methods.

Initialization Method | Mean Log-Likelihood Value | Maximum Log-Likelihood Value | Standard Deviation of Log-Likelihoods
Unstructured          | 0.0                       | 7.6                          | 1.3
Block-Uniform         | 28.7                      | 44.8                         | 8.1
HMM-Clustering        | 50.4                      | 55.1                         | 0.9
The three alternatives were run 20 times on the data above, where for each run the
seeds for the k-means component of the initialization were changed. The maximum,
mean and standard deviations of log-likelihoods on test data are reported in Table
1 (the log-likelihoods were offset so that the mean unstructured log-likelihood is
zero). The unstructured approach is substantially inferior to the others on this
problem: this is not surprising since it is not given the block-diagonal structure
of the true model. The Block-Uniform method is closer in performance to HMM-Clustering but is still inferior. In particular, its log-likelihood is consistently lower
than that of the HMM-Clustering solution and has much greater variability across
different initial seeds. The same qualitative behavior was observed across a variety
of simulated data sets (results are not presented here due to lack of space).
3 Learning K, the Number of Sequence Components
3.1 Background
Above we have assumed that K, the number of clusters, is known. The problem
of learning the "best" value for K in a mixture model is a difficult one in practice
even for the simpler (non-dynamic) case of Gaussian mixtures. There has been considerable prior work on this problem. Penalized likelihood approaches are popular,
where the log-likelihood on the training data is penalized by the subtraction of a
complexity term. A more general approach is the full Bayesian solution where the
posterior probability of each value of K is calculated given the data, priors on the
mixture parameters, and priors on K itself. A potential difficulty here is the
computational complexity of integrating over the parameter space to get the posterior probabilities on K. Various analytic and sampling approximations are used
in practice. In theory, the full Bayesian approach is fully optimal and probably the
most useful. However, in practice the ideal Bayesian solution must be approximated
and it is not always obvious how the approximation affects the quality of the final
answer. Thus, there is room to explore alternative methods for determining K.
3.2 A Monte-Carlo Cross-Validation Approach
Imagine that we had a large test data set D_test which is not used in fitting any of the
models. Let L_K(D_test) be the log-likelihood where the model with K components is
fit to the training data D but the likelihood is evaluated on D_test. We can view this
likelihood as a function of the "parameter" K, keeping all other parameters and D
fixed. Intuitively, this "test likelihood" should be a much more useful estimator than
the training data likelihood for comparing mixture models with different numbers of
components. In fact, the test likelihood can be shown to be an unbiased estimator
of the Kullback-Leibler distance between the true (but unknown) density and the
model. Thus, maximizing out-of-sample likelihood over K is a reasonable model
selection strategy. In practice, one does not usually want to reserve a large fraction
of one's data for test purposes: thus, a cross-validated estimate of log-likelihood can
be used instead.
In Smyth (1996) it was found that for standard multivariate Gaussian mixture modeling, the standard v-fold cross-validation techniques (with, say, v = 10) performed
poorly in terms of selecting the correct model on simulated data. Instead, Monte-Carlo cross-validation (Shao, 1993) was found to be much more stable: the data are
partitioned into a fraction β for testing and 1 − β for training, and this procedure is
repeated M times where the partitions are randomly chosen on each run (i.e., need
not be disjoint). In choosing β one must trade off the variability of the performance
estimate on the test set with the variability in model fitting on the training set. In
general, as the total amount of data increases relative to the model complexity, the
optimal β becomes larger. For the mixture clustering problem β = 0.5 was found
empirically to work well (Smyth, 1996) and is used in the results reported here.
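A minimal sketch of the procedure follows, with fit(train_seqs, K) and loglik(model, seqs) again assumed stand-ins for fitting and scoring the K-component mixture. The exact normalization used to turn mean cross-validated log-likelihoods into p(K|D) is not spelled out in the text, so the Bayes-rule step below (uniform prior; exponentiate and normalize) is one plausible reading.

```python
import numpy as np

def mccv_loglik(seqs, K_values, fit, loglik, M=20, beta=0.5, seed=0):
    """Monte-Carlo cross-validated log-likelihood for each candidate K.

    M random train/test splits with a fraction beta held out; fit and
    loglik are assumed helpers, not functions from the paper.
    """
    rng = np.random.default_rng(seed)
    N = len(seqs)
    scores = {K: [] for K in K_values}
    for _ in range(M):
        perm = rng.permutation(N)
        n_test = int(beta * N)
        test, train = perm[:n_test], perm[n_test:]
        for K in K_values:
            model = fit([seqs[i] for i in train], K)
            scores[K].append(loglik(model, [seqs[i] for i in test]))
    return {K: np.mean(v) for K, v in scores.items()}

def posterior_over_K(mean_logliks):
    # Bayes rule with a uniform prior on K (one plausible normalization).
    Ks = sorted(mean_logliks)
    L = np.array([mean_logliks[K] for K in Ks])
    p = np.exp(L - L.max())
    return dict(zip(Ks, p / p.sum()))
```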
3.3 Experimental Results
The same data set as described earlier was used where now K is not known a priori.
The 40 sequences were randomly partitioned 20 times into training and test cross-validation sets. For each train/test partition the value of K was varied between 1
and 6, and for each value of K the HMM-Clustering algorithm was fit to the training
data, and the likelihood was evaluated on the test data. The mean cross-validated
likelihood was evaluated as the average over the 20 runs. Assuming the models
are equally likely a priori, one can generate an approximate posterior distribution
p(K|D) by Bayes rule: these posterior probabilities are shown in Figure 3. The
cross-validation procedure produces a clear peak at K = 2 which is the true model
size. In general, the cross-validation method has been tested on a variety of other
simulated sequence clustering data sets and typically converges as a function of the
number of training samples to the true value of K (from below). As the number of
[Plot: estimated posterior probabilities on K for the 2-cluster data.]
Figure 3: Posterior probability distribution on K as estimated by cross-validation.
data points grows, the posterior distribution on K narrows about the true value of
K. If the data were not generated by the assumed form of the model, the posterior
distribution on K will tend to be peaked about the model size which is closest
(in K-L distance) to the true model. Results in the context of Gaussian mixture
clustering(Smyth 1996) have shown that the Monte Carlo cross-validation technique
performs as well as the better Bayesian approximation methods and is more robust
then penalized likelihood methods such as BIC.
In conclusion, we have shown that model-based probabilistic clustering can be generalized from feature-space clustering to sequence clustering. Log-likelihood between
sequence models and sequences was found useful for detecting cluster structure and
cross-validated likelihood was shown to be able to detect the true number of clusters.
References
Baldi, P. and Y. Chauvin, 'Hierarchical hybrid modeling, HMM/NN architectures, and protein applications,' Neural Computation, 8(6), 1541-1565, 1996.
Krogh, A. et al., 'Hidden Markov models in computational biology: applications to protein modeling,' J. Mol. Biol., 235:1501-1531, 1994.
Juang, B. H., and L. R. Rabiner, 'A probabilistic distance measure for hidden Markov models,' AT&T Technical Journal, vol. 64, no. 2, February 1985.
Rabiner, L. R., C. H. Lee, B. H. Juang, and J. G. Wilpon, 'HMM clustering for connected word recognition,' Proc. Int. Conf. Acoust., Speech, Sig. Proc., IEEE Press, 405-408, 1989.
Shao, J., 'Linear model selection by cross-validation,' J. Am. Stat. Assoc., 88(422), 486-494, 1993.
Smyth, P., 'Clustering using Monte-Carlo cross validation,' in Proceedings of the Second International Conference on Knowledge Discovery and Data Mining, Menlo Park, CA: AAAI Press, pp. 126-133, 1996.
Zeevi, A. J., Meir, R., Adler, R., 'Time series prediction using mixtures of experts,' in this volume, 1997.
On a Modification to the Mean Field EM
Algorithm in Factorial Learning
A. P. Dunmur
D. M. Titterington
Department of Statistics
Maths Building
University of Glasgow
Glasgow G12 8QQ, UK
alan@stats.gla.ac.uk
mike@stats.gla.ac.uk
Abstract
A modification is described to the use of mean field approximations in the E step of EM algorithms for analysing data from latent
structure models, as described by Ghahramani (1995), among others. The modification involves second-order Taylor approximations
to expectations computed in the E step. The potential benefits of
the method are illustrated using very simple latent profile models.
1 Introduction
Ghahramani (1995) advocated the use of mean field methods as a means to avoid the
heavy computation involved in the E step of the EM algorithm used for estimating
parameters within a certain latent structure model, and Ghahramani & Jordan
(1995) used the same ideas in a more complex situation. Dunmur & Titterington
(1996a) identified Ghahramani's model as a so-called latent profile model, they
observed that Zhang (1992,1993) had used mean field methods for a similar purpose,
and they showed, in a simulation study based on very simple examples, that the
mean field version of the EM algorithm often performed very respectably. By this
it is meant that, when data were generated from the model under analysis, the
estimators of the underlying parameters were efficient, judging by empirical results,
especially in comparison with estimators obtained by employing the 'correct' EM
algorithm: the examples therefore had to be simple enough that the correct EM
algorithm is numerically feasible, although any success reported for the mean field
A. P. Dunmur and D. M. Titterington
432
version is, one hopes, an indication that the method will also be adequate in more
complex situations in which the correct EM algorithm is not implementable because
of computational complexity.
In spite of the above positive remarks, there were circumstances in which there was
a perceptible, if not dramatic, lack of efficiency in the simple (naive) mean field
estimators, and the objective of this contribution is to propose and investigate ways
of refining the method so as to improve performance without detracting from the
appealing, and frequently essential, simplicity of the approach. The procedure used
here is based on a second order correction to the naive mean field well known in
statistical physics and sometimes called the cavity or TAP method (Mezard, Parisi
& Virasoro , 1987). It has been applied recently in cluster analysis (Hofmann &
Buhmann, 1996). In Section 2 we introduce the structure of our model, Section 3
explains the refined mean field approach, Section 4 provides numerical results, and
Section 5 contains a statement of our conclusions.
2 The Model
The model under study is a latent profile model (Henry, 1983), which is a latent
structure model involving continuous observables {x_r : r = 1 ... p} and discrete
latent variables {y_i : i = 1 ... d}. The y_i are represented by indicator vectors such
that for each i there is a single j such that y_ij = 1 and y_ik = 0 for all k ≠ j. The
latent variables are connected to the observables by a set of weight matrices W_i
in such a way that the distribution of the observables given the latent variables
is a multivariate Gaussian with mean Σ_i W_i y_i and covariance matrix Γ. To ease
the notation, the covariance matrix is taken to be the identity matrix, although
extension is quite easy to the case where Γ is a diagonal matrix whose elements have
to be estimated (Dunmur & Titterington, 1996a). Also to simplify the notation,
the marginal distributions of the latent variables are taken to be uniform, so that
the totality of unknown parameters is made up of the set of weight matrices, to be
denoted by W = (W_1, W_2, ..., W_d).
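The generative model is straightforward to simulate. The following sketch is illustrative only, with uniform latent marginals and unit covariance as stated above; it draws samples x = Σ_i W_i y_i + noise.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_latent_profile(W_list, n_samples):
    """Draw x = sum_i W_i y_i + N(0, I) with uniform one-hot latents y_i.

    W_list[i] is the p x (number of categories of y_i) weight matrix.
    """
    p = W_list[0].shape[0]
    X = np.zeros((n_samples, p))
    for n in range(n_samples):
        x = rng.normal(size=p)              # unit-covariance noise
        for W in W_list:
            j = rng.integers(W.shape[1])    # uniform marginal on y_i
            x += W[:, j]
        X[n] = x
    return X
```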
3 Methodology
In order to learn about the model we have available a dataset D = {x^μ : μ = 1 ... N}
of N independent, p-dimensional realizations of the model, and we adopt the Maximum Likelihood approach to the estimation of the weight matrices. As is typical
of latent structure models, there is no explicit Maximum Likelihood estimate of the
parameters of the model, but there is a version of the EM algorithm (Dempster,
Laird & Rubin, 1977) that can be used to obtain the estimates numerically. The
EM algorithm consists of a sequence of double steps, E steps and M steps.
At stage m the E step, based on a current estimate W^{m−1} of the parameters,
calculates

Q(W) = ⟨L_c(W)⟩,

where the expectation ⟨·⟩ is over the latent variables y, and is conditional on D and
W^{m−1}, and L_c denotes the crucial part of the complete-data log-likelihood, given
by

$$L_c = -\tfrac{1}{2}\sum_{\mu=1}^{N}\Big(x^\mu - \sum_i W_i y_i^\mu\Big)^T\Big(x^\mu - \sum_i W_i y_i^\mu\Big).$$

The M step then maximizes Q with respect to W and gives the new parameter
estimate W^m.

For the simple model considered here, the M step gives

$$W = \Big(\sum_\mu x^\mu \langle y^\mu\rangle^T\Big)\Big(\sum_\mu \langle y^\mu y^{\mu T}\rangle\Big)^{-1},$$

where W = (W_1, W_2, ..., W_d), y^T = (y_1^T, y_2^T, ..., y_d^T) and, for brevity, explicit
mention of the conditioned quantities in the expectations ⟨·⟩ has been omitted.
The above formula differs somewhat from that given by Ghahramani (1995).
Hence we need to evaluate the sets of expectations ⟨y_i⟩ and ⟨y_i y_j^T⟩ for each example in the dataset. (The superscript μ is omitted, for clarity.) As pointed out
in Ghahramani (1995), it is possible to evaluate these expectations directly by
summing over all possible latent states. This has the disadvantage of becoming
exponentially more expensive as the size of the latent space increases.
The mean field approximation is well known in physics and can be used to reduce
the computational complexity. At its simplest level, the mean field approximation
replaces the joint expectations of the latent variables by the products of the individual expectations; this can be interpreted as bounding the likelihood from below
(Saul, Jaakkola, Jordan, 1996). Here we consider a second order approximation, as
outlined below.
Since the latent variables are categorical, it is simple to sum over the state space
of a single latent variable. Hence, following Parisi (1988), the expectations of the
latent variables are given by

$$\langle y_{ij} \rangle = \langle f_j(\epsilon_i) \rangle, \qquad (1)$$

where f_j(ε_i) is the j-th component of the softmax function, exp(ε_ij)/Σ_k exp(ε_ik),
and the expectation ⟨·⟩ is taken over the remaining latent variables. The vector
ε_i = {ε_ij} contains the log probabilities (up to a constant) associated with each
category of the latent variable for each example in the data set. For the simple
model under study ε_ij is given by
$$\epsilon_{ij} = \Big\{ W_i^T \Big( x - \sum_{k \neq i} W_k y_k \Big) \Big\}_j - \tfrac{1}{2}\big( W_i^T W_i \big)_{jj}. \qquad (2)$$
The expectation in (1) can be expanded in a Taylor series about the average ⟨ε_i⟩,
giving

(3)

where Δε_ij = ε_ij − ⟨ε_ij⟩. The naive mean field approximation simply ignores all
corrections. We can postulate that the second order fluctuations are taken care of
by a so-called cavity field (see, for instance, Mezard, Parisi & Virasoro, 1987, p.16),
that is,
(4)
where the vector of fields h_i = {h_ik} has been introduced to take care of the
correction terms. This equation may also be expanded in a Taylor series to give
Then, equating coefficients with (3) and after a little algebra, we get
(5)
where
djk
is the Kronecker delta and, for the model under consideration,
(tl?ijtl?ik)
=
(wr L,
mn#~
Wm ((
YmY~) - (Ym) (y~?) W!Wi)
(6)
jk
The naive mean field assumption may be used in (6), giving
(7)
Within the E step, for each realization in the data set, the mean fields (4), along
with the cavity fields (5), can be evaluated by an iterative procedure which gives the
individual expectations of the latent variables. The naive mean field approximation
(7) is then used to evaluate the joint expectations ⟨y_i y_j^T⟩. In the next section we
report, for a simple model, the effect on parameter estimation of the use of cavity
fields.
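As an illustration, the naive mean-field fixed point implied by equations (1)-(2) can be iterated as below. This is a sketch of the zeroth-order scheme only, with the cavity-field correction of equations (4)-(5) omitted, and it is not the authors' implementation.

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def mean_field_estep(x, W_list, n_iter=50):
    """Naive mean-field estimates <y_i> for a single observation x.

    Iterates mu_i = softmax(eps_i(mu)) with
    eps_i = W_i^T (x - sum_{k != i} W_k mu_k) - 0.5 diag(W_i^T W_i),
    i.e. equations (1)-(2) with fluctuations (and cavity fields) ignored.
    """
    mus = [np.full(W.shape[1], 1.0 / W.shape[1]) for W in W_list]
    for _ in range(n_iter):
        for i, Wi in enumerate(W_list):
            residual = x - sum(W @ mu for k, (W, mu) in
                               enumerate(zip(W_list, mus)) if k != i)
            eps = Wi.T @ residual - 0.5 * np.diag(Wi.T @ Wi)
            mus[i] = softmax(eps)
    return mus
```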
4 Results
Simulations were carried out using latent variable models (i) with 5 observables
and 4 binary hidden variables and (ii) with 5 observables and 3 3-state hidden
variables. The weight matrices were generated from zero mean Gaussian variables
with standard deviation w. In order to make the M step trivial it was assumed that
the matrices were known up to the scale parameter w, and this is the parameter
estimated by the algorithm. A data set was generated using the known parameter
and this was then estimated using straight EM, naive mean field (MF) and mean
field with cavity fields (MFcay ).
Although datasets of sizes 100 and 500 were generated, only the results for N = 500
are presented here since both scenarios showed the same qualitative behaviour. Also,
the estimation algorithms were started from different initial positions; this too had
no effect on the final estimates of the parameters. A representative selection of
results follows; Table 1 shows the results for both the 5 x 4 x 2 model and the
5 x 3 x 3 model.
The results show that, when the true value, w_tr, of the parameter was small, there
is little difference among the three methods. This is due to the fact that at these
Table 1: Results for N = 500, averaged over 50 simulations, for 5 observables with 4 binary latent variables (5x4x2) and for 5 observables with 3 3-state latent variables (5x3x3). The figures in brackets give the standard deviation of the estimates, w_est, in units according to the final decimal place. RMS is the root mean squared error of the estimate compared to the true value.

Method  | w_tr | w_init | 5x4x2: w_est | RMS   | 5x3x3: w_est | RMS
EM      | 0.1  | 0.05   | 0.09(1)      | 0.014 | 0.10(2)      | 0.024
MF      | 0.1  | 0.05   | 0.09(1)      | 0.014 | 0.10(2)      | 0.023
MF_cav  | 0.1  | 0.05   | 0.09(1)      | 0.014 | 0.10(2)      | 0.023
EM      | 0.5  | 0.1    | 0.49(2)      | 0.016 | 0.50(2)      | 0.019
MF      | 0.5  | 0.1    | 0.47(2)      | 0.029 | 0.46(2)      | 0.038
MF_cav  | 0.5  | 0.1    | 0.48(2)      | 0.026 | 0.47(2)      | 0.032
EM      | 1.0  | 0.1    | 0.99(2)      | 0.016 | 1.00(2)      | 0.018
MF      | 1.0  | 0.1    | 0.96(2)      | 0.040 | 0.98(2)      | 0.032
MF_cav  | 1.0  | 0.1    | 0.99(2)      | 0.018 | 1.00(2)      | 0.018
EM      | 2.0  | 0.2    | 1.99(1)      | 0.014 | 1.99(1)      | 0.015
MF      | 2.0  | 0.2    | 1.98(2)      | 0.021 | 1.98(2)      | 0.023
MF_cav  | 2.0  | 0.2    | 1.97(2)      | 0.027 | 1.97(2)      | 0.031
EM      | 5.0  | 0.1    | 4.99(1)      | 0.013 | 5.00(1)      | 0.013
MF      | 5.0  | 0.1    | 4.99(1)      | 0.016 | 4.96(2)      | 0.047
MF_cav  | 5.0  | 0.1    | 4.97(2)      | 0.032 | 4.88(3)      | 0.114
small values there is little separation among the mixtures that are used to generate
the data and hence the methods are all equally good (or bad) at estimating the
parameter. As the true parameter increases and becomes close to one, the cavity
field method performs significantly better than naive mean field; in fact it performs
as well as EM for w_tr = 1.
For values of w_tr greater than one, the cavity field method performs less well than
naive mean field. This suggests that the Taylor expansions (3) and (5) no longer
provide reliable approximations. Since
(8)
it is easy to see that if the elements in W_i are much less than one then the corrections to the Taylor expansion are small and hence the cavity fields are small,
so the approximations hold. If W is much larger than one, then the mean field
estimates become closer to zero and one since the energies ε_ij (equation 2) become
more extreme. Hence if the mean fields correctly estimate the latent variables the
corrections are indeed small, but if the mean fields incorrectly estimate the latent
variable the error term is substantial, leading to a reduction in performance.
Another simulation, similar to that presented in Ghahramani (1995), was also studied to compare the modification to both the 'correct' EM and the naive mean field.
The model has two latent variables that correspond to either horizontal or vertical
lines in one of four positions. These are combined and zero mean Gaussian noise
added to produce a 4 x 4 example image. A data set is created from many of these
examples with the latent variables chosen at random. From the data set the weight
matrices connecting the observables to the latent variables are estimated and compared to the true weights which consist of zeros and ones. Typical results for a
sample size of 160 and Gaussian noise of variance 0.2 are presented in Figure 1.
The number of iterations needed to converge were similar for all three methods.
EM
0.158
MF
0.154
MFcav
0.153
liD
Figure 1: Estimated weights for a sample size of N = 160 and noise variance 0.2
added. The three rows correspond to EM, naive MF and MF cay respectively. The
number on the left hand end of the row is the mean squared error of the estimated
weights compared with the true weights. The first four images are the estimates of
the first latent vector and the remaining four images are the estimates of second
latent vector.
As can be seen from Figure 1 there is very little difference between the estimates of
the weights and the mean squared errors are all very close. The mean field method
converged in approximately four iterations which means that for this simple model
the MF E step is taking approximately 32 steps as compared to 16 steps for the
straight EM. This is due to the simplicity of the latent structure for this model. For
a more complicated model the MF algorithm should take fewer iterations. Again
the results are encouraging for the MF method, but they do not show any obvious
benefit from using the cavity field correction terms.
5 Conclusion
The cavity field method can be applied successfully to improve the performance
of naive mean field estimates. However, care must be taken when the corrections
become large and actually degrade the performance. Predicting the failure modes of
the algorithm may become harder for larger (more realistic) models. The message
seems to be that where the mean field does well the cavity fields will improve the
situation, but where the mean field performs less well the cavity fields can degrade
performance. This suggests that the cavity fields could be used as a check on
the mean field method. Where the cavity fields are small we can be reasonably
confident that the mean field is producing sensible answers. However where the
cavity fields become large it is likely that the mean field is no longer producing
accurate estimates.
Further work would consider larger simulations using more realistic models. It
might no longer be feasible to compare these simulations with the 'correct' EM
algorithm as the size of the model increases, though other techniques such as Gibbs
sampling could be used instead. It would also be interesting to look at the next
level of approximation where, instead of approximating the joint expectations by
the product of the individual expectations in equation (6), the joint expectations
are evaluated by summing over the joint state space (c.f. equation (1)) and possibly
evaluating the corresponding cavity fields (Dunmur & Titterington, 1996b). This
would perhaps improve the quality of the approximation without introducing the
exponential complexity associated with the full E step.
Acknowledgements
This research was supported by a grant from the UK Engineering and Physical
Sciences Research Council.
References
DEMPSTER, A. P., LAIRD, N. M. & RUBIN, D. B. (1977). Maximum likelihood
estimation from incomplete data via the EM algorithm (with discussion). J. R.
Statist. Soc. B 39, 1-38.
DUNMUR, A. P. & TITTERINGTON, D. M. (1996a). Parameter estimation in latent
structure models. Tech. Report 96-2, Dept. Statist., Univ. Glasgow.
DUNMUR, A. P. & TITTERINGTON, D. M. (1996b). Higher order mean field approximations. In preparation.
GHAHRAMANI, Z. (1995). Factorial learning and the EM algorithm. In Advances in
Neural Information Processing Systems 7, Eds. G. Tesauro, D. S. Touretzky &
T. K. Leen. Cambridge MA: MIT Press.
GHAHRAMANI, Z. & JORDAN, M. I. (1995). Factorial hidden Markov models.
Computational Cognitive Science Technical Report 9502, MIT.
HENRY, N. W. (1983). Latent structure analysis. In Encyclopedia of Statistical
Sciences, Volume 4, Eds. S. Kotz, N. L. Johnson & C. B. Read, pp.497-504. New
York: Wiley.
HOFMANN, T. & BUHMANN, J. M. (1996) Pairwise Data Clustering by Deterministic Annealing. Tech. Rep. IAI-TR-95-7, Institut für Informatik III, Universität Bonn.
MEZARD, M., PARISI, G. & VIRASORO, M. A. (1987) Spin Glass Theory and
Beyond. Lecture Notes in Physics, 9. Singapore: World Scientific.
PARISI, G. (1988). Statistical Field Theory. Redwood City CA: Addison-Wesley.
SAUL, L. K., JAAKKOLA, T. & JORDAN, M. I. (1996) Mean Field Theory for
Sigmoid Belief Networks. J. Artificial Intelligence Research 4,61-76.
ZHANG, J. (1992). The Mean Field Theory in EM procedures for Markov random
fields. I.E.E.E. Trans. Signal Processing 40, 2570-83.
ZHANG, J. (1993). The Mean Field Theory in EM procedures for blind Markov
random field image restoration. I.E.E.E. Trans. Image Processing 2, 27-40.
MLP can provably generalise much better
than VC-bounds indicate.
A. Kowalczyk and H. Ferra
Telstra Research Laboratories
770 Blackburn Road, Clayton, Vic. 3168, Australia
({ a.kowalczyk, h.ferra}@trl.oz.au)
Abstract
Results of a study of the worst case learning curves for a particular class of probability distribution on input space to MLP with
hard threshold hidden units are presented. It is shown in particular, that in the thermodynamic limit for scaling by the number
of connections to the first hidden layer, although the true learning
curve behaves as ≈ α⁻¹ for α ≥ 1, its VC-dimension based bound
is trivial (= 1) and its VC-entropy bound is trivial for α ≤ 6.2. It
is also shown that bounds following the true learning curve can be
derived from a formalism based on the density of error patterns.
1 Introduction
The VC-formalism and its extensions link the generalisation capabilities of a binary
valued neural network with its counting function l , e.g. via upper bounds implied by
VC-dimension or VC-entropy on this function [17, 18]. For linear perceptrons the
counting function is constant for almost every selection of a fixed number of input
samples [2], and essentially equal to its upper bound determined by VC-dimension
and Sauer's Lemma. However, in the case for multilayer perceptrons (MLP) the
counting function depends essentially on the selected input samples. For instance,
it has been shown recently that for MLP with sigmoidal units although the largest
number of input samples which can be shattered, i.e. VC-dimension, equals O(w²)
[6], there is always a non-zero probability of finding a (2w + 2)-element input sample
which cannot be shattered, where w is the number of weights in the network [16].
In the case of MLP using Heaviside rather than sigmoidal activations (McCulloch-Pitts neurons), a similar claim can be made: VC-dimension is O(w_1 log_2 h_1) [13, 15],
¹Known also as the partition function in computational learning theory.
where w_1 is the number of weights to the first hidden layer of h_1 units, but there is
a non-zero probability of finding a sample of size w_1 + 2 which cannot be shattered
[7, 8]. The results on these "hard to shatter samples" for the two MLP types
differ significantly in terms of techniques used for derivation. For the sigmoidal
case the result is "existential" (based on recent advances in "model theory") while
in the Heaviside case the proofs are constructive, defining a class of probability
distributions from which "hard to shatter" samples can be drawn randomly; the
results in this case are also more explicit in that a form for the counting function
may be given [7, 8].
Can the existence of such hard to shatter samples be essential for generalisation
capabilities of MLP? Can they be an essential factor for improvement of theoretical
models of generalisation? In this paper we show that at least for the McCulloch-Pitts case with specific (continuous) probability distributions on the input space
the answer is "yes". We estimate "directly" the real learning curve in this case and
show that its bounds based on VC-dimension or VC-entropy are loose at low learning
sample regimes (for training samples having fewer than 12 × w_1 examples) even for
the linear perceptron. We also show that a modification to the VC-formalism given
in [9, 10] provides a significantly better bound. This latter part is a more rigorous
and formal extension and re-interpretation of some results in [11, 12]. All the results
are presented in the thermodynamic limit, i.e. for MLP with w_1 → ∞ and training
sample size increasing proportionally, which simplifies their mathematical form.
2 Overview of the formalism
On a sample space X we consider a class H of binary functions h : X → {0, 1}
which we shall call a hypothesis space. Further we assume that there are given a
probability distribution μ on X and a target concept t : X → {0, 1}. The quadruple
L = (X, μ, H, t) will be called a learning system.

In the usual way, with each hypothesis h ∈ H we associate the generalization error
ε_h ≝ E_X[|t(x) − h(x)|] and the training error ε_{h,x̄} ≝ (1/m) Σ_{i=1}^m |t(x_i) − h(x_i)| for
any training m-sample x̄ = (x_1, ..., x_m) ∈ X^m.
Given a learning threshold 0 ≤ λ ≤ 1, let us introduce an auxiliary random variable
ε_λ^max(x̄) ≝ max{ε_h ; h ∈ H & ε_{h,x̄} ≤ λ} for x̄ ∈ X^m, giving the worst generalization error of all hypotheses with training error ≤ λ on the m-sample x̄ ∈ X^m.²

The basic objects of interest in this paper are the learning curves³ defined as

$$\epsilon_\lambda^{wc}(m) \overset{def}{=} E_{X^m}\big[ \epsilon_\lambda^{max}(\bar{x}) \big].$$
2.1 Thermodynamic limit
Now we introduce the thermodynamic limit of the learning curve. The underlying idea of such asymptotic analysis is to capture the essential features of learning
²In this paper max(S), where S ⊂ R, denotes the maximal element in the closure of S, or ∞ if no such element exists. Similarly, we understand min(S).
³Note that our learning curve is determined by the worst generalisation error of acceptable hypotheses and in this respect differs from "average generalisation error" learning curves considered elsewhere, e.g. [3, 5].
systems of very large size. Mathematically it turns out that in the thermodynamic
limit the functional forms of learning curves simplify significantly and analytic characterizations of these are possible.
We are given a sequence of learning systems, or shortly, L_N = (X_N, μ_N, H_N, t_N),
N = 1, 2, ..., and a scaling N ↦ τ_N ∈ R⁺ with the property τ_N → ∞; the scaling
can be thought of as a measure of the size (complexity) of a learning system, e.g.
VC-dimension of H_N. The thermodynamic limit of scaled learning curves is defined
for α > 0 as follows⁴:

$$\epsilon_{\lambda\infty}^{wc}(\alpha) \overset{def}{=} \limsup_{N \to \infty} \epsilon_{\lambda,N}^{wc}\big( \lfloor \alpha \tau_N \rfloor \big). \qquad (1)$$
Here, and below, the additional subscript N refers to the N-th learning system.
2.2 Error pattern density formalism
This subsection briefly presents a thermodynamic version of a modified VC formalism discussed previously in [9]; more details and proofs can be found in [10]. The
main innovation of this approach comes from splitting error patterns into error shells
and using estimates on the size of these error shells rather than the total number
of error patterns. We shall see on examples discussed in the following section that
this improves results significantly.
The space {0,1}^m of all binary m-vectors naturally splits into m + 1 error pattern
shells E_i^m, i = 0, 1, ..., m, with the i-th shell composed of all vectors with exactly i
entries equal to 1. For each h ∈ H and x̄ = (x_1, ..., x_m) ∈ X^m, let v_h(x̄) ∈ {0,1}^m
denote a vector (error pattern) having 1 in the j-th position if and only if h(x_j) ≠
t(x_j). As the i-th error shell has $\binom{m}{i}$ elements, the average error pattern density
falling into this error shell is

$$d_i^m \overset{def}{=} \binom{m}{i}^{-1} E_{X^m}\Big[ \#\big( \{ v_h(\bar{x})\, ;\, h \in H \} \cap E_i^m \big) \Big] \qquad (i = 0, 1, \ldots, m), \qquad (2)$$

where # denotes the cardinality of a set⁵.
Theorem 1 Given a sequence of learning systems L_N = (X_N, μ_N, H_N, t_N), a scaling τ_N and a function φ : R⁺ × (0, 1) → R⁺ such that

$$\ln\big( d_{i,N}^m \big) \le -\tau_N\, \varphi\!\Big( \frac{m}{\tau_N},\, \frac{i}{m} \Big) + O(\tau_N), \qquad (3)$$

for all m, N = 1, 2, ..., 0 ≤ i ≤ m. Then

$$\epsilon_{\lambda\infty}^{wc}(\alpha) \le \epsilon_{\lambda,\beta}(\alpha) \qquad (4)$$
⁴We recall that ⌊x⌋ denotes the largest integer ≤ x, and lim sup_{N→∞} x_N is defined as
lim_{N→∞} of the monotonic sequence N ↦ max{x_1, x_2, ..., x_N}. Note that in contrast to
the ordinary limit, lim sup always exists.
5Note the difference to the concept of error shells used in [4] which are partitions of the
finite hypothesis space H according to the generalisation error values. Both formalisms
are related though, and the central result in [4], Theorem 4, can be derived from our
Theorem 1 below.
for any 0 ≤ λ ≤ 1 and α, β > 0, where

$$\epsilon_{\lambda,\beta}(\alpha) \overset{def}{=} \max\Big\{ \epsilon \in (0,1)\; ;\; \exists_{\,0 \le y \le \lambda,\; \epsilon \le x \le 1}\;\; \alpha\big( \mathcal{H}(y) + \beta \mathcal{H}(x) \big) - \varphi\Big( \alpha + \alpha\beta,\, \frac{y + \beta x}{1 + \beta} \Big) \ge 0 \Big\}$$

and H(y) ≝ −y ln y − (1 − y) ln(1 − y) denotes the entropy function.
3 Main results: applications of the formalism
3.1 VC-bounds
We consider a learning sequence L_N = (X_N, μ_N, H_N, t_N), t_N ∈ H_N (realisable
case) and the scaling of this sequence by VC-dimension [17], i.e. we assume τ_N =
d_VC(H_N) → ∞. The following bound for the N-th learning system can be derived
for λ = 0 (consistent learning case) [1, 17]:

$$\epsilon_{0,N}^{wc}(m) \le \int_0^1 \min\Big( 1,\; 2\Big( \frac{2em}{d_{VC}(H_N)} \Big)^{d_{VC}(H_N)} 2^{-mt/2} \Big)\, dt. \qquad (5)$$

In the thermodynamic limit, i.e. as N → ∞, we get for any α > 1/e

$$\epsilon_{0,\infty}^{wc}(\alpha) \le \min\Big( 1,\; \frac{2}{\alpha}\log_2(2e\alpha) \Big). \qquad (6)$$
Note that this bound is independent of the probability distributions μ_N.
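As a quick numerical check of (6) as reconstructed here, the bound only drops below 1 once α exceeds roughly 12; a short sketch:

```python
import numpy as np

def vc_bound(alpha):
    # Thermodynamic VC-dimension bound (6), as reconstructed above.
    return np.minimum(1.0, (2.0 / alpha) * np.log2(2.0 * np.e * alpha))

alphas = np.linspace(1.0, 30.0, 2901)           # step 0.01
nontrivial = alphas[vc_bound(alphas) < 1.0]
print(nontrivial[0])  # ~ 12.2: the bound is trivial (= 1) below this
```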
3.2 Piecewise constant functions
Let PC(d) denote the class of piecewise constant binary functions on the unit
segment [0,1) with up to d discontinuities and with their values defined as
1 at all these discontinuity points. We consider here the learning sequence L_N =
([0,1), μ_N, PC(d_N), t_N), where μ_N is any continuous probability distribution on
[0,1), d_N is a monotonic sequence of positive integers diverging to ∞, and the targets
t_N ∈ PC(d_{t_N}) are such that the limit δ_t ≝ lim_{N→∞} d_{t_N}/d_N exists. (Without loss of
generality we can assume that all μ_N are the uniform distribution on [0,1).)
For this learning sequence the following can be established.
Claim 1. The following function, defined for α > 1 and 0 ≤ x ≤ 1 as

$$\varphi(\alpha, x) \overset{def}{=} -\alpha(1-x)\,\mathcal{H}\Big( \frac{1}{2\alpha(1-x)} \Big) - \alpha x\,\mathcal{H}\Big( \frac{1}{2\alpha x} \Big) + \alpha\,\mathcal{H}(x) \qquad \text{for } 2\alpha x(1-x) > 1,$$

and as 0 otherwise, satisfies assumption (3) with respect to the scaling τ_N ≝ d_N.

Claim 2. The following two-sided bound on the learning curve holds:

(7)
We outline the main steps of proof of these two claims now.
For Claim 1 we start with a combinatorial argument establishing that in the particular case of constant target
tl,"!lN
"
={
-1
( ':1'-1)
,-1
1
"'~N
/2
L...J)=O
(m-:-i-l) (i~l)
)-1
)
for d + dt < min(2i, 2(m - i)),
otherwise.
Next we observe that the above sum can be evaluated asymptotically, which easily
gives Claim 1 for the constant target (δ_t = 0). Now we observe that this
particular case gives an upper bound for the general case (of non-constant target)
if we use the "effective" number of discontinuities d_N + d_{t_N} instead of d_N.
For Claim 2 we start with an estimate [12, 11] derived from the Mauldon result [14]
for the constant target t_N = const, m ≥ d_N. This implies immediately the expression

$$\epsilon_{0,\infty}^{wc}(\alpha) = \frac{1}{2\alpha}\big( 1 + \ln(2\alpha) \big) \qquad (8)$$
for the constant target, which extends to the estimate (7) with a straightforward
lower and upper bound on the "effective" number of discontinuities in the case of a
non-constant target.
3.3 Link to multilayer perceptron
Let MLP^n(w_1) denote the class of functions from R^n to {0,1} which can be implemented by a multilayer perceptron (feedforward neural network) with ≥ 1 hidden
layers, with w_1 connections to the first hidden layer and the first hidden
layer composed entirely of fully connected, linear threshold logic units (i.e. units
able to implement any mapping of the form (x_1, ..., x_n) ↦ θ(a_0 + Σ_{i=1}^n a_i x_i) for
a_i ∈ R). It can be shown from the properties of the Vandermonde determinant (c.f.
[7, 8]) that if l : [0,1) → R^n is a mapping with coordinates composed of linearly
independent polynomials (generic situation) of degree ≤ n, then

(9)

This implies immediately that all results for learning the class of PC functions in
Section 3.2 are applicable (with obvious modifications) to this class of multilayer
perceptrons with probability distribution concentrated on the 1-dimensional curves
of the form l([0,1)) with l as above.
However, we can go a step further. We can extend such a distribution to a continuous distribution on R^n with support "sufficiently close" to the curve l([0,1)),
[Plot: generalisation error (ε) vs. scaled training sample size (α), α from 0 to 20; curves labelled: VC Entropy; EPD, δ_t = 0.2; EPD, δ_t = 0.0; TC+, δ_t = 0.2; TCO, δ_t = 0.0; TC−, δ_t = 0.2.]
Figure 1: Plots of different estimates for the thermodynamic limit of learning curves for
the sequence of multilayer perceptrons as in Claim 3 for consistent learning (λ = 0).
Estimates on the true learning curve from (7) are for δ_t = 0 ('TCO') and δ_t = 0.2 ('TC+'
and 'TC−' for the upper and lower bound, respectively). Two upper bounds of the
form (4) from the modified VC-formalism for φ as in Claim 1 and β = 1 are plotted
for δ_t = 0.0 and δ_t = 0.2 (marked EPD). For comparison, we plot also the bound
(10) based on the VC-entropy; VC bound (5), being trivial for this scaling (= 1, c.f.
Corollary 2), is not shown.
with changes to the error pattern densities d_{i,N}^m, the learning curves, etc., as small
as desired. This observation implies the following result:
Claim 3 For any sequence of multilayer perceptrons MLP^{n_N}(w_{1N}), w_{1N} →
∞, there exists a sequence of continuous probability distributions μ_N on
R^{n_N} with properties as follows. For any sequence of targets t_N ∈
MLP^{n_N}(w_{1t_N}), both Claim 1 and Claim 2 of Section 3.2 hold for the learning sequence (R^{n_N}, μ_N, MLP^{n_N}(w_{1N}), t_N) with scaling τ_N ≝ w_{1N} and δ_t =
lim_{N→∞} w_{1t_N}/w_{1N}. In particular, bound (4) on the learning curve holds for φ
as in Claim 1.
Corollary 2 If additionally the number of units in the first hidden layer h_{1N} → ∞,
then the thermodynamic limit of VC-bound (5) with respect to the scaling τ_N =
w_{1N} is trivial, i.e. = 1 for all α > 0.
Proof. The bound (5) is trivial for m ≤ 12 d_N, where d_N ≝ d_VC(MLP^{n_N}(w_{1N})).
As d_N = O(w_{1N} log_2(h_{1N})) [13, 15] for any continuous probability on the input
space, this bound is trivial for any α = m/w_{1N} < 12 d_N/w_{1N} → ∞ as N → ∞. □
There is a possibility that VC-dimension based bounds are applicable but fail to capture the true behavior because of their independence from the distribution. One option to remedy the situation is to try a distribution-specific estimate such as VC-entropy (i.e. the expectation of the logarithm of the counting function Π_N(x_1, ..., x_m),
which is the number of dichotomies realised by the perceptron for the m-tuple
of input points [18]). However, in our case, Π_N(x_1, ..., x_m) has the lower bound

$$2 \sum_{i=0}^{\min(w_{1N}/2,\; m)} \binom{m-1}{i}$$

for x_1, ..., x_m in general position, which is virtually the expression from Sauer's lemma with VC-dimension replaced by w_{1N}/2. Thus using
VC-entropy instead of VC-dimension (and Sauer's Lemma) we cannot hope for
a better result than bounds of the form (5) with w_{1N}/2 replacing VC-dimension,
resulting in the bound

$$\epsilon_{0,\infty}^{wc}(\alpha) \le \min\Big( 1,\; \frac{1}{\alpha}\log_2(4e\alpha) \Big) \qquad \big(\alpha > 1/(2e)\big) \qquad (10)$$

in the thermodynamic limit with respect to the scaling τ_N = w_{1N}. (Note that
more "optimistic" VC-entropy based bounds can be obtained if a prior distribution
on hypothesis space is given and taken into account [3].)

The plots of learning curves are shown in Figure 1.
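Taking the reconstructed forms of (8) and (10) above at face value, a few lines suffice to tabulate the exact curve against the VC-entropy bound and to see the gap the abstract refers to: the bound stays pinned at 1 until α ≈ 6, while the true curve has already decayed well below it.

```python
import numpy as np

alpha = np.linspace(0.5, 20.0, 400)
true_curve = (1.0 + np.log(2.0 * alpha)) / (2.0 * alpha)              # (8)
entropy_bound = np.minimum(1.0, np.log2(4.0 * np.e * alpha) / alpha)  # (10)

for a, t, b in zip(alpha[::40], true_curve[::40], entropy_bound[::40]):
    print(f"alpha={a:5.2f}  true={t:.3f}  entropy bound={b:.3f}")
```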
Acknowledgement. The permission of Director of Telstra Research Laboratories
to publish this paper is gratefully acknowledged.
References
[1] A. Blumer, A. Ehrenfeucht, D. Haussler, and M.K. Warmuth. Learnability and the Vapnik-Chervonenkis dimension. Journal of the ACM, 36:929-965, Oct. 1989.
[2] T.M. Cover. Geometrical and statistical properties of systems of linear inequalities with applications in pattern recognition. IEEE Trans. Elec. Comp., EC-14:326-334, 1965.
[3] D. Haussler, M. Kearns, and R. Schapire. Bounds on the sample complexity of Bayesian learning using information theory and the VC dimension. Machine Learning, 14:83-113, 1994.
[4] D. Haussler, M. Kearns, H.S. Seung, and N. Tishby. Rigorous learning curve bounds from statistical mechanics. In Proc. COLT'94, pages 76-87, 1994.
[5] S.B. Holden and M. Niranjan. On the practical applicability of VC dimension bounds. Neural Computation, 7:1265-1288, 1995.
[6] P. Koiran and E.D. Sontag. Neural networks with quadratic VC-dimension. In Proc. NIPS 8, pages 197-203, The MIT Press, Cambridge, MA, 1996.
[7] A. Kowalczyk. Counting function theorem for multi-layer networks. In Proc. NIPS 6, pages 375-382. Morgan Kaufmann Publishers, Inc., 1994.
[8] A. Kowalczyk. Estimates of storage capacity of multi-layer perceptron with threshold logic hidden units. Neural Networks, to appear.
[9] A. Kowalczyk and H. Ferra. Generalisation in feedforward networks. In Proc. NIPS 6, pages 215-222, The MIT Press, Cambridge, MA, 1994.
[10] A. Kowalczyk. An asymptotic version of EPD-bounds on generalisation in learning systems. Preprint, 1996.
[11] A. Kowalczyk, J. Szymanski, and R.C. Williamson. Learning curves from a modified VC-formalism: a case study. In Proc. of ICNN'95, 2939-2943, IEEE, 1995.
[12] A. Kowalczyk, J. Szymanski, P.L. Bartlett, and R.C. Williamson. Examples of learning curves from a modified VC-formalism. In Proc. NIPS 8, pages 344-350, The MIT Press, 1996.
[13] W. Maass. Neural nets with superlinear VC-dimension. Neural Computation, 6:877-884, 1994.
[14] J.G. Mauldon. Random division of an interval. Proc. Cambridge Phil. Soc., 41:331-336, 1951.
[15] A. Sakurai. Tighter bounds of the VC-dimension of three-layer networks. In Proc. of the 1993 World Congress on Neural Networks, 1993.
[16] E. Sontag. Shattering all sets of k points in "general position" requires (k − 1)/2 parameters. Report 96-01, Rutgers Center for Systems and Control, 1996.
[17] V. Vapnik. Estimation of Dependences Based on Empirical Data. Springer-Verlag, 1982.
[18] V. Vapnik. The Nature of Statistical Learning Theory. Springer-Verlag, 1995.
242 | 122 | 141
GEMINI: GRADIENT ESTIMATION
THROUGH MATRIX INVERSION
AFTER NOISE INJECTION
Yann Le Cun 1 Conrad C. Galland and Geoffrey E. Hinton
Department of Computer Science
University of Toronto
10 King's College Rd
Toronto, Ontario M5S 1A4
Canada
ABSTRACT
Learning procedures that measure how random perturbations of unit activities correlate with changes in reinforcement are inefficient but simple
to implement in hardware. Procedures like back-propagation (Rumelhart,
Hinton and Williams, 1986) which compute how changes in activities affect the output error are much more efficient, but require more complex
hardware. GEMINI is a hybrid procedure for multilayer networks, which
shares many of the implementation advantages of correlational reinforcement procedures but is more efficient. GEMINI injects noise only at the
first hidden layer and measures the resultant effect on the output error.
A linear network associated with each hidden layer iteratively inverts the
matrix which relates the noise to the error change, thereby obtaining
the error-derivatives. No back-propagation is involved, thus allowing unknown non-linearities in the system. Two simulations demonstrate the
effectiveness of GEMINI.
OVERVIEW
Reinforcement learning procedures typically measure the effects of changes in local variables on a global reinforcement signal in order to determine sensible weight
changes. This measurement does not require the connections to be used backwards
(as in back-propagation), but it is inefficient when more than a few units are involved. Either the units must be perturbed one at a time, or, if they are perturbed
simultaneously, the noise from all the other units must be averaged away over a
large number of samples in order to achieve a reasonable signal to noise ratio. So
reinforcement learning is much less efficient than back-propagation (BP) but much
easier to implement in hardware.
GEMINI is a hybrid procedure which retains many of the implementation advantages of reinforcement learning but eliminates some of the inefficiency. GEMINI
uses the squared difference between the desired and actual output vectors as a
reinforcement signal. It injects random noise at the first hidden layer only, causing correlated noise at later layers. If the noise is sufficiently small, the resultant
1 First Author's present address: Room 4G-332, AT&T Bell Laboratories, Crawfords Corner
Rd, Holmdel, NJ 07733
change in the reinforcement signal is a linear function of the noise vector at any
given layer. A matrix inversion procedure implemented separately at each hidden
layer then determines how small changes in the activities of units in the layer affect
the reinforcement signal. This matrix inversion gives a much more accurate estimate of the error-derivatives than simply averaging away the effects of noise and,
unlike the averaging approach, it can be used when the noise is correlated.
The matrix inversion at each layer can be performed iteratively by a local linear
network that "learns" to predict the change in reinforcement from the noise vector at
that layer. For each input vector, one ordinary forward pass is performed, followed
by a number of forward passes each with a small amount of noise added to the total
inputs of the first hidden layer. After each forward pass, one iteration of an LMS
training procedure is run at each hidden layer in order to improve the estimate of
the error-derivatives in that layer. The number of iterations required is comparable
to the width of the largest hidden layer. In order to avoid singularities in the
matrix inversion procedure, it is necessary for each layer to have fewer units than
the preceding one.
In this hybrid approach, the computations that relate the perturbation vectors
to the reinforcement signal are all local to a layer. There is no detailed backpropagation of information, so that GEMINI is more amenable to optical or electronic implementations than BP. The additional time needed to run the gradientestimating inner loop, may be offset by the fact that only forward propagation is
required, so this can be made very efficient (e.g. by using analog or optical hardware).
TECHNIQUES FOR GRADIENT ESTIMATION
The most obvious way to measure the derivative of the cost function w.r.t the
weights is to perturb the weights one at a time, for each input vector, and to
measure the effect that each weight perturbation has on the cost function, C. The
advantage of this technique is that it makes very few assumptions about the way
the network computes its output.
It is possible to use far fewer perturbations (Barto and Anandan, 1985) if we are
using "quasi-linear" units in which the output, Yi, of unit i is a smooth non-linear
function, I, of'its total input, Xi, and the total input is a linear function of the
incoming weights, Wij and the activities, Yi, of units in the layer below:
Xi
=
L
WijYj
i
Instead of perturbing the weights, we perturb the total input, $x_i$, received by each
unit, in order to measure $\partial C/\partial x_i$. Once this derivative is known it is easy to
derive $\partial C/\partial w_{ij}$ for each of the unit's incoming weights by performing a simple local
computation:
$$\frac{\partial C}{\partial w_{ij}} = \frac{\partial C}{\partial x_i}\, y_j$$
If the units are perturbed one at a time, we can approximate $\partial C/\partial x_i$ by the ratio $\delta C/\delta x_i$,
where $\delta C$ is the variation of the cost function induced by a perturbation $\delta x_i$ of the
total input to unit i. This method is more efficient than perturbing the weights
directly, but it still requires as many forward passes as there are hidden units.
Reducing the number of perturbations required
If the network has a layered, feed-forward, architecture the state of any single layer
completely determines the output. This makes it possible to reduce the number of
required perturbations and forward passes still further . Perturbing units in the first
hidden layer will induce perturbations at the following layers, and we can use these
induced perturbations to compute the gradients for these layers. However, since
many of the units in a typical hidden layer will be perturbed simultaneously, and
since these induced perturbations will generally be correlated, it is necessary to do
some local computation within each layer in order to solve the credit assignment
problem of deciding how much of the change in the final cost function to attribute
to each of the simultaneous perturbations within the layer . This local computation
is relatively simple. Let x(k) be the vector of total inputs to units in layer k. Let
$\delta x_t(k)$ be the perturbation vector of layer k at time t. It does not matter for the
following analysis whether the perturbations are directly caused (in the first hidden
layer) or are induced. For a given state of the network, we have $\delta C_t \approx g_k^T\, \delta x_t(k)$,
where $g_k$ denotes the gradient $\partial C/\partial x(k)$. To compute the gradient w.r.t. layer k we must
therefore solve the following system for $g_k$:
$$\delta C_t = g_k^T\, \delta x_t(k), \qquad t = 1 \ldots P$$
where P is the number of perturbations. Unless P is equal to the number of units
in layer k, and the perturbation vectors are linearly independent, this system will
be over- or under-determined. In some network architectures it is impossible to
induce nl linearly independent perturbation vectors in a hidden layer, I containing
nl units. This happens when one of the preceding hidden layers, k, contains fewer
units because the perturbation vectors induced by a layer with nk units on the
following layer generate at most nk independent directions. So to avoid having to
solve an under-determined system, we require "convergent" networks in which each
hidden layer has no more units than the preceding one.
Using a Special Unit to Allocate Credit within a Layer
Instead of directly solving for the $\partial C/\partial x_i$ within each layer, we can solve the same
system iteratively by minimizing
$$E = \sum_t \left(\delta C_t - g_k^T\, \delta x_t(k)\right)^2$$
[Figure: layered network with a linear special unit attached to each hidden layer, above the input layer.]
Figure 1: A GEMINI network.
This can be done by a special unit whose inputs are the perturbations of layer
k and whose desired output is the resulting perturbation of the cost function $\delta C$
(figure 1). When the LMS algorithm is used, the weight vector gk of this special
unit converges to the gradient of C with respect to the vector of total inputs x(k).
If the components of the perturbation vector are uncorrelated, the convergence will
be fast and the number of iterations required should be of the order of the the
number of units in the layer. Each time a new input vector is presented to the main
network, the "inner-loop" minimization process that estimates the 8C/ 8Xi must
be re-initialized by setting the weights of the special units to zero or by reloading
approximately correct weights from a table that associates estimates of the $\partial C/\partial x_i$
with each input vector.
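The special unit's inner loop amounts to an iterative least-squares fit. A minimal numpy sketch, assuming the perturbations and cost changes have already been collected (the function name and hyper-parameters are illustrative, not from the paper):

```python
import numpy as np

def lms_gradient_estimate(dxs, dcs, eta=0.05, sweeps=5):
    """Special-unit inner loop: dxs[t] is the perturbation vector of the
    layer on trial t, dcs[t] the measured cost change. The weight vector g
    is nudged toward predicting dC from dx, converging to the solution of
    the least-squares problem E above."""
    g = np.zeros(dxs.shape[1])
    for _ in range(sweeps):
        for dx, dc in zip(dxs, dcs):
            g += eta * (dc - g @ dx) * dx   # one LMS step per trial
    return g
```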
Summary of the Gemini Algorithm
1. Present an input pattern and compute the network state by forward propagation.
2. Present a desired output and evaluate the cost function.
3. Re-initialize the weights of the special units.
4. Repeat until convergence:
(a) Perturb the first hidden layer and propagate forward.
(b) Measure the induced perturbations in other layers and the output cost function.
(c) At each layer apply one step of the LMS rule on the special units to minimize
the error between the predicted cost variation and the actual variation.
5. Use the weights of the special units (the estimates of $\partial C/\partial x_i$) to compute the
weight changes of the main network.
6. Update the weights of the main network.
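Putting the six steps together, a compact sketch of one GEMINI update for a small tanh network follows. It is an illustration under simplifying assumptions (no biases, a squared-error cost, and a direct least-squares solve standing in for the special units' LMS iterations), not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(ws, x, noise=None):
    """Return the total-input vector of every layer; if `noise` is given,
    it is injected into the first hidden layer's total inputs."""
    xs, a = [], x
    for k, w in enumerate(ws):
        z = w @ a
        if k == 0 and noise is not None:
            z = z + noise
        xs.append(z)
        a = np.tanh(z)
    return xs, a

def gemini_step(ws, x, target, trials=64, sigma=0.1, lr=0.05):
    xs0, out0 = forward(ws, x)                     # steps 1-2
    c0 = 0.5 * np.sum((out0 - target) ** 2)
    dxs, dcs = [[] for _ in ws], []
    for _ in range(trials):                        # step 4: perturb and measure
        eps = sigma * rng.standard_normal(ws[0].shape[0])
        xs, out = forward(ws, x, noise=eps)
        for k in range(len(ws)):
            dxs[k].append(xs[k] - xs0[k])          # induced perturbations
        dcs.append(0.5 * np.sum((out - target) ** 2) - c0)
    dcs = np.array(dcs)
    ys = [x] + [np.tanh(z) for z in xs0[:-1]]      # activities feeding each layer
    for k in range(len(ws)):                       # steps 5-6
        g = np.linalg.lstsq(np.array(dxs[k]), dcs, rcond=None)[0]
        ws[k] -= lr * np.outer(g, ys[k])           # dC/dW = g y^T
```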
A TEST EXAMPLE: CHARACTER RECOGNITION
The GEMINI procedure was tested on a simple classification task using a network
with two hidden layers. The input layer represented a 16 by 16 binary image of
a handwritten digit. The first hidden layer was an 8 by 8 array of units that
were locally connected to the input layer in the following way: Each hidden unit
connected to a 3 by 3 "receptive field" of input units and the centers of these
receptive fields were spaced two "pixels" apart horizontally and vertically. To avoid
boundary effects we used wraparound which is unrealistic for real image processing.
The second hidden layer was a 4 by 4 array of units each of which was connected to
a 5 by 5 receptive field in the previous hidden layer. The centers of these receptive
fields were spaced two pixels apart. Finally the output layer contained 10 units,
one for each digit, and was fully connected to the second hidden layer. The network
contained 1226 weights and biases.
The sigmoid function used at each node was of the form $f(x) = s \tanh(mx)$ with
m = 2/3 and s = 1.716; thus f was odd, and had the property that f(1) = 1
(LeCun, 1987). The training set was composed of 6 handwritten exemplars of each
of the 10 digits. It should be emphasized that this task is simple (it is linearly
separable), and the network has considerably more weights than are required for
this problem.
Experiments were performed with 64 perturbations in the gradient estimation
inner loop. Therefore, assuming that the perturbation vectors were linearly independent, the linear system associated with the first hidden layer was not underconstrained 2. Since a stochastic gradient procedure was used with a single sweep
through the training set, the solution was only a rough approximation, though convergence was facilitated by the fact that the components of the perturbations were
statistically independent.
The linear systems associated with the second hidden layer and the output layer
were almost certainly overconstrained 3, so we expected to obtain a better estimate
of the gradient for these layers than for the first one. The perturbations injected at
the first hidden layer were independent random numbers with a zero-mean gaussian
distribution and standard deviation of 0.1.
The minimization procedure used for gradient estimation was not a pure LMS,
but a pseudo-newton method that used a diagonal approximation to the matrix of
second derivatives which scales the learning rates for each link independently (Le
Cun, 1987; Becker and Le Cun, 1988). In our case, the update rule for a gradient
estimate coefficient scaled the LMS step by $1/\sigma_i^2$, where $\sigma_i^2$ is an estimate of the
variance of the perturbation for unit i. In the
simulations $\eta$ was equal to 0.02 for the first hidden layer, 0.03 for the second hidden
layer, and 0.05 for the output layer. Although there was no real need for it, the
gradient associated-with the output units was estimated using GEMINI so that we
could evaluate the accuracy of gradient estimates far away from the noise-injection
² It may have been overconstrained since the actual relation between the perturbation and variation of the cost function is usually non-linear for finite perturbations.
³ This depends on the degeneracy of the weight matrices.
Figure 2: The mean squared error as a function of the number of sweeps
through the training set for GEMINI (top curve) and BP (bottom curve).
layer. The learning rates for the main network, $\epsilon_i$, had different values for each unit
and were equal to 0.1 divided by the fan-in of the unit.
Figure 2 shows the relative learning rates of BP and GEMINI. The two runs were
started from the same initial conditions. Although the learning curve for GEMINI
is consistently above the one for BP and is more irregular, the rate of decrease of
the two curves is similar. The 60 patterns are all correctly classified after 10 passes
through the training set for regular BP, and after 11 passes for GEMINI. In the
experiments, the direction of the estimated gradient for a single pattern was within
about 20 degrees of the true gradient for the output layer and the second hidden
layer, and within 50 degrees for the first hidden layer. Even with such inaccuracies
in the gradient direction, the procedure still converged at a reasonable rate.
LEARNING TO CONTROL A SIMPLE ROBOT ARM
In contrast to the digit recognition task, the robot arm control task considered
here is particularly suited to the GEMINI procedure because it contains a nonlinearity which is unknown to the network. In this simulation, a network with 2
input units, a first hidden layer with 8 units, a second with 4 units, and an output
layer with 2 units is used to control a simulated arm with two angular degrees of
freedom. The problem is to train the network to receive x, y coordinates encoded
on the two input units and produce two angles encoded on the output units which
would place the end of the arm on the desired input point (figure 3). The units use
the same input-output function as in the digit recognition example.
[Figure: (a) the network, whose cost is the Euclidean distance to the actual point, feeding the robot arm's "unknown" non-linearity; (b) the two-joint arm with angles θ1, θ2 reaching for the point (x, y).]
Figure 3: (a) The network trained with the GEMINI procedure, and (b)
the 2-D arm controlled by the network.
Each point in the training set is successively applied to the inputs and the resultant
output angles determined. The training points are chosen so that the code for the
output angles exploits most of the sigmoid input-output curve while avoiding the
extreme ends. The "unknown" non-linearity is essentially the robot arm, which
takes the joint angles as input and then "outputs" the resulting hand coordinates
by positioning itself accordingly. The cost function, C, is taken as the square of
the Euclidean distance from this point to the desired point. In the simulation, this
distance is determined using the appropriate trigonometric relations relating the
joint angles to the hand position, where $a_1$ and $a_2$ are the lengths of the two components of the arm. Although
this non-linearity is not actually unknown, analytical derivative calculation can be
difficult in many real world applications, and so it is interesting to explore the
possibility of a control system that can learn without it.
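The trigonometric relations themselves were lost in extraction; under a standard two-link convention — the second angle measured relative to the first link, an assumption, since the paper's convention is not recoverable here — the cost would look like:

```python
import numpy as np

def arm_cost(theta1, theta2, x_d, y_d, a1=1.0, a2=1.0):
    """Squared Euclidean distance from the arm tip to the desired point.
    Link lengths a1, a2 and the angle convention are assumed."""
    x = a1 * np.cos(theta1) + a2 * np.cos(theta1 + theta2)
    y = a1 * np.sin(theta1) + a2 * np.sin(theta1 + theta2)
    return (x - x_d) ** 2 + (y - y_d) ** 2
```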
It is found that the minimum number of iterations of the LMS inner loop search
needed to obtain good estimates of the gradients when compared to values calculated
by back-propagation is between 2 and 3 times the number of units in the first hidden
layer (figure 4). For this particular kind of problem, the process can be sped up
significantly by using the following two modifications. The same training vector
can be applied to the inputs and the weights changed repeatedly until the actual
output is within a certain radius of the desired output. The gradient estimates
are kept between these weight updates, thereby reducing the number of inner loop
Figure 4: Gradients of the units in all non-input layers, determined
(a) by the GEMINI procedure after 24 iterations of the gradient
estimating inner loop, and (b) through analytical calculation.
The size of the black and white squares indicates the magnitude of
negative and positive error gradients respectively.
iterations needed at each step. The second modification requires that the arm be
made to move continuously through 2-D space by using an appropriately ordered
training set. The state of the network changes slowly as a result, leading to a slowly
varying gradient. Thus, if the gradient estimate is not reset between successive
input vectors, it can track the real gradient, allowing the number of iterations per
gradient estimate to be reduced to as little as 5 in this particular network.
The results of simulations using training sets of closely spaced points in the first
quadrant show that GEMINI is capable of training this network to correctly orient
the simulated arm, with significantly improved learning efficiency when the above
two modifications are employed. Details of these simulation results and the parameters used to obtain them are given in (Galland, Hinton, and Le Cun, 1989).
Acknowledgements
This research was funded by grants from the Ontario Information Technology
Research Center, the Fyssen Foundation, and the National Science and Engineering Research Council. Geoffrey Hinton is a fellow of the Canadian Institute for
Advanced Research.
References
A. G. Barto and P. Anandan (1985) Pattern recognizing stochastic learning automata. IEEE Transactions on Systems, Man and Cybernetics, 15, 360-375.
S. Becker and Y. Le Cun (1988) Improving the convergence of back-propagation
learning with second order methods. In Touretzky, D. S., Hinton, G. E. and Sejnowski, T. J., editors, Proceedings of the 1988 Connectionist Summer School, Morgan Kauffman: Los Altos, CA.
C. C. Galland, G. E. Hinton and Y. Le Cun (1989) Technical Report, in preparation.
Y. Le Cun (1987) Modeles Connexionnistes de l'Apprentissage. Doctoral thesis,
University of Paris, 6.
D. E. Rumelhart, G. E. Hinton, and R. J. Williams (1986) Learning internal representations by back-propagating errors. Nature, 323, 533-536.
243 | 1,220 | The Generalisation Cost of RAMnets
Richard Rohwer and Michal Morciniec
rohwerrj~cs.aston.ac.uk
morcinim~cs.aston.ac.uk
Neural Computing Research Group
Aston University
Aston Triangle, Birmingham B4 7ET, UK.
Abstract
Given unlimited computational resources, it is best to use a criterion of minimal expected generalisation error to select a model and
determine its parameters. However, it may be worthwhile to sacrifice some generalisation performance for higher learning speed.
A method for quantifying sub-optimality is set out here, so that
this choice can be made intelligently. Furthermore, the method
is applicable to a broad class of models, including the ultra-fast
memory-based methods such as RAMnets. This brings the added
benefit of providing, for the first time, the means to analyse the
generalisation properties of such models in a Bayesian framework .
1
Introduction
In order to quantitatively predict the performance of methods such as the ultra-fast
RAMnet, which are not trained by minimising a cost function, we develop a Bayesian
formalism for estimating the generalisation cost of a wide class of algorithms.
We consider the noisy interpolation problem, in which each output data point
results from adding noise to the result y = f(x) of applying the unknown function f
to input data point x, which is generated from a distribution P (x). We follow a
similar approach to (Zhu & Rohwer, to appear 1996) in using a Gaussian process to
define a prior over the space of functions, so that the expected generalisation cost
under the posterior can be determined. The optimal model is defined in terms of
the restriction of this posterior to the subspace defined by the model. The optimum
is easily determined for linear models over a set of basis functions. We go on to
compute the generalisation cost (with an error bar) for all models of this class,
which we demonstrate to include the RAMnets.
Section 2 gives a brief overview of RAMnets. Sections 3 and 4 supply the formalism
for computing expected generalisation costs under Gaussian process priors. Numerical experiments with this formalism are presented in Section 5. Finally, we discuss
the current limitations of this technique and future research directions in Section 6.
2
RAMnets
The RAMnet, or n-tuple network is a very fast I-pass learning system that often gives excellent results competitive with slower methods such as Radial Basis
Function networks or Multi-layer Perceptrons (Rohwer & Morciniec, 1996). Although a semi-quantitative theory explains how these systems generalise, no formal
framework has previously been given to precisely predict the accuracy of n-tuple
networks.
Essentially, a RAMnet defines a set of "features" which can be regarded as Boolean
functions of the input variables. Let the a-th feature of x be given by a {0, 1}-valued
function $\phi_a(x)$. We will focus on the n-tuple regression network (Allinson & Kolcz,
1995), which outputs
$$f(x) = \frac{\sum_i U(x, x^i)\, y^i}{\sum_j U(x, x^j)} \tag{1}$$
in response to input x, if trained on the set of N samples $\{X_{(N)}, Y_{(N)}\} = \{(x^i, y^i)\}_{i=1}^N$.
Here $U(x, x') = \sum_a \phi_a(x)\, \phi_a(x')$ can be seen to play the role of a smoothing kernel,
provided that it turns out to have a suitable shape. It is well-know that it does, for
appropriate choices of feature sets. The strength of this method is that the sums
over training data can be done in one pass, producing a table containing two totals
for each feature. Only this table is required for recognition.
It is interesting to note that there is a familiar way to expand a kernel into the form
$U(x, x') = \sum_a \phi_a(x)\, \phi_a(x')$, at least when $U(x, x') = U(x - x')$, if the range of $\phi$ is
not restricted to {0, 1}: an eigenfunction expansion¹. Indeed, principal component
analysis 2 applied to a Gaussian with variance V shows that the smallest feature
set for a given generalisation cost consists of the (real-valued) projections onto
the leading eigenfunctions of V. Be that as it may, the treatment here applies to
arbitrary feature sets.
3
Bayesian inference with Gaussian priors
Gaussian processes provide a diverse set of priors over function spaces. To
avoid mathematical details of peripheral interest, let us approximate the infinitedimensional space of functions by a finite-dimensional space of discretised functions,
so that function f is replaced by high-dimensional vector f, and f(x) is replaced
by $f_x$, with $f(x) \approx f_x$ within a volume $\Delta_x$ around x. We develop the case of scalar
functions f, but the generalisation to vector-valued functions is straightforward.
¹ In physics, this is essentially the mode function expansion of $U^{-1}$, the differential operator with Green's function U.
² $V^{-1}$ needs to be a compact operator for this to work in the infinite-dimensional limit.
We assume a Gaussian prior on f, with zero mean and covariance $V/\alpha$:
$$P(f) = \frac{1}{Z_\alpha} \exp\!\left(-\frac{\alpha}{2}\, f^T V^{-1} f\right) \tag{2}$$
where $Z_\alpha = \det\!\left(\frac{2\pi}{\alpha} V\right)^{1/2}$. The overall scale of variation of f is controlled by $\alpha$.
Illustrative samples of the functions generated from various choices of covariance
are given in (Zhu & Rohwer, to appear 1996). With $q_x/\beta$ denoting the (possibly
position-dependent) variance of the Gaussian output noise, the likelihood of outputs
$Y_{(N)}$ given function f and inputs $X_{(N)}$ is
$$P(Y_{(N)} \mid X_{(N)}, f) = \frac{1}{Z_\beta} \exp\!\left(-\frac{\beta}{2} \sum_i (f_{x^i} - y^i)\, q_{x^i}^{-1}\, (f_{x^i} - y^i)\right) \tag{3}$$
where $Z_\beta^2 = \prod_i \frac{2\pi}{\beta} q_{x^i} = \det\!\left[\frac{2\pi}{\beta} Q\right]$ with $Q_{ij} = q_{x^i}\, \delta_{ij}$.
i
Because f and $X_{(N)}$ are independent, the joint distribution is
$$P(Y_{(N)}, f \mid X_{(N)}) = P(Y_{(N)} \mid f, X_{(N)})\, P(f) = \frac{e^{\frac{1}{2} b^T A\, b + C}}{Z_\alpha Z_\beta}\; e^{-\frac{1}{2} (f - Ab)^T A^{-1} (f - Ab)} \tag{4}$$
where $\delta_{x,x^i}$ is understood to be 1 whenever $x^i$ lies in the same cell of the discretisation as x, and
$$A^{-1}_{xx'} = \alpha V^{-1}_{xx'} + \beta \sum_i q_{x^i}^{-1}\, \delta_{x,x^i}\, \delta_{x',x^i}, \qquad b_x = \beta \sum_i y^i\, q_{x^i}^{-1}\, \delta_{x,x^i}, \qquad C = -\frac{\beta}{2} \sum_i y^i\, q_{x^i}^{-1}\, y^i.$$
One can readily verify that
$$A_{xx'} = \frac{1}{\alpha}\, V_{xx'} + \sum_{tu} V_{x x^t}\, K_{tu}\, V_{x^u x'} \tag{5}$$
where K is the N × N matrix defined by
$$K = -\frac{1}{\alpha} \left[\frac{\alpha}{\beta}\, Q + V_{(N)}\right]^{-1}, \qquad (V_{(N)})_{tu} = V_{x^t x^u}. \tag{6}$$
The posterior is readily determined to be
$$P(f \mid X_{(N)}, Y_{(N)}) = \frac{1}{Z_A}\, e^{-\frac{1}{2} (f - f^*)^T A^{-1} (f - f^*)} \tag{7}$$
where $f^* = Ab$ is the posterior mean estimate of the true function f.
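Equivalently, in standard Gaussian-process form — working with N × N matrices rather than the discretised grid operators — the posterior mean $f^*$ and covariance A can be computed as in this sketch (variable names are illustrative):

```python
import numpy as np

def gp_posterior(V, X_train, y, X_eval, alpha, beta, q):
    """Posterior mean f* and covariance A on the grid X_eval, for prior
    covariance function V(., .)/alpha and i.i.d. output noise variance
    q/beta; the standard GP identities, equivalent to f* = Ab above."""
    Knn = np.array([[V(a, b) for b in X_train] for a in X_train]) / alpha
    Ken = np.array([[V(a, b) for b in X_train] for a in X_eval]) / alpha
    Kee = np.array([[V(a, b) for b in X_eval] for a in X_eval]) / alpha
    S = Knn + (q / beta) * np.eye(len(X_train))
    f_star = Ken @ np.linalg.solve(S, y)
    A = Kee - Ken @ np.linalg.solve(S, Ken.T)
    return f_star, A
```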
4
Calculation of the expected cost and its variance
Let us define the cost of associating an output $f^m_x$ of the model with an input x that
actually produced an output y as
$$C(f^m_x, y) = \frac{1}{2}\, (f^m_x - y)^2\, r_x$$
where $r_x$ is a position-dependent cost weight.
The average of this cost defines a cost functional, given input data $X_{(N)}$:
$$C(f^m, f \mid X_{(N)}) = \int C(f^m_x, y)\, P(x \mid X_{(N)})\, P(y \mid x, f)\, dx\, dy. \tag{8}$$
This form is obtained by noting that the function f carries no information about
the input point x, and the input data $X_{(N)}$ supplies no information about y beyond
that supplied by f. The distributions in (8) are unchanged by further conditioning
on $Y_{(N)}$, so we could write $C(f^m, f \mid X_{(N)}) = C(f^m, f \mid X_{(N)}, Y_{(N)})$. This cost functional
therefore has the posterior expectation value
$$\langle C \mid X_{(N)}, Y_{(N)} \rangle = \int C(f^m, f \mid X_{(N)})\, P(f \mid X_{(N)}, Y_{(N)})\, df \tag{9}$$
and variance
$$\left\langle \left( C - \langle C \mid X_{(N)}, Y_{(N)} \rangle \right)^2 \,\middle|\, X_{(N)}, Y_{(N)} \right\rangle. \tag{10}$$
Plugging in the distributions (2) (applied to a single sample), (3) and (7) leads to:
$$\langle C \mid X_{(N)}, Y_{(N)} \rangle = \frac{1}{2}\, \mathrm{tr}[AR] + \frac{1}{2}\, (f^m - f^*)^T R\, (f^m - f^*) + \frac{\mathrm{tr}[QR]}{2\beta} \tag{11}$$
where the diagonal matrices R and Q have the elements $R_{xx'} = P(x \mid X_{(N)})\, r_x\, \Delta_x\, \delta_{x,x'}$
and $Q_{xx'} = q_x\, \delta_{xx'}$.
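On a discretised grid, (11) is a few lines of numpy; the sketch below assumes R and Q are supplied as the (diagonal) matrices just defined:

```python
import numpy as np

def expected_cost(A, R, Q, f_model, f_star, beta):
    """Posterior expected generalisation cost of equation (11)."""
    d = f_model - f_star
    return (0.5 * np.trace(A @ R)
            + 0.5 * d @ R @ d
            + np.trace(Q @ R) / (2.0 * beta))
```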
Similar calculations lead to the expression for the variance
$$\frac{1}{2}\, \mathrm{tr}\!\left[(AR)^2\right] + \mathrm{tr}\!\left[F R A R F\right] \tag{12}$$
where the elements of F are $F_{xx'} = (f^m_x - f^*_x)\, \delta_{x,x'}$.
Note that the RAMnet (1) has the form $f^m_x = \sum_i J_{x x^i}\, y^i$, linear in the output data
$Y_{(N)}$, with $J_{x x^i} = U(x, x^i)/\sum_j U(x, x^j)$. Let us take V to have the form $V(x, x') =
\rho(x)\, G(x - x')\, \rho(x')$, combining translation-invariant and non-invariant factors in a
plausible way. Then with the sums over x replaced by integrals, (11) becomes
explicitly
$$\begin{aligned}
2\,\langle C \mid X_{(N)}, Y_{(N)} \rangle
 ={}& \frac{1}{\beta} \int dx\, P(x \mid X_{(N)})\, q_x\, r_x
  + \frac{1}{\alpha} \int dx\, P(x \mid X_{(N)})\, r_x\, \rho_x^2\, G_{xx} \\
 &+ \frac{1}{\alpha} \sum_{tu} \rho_{x^t} K_{tu} \rho_{x^u} \int dx\, P(x \mid X_{(N)})\, r_x\, \rho_x^2\, G_{x^t x}\, G_{x x^u} \\
 &+ \alpha^2 \sum_{tuvs} y^u K_{ut} \rho_{x^t} \int dx\, P(x \mid X_{(N)})\, r_x\, \rho_x^2\, G_{x^t x}\, G_{x x^s}\, \rho_{x^s} K_{sv}\, y^v \\
 &+ 2\alpha \sum_{tuv} y^u K_{ut} \rho_{x^t} \int dx\, P(x \mid X_{(N)})\, r_x\, \rho_x\, G_{x^t x}\, J_{x x^v}\, y^v \\
 &+ \sum_{uv} y^u \int dx\, P(x \mid X_{(N)})\, r_x\, J_{x x^u}\, J_{x x^v}\, y^v.
\end{aligned} \tag{13}$$
[Figure 1 panels: a) true, optimal and suboptimal functions; b) distribution of the cost C.]
Figure 1: a) The lower figure shows the input distribution. The upper figure shows
the true function f generated from a Gaussian prior with covariance matrix V (dotted line), the optimal function $f^* = Ab$ (solid line) and the suboptimal solution
$f^m$ (dashed line). b) The distribution of the cost function obtained by generating
functions from the posterior Gaussian with covariance matrix A and calculating
the cost according to equation (14). The mean and one standard deviation calculated analytically and numerically are shown by the lower and upper error bars
respectively.
Taking P (XIX(N)) to be Gaussian (the maximum likelihood estimate would be
reasonable) and p, r, and q uniform, the first four integrals are straightforward.
The latter two involve the model J, and were evaluated numerically in the work
reported below.
5
Numerical results
We present one numerical example to illustrate the formalism , and another to illustrate its application.
For the first illustration, let the input and output variables be one dimensional real
numbers. Let the input distribution P(x) be a Gaussian with mean $\mu_x = 0$ and
standard deviation $\sigma_x = 0.2$. Nearly all inputs then fall within the range [−1, 1],
which we uniformly quantise into 41 bins. The true function f is generated from a
Gaussian distribution with $\mu_f = 0$ and 41 × 41 covariance matrix V with elements
$V_{xx'} = e^{-|x - x'|}$. 50 training inputs x were generated from the input distribution
and assigned corresponding outputs $y = f_x + \zeta$, where $\zeta$ is Gaussian noise with zero
mean and standard deviation $\sqrt{q_x/\beta} = 0.01$. The cost weight $r_x = 1$.
The inputs were thermometer coded 3 over 256 bits, from which 100 subsets of 30
bits were randomly selected. Each of the 100 x 230 patterns formed over these
bits defines a RAMnet feature which evaluates to 1 when that pattern is present
in the input x. (Only those features which actually appear in the data need to be
tabulated.) The 50 training data points were used in this way to train an n-tuple
³ The first 256(x + 1)/2 bits are set to 1, and the remaining bits to 0.
[Figure 2 panels: a) Neal's regression problem (with training data); b) mean cost $\langle C \mid X_{(N)}, Y_{(N)} \rangle$ as a function of $\sigma_f$ and $\alpha$.]
Figure 2: a) Neal's regression problem. The true function f is indicated by a dotted
line, the optimal function $f^*$ is denoted by a solid line and the suboptimal solution $f^m$
is indicated by a dashed line. Circles indicate the training data. b) Dependence of
the cost prediction on the values of parameters $\alpha$ and $\sigma_f$. The cost evaluated from
the test set is plotted as a dashed line; predicted cost is shown as a solid line with
one standard deviation indicated by a dotted line.
regression network. The input distribution and functions f, $f^*$, $f^m$ are plotted in
figure 1a.
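The feature construction just described — thermometer code plus random 30-bit subsets — might be set up as in the following sketch, which plugs directly into the NTupleRegressor sketch given earlier; the sizes are the ones quoted in the text:

```python
import numpy as np

def thermometer(x, bits=256, lo=-1.0, hi=1.0):
    """Thermometer code: roughly the first bits*(x-lo)/(hi-lo) bits are 1."""
    k = int(round(bits * (x - lo) / (hi - lo)))
    code = np.zeros(bits, dtype=int)
    code[:max(0, min(k, bits))] = 1
    return code

def make_ntuple_features(bits=256, n_tuples=100, n=30, seed=0):
    """100 random subsets of 30 bit positions; the bit pattern observed on
    each subset is one boolean feature key."""
    rng = np.random.default_rng(seed)
    subsets = [rng.choice(bits, size=n, replace=False) for _ in range(n_tuples)]
    def features(code):
        return [(t, tuple(code[s])) for t, s in enumerate(subsets)]
    return features
```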
A Gaussian distribution with mean $f^*$ and posterior covariance matrix A was then
used to generate $10^4$ functions. For each such function $f_p$, the generalisation cost
$$C_p = \frac{1}{2} \sum_x P(x \mid X_{(N)})\, r_x\, (f^m_x - f_{p,x})^2 \tag{14}$$
was computed. A histogram of these costs appears in figure 1b, together with the
theoretical and numerically computed average generalisation cost and its variance.
Good agreement is evident.
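The Monte-Carlo check behind figure 1b is a direct sampling exercise; in the sketch below, `f_star`, `A` and `f_model` are the grid quantities from above, while `r` and `p_x` are assumed names for the grid values of the cost weight and input density:

```python
import numpy as np

# f_star, A: posterior mean and covariance on the 41-bin grid;
# f_model: the trained RAMnet's outputs on the same grid;
# r, p_x: assumed grid arrays of cost weight and input density.
rng = np.random.default_rng(0)
samples = rng.multivariate_normal(f_star, A, size=10_000)
costs = 0.5 * (((f_model - samples) ** 2) * (r * p_x)).sum(axis=1)
print(costs.mean(), costs.std())   # compare with the analytic (11) and (12)
```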
Another one-dimensional problem illustrates the use of this formalism for predicting
the generalisation performance of a RAMnet when the prior over functions can only
be guessed. The true function , taken from (Neal, 1995) is given by
$$f_x = 0.3 + 0.4x + 0.5 \sin(2.7x) + 1.1/(1 + x^2) + \zeta \tag{15}$$
where the Gaussian noise variable $\zeta$ has mean 0 and standard deviation
$\sqrt{q_x/\beta} = 0.1$. The cost weight $r_x = 1$. The training and test set each comprised
100 data-points. The inputs were generated by the standard normal distribution
($\mu_x = 0$, $\sigma_x = 1$) and converted into binary strings using a thermometer code.
The input range [-3,3] was quantised into 61 uniform bins.
The training set and the functions f, $f^*$, $f^m$ are shown in figure 2a for $\alpha = 0.1$. The
function-space covariance matrix was defined to have the Gaussian form
$V_{xx'} = e^{-\frac{1}{2\sigma_f^2}(x - x')^2}$, where $\sigma_f = 1.0$.
$\sigma_f$ is the correlation length of the functions, which is of order 1, judging from
figure 2a. The overall scale of variation is $1/\sqrt{\alpha}$, which appears to be about 3,
so $\alpha$ should be about 1/9. Figure 2b shows the expected cost as a function of $\alpha$
for various choices of $\sigma_f$, with error bars on the $\sigma_f = 1.0$ curve. The actual cost
computed from the test set according to $C = \frac{1}{2} \sum_i (y^i - f^m_{x^i})^2$ is plotted with a dashed
line. There is good agreement around the sensible values of $\alpha$ and $\sigma_f$.
6
Conclusions
This paper demonstrates that unusual models, such as the ultra-fast RAMnets
which are not trained by directly optimising a cost function, can be analysed in a
Bayesian framework to determine their generalisation cost. Because the formalism
is constructed in terms of distributions over function space rather than distributions
over model parameters, it can be used for model comparison, and in particular to
select RAMnet parameters.
The main drawback with this technique, as it stands, is the need to numerically
integrate two expressions which involve the model. This difficulty intensifies rapidly
as the input dimension increases. Therefore, it is now a research priority to search
for RAMnet feature sets which allow these integrals to be performed analytically.
It would also be interesting to average the expected costs over the training data,
producing an expected generalisation cost for an algorithm. The Y(N) integral is
straightforward, but the X(N) integral is difficult. However, similar integrals have
been carried out in the thermodynamic limit (high input dimension) (Sollich, 1994),
so the investigation of these techniques in the current setting is another promising
research direction.
7
Acknowledgements
We would like to thank the Aston Neural Computing Group, and especially Huaiyu
Zhu, Chris Williams, and David Saad for helpful discussions.
References
Allinson, N.M., & Kolcz, A. 1995. N-tuple Regression Network. to be published in
Neural Networks.
Neal, R. 1995. Introductory documentation for software implementing Bayesian
learning for neural networks using Markov chain Monte Carlo techniques. Tech.
rept. Dept of Computer Science, University of Toronto.
Rohwer, R. , & Morciniec, M. 1996. A theoretical and experimental account of the
n-tuple classifier performance. Neural Computation, 8(3), 657-670.
Sollich, Peter. 1994. Finite-size effects in learning and generalization in linear perceptrons. J. Phys. A, 27, 7771-7784.
Zhu, H., & Rohwer, R. to appear 1996. Bayesian regression filters and the issue of priors. Neural Computing and Applications. ftp://cs.aston.ac.uk/neural/zhuh/reg~il-prior.ps.Z
244 | 1,221 | Multi-Task Learning for Stock Selection
Joumana Ghosn
Dept. Informatique et
Recherche Operationnelle
Universite de Montreal
Montreal, Qc H3C-3J7
Yoshua Bengio *
Dept. Informatique et
Recherche Operationnelle
Universite de Montreal
Montreal, Qc H3C-3J7
ghosn@iro.umontreal.ca
bengioy@iro.umontreal.ca
Abstract
Artificial Neural Networks can be used to predict future returns
of stocks in order to take financial decisions . Should one build a
separate network for each stock or share the same network for all
the stocks? In this paper we also explore other alternatives, in
which some layers are shared and others are not shared. When
the prediction of future returns for different stocks are viewed as
different tasks, sharing some parameters across stocks is a form
of multi-task learning. In a series of experiments with Canadian
stocks, we obtain yearly returns that are more than 14% above
various benchmarks.
1
Introduction
Previous applications of ANNs to financial time-series suggest that several of these
prediction and decision-taking tasks present sufficient non-linearities to justify the
use of ANNs (Refenes , 1994; Moody, Levin and Rehfuss, 1993). These models can
incorporate various types of explanatory variables: so-called technical variables (depending on the past price sequence) , micro-economic stock-specific variables (such
as measures of company profitability), and macro-economic variables (which give
information about the business cycle).
One question addressed in this paper is whether the way to treat these different variables should be different for different stocks , i.e., should one use the same network
for all the stocks or a different network for each stock? To explore this question
"also, AT&T Labs, Holmdel, NJ 07733
we performed a series of experiments in which different subsets of parameters are
shared across the different stock models. When the prediction of future returns for
different stocks are viewed as different tasks (which may nonetheless have something in common), sharing some parameters across stocks is a form of multi-task
learning.
These experiments were performed on 9 years of data concerning 35 large capitalization companies of the Toronto Stock Exchange (TSE). Following the results of
previous experiments (Bengio, 1996), the networks were not trained to predict the
future return of stocks, but instead to directly optimize a financial criterion. This
has been found to yield returns that are significantly superior to training the ANNs
to minimize the mean squared prediction error.
In section 2, we review previous work on multi-task. In section 3, we describe the
financial task that we have considered, and the experimental setup. In section 4,
we present the results of these experiments. In section 5, we propose an extension
of this work in which the models are re-parameterized so as to automatically learn
what must be shared and what need not be shared.
2
Parameter Sharing and Multi-Task Learning
Most research on ANNs has been concerned with tabula rasa learning. The learner
is given a set of examples $(x_1, y_1), (x_2, y_2), \ldots, (x_N, y_N)$ chosen according to some
unknown probability distribution. Each pair (x, y) represents an input x, and a
desired value y. One defines a training criterion C to be minimized in function
of the desired outputs and of the outputs of the learner f(x). The function f is
parameterized by the parameters of the network and belongs to a set of hypotheses
H, that is the set of all functions that can be realized for different values of the
parameters. The part of generalization error due to variance (due to the specific
choice of training examples) can be controlled by making strong assumptions on
the model, i.e., by choosing a small hypotheses space H . But using an incorrect
model also worsens performance.
Over the last few years, methods for automatically choosing H based on similar
tasks have been studied. They consider that a learner is embedded in a world
where it faces many related tasks and that the knowledge acquired when learning a task can be used to learn better and/or faster a new task. Some methods
consider that the related tasks are not always all available at the same time (Pratt,
1993; Silver and Mercer, 1995): knowledge acquired when learning a previous task
is transferred to a new task. Instead, all tasks may be learned in parallel (Baxter,
1995; Caruana, 1995), and this is the approach followed here. Our objective is not
to use multi-task learning to improve the speed of learning the training data (Pratt,
1993; Silver and Mercer, 1995), but instead to improve generalization performance.
For example, in (Baxter, 1995), several neural networks (one for each task) are
trained simultaneouly. The networks share their first hidden layers, while all the
remaining layers are specific to each network. The shared layers use the knowledge
provided from the training examples of all the tasks to build an internal representation suitable for all these tasks. The remaining layers of each network use the
internal representation to learn a specific task.
In the multitask learning method used by Caruana (Caruana, 1995), all the hidden
layers are shared. They serve as mutual sources of inductive bias. It was also
suggested that besides the relevant tasks that are used for learning, it may be
possible to use other related tasks that we do not want to learn but that may
help to further bias the learner (Caruana, Baluja and Mitchell, 1996; Intrator and
Edelman, 1996) .
In the family discovery method (Omohundro, 1996), a parameterized family of
models is built. Several learners are trained separately on different but related
tasks and their parameters are used to construct a manifold of parameters. When
a new task has to be learned, the parameters are chosen so as to maximize the data
likelihood on the one hand, and to maximize a "family prior" on the other hand
which restricts the chosen parameters to lie on the manifold .
In all these methods, the values of some or all the parameters are constrained.
Such models restrict the size of the hypotheses space sufficiently to ensure good
generalization performance from a small number of examples.
3
Application to Stock Selection
We apply the ideas of multi-task learning to a problem of stock selection and portfolio management . We consider a universe of 36 assets, including 35 risky assets
and one risk-free asset. The risky assets are 35 Canadian large-capitalization stocks
from the Toronto Stock Exchange. The risk-free asset is represented by 90-days
Canadian treasury bills. The data is monthly and spans 8 years, from February
1986 to January 1994 (96 months). Each month, one can buy or sell some of these
assets in such a way as to distribute the current worth between these assets. We do
not allow borrowing or short selling, so the weights of the resulting portfolio are all
non-negative (and they sum to 1).
We have selected 5 explanatory variables, 2 of which represent macro-economic
variables which are known to influence the business cycle, and 3 of which are microeconomic variables representing the profitability of the company and previous price
changes of the stock. The macro-economic variables were derived from yields of
long-term bonds and from the Consumer Price Index. The micro-economic variables
were derived from the series of dividend yields and from the series of ratios of stock
price to book value of the company. Spline extrapolation (not interpolation) was
used to obtain monthly data from the quarterly or annual company statements or
macro-economic variables . For these variables, we used the dates at which their
value was made public, not the dates to which they theoretically refer.
To take into account the non-stationarity of the financial and economic time-series,
and estimate performance over a variety of economic situations, multiple training
experiments were performed on different training windows, each time testing on
the following 12 months. For each architecture, 5 such trainings took place, with
training sets of size 3, 4, 5, 6, and 7 years respectively. Furthermore, multiple such
experiments with different initial weights were performed to verify that we did not
obtain "lucky" results due to particular initial weights. The 5 concatenated test
periods make an overall 5-year test period from February 1989 to January 1994.
The training algorithm is described in (Bengio, 1996) and is based on the optimization of the neural network parameters with respect to a financial criterion (here
maximizing the overall profit) . The outputs of the neural network feed a trading
module. The trading module has as input at each time step the output of the network, as well as , the weights giving the current distribution of worth between the
assets. These weights depend on the previous portfolio weights and on the relative
change in value of each asset (due to different price changes) . The outputs of the
trading module are the current portfolio weights for each of the assets . Based on
the difference between these desired weights and the current distribution of worth,
transactions are performed. Transaction costs of 1% (of the absolute value of each
buy or sell transaction) are taken into account. Because of transaction costs , the actions of the trading module at time t influence the profitability of its future actions.
The financial criterion depends in a non-additive way on the performance of the
network over the whole sequence. To obtain gradients of this criterion with respect
to the network output we have to backpropagate gradients backward through time,
through the trading module, which computes a differentiable function of its inputs.
Therefore, a gradient step is performed only after presenting the whole training
sequence (in order , of course) . In (Bengio, 1996), we have found this procedure
to yield significantly larger profits (around 4% better annual return), at comparable risks, in comparison to training the neural network to predict expected future
returns with the mean squared error criterion. In the experiments, the ANN was
trained for 120 epochs.
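The essential property is that the overall profit is a differentiable function of the whole sequence of desired weights, so gradients can flow backward through time. A simplified sketch of such a criterion — ignoring the risk-free asset and assuming the network's outputs are already non-negative weights summing to one — is:

```python
import numpy as np

def overall_log_profit(weights, returns, cost=0.01):
    """weights[t]: desired portfolio weights output for month t (sum to 1);
    returns[t]: per-asset relative price changes over month t.
    Transaction costs of 1% are charged on each buy/sell."""
    held = weights[0]
    log_worth = 0.0
    for t in range(len(returns)):
        log_worth += np.log(1.0 - cost * np.abs(weights[t] - held).sum())
        growth = weights[t] * (1.0 + returns[t])
        log_worth += np.log(growth.sum())
        held = growth / growth.sum()   # drifted weights entering month t+1
    return log_worth
```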
4
Experimental Results
Four sets of experiments with different types of parameter sharing were performed,
with two different architectures for the neural network: 5-3-1 (5 inputs, a hidden
layer of 3 units, and 1 output) , 5-3-2-1 (5 inputs, 3 units in the first hidden layer,
2 units in the second hidden layer, and 1 output) . The output represents the belief
that the value of the stock is going to increase (or the expected future return over
three months when training with the MSE criterion) .
Four types of parameter sharing between the different models for each stock are
compared in these experiments: sharing everything (the same parameters for all the
stocks) , sharing only the parameters (weights and biases) of the first hidden layers ,
sharing only the output layer parameters, and not sharing anything (independent
models for each stock).
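A sketch of how such sharing can be arranged: shared layers simply alias one common parameter array across the per-stock networks, so a gradient step on any stock's shared layer moves it for all stocks (names and shapes are illustrative):

```python
import numpy as np

def init_models(n_stocks, arch=(5, 3, 1), share=("hidden",), seed=0):
    """Per-stock networks for the 5-3-1 case; layers listed in `share`
    (or all of them, if "all" is listed) point at the same array object."""
    rng = np.random.default_rng(seed)
    names = ("hidden", "output")
    shapes = [(arch[i + 1], arch[i] + 1) for i in range(len(names))]  # +1 bias
    common = {n: 0.1 * rng.standard_normal(s) for n, s in zip(names, shapes)}
    models = []
    for _ in range(n_stocks):
        layers = {}
        for n, s in zip(names, shapes):
            if "all" in share or n in share:
                layers[n] = common[n]                     # shared parameters
            else:
                layers[n] = 0.1 * rng.standard_normal(s)  # private parameters
        models.append(layers)
    return models
```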
The main results for the test period, using the 5-3-1 architecture, are summarized
in Table 1, and graphically depicted in Figure 1 with the worth curves for the four
types of sharing. The results for the test period, using the 5-3-2-1 architecture are
summarized in Table 2. The ANNs were compared to two benchmarks: a buy-andhold benchmark (with uniform initial weights over all 35 stocks), and the TSE300
Index. Since the buy-and-hold benchmark performed better (8.3% yearly return)
than the TSE300 Index (4.4% yearly return) during the 02/89-01/94 test period,
Tables 1, and 2 give comparisons with the buy-and-hold benchmark. Variations
of average yearly return on the test period due to different initial weights were
computed by performing each of the experiments 18 times with different random
seeds. The resulting standard deviations are less than 3.7 when no parameters or
all the parameters are shared, less than 2.7 when the parameters of the first hidden
layers are shared, and less than 4.2 when the output layer is shared.
The values of beta and alpha are computed by fitting the monthly return of the
portfolio r p to the return of the benchmark r M , both adjusted for the risk-free return
Table 1: Comparative results for the 5-3-1 architecture: four types of sharing are
compared with the buy-and-hold benchmark (see text).

                                buy &    share   share    share    no
                                hold     all     hidden   output   sharing
Average yearly return           8.3%     13%     24.8%    23.4%    22.8%
Standard deviation (monthly)    3.5%     4.3%    5.3%     5.3%     5.2%
Beta                            1        1.07    1.30     1.26     1.26
Alpha (yearly)                  0        9%      21.8%    20.6%    19.9%
t-statistic for alpha = 0       NA       11      14.9     15       14
Reward to variability           0.9%     9.6%    24.7%    22.9%    22.3%
Excess return above benchmark   0        4.7%    16.4%    15.1%    14.5%
Maximum drawdown                15.7%    13.3%   13.4%    13.3%    13.3%
Table 2: Comparative results for the 5-3-2-1 architecture: three types of sharing
are compared with the buy-and-hold benchmark (see text).

                                buy &    share   share first   share all   no
                                hold     all     hidden        hidden      sharing
Average yearly return           8.3%     12.5%   22.7%         23%         9.1%
Standard deviation (monthly)    3.5%     4.0%    5.2%          5.2%        3.1%
Beta                            1        1.02    1.25          1.28        0.87
Alpha (yearly)                  0        8.2%    19.7%         20.1%       4.0%
t-statistic for alpha = 0       NA       12.1    14.1          14.8        21.2
Reward to variability           0.9%     9.3%    22.2%         22.5%       2.5%
Excess return above benchmark   0        4.2%    14.4%         14.7%       0.8%
Maximum drawdown                15.7%    13%     12.6%         13.4%       10%
$r_i$ (interest rates), according to the linear regression $E(r_p - r_i) = \alpha + \beta\,(r_M - r_i)$. Beta is simply the ratio of the covariance between the portfolio return and the
market return with the variance of the market. According to the Capital Asset
Pricing Model (Sharpe, 1964), beta gives a measure of "systematic" risk, i.e., as it
relates to the risk of the market, whereas the variance of the return gives a measure
of total risk. The value of alpha in the tables is annualized (as a compound return):
it represents a measure of excess return (over the market benchmark) adjusted for
market risk (beta). The hypothesis that alpha = 0 is clearly rejected in all cases
(with t-statistics above 9, and corresponding p-values very close to 0). The reward
to variability (or "Sharpe ratio"), as defined in (Sharpe, 1966), is another
risk-adjusted measure of performance: E(r_p - r_i) / sigma_p, where sigma_p is
the standard deviation of
the portfolio return (monthly returns were used here). The excess return above
benchmark is the simple difference (not risk-adjusted) between the return of the
portfolio and that of the benchmark. The maximum drawdown is another measure
of risk, and it can be defined in terms of the worth curve: worth[t] is the ratio
between the value of the portfolio at time t and its value at time 0. The maximum
drawdown is then defined as

max_t [ (max_{s<=t} worth[s] - worth[t]) / max_{s<=t} worth[s] ].
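The following sketch shows how these measures could be computed from monthly return series; the data and the helper names (`alpha_beta`, `max_drawdown`) are hypothetical, not taken from the paper:

```python
import numpy as np

def alpha_beta(r_p, r_M, r_i):
    """OLS fit of E(r_p - r_i) = alpha + beta (r_M - r_i), monthly."""
    x, y = r_M - r_i, r_p - r_i
    beta = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)
    alpha = y.mean() - beta * x.mean()
    return alpha, beta

def reward_to_variability(r_p, r_i):
    return (r_p - r_i).mean() / r_p.std(ddof=1)   # Sharpe (1966)

def max_drawdown(worth):
    peak = np.maximum.accumulate(worth)           # max_{s<=t} worth[s]
    return np.max((peak - worth) / peak)

rng = np.random.default_rng(1)
r_p = rng.normal(0.015, 0.05, 60)     # 60 monthly portfolio returns (toy)
r_M = rng.normal(0.007, 0.035, 60)    # benchmark returns (toy)
r_i = np.full(60, 0.006)              # risk-free T-bill returns (toy)

alpha, beta = alpha_beta(r_p, r_M, r_i)
alpha_yearly = (1 + alpha) ** 12 - 1  # annualized as a compound return
worth = np.cumprod(1 + r_p)           # worth[t] relative to time 0
print(alpha_yearly, beta, reward_to_variability(r_p, r_i), max_drawdown(worth))
```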
Three conclusions clearly come out of the tables and figure: (1) The main improvement is obtained by allowing some parameters to be not shared (for the 5-3-1
architecture, although the best results are obtained with a shared hidden and a free
output layer, there are no significant differences between the different types of partial
sharing, or no sharing at all). (2) Sharing some parameters yielded more consistent
results (across architectures) than when not sharing at all. (3) The performance
obtained in this way is very much better than that obtained by the benchmarks
(buy-and-hold or TSE300), i.e., the yearly return is more than 14% above the best
benchmark, while the risks are comparable (as measured by standard deviation of
return or by maximum drawdown).

Figure 1: Evolution of total worth in the 5-year test period 02/89-01/94, for the
5-3-1 architecture, and different types of sharing. From top to bottom: sharing
the hidden layer, no sharing across stocks, sharing the output layer, sharing
everything, sharing everything with MSE training, Buy and Hold benchmark, TSE300
benchmark.
5
Future Work
We will extend the results presented here in two directions. Firstly, given the
impressive results obtained with the described approach, we would like to repeat
the experiment on different data sets, for different markets. Secondly, we would like
to generalize the type of multi-task learning by allowing for more freedom in the
way the different tasks influence each other.
Following (Omohundro, 1996), the basic idea is to re-parameterize the parameters
theta_i in R^{n1} of the i-th model, for all n models, in the following way:
theta_i = f(p_i, w) where p_i is in R^{n2}, w is in R^{n3}, and
n x n2 + n3 < n x n1. For example, if f(.) is an affine
function, this forces the parameters of each of the n different networks to lie on the
same linear manifold. The position of a point on the manifold is given by an
n2-dimensional vector p_i, and the manifold itself is specified by the n3 parameters of w.
The expected advantage of this approach with respect to the one used in this paper
is that different models (e.g., corresponding to different stocks) may "share" more
or less depending on how far their p_i is from the p_j's for other models. One does
not have to specify which parameters are free and which are shared, but one has to
specify how many are really free (n2) per model, and the shape of the manifold.
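A minimal sketch of the affine case (hypothetical sizes and random values; w is assumed to collect the basis B and offset c of the manifold):

```python
import numpy as np

n, n1, n2 = 35, 22, 4            # n models; manifold dimension n2 < n1
rng = np.random.default_rng(0)

B = rng.normal(size=(n1, n2))    # part of w: manifold basis
c = rng.normal(size=n1)          # part of w: manifold offset
P = rng.normal(size=(n, n2))     # coordinates p_i, one row per model

def theta(i):
    """Parameters of model i: an affine function of its coordinates p_i,
    so all n parameter vectors lie on the same linear manifold."""
    return B @ P[i] + c

# Models with nearby p_i effectively "share" more than models far apart.
closeness = np.linalg.norm(P[3] - P[8])
```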
6
Conclusion
The results presented in this paper show an interesting application of the ideas of
multi-task learning to stock selection. In this paper we have addressed the question of whether ANNs trained for stock selection or portfolio management should
be different for each stock or shared across all the stocks. We have found significantly better results when some or (sometimes) all of the parameters of the stock
models are free (not shared). Since a parsimonious model is always preferable, we
conclude that partially sharing the parameters is even preferable, since it does not
yield a deterioration in performance, and it yields more consistent results. Another
interesting conclusion of this paper is that very large returns can be obtained at
risks comparable to the market using a combination of partial parameter sharing
and training with respect to a financial training criterion, with a small number
of explanatory input features that include technical, micro-economic and macroeconomic information.
References
Baxter, J. (1995). Learning internal representations. In Proceedings of the Eighth International Conference on Computational Learning Theory, pages 311-320, Santa Cruz,
California. ACM Press.
Bengio, Y. (1996). Using a financial training criterion rather than a prediction criterion.
Technical Report #1019, Dept. Informatique et Recherche Operationnelle, Universite
de Montreal.
Caruana, R. (1995). Learning many related tasks at the same time with backpropagation. In Tesauro, G., Touretzky, D. S., and Leen, T. K., editors, Advances in Neural
Information Processing Systems, volume 7, pages 657-664, Cambridge, MA. MIT
Press.
Caruana, R., Baluja, S., and Mitchell, T. (1996). Using the future to "sort out" the
present: Rankprop and multitask learning for medical risk evaluation. In Advances
in Neural Information Processing Systems, volume 8.
Intrator, N. and Edelman, S. (1996). How to make a low-dimensional representation
suitable for diverse tasks. Connection Science, Special issue on Transfer in Neural
Networks. to appear.
Moody, J. , Levin, U., and Rehfuss, S. (1993). Predicting the U.S. index of industrial
production. Neural Network World, 3(6):791-794.
Omohundro, S. (1996). Family discovery. In Mozer, M., Touretzky, D., and Perrone,
M., editors, Advances in Neural Information Processing Systems 8. MIT Press, Cambridge, MA.
Pratt, L. Y. (1993). Discriminability-based transfer between neural networks. In Giles,
C. L., Hanson, S. J., and Cowan, J., editors, Advances in Neural Information Processing Systems 5, pages 204-211, San Mateo, CA. Morgan Kaufmann.
Refenes, A. (1994). Stock performance modeling using neural networks: a comparative
study with regression models. Neural Networks, 7(2):375-388.
Sharpe, W. (1964). Capital asset prices: A theory of market equilibrium under conditions
of risk. Journal of Finance, 19:425-442.
Sharpe, W. (1966). Mutual fund performance. Journal of Business, 39(1):119-138.
Silver, D. L. and Mercer, R. E. (1995). Toward a model of consolidation: The retention and
transfer of neural net task knowledge. In Proceedings of the INNS World Congress
on Neural Networks, volume 3, pages 164-169, Washington, DC.
245 | 1,222 | Extraction of temporal features in the
electrosensory system of weakly electric
fish
Fabrizio Gabbiani
Division of Biology
139-74 Caltech
Pasadena, CA 91125

Walter Metzner
Department of Biology
Univ. of Cal. Riverside
Riverside, CA 92521-0427

Ralf Wessel
Department of Biology
Univ. of Cal. San Diego
La Jolla, CA 92093-0357

Christof Koch
Division of Biology
139-74 Caltech
Pasadena, CA 91125
Abstract
The encoding of random time-varying stimuli in single spike trains
of electrosensory neurons in the weakly electric fish Eigenmannia
was investigated using methods of statistical signal processing. At
the first stage of the electrosensory system, spike trains were found
to encode faithfully the detailed time-course of random stimuli,
while at the second stage neurons responded specifically to features
in the temporal waveform of the stimulus. Therefore stimulus information is processed at the second stage of the electrosensory system
by extracting temporal features from the faithfully preserved image
of the environment sampled at the first stage.
1
INTRODUCTION
The weakly electric fish, Eigenmannia, generates a quasi-sinusoidal, dipole-like electric field at individually fixed frequencies (250 - 600 Hz) by discharging an electric
organ located in its tail (see Bullock and Heiligenberg, 1986 for reviews). The fish
sense local changes in the electric field by means of two types of tuberous electroreceptors located on the body surface. T-type electroreceptors fire phase-locked to
the zero-crossing of the electric field once per cycle of the electric organ discharge
*email: gabbiani@klab.caltech.edu, wmetzner@mail.ucr.edu, rwessel@jeeves.ucsd.edu, koch@klab.caltech.edu.
(EOD) and are thus able to encode phase changes. P-type electroreceptors fire in
a more loosely phase-locked manner with a probability smaller than 1 per EOD.
Their probability of firing increases with the mean amplitude of the field, thereby
allowing them to encode amplitude changes (Zakon, 1986).
This information is used by the fish in order to locate objects (electrolocation,
Bastian 1986) as well as for communication with conspecifics (Hopkins, 1988). One
behavior which has been most thoroughly studied (Heiligenberg, 1991), the jamming
avoidance response, occurs when two fish with similar EOD frequency (less than
15 Hz difference) approach close enough to sense each other's field. In order to
minimize beat patterns resulting from their summed electric fields, the fish with
the higher (resp. lower) EOD raises further (resp. lowers) its own EOD frequency.
The resulting increase in frequency difference reduces the distortions in the interfering
EODs. The fish is known to correlate phase differences computed across body
regions with local amplitude increases or decreases in order to determine whether
it should raise or lower its own EOD.
At the level of the first central nervous nucleus of the electrosensory pathway, the
electrosensory lateral line lobe of the hindbrain (ELL), phase and amplitude information are processed nearly independently of each other (Maler et al., 1981).
Amplitude information is encoded in the spike trains of ELL pyramidal cells that
receive input from P-receptor afferents and transmit their information to higher
order levels of the electrosensory system. Two functional classes of pyramidal cells
are distinguished : E-type pyramidal cells respond by raising their firing frequency
to increases in the amplitude of an externally applied electric field while I-type pyramidal cells increase their firing frequency when the amplitude decreases (Bastian,
1981).
The aim of this study was to characterize the temporal information processing
performed by ELL pyramidal cells on random electric field amplitude modulations
and to relate it to the information carried by P-receptor afferents. To this end
we recorded the responses of P-receptor afferents and pyramidal cells to random
electric field amplitude modulations and analyzed them using two different methods:
a signal estimation method characterizing to what extent the neuronal response
encodes the detailed time-course of the stimulus, and a signal-detection method
developed to identify features encoded in spike trains. These two methods as well
as the electrophysiology are explained in the next section followed by the result
section and a short discussion.
2
METHODS
2.1
ELECTROPHYSIOLOGY
Adult specimens of Eigenmannia were immobilized by intramuscular injection of
a curare-like drug (Flaxedil). This also strongly attenuated the fish's EODs. The
fish were held in place by a foam-lined clamp in an experimental tank and an EOD
substitute electric field was established by two electrodes placed in the mouth and
near the tail of the fish. The carrier frequency of the electric field, f_carrier, was
chosen equal to the EOD frequency prior to curarization, and the voltage generating
the stimulus was modulated according to

V(t) = A_0 (1 + s(t)) cos(2 pi f_carrier t),

where A_0 is the mean amplitude and s(t) is a random, zero-mean modulation having
a flat (white) spectrum up to a variable cut-off frequency f_c and a variable standard
deviation sigma. The modulation s(t) was generated by playing a blank tape on a tape
recorder, passing the signal through a variable cut-off frequency low-pass filter before
multiplying it by the carrier frequency signal in a function generator (fig. 1A).

Figure 1: A. Schematic drawing of the experimental set-up: tape recorder (T),
variable cutoff frequency Bessel filter (BF) and function generator (FG). B. Sample
amplitude modulation s(t) and intracellular recording from a pyramidal cell (I-type,
f_c = 12 Hz). Spikes occurring in bursts are marked with an asterisk (see sect. 2.3.2).
The intracellular voltage trace reveals a high-frequency noise caused by the EOD
substitute signal and a high electrode resistance.
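A short sketch of this stimulus construction (parameter values are hypothetical, and scipy's digital Bessel filter stands in for the analog filter of fig. 1A):

```python
import numpy as np
from scipy.signal import bessel, lfilter

fs, T = 2000.0, 10.0                      # sampling rate (Hz), duration (s)
t = np.arange(0, T, 1 / fs)
f_carrier, A0, f_c, sigma = 400.0, 1.0, 12.0, 0.2

b, a = bessel(4, f_c / (fs / 2))          # low-pass Bessel filter at f_c
s = lfilter(b, a, np.random.default_rng(0).standard_normal(t.size))
s = sigma * (s - s.mean()) / s.std()      # zero mean, standard deviation sigma

V = A0 * (1 + s) * np.cos(2 * np.pi * f_carrier * t)
```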
Extracellular recordings from P-receptor afferents were made by exposing the anterior lateral line nerve. Intracellular recordings from ELL pyramidal cells were
obtained by positioning electrodes in the central region of the pyramidal cell layer.
Intracellular recording electrodes were filled with neurotracer (neurobiotin) to be
used for subsequent intracellular labeling if the recordings were stable long enough.
This allowed us to verify the cell type and its location within the ELL. In case no intracellular labeling could be made, the recording site was verified by setting electrolytic
lesions at the conclusion of the experiment. In the subsequent data analysis, data
from E- and I-type pyramidal cells and from two different maps (centromedial and
lateral, Carr et al., 1982) were pooled. For further experimental details, see Wessel
et al. (1996), Metzner and Heiligenberg (1991), Metzner (1993).
2.2
SIGNAL ESTIMATION
The ability of single spike trains to carry detailed time-varying stimulus information
was assessed by estimating the stimulus from the spike train. The spike trains,
x(t) = sum_i delta(t - t_i), where t_i are the spike occurrence times, were convolved with a
filter h (Wiener-Kolmogorov filtering; Poor, 1994; Bialek et al., 1991),

s_est(t) = integral dt_1 h(t_1) x(t - t_1),

chosen in order to minimize the mean square error between the true stimulus and
the estimated stimulus,

epsilon^2 = <(s(t) - s_est(t))^2>.
The optimal filter h(t) is determined in Fourier space as the ratio of the Fourier
transform of the cross-correlation between stimulus and spike train,
R_xs(tau) = <x(t) s(t + tau)>, and the Fourier transform of the autocorrelation
(power spectrum) of the spike train, R_xx(tau) = <x(t) x(t + tau)>. The accuracy at which single spike trains
transmit information about the stimulus was characterized in the time domain by
the coding fraction, defined as gamma = 1 - epsilon/sigma, where sigma is the standard deviation of
the stimulus. The coding fraction is normalized between 1, when the stimulus is
perfectly estimated by the spike train (epsilon = 0), and 0, when the estimation
performance of the spike train is at chance level (epsilon = sigma). Thus, the coding fraction can be
compared across experiments. For further details and references on this stimulus
estimation method in the context of neuronal sensory processing, see Gabbiani and
Koch (1996) and Gabbiani (1996).
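A sketch of the estimation procedure on synthetic data (segment-averaged spectra stand in for the expectations; the encoding model and all sizes are hypothetical):

```python
import numpy as np

def estimate_filter(s, x, nseg=64):
    """H(w) = <X*(w) S(w)> / <|X(w)|^2>, averaged over data segments."""
    L = s.size // nseg
    num = np.zeros(L // 2 + 1, dtype=complex)
    den = np.zeros(L // 2 + 1)
    for k in range(nseg):
        seg = slice(k * L, (k + 1) * L)
        S, X = np.fft.rfft(s[seg]), np.fft.rfft(x[seg])
        num += X.conj() * S
        den += np.abs(X) ** 2
    return num / (den + 1e-12)

def coding_fraction(s, x, H, nseg=64):
    """gamma = 1 - epsilon/sigma, with circular filtering per segment."""
    L, err2 = s.size // nseg, 0.0
    for k in range(nseg):
        seg = slice(k * L, (k + 1) * L)
        s_est = np.fft.irfft(H * np.fft.rfft(x[seg]), n=L)
        err2 += np.mean((s[seg] - s_est) ** 2)
    return 1.0 - np.sqrt(err2 / nseg) / s.std()

rng = np.random.default_rng(0)
s = rng.standard_normal(1 << 16)                         # toy stimulus
x = (rng.random(s.size) < np.clip(0.2 + 0.3 * s, 0, 1)).astype(float)
x -= x.mean()                                            # zero-mean spike train
H = estimate_filter(s, x)
print("coding fraction:", coding_fraction(s, x, H))
```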
2.3
FEATURE EXTRACTION
2.3.1
General procedure
The ability of single spikes to encode the presence of a temporal feature in the stimulus waveform was assessed by adapting a Fisher linear discriminant classification
scheme to our data (Anderson, 1984; sect. 6.5). Each random stimulus wave-form
and spike response of pyramidal cells (resp. P-receptor afferents) were binned. The
bin size Delta was varied between Delta_min = 0.5 ms, corresponding to the sampling
rate, and Delta_max, corresponding to the longest interval leading to a maximum of one
spike per bin. The sampling interval yielding the best performance (see below) was
finally retained. Typical bin sizes were Delta = 7 ms for pyramidal cells (typical mean
firing rate: 30 Hz) and Delta = 1 ms for P-receptor afferents (typical firing rate: 200
Hz). The mean stimulus preceding a spike-containing bin (m_1) or a no-spike-containing
bin (m_0), as well as the covariances (Sigma_1, Sigma_0) of these distributions, were
computed (Anderson, 1984; sect. 3.2). Mean vectors (resp. covariance matrices)
had at most 100 (resp. 100 x 100) components. The optimal linear feature vector
f predicting the occurrence or non-occurrence of a spike was found by maximizing
the signal-to-noise ratio (see fig. 2A and Poor, 1994; sect. IIIB)
SNR(f) = [f . (m_1 - m_0)]^2 / [f . (1/2)(Sigma_0 + Sigma_1) f].    (1)
The vector f is the solution of (m_1 - m_0) = (Sigma_0 + Sigma_1) f. This equation was solved by
diagonalizing Sigma_0 + Sigma_1 and retaining the first n largest eigenvalues accounting for
99% of the variance (Jolliffe, 1986; sect. 6.1 and 8.1). The optimal feature vector
f thus obtained corresponded to up- or downstrokes in the stimulus amplitude
modulation for E- and I-type pyramidal cells respectively, as expected from their
mean response properties to changes in the electric field amplitude (see sect. 1).
Similarly, optimal feature vectors for P-receptor afferents corresponded to upstrokes
in the electric field amplitude (see sect. 1).
Once f was determined, we projected the stimuli preceding a spike or no spike onto
the optimal feature vector (fig. 2A) and computed the probability of correct classification between the two distributions so obtained by the resubstitution method
(Raudys and Jain, 1991). The probability of correct classification (P_CC) is obtained
by optimizing the value of the threshold used to separate the two distributions in
order to maximize

P_CC = (1/2) [P_CD + (1 - P_FA)],    (2)

where the probabilities of false alarm (P_FA) and correct detection (P_CD) depend
on the threshold.
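A sketch of this scheme on toy Gaussian data follows (all sizes are hypothetical, and `best_pcc` uses the equal weighting of detections and false alarms written out in eq. 2):

```python
import numpy as np

def fisher_feature(S0, S1):
    """Rows of S0/S1: stimuli preceding no-spike / spike bins.
    Solves (m1 - m0) = (Sigma0 + Sigma1) f for the feature vector f."""
    m0, m1 = S0.mean(0), S1.mean(0)
    C = np.cov(S0.T) + np.cov(S1.T)
    return np.linalg.solve(C, m1 - m0)

def best_pcc(p0, p1):
    """Scan thresholds on the projections; maximize (P_CD + 1 - P_FA)/2."""
    best = 0.0
    for th in np.concatenate([p0, p1]):
        p_cd = np.mean(p1 > th)              # correct detection
        p_fa = np.mean(p0 > th)              # false alarm
        best = max(best, 0.5 * (p_cd + 1.0 - p_fa))
    return best

rng = np.random.default_rng(0)
S0 = rng.normal(0.0, 1.0, (500, 100))        # toy "no spike" stimuli
S1 = rng.normal(0.3, 1.0, (500, 100))        # toy "spike" stimuli
f = fisher_feature(S0, S1)
print("P_CC =", best_pcc(S0 @ f, S1 @ f))
```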
2.3.2
Distinction between isolated spikes and burst-like spike patterns
A large fraction (56% +/- 21%, n = 30) of spikes generated by pyramidal cells in
response to random electric field amplitude modulations occurred in bursts (mean
burst length: 18 +/- 9 ms, mean number of spikes per burst: 2.9 +/- 1.3, n = 30,
fig. 1B). In order to verify whether spikes occurring in bursts corresponded to
a more reliable encoding of the feature vector, we separated the distribution of
stimuli occurring before a spike in two distributions, conditioned on whether the
spike belonged to a burst or not. The stimuli were then projected onto the feature
vector (fig. 2B), as described in 2.3.1, and the probability of correct classification
between the distribution of stimuli occurring before no-spike and isolated-spike bins
was compared to the probability of correct classification between the distribution
of stimuli occurring before no-spike and burst-spike bins (see sect. 3).

Figure 2: A. 2-dimensional example of two random distributions (circles and
squares), as well as the optimal discrimination direction determined by maximizing
the signal-to-noise ratio of eq. 1. The 1-dimensional projection of the two
distributions onto the optimal direction is also shown (compare with B). B. Example
of the distribution of stimuli projected onto the optimal feature vector (same cell
as in fig. 1B): stimuli preceding a bin containing no spike (null), an isolated spike
(isolated), and a spike belonging to a burst (burst). Horizontal scale is arbitrary
(see eq. 1).
3
RESULTS
The results are summarized in fig. 3. Data were analyzed from 30 pyramidal cells
(E- and I-type) and 20 P-receptor afferents for a range of stimulus parameters
(f_c = 2 - 40 Hz, sigma = 0.1 - 0.4; A_0 was varied in order to obtain +/-20 dB changes
around the physiological value of the mean electric field amplitude, which is of the
order of 1 mV/cm). Fig. 3A reports the best probability of correct classification
(eq. 2) obtained for each pyramidal cell (white squares) / P-receptor afferent (black
dots) as a function of the coding fraction observed in the same experiment (note
that for pyramidal cells only burst spikes are shown, see sect. 2.3.2 and fig. 3B).
The horizontal axis shows that while the coding fraction of P-receptor afferents
can be very high (up to 75% of the detailed stimulus time-course is encoded in a
single spike train), pyramidal cells only poorly transmit information on the detailed
time-course of the stimulus (less than 20% in most cases). In contrast, the vertical
axis shows that pyramidal cells outperform P-receptor afferents in the classification
task: it is possible to classify with up to 85% accuracy whether a given stimulus
will cause a short burst of spikes or not by comparing it to a single feature vector
f. This indicates that the presence of an up- or downstroke (the feature vector)
is reliably encoded by pyramidal cells. Fig. 3B shows for each experiment on
the ordinate the discrimination performance (eq. 2) for stimuli preceding isolated
spikes against stimuli preceding no spike. The abscissa plots the discrimination
performance (eq. 2) for stimuli preceding spikes occurring in bursts (white squares,
fig. 3A) or stimuli preceding all spikes (black squares) against stimuli preceding
no spike. The distribution of stimuli occurring before burst spikes (all spikes) is
more easily distinguished from the distribution of stimuli occurring before no spike
than the distribution of stimuli preceding isolated spikes. This clearly indicates that
spikes occurring in bursts carry more reliable information than isolated spikes.

Figure 3: A. Coding fraction and probability of correct classification for pyramidal
cells (white squares, burst spikes only) and P-receptor afferents (black circles). B.
Probability of correct classification against stimuli preceding no spikes, for stimuli
preceding burst spikes or all spikes vs. stimuli preceding isolated spikes. Dashed
line: identical performance.
4
DISCUSSION
We have analyzed the response of P-receptor afferents and pyramidal cells to random
electric field amplitude modulations using methods of statistical signal processing.
The previously studied mean responses of P-receptor afferents and pyramidal cells
to step amplitude changes or sinusoidal modulations of an externally applied electric field left several alternatives open for the encoding and processing of stimulus
information in single spike trains. We find that, while P-receptor afferents encode
reliably the detailed time-course of the stimulus, pyramidal cells do not. In contrast, pyramidal cells perform better than P-receptor afferents in discriminating the
occurrence of up- and downstrokes in the amplitude modulation. The presence of
these features is signaled most reliably to higher order stations in the electrosensory system by short bursts of spikes emitted by pyramidal cells in response to the
stimulus. This code can be expected to be robust against possible subsequent noise
sources, such as synaptic unreliability. The temporal pattern recognition task solved
at the level of the ELL is particularly appropriate for computations which have to
rely on the temporal resolution of up- and downstrokes, such as those underlying
the jamming avoidance response.
Acknowledgments
We thank Jenifer Juranek for computer assistance. Support: UCR and NSF grants,
Center of Neuromorphic Systems Engineering as a part of the NSF ERC Program,
and California Trade and Commerce Agency, Office of Strategic Technology.
References
Anderson, T.W. (1984) An introduction to Multivariate Statistical Analysis. Wiley,
New York.
Bastian, J. (1981) Electrolocation 2. The effects of moving objects and other electrical stimuli on the activities of two categories of posterior lateral line lobe cells in
Apteronotus albifrons. J. Comp. Physiol. A, 144:481-494.
Bialek, W., de Ruyter van Steveninck, R.R. & Warland, D. (1991) Reading a neural
code. Science, 252: 1854-1857.
Bullock, T.H. & Heiligenberg, W. (1986) Electroreception. Wiley, New York.
Carr, C.C., Maler, L. & Sas, E. (1982). Peripheral Organization and Central Projections of the Electrosensory Nerves in Gymnotiform Fish. J. Comp. Neurol.,
211:139-153.
Gabbiani, F. & Koch, C. (1996) Coding of Time-Varying Signals in Spike Trains of
Integrate-and-Fire Neurons with Random Threshold. Neural Comput., 8:44-66.
Gabbiani, F. (1996) Coding of time-varying signals in spike trains of linear and
half-wave rectifying neurons. Network: Comput. Neural Syst., 7:61-85.
Heiligenberg, W. (1991) Neural Nets in electric fish. MIT Press, Cambridge, MA.
Hopkins, C.D. (1988) Neuroethology of electric communication. Ann. Rev. Neurosci., 11:497-535.
Jolliffe, I.T. (1986) Principal Component Analysis. Springer-Verlag, New York.
Maler, L., Sas, E.K.B. & Rogers, J. (1981) The cytology of the posterior lateral line
lobe of high-frequency weakly electric fish (gymnotidae): Dendritic differentiation
and synaptic specificity. J. Comp. Neurol., 255:526-537.
Metzner, W. (1993) The jamming avoidance response in Eigenmannia is controlled
by two separate motor pathways. J. Neurosci., 13:1862-1878.
Metzner, W. & Heiligenberg, W. (1991). The coding of signals in the electric
communication of the gymnotiform fish Eigenmannia: From electroreceptors to
neurons in the torus semicircularis of the midbrain. J. Comp. Physiol. A, 169:
135-150.
Poor, H.V. (1994) An introduction to Signal Detection and Estimation. Springer
Verlag, New York.
Raudys, S.J. & Jain, A.K. (1991) Small sample size effects in statistical pattern
recognition: Recommendations for practitioners. IEEE Trans. Patt. Anal. Mach.
Intell., 13:252-264.
Wessel, R., Koch, C. & Gabbiani, F. (1996) Coding of Time-Varying Electric Field
Amplitude Modulations in a Wave-Type Electric Fish. J. Neurophysiol., 75:2280-2293.
Zakon, H. (1986) The electroreceptive periphery. In: Bullock, T.H. & Heiligenberg,
W. (eds.), Electroreception, pp. 103-156. Wiley, New York.
246 | 1,223 | Learning Appearance Based Models:
Mixtures of Second Moment Experts
Christoph Bregler and Jitendra Malik
Computer Science Division
University of California at Berkeley
Berkeley, CA 94720
email: bregler@cs.berkeley.edu, malik@cs.berkeley.edu
Abstract
This paper describes a new technique for object recognition based on learning
appearance models. The image is decomposed into local regions which are
described by a new texture representation called "Generalized Second Moments" that is derived from the output of multiscale, multiorientation filter
banks. Class-characteristic local texture features and their global composition
are learned by a hierarchical mixture of experts architecture (Jordan & Jacobs).
The technique is applied to a vehicle database consisting of 5 general car
categories (Sedan, Van with back-doors, Van without back-doors, old Sedan,
and Volkswagen Bug). This is a difficult problem with considerable in-class
variation. The new technique has a 6.5% misclassification rate, compared to
eigen-images which give 17.4% misclassification rate, and nearest neighbors
which give 15.7% misclassification rate.
1 Introduction
Until a few years ago neural network and other statistical learning techniques were not very
popular in computer vision domains. Usually such techniques were only applied to artificial
visual data or non-mainstream problems such as handwritten digit recognition.
A significant shift has occurred recently with the successful application of appearance-based
or viewer-centered techniques for object recognition, supplementing the use of 3D models.
Appearance-based schemes rely on collections of images of the object. A principal advantage
is that they implicitly capture both shape and photometric information (e.g. surface reflectance
variation). They have been most successfully applied in the domain of human faces [15, 11, 1,
14], though other 3d objects under fixed lighting have also been considered [13]. View-based
representations lend themselves very naturally to learning from examples: principal component
analysis [15, 13] and radial basis functions [1] have been used.
Approaches such as principal component analysis (or "eigen-images") use global representations
at the image level. The objective of our research was to develop a representation which would
be more 'localist', where representations of different 'parts' of the object would be composed
together to form the representation of the object as a whole. This appears to be essential in order
to obtain robustness to occlusion and ease of generalization when different objects in a class may
have variations in particular parts but not others. A part based view is also more consistent with
what is known about human object recognition (Tanaka and collaborators).
In this paper, we propose a domain independent part decomposition using a 2D grid representation
of overlapping local image regions. The image features of each local patch are represented using a
new texture descriptor that we call "Generalized Second Moments". Related representations have
already been successfully applied to other early-vision tasks like stereopsis, motion, and texture
discrimination. Class-based local texture features and their global relationships are induced using
the "Hierarchical Mixtures of Experts" Architecture (HME) [8].
We apply this technique to the domain of vehicle classification. The vehicles are seen from behind
by a camera mounted above a freeway (Figure 1). We urge the reader to examine Figure 3 to see
examples of the in-class variations in the 5 different categories. Our technique could classify
the five broad categories with a misclassification error as low as 6.5%, while the best results using
eigen-images and nearest neighbor techniques were 17.4% and 15.7% misclassification error.
Figure 1: Typical shot of the freeway segment
2 Representation
An appearance based representation should be able to capture features that discriminate the
different object categories. It should capture both local textural and global structural information.
This corresponds roughly to the notion in 3D object models of (i) parts (ii) relationship between
parts.
2.1
Structural Description
Objects usually can be decomposed into parts. A face consists of eyes, nose, and mouth. Cars
are made out of window screens, taillights, license plates etc. The question is what granularity is
appropriate and how much domain knowledge should be exploited. A car could be a single part in
a scene, a license plate could be a part, or the letters in the license plate could be the decomposed
parts. Eyes, nose, and mouth could be the most important parts of a face for recognition, but
maybe other parts are important as well.
It would be advantageous if each part could be described in a decoupled way using a representation
that was most appropriate for it. Object classification should be based on these local part
descriptions and the relationship between the parts. The partitioning reduces the complexity
greatly and invariance to the precise relation between the parts could be achieved.
For our domain of vehicle classification we don't believe it is appropriate to explicitly code any
part decomposition. The kind and number of useful parts might vary across different car makes.
The resolution of the images (100x100 pixel) restricts us to a certain degree of granularity. We
decided to decompose the image using a 2D grid of overlapping tiles or Gaussian windows, but
only local classification for each tile region is done. The content of each local tile is represented
by a feature vector (next section). The generic grid representation allows the mixture of experts
architecture to induce class-based part decomposition, and extract local texture and global shape
features. For example the outline of a face could be represented by certain orientation dominances
in the local tiles at positions of the face boundary. The eyes are other characteristic features in
the tiles.
2.2 Local Features
We wanted to extract characteristic features from each local tile. The traditional computer vision
approach would be to find edges and junctions. The weakness of these representations is that
they do not capture the richness of textured regions, and the hard decision thresholds make the
measurement process non-robust.
An alternative view is motivated by our understanding of processing in biological vision systems.
We start by convolving image regions with a large number of spatial filters, at various orientations,
phases, and scales. The response values of such filters contain much more general information
about the local neighborhood, a fact that has now been recognized and exploited in a number of
early vision tasks like stereopsis, motion and texture analysis [16,9,6, 12, 7].
Although this approach is loosely inspired by the current understanding of processing in the
early stages of the primate visual system, the use of spatial filters has many advantages from a
pure analytical viewpoint [9, 7]. We use as filter kernels orientation-selective elongated Gaussian
derivatives. This enables one to gain the power of orientation specific features, such as edges,
without the disadvantage of non-robustness due to hard thresholds. If multiple orientations are
present at a single point (e.g. junctions), they are represented in a natural way. Since multiple
scales are used for the filters, no ad hoc choices have to be made for the scale parameters of
the feature detectors. Interestingly the choices of these filter kernels can also be motivated in a
learning paradigm as they provide very useful intermediate layer units in convolutional neural
networks [3].
The straightforward approach would then be to characterize each image pixel by such a vector
of feature responses. However, note that there is considerable redundancy in the filter responses: particularly at coarse scales, the responses of filters at neighboring pixels are strongly correlated.
We would like to compress the representation in some way. One approach might be to subsample
at coarse scales, another might be to choose feature locations with local magnitude maxima or
high responses across several directions. However there might be many such interesting points
in an image region. It is unclear how to pick the right number of points and how to order them.
Leaving this issue of compressing the filter response representation aside for the moment, let
us study other possible representations of low level image data. One way of representing the
texture in a local region is to calculate a windowed second moment matrix [5]. Instead of finding
maxima of filter responses, the second moments of brightness gradients in the local neighborhood
are weighted and averaged with a circular Gaussian window. The gradient is a special case of
Gaussian oriented filter banks. The windowed second moment matrix takes into account the
response of all filters in this neighborhood. The disadvantage is that gradients are not very
orientation selective and a certain scale has to be selected beforehand. Averaging the gradients
"washes" out the detailed orientation information in complex texture regions .
Orientation histograms would avoid this effect and have been applied successfully for classification [4]. Elongated families of oriented and scaled kernels could be used to estimate the
orientation at each point. But as pointed out already, there might be more than one orientation at
each point, and significant information is lost.
Figure 2: Left image: The black rectangle outlines the selected area of interest. Right image: The
reconstructed scale and rotation distribution of the Generalized Second Moments. The horizontal
axis shows angles between 0 and 180 degrees and the vertical axis shows different scales.
3
Generalized Second Moments
We propose a new way to represent the texture in a local image patch by combining the filter
bank approach with the idea of second moment matrices.
The goal is to compute a feature vector for a local image patch that contains information about
the orientation and scale distribution. We compute for each pixel in the image patch the R basis
kernel responses (using X-Y separable steerable scalable approximations of a rich filter family).
Given a spatial weighting function of the patch (e.g. Gaussian), we compute the covariance
matrix of the weighted set of R dimensional vectors. In [2] we show that this covariance matrix
can be used to reconstruct for any desired oriented and scaled version of the filter family the
weighted sum of all filter response energies:
E(theta, sigma) = sum_{x,y} W(x, y) [F_{theta,sigma}(x, y)]^2    (1)
Using elongated kernels produces orientation/scale peaks; therefore the sum of all orientation/scale responses doesn't "wash" out high peaks. The height of each individual peak corresponds to the intensity in the image. Small noisy orientations have no high-energy responses in the
sum. E(theta, sigma) is in effect a "soft" orientation/scale histogram of the local image patch. Figure
2 shows an example of such a scale/orientation reconstruction based on the covariance matrix
(see [2] for details). Three peaks are seen, representing the edge lines along three directions and
scales in the local image patch.
This representation greatly reduces the dimensionality without being domain specific or applying
any hard decisions. It is shift invariant in the local neighborhood and decouples scale in a nice
way. Dividing the R x R covariance matrix by its trace makes this representation also illumination
invariant.
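As a minimal sketch of the descriptor, the following uses a two-kernel Gaussian-derivative basis (R = 2) as a stand-in for the richer steerable-scalable family of the paper, so the steering coefficients reduce to the standard cos/sin pair:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def second_moment_descriptor(patch, sigma_w=6.0):
    """Trace-normalized weighted covariance of basis-kernel responses."""
    responses = np.stack([
        gaussian_filter(patch, 1.0, order=(0, 1)),   # d/dx
        gaussian_filter(patch, 1.0, order=(1, 0)),   # d/dy
    ])
    R, H, W = responses.shape
    yy, xx = np.mgrid[:H, :W]
    w = np.exp(-((xx - W / 2) ** 2 + (yy - H / 2) ** 2) / (2 * sigma_w ** 2))
    V = responses.reshape(R, -1) * np.sqrt(w.ravel())
    C = V @ V.T                      # R x R second moment matrix
    return C / np.trace(C)           # trace-normalized: illumination invariant

def energy(C, theta):
    """E(theta) = a^T C a for the steered derivative cos*dx + sin*dy (eq. 1)."""
    a = np.array([np.cos(theta), np.sin(theta)])
    return a @ C @ a

patch = np.random.default_rng(0).random((21, 21))
C = second_moment_descriptor(patch)
print(energy(C, 0.0), energy(C, np.pi / 2))
```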
Using a 10x10 grid and a kernel basis of 5 first Gaussian derivatives and 5 second Gaussian
derivatives represents each input image as a 10 . 10 . (5 + 1) . 5 = 3000 dimensional vector (a
5 x 5 covariance matrix has (5 + 1) . 5/2 independent parameters). Potentially we could represent
the full image with one generalized second moment matrix of dimension 20 if we don't care
about capturing the part decomposition.
4
Mixtures of Experts
Even if we only deal with the restricted domain of man-made object categories (e.g. cars), the
extracted features still have a considerable in-class variation. Different car shapes and poses
produce nonlinear class subspaces. Hierarchical Mixtures of Experts (HME by Jordan & Jacobs)
are able to model such nonlinear decision surfaces with a soft hierarchical decomposition of the
feature space and local linear classification experts. Potentially different experts are "responsible"
for different object poses or sub-categories.

Figure 3: Example images of the five vehicle classes (Sedan, Van1, Bug, Old, Van2).
The gating functions decompose the feature space into a nested set of regions using a hierarchical
structure of soft decision boundaries. Each region is the domain for a specific expert that classifies
the feature vectors into object categories. We used generalized linear models (GLIM). Given the
training data and output labels, the gating functions and expert functions can be estimated using
an iterative version of the EM-algorithm. For more detail see [8].
In order to reduce training time and storage requirements, we trained such nonlinear decision
surfaces embedded in one global linear subspace. We choose the dimension of this linear
subspace to be large enough, so that it captures most of the lower-dimensional nonlinearity (3000
dimensional feature space projected into a 64-dimensional subspace estimated by principal
components analysis).
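A sketch of this pipeline on toy data follows (a single-level gated mixture of linear experts stands in for the full hierarchical model, the EM fit itself is omitted, and the data are random; only the sizes follow the text):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(285, 3000))           # toy descriptors
y = rng.integers(0, 5, size=285)           # toy labels (would drive the EM fit)

# Project into a 64-dimensional PCA subspace
Xc = X - X.mean(0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:64].T

def softmax(a):
    e = np.exp(a - a.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

K, D, C = 8, 64, 5                           # 8 experts, 5 classes
Wg = rng.normal(scale=0.01, size=(D, K))     # gating weights
We = rng.normal(scale=0.01, size=(K, D, C))  # one GLIM expert per region

def predict(z):
    g = softmax(z @ Wg)                          # soft expert responsibilities
    p = softmax(np.einsum('kdc,d->kc', We, z))   # per-expert class posteriors
    return g @ p                                 # gate-weighted mixture

probs = predict(Z[0])                        # class probabilities for image 0
```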
5
Experiments
We experimented with a database consisting of images taken from a surveillance camera on a
bridge covering normal daylight traffic on a freeway segment (Figure 1). The goal is to classify
different types of vehicles. We are able to segment each moving object based on motion cues
[10]. We chose the following 5 vehicle classes: Modern Sedan, Old Sedan, Van with back-doors,
Van without back-door, and Volkswagen Bug. The images show the rear of the car across a small
set of poses (Figure 3). All images are normalized to 100x100 pixel using bilinear interpolation.
For this reason the size or aspect ratio can not be used as a feature.
We ran our experiments using two different image representations:
- Generalized Second Moments: A 10 x 10 grid was used. Generalized second moments
were computed^1 using a window of sigma = 6 pixels, and 5 filter bases of 3:1 elongated first
and second Gaussian derivatives on a scale range between 0.25 and 1.0.

Figure 4: The classification errors of four different techniques. The X-axis shows the size of
the training set, and the Y-axis shows the percentage of misclassified test images. HME stands
for Hierarchical Mixtures of Experts, GSM stands for Generalized Second Moments, and 1-NN
stands for Nearest Neighbors.
- Principal Components Analysis ("Eigen-Images"): We used no grid decomposition and
projected the global gray level vector into a 64 dimensional linear space.
Two different classifiers were used:
- HME architecture with 8 local experts.
- A simple 1-Nearest-Neighbor Classifier (1-NN).
Figure 4 shows the classification error rate for all 4 combinations as a function of the size of the
training set. Each experiment is run 5 times with different sampling of training and test images^2.
The database consists of 285 example images; therefore the number of test images is (285 - number of training images).
Across all experiments the HME architecture based on Generalized Second Moments was superior to all
other techniques. The best performance, a misclassification of 6.5%, was achieved using 228
training images. When fewer than 120 training images are used, the HME architecture performed
worse than nearest neighbors.
The most common confusion was between sedans and "old" sedans. The second most common confusion
was between vans with back-doors, vans without back-doors, and old sedans.
^1 We also experimented with grid sizes between 6x6 and 16x16, and with 8 filter bases and a rectangular
window for the second moment statistics, without significant improvement.
^2 For a given training size n we trained 5 classifiers on 5 different training and test sets and computed
the average error rate. The training and test set for each classifier was generated from the same database. The
n training examples were randomly sampled from the database, and the remaining examples were used for
the test set.
6
Conclusion
We have demonstrated a new technique for appearance-based object recognition based on a 2D
grid representation, generalized second moments, and hierarchical mixtures of experts. Experiments have shown that this technique has significantly better performance than other representation
techniques like eigen-images and other classification techniques like nearest neighbors.
We believe that learning such appearance-based representations offers a very attractive methodology. Hand-coding features that could discriminate object categories like the different car types
in our database seems to be a nearly impossible task. The only choice in such domains is to
estimate discriminating features from a set of example images automatically.
The proposed technique can be applied to other domains as well. We are planning to experiment
with face databases, as well as larger car databases and categories, to further investigate the utility
of hierarchical mixtures of experts and generalized second moments.
Acknowledgments We would like to thank Leo Breiman, Jerry Feldman, Thomas Leung, Stuart Russell,
and Jianbo Shi for helpful discussions and Michael Jordan, Lawrence Saul, and Doug Shy for providing
code.
References
[1] D. Beymer, A. Shashua, and T. Poggio. Example based image analysis and synthesis. M.I.T. A.I.
Memo No. 1431, Nov 1993.
[2] C. Bregler and J. Malik. Learning Appearance Based Models: Hierarchical Mixtures of Experts
Approach based on Generalized Second Moments. Technical Report UCB//CSD-96-897, Comp. Sci.
Dep., U.C. Berkeley, http://www.cs/ breglerlsoft.html, 1996.
[3] Y. Le Cun, B. Boser, J.S. Denker, S. Solla, R. Howard, and L. Jackel. Back-propagation applied to
handwritten zipcode recognition. Neural Computation, 1(4), 1990.
[4] W. Freeman and M. Roth. Orientation histograms for hand gesture recognition. In International
Workshop on Automatic Face- and Gesture-Recognition, 1995.
[5] J. Garding and T. Lindeberg. Direct computation of shape cues using scale-adapted spatial derivative
operators. Int. J. of Computer Vision, 17, February 1996.
[6] D.J. Heeger. Optical flow using spatiotemporal filters. Int. J. of Computer Vision, 1, 1988.
[7] D. Jones and J. Malik. Computational framework for determining stereo correspondence from a set
of linear spatial filters. Image and Vision Computing, 10(10), 1992.
[8] M.I. Jordan and R. A. Jacobs. Hierarchical mixtures of experts and the em algorithm. Neural
Computation, 6(2), March 1994.
[9] J.J. Koenderink. Operational significance of receptive field assemblies. Biol. Cybern., 58:163-171,
1988.
[10] D. Koller, J. Weber, and J. Malik. Robust multiple car tracking with occlusion reasoning. In Proc.
Third European Conference on Computer Vision, pages 189-196, May 1994.
[11] M. Lades, J.C. Vorbrueggen, J. Buhmann, J. Lange, C. von der Malsburg, and R.P. Wuertz. Distortion
invariant object recognition in the dynamic link architecture. In IEEE Transactions on Computers,
volume 42, 1993.
[12] J. Malik and P. Perona. Preattentive texture discrimination with early vision mechanisms. J. Opt. Soc.
Am. A, 7(5):923-932, 1990.
[13] H. Murase and S.K. Nayar. Visual learning and recognition of 3-d objects from appearance. Int. J.
Computer Vision, 14(1):5-24, January 1995.
[14] H.A. Rowley, S. Baluja, and T. Kanade. Human face detection in visual scenes. In NIPS, volume 8,
1996.
[15] M. Turk and A. Pentland. Eigenfaces for recognition. Journal of Cognitive Neuroscience, 3(1):71-86,
1991.
[16] R.A. Young. The gaussian derivative theory of spatial vision: Analysis of cortical cell receptive field
line-weighting profiles. Technical Report GMR-4920, General Motors Research, 1985.
247 | 1,224 | Learning From Demonstration
Stefan Schaal
sschaal@cc.gatech.edu; http://www.cc.gatech.edu/fac/Stefan.Schaal
College of Computing, Georgia Tech, 801 Atlantic Drive, Atlanta, GA 30332-0280
ATR Human Information Processing, 2-2 Hikaridai, Seika-cho, Soraku-gun, 619-02 Kyoto
Abstract
By now it is widely accepted that learning a task from scratch, i.e., without
any prior knowledge, is a daunting undertaking. Humans, however, rarely attempt to learn from scratch. They extract initial biases as well as strategies for how to approach a learning problem from instructions and/or demonstrations
of other humans. For learning control, this paper investigates how learning
from demonstration can be applied in the context of reinforcement learning.
We consider priming the Q-function, the value function, the policy, and the
model of the task dynamics as possible areas where demonstrations can speed
up learning. In general nonlinear learning problems, only model-based reinforcement learning shows significant speed-up after a demonstration, while in
the special case of linear quadratic regulator (LQR) problems, all methods
profit from the demonstration. In an implementation of pole balancing on a
complex anthropomorphic robot arm, we demonstrate that, when facing the
complexities of real signal processing, model-based reinforcement learning
offers the most robustness for LQR problems. Using the suggested methods,
the robot learns pole balancing in just a single trial after a 30 second long
demonstration of the human instructor.
1. INTRODUCTION
Inductive supervised learning methods have reached a high level of sophistication. Given
a data set and some prior information about its nature, a host of algorithms exist that can
extract structure from this data by minimizing an error criterion. In learning control, however, the learning task is often less well defined. Here, the goal is to learn a policy, i.e., the
appropriate actions in response to a perceived state, in order to steer a dynamical system to
accomplish a task. As the task is usually described in terms of optimizing an arbitrary performance index, no direct training data exist which could be used to learn a controller in a
supervised way. Even worse, the performance index may be defined over the long term
behavior of the task, and a problem of temporal credit assignment arises in how to credit
or blame actions in the past for the current performance. In such a setting, typical for reinforcement learning, learning a task from scratch can require a prohibitively timeconsuming amount of exploration of the state-action space in order to find a good policy.
On the other hand, learning without prior knowledge seems to be an approach that is rarely
taken in human and animal learning. Knowledge how to approach a new task can be transferred from previously learned tasks, and/or it can be extracted from the performance of a
teacher. This opens the questions of how learning control can profit from these kinds of information in order to accomplish a new task more quickly. In this paper we will focus on
learning from demonstration.
Learning from demonstration, also known as "programming by demonstration", "imitation
learning" , and "teaching by showing" received significant attention in automatic robot assembly over the last 20 years. The goal was to replace the time-consuming manual pro-
1041
Learningfrom Demonstration
a,s
(a)
mI2jj = -/.tiH mg/sin9 + r . 9 E
[-tr.tr]
r(9.r)= (;)' _{;)210gCO{~ r:)
m =1
= I. g = 9.81, J.I = 0.05. r..., =5Nm
define : x = (9.8)'.
U
=r
gramming of a robot by an automatic programming process, solely driven by showing the robot the assembly task
by an expert. In concert with the main stream of Artificial
Intelligence at the time, research was driven by symbolic
approaches: the expert's demonstration was segmented
into primitive assembly actions and spatial relationships
between manipulator and environment, and subsequently
submitted to symbolic reasoning processes (e.g., Lozano-Perez, 1982; Dufay & Latombe, 1983; Segre & Dejong,
1985). More recent approaches to programming by demonstration started to include more inductive learning
components (e.g., Ikeuchi, 1993; Dillmann, Kaiser, &
Ude, 1995). In the context of human skill learning,
teaching by showing was investigated by Kawato, Gandolfo, Gomi, & Wada (1994) and Miyamoto et al. (1996)
for a complex manipulation task to be learned by an anthropomorphic robot arm. An overview of several other
projects can be found in Bakker & Kuniyoshi (1996).
In this paper, the focus lies on reinforcement learning and
how learning from demonstration can be beneficial in this
context. We divide reinforcement learning into two categories: reinforcement learning for nonlinear tasks (Section 2) and for (approximately) linear tasks (Section 3), and investigate how methods like Q-learning, value-function learning, and model-based reinforcement learning can profit from data from a demonstration. In Section 2.3, one example task, pole balancing, is placed in the context of using an actual, anthropomorphic robot to learn it, and we reconsider the applicability of learning from demonstration in this more complex situation.

Figure 1b (cart-pole balancing) equations and parameters:

$$ml\ddot{x}\cos\theta + ml^2\ddot{\theta} - mgl\sin\theta = 0$$
$$(m + m_c)\ddot{x} + ml\ddot{\theta}\cos\theta - ml\dot{\theta}^2\sin\theta = F$$

with the definitions $x = (x, \dot{x}, \theta, \dot{\theta})^T$, $u = F$, $r(x,u) = x^T Q x + u^T R u$, and the parameters $l = 0.75$ m, $m = 0.15$ kg, $m_c = 1.0$ kg, $Q = \mathrm{diag}(1.25, 1, 12, 0.25)$, $R = 0.01$.

Figure 1: a) pendulum swing-up, b) cart-pole balancing.
2. REINFORCEMENT LEARNING FROM DEMONSTRATION
Two example tasks will be the basis of our investigation of learning from demonstration.
The nonlinear task is the "pendulum swing-up with limited torque" (Atkeson, 1994; Doya,
1996), as shown in Figure 1a.
starting from hanging downward. As the maximal torque available is restricted such that
the pendulum cannot be supported against gravity in all states, a "pumping" trajectory is
necessary, similar as in the mountain car example of Moore (1991), but more delicately in
its timing since building up too much momentum during pumping will overshoot the upright position. The (approximately) linear example, Figure 1b, is the well-known cart-pole
balancing problem (Widrow & Smith, 1964; Barto, Sutton, & Anderson, 1983). For both
tasks, the learner is given information about the one-step reward r (Figure 1), and both
tasks are formulated as continuous state and continuous action problems. The goal of each
task is to find a policy which minimizes the infinite horizon discounted reward:
$$V(x(t)) = \int_t^{\infty} e^{-\frac{s-t}{\tau}}\, r(x(s), u(s))\, ds \quad \text{or} \quad V(x(t)) = \sum_{i=t}^{\infty} \gamma^{\,i-t}\, r(x(i), u(i)) \qquad (1)$$
where the left hand equation is the continuous time formulation, while the right hand
equation is the corresponding discrete time version, and where x and u denote a ndimensional state vector and a m-dimensional command vector, respectively. For the
Swing-Up, we assume that a teacher provided us with 5 successful trials starting from different initial conditions. Each trial consists of a time series of data vectors $(\theta, \dot{\theta}, \tau)$ sampled at 60 Hz. For the Cart-Pole, we have a 30 second demonstration of successful balancing, represented as a 60 Hz time series of data vectors $(x, \dot{x}, \theta, \dot{\theta}, F)$. How can these demonstrations be used to speed up reinforcement learning?
2.1 THE NONLINEAR TASK: SWING-UP
We applied reinforcement learning based on learning a value function (V-function) (Dyer
& McReynolds, 1970) for the Swing-Up task, as the alternative method, Q-learning
(Watkins, 1989), has so far received very limited research for continuous state-action spaces.
The V-function assigns a scalar reward value $V(x(t))$ to each state $x$ such that the entire V-function fulfills the consistency equation:

$$V(x(t)) = \min_{u(t)} \left( r(x(t), u(t)) + \gamma\, V(x(t+1)) \right) \qquad (2)$$
For clarity, this equation is given for a discrete state-action system; the continuous formulation can be found, e.g., in Doya (1996). The optimal policy, $u = \pi(x)$, chooses the action $u$ in state $x$ such that (2) is fulfilled. Note that this computation involves an optimization step that includes knowledge of the subsequent state $x(t+1)$. Hence, it requires a model of the dynamics of the controlled system, $x(t+1) = f(x(t), u(t))$. From the viewpoint of learning from demonstration, V-function learning offers three candidates which can be primed from a demonstration: the value function $V(x)$, the policy $\pi(x)$, and the model $f(x,u)$.
2.1.1 V-Learning
In order to assess the benefits of a demonstration for the Swing-Up, we implemented V-learning as suggested in Doya's (1996) continuous TD (CTD) learning algorithm. The V-function and the dynamics model were incrementally learned by a nonlinear function approximator, Receptive Field Weighted Regression (RFWR) (Schaal & Atkeson, 1996). Differing from Doya's (1996) implementation, we used the optimal action suggested by CTD to learn a model of the policy $\pi$ (an "actor" as in Barto et al. (1983)), again represented by RFWR. The following learning conditions were tested empirically:

a) Scratch: Trial by trial learning of value function V, model f, and actor $\pi$ from scratch.
b) Primed Actor: Initial training of $\pi$ from the demonstration, then trial by trial learning.
c) Primed Model: Initial training of f from the demonstration, then trial by trial learning.
d) Primed Actor&Model: Priming of $\pi$ and f as in b) and c), then trial by trial learning.

Figure 2: Smoothed learning curves of the average of 10 learning trials for the learning conditions a) to d) (see text). Good performance is characterized by $T_{up} > 45$ s; below this value the system is usually able to swing up properly but it does not know how to stop in the upright position.
Figure 2 shows the results of learning the Swing-Up. Each trial lasted 60 seconds. The time $T_{up}$ the pole spent in the interval $\theta \in [-\pi/2, \pi/2]$ during each trial was taken as the performance measure (Doya, 1996). Comparing conditions a) and c), the results demonstrate that learning the pole model from the demonstration did not speed up learning. This is not surprising since learning the V-function is significantly more complicated than learning the model, such that the learning process is dominated by V-function learning. Interestingly, priming the actor from the demonstration had a significant effect on the initial performance (condition a) vs. b)). The system knew right away how to pump up the pendulum, but, in order to learn how to balance the pendulum in the upright position, it finally took the same amount of time as learning from scratch. This behavior is due to the
fact that, theoretically, the V-function can only be approximated correctly if the entire
state-action space is explored densely. Only if the demonstration covered a large fraction
of the entire state space one would expect that V-learning can profit from it. We also investigated using the demonstration to prime the V-function by itself or in combination
with the other functions. The results were qualitatively the same as in shown in Figure 2:
if the policy was included in the priming, the learning traces were like b) and d), otherwise
like a) and c). Again, this is not totally surprising. Approximating a V-function is not just
supervised learning as for $\pi$ and f, it requires an iterative procedure to ensure the validity
of (2) and amounts to a complicated nonstationary function approximation process. Given
the limited amount of data from the demonstration, it is generally very unlikely to approximate a good value function.
2.1.2 Model-Based V-Learning
If learning a model f is required, one can make more powerful use of it. According to the certainty equivalence principle, f can substitute the real world, and planning can be run in "mental simulations" instead of interaction with the real world. In reinforcement learning, this idea was originally pursued by Sutton's (1990) DYNA algorithms for discrete state-action spaces. Here we will explore how far a continuous version of DYNA, DYNA-CTD, can help in learning from demonstration. The only difference compared to CTD in Section 2.1.1 is that after every real trial, DYNA-CTD performs five "mental trials" in which the model of the dynamics acquired so far replaces the actual pole dynamics. Two learning conditions were explored:

a) Scratch: Trial by trial learning of V, model f, and policy $\pi$ from scratch.
b) Primed Model: Initial training of f from the demonstration, then trial by trial learning.

Figure 3: Smoothed learning curves of the average of 10 learning trials for the learning conditions a) and b) (see text) of the Swing-Up problem using "mental simulations". See Figure 2 for explanations how to interpret the graph.
Figure 3 demonstrates that in contrast to V-learning in the previous section, learning from
demonstration can make a significant difference now: after the demonstration, it only
takes about 2-3 trials to accomplish a good swing-up with stable balancing, indicated by
$T_{up} > 45$ s. Note that learning from scratch is also significantly faster than in Figure 2.
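The structure of such a run can be illustrated with a tiny, self-contained sketch (our own toy chain, not the paper's RFWR implementation): a tabular value function receives a TD update from each real transition, and five additional updates from transitions replayed through the learned one-step model:

```python
import numpy as np

n_states, gamma, eta = 10, 0.9, 0.1
V = np.zeros(n_states)
model = {}                                  # learned model: s -> (s', r)
rng = np.random.default_rng(0)

def td_update(s, s_next, r):
    V[s] += eta * (r + gamma * V[s_next] - V[s])

for trial in range(100):
    s = int(rng.integers(n_states))         # one real transition
    s_next, r = (s + 1) % n_states, float(s == n_states - 1)
    td_update(s, s_next, r)
    model[s] = (s_next, r)                  # refine the dynamics model
    for _ in range(5):                      # five "mental trials"
        s_m = int(rng.choice(list(model.keys())))
        s_m_next, r_m = model[s_m]
        td_update(s_m, s_m_next, r_m)
```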
2.2 THE LINEAR TASK: CART-POLE BALANCING

One might argue that applying reinforcement learning from demonstration to the Swing-Up task is premature, since reinforcement learning with nonlinear function approximators has yet to obtain appropriate scientific understanding. Thus, in this section we turn to an
easier task: the cart-pole balancer. The task is approximately linear if the pole is started in
a close to upright position, and the problem has been well studied in the dynamic programming literature in the context of linear quadratic regulation (LQR) (Dyer & McReynolds, 1970).
2.2.1 Q-Learning
In contrast to V-learning, Q-learning (Watkins, 1989; Singh & Sutton, 1996) learns a more complicated value function, Q(x,u), which depends both on the state and the command. The analogue of the consistency equation (2) for Q-learning is:

$$Q(x(t), u(t)) = r(x(t), u(t)) + \gamma \min_{u(t+1)} Q(x(t+1), u(t+1)) \qquad (3)$$
At every state x, picking the action u which minimizes Q is the optimal action under the reward function (1). As an advantage, evaluating the Q-function to find the optimal policy does not require a model of the dynamical system f that is to be controlled; only the value of the one-step reward r is needed. For learning from demonstration, priming the Q-function and/or the policy are the two candidates to speed up learning.

For LQR problems, Bradtke (1993) suggested a Q-learning method that is ideally suited for learning from demonstration, based on extracting a policy. He observed that for LQR the Q-function is quadratic in the states and commands:
$$Q(x,u) = [x^T, u^T] \begin{bmatrix} H_{11} & H_{12} \\ H_{21} & H_{22} \end{bmatrix} [x^T, u^T]^T, \qquad H_{11} \in \mathbb{R}^{n \times n},\ H_{22} \in \mathbb{R}^{m \times m},\ H_{12} = H_{21}^T \in \mathbb{R}^{n \times m} \qquad (4)$$

and that the (linear) policy, represented as a gain matrix K, can be extracted from (4) as:

$$u_{opt} = -Kx = -H_{22}^{-1} H_{21}\, x \qquad (5)$$

Conversely, given a stabilizing initial policy $K_{demo}$, the current Q-function can be approximated by a recursive least squares procedure, and it can be optimized by a policy iteration process with guaranteed convergence (Bradtke, 1993). As a demonstration allows one to extract an initial policy $K_{demo}$ by linearly regressing the observed command u against the corresponding observed states x, one-shot learning of pole balancing is achievable. As shown in Figure 4, after about 120 seconds (12 policy iteration steps), the policy is basically indistinguishable from the optimal policy. A caveat of this Q-learning, however, is that it cannot learn without a stabilizing initial policy.

Figure 4: Typical learning curve of a noisy simulation of the cart-pole balancer using Q-learning. The graph shows the value of the one-step reward over time for the first learning trial. The pole is never dropped. In this run, $K_{demo} = [-0.59, -1.81, -18.71, -6.67]$ and $K_{final} = [-5.76, -11.37, -83.05, -21.92]$.
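The policy-extraction step is ordinary least squares. A minimal sketch (our illustration; the synthetic data and noise level are assumptions, and the gain values are taken from Figure 4):

```python
import numpy as np

rng = np.random.default_rng(0)
K_true = np.array([[-0.59, -1.81, -18.71, -6.67]])     # gains as in Figure 4
X = rng.normal(size=(1800, 4))                         # 30 s of states at 60 Hz
U = -X @ K_true.T + 0.05 * rng.normal(size=(1800, 1))  # observed commands

# Regress u on x (least squares), then read off the demonstrated gain matrix.
W, *_ = np.linalg.lstsq(X, U, rcond=None)
K_demo = -W.T                                          # close to K_true
```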
2.2.2 Model-Based V-Learning
Learning an LQR task by learning the V-function is one of the classic forms of dynamic programming (Dyer & McReynolds, 1970). Using a stabilizing initial policy $K_{demo}$, the current V-function can be approximated by recursive least squares in analogy with Bradtke (1993). Similarly as for $K_{demo}$, a (linear) model $f_{demo}$ of the cart-pole dynamics can be extracted from a demonstration by linear regression of the cart-pole state $x(t)$ vs. the previous state and command vector $(x(t-1), u(t-1))$, and the model can be refined with every new data point experienced during learning. The policy update becomes:
$$K = \gamma \left( R + \gamma B^T H B \right)^{-1} B^T H A, \quad \text{where } V(x) = x^T H x, \quad f_{demo} = [A\ B],\ A \in \mathbb{R}^{n \times n},\ B \in \mathbb{R}^{n \times m} \qquad (6)$$
Thus, a similar process as in Bradtke (1993) can be used to find the optimal policy K, and
the system accomplishes one shot learning, qualitatively indistinguishable from Figure 4.
Again, as pointed out in Section 2.1.2, one can make more efficient use of the learned model by performing mental simulations. Given the model $f_{demo}$, the policy K can be calculated by off-line policy iteration from an initial estimate of H, e.g., taken to be the identity matrix (Dyer & McReynolds, 1970). Thus, no initial (stabilizing) policy is required, but rather an estimate of the task dynamics. Also this method achieves one-shot learning.
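A sketch of this off-line policy iteration for a discounted LQR problem (our illustration; the double-integrator model below is an assumed stand-in, not the robot's dynamics). The inner loop evaluates $V(x) = x^T H x$ for the current policy $u = -Kx$, the outer loop improves K according to (6):

```python
import numpy as np

def policy_iteration(A, B, Q, R, gamma=0.95, sweeps=30, eval_iters=300):
    n, m = B.shape
    K, H = np.zeros((m, n)), np.eye(n)      # H starts at the identity
    for _ in range(sweeps):
        Acl = A - B @ K                     # closed-loop dynamics
        for _ in range(eval_iters):         # policy evaluation: V = x^T H x
            H = Q + K.T @ R @ K + gamma * Acl.T @ H @ Acl
        K = gamma * np.linalg.solve(R + gamma * B.T @ H @ B, B.T @ H @ A)
    return K, H

A = np.array([[1.0, 0.02], [0.0, 1.0]])    # assumed double-integrator model
B = np.array([[0.0], [0.02]])
K, H = policy_iteration(A, B, Q=np.eye(2), R=0.01 * np.eye(1))
```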
2.3 POLE BALANCING WITH AN ACTUAL ROBOT
As a result of the previous section, it seems that there are no real performance differences
between V-learning, Q-Iearning, and model-based V-learning for LQR problems. To explore the usefulness of these methods in a more realistic framework, we implemented
learning from demonstration of pole balancing on an anthropomorphic robot arm. The robot is equipped with 60 Hz video-based stereo vision. The pole is marked by two color blobs which can be tracked in real-time. A 30 second long demonstration of pole balancing was provided by a human standing in front of the two robot cameras.
There are a few crucial differences in comparison with the simulations. First, as the demonstration is vision-based, only kinematic variables can be extracted from the demonstration. Second, visual signal processing has about 120ms time delay. Third, a command
given to the robot is not executed with very high accuracy due to unknown nonlinearities
of the robot. And lastly, humans use internal state for pole balancing, i.e., their policy is
partially based on non-observable variables. These issues have the following impact:
Kinematic Variables: In this implementation, the robot arm replaces the cart of the Cart-Pole problem. Since we have an estimate of the inverse dynamics and inverse kinematics of the arm, we can use the acceleration of the finger in Cartesian space as command input to the task. The arm is also much heavier than the pole, which allows us to neglect the interaction forces the pole exerts on the arm. Thus, the pole balancing dynamics of Figure 1b can be reformulated as:

$$u\, ml\cos\theta + \ddot{\theta}\, ml^2 - mgl\sin\theta = 0, \qquad \ddot{x} = u \qquad (7)$$

All variables in this equation can be extracted from a demonstration. We omit the 3D extension of these equations.

Figure 5: Sketch of SARCOS anthropomorphic robot arm.
Delayed Visual Information: There are two possibilities of dealing with delayed variables. Either the state of the system is augmented by delayed commands corresponding to the 7 * (1/60) s $\approx$ 120 ms delay time, $x^T = (x, \dot{x}, \theta, \dot{\theta}, u_{t-1}, u_{t-2}, \ldots, u_{t-7})$, or a state predictive controller is employed. The former method increases the complexity of a policy significantly, while the latter method requires a model f.
Inaccuracies of Command Execution: Given an acceleration command u, the robot will execute something close to u, but not u exactly. Thus, learning a function which includes u, e.g., the dynamics model (7), can be dangerous since the mapping $(x, \dot{x}, \theta, \dot{\theta}, u) \rightarrow (\ddot{x}, \ddot{\theta})$ is contaminated by the nonlinear dynamics of the robot arm. Indeed, it turned out that we could not learn such a model reliably. This could be remedied by "observing" the command u, i.e., by extracting $u = \ddot{x}$ from visual feedback.
Internal State in Demonstrated Policy: Investigations with human subjects have shown that humans use internal state in pole balancing. Thus, a policy cannot be observed that easily anymore as claimed in Section 2.2: a regression analysis for extracting the policy of a teacher must find the appropriate time-alignment of observed current state and command(s) in the past. This can become a numerically involved process as regressing a policy based on delayed commands is endangered by singular regression matrices. Consequently, it easily happens that one extracts a nonstabilizing policy from the demonstration, which prevents the application of Q-learning and V-learning as described in Section 2.2.
As a result of these considerations, the most trustworthy item to extract from a demonstration is the model of the pole dynamics. In our implementation it was used in two ways, for
calculating the policy as in (6), and in state-predictive control with a Kalman filter to
overcome the delays in visual information processing. The model was learned incrementally in real-time by an implementation of RFWR (Schaal & Atkeson 1996). Figure 6
shows the results of learning from scratch and learning from demonstration of the actual
robot. Without a demonstration, it took about 10-20 trials before learning succeeded in reliable performance longer than one minute. With a 30 second long demonstration, learning
was reliably accomplished in one single trial, using a large variety of physically different
poles and using demonstrations from arbitrary people in the laboratory.
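The state-predictive component can be sketched as follows (a hypothetical helper of our own, not the paper's Kalman-filter code): the most recent visually observed state is rolled forward through the learned linear model across the seven command frames that have been issued but are not yet reflected in the delayed feedback:

```python
import numpy as np

def predict_current_state(x_delayed, pending_commands, A, B):
    # x_delayed: last state seen through vision (about 120 ms old).
    # pending_commands: the 7 commands issued since then, oldest first.
    x = np.asarray(x_delayed, dtype=float)
    for u in pending_commands:
        x = A @ x + B @ u                   # one 1/60 s step of the model
    return x
```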
3. CONCLUSION

Figure 6: Smoothed average of 10 learning curves of the robot for pole balancing. The trials were aborted after successful balancing of 60 seconds. We also tested long term performance of the learning system by running pole balancing for over an hour; the pole was never dropped.

We discussed learning from demonstration in the context of reinforcement learning, focusing on Q-learning, value function learning, and model-based reinforcement learning. Q-learning and value function learning can theoretically profit from a demonstration by extracting a policy, by using the demonstration data to prime the Q/value function, or, in the case of value function learning, by extracting a predictive model of the world. Only in the special case of LQR problems, however, could we find a significant benefit of priming the learner from the demonstration. In contrast, model-based reinforcement learning was able to greatly profit from the demonstration by using the predictive model of the world for "mental simulations". In an implementation with
an anthropomorphic robot arm, we illustrated that even in LQR problems, model-based
reinforcement learning offers larger robustness towards the complexity in real learning
systems than Q-Iearning and value function learning. Using a model-based strategy, our
robot learned pole-balancing from a demonstration in a single trial with great reliability.
The important message of this work is that not every learning approach is equally suited to
allow knowledge transfer and/or the incorporation of biases. This issue may serve as a
critical additional constraint to evaluate artificial and biological models of learning.
Acknowledgments
Support was provided by the ATR Human Information Processing Labs, the German Research Association, the Alexander v. Humboldt Foundation, and the German Scholarship Foundation.

References

Atkeson, C. G. (1994). "Using local trajectory optimizers to speed up global optimization in dynamic programming." In: Moody, Hanson, & Lippmann (Eds.), Advances in Neural Information Processing Systems 6. Morgan Kaufmann.

Bakker, P., & Kuniyoshi, Y. (1996). "Robot see, robot do: An overview of robot imitation." Autonomous Systems Section, Electrotechnical Laboratory, Tsukuba Science City, Japan.

Barto, A. G., Sutton, R. S., & Anderson, C. W. (1983). "Neuronlike adaptive elements that can solve difficult learning control problems." IEEE Transactions on Systems, Man, and Cybernetics, SMC-13, 5.

Bradtke, S. J. (1993). "Reinforcement learning applied to linear quadratic regulation." In: Hanson, J. S., Cowan, J. D., & Giles, C. L. (Eds.), Advances in Neural Information Processing Systems 5, pp. 295-302. Morgan Kaufmann.

Dillmann, R., Kaiser, M., & Ude, A. (1995). "Acquisition of elementary robot skills from human demonstration." In: International Symposium on Intelligent Robotic Systems (SIRS'95), Pisa, Italy.

Doya, K. (1996). "Temporal difference learning in continuous time and space." In: Touretzky, D. S., Mozer, M. C., & Hasselmo, M. E. (Eds.), Advances in Neural Information Processing Systems 8. MIT Press.

Dufay, B., & Latombe, J.-C. (1984). "An approach to automatic robot programming based on inductive learning." In: Brady, M., & Paul, R. (Eds.), Robotics Research, pp. 97-115. Cambridge, MA: MIT Press.

Dyer, P., & McReynolds, S. R. (1970). The Computation and Theory of Optimal Control. NY: Academic Press.

Ikeuchi, K. (1993b). "Assembly plan from observation." School of Computer Science, Carnegie Mellon University, Pittsburgh, PA.

Kawato, M., Gandolfo, F., Gomi, H., & Wada, Y. (1994b). "Teaching by showing in kendama based on optimization principle." In: Proceedings of the International Conference on Artificial Neural Networks (ICANN'94), 1, pp. 601-606.

Lozano-Perez, T. (1982). "Task-Planning." In: Brady, M., Hollerbach, J. M., Johnson, T. L., Lozano-Perez, T., & Mason, M. T. (Eds.), pp. 473-498. MIT Press.

Miyamoto, H., Schaal, S., Gandolfo, F., Koike, Y., Osu, R., Nakano, E., Wada, Y., & Kawato, M. (in press). "A Kendama learning robot based on bi-directional theory." Neural Networks.

Moore, A. (1991a). "Fast, robust adaptive control by learning only forward models." In: Moody, J. E., Hanson, S. J., & Lippmann, R. P. (Eds.), Advances in Neural Information Processing Systems 4. Morgan Kaufmann.

Schaal, S., & Atkeson, C. G. (1996). "From isolation to cooperation: An alternative of a system of experts." In: Touretzky, D. S., Mozer, M. C., & Hasselmo, M. E. (Eds.), Advances in Neural Information Processing Systems 8. Cambridge, MA: MIT Press.

Segre, A. B., & Dejong, G. (1985). "Explanation-based manipulator learning: Acquisition of planning ability through observation." In: Conference on Robotics and Automation, pp. 555-560.

Singh, S. P., & Sutton, R. S. (1996). "Reinforcement learning with eligibility traces." Machine Learning.

Sutton, R. S. (1990). "Integrated architectures for learning, planning, and reacting based on approximating dynamic programming." In: Proceedings of the International Machine Learning Conference.

Watkins, C. J. C. H. (1989). "Learning with delayed rewards." Ph.D. thesis, Cambridge University (UK).

Widrow, B., & Smith, F. W. (1964). "Pattern recognizing control systems." In: 1963 Computer and Information Sciences (COINS) Symp. Proc., 288-317, Washington: Spartan.
248 | 1,225 | Dynamics of Training
Siegfried Bos*
Lab for Information Representation
RIKEN, Hirosawa 2-1, Wako-shi
Saitama 351-01, Japan
Manfred Opper
Theoretical Physics III
University of Wiirzburg
97074 Wiirzburg, Germany
Abstract
A new method to calculate the full training process of a neural network is introduced. No sophisticated methods like the replica trick
are used. The results are directly related to the actual number of
training steps. Some results are presented here, like the maximal
learning rate, an exact description of early stopping, and the necessary number of training steps. Further problems can be addressed
with this approach.
1 INTRODUCTION
Training guided by empirical risk minimization does not always minimize the expected risk. This phenomenon is called overfitting and is one of the major problems
in neural network learning. In a previous work [Bos 1995] we developed an approximate description of the training process using statistical mechanics. To solve this
problem exactly, we introduce a new description which is directly dependent on the
actual training steps. As a first result we get analytical curves for empirical risk and expected risk as functions of the training time, like the ones shown in Fig. 1.
To make the method tractable we restrict ourselves to a quite simple neural network model, which nevertheless demonstrates some typical behavior of neural nets.
The model is a single layer perceptron, which has one N-dimensional layer of adjustable weights W between input x and output z. The outputs are linear, i.e.
$$z = h = \frac{1}{\sqrt{N}} \sum_{i=1}^{N} W_i x_i . \qquad (1)$$
We are interested in supervised learning, where examples $x_i^\mu$ ($\mu = 1, \ldots, P$) are given for which the correct output $z_*$ is known. To define the task more clearly and to monitor the training process, we assume that the examples are provided by another network, the so-called teacher network. The teacher is not restricted to linear outputs; it can have a nonlinear output function $g_*(h_*)$.
* email: boes@zoo.riken.go.jp and opper@physik.uni-wuerzburg.de
Learning by examples attempts to minimize the error averaged over all examples, i.e. $E_T := \frac{1}{2} \langle (z_*^\mu - z^\mu)^2 \rangle_{\{\mu\}}$, which is called training error or empirical risk. In fact what we are interested in is a minimal error averaged over all possible inputs x, i.e. $E_G := \frac{1}{2} \langle (z_* - z)^2 \rangle_{\{x \in \mathrm{Input}\}}$, called generalization error or expected risk. It can be shown [see Bos 1995] that for random inputs, i.e. all components $x_i$ are independent and have zero means and unit variance, the generalization error can be described by the order parameters R and Q,
$$E_G(t) = \frac{1}{2} \left[ G - 2H R(t) + Q(t) \right] \qquad (2)$$

with the two parameters $G = \langle [g_*(h)]^2 \rangle_h$ and $H = \langle g_*(h)\, h \rangle_h$. The order parameters are defined as:

$$R(t) = \left\langle \frac{1}{N} \sum_{i=1}^{N} W_i^* W_i(t) \right\rangle_{\{W_i^*\}}, \qquad Q(t) = \left\langle \frac{1}{N} \sum_{i=1}^{N} W_i(t)\, W_i(t) \right\rangle_{\{W_i^*\}} . \qquad (3)$$
As a novelty in this paper we average the order parameters not, as usual in statistical mechanics, over many example realizations $\{x_i^\mu\}$, but over many teacher realizations $\{W_i^*\}$, where we use a spherical distribution. This corresponds to a Bayesian average
over the unknown teacher. A study of the static properties of this model was done
by Saad [1996]. Further comments about the averages can be found in the appendix.
In the next section we introduce our new method briefly. Readers, who do not
wish to go into technical details on first reading, can turn directly to the results (15)
and (16) . The remainder of the section can be read later, as a proof. In the third
section results will be presented and discussed. Finally, we conclude the paper with
a summary and a perspective on further problems.
2 DYNAMICAL APPROACH
Basically we exploit the gradient descent learning rule, using the linear student, i.e. $g'(h) = 1$ and $z^\mu = h^\mu = \frac{1}{\sqrt{N}} \sum_i W_i x_i^\mu$,

$$W_i(t+1) = W_i(t) + \frac{\eta}{\sqrt{N}} \sum_{\mu=1}^{P} \left( z_*^\mu - h^\mu \right) x_i^\mu . \qquad (4)$$

For $P < N$, the weights are linear combinations of the example inputs $x_i^\mu$, if $W_i(0) = 0$,

$$W_i(t) = \frac{1}{\sqrt{N}} \sum_{\mu=1}^{P} a^\mu(t)\, x_i^\mu . \qquad (5)$$

After some algebra a recursion for $a^\mu(t)$ can be found, i.e.

$$a^\mu(t+1) = a^\mu(t) + \eta \left[ z_*^\mu - \sum_{\nu=1}^{P} \left( \frac{1}{N} \sum_{i=1}^{N} x_i^\mu x_i^\nu \right) a^\nu(t) \right], \qquad (6)$$

where the term in the round brackets defines the overlap matrix $C^{\mu\nu}$. From the geometric series we know the solution of this recursion, and therefore for the weights

$$W_i(t) = \frac{1}{\sqrt{N}} \sum_{\mu,\nu=1}^{P} z_*^\mu \left[ \left( E - (E - \eta C)^t \right) C^{-1} \right]^{\mu\nu} x_i^\nu . \qquad (7)$$

It fulfills the initial conditions $W_i(0) = 0$ and $W_i(1) = \frac{\eta}{\sqrt{N}} \sum_{\mu=1}^{P} z_*^\mu x_i^\mu$ (Hebbian), and yields after infinite time steps the so-called Pseudo-inverse weights, i.e.

$$W_i(\infty) = \frac{1}{\sqrt{N}} \sum_{\mu,\nu=1}^{P} z_*^\mu \left( C^{-1} \right)^{\mu\nu} x_i^\nu . \qquad (8)$$
This is valid as long as the examples are linearly independent, i.e. P < N. Remarks
about the other case (P > N) will follow later.
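Equations (4)-(8) can be checked numerically; the sketch below (our own illustration, with arbitrary sizes and a tanh teacher) runs the batch gradient rule from $W(0) = 0$ and compares the result with the Pseudo-inverse weights of (8):

```python
import numpy as np

rng = np.random.default_rng(0)
N, P, eta = 50, 20, 0.05                      # alpha = P/N = 0.4 < 1
X = rng.normal(size=(P, N))                   # rows are the examples x^mu
W_teacher = rng.normal(size=N)
z_star = np.tanh(5.0 * X @ W_teacher / np.sqrt(N))   # nonlinear teacher

W = np.zeros(N)
for t in range(5000):                         # gradient rule, eq. (4)
    h = X @ W / np.sqrt(N)
    W += eta / np.sqrt(N) * X.T @ (z_star - h)

C = X @ X.T / N                               # overlap matrix C^{mu nu}
W_pinv = X.T @ np.linalg.solve(C, z_star) / np.sqrt(N)   # eq. (8)
assert np.allclose(W, W_pinv, atol=1e-3)
```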
With the expression (7) we can calculate the behavior of the order parameters
for the whole training process. For R(t) we get
$$R(t) = \frac{1}{N} \sum_{\mu,\nu=1}^{P} \left[ \frac{E - (E - \eta C)^t}{C} \right]^{\mu\nu} \left\langle z_*^\mu \left( \frac{1}{\sqrt{N}} \sum_{i=1}^{N} W_i^* x_i^\nu \right) \right\rangle_{\{W_i^*\}} . \qquad (9)$$
For the average we have used expression (21) from the appendix. Similarly we get
for the other order parameter,

$$Q(t) = \frac{1}{N} \sum_{\mu,\nu,\rho,\sigma=1}^{P} \left[ \frac{E - (E - \eta C)^t}{C} \right]^{\mu\nu} \left[ \frac{E - (E - \eta C)^t}{C} \right]^{\rho\sigma} \left\langle z_*^\mu z_*^\rho \left( \frac{1}{N} \sum_{i=1}^{N} x_i^\nu x_i^\sigma \right) \right\rangle_{\{W_i^*\}} . \qquad (10)$$
Again we have applied an identity (20) from the appendix and we did some matrix
algebra. Note, up to this point the order parameters were calculated without any
assumption about the statistics of the inputs. The results hold, even without the
thermodynamic limit.
The trace can be calculated by an integration over the eigenvalues, thus we attain integrals of the following form,

$$\frac{1}{P} \sum_{\mu=1}^{P} \left[ (E - \eta C)^l\, C^m \right]^{\mu\mu} = \int_{\xi_{min}}^{\xi_{max}} d\xi\, \rho(\xi) (1 - \eta\xi)^l\, \xi^m =: I_m^l(t, \alpha, \eta), \qquad (11)$$

with $l \in \{0, t, 2t\}$ and $m \in \{-1, 0, 1\}$.
with l = {O, t, 2t} and m = {-1,0, 1}.
These integrals can be calculated once we know the density of the eigenvalues
p(~). The determination of this density can be found in recent literature calculated
by Opper [1989J using replicas, by Krogh [1992J using perturbation theory and by
Sollich [1994J with matrix identities. We should note, that the thermodynamic limit
and the special assumptions about the inputs enter the calculation here. All authors
found
(12)
S. Bos and M. Opper
144
for a < 1. The maximal and the minimal eigenvalues are ~max,min := (1 ? fo)2. So
all that remains now is a numerical integration.
Similarly we can calculate the behavior of the training error from
1
ET(t)
p
=< 2P L (z~
- hl-')2
>{Wn .
(13)
1-'=1
For the overdetermined case (P
W;{t
+ I} = ~
> N) we can find a recursion analog to (6),
(~ ~ x'tx'J ) 1Wj{t} + .IN ~ z~x't .
[6;; -
{14}
The term in the round brackets defines now the matrix B ij . The calculation is
therefore quite similar to above with the matrix B playing the role of matrix C. The
density of the eigenvalues p(A) for matrix B is the one from above (12) multiplied
bya.
Altogether, we find the following results in the case of a < 1,
EG(t, a, 77) =
G
"2
+
Er(t,a,77)
G - H2 2t
2
10
=
and in the case of a
(1
G - H2
2
a
2t )
rl
1 _ a - 2.1':"'1 + L1
H2
+2
-
H2 (
a 1- 102t)
T
2t
,
(15)
II ,
> 1,
E G (t,a,77)
= G -2 H2
E T ( t, a, 77) =
G-
(
H2 (
2
1
t
2t )
1+ a_I-2I-1+L1
1
1- a
)
H2 1ft
+ -l~t
+
- .
a
2 a
H2 2t
+ T 1o,
(16)
.t;t
If t ---+ 00 all the time-dependent integrals Ik and
vanish. The remaining first
two terms describe, in the limit a -+ 00, the optimal convergence rate of the errors.
In the next section we discuss the implications of this result.
3
RESULTS
First we illustrate how well the theoretical results describe the training process. If
we compare the theory with simulations, we find a very good correspondence, see
Fig. 1.
Trying other values for the learning rate we can see that there is a maximal
learning rate. It is twice the inverse of the maximal eigenvalue of the matrix B, i.e.
2
77max
2
= ~max = (1 + fo)2
.
(17)
This is consistent with a more general result, that the maximalleaming is twice the
inverse of the maximal eigenvalue of the Hessian. In the case of the linear perceptron
the matrix B is identical to the Hessian.
As our approach is directly related to the actual number of training steps we can
examine how training time varies in different training scenarios. Training can be
stopped if the training error reaches a certain minimal value, i.e if ET(t) ~ E?!in +f.
Or, in cross-validated early stopping, we will terminate training if the generalization
error starts to increase, i.e. if EG(t + 1) > EG(t).
Dynamics of Training
145
0.4
0.3
0.2 ,,,
,,
,,
,,
,,
,,
,
ET
0.1
'l ,
-- - - ~ - -- - -?----- ~- - - - -?- - - -- ~-- ---%- - -- - ~ - - - --%- - - -- ~- - - --%- - ---~ - - - -~ - - - - -~ ----
0.0
0
50
150
100
training steps
Figure 1: Behavior of the generalization error $E_G$ (upper line) and the training error $E_T$ (lower line) during the training process. As the loading rate $\alpha = P/N = 1.5$ is near the storage capacity ($\alpha = 1$) of the net, overfitting occurs. The theory describes the results of the simulations very well. Parameters: learning rate $\eta = 0.1$, system size $N = 200$, and $g_*(h) = \tanh(\gamma h)$ with gain $\gamma = 5$.
Fig. 2 shows that in exhaustive training the training time diverges for $\alpha$ near 1,
in the region where also the overfitting occurs. In the same region early stopping
shows only a slight increase in training time.
Furthermore, we can guess from Fig. 2 that asymptotically only a few training
steps are necessary to fulfill the stopping criteria. This has to be specified more
precisely. First we study the behavior of $E_G$ after only one training step, i.e. $t = 1$. Since we are interested in the limit of many examples ($\alpha \to \infty$) we choose the learning rate as a fraction of the maximal learning rate (17), i.e. $\eta = \eta_0\, \eta_{max}$. Then we can calculate the behavior of $E_G(t = 1, \alpha, \eta)$ analytically. We find that only in the case of $\eta_0 = 1$, the generalization error can reach its asymptotic minimum $E_{inf}$. The rate of the convergence is $\alpha^{-1}$ like in the optimal case, but the prefactor is different.
However, already for $t = 2$ we find,

$$\tilde{E}_G(t = 2, \alpha, \eta = \eta_{max}) := E_G - E_{inf} = \frac{G - H^2}{2}\, \frac{1}{\alpha} \left( 1 + O\!\left( \frac{1}{\alpha^2} \right) \right) . \qquad (18)$$
If $\alpha$ is large, so that we can neglect the $\alpha^{-2}$ term, then two batch training steps are
already enough to get the optimal convergence rate. These results are illustrated in
Fig. 3.
4 SUMMARY
In this paper we have calculated the behavior of the learning and the training error
during the whole training process. The novel approach relates the errors directly to
the actual number of training steps. It was shown how well this theory describes the
training process. Several results have been presented, such as the maximal learning
rate and the training time in different scenarios, like early stopping. If the learning
rate is chosen appropriately, then only two batch training steps are necessary to
reach the optimal convergence rate for sufficiently large $\alpha$.
Figure 2: Number of necessary training steps to fulfill certain stopping criteria. The upper lines show the result if training is stopped when the training error is lower than $E_T^{min} + \epsilon$, with $\epsilon = 0.001$ (dotted line) and $\epsilon = 0.01$ (dashed line). The solid line is the early stopping result where training is stopped when the generalization error starts to increase, $E_G(t+1) > E_G(t)$. Simulation results are indicated by marks. Parameters: learning rate $\eta = 0.01$, system size $N = 200$, and $g_*(h) = \tanh(\gamma h)$ with gain $\gamma = 5$.
Further problems, like the dynamical description of weight decay, and the relation
of the dynamical approach to the thermodynamic description of the training process
[see Bos, 1995] can not be discussed here due to lack of space. These problems are
examined in an extended version of this work [Bos and Opper 1996]. It would be very
interesting if this method could be extended towards other, more realistic models.
A APPENDIX
Here we add some identities which are necessary for the averages over the teacher
weight distributions, eqs. (9) and (10). In the statistical mechanics approach one
assumes that the distribution of the local fields h is Gaussian. This becomes true, if
one averages over random inputs $x_i$, with zero mean and unit variance, which is the usual approach [see Bos 1995 and refs.]. In principle it is also possible to average over many tasks, i.e. many teacher realizations $W_i^*$, which is done here. The Gaussian local fields $h_*^\mu$ fulfill,
$$\langle h_*^\mu \rangle = 0, \qquad \langle h_*^\mu h_*^\nu \rangle = C^{\mu\nu} . \qquad (19)$$
This implies

$$\langle z_*^\mu z_*^\nu \rangle_{\{W_i^*\}} = \int_{-\infty}^{\infty} Dh_*^\mu \int_{-\infty}^{\infty} Dh_*^\nu\ g_*\!\left( \sqrt{1 - (C^{\mu\nu})^2}\ h_*^\mu + C^{\mu\nu} h_*^\nu \right) g_*(h_*^\nu) = \delta^{\mu\nu} G + \left( C^{\mu\nu} - \delta^{\mu\nu} \right) H^2 . \qquad (20)$$
In the second identity we first calculated the diagonal term and for the non-diagonal
term we made an expansion assuming small correlations. Similarly the following
Figure 3: Behavior of $\tilde{E}_G = E_G - E_{inf}$ after $t$ training steps. Results for $t$ = 1, 2 and 3 are given. For large enough $\alpha$ it is already after $t = 2$ training steps possible to reach the optimal convergence (solid line). If $t = 3$ the optimal result is reached even faster. Parameters: learning rate $\eta = \eta_{max}$ and $g_*(h) = \tanh(\gamma h)$ with gain $\gamma = 5$.
identity can be proved,
$$\langle z_*^\mu h_*^\nu \rangle_{\{W_i^*\}} = \delta^{\mu\nu} H + \left( C^{\mu\nu} - \delta^{\mu\nu} \right) H . \qquad (21)$$
Acknowledgment: We thank Shun-ichi Amari for many discussions and E.
Helle, A. Stevenin-Barbier for proofreading and valuable comments.
References
Bos S. (1995), 'Avoiding overfitting by finite temperature learning and cross-validation', in Int. Conference on Artificial Neural Networks 95 (ICANN'95), edited by EC2 & Cie, Vol. 2, p. 111-116.

Bos S., and Opper M. (1996), 'An exact description of early stopping and weight decay', submitted.

Kinzel W., and Opper M. (1995), 'Dynamics of learning', in Models of Neural Networks I, edited by E. Domany, J. L. van Hemmen and K. Schulten, Springer, p. 157-179.

Krogh A. (1992), 'Learning with noise in a linear perceptron', J. Phys. A 25, p. 1135-1147.

Opper M. (1989), 'Learning in neural networks: Solvable dynamics', Europhys. Lett. 8, p. 389-392.

Saad D. (1996), 'General Gaussian priors for improved generalization', submitted to Neural Networks.

Sollich P. (1995), 'Learning in large linear perceptrons and why the thermodynamic limit is relevant to the real world', in NIPS 7, p. 207-214.
249 | 1,226 | Viewpoint invariant face recognition using
independent component analysis and
attractor networks
Marian Stewart Bartlett
University of California San Diego
The Salk Institute
La Jolla, CA 92037
marni@salk.edu
Terrence J. Sejnowski
University of California San Diego
Howard Hughes Medical Institute
The Salk Institute, La Jolla, CA 92037
terry@salk.edu
Abstract
We have explored two approaches to recognizing faces across
changes in pose. First, we developed a representation of face images
based on independent component analysis (ICA) and compared it
to a principal component analysis (PCA) representation for face
recognition. The ICA basis vectors for this data set were more
spatially local than the PCA basis vectors and the ICA representation had greater invariance to changes in pose. Second, we present
a model for the development of viewpoint invariant responses to
faces from visual experience in a biological system. The temporal
continuity of natural visual experience was incorporated into an
attractor network model by Hebbian learning following a lowpass
temporal filter on unit activities. When combined with the temporal filter, a basic Hebbian update rule became a generalization
of Griniasty et al. (1993), which associates temporally proximal
input patterns into basins of attraction. The system acquired representations of faces that were largely independent of pose.
1 Independent component representations of faces
Important advances in face recognition have employed forms of principal component analysis, which considers only second-order moments of the input (Cottrell &
Metcalfe, 1991; Turk & Pentland 1991). Independent component analysis (ICA)
is a generalization of principal component analysis (PCA), which decorrelates the
higher-order moments of the input (Comon, 1994). In a task such as face recognition, much of the important information is contained in the high-order statistics of
the images. A representational basis in which the high-order statistics are decorrelated may be more powerful for face recognition than one in which only the second
order statistics are decorrelated, as in PCA representations. We compared an ICA-based representation to a PCA-based representation for recognizing faces across
changes in pose.
-30"
-IS"
0"
IS"
30"
Figure 1: Examples from image set (Beymer, 1994).
The image set contained 200 images of faces, consisting of 40 subjects at each of
five poses (Figure 1). The images were converted to vectors and comprised the rows
of a 200 x 3600 data matrix, X. We consider the face images in X to be a linear
mixture of an unknown set of statistically independent source images S, where A
is an unknown mixing matrix (Figure 2). The sources are recovered by a matrix of
learned filters, W, which produce statistically independent outputs, U.
Figure 2: Image synthesis model. The sources S pass through an unknown mixing process A to form the face images X; the learned weights W produce the separated outputs U.
The weight matrix, W, was found through an unsupervised learning algorithm that
maximizes the mutual information between the input and the output of a nonlinear
transformation (Bell & Sejnowski, 1995). This algorithm has proven successful for
separating randomly mixed auditory signals (the cocktail party problem), and has
recently been applied to EEG signals (Makeig et al., 1996) and natural scenes (see
Bell & Sejnowski, this volume). The independent component images contained in
the rows of U are shown in Figure 3. In contrast to the principal components, all 200
independent components were spatially local. We took as our face representation
the rows of the matrix A = W^{-1}, which provide the linear combination of source
images in U that comprise each face image in X.
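For concreteness, a minimal sketch of the infomax learning rule in its natural-gradient form is given below. The logistic nonlinearity, learning rate, and iteration count are illustrative assumptions, and the sphering and dimensionality reduction that usually precede ICA on image data are omitted.

    import numpy as np

    def infomax_ica(X, n_steps=2000, lr=0.001, seed=0):
        """Sketch of natural-gradient infomax ICA (Bell & Sejnowski, 1995).
        X: (n_signals, n_samples) zero-mean data matrix.
        Returns an unmixing matrix W such that U = W X is approximately
        statistically independent."""
        rng = np.random.default_rng(seed)
        n, P = X.shape
        W = np.eye(n) + 0.01 * rng.standard_normal((n, n))
        for _ in range(n_steps):
            U = W @ X                        # current source estimates
            Y = 1.0 / (1.0 + np.exp(-U))     # logistic nonlinearity (assumed)
            # natural-gradient infomax update, averaged over samples:
            # dW = (I + (1 - 2Y) U^T / P) W
            W += lr * (np.eye(n) + (1.0 - 2.0 * Y) @ U.T / P) @ W
        return W

The face representation described above would then be the rows of A = W^{-1} recovered from such a W.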
1.1 Face Recognition Performance: ICA vs. Eigenfaces

We compared the performance of the ICA representation to that of the PCA representation for recognizing faces across changes in pose. The PCA representation of a face consisted of its component coefficients, which was equivalent to the "Eigenface"
Figure 3: Top: Four independent components of the image set. Bottom: First four
principal components.
representation (Turk & Pentland, 1991). A test image was recognized by assigning
it the label of the nearest of the other 199 images in Euclidean distance.
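This test procedure amounts to a leave-one-out nearest-neighbor classifier over the coefficient vectors. A minimal sketch, assuming each row of coeffs holds one image's representation (ICA coefficients, PCA coefficients, or raw pixels) and labels holds subject identities:

    import numpy as np

    def loo_nearest_neighbor_accuracy(coeffs, labels):
        """Leave-one-out 1-NN classification accuracy in Euclidean distance.
        coeffs: (n_images, n_features); labels: (n_images,) subject ids."""
        d = np.linalg.norm(coeffs[:, None, :] - coeffs[None, :, :], axis=2)
        np.fill_diagonal(d, np.inf)          # exclude the test image itself
        nearest = np.argmin(d, axis=1)       # closest of the other images
        return float(np.mean(labels[nearest] == labels))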
Classification error rates for the ICA and PCA representations and for the original
graylevel images are presented in Table 1. For the PCA representation, the best
performance was obtained with the 120 principal components corresponding to the
highest eigenvalues. Dropping the first three principal components, or selecting
ranges of intermediate components did not improve performance. The independent
component sources were ordered by the magnitude of the weight vector, row of W,
used to extract the source from the image. 1 Best performance was obtained with
the 130 independent components with the largest weight vectors. Performance with
the ICA representation was significantly superior to Eigenfaces by a paired t-test
(p < 0.05).
                                 Graylevel Images   PCA    ICA
    Mutual Information                 .89           .10   .007
    Percent Correct Recognition        .83           .84    .87

Table 1: Mean mutual information between all pairs of 10 basis images, and between the original graylevel images. Face recognition performance is across all 200 images.
For the task of recognizing faces across pose, a statistically independent basis set
provided a more powerful representation for face images than a principal component
representation in which only the second order statistics are decorrelated.
1 The magnitude of the weight vector for optimally projecting the source onto the sloping part of the nonlinearity provides a measure of the variance of the original source (Tony Bell, personal communication).
2 Unsupervised Learning of Viewpoint Invariant Representations of Faces in an Attractor Network
Cells in the primate inferior temporal lobe have been reported that respond selectively to faces despite substantial changes in viewpoint (Hasselmo, Rolls, Baylis, &
Nalwa, 1989). Some cells responded independently of viewing angle, whereas other
cells gave intermediate responses between a viewer-centered and an object centered
representation. This section addresses how a system can acquire such invariance to
viewpoint from visual experience.
During natural visual experience, different views of an object or face tend to appear
in close temporal proximity as an animal manipulates the object or navigates around
it, or as a face changes pose. Capturing such temporal relationships in the input
is a way to automatically associate different views of an object without requiring
three dimensional descriptions.
Figure 4: Model architecture: competitive Hebbian learning feeding an attractor network.
Hebbian learning can capture these temporal relationships in a feedforward system when the output unit activities are passed through a lowpass temporal filter
(Foldiak, 1991; Wallis & Rolls, 1996). Such lowpass temporal filters have been
related to the time course of the modifiable state of a neuron based on the open
time of the NMDA channel for calcium influx (Rhodes, 1992). We show that 1)
this lowpass temporal filter increases viewpoint invariance of face representations in
a feedforward system trained with competitive Hebbian learning, and 2) when the
input patterns to an attractor network are passed through a lowpass temporal filter, then a basic Hebbian weight update rule associates sequentially proximal input
patterns into the same basin of attraction.
This simulation used a subset of 100 images from Section 1, consisting of twenty
faces at five poses each. Images were presented to the model in sequential order as
the subject changed pose from left to right (Figure 4). The first layer is an energy
model related to the output of V1 complex cells (Heeger, 1991). The images were
filtered by a set of sine and cosine Gabor filters at 4 spatial scales and 4 orientations
at 255 spatial locations. Sine and cosine outputs were squared and summed. The
set of V1 model outputs projected to a second layer of 70 units, grouped into two
inhibitory pools. The third stage of the model was an attractor network produced
by lateral interconnections among all of the complex pattern units. The feedforward
and lateral connections were trained successively.
2.1 Competitive Hebbian learning of temporal relationships
The Competitive Learning Algorithm (Rumelhart & Zipser, 1985) was extended to
include a temporal lowpass filter on output unit activities (Bartlett & Sejnowski,
1996). This manipulation gives the winner in the previous time steps a competitive
advantage for winning, and therefore learning, in the current time step.
winner = \max_j [\bar{y}_j^{(t)}], \qquad \bar{y}_j^{(t)} = \lambda y_j + (1 - \lambda)\, \bar{y}_j^{(t-1)}  (1)
The output activity of unit j at time t, \bar{y}_j^{(t)}, is determined by the trace, or running average, of its activation, where y_j is the weighted sum of the feedforward inputs, \alpha is the learning rate, x_i^u is the value of input unit i for pattern u, and \delta^u is the total amount of input activation for pattern u. The weight to each unit was constrained to sum to one. This algorithm was used to train the feedforward connections. There was one face pattern per time step and \lambda was set to 1 between individuals.
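A minimal sketch of this procedure follows. The winner-take-all weight update toward the normalized input is the classical competitive rule of Rumelhart & Zipser (1985) and is an assumption here, since only the trace and the winner selection of Eq. (1) are stated above.

    import numpy as np

    def competitive_trace_learning(X, n_units=70, lam=0.5, alpha=0.02,
                                   reset=None, seed=0):
        """Competitive Hebbian learning with a lowpass activity trace, Eq. (1).
        X: (n_patterns, n_inputs), rows in temporal order;
        reset: optional boolean array, True where a new individual begins
        (there the trace is discarded, i.e. lambda = 1)."""
        rng = np.random.default_rng(seed)
        W = rng.random((n_units, X.shape[1]))
        W /= W.sum(axis=1, keepdims=True)    # weights to each unit sum to one
        trace = np.zeros(n_units)
        for t, x in enumerate(X):
            y = W @ x                        # feedforward activations y_j
            l = 1.0 if (reset is not None and reset[t]) else lam
            trace = l * y + (1.0 - l) * trace      # Eq. (1)
            k = int(np.argmax(trace))        # winner by trace, not raw y
            W[k] += alpha * (x / x.sum() - W[k])   # assumed competitive update
        return W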
2.2 Lateral connections in the output layer form an attractor network
Hebbian learning of lateral interconnections, in combination with a lowpass temporal filter on the unit activities in (1), produces a learning rule that associates
temporally proximal inputs into basins of attraction. We begin with a simple Hebbian learning rule
w_{ij} = \frac{1}{N} \sum_{t=1}^{P} (y_i^t - y^0)(y_j^t - y^0)  (2)
where N is the number of units, P is the number of patterns, and y^0 is the mean activity over all of the units. Replacing y_i^t with the activity trace \bar{y}_i^{(t)} defined in (1), substituting \bar{y}^0 = \lambda y^0 + (1 - \lambda)\bar{y}^0, and multiplying out the terms leads to the following learning rule:

w_{ij} = \frac{1}{N} \sum_{t=1}^{P} \Big[ (y_i^t - y^0)(y_j^t - y^0) + k_1 \big( (y_i^t - y^0)(\bar{y}_j^{(t-1)} - y^0) + (\bar{y}_i^{(t-1)} - y^0)(y_j^t - y^0) \big) + k_2 (\bar{y}_i^{(t-1)} - y^0)(\bar{y}_j^{(t-1)} - y^0) \Big]  (3)

where k_1 = \lambda(1-\lambda)/\lambda^2 and k_2 = (1-\lambda)^2/\lambda^2.
The first term in this equation is basic Hebbian learning, the second term associates
pattern t with pattern t - 1, and the third term is Hebbian association of the trace
activity for pattern t - 1. This learning rule is a generalization of an attractor
network learning rule that has been shown to associate random input patterns
into basins of attraction based on serial position in the input sequence (Griniasty,
Tsodyks & Amit, 1993). The following update rule was used for the activation V
of unit i at time t from the lateral inputs (Griniasty, Tsodyks, & Amit, 1993) :
V_i(t + \delta t) = \Phi\Big[ \sum_j W_{ij} V_j(t) - \theta \Big]

where \theta is a neural threshold and \Phi(x) = 1 for x > 0, and 0 otherwise. In these simulations, \theta = 0.007, N = 70, P = 100, y^0 = 0.03, and \lambda = 0.5 gave k_1 = k_2 = 1.
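The two pieces can be sketched as follows: the batch construction of the Eq. (3) weight matrix from the pattern activities and their traces, and the threshold dynamics iterated to a sustained activity pattern. The trace matrix Ybar is assumed to have been recorded during the feedforward pass, one row per pattern; with lambda = 0.5, k1 = k2 = 1 as in the text.

    import numpy as np

    def attractor_weights(Y, Ybar, y0, k1=1.0, k2=1.0):
        """Weight matrix of Eq. (3). Y: (P, N) unit activities per pattern;
        Ybar: (P, N) trace activities from the previous step; y0: mean."""
        P, N = Y.shape
        dY, dB = Y - y0, Ybar - y0
        return (dY.T @ dY + k1 * (dY.T @ dB + dB.T @ dY) + k2 * dB.T @ dB) / N

    def run_attractor(W, v0, theta=0.007, n_steps=50):
        """Threshold dynamics V_i(t + dt) = Phi(sum_j W_ij V_j(t) - theta)."""
        v = v0.astype(float).copy()
        for _ in range(n_steps):
            v = (W @ v - theta > 0).astype(float)
        return v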
2.3 Results
Temporal association in the feedforward connections broadened the pose tuning of
the output units (Figure 5 Left) . When the lateral connections in the output layer
were added, the attractor network acquired responses that were largely invariant to
pose (Figure 5 Right).
Figure 5: Left: Correlation of the outputs of the feedforward system as a function of change in pose. Correlations across different views of the same face (solid) are compared to correlations across different faces (dashed) with the temporal trace parameter \lambda = 0.5 and \lambda = 0. Right: Correlations in sustained activity patterns in the attractor network as a function of change in pose. Results obtained with Equation 3 (Hebb plus trace) are compared to Hebb only, and to the learning rule in Griniasty et al. (1993). Test set results for Equation 3 (open squares) were obtained by alternately training on four poses and testing on the fifth, and then averaging across all test cases.
      F     N    F/N    Attractor Network      ICA
                            % Correct        % Correct
      5    70    .07          1.00              .96
     10    70    .14           .90              .86
     20    70    .29           .61              .89
     20   160    .13           .73              .89

Table 2: Face classification performance of the attractor network for four ratios of the number of desired memories, F, to the number of units, N. Results are compared to ICA for the same subset of faces.
Classification accuracy of the attractor network was calculated by nearest neighbor
on the activity states (Table 2). Performance of the attractor network depends both
on the performance of the feedforward system, which comprises its input, and on
the ratio of the number of patterns to be encoded in memory, F, to the number of
units, N, where each individual in the face set comprises one memory pattern. The
attractor network performed well when this ratio was sufficiently low. The ICA representation also performed well, especially for F = 20.
The goal of this simulation was to begin with structured inputs similar to the responses of V1 complex cells, and to explore the performance of unsupervised learning mechanisms that can transform these inputs into pose invariant responses. We
showed that a lowpass temporal filter on unit activities, which has been related
to the time course of the modifiable state of a neuron (Rhodes, 1992), cooperates
with Hebbian learning to (1) increase the viewpoint invariance of responses to faces
in a feedforward system, and (2) create basins of attraction in an attractor network which associate temporally proximal inputs. These simulations demonstrated
that viewpoint invariant representations of complex objects such as faces can be
developed from visual experience by accessing the temporal structure of the input.
Acknowledgments
This project was supported by Lawrence Livermore National Laboratory ISCR Agreement
B291528, and by the McDonnell-Pew Center for Cognitive Neuroscience at San Diego.
References
Bartlett, M. Stewart, & Sejnowski, T., 1996. Unsupervised learning of invariant representations of faces through temporal association. Computational Neuroscience: Int. Rev.
Neurobio. Suppl. 1 J.M Bower, Ed., Academic Press, San Diego, CA:317-322.
Beymer, D. 1994. Face recognition under varying pose. In Proceedings of the 1994 IEEE
Computer Society Conference on Computer Vision and Pattern Recognition. Los Alamitos, CA: IEEE Comput. Soc. Press: 756-61.
Bell, A. & Sejnowski, T., (1997). The independent components of natural scenes are edge
filters. Advances in Neural Information Processing Systems 9.
Bell, A., & Sejnowski, T., 1995. An information Maximization approach to blind separation and blind deconvolution. Neural Compo 7: 1129-1159.
Comon, P. 1994. Independent component analysis - a new concept? Signal Processing
36:287-314.
Cottrell & Metcalfe, 1991. Face, gender and emotion recognition using Holons. In Advances in Neural Information Processing Systems 3, D. Touretzky, (Ed.), Morgan Kaufman, San Mateo, CA: 564 - 571.
Foldiak, P. 1991. Learning invariance from transformation sequences. Neural Compo 3:194200.
Griniasty, M., Tsodyks, M., & Amit, D. 1993. Conversion of temporal correlations between
stimuli to spatial correlations between attractors. Neural Compo 5:1-17.
Hasselmo, M., Rolls, E., Baylis, G., & Nalwa, V. 1989. Object-centered encoding by face-selective neurons in the cortex in the superior temporal sulcus of the monkey. Experimental Brain Research 75(2):417-29.
Heeger, D. (1991). Nonlinear model of neural responses in cat visual cortex. Computational
Models of Visual Processing, M. Landy & J. Movshon, Eds. MIT Press, Cambridge,
MA.
Makeig, S, Bell, AJ, Jung, T-P, and Sejnowski, TJ 1996. Independent component analysis of Electroencephalographic data, In: Advances in Neural Information Processing
Systems 8, 145-151.
Rhodes, P. 1992. The long open time of the NMDA channel facilitates the self-organization
of invariant object responses in cortex. Soc. Neurosci. Abst. 18:740.
Rumelhart, D. & Zipser, D. 1985. Feature discovery by competitive learning. Cognitive
Science 9: 75-112.
Turk, M., & Pentland, A. 1991. Eigenfaces for Recognition. J. Cog. Neurosci. 3(1):71-86.
Wallis, G. & Rolls, E. 1996. A model of invariant object recognition in the visual system.
Technical Report, Oxford University Department of Experimental Psychology.
250 | 1,227 | Using Curvature Information for
Fast Stochastic Search
Genevieve B. Orr
Dept of Computer Science
Willamette University
900 State Street
Salem, OR 97301
gorr@willamette.edu
Todd K. Leen
Dept of Computer Science and Engineering
Oregon Graduate Institute of
Science and Technology
P.O.Box 91000, Portland, Oregon 97291-1000
tleen@cse.ogi.edu
Abstract
We present an algorithm for fast stochastic gradient descent that
uses a nonlinear adaptive momentum scheme to optimize the late
time convergence rate. The algorithm makes effective use of curvature information, requires only O(n) storage and computation,
and delivers convergence rates close to the theoretical optimum.
We demonstrate the technique on linear and large nonlinear backprop networks.
Improving Stochastic Search
Learning algorithms that perform gradient descent on a cost function can be formulated in either stochastic (on-line) or batch form. The stochastic version takes
the form
w_{t+1} = w_t + \mu_t\, G(w_t, x_t)  (1)

where w_t is the current weight estimate, \mu_t is the learning rate, G is minus the instantaneous gradient estimate, and x_t is the input at time t.¹ One obtains the corresponding batch mode learning rule by taking \mu constant and averaging G over all x.
Stochastic learning provides several advantages over batch learning. For large
datasets the batch average is expensive to compute. Stochastic learning eliminates
the averaging. The stochastic update can be regarded as a noisy estimate of the
batch update, and this intrinsic noise can reduce the likelihood of becoming trapped
in poor local optima [1, 2].
1 We assume that the inputs are i.i.d. This is achieved by random sampling with replacement from the training data.
The noise must be reduced late in the training to allow weights to converge. After
settling within the basin of a local optimum w_*, learning rate annealing allows convergence of the weight error v \equiv w - w_*. It is well known that the expected squared weight error E[|v|^2] decays at its maximal rate \propto 1/t with the annealing schedule \mu_t = \mu_0/t. Furthermore, to achieve this rate one must have \mu_0 > \mu_{crit} = 1/(2\lambda_{min}), where \lambda_{min} is the smallest eigenvalue of the Hessian at w_* [3, 4, 5, and references therein]. Finally, the optimal \mu_0, which gives the lowest possible value of E[|v|^2], is \mu_0 = 1/\lambda. In multiple dimensions the optimal learning rate matrix is \mu(t) = (1/t)\, \mathcal{H}^{-1}, where \mathcal{H} is the Hessian at the local optimum.
Incorporating this curvature information into stochastic learning is difficult for two
reasons. First, the Hessian is not available since the point of stochastic learning is
not to perform averages over the training data. Second, even if the Hessian were
available, optimal learning requires its inverse - which is prohibitively expensive to
compute 2.
The primary result of this paper is that one can achieve an algorithm that behaves
optimally, i.e. as if one had incorporated the inverse of the full Hessian, without
the storage or computational burden. The algorithm, which requires only O(n)
storage and computation (n = number of weights in the network), uses an adaptive
momentum parameter, extending our earlier work [7] to fully non-linear problems.
We demonstrate the performance on several large back-prop networks trained with
large datasets.
Implementations of stochastic learning typically use a constant learning rate during
the early part of training (what Darken and Moody [4] call the search phase) to obtain exponential convergence towards a local optimum, and then switch to annealed
learning (called the converge phase). We use Darken and Moody's adaptive search
then converge (ASTC) algorithm to determine the point at which to switch to 1/t annealing. ASTC was originally conceived as a means to insure \mu_0 > \mu_{crit} during
the annealed phase, and we compare its performance with adaptive momentum as
well. We also provide a comparison with conjugate gradient optimization.
1 Momentum in Stochastic Gradient Descent
The adaptive momentum algorithm we propose was suggested by earlier work on
convergence rates for annealed learning with constant momentum. In this section
we summarize the relevant results of that work.
Extending (1) to include momentum gives the learning rule

w_{t+1} = w_t + \mu_t\, G(w_t, x_t) + \beta\, (w_t - w_{t-1})  (2)

where \beta is the momentum parameter constrained so that 0 < \beta < 1. Analysis of the dynamics of the expected squared weight error E[|v|^2] with \mu_t = \mu_0/t learning rate annealing [7, 8] shows that at late times, learning proceeds as for the algorithm without momentum, but with a scaled or effective learning rate

\mu_{eff} = \frac{\mu_0}{1 - \beta}.  (3)
This result is consistent with earlier work on momentum learning with small, constant \mu, where the same result holds [9, 10, 11].
2 Venter [6] proposed a 1-D algorithm for optimizing the convergence rate that estimates the Hessian by time-averaging finite differences of the gradient and scaling the learning rate by the inverse. Its extension to multiple dimensions would require O(n^2) storage and O(n^3) time for inversion. Both are prohibitive for large models.
If we allow the effective learning rate to be a matrix, then, following our comments
in the introduction, the lowest value of the misadjustment is achieved when \mu_{eff} = \mathcal{H}^{-1} [7, 8]. Combining this result with (3) suggests that we adopt the heuristic³

\beta_{opt} = I - \mu_0 \mathcal{H}  (4)

where \beta_{opt} is a matrix of momentum parameters, I is the identity matrix, and \mu_0 is a scalar.

We started with a scalar momentum parameter constrained by 0 < \beta < 1. The equivalent constraint for our matrix \beta_{opt} is that its eigenvalues lie between 0 and 1. Thus we require \mu_0 < 1/\lambda_{max}, where \lambda_{max} is the largest eigenvalue of \mathcal{H}.

A scalar annealed learning rate \mu_0/t combined with the momentum parameter \beta_{opt} ought to provide an effective learning rate asymptotically equal to the optimal learning rate \mathcal{H}^{-1}. This rate 1) is achieved without ever performing a matrix inversion on \mathcal{H} and 2) is independent of the choice of \mu_0, subject to the restriction in the previous paragraph.

We have dispensed with the need to invert the Hessian, and we next dispense with the need to store it. First notice that, unlike its inverse, stochastic estimates of \mathcal{H} are readily available, so we use a stochastic estimate in (4). Secondly, according to (2) we do not require the matrix \beta_{opt}, but rather \beta_{opt} times the last weight update. For both linear and non-linear networks this dispenses with the O(n^2) storage requirements. This algorithm, which we refer to as adaptive momentum, does not require explicit knowledge or inversion of the Hessian, and can be implemented very efficiently as is shown in the next section.
2 Implementation
The algorithm we propose is
w_{t+1} = w_t + \mu_t\, G(w_t, x_t) + (I - \mu_0 \hat{\mathcal{H}}_t)\, \Delta w_t  (5)

where \Delta w_t = w_t - w_{t-1} and \hat{\mathcal{H}}_t is a stochastic estimate of the Hessian at time t.
We first consider a single layer feedforward linear network. Since the weights connecting the inputs to different outputs are independent of each other we need only
discuss the case for one output node. Each output node is then treated identically.
For one output node and N inputs, the Hessian is \mathcal{H} = \langle x x^T \rangle_x \in R^{N \times N}, where \langle \cdot \rangle_x indicates expectation over the inputs x and x^T is the transpose of x. The single-step estimate of the Hessian is then just \hat{\mathcal{H}}_t = x_t x_t^T. The momentum term becomes

(I - \mu_0 \hat{\mathcal{H}}_t)\, \Delta w_t = (I - \mu_0\, x_t x_t^T)\, \Delta w_t = \Delta w_t - \mu_0\, x_t (x_t^T \Delta w_t).  (6)
Written in this way, we note that there is no matrix multiplication, just the vector dot product x_t^T \Delta w_t and vector additions that are both O(n). For M output nodes, the algorithm is then O(N_w), where N_w = NM is the total number of weights in the network.
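For a single-output LMS network, one annealed pass of the algorithm can be sketched as below; the initialization and the omission of the constant-rate search phase that normally precedes annealing are simplifications for illustration.

    import numpy as np

    def adaptive_momentum_lms(X, d, mu0, seed=0):
        """Annealed LMS with adaptive momentum, Eqs. (5)-(6).
        X: (T, N) inputs; d: (T,) targets; mu0 must satisfy mu0 < 1/lambda_max."""
        rng = np.random.default_rng(seed)
        w = 0.01 * rng.standard_normal(X.shape[1])
        dw = np.zeros_like(w)
        for t, (x, target) in enumerate(zip(X, d), start=1):
            mu = mu0 / t                         # 1/t learning rate annealing
            g = (target - w @ x) * x             # minus instantaneous gradient
            # (I - mu0 x x^T) dw in O(N) operations, as in Eq. (6):
            dw = mu * g + dw - mu0 * x * (x @ dw)
            w = w + dw
        return w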
For nonlinear networks the problem is somewhat more complicated. To compute
\hat{\mathcal{H}}_t \Delta w_t we use the algorithm developed by Pearlmutter [12] for computing the product of the Hessian times an arbitrary vector.⁴

3 We refer to (4) as a heuristic since we have no theoretical results on the dynamics of the squared weight error for learning with this matrix of momentum parameters.
4 We actually use a slight modification that calculates the linearized Hessian times a vector: D_f \otimes D_f\, \Delta w_t, where D_f is the Jacobian of the network output (vector) with respect to the weights, and \otimes indicates a tensor product.
Figure 1: 2-D LMS simulations: behavior of log(E[|v|^2]) over an ensemble of 1000 networks with \lambda_1 = .4 and \lambda_2 = 4, \sigma^2 = 1. a) \mu_0 = 0.1 with various \beta. Dashed curve corresponds to adaptive momentum. b) \beta adaptive for various \mu_0.

The equivalent of one forward-backward propagation is required for this calculation. Thus, to compute the entire weight update requires two forward-backward propagations, one for the gradient calculation and one for computing \hat{\mathcal{H}}_t \Delta w_t.
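Pearlmutter's method computes \hat{\mathcal{H}}_t \Delta w_t exactly at roughly the cost of one extra forward-backward pass. As a simpler stand-in with the same O(n) storage, the product can also be approximated by a central difference of the gradient, as sketched below; this is an approximation for illustration, not the method of [12].

    import numpy as np

    def hessian_vector_product(grad_fn, w, v, eps=1e-5):
        """Approximate H v by central differences of the gradient.
        grad_fn(w) returns the (stochastic) gradient at weights w."""
        r = eps / max(np.linalg.norm(v), 1e-12)  # scale-invariant step
        return (grad_fn(w + r * v) - grad_fn(w - r * v)) / (2.0 * r)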
The only constraint on \mu_0 is that \mu_0 < 1/\lambda_{max}. We use the on-line algorithm
developed by LeCun, Simard, and Pearlmutter [13] to find the largest eigenvalue
prior to the start of training.
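The on-line procedure of [13] estimates this eigenvalue from running averages during training-data presentation; the batch power-iteration sketch below conveys the same idea using only Hessian-vector products and is illustrative rather than a transcription of that algorithm.

    import numpy as np

    def largest_eigenvalue(hvp, n, n_iters=100, seed=0):
        """Power iteration for lambda_max of the Hessian.
        hvp(v) returns (an estimate of) H v, e.g. via the function above."""
        rng = np.random.default_rng(seed)
        v = rng.standard_normal(n)
        v /= np.linalg.norm(v)
        lam = 0.0
        for _ in range(n_iters):
            hv = hvp(v)
            lam = float(v @ hv)          # Rayleigh quotient estimate
            v = hv / max(np.linalg.norm(hv), 1e-12)
        return lam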
3 Examples
In the following two subsections we examine the behavior of annealed learning with
adaptive momentum on networks previously trained to a point close to an optimum,
where the noise dominates. We look at very simple linear nets, large linear nets, and
a large nonlinear net. In section 3.3 we couple adaptive momentum with automatic
switching from constant to annealed learning.
3.1 Linear Networks
We begin with a simple 2-D LMS network. Inputs x_t are Gaussian distributed with zero mean and the targets d at each timestep t are d_t = w_*^T x_t + \epsilon_t, where \epsilon_t is zero-mean Gaussian noise and w_* is the optimal weight vector. The weight error at time t is just v \equiv w_t - w_*.
Figure 1 displays results for both constant and adaptive momentum with averages
computed over an ensemble of 1000 networks. Figure 1a shows the decay of E[|v|^2] for \mu_0 = 0.1 and various values of \beta. As momentum is increased, the convergence rate increases. The optimal scalar momentum parameter is \beta = (1 - \mu_0 \lambda_{min}) = .96. Adaptive momentum achieves essentially the same rate of convergence without prior knowledge of the Hessian.

Figure 1b shows the behavior of E[|v|^2] for various \mu_0 when adaptive momentum is used. One can see that after a few hundred iterations the value of E[|v|^2] is independent of \mu_0 (in all cases \mu_0 < 1/\lambda_{max} < \mu_{crit}).
Figure 2 shows the behavior of the misadjustment (mean squared error in excess of the optimum) for a 4-D LMS problem with a large condition number \rho \equiv \lambda_{max}/\lambda_{min} = 10^5. We compare 3 cases: 1) the optimal learning rate matrix \mu = \mathcal{H}^{-1} without momentum, 2) \mu_0 = .5 with the optimal constant momentum matrix \beta = I - \mu_0 \mathcal{H}, and 3) \mu_0 = .5 with the adaptive momentum. All three cases show similar behavior, showing the efficacy with which the matrix momentum
Figure 2: 4-D LMS with \rho = 10^5: plot displays misadjustment. Annealing starts at t = 10. For \beta_{adapt} and \beta = I - \mu_0 \mathcal{H}, we use \mu_0 = .5. Each curve is an average of 10 runs.

Figure 3: Linear prediction: \mu_0 = 0.026. Curves show constant learning rate, annealing started at t = 50 without momentum, and with adaptive momentum.
mocks up the optimal learning rate matrix \mathcal{H}^{-1}, lending credence to the stochastic estimate of the Hessian used in adaptive momentum.

We next consider a large linear prediction problem (128 inputs, 16 outputs, and eigenvalues ranging from 1.06 x 10^{-5} to 19.98; condition number \rho = 1.9 x 10^6)⁵. Figure 3 displays the misadjustment for 1) annealed learning with \beta = \beta_{adapt}, 2) annealed learning with \beta = 0, and 3) constant learning rate (for comparison purposes). As before, we have first trained (not shown completely) at constant learning rate \mu_0 = .026 until the MSE and the weight error have leveled out. As can be seen, \beta_{adapt} does much better than annealing without momentum.
3.2 Phoneme Classification
We next use phoneme classification as an example of a large nonlinear problem.
The database consists of 9000 phoneme vectors taken from 48 50-second speech
monologues. Each input vector consists of 70 PLP coefficients. There are 39 target
classes. The architecture was a standard fully connected feedforward network with
71 (includes bias) input nodes, 70 hidden nodes, and 39 output nodes for a total of
7700 weights.
We first trained the network with constant learning rate until the MSE flattened
out. At that point we either annealed without momentum, annealed with adaptive
momentum, or used ASTC (which attempts to adjust \mu_0 to be above \mu_{crit}; see
next section). When annealing was used without momentum, we found that the
noise went away, but the percent of correctly classified phonemes did not improve.
Both the adaptive momentum and ASTC resulted in significant increases in the
percent correct, however, adaptive momentum was significantly better than ASTC.
In the next section, we examine this problem in more detail.
3.3 Switching on Annealing
A complete algorithm must choose an appropriate point to change from constant \mu
search to annealed learning. We use Moody and Darken's ASTC algorithm [4, 14]
to accomplish this. ASTC measures the roughness of trajectories, switching to 1/t
annealing when the trajectories become very rough - an indication that the noise
in the updates is dominating the algorithm's behavior. In an attempt to satisfy
5 Prediction of a 4 x 4 block of image pixels from the surrounding 8 blocks.
Figure 4: Phoneme classification, percent correct. a) ASTC without momentum (bottom curve) and adaptive momentum (top) as a function of the number of input presentations. b) Conjugate gradient descent; one epoch equals one pass through the data, i.e. 9000 input presentations.
\mu_0 > \mu_{crit}, ASTC can also switch back to constant learning when trajectories become too smooth.
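The precise switching statistic is defined in [4, 14]; as an illustrative proxy only, one can monitor the mean cosine between successive weight updates, as sketched below (this statistic is an assumption of ours, not the ASTC criterion itself).

    import numpy as np

    def trajectory_roughness(updates):
        """Mean cosine between successive weight updates.
        Values near -1 indicate oscillation (noise-dominated steps),
        suggesting a switch to 1/t annealing; values near +1 indicate
        steady drift, suggesting constant-rate search."""
        cosines = []
        for a, b in zip(updates[:-1], updates[1:]):
            denom = np.linalg.norm(a) * np.linalg.norm(b)
            if denom > 0:
                cosines.append(float(a @ b) / denom)
        return float(np.mean(cosines)) if cosines else 0.0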
We return to the phoneme problem using three different training methods: 1) ASTC
without momentum (with switching back and forth between annealed and constant
learning), 2) adaptive momentum with annealing turned on when ASTC first suggests the transition (but no subsequent return to constant learning rate), and 3)
standard conjugate gradient descent.
Figure 4a compares ASTC (no momentum) with adaptive momentum (using ASTC
to turn on annealing). After annealing is turned on, the classification accuracy
improves far more quickly with adaptive momentum.
Figure 4b displays the classification performance as a function of epoch using conjugate gradient descent (CGD). After 100 passes through the 9000 example dataset
(900,000 presentations), the classification accuracy is 39.6%, or 7% below adaptive
momentum's performance at 100,000 presentations. Note also that adaptive momentum is continuing to improve the optimization, while the ASTC and conjugate
gradient descent curves have flattened out.
The cpu time used for the optimization was about the same for the CGD and adaptive momentum algorithms. It thus appears that our implementation of adaptive
momentum costs about 9 times as much per pattern as CGD. We believe that the
performance can be improved. Our complexity analysis [8] predicts a 3:1 cost ratio,
rather than 9:1, and optimization comparable to that applied to the CGD code6
should enhance the run-time performance of CGD.
For this problem, the performance of the two algorithms on the test set (not shown on the graph) is not much different (31.7% for CGD versus 33.4% for adaptive momentum). However, we are concerned here with the efficiency of the optimization, not
generalization performance. The latter depends on dataset size and regularization
techniques, which can easily be combined with any optimizer.
4 Summary
We have presented an efficient O(n) stochastic algorithm with few adjustable parameters that achieves fast convergence during the converge phase for both linear and
nonlinear problems. It does this by incorporating curvature information without
6CGD was performed using nopt written by Etienne Barnard and made available
through the Center for Spoken Language Understanding at the Oregon Graduate Institute.
explicit computation of the Hessian. We also combined it with a method (ASTC)
for detecting when to make the transition between search and converge regimes.
Acknowledgments
The authors thank Yann LeCun for his helpful critique. This work was supported
by EPRI under grant RPB015-2 and AFOSR under grant FF4962-93-1-0253.
References
[1] Genevieve B. Orr and Todd K. Leen. Weight space probability densities in stochastic
learning: II. Transients and basin hopping times. In Giles, Hanson, and Cowan,
editors, Advances in Neural Information Processing Systems, vol. 5, San Mateo, CA,
1993. Morgan Kaufmann.
[2] William Finnoff. Diffusion approximations for the constant learning rate backpropagation algorithm and resistance to local minima. In Giles, Hanson, and Cowan,
editors, Advances in Neural Information Processing Systems, vol. 5, San Mateo, CA,
1993. Morgan Kaufmann.
[3] Larry Goldstein. Mean square optimality in the continuous time Robbins Monro
procedure. Technical Report DRB-306, Dept. of Mathematics, University of Southern
California, LA, 1987.
[4] Christian Darken and John Moody. Towards faster stochastic gradient search. In J.E.
Moody, S.J. Hanson, and R.P. Lipmann, editors, Advances in Neural Information
Processing Systems 4. Morgan Kaufmann Publishers, San Mateo, CA, 1992.
[5] Halbert White. Learning in artificial neural networks: A statistical perspective. Neural Computation, 1:425-464, 1989.
[6] J. H. Venter. An extension of the Robbins-Monro procedure. Annals of Mathematical
Statistics, 38:117-127, 1967.
[7] Todd K. Leen and Genevieve B. Orr. Optimal stochastic search and adaptive momentum. In J.D. Cowan, G. Tesauro, and J . Alspector, editors, Advances in Neural
Information Processing Systems 6, San Francisco, CA., 1994. Morgan Kaufmann Publishers.
[8] Genevieve B. Orr. Dynamics and Algorithms for Stochastic Search. PhD thesis,
Oregon Graduate Institute, 1996.
[9] Mehmet Ali Tugay and Yalcin Tanik. Properties of the momentum LMS algorithm.
Signal Processing, 18:117-127, 1989.
[10] John J. Shynk and Sumit Roy. Analysis of the momentum LMS algorithm. IEEE
Transactions on Acoustics, Speech, and Signal Processing, 38(12):2088-2098, 1990.
[11] W. Wiegerinck, A. Komoda, and T. Heskes. Stochastic dynamics of learning with
momentum in neural networks. Journal of Physics A, 27:4425-4437, 1994.
[12] Barak A. Pearlmutter. Fast exact multiplication by the hessian. Neural Computation,
6:147-160, 1994.
[13] Yann LeCun, Patrice Y. Simard, and Barak Pearlmutter. Automatic learning rate
maximization by on-line estimation of the hessian's eigenvectors. In Giles, Hanson,
and Cowan, editors, Advances in Neural Information Processing Systems, vol. 5, San
Mateo, CA, 1993. Morgan Kaufmann.
[14J Christian Darken. Learning Rate Schedules for Stochastic Gradient Algorithms. PhD
thesis, Yale University, 1993.
251 | 1,228 | Efficient Nonlinear Control with
Actor-Tutor Architecture
Kenji Doya*
A.TR Human Information Processing Research Laboratories
2-2 Hikaridai, Seika-cho, Soraku-gun, Kyoto 619-02, Japan.
Abstract
A new reinforcement learning architecture for nonlinear control is
proposed. A direct feedback controller, or the actor, is trained by
a value-gradient based controller, or the tutor. This architecture
enables both efficient use of the value function and simple computation for real-time implementation. Good performance was verified
in multi-dimensional nonlinear control tasks using Gaussian softmax networks.
1 INTRODUCTION
In the study of temporal difference (TD) learning in continuous time and space
(Doya, 1996b), an optimal nonlinear feedback control law was derived using the
gradient of the value function and the local linear model of the system dynamics. It was demonstrated in the simulation of a pendulum swing-up task that the
value-gradient based control scheme requires much less learning trials than the conventional "actor-critic" control scheme (Barto et al., 1983).
In the actor-critic scheme, the actor, a direct feedback controller, improves its control policy stochastically using the TD error as the effective reinforcement (Figure 1a). Despite its relatively slow learning, the actor-critic architecture has the
virtue of simple computation in generating control command. In order to train a
direct controller while making efficient use of the value function, we propose a new
reinforcement learning scheme which we call the "actor-tutor" architecture (Figure 1b).
?Current address: Kawato Dynamic Brain Project, JSTC. 2-2 Hikaridai, Seika-cho,
Soraku-gun, Kyoto 619-02, Japan. E-mail: doya@erato.atr.co.jp
In the actor-tutor scheme, the optimal control command based on the current estimate of the value function is used as the target output of the actor. With the use of
supervised learning algorithms (e.g., LMSE), learning of the actor is expected to be
faster than in the actor-critic scheme, which uses stochastic search algorithms (e.g.,
A RP )' The simulation result below confirms this prediction. This hybrid control
architecture provides a model of functional integration of motor-related brain areas,
especially the basal ganglia and the cerebellum (Doya, 1996a).
2 CONTINUOUS TD LEARNING
First, we summarize the theory of TD learning in continuous time and space (Doya,
1996b), which is basic to the derivation of the proposed control scheme.
2.1 CONTINUOUS TD ERROR
Let us consider a continuous-time, continuous-state dynamical system
\frac{dx(t)}{dt} = f(x(t), u(t))  (1)
where x \in X \subset R^n is the state and u \in U \subset R^m is the control input (or the action). The reinforcement is given as a function of the state and the control:

r(t) = r(x(t), u(t)).  (2)
For a given control law (or a policy)
u(t) = \mu(x(t)),  (3)

we define the "value function" of the state x(t) as

V^\mu(x(t)) = \int_t^\infty \frac{1}{\tau}\, e^{-\frac{s-t}{\tau}}\, r(x(s), u(s))\, ds,  (4)

where x(s) and u(s) (t \le s < \infty) follow the system dynamics (1) and the control law (3). Our goal is to find an optimal control law \mu^* that maximizes V^\mu(x) for any state x \in X. Note that \tau is the time constant of imminence-weighting, which is related to the discount factor \gamma of the discrete-time TD as \gamma = 1 - \frac{\Delta t}{\tau}.
By differentiating (4) by t, we have a local consistency condition for the value function

\tau \frac{dV^\mu(x(t))}{dt} = V^\mu(x(t)) - r(t).  (5)

Let P(x(t)) be the prediction of the value function V^\mu(x(t)) from x(t) by a neural network, or some function approximator that has enough capability of generalization. The prediction should be adjusted to minimize the inconsistency

\hat{r}(t) = r(t) - P(x(t)) + \tau \frac{dP(x(t))}{dt},  (6)
which is a continuous version of the TD error. Because the boundary condition
for the value function is given on the attractor set of the state space, correction of P(x(t)) should be made backward in time. The correspondence between continuous-time TD algorithms and discrete-time TD(\lambda) algorithms (Sutton, 1988)
is shown in (Doya, 1996b).
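In discrete time, Eq. (6) can be sketched with a backward Euler difference for dP/dt; the step size dt and the difference scheme are illustrative assumptions (the exact correspondence with discrete-time TD(\lambda) is treated in Doya, 1996b).

    def continuous_td_error(r_t, P_t, P_prev, tau, dt):
        """Discretized TD error of Eq. (6):
        rhat(t) = r(t) - P(x(t)) + tau * dP/dt,
        with dP/dt approximated by (P_t - P_prev) / dt (an assumption)."""
        return r_t - P_t + tau * (P_t - P_prev) / dt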
Figure 1: (a) Actor-critic. (b) Actor-tutor.

2.2 OPTIMAL CONTROL BY VALUE GRADIENT
According to the principle of dynamic programming (Bryson and Ho, 1975), the
local constraint for the value function V^* for the optimal control law \mu^* is given by the Hamilton-Jacobi-Bellman equation

V^*(x(t)) = \max_{u(t) \in U} \left[ r(x(t), u(t)) + \tau \frac{\partial V^*(x(t))}{\partial x} f(x(t), u(t)) \right].  (7)

The optimal control \mu^* is given by solving the maximization problem in the HJB equation, i.e.,

\frac{\partial r(x, u)}{\partial u} + \tau \frac{\partial V^*(x)}{\partial x} \frac{\partial f(x, u)}{\partial u} = 0.  (8)
When the cost for each control variable is given by a convex potential function G_j,

r(x, u) = R(x) - \sum_j G_j(u_j),  (9)

equation (8) can be solved using a monotonic function g_j(x) = (G_j')^{-1}(x) as

u_j = g_j\left( \tau \frac{\partial V^*(x)}{\partial x} \frac{\partial f(x, u)}{\partial u_j} \right).  (10)
If the system is linear with respect to the input, which is the case with many
mechanical systems, \partial f(x, u)/\partial u_j is independent of u and the above equation gives a closed-form optimal feedback control law u = \mu^*(x).
In practice, the optimal value function is unknown and we replace V^*(x) with the current estimate of the value function P(x):

u = g\left( \tau \frac{\partial P(x)}{\partial x} \frac{\partial f(x, u)}{\partial u} \right).  (11)
While the system evolves with the above control law, the value function P(x) is
updated to minimize the TD error (6). In (11), the vector \partial P(x)/\partial x represents the desired motion direction in the state space and the matrix \partial f(x, u)/\partial u transforms
it into the action space. The function g, which is specified by the control cost,
determines the amplitude of control output. For example, if the control cost G is
quadratic, then (11) reduces to a linear feedback control. A practically important
case is when g is a sigmoid, because this gives a feedback control law for a system
with limited control amplitude, as in the examples below.
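For an input-affine system dx/dt = a(x) + B(x) u, the derivative \partial f/\partial u = B(x) does not depend on u, and Eq. (11) becomes an explicit feedback law. The sketch below assumes B(x) is available and takes g to be a tanh scaled to the control limit, one convenient bounded-amplitude choice rather than the unique one implied by the text.

    import numpy as np

    def value_gradient_control(grad_P, B, tau, u_max):
        """Feedback law of Eq. (11) for dx/dt = a(x) + B(x) u.
        grad_P: dP/dx at the current state, shape (n,);
        B: df/du, shape (n, m); u_max: control limit.
        g = u_max * tanh(.) is an assumed bounded sigmoid."""
        s = tau * (B.T @ grad_P)         # argument of g, one per control dim
        return u_max * np.tanh(s)        # bounded-amplitude control command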
3 ACTOR-TUTOR ARCHITECTURE
It was shown in a task of a pendulum swing-up with limited torque (Doya, 1996b)
that the above value-gradient based control scheme (11) can learn the task in much fewer trials than the actor-critic scheme. However, computation of the feedback command by (11) requires an on-line calculation of the gradient of the value function \partial P(x)/\partial x and its multiplication with the local linear model of the system dynamics \partial f(x, u)/\partial u, which can be too demanding for real-time implementation.
One solution to this problem is to use a simple direct controller network, as in the
case of the actor-critic architecture. The training of the direct controller, or the
actor, can be performed by supervised learning instead of trial-and-error learning
because the target output of the controller is explicitly given by (11). Although
computation of the target output may involve a processing time that is not acceptable for immediate feedback control, it is still possible to use its output for training
the direct controller provided that there is some mechanism of short-term memory
(e.g., eligibility trace in the connection weights).
Figure 1(b) is a schematic diagram of this "actor-tutor" architecture. The critic monitors the performance of the actor and estimates the value function. The "tutor" is a cascade of the critic, its gradient estimator, the local linear model of the system,
and the differential model of control cost. The actor is trained to minimize the
difference between its output and the tutor's output.
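One supervised update of the actor toward the tutor's target can then be sketched as follows for a linear-in-basis actor; the LMSE step and learning rate are illustrative assumptions consistent with, but not stated in, the text.

    import numpy as np

    def actor_tutor_step(actor_w, b, grad_P, B, tau, u_max, lr=0.1):
        """One actor-tutor update for a linear-in-basis actor.
        actor_w: (m, K) actor weights; b: (K,) basis activations at the
        current state; grad_P, B, tau, u_max as in the control law above."""
        u_actor = actor_w @ b                            # actor's command
        u_tutor = u_max * np.tanh(tau * (B.T @ grad_P))  # target, Eq. (11)
        actor_w += lr * np.outer(u_tutor - u_actor, b)   # LMSE step
        return actor_w, u_actor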
4 SIMULATION
We tested the performance of the actor-tutor architecture in two nonlinear control
tasks; a pendulum swing-up task (Doya, 1996b) and the global version of a cart-pole
balancing task (Barto et al., 1983).
The network architecture we used for both the actor and the critic was a Gaussian
soft-max network. The output of the network is given by
y = \sum_{k=1}^{K} w_k b_k(x), \qquad b_k(x) = \frac{\exp\left[ -\sum_{i=1}^{n} \left( \frac{x_i - c_{ki}}{s_{ki}} \right)^2 \right]}{\sum_{l=1}^{K} \exp\left[ -\sum_{i=1}^{n} \left( \frac{x_i - c_{li}}{s_{li}} \right)^2 \right]},

where (c_{k1}, ..., c_{kn}) and (s_{k1}, ..., s_{kn}) are the center and the size of the k-th basis
function. It is in general possible to adjust the centers and sizes of the basis function,
but in order to assure predictable transient behaviors, we fixed them in a grid. In
this case, computation can be drastically reduced by factorizing the activation of
basis functions in each input dimension.
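A direct sketch of this network follows. The naive evaluation below is O(Kn); the grid factorization mentioned above assembles the same K activations from products of n one-dimensional tables instead.

    import numpy as np

    def gaussian_softmax(x, centers, sizes, w):
        """Normalized Gaussian (soft-max) basis network.
        x: (n,) state; centers, sizes: (K, n) grids of c_ki and s_ki;
        w: (K,) or (K, m) output weights."""
        a = np.exp(-np.sum(((x - centers) / sizes) ** 2, axis=1))
        b = a / a.sum()                  # soft-max normalization, b_k(x)
        return b @ w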
4.1 PENDULUM SWING-UP TASK
The first task was to swing up a pendulum with a limited torque |T| \le T_{max}, which
was about one fifth of the torque that was required to statically bring the pendulum
up (Figure 2 (a)). This is a nonlinear control task in which the controller has to
swing the pendulum several times at the bottom to build up enough momentum.
Figure 2: Pendulum swing-up task. The dynamics of the pendulum (a) is given by ml^2 \ddot{\theta} = -\mu \dot{\theta} + mgl \sin\theta + T. The parameters were m = l = 1, g = 9.8, \mu = 0.01, and T_{max} = 2.0. The learning curves for value-gradient based optimal control (b), actor-critic (c), and actor-tutor (d); t_up is the time during which |\theta| < 45°.
=
({},w) was 2D and we used 12 x 12 basis
The state space for the pendulum x
functions to cover the range I{} I ~ 180? and Iw I ~ 180? / s. The reinforcement for the
state was given by the height of the tip of the pendulum, i.e., R(x) = \cos\theta, and the cost for control G and the corresponding output sigmoid function g were selected to match the maximal output torque T_{max}.
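A minimal simulation step under the Figure 2 dynamics, with the torque clipped to |T| <= T_max, is sketched below; the Euler integration and the step size dt = 0.02 are assumptions, not values given in the paper.

    import numpy as np

    def pendulum_step(theta, omega, T, dt=0.02, m=1.0, l=1.0, g=9.8,
                      mu=0.01, T_max=2.0):
        """One Euler step of m l^2 theta_dd = -mu omega + m g l sin(theta) + T."""
        T = float(np.clip(T, -T_max, T_max))     # limited torque
        theta_dd = (-mu * omega + m * g * l * np.sin(theta) + T) / (m * l ** 2)
        omega = omega + dt * theta_dd
        theta = theta + dt * omega               # semi-implicit Euler step
        return theta, omega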
Figures 2 (b), (c), and (d) show the learning curves for the value-gradient based
control (11), actor critic, and actor-tutor control schemes, respectively. As we
expected, the learning of the actor-tutor was much faster than that of the actorcritic and was comparable to the value-gradient based optimal control schemes.
4.2 CART-POLE SWING-UP TASK
Next we tested the learning scheme in a higher-dimensional nonlinear control task,
namely, a cart-pole swing-up task (Figure 3). In the pioneering work of Barto et al. (1983), the actor-critic system successfully learned the task of balancing the pole within ±12° of
task we chose was to swing up the pole from an arbitrary angle and to balance it
upright. The physical parameters of the cart-pole were the same as in (Barto et al.,
1983) except that the length of the track was doubled to provide enough room for
swinging.
Figure 3: Cart-pole swing-up task. (a) An example of a swing-up trajectory. (b) Value function learned by the critic. (c) Feedback force learned by the actor. Each square in the plot shows a slice of the 4D state space parallel to the (\theta, \omega) plane.
Figure 3 (a) shows an example of a successful swing up after 1500 learning trials
with the actor-tutor architecture. We could not achieve a comparable performance
with the actor-critic scheme within 3000 learning trials. Figures 3 (b) and (c) show
the value function and the feedback force field, respectively, in the 4D state space
x = (x, v, \theta, \omega), which were implemented in 6 x 6 x 12 x 12 Gaussian soft-max
networks. We imposed symmetric constraints on both actor and critic networks to
facilitate generalization. It can be seen that the paths to the upright position in
the center of the track are represented as ridges in the value function.
5 DISCUSSION
The biggest problem in applying TD or DP to real-world control tasks is the curse
of dimensionality, which makes both the computation for each data point and the
numbers of data points necessary for training very high. The actor-tutor architecture provides a partial solution to the former problem in real-time implementation.
The grid-based Gaussian soft-max basis function network was successfully used in
a 4D state space. However, a more flexible algorithm that allocates basis functions
only in the relevant parts of the state space may be necessary for dealing with
higher-dimension systems (Schaal and Atkeson, 1996).
In the above simulations, we assumed that the local linear model of the system
dynamics fJf(x,u)/fJu was available. In preliminary experiments, it was verified
that the critic, the system model, and the actor can be trained simultaneously.
The actor-tutor architecture resembles "feedback error learning" (Kawato et al., 1987) in the sense that a nonlinear controller is trained by the output of another
controller. However, the actor-tutor scheme can be applied to a highly nonlinear
control task to which it is difficult to prepare a simple linear feedback controller.
Motivated by the performance of the actor-tutor architecture and the recent physiological and fMRI experiments on the brain activity during the course of motor
learning (Hikosaka et al., 1996; Imamizu et al., 1996), we proposed a framework of
functional integration of the basal ganglia, the cerebellum, and cerebral motor areas
(Doya, 1996a). In this framework, the basal ganglia learns the value function P(x)
(Houk et al., 1994) and generates the desired motion direction based on its gradient
\partial P(x)/\partial x. This is transformed into a motor command by the "transpose model" of the motor system, (\partial f(x, u)/\partial u)^T, in the lateral cerebellum (cerebrocerebellum). In
early stages of learning, this output is used for control, albeit its feedback latency
is long. As the subject repeats the same task, a direct controller is constructed
in the medial and intermediate cerebellum (spinocerebellum) with the above motor command as the teacher. The direct controller enables quick, near-automatic
performance with less cognitive load in other parts of the brain.
References
Barto, A. G., Sutton, R. S., and Anderson, C. W. (1983). Neuronlike adaptive
elements that can solve difficult learning control problems. IEEE Transactions
on Systems, Man , and Cybernetics, 13:834-846.
Bryson, Jr., A. E., and Ho, Y.-C. (1975). Applied Optimal Control. Hemisphere Publishing, New York, 2nd edition.
Doya, K. (1996a). An integrated model of basal ganglia and cerebellum in sequential
control tasks . Society for Neuroscience Abstracts, 22:2029.
Doya, K. (1996b) . Temporal difference learning in continuous time and space. In
Touretzky, D. S., Mozer, M. C., and Hasselmo, M. E., editors, Advances in
Neural Information Processing Systems 8, pages 1073-1079. MIT Press, Cambridge, MA.
Hikosaka, 0., Miyachi, S., Miyashita, K., and Rand, M. K. (1996). Procedural learning in monkeys - Possible roles of the basal ganglia. In Ono, T., McNaughton,
B. 1., Molotchnikoff, S., Rolls, E. T ., and Nishijo, H., editors, Perception,
Memory and Emotion: Frontiers in Neuroscience, pages 403-420 . Pergamon,
Oxford.
Houk, J . C., Adams, J. L., and Barto, A. G. (1994) . A model of how ,the basal
ganglia generate and use neural signals that predict reinforcement. In Houk,
J . C., Davis, J. L., and Beiser, D. G., editors, Models of Information Processing
in the Basal Ganglia, pages 249-270 . MIT Press, Cambrigde, MA.
Imamizu, H., Miyauchi, S., Sasaki, Y, Takino, R., Putz, B., and Kawato, M. (1996).
A functional MRI study on internal models of dynamic transformations during
learning a visuomotor task. Society for Neuroscience Abstracts, 22:898.
Kawato, M., Furukawa, K., and Suzuki, R. (1987). A hierarchical neural network
model for control and learning of voluntary movement. Biological Cybernetics,
57:169-185.
Schaal, S. and Atkeson, C. C. (1996) . From isolation to cooperation: An alternative
view of a system of experts. In Touretzky, D. S., Mozer, M. C., and Hasselmo,
,M. E., editors, Advances in Neural Information Processing Systems 8, pages
605-611. MIT Press, Cambridge, MA, USA.
Sutton, R. S. (1988) . Learning to predict by the methods of temporal difference.
Machine Learning, 3:9-44.
Multidimensional Triangulation and
Interpolation for Reinforcement Learning
Scott Davies
scottd@cs.cmu.edu
Department of Computer Science, Carnegie Mellon University
5000 Forbes Ave, Pittsburgh, PA 15213
Abstract
Dynamic Programming, Q-Iearning and other discrete Markov Decision
Process solvers can be applied to continuous d-dimensional state-spaces by
quantizing the state space into an array of boxes. This is often problematic
above two dimensions: a coarse quantization can lead to poor policies, and
fine quantization is too expensive. Possible solutions are variable-resolution
discretization, or function approximation by neural nets. A third option,
which has been little studied in the reinforcement learning literature, is
interpolation on a coarse grid. In this paper we study interpolation techniques that can result in vast improvements in the online behavior of the
resulting control systems: multilinear interpolation, and an interpolation
algorithm based on an interesting regular triangulation of d-dimensional
space. We adapt these interpolators under three reinforcement learning
paradigms: (i) offline value iteration with a known model, (ii) Q-Iearning,
and (iii) online value iteration with a previously unknown model learned
from data. We describe empirical results, and the resulting implications for
practical learning of continuous non-linear dynamic control.
1 GRID-BASED INTERPOLATION TECHNIQUES
Reinforcement learning algorithms generate functions that map states to "cost-to-go" values. When dealing with continuous state spaces these functions must be
approximated. The following approximators are frequently used:
? Fine grids may be used in one or two dimensions. Above two dimensions,
fine grids are too expensive. Value functions can be discontinuous, which
(as we will see) can lead to su boptimalities even with very fine discretization
in two dimensions .
? Neural nets have been used in conjunction with TD [Sutton, 1988] and
Q-Iearning [Watkins, 1989] in very high dimensional spaces [Tesauro, 1991,
Crites and Barto, 1996]. While promising, it is not always clear that they
produce the accurate value functions that might be needed for fine nearoptimal control of dynamic systems, and the most commonly used methods
of applying value iteration or policy iteration with a neural-net value function are often unstable. [Boyan and Moore, 1995].
Interpolation over points on a coarse grid is another potentially useful approximator
for value functions that has been little studied for reinforcement learning. This
paper attempts to rectify this omission. Interpolation schemes may be particularly
attractive because they are local averagers, and convergence has been proven in
such cases for offline value iteration [Gordon, 1995].
All of the interpolation methods discussed here split the state space into a regular
grid of d-dimensional boxes; data points are associated with the centers or the
corners of the resulting boxes. The value at a given point in the continuous state
space is computed as a weighted average of neighboring data points.
1.1 MULTILINEAR INTERPOLATION
When using multilinear interpolation, data points are situated at the corners of the grid's boxes. The interpolated value within a box is an appropriately weighted average of the 2^d datapoints on that box's corners. The weighting scheme assures global continuity of the interpolated surface, and also guarantees that the interpolated value at any grid corner matches the given value of that corner.
In one-dimensional space, multilinear interpolation simply involves piecewise linear
interpolations between the data points. In a higher-dimensional space, a recursive
(though not terribly efficient) implementation can be described as follows:
- Pick an arbitrary axis. Project the query point along this axis to each of the two opposite faces of the box containing the query point.
- Use two (d-1)-dimensional multilinear interpolations over the 2^(d-1) datapoints on each of these two faces to calculate the values at both of these projected points.
- Linearly interpolate between the two values generated in the previous step.
Multilinear interpolation processes 2^d data points for every query, which becomes prohibitively expensive as d increases.
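A minimal Python sketch of the recursive scheme just described is given below; representing the 2^d corner values as a dictionary keyed by corner coordinates is an assumption made for clarity, not the paper's data structure.

    def multilinear_interpolate(values, query):
        # values: dict mapping each corner of the d-dimensional unit cube
        # (a tuple of 0s and 1s) to its stored data value;
        # query: a point in [0, 1]^d.  Touches all 2^d corners.
        d = len(query)
        if d == 0:
            return values[()]
        x = query[-1]
        lo = {c[:-1]: v for c, v in values.items() if c[-1] == 0}
        hi = {c[:-1]: v for c, v in values.items() if c[-1] == 1}
        # Interpolate each (d-1)-dimensional face, then blend along axis d.
        return ((1.0 - x) * multilinear_interpolate(lo, query[:-1])
                + x * multilinear_interpolate(hi, query[:-1]))

    corners = {(0, 0): 0.0, (1, 0): 1.0, (0, 1): 2.0, (1, 1): 3.0}
    print(multilinear_interpolate(corners, (0.5, 0.5)))   # -> 1.5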
1.2 SIMPLEX-BASED INTERPOLATION
It is possible to interpolate over d + 1 of the data points for any given query in only
O( d log d) time and still achieve a continuous surface that fits the datapoints exactly.
Each box is broken into d! hyperdimensional triangles, or simplexes, according to
the Coxeter-Freudenthal-Kuhn triangulation [Moore, 1992].
Assume that the box is the unit hypercube, with one corner at (x_1, x_2, ..., x_d) = (0, 0, ..., 0), and the diagonally opposite corner at (1, 1, ..., 1). Then, each simplex in the Kuhn triangulation corresponds to one possible permutation p of (1, 2, ..., d), and occupies the set of points satisfying the equation
0 ≤ x_p(1) ≤ x_p(2) ≤ ... ≤ x_p(d) ≤ 1.
Triangulating each box into d! simplexes in this manner generates a conformal mesh:
any two elements with a (d - 1)-dimensional surface in common have entire faces in
common, which ensures continuity across element boundaries when interpolating.
We use the Kuhn triangulation for interpolation as follows:
- Translate and scale to a coordinate system in which the box containing the query point is the unit hypercube. Let the new coordinates of the query point be (x'_1, ..., x'_d).
- Use a sorting algorithm to rank x'_1 through x'_d. This tells us the simplex of the Kuhn triangulation in which the query point lies.
- Express (x'_1, ..., x'_d) as a convex combination of the coordinates of the relevant simplex's (d + 1) corners.
- Use the coefficients determined in the previous step as the weights for a weighted sum of the data values stored at the corresponding corners.
At no point do we explicitly represent the d! different simplexes. All of the above steps can be performed in O(d) time except the second, which can be done in O(d log d) time using conventional sorting routines.
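The following Python sketch carries out these steps; the corner-dictionary representation mirrors the multilinear sketch above and is likewise an assumption, not the paper's implementation.

    import numpy as np

    def simplex_interpolate(corner_values, query):
        # Interpolates over d+1 corners of the unit cube using the Kuhn
        # triangulation: sorting the coordinates identifies the simplex,
        # and the convex weights are successive differences of the sorted
        # coordinates.
        x = np.asarray(query, dtype=float)
        d = len(x)
        order = np.argsort(-x)                 # coordinates, decreasing
        xs = x[order]
        weights = np.empty(d + 1)
        weights[0] = 1.0 - xs[0]
        weights[1:d] = xs[:d - 1] - xs[1:]
        weights[d] = xs[d - 1]
        vertex = np.zeros(d, dtype=int)        # walk from (0,...,0) to (1,...,1)
        total = weights[0] * corner_values[tuple(vertex)]
        for k in range(1, d + 1):
            vertex[order[k - 1]] = 1
            total += weights[k] * corner_values[tuple(vertex)]
        return total

    corners = {(0, 0): 0.0, (1, 0): 1.0, (0, 1): 2.0, (1, 1): 3.0}
    print(simplex_interpolate(corners, (0.25, 0.5)))   # exact at corners, continuous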
2 PROBLEM DOMAINS
CAR ON HILL: In the Hillcar domain, the goal is to park a car near the top of
a one-dimensional hill. The hill is steep enough that the driver needs to back up in
order to gather enough speed to get to the goal. The state space is two-dimensional
(position, velocity). See [Moore and Atkeson, 1995] for further details, but note
that our formulation is harder than the usual formulation in that the goal region
is restricted to a narrow range of velocities around 0, and trials start at random
states. The task is specified by a reward of -1 for any action taken outside the goal
region, and 0 inside the goal. No discounting is used, and two actions are available:
maximum thrust backwards, and maximum thrust forwards.
ACROBOT: The Acrobot is a two-link planar robot acting in the vertical plane under gravity with a weak actuator at its elbow joint. The shoulder is unactuated. The goal is to raise the hand to at least one link's height above the unactuated pivot [Sutton, 1996]. The state space is four-dimensional: two angular positions and two angular velocities. Trials always start from a stationary position hanging straight down. This task is formulated in the same way as the car-on-the-hill. The only actions allowed are the two extreme elbow torques.
3 APPLYING INTERPOLATION: THREE CASES
3.1 CASE I: OFFLINE VALUE ITERATION WITH A KNOWN MODEL
First, we precalculate the effect of taking each possible action from each state corresponding to a datapoint in the grid. Then, as suggested in [Gordon, 1995], we use these calculations to derive a completely discrete MDP. Taking any action from any state in this MDP results in c possible successor states, where c is the number of datapoints used per interpolation. Without interpolation, c is 1; with multilinear interpolation, 2^d; with simplex-based interpolation, d + 1.
We calculate the optimal policy for this derived MDP offline using value iteration [Ross, 1983]; because the value iteration can be performed on a completely
discrete MDP, the calculations are much less computationally expensive than they
would have been with many other kinds of function approximators. The value iteration gives us values for the datapoints of our grid, which we may then use to
interpolate the values at other states during online control.
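A sketch of the offline computation is given below, under the assumption that the simulated successors, interpolation weights, and rewards have already been precomputed and stored in the arrays named here; the discount parameter is included for generality even though the tasks above are undiscounted.

    import numpy as np

    def value_iteration(successors, weights, rewards, gamma=1.0, tol=1e-6):
        # successors[s][a]: indices of the c datapoints interpolated over
        # after simulating action a from grid point s; weights[s][a]: the
        # matching coefficients; rewards[s][a]: immediate reward.  These
        # names describe one assumed storage layout, not the authors' code.
        n = len(successors)
        V = np.zeros(n)
        while True:
            V_new = np.array([
                max(rewards[s][a]
                    + gamma * np.dot(weights[s][a], V[successors[s][a]])
                    for a in range(len(successors[s])))
                for s in range(n)
            ])
            if np.max(np.abs(V_new - V)) < tol:
                return V_new
            V = V_new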
3.1.1 Hillcar Results: value iteration with known model
We tested the two interpolation methods on a variety of quantization levels by
first performing value iteration offline, and then starting the car from 1000 random
states and averaging the number of steps taken to the goal from those states. We
also recorded the number of backups required before convergence, as well as the
execution time required for the entire value iteration on an 85 MHz Sparc 5. See
Figure 1 for the results. All steps-to-goal values are means with an expected error
of 2 steps.
Interpolation Method        11^2      21^2      51^2      301^2
None
  Steps to Goal:            237       131       133       120
  Backups:                  2.42K     15.4K     156K      14.3M
  Time (sec):               0.4       1.0       4.1       192
Multilinear
  Steps to Goal:            134       116       105       107
  Backups:                  4.54K     18.1K     205K      17.8M
  Time (sec):               0.6       1.3       7.1       405
Simplex
  Steps to Goal:            134       118       109       107
  Backups:                  6.17K     18.1K     195K      17.9M
  Time (sec):               0.5       1.2       5.7       328

Figure 1: Hillcar: value iteration with known model
[Table garbled in extraction: steps to goal, backups, and computation time for grid sizes 8^4 through 15^4 under no interpolation, multilinear interpolation, and simplex-based interpolation; most entries for the non-interpolated learner are missing, reflecting failures of value iteration to converge.]

Figure 2: Acrobot: value iteration with known model
The interpolated functions require more backups for convergence, but this is amply compensated by a dramatic improvement in the policy. Surprisingly, both interpolation methods provide improvements even at extremely high grid resolutions - the noninterpolated grid with 301 datapoints along each axis fared no better than the interpolated grids with only 21 datapoints along each axis(!).
We used the same value iteration algorithm in the acrobot domain. In this case our
test trials always began from the same start state, but we ran tests for a larger set
of grid sizes (Figure 2).
Grids with different resolutions place grid cell boundaries at different locations, and
these boundary locations appear to be important in this problem - the performance varies unpredictably as the grid resolution changes. However, in all cases,
interpolation was necessary to arrive at a satisfactory solution; without interpolation, the value iteration often failed to converge at all. With relatively coarse
grids it may be that any trajectory to the goal passes through some grid box more
than once, which would immediately spell disaster for any algorithm associating a
constant value over that entire grid box.
Controllers using multilinear interpolation consistently fared better than those employing the simplex-based interpolation; the smoother value function provided by
multilinear interpolation seems to help. However, value iteration with the simplexbased interpolation was about twice as fast as that with multilinear interpolation.
In higher dimensions this speed ratio will increase.
3.2 CASE II: Q-LEARNING
Under a second reinforcement learning paradigm, we do not use any model.
Rather, we learn a Q-function that directly maps state-action pairs to long-term
rewards [Watkins, 1989]. Does interpolation help here too?
In this implementation we encourage exploration by optimistically initializing the
Q-function to zero everywhere. After travelling a sufficient distance from our last
decision point, we perform a single backup by changing the grid point values according to a perceptron-like update rule, and then we greedily select the action for
which the interpolated Q-function is highest at the current state.
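The paper does not spell out the update rule, but one plausible reading of a "perceptron-like" backup on an interpolated grid is to distribute the TD error over the corner points in proportion to their interpolation weights, as in this hypothetical sketch:

    def q_backup(Q, corners, weights, a, reward, q_next_max,
                 gamma=1.0, alpha=0.2):
        # Interpolated Q-value at the pre-transition state, then a TD error
        # distributed back onto the corner points in proportion to their
        # interpolation weights.  Q maps corner tuples to per-action lists;
        # alpha and gamma are illustrative choices, not the paper's values.
        q_sa = sum(w * Q[c][a] for c, w in zip(corners, weights))
        td_error = reward + gamma * q_next_max - q_sa
        for c, w in zip(corners, weights):
            Q[c][a] += alpha * w * td_error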
3.2.1 Hillcar Results: Q-Learning
We used Q-Learning with a grid size of 11^2. Figure 3 shows learning curves for
three learners using the three different interpolation techniques.
Both interpolation methods provided a significant improvement in both initial and
final online performance. The learner without interpolation achieved a final average performance of about 175 steps to the goal; with multilinear interpolation, 119;
with simplex-based interpolation, 122. Note that these are all significant improvements over the corresponding results for offline value iteration with a known model.
Inaccuracies in the interpolated functions often cause controllers to enter cycles; because the Q-learning backups are being performed online, however, the Q-learning controller can escape from these control cycles by depressing the Q-values in the vicinities of such cycles.
3.2.2 Acrobot Results: Q-Learning
We used the same algorithms on the acrobot domain with a grid size of 15^4; results are shown in Figure 3.
Figure 3: Left: Cumulative performance of Q-learning hillcar on an 11^2 grid. (Multilinear interpolation comes out on top; no interpolation on the bottom.) Right: Q-learning acrobot on a 15^4 grid. (The two interpolations come out on top with nearly identical performance.) For each learner, the y-axis shows the sum of rewards for all trials to date. The better the average performance, the shallower the gradient. Gradients are always negative because each state transition before reaching the goal results in a reward of -1.
Both Q-learners using interpolation improved rapidly, and eventually reached the goal in a relatively small number of steps per trial. The learner using multilinear interpolation eventually achieved an average of 1,529 steps to the goal per trial; the learner using simplex-based interpolation achieved 1,727 steps per trial. On the other hand, the learner not using any interpolation fared much worse, taking
an average of more than 27,000 steps per trial. (A controller that chooses actions
randomly typically takes about the same number of steps to reach the goal.)
Simplex-based interpolation provided on-line performance very close to that provided by multilinear interpolation, but at roughly half the computational cost.
3.3 CASE III: VALUE ITERATION WITH MODEL LEARNING
Here, we use a model of the system, but we do not assume that we have one to start with. Instead, we learn a model of the system as we interact with it; we assume this
model is adequate and calculate a value function via the same algorithms we would
use if we knew the true model. This approach may be particularly beneficial for
tasks in which data is expensive and computation is cheap. Here, models are learned
using very simple grid-based function approximators without interpolation for both
the reward and transition functions of the model. The same grid resolution is used
for the value function grid and the model approximator. We strongly encourage
exploration by initializing the model so that every state is initially assumed to be
an absorbing state with zero reward.
While making transitions through the state space, we update the model and use
prioritized sweeping [Moore and Atkeson, 1993] to concentrate backups on relevant
parts of the state space. We also occasionally stop to recalculate the effects of
all actions under the updated model and then run value iteration to convergence.
As this is fairly time-consuming, it is done rather rarely; we rely on the updates performed by prioritized sweeping to guide the system in the meantime.
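A skeleton of the prioritized-sweeping loop is sketched below; the backup and predecessors interfaces are assumptions about how the surrounding code might be organized, not the authors' implementation.

    import heapq

    def prioritized_sweeping(n_points, backup, predecessors, n_backups,
                             theta=1e-3):
        # backup(s) re-backs-up grid point s under the learned model and
        # returns the magnitude of the change in its value; predecessors[s]
        # lists the points whose backed-up values depend on s.
        queue = []
        def push(s, priority):
            if priority > theta:
                heapq.heappush(queue, (-priority, s))   # max-priority queue
        for s in range(n_points):
            push(s, backup(s))
        for _ in range(n_backups):
            if not queue:
                break
            _, s = heapq.heappop(queue)
            change = backup(s)
            for pred in predecessors[s]:
                push(pred, change)                      # propagate surprise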
Figure 4: Left: Cumulative performance, model-learning on hillcar with an 11^2 grid. Right: Acrobot with a 15^4 grid. In both cases, multilinear interpolation comes out on top, while no interpolation winds up on the bottom.
3.3.1 Hillcar Results: value iteration with learned model
We used the algorithm described above with an 11-by-11 grid. An average of about two prioritized sweeping backups were performed per transition; the complete recalculations were performed every 1000 steps throughout the first two trials and
every 5000 steps thereafter. Figure 4 shows the results for the first 500 trials.
Over the first 500 trials, the learner using simplex-based interpolation didn't fare
much better than the learner using no interpolation. However, its performance
on trials 1500-2500 (not shown) was close to that of the learner using multilinear
interpolation, taking an average of 151 steps to the goal per trial while the learner
using multilinear interpolation took 147. The learner using no interpolation did
significantly worse than the others in these later trials, taking 175 steps per trial.
The model-learners' performance improved more quickly than the Q-learners' over the first few trials; on the other hand, their final performance was significantly worse than the Q-learners'.
3.3.2 Acrobot Results: value iteration with learned model
We used the same algorithm with a 15^4 grid on the acrobot domain, this time performing the complete recalculations every 10000 steps through the first two trials
and every 50000 thereafter. Figure 4 shows the results. In this case, the learner
using no interpolation took so much time per trial that the experiment was aborted
early; after 100 trials, it was still taking an average of more than 45,000 steps
to reach the goal. The learners using interpolation, however, fared much better.
The learner using multilinear interpolation converged to a solution taking 938 steps
per trial; the learner using simplex-based interpolation averaged about 2450 steps.
Again, as the graphs show, these three learners initially improve significantly faster
than did the Q-Learners using similar grid sizes.
4 CONCLUSIONS
We have shown how two interpolation schemes, one based on a weighted average of the 2^d points in a square cell, the other on a d-dimensional triangulation, may be used in three reinforcement learning paradigms: optimal policy computation with a known model, Q-learning, and online value iteration while learning a model. In each case our empirical studies demonstrate that interpolation resoundingly decreases the quantization level necessary for a satisfactory solution. Future extensions of this research will explore the use of variable resolution grids and triangulations, multiple low-dimensional interpolations in place of one high-dimensional interpolation in a manner reminiscent of CMAC [Albus, 1981], memory-based approximators, and more intelligent exploration.
This research was funded in part by a National Science Foundation Graduate Fellowship to Scott Davies,
and a Research Initiation Award to Andrew Moore.
References
[Albus, 1981] J. S. Albus. Brains, Behavior, and Robotics. BYTE Books, McGraw-Hill, 1981.
[Boyan and Moore, 1995] J. A. Boyan and A. W. Moore. Generalization in Reinforcement Learning: Safely Approximating the Value Function. In Neural Information Processing Systems 7, 1995.
[Crites and Barto, 1996] R. H. Crites and A. G. Barto. Improving Elevator Performance using Reinforcement Learning. In D. Touretzky, M. Mozer, and M. Hasselmo, editors, Neural Information Processing Systems 8, 1996.
[Gordon, 1995] G. Gordon. Stable Function Approximation in Dynamic Programming. In Proceedings of the 12th International Conference on Machine Learning. Morgan Kaufmann, June 1995.
[Moore and Atkeson, 1993] A. W. Moore and C. G. Atkeson. Prioritized Sweeping: Reinforcement Learning with Less Data and Less Real Time. Machine Learning, 13, 1993.
[Moore and Atkeson, 1995] A. W. Moore and C. G. Atkeson. The Parti-game Algorithm for Variable Resolution Reinforcement Learning in Multidimensional State-spaces. Machine Learning, 21, 1995.
[Moore, 1992] D. W. Moore. Simplicial Mesh Generation with Applications. PhD Thesis. Report no. 92-1322, Cornell University, 1992.
[Ross, 1983] S. Ross. Introduction to Stochastic Dynamic Programming. Academic Press, New York, 1983.
[Sutton, 1988] R. S. Sutton. Learning to Predict by the Methods of Temporal Differences. Machine Learning, 3:9-44, 1988.
[Sutton, 1996] R. S. Sutton. Generalization in Reinforcement Learning: Successful Examples Using Sparse Coarse Coding. In D. Touretzky, M. Mozer, and M. Hasselmo, editors, Neural Information Processing Systems 8, 1996.
[Tesauro, 1991] G. J. Tesauro. Practical Issues in Temporal Difference Learning. RC 17223 (76307), IBM T. J. Watson Research Center, NY, 1991.
[Watkins, 1989] C. J. C. H. Watkins. Learning from Delayed Rewards. PhD Thesis, King's College, University of Cambridge, May 1989.
LINEAR LEARNING: LANDSCAPES AND ALGORITHMS
Pierre Baldi
Jet Propulsion Laboratory
California Institute of Technology
Pasadena, CA 91109
What follows extends some of our results of [1] on learning from examples in layered feed-forward networks of linear units. In particular we examine what happens when the number of layers is large or when the connectivity between layers is local, and investigate some of the properties of an autoassociative algorithm. Notation will be as in [1], where additional motivations and references can be found.
It is usual to criticize linear networks because "linear functions do not compute" and because several layers can always be reduced to one by the proper multiplication of matrices. However this is not the point of view adopted here. It is assumed that the architecture of the network is given (and could perhaps depend on external constraints) and the purpose is to understand what happens during the learning phase, what strategies are adopted by a synaptic weight modifying algorithm, ... [see also Cottrell et al. (1988) for an example of an application and the work of Linsker (1988) on the emergence of feature-detecting units in linear networks].
Consider first a two-layer network with n input units, n output units and p hidden units (p < n). Let (x_1, y_1), ..., (x_T, y_T) be the set of centered input-output training patterns. The problem is then to find two matrices of weights A and B minimizing the error function E:

E(A, B) = Σ_{1≤t≤T} ||y_t − ABx_t||².   (1)
Let Σ_XX, Σ_XY, Σ_YY, Σ_YX denote the usual covariance matrices. The main result of [1] is a description of the landscape of E, characterized by a multiplicity of saddle points and an absence of local minima. More precisely, the following four facts are true.
Fact 1: For any fixed n × p matrix A the function E(A, B) is convex in the coefficients of B and attains its minimum for any B satisfying the equation

A'ABΣ_XX = A'Σ_YX.   (2)

If in addition Σ_XX is invertible and A is of full rank p, then E is strictly convex and has a unique minimum reached when

B = B̂(A) = (A'A)^{-1}A'Σ_YXΣ_XX^{-1}.   (3)
Fact 2: For any fixed p × n matrix B the function E(A, B) is convex in the coefficients of A and attains its minimum for any A satisfying the equation

ABΣ_XXB' = Σ_YXB'.   (4)

If in addition Σ_XX is invertible and B is of full rank p, then E is strictly convex and has a unique minimum reached when

A = Â(B) = Σ_YXB'(BΣ_XXB')^{-1}.   (5)
Fact 3: Assume that Σ_XX is invertible. If two matrices A and B define a critical point of E (i.e. a point where ∂E/∂a_ij = ∂E/∂b_ij = 0) then the global map W = AB is of the form

W = P_A Σ_YX Σ_XX^{-1},   (6)

where P_A denotes the matrix of the orthogonal projection onto the subspace spanned by the columns of A, and A satisfies

P_A Σ = P_A Σ P_A = Σ P_A,   (7)

with Σ = Σ_YX Σ_XX^{-1} Σ_XY. If A is of full rank p, then A and B define a critical point of E if and only if A satisfies (7) and B = B̂(A), or equivalently if and only if A and W satisfy (6) and (7). Notice that in (6), the matrix Σ_YX Σ_XX^{-1} is the slope matrix for the ordinary least squares regression of Y on X.
Fact 4: Assume that Σ is of full rank with n distinct eigenvalues λ_1 > ... > λ_n. If I = {i_1, ..., i_p} (1 ≤ i_1 < ... < i_p ≤ n) is any ordered p-index set, let U_I = [u_{i_1}, ..., u_{i_p}] denote the matrix formed by the orthonormal eigenvectors of Σ associated with the eigenvalues λ_{i_1}, ..., λ_{i_p}. Then two full-rank matrices A and B define a critical point of E if and only if there exist an ordered p-index set I and an invertible p × p matrix C such that

A = U_I C,   (8)

B = C^{-1} U_I' Σ_YX Σ_XX^{-1}.   (9)

For such a critical point we have

W = P_{U_I} Σ_YX Σ_XX^{-1},   (10)

E(A, B) = tr(Σ_YY) − Σ_{i∈I} λ_i.   (11)
Therefore a critical point W of rank p is always the product of the ordinary least squares regression matrix followed by an orthogonal projection onto the subspace spanned by p eigenvectors of Σ. The map W associated with the index set {1, 2, ..., p} is the unique local and global minimum of E. The remaining (n choose p) − 1 p-index sets correspond to saddle points. All additional critical points defined by matrices A and B which are not of full rank are also saddle points and can be characterized in terms of orthogonal projections onto subspaces spanned by q eigenvectors with q < p.
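Fact 4 can be checked numerically; the small NumPy script below, using synthetic data, compares the error of the map W built from (10) against the closed-form value in (11) (the data and dimensions are arbitrary choices for illustration).

    import numpy as np

    rng = np.random.default_rng(0)
    n, p, T = 5, 2, 2000
    X = rng.standard_normal((n, T))
    Y = rng.standard_normal((n, n)) @ X + 0.1 * rng.standard_normal((n, T))
    Sxx = X @ X.T / T                       # empirical covariances
    Syx = Y @ X.T / T
    Syy = Y @ Y.T / T
    Sigma = Syx @ np.linalg.inv(Sxx) @ Syx.T
    eigvals, U = np.linalg.eigh(Sigma)      # ascending eigenvalues
    Up = U[:, -p:]                          # top-p eigenvectors of Sigma
    W_star = Up @ Up.T @ Syx @ np.linalg.inv(Sxx)    # equation (10)
    E = np.sum((Y - W_star @ X) ** 2) / T
    print(E, np.trace(Syy) - eigvals[-p:].sum())     # equal, per equation (11)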
Deep Networks
Consider now the case of a deep network with a first layer of n input units, an (m+1)-th layer of n output units and m − 1 hidden layers, with an error function given by

E(A_1, ..., A_m) = Σ_{1≤t≤T} ||y_t − A_1 A_2 ... A_m x_t||².   (12)
It is worth noticing that, as in Facts 1 and 2 above, if we fix any m − 1 of the m matrices A_1, ..., A_m then E is convex in the remaining matrix of connection weights. Let p (p < n) denote the number of units in the smallest layer of the network (several hidden layers may have p units). In other words the network has a bottleneck of size p. Let i be the index of the corresponding layer and set

A = A_1 A_2 ... A_{m-i+1},  B = A_{m-i+2} ... A_m.   (13)

When we let A_1, ..., A_m vary, the only restriction they impose on A and B is that they be of rank at most p. Conversely, any two matrices A and B of rank at most p can always be decomposed (and in many ways) into products of the form of (13). It results that any local minimum of the error function of the deep network should yield a local minimum for the corresponding "collapsed" three-layer network induced by (13), and vice versa. Therefore E(A_1, ..., A_m) does not have any local minima, and the global minimal map W* = A_1 A_2 ... A_m is unique and given by (10) with index set {1, 2, ..., p}. Notice that of course there is a large number of ways of decomposing W* into a product of the form A_1 A_2 ... A_m. Also a saddle point of the error function E(A, B) does not necessarily generate a saddle point of the corresponding E(A_1, ..., A_m), for the expressions corresponding to the two gradients are very different.
Forced Connections. Local Connectivity
Assume now an error function of the form

E(A) = Σ_{1≤t≤T} ||y_t − Ax_t||²   (14)

for a two-layer network, but where the value of some of the entries
of A may be externally prescribed. In particular this includes the case of local connectivity described by relations of the form a_ij = 0 for any output unit i and any input unit j which are not connected. Clearly the error function E(A) is convex in A. Every constraint of the form a_ij = cst defines a hyperplane in the space of all possible A. The intersection of all these constraints is therefore a convex set. Thus minimizing E under the given constraints is still a convex optimization problem and so there are no local minima. It should be noticed that, in the case of a network with even only three constrained layers with two matrices A and B and a set of constraints of the form a_ij = cst on A and b_kl = cst for B, the set of admissible matrices of the form W = AB is, in general, not convex anymore. It is not unreasonable to conjecture that local minima may then arise, though this question needs to be investigated in greater detail.
Algorithmic Aspects
One of the nice features of the error landscapes described so far is
the absence of local minima and the existence, up to equivalence,
of a unique global minimum which can be understood in terms of
principal component analysis and least square regression. However
the landscapes are also characterized by a large number of saddle
points which could constitute a problem for a simple gradient descent
algorithm during the learning phase. The proof in [1] shows that
the lower is the E value corresponding to a saddle point, the more
difficult it is to escape from it because of a reduction in the possible
number of directions of escape (see also [Chauvin, 1989] in a context of
Hebbian learning). To assess how relevant these issues are for practical
implementations requires further simulation experiments. On a more
speculative side, it remains also to be seen whether, in a problem
of large size, the number and spacing of saddle points encountered
during the first stages of a descent process could not be used to "get a feeling" for the type of terrain being descended and, as a result, to adjust the pace (i.e. the learning rate).
We now turn to a simple algorithm for the auto-associative case in a three-layer network, i.e. the case where the presence of a teacher can be avoided by setting y_t = x_t and thereby trying to achieve a compression of the input data in the hidden layer. This technique is related to principal component analysis, as described in [1]. If y_t = x_t, it is easy to see from equations (8) and (9) that, if we take the matrix C to be the identity, then at the optimum the matrices A and B are transposes of each other. This heuristically suggests a possible fast algorithm for auto-association, where at each iteration a gradient descent step is applied only to one of the connection matrices while the other is updated in a symmetric fashion using transposition, avoiding back-propagation of the error through one of the layers (see [Williams, 1985] for a similar idea). More formally, the algorithm can be concisely described by
A(0) = random,  B(0) = A'(0),

A(k+1) = A(k) − η ∂E/∂A,  B(k+1) = A'(k+1).   (15)
Obviously a similar algorithm can be obtained by setting B(k+1) = B(k) − η ∂E/∂B and A(k+1) = B'(k+1). It may actually even be better to alternate the gradient step, one iteration with respect to A and one iteration with respect to B.
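A direct NumPy transcription of (15) might look as follows; the initialization scale, the learning rate, and the absorption of the gradient's factor of 2 into η are arbitrary choices for illustration.

    import numpy as np

    def symmetric_autoassociator(X, p, eta=0.01, iters=1000):
        # X: (n, T) matrix of centered patterns (auto-association: y_t = x_t);
        # p: number of hidden units.
        n, T = X.shape
        Sxx = X @ X.T / T
        A = 0.1 * np.random.randn(n, p)
        for _ in range(iters):
            W = A @ A.T                                # B(k) = A(k)'
            A = A + eta * (np.eye(n) - W) @ Sxx @ A    # the update (16)
        return A, A.T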
A simple calculation shows that (15) can be rewritten as
A(k+1) = A(k) + η(I − W(k))Σ_XX A(k),
B(k+1) = B(k) + ηB(k)Σ_XX(I − W(k)),   (16)
where W(k) = A(k)B(k). It is natural from what we have already seen to examine the behavior of this algorithm on the eigenvectors of Σ_XX. Assume that u is an eigenvector of both Σ_XX and W(k), with eigenvalues λ and μ(k) respectively. Then it is easy to see that u is an eigenvector of W(k+1) with eigenvalue

μ(k+1) = μ(k)[1 + ηλ(1 − μ(k))]².   (17)
For the algorithm to converge to the optimal W, μ(k+1) must converge to 0 or 1. Thus one has to look at the iterates of the function f(x) = x[1 + ηλ(1 − x)]². This can be done in detail and we shall only describe the main points. First of all, f(x) = 0 iff x = 0 or x = x_a = 1 + (1/ηλ) (a double root, so f'(x_a) = 0 as well); f'(x) = 0 iff x = x_a or x = x_b = 1/3 + (1/3ηλ); and f''(x) = 0 iff x = x_c = 2/3 + (2/3ηλ) = 2x_b. For the fixed points, f(x) = x iff x = 0, x = 1 or x = x_d = 1 + (2/ηλ). Points corresponding to the values 0, 1, x_a, x_d of the x variable can readily be positioned on the curve of f, but the relative position of x_b (and x_c) depends on the value assumed by ηλ with respect to 1/2. Obviously if μ(0) = 0, 1 or x_d then μ(k) = 0, 1 or x_d; if μ(0) < 0 then μ(k) → −∞; and if μ(0) > x_d then μ(k) → +∞. Therefore the algorithm can converge only for 0 < μ(0) < x_d. When the learning rate is too large, i.e. when ηλ > 1/2, then even if μ(0) is in the interval (0, x_d) one can see that the algorithm does not converge and may even exhibit complex oscillatory behavior. However, when ηλ < 1/2: if 0 < μ(0) < x_a then μ(k) → 1; if μ(0) = x_a then μ(k) = 0; and if x_a < μ(0) < x_d then μ(k) → 1.
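These claims are easy to check numerically by iterating f directly, as in this short sketch (the particular value of ηλ and the starting points are arbitrary choices):

    def f_iterate(mu0, eta_lam, k=200):
        # Iterate f(x) = x * (1 + eta*lambda*(1 - x))**2, i.e. equation (17).
        x = mu0
        for _ in range(k):
            x = x * (1.0 + eta_lam * (1.0 - x)) ** 2
        return x

    for mu0 in (0.05, 0.5, 0.99):
        print(mu0, f_iterate(mu0, eta_lam=0.4))   # eta*lambda < 1/2: all tend to 1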
In conclusion, we see that if the algorithm is to be tested, the learning rate should be chosen so that it does not exceed 1/(2λ), where λ is the largest eigenvalue of Σ_XX. Even more so than back propagation, it can encounter problems in the proximity of saddle points. Once a non-principal eigenvector of Σ_XX is learnt, the algorithm rapidly incorporates a projection along that direction which cannot be escaped at later stages. Simulations are required to examine the effects of "noisy gradients" (computed after the presentation of only a few training examples), multiple starting points, variable learning rates, momentum terms, and so forth.
Acknowledgement
Work supported by NSF grant DMS-8800323 and in part by ONR
contract 411P006-01.
References
(1) Baldi, P. and Hornik, K. (1988). Neural Networks and Principal Component Analysis: Learning from Examples without Local Minima. Neural Networks, Vol. 2, No. 1.
(2) Chauvin, Y. (1989). Another Neural Model as a Principal Component Analyzer. Submitted for publication.
(3) Cottrell, G. W., Munro, P. W. and Zipser, D. (1988). Image Compression by Back Propagation: a Demonstration of Extensional Programming. In: Advances in Cognitive Science, Vol. 2, Sharkey, N. E., ed., Norwood, NJ: Ablex.
(4) Linsker, R. (1988). Self-Organization in a Perceptual Network. Computer 21(3), 105-117.
(5) Williams, R. J. (1985). Feature Discovery Through Error-Correction Learning. ICS Report 8501, University of California, San Diego.
Local Bandit Approximation
for Optimal Learning Problems
Michael O. Duff
Andrew G. Barto
Department of Computer Science
University of Massachusetts
Amherst, MA 01003
{duff,barto}@cs.umass.edu
Abstract
In general, procedures for determining Bayes-optimal adaptive controls for Markov decision processes (MDP's) require a prohibitive amount of computation: the optimal learning problem is intractable. This paper proposes an approximate approach in
which bandit processes are used to model, in a certain "local" sense,
a given MDP. Bandit processes constitute an important subclass of
MDP's, and have optimal learning strategies (defined in terms of
Gittins indices) that can be computed relatively efficiently. Thus,
one scheme for achieving approximately-optimal learning for general MDP's proceeds by taking actions suggested by strategies that
are optimal with respect to local bandit models.
1 INTRODUCTION
Watkins [1989] has defined optimal learning as: "the process of collecting and using information during learning in an optimal manner, so that the learner makes the best possible decisions at all stages of learning: learning itself is regarded as a multistage decision process, and learning is optimal if the learner adopts a strategy that will yield the highest possible return from actions over the whole course of learning."
For example, suppose a decision-maker is presented with two biased coins (the
decision-maker does not know precisely how the coins are biased) and asked to allocate twenty flips between them so as to maximize the number of observed heads.
Although the decision-maker is certainly interested in determining which coin has
a higher probability of heads, his principal concern is with optimizing performance en route to this determination. An optimal learning strategy typically intersperses "exploitation" steps, in which the coin currently thought to have the highest probability
of heads is flipped, with "exploration" steps in which, on the basis of observed flips, a coin that would be deemed inferior is flipped anyway to further resolve its true potential for turning up heads. The coin-flip problem is a simple example of a (two-armed) bandit problem. A key feature of these problems, and of adaptive control processes in general, is the so-called "exploration-versus-exploitation trade-off" (or problem of "dual control" [Fel'dbaum, 1965]).

Figure 1: A simple example: dynamics/rewards under (a) action 1 and (b) action 2. (c) The decision problem in hyperstate space.
As another example, consider the MDP depicted in Figures 1(a) and (b). This is a 2-state/2-action process; transition probabilities label arcs, and quantities within circles denote expected rewards for taking particular actions in particular states. The goal is to assign actions to states so as to maximize, say, the expected infinite-horizon discounted sum of rewards (the value function) over all states. For the case considered in this paper, the transition probabilities are not known. Given that the process is in some state, one action may be optimal with respect to currently-perceived point-estimates of unknown parameters, while another action may result in greater information gain. Optimal learning is concerned with striking a balance between these two criteria.
While reinforcement learning approaches have recognized the dual effects of control, at least in the sense that one must occasionally deviate from a greedy policy to ensure a search of sufficient breadth, many exploration procedures appear not to be motivated by real notions of optimal learning; rather, they aspire to be practical schemes for avoiding the unrealistic levels of sampling and search that would be required if one were to strictly adhere to the theoretical sufficient conditions for convergence, namely that all state-action pairs must be considered infinitely many times.
If one is willing to adopt a Bayesian perspective, then the exploration-versus-exploitation issue has already been resolved, in principle. A solution was recognized by Bellman and Kalaba nearly forty years ago [Bellman & Kalaba, 1959]; their dynamic programming algorithm for computing Bayes-optimal policies begins by regarding "state" as an ordered pair, or "hyperstate," (x, I), where x is a point in phase-space (Markov-chain state) and I is the "information pattern," which summarizes past history as it relates to modeling the transitional dynamics of x.
Computation grows increasingly burdensome with problem size, however, so one is
compelled to seek approximate solutions, some of which ignore the effects of information gain entirely. In contrast, the approach suggested in this paper explicitly
acknowledges that there is an information-gain component to the optimal learning problem; if certain salient aspects of the value of information can be captured,
even approximately, then one may be led to a reasonable method for approximating
optimal learning policies.
Here is the basic idea behind the approach suggested in this paper: First note that
there exists a special class of problems, namely multi-armed bandit problems, in
which the information pattern is the sole component of the hyperstate. These special
problems have the important feature that their optimal policies can be defined
concisely in terms of "Gittins indices," and these indices can be computed in a
relatively efficient way. This paper is an attempt to make use of the fact that this
special subclass of MDP's has tractably-computable optimal learning strategies.
Actions for general MDP's are derived by, first, attaching to a given general MDP
in a given state a "local" n-armed bandit process that captures some aspect of the
value of information gain as well as explicit reward. Indices for the local bandit
model can be computed relatively efficiently; the largest index suggests the best
action in an optimal-learning sense. The resulting algorithm has a receding-horizon
flavor in that a new local-bandit process is constructed after each transition; it
makes use of a mean-process model as in some previously-suggested approximation
schemes, but here the value of information gain is explicitly taken into account, in
part, through index calculations.
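At a high level, the resulting control loop might be organized as below; build_local_bandit, gittins_index, and environment_step are hypothetical placeholders standing in for the constructions described in the text, not functions defined in the paper.

    def local_bandit_control(state, posterior, n_steps):
        # Receding-horizon loop: rebuild the local bandit model after each
        # transition, act greedily with respect to the Gittins indices, and
        # update the posterior from the observed transition.
        for _ in range(n_steps):
            bandit = build_local_bandit(state, posterior)     # hypothetical
            indices = [gittins_index(arm) for arm in bandit]  # hypothetical
            action = max(range(len(indices)), key=indices.__getitem__)
            next_state, reward = environment_step(state, action)  # hypothetical
            posterior.update(state, action, next_state)
            state = next_state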
2 THE BAYES-BELLMAN APPROACH FOR ADAPTIVE MDP'S
Consider the two-state, two-action process shown in Figure 1, and suppose that
one is uncertain about the transition probabilities. If the process is in a given
state and an action is taken, then the result is that the process either stays in the
state it is in or jumps to the other state-one observes a Bernoulli process with
unknown parameter-just as in the coin-flip example. But in this case one observes
four Bernoulli processes: the result of taking action 1 in state 1, action 1 in
state 2, action 2 in state 1, action 2 in state 2. So if the prior probability
for staying in the current state, for each of these state-action pairs, is represented by
a beta distribution (the appropriate conjugate family of distributions with regard
to Bernoulli sampling; I.e., a Bayesian update of a beta prior remains beta), then
one may perform dynamic programming in a space of "hyperstates," in which the
components are four pairs of parameters specifying the beta distributions describing the uncertainty in the transition probabilities, along with the Markov chain
state: (:z:, (aL,Bt), (a~,,B~)(a~,,Bn, (a~,,B~?), where for example (aL,BD denotes
the parmeters specifying the beta distribution that represents uncertainty in the
transition probability P~l. Figure l(c) shows part ofthe associated decision tree; an
optimality equation may be written in terms of the hyperstates. MDP's with more
than two states pose no special problem (there exists an appropriate generalization
of the beta distribution). What i& a problem is what Bellman calls the "problem
of the expanding grid:" the number of hyperstates that must be examined grows
exponentially with the horizon.
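The conjugate bookkeeping behind the hyperstate is easy to make concrete. The sketch below is our own illustrative code with hypothetical names: it keeps one Beta(alpha, beta) pair per state-action pair of the two-state process and applies the Bayesian update after each observed transition; since a Bayesian update of a beta prior remains beta, only two counts per pair need be stored.

```python
import numpy as np

# One (alpha, beta) pair per (state, action): uncertainty about the
# probability of *staying* in the current state under that action.
# Beta(1, 1) is the uniform prior.
hyper = np.ones((2, 2, 2))   # [state, action, (alpha, beta)]

def mean_stay_prob(hyper, s, a):
    """Posterior mean of P(stay in s | s, a) under Beta(alpha, beta)."""
    alpha, beta = hyper[s, a]
    return alpha / (alpha + beta)

def bayes_update(hyper, s, a, s_next):
    """Conjugate update: increment alpha on a 'stay', beta on a 'jump'."""
    if s_next == s:
        hyper[s, a, 0] += 1.0
    else:
        hyper[s, a, 1] += 1.0

# Observing that action 0 taken in state 1 led back to state 1:
bayes_update(hyper, s=1, a=0, s_next=1)
print(mean_stay_prob(hyper, 1, 0))   # 2/3 after one observed 'stay'
```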
How does one proceed if one is constrained to practical amounts of computation and
is willing to settle for an approximate solution? One could truncate the decision
tree at some shorter and more manageable horizon, compute approximate terminal
values by replacing the distributions with their means, and proceed with a receding-horizon approach: Starting from the approximate terminal values at the horizon,
perform a backward sweep of dynamic programming, computing an optimal policy.
Take the initial action of the policy, then shift the entire computational window
forward one level and repeat. One can imagine a sort of limiting, degenerate version
of this receding-horizon approach in which the horizon is zero; that is, use the means
of the current distributions to calculate an optimal policy, take an "optimal" action,
observe a transition, perform a Bayesian modification of the prior, and repeat.
This (certainty-equivalence) heuristic was suggested by [Cozzolino et al., 1965], and
has recently reappeared in [Dayan & Sejnowski, 1996]. However, as was noted in
[Cozzolino et al., 1965], "... the trade-off between immediate gain and information
does not exist in this heuristic. There is no mechanism which explicitly forces
unexplored policies to be observed in early stages. Therefore, if it should happen
that there is some very good policy which a priori seemed quite bad, it is entirely
possible that this heuristic will never provide the information needed to recognize
the policy as being better than originally thought..." This comment and others seem
to refer to what is now regarded as a problem of "identifiability" associated with
certainty-equivalence controllers in which a closed-loop system evolves identically for
both true and false values of the unknown parameters; that is, certainty-equivalence
control may make some of the unknown parameters invisible to the identification
process and lead one to repeatedly choose the wrong action (see [Borkar & Varaiya,
1979], and also Watkins' discussion of "metastable policies" in [Watkins, 1989]).
3
BANDIT PROBLEMS AND INDEX COMPUTATION
One basic version of the bandit problem may be described as follows: There are
some number of statistically independent reward processes-Markov chains with
an imposed reward structure associated with the chain's arcs. At each discrete
time-step, a decision-maker selects one of these processes to activate. The activated
process yields an immediate reward and then changes state. The other processes
remain frozen and yield no reward. The goal is to splice together the individual
reward streams into one sequence having maximal expected discounted value.
The special Cartesian structure of the bandit problem turns out to imply that there
are functions that map process-states to scalars (or "indices"), such that optimal
policies consist simply of activating the task with the largest index. Consider one of
the reward processes, let $S$ be its state space, and let $\mathcal{B}$ be the set of all subsets of $S$. Suppose that $x(k)$ is the state of the process at time $k$ and, for $B \in \mathcal{B}$, let $\tau(B)$ be the number of transitions until the process first enters the set $B$. Let $v(i;B)$ be the expected discounted reward per unit of discounted time starting from state $i$ until the stopping time $\tau(B)$:
$$v(i;B) = \frac{E\left[\sum_{k=0}^{\tau(B)-1} \gamma^k R(x(k)) \mid x(0)=i\right]}{E\left[\sum_{k=0}^{\tau(B)-1} \gamma^k \mid x(0)=i\right]}.$$
Then the Gittins index of state $i$ for the process under consideration is
$$v(i) = \max_{B \in \mathcal{B}} v(i;B). \qquad (1)$$
[Gittins & Jones, 1979] shows that the indices may be obtained by solving a set
of functional equations. Other algorithms that have been suggested include those
by Beale (see the discussion section following [Gittins & Jones, 1979]), [Robinson, 1981], [Varaiya et al., 1985], and [Katehakis & Veinott, 1987]. [Duff, 1995]
provides a reinforcement learning approach that gradually learns indices through
online/model-free interaction with bandit processes. The details of these algorithms
would require more space than is available here. The algorithm proposed in the next
section makes use of the approach of [Varaiya et al., 1985].
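For a small Markov reward process, equation 1 can also be evaluated directly by enumerating stopping sets, which gives a useful (if exponentially expensive) reference implementation. The sketch below is our own brute-force rendering, not one of the cited algorithms; the reductions of [Varaiya et al., 1985] and [Tsitsiklis, 1993] obtain the same indices without enumerating subsets.

```python
import itertools
import numpy as np

def v_iB(P, R, gamma, i, B):
    """v(i; B): expected discounted reward per unit of discounted time from
    state i, stopping on first entry into the set B. Numerator and
    denominator each satisfy a linear fixed-point equation over the
    continuation states (those outside B)."""
    n = len(R)
    cont = [s for s in range(n) if s not in B]
    idx = {s: j for j, s in enumerate(cont)}
    A = np.eye(len(cont))
    for s in cont:
        for t in cont:
            A[idx[s], idx[t]] -= gamma * P[s, t]
    num = np.linalg.solve(A, np.array([R[s] for s in cont]))
    den = np.linalg.solve(A, np.ones(len(cont)))
    return num[idx[i]] / den[idx[i]]

def gittins_index(P, R, gamma, i):
    """Equation 1 by brute force: maximize over stopping sets B with i not in B."""
    others = [s for s in range(len(R)) if s != i]
    candidates = (set(B) for r in range(len(others) + 1)
                  for B in itertools.combinations(others, r))
    return max(v_iB(P, R, gamma, i, B) for B in candidates)

P = np.array([[0.5, 0.5], [0.2, 0.8]])   # toy 2-state reward process
R = np.array([1.0, 0.0])
print(gittins_index(P, R, gamma=0.9, i=0))
```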
4
LOCAL BANDIT APPROXIMATION AND AN
APPROXIMATELY-OPTIMAL LEARNING
ALGORITHM
The most obvious difference between the optimal learning problem for an MDP and
the multi-armed bandit problem is that the MDP has a phase-space component
(Markov chain state) to its hyperstate. A first step in bandit-based approximation,
then, proceeds by "removing" this phase-space component. This can be achieved by
viewing the process on a time-scale defined by the recurrence time of a given state.
That is, suppose the process is in some state, x. In response to some given action, two things can happen: (1) the process can transition, in one time-step, into x again with some immediate reward, or (2) the process can transition into some state that is not x and experience some "sojourn" path of states and rewards before returning to x. On a time-scale defined by sojourn-time, one can view the process in a sort of "state-x-centric" way (if state x never recurs, then the sojourn-time is
"infinite" and there is no value-of-information component of the local bandit model
to acknowledge) . From this perspective, the process appears to have only one state,
and is semi-Markov; that is, the time between transitions is a random variable. Some other action taken in state x would give rise to a different sojourn reward
process. For both processes (sojourn-processes initiated by different actions applied
to state x), the sojourn path/reward will depend upon the policy for states encountered along sojourn paths, but suppose that this policy is fixed for the moment.
By viewing the original process on a time-scale of sojourn-time, one has effectively
collapsed the phase-space component of the hyperstate. The new process has one
state, x, and the problem of choosing an action, given that one is uncertain about
the transition probabilities, presents itself as a semi-Markov bandit problem.
The preceding discussion suggests an algorithm for approximately-optimal learning (a schematic code sketch follows the listing):
(0) Given that the uncertainty in transition probabilities is expressed in terms of
sufficient statistics $\langle \alpha, \beta \rangle$, and the process is currently in state $x_t$.
(1) Compute the optimal policy for the mean process, $\pi^*[\bar{P}(\alpha, \beta)]$; that is, compute the policy that is optimal for the MDP whose transition probabilities are taken to be the mean values associated with $\langle \alpha, \beta \rangle$; this defines a nominal (certainty-equivalent) policy for sojourn states.
(2) Construct a local bandit model at state $x_t$; that is, the decision-maker must
choose between some number (the number of admissible actions) of sojourn
reward processes; this is a semi-Markov multi-armed bandit problem.
(3) Compute the Gittins indices for the local bandit model.
( 4) Take the action with the largest index.
(5) Observe a transition to $x_{t+1}$ in the underlying MDP.
(6) Update $\langle \alpha, \beta \rangle$ accordingly (Bayes update).
(7) Go to step (1).
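Read as code, steps (0) through (7) form the receding-horizon loop below. This is only a schematic outline under our own naming: solve_mean_mdp and local_bandit_index are stubs standing in for the computations described in the text (and bayes_update refers to the conjugate update sketched earlier); none of them are library calls.

```python
import numpy as np

# Placeholder stand-ins for steps (1)-(3); real versions would implement
# the mean-process dynamic program and the index calculation of
# [Varaiya et al., 1985].
def solve_mean_mdp(hyper, rewards, gamma):
    raise NotImplementedError

def local_bandit_index(x, action, hyper, policy, gamma):
    raise NotImplementedError

def local_bandit_learning(env, hyper, gamma, n_steps, bayes_update):
    x = env.reset()
    for _ in range(n_steps):
        policy = solve_mean_mdp(hyper, env.rewards, gamma)          # (1)
        indices = [local_bandit_index(x, a, hyper, policy, gamma)   # (2)-(3)
                   for a in env.actions]
        a = int(np.argmax(indices))        # (4) largest index suggests action
        x_next = env.step(a)               # (5) observe a transition
        bayes_update(hyper, x, a, x_next)  # (6) conjugate Bayes update
        x = x_next                         # (7) repeat from the new state
```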
The local semi-Markov bandit process associated with state 1 / action 1 for
the 2-state example MDP of Figure 1 is shown in Figure 2. The sufficient statistics
for $p_{11}^1$ are denoted by $(\alpha, \beta)$, and $\alpha/(\alpha+\beta)$ and $\beta/(\alpha+\beta)$ are the expected probabilities for transition into state 1 and state 2, respectively. $\tau$ and $R_{121}$ are random variables signifying sojourn time and reward.
The goal is to compute the index for the root information-state labeled $\langle \alpha, \beta \rangle$ and
to compare it with that computed for a similar diagram associated with the bandit
Figure 2: A local semi-Markov bandit process associated with state 1 / action
1 for the 2-state example MDP of Figure 1.
process for taking action 2. The approximately-optimal action is suggested by the
process having the largest root-node index. Indices for semi-Markov bandits can be
obtained by considering the bandits as Markov, but performing the optimization
in Equation 1 over a restricted set of stopping times. The algorithm suggested
in [Tsitsiklis, 1993], which in turn makes use of methods described in [Varaiya et
al., 1985], proceeds by "reducing" the graph through a sequence of node-excisions
and modifications of rewards and transition probabilities; [Duff, 1997] details how
these steps may be realized for the special semi-Markov processes associated with
problems of optimal learning.
5
Discussion
In summary, this paper has presented the problem of optimal learning, in which
a decision-maker is obliged to enjoy or endure the consequences of its actions in
quest of the asymptotically-learned optimal policy. A Bayesian formulation of the
problem leads to a clear concept of a solution whose computation, however, appears
to entail an examination of an intractably-large number of hyperstates. This paper has suggested extending the Gittins index approach (which applies with great
power and elegance to the special class of multi-armed bandit processes) to general
adaptive MDP's. The hope has been that if certain salient features of the value
of information could be captured, even approximately, then one could be led to a
reasonable method for avoiding certain defects of certainty-equivalence approaches
(problems with identifiability, "metastability"). Obviously, positive evidence, in the
form of empirical results from simulation experiments, would lend support to these
ideas; work along these lines is underway.
Local bandit approximation is but one approximate computational approach for
problems of optimal learning and dual control. Most prominent in the literature of
control theory is the "wide-sense" approach of [Bar-Shalom & Tse, 1976], which utilizes local quadratic approximations about nominal state/control trajectories. For
certain problems, this method has demonstrated superior performance compared
to a certainty-equivalence approach, but it is computationally very intensive and
unwieldy, particularly for problems with controller dimension greater than one.
One could revert to the view of the bandit problem, or general adaptive MDP,
as simply a very large MDP defined over hyperstates, and then consider a somewhat direct approach in which one performs approximate dynamic programming
with function approximation over this domain; details of function-approximation,
feature-selection, and "training" all become important design issues. [Duff, 1997]
provides further discussion of these topics, as well as a consideration of actionelimination procedures [MacQueen, 1966] that could result in substantial pruning
of the hyperstate decision tree.
Acknowledgements
This research was supported, in part, by the National Science Foundation under
grant ECS-9214866 to Andrew G. Barto.
References
Bar-Shalom, Y. & Tse, E. (1976) Caution, probing and the value of information in
the control of uncertain systems, Ann. Econ. Soc. Meas. 5:323-337.
Bellman, R. & Kalaba, R. (1959) On adaptive control processes. IRE Trans., 4:1-9.
Borkar, V. & Varaiya, P.P. (1979) Adaptive control of Markov chains I: finite parameter set. IEEE Trans. Auto. Control 24:953-958.
Cozzolino, J.M., Gonzalez-Zubieta, R., & Miller, R.L. (1965) Markov decision processes with uncertain transition probabilities. Tech. Rpt. 11, Operations Research
Center, MIT.
Dayan, P. & Sejnowski, T. (1996) Exploration Bonuses and Dual Control. Machine
Learning (in press).
Duff, M.O. (1995) Q-learning for bandit problems. In Machine Learning: Proceedings of the Twelfth International Conference on Machine Learning, pp. 209-217.
Duff, M.O. (1997) Approximate computational methods for optimal learning and
dual control. Technical Report, Department of Computer Science, Univ. of Massachusetts, Amherst.
Fel'dbaum, A. (1965) Optimal Control Systems, Academic Press.
Gittins, J.C. & Jones, D. (1979) Bandit processes and dynamic allocation indices
(with discussion). J. R. Statist. Soc. B 41:148-177.
Katehakis, M.H. & Veinott, A.F. (1987) The multi-armed bandit problem: decomposition and computation. Math. OR 12:262-268.
MacQueen, J. (1966). A modified dynamic programming method for Markov decision problems, J. Math. Anal. Appl., 14:38-43.
Robinson, D.R. (1981) Algorithms for evaluating the dynamic allocation index.
Research Report No. 80/DRR/4, Manchester-Sheffield School of Probability and
Statistics.
Tsitsiklis, J. (1993) A short proof of the Gittins index theorem. Proc. 32nd Conf.
Dec. and Control: 389-390.
Varaiya, P.P., Walrand, J.C., & Buyukkoc, C. (1985) Extensions of the multiarmed
bandit problem: the discounted case. IEEE Trans. Auto. Control 30(5):426-439.
Watkins, C. (1989) Learning from Delayed Rewards. Ph.D. Thesis, Cambridge University.
255 | 1,231 | Promoting Poor Features to Supervisors:
Some Inputs Work Better as Outputs
Rich Caruana
JPRC and
Carnegie Mellon University
Pittsburgh, PA 15213
caruana@cs.cmu.edu
Virginia R. de Sa
Sloan Center for Theoretical Neurobiology and
W . M. Keck Center for Integrative Neuroscience
University of California, San Francisco CA 94143
desa@phy.ucsf.edu
Abstract
In supervised learning there is usually a clear distinction between
inputs and outputs - inputs are what you will measure, outputs
are what you will predict from those measurements. This paper
shows that the distinction between inputs and outputs is not this
simple. Some features are more useful as extra outputs than as
inputs. By using a feature as an output we get more than just the
case values but can. learn a mapping from the other inputs to that
feature. For many features this mapping may be more useful than
the feature value itself. We present two regression problems and
one classification problem where performance improves if features
that could have been used as inputs are used as extra outputs
instead. This result is surprising since a feature used as an output
is not used during testing.
1
Introduction
The goal in supervised learning is to learn functions that map inputs to outputs
with high predictive accuracy. The standard practice in neural nets is to use all
features that will be available for the test cases as inputs, and use as outputs only
the features to be predicted.
Extra features available for training cases that won't be available during testing
can be used as extra outputs that often benefit the original output[2][5]. Other
ways of adding information to supervised learning through outputs include hints[1],
tangent-prop[7], and EBNN[8]. In unsupervised learning it has been shown that
inputs arising from different modalities can provide supervisory signals (outputs for
the other modality) to each other and thus aid learning [3][6].
If outputs are so useful, and since any input could be used as an output, would some
inputs be more useful as outputs? Yes. In this paper we show that in supervised
backpropagation learning, some features are more useful as outputs than as inputs.
This is surprising since using a feature as an output only extracts information from
it during training; during testing it is not used.
This paper uses the following terms: The Main Task is the output to be learned.
The goal is to improve performance on the Main Task. Regular Inputs are the
features provided as inputs in all experiments. Extra Inputs (Extra Outputs) are
the extra features when used as inputs (outputs). STD is standard backpropagation
using the Regular Inputs as inputs and the Main Task as outputs. STD+IN uses
the Extra Features as Extra Inputs to learn the Main Task. STD+OUT uses the
Extra Features, but as Extra Outputs learned in parallel with the Main Task, using
just the Regular Inputs as inputs.
2
Poorly Correlated Features
This section presents a simple synthetic problem where it is easy to see why using a
feature as an extra output is better than using that same feature as an extra input.
Consider the following function:
F1(A,B) = SIGMOID(A+B),   where SIGMOID(x) = 1/(1 + e^(-x))
The STD net in Figure 1a has 20 inputs, 16 hidden units, and one output. We use
backpropagation on this net to learn F1(). A and B are uniformly sampled from the interval [-5,5]. The network's input is binary codes for A and B. The range [-5,5] is discretized into 2^10 bins and the binary code of the resulting bin number is used as
the input coding. The first 10 input units receive the code for A and the second 10
that for B. The target output is the unary real (unencoded) value F1(A,B).
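A minimal sketch of one plausible reading of this setup follows; the paper does not specify the bin edges or bit ordering, so those details (and all names) are our assumptions.

```python
import numpy as np

def binary_code(v, lo=-5.0, hi=5.0, bits=10):
    """Discretize v in [lo, hi] into 2**bits bins; return the bin's bit vector."""
    b = min(int((v - lo) / (hi - lo) * 2**bits), 2**bits - 1)
    return [(b >> k) & 1 for k in reversed(range(bits))]

def make_f1_data(n, rng):
    A = rng.uniform(-5, 5, n)
    B = rng.uniform(-5, 5, n)
    X = np.array([binary_code(a) + binary_code(b) for a, b in zip(A, B)])
    y = 1.0 / (1.0 + np.exp(-(A + B)))      # unencoded target F1(A, B)
    return X, y

X, y = make_f1_data(50, np.random.default_rng(0))
print(X.shape, y.shape)                      # (50, 20) (50,)
```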
[Figure 1 diagrams, recoverable labels only: (a) STD, binary inputs coding for A and B (Regular Inputs) feeding a fully connected hidden layer with one Main Output; (b) STD+IN, the same net with an Extra Input; (c) STD+OUT, the same net with a Main Output and an Extra Output sharing the hidden layer.]
Figure 1: Three Neural Net Architectures for Learning F1
Backpropagation is done with per-epoch updating and early stopping. Each trial
uses new random training, halt, and test sets. Training sets contain 50 patterns.
This is enough data to get good performance, but not so much that there is not
room for improvement. We use large halt and test sets - 1000 cases each - to
minimize the effect of sampling error in the measured performances. Larger halt
and test sets yield similar results. We use this methodology for all the experiments
in this paper.
Table 1 shows the mean performance of 50 trials of STD Net 1a with backpropagation and early stopping.
Now consider a similar function:
F2(A,B) = SIGMOID(A-B).
Suppose, in addition to the 10-bit codings for A and B, you are given the unencoded
unary value F2(A,B) as an extra input feature. Will this extra input help you learn
F1(A,B) better? Probably not. A+B and A-B do not correlate for random A and B.
The correlation coefficient for our training sets is typically less than ±0.01. Because
Table 1: Mean Test Set Root-Mean-Squared-Error on F1
Network  | Trials | Mean RMSE | Significance
STD      |   50   |  0.0648   |      -
STD+IN   |   50   |  0.0647   |      ns
STD+OUT  |   50   |  0.0631   |    0.013*
of this, knowing the value of F2(A,B) does not tell you much about the target value
F1(A,B) (and vice-versa).
F1(A,B)'s poor correlation with F2(A,B) hurts backprop's ability to learn to use
F2(A,B) to predict F1(A,B). The STD+IN net in Figure 1b has 21 inputs: 20
for the binary codes for A and B, and an extra input for F2(A,B). The 2nd line in
Table 1 shows the performance of STD+IN for the same training, halting, and test
sets used by STD; the only difference is that there is an extra input feature in the
data sets for STD+IN. Note that the performance of STD+IN is not significantly different from that of STD: the extra information contained in the feature F2(A,B)
does not help backpropagation learn F1(A,B) when used as an extra input.
If F2(A,B) does not help backpropagation learn F1(A,B) when used as an input,
should we ignore it altogether? No. F1(A,B) and F2(A,B) are strongly related.
They both benefit from decoding the binary input encoding to compute the subfeatures A and B. If, instead of using F2(A,B) as an extra input, it is used as an extra
output trained with backpropagation, it will bias the shared hidden layer to learn
A and B better, and this will help the net better learn to predict F1(A,B).
Figure 1c shows a net with 20 inputs for A and B, and 2 outputs, one for F1(A,B)
and one for F2(A,B). Error is back-propagated from both outputs, but the performance of this net is evaluated only on the output F1(A,B) and early stopping
is done using only the performance of this output. The 3rd line in Table 1 shows
the mean performance of 50 trials of this multitask net on F1(A,B). Using F2(A,B)
as an extra output significantly improves performance on F1(A,B). Using the extra feature as an extra output is better than using it as an extra input. By using
F2(A,B) as an output we make use of more than just the individual output values
F2(A,B) but learn to extract information about the function mapping the inputs to
F2(A,B). This is a key difference between using features as inputs and outputs.
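The mechanism is simply that error from both output units is backpropagated into the shared hidden layer. The single-pattern sketch below is our own minimal code, not the paper's implementation (which uses per-epoch updates, with early stopping monitored on the Main Output only); it shows how the Extra Output shapes the shared weights W1 even though that output is ignored at test time.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def init_net(n_in=20, n_hid=16, seed=0):
    rng = np.random.default_rng(seed)
    return {"W1": rng.normal(0, 0.1, (n_in, n_hid)),   # shared hidden layer
            "w_main": rng.normal(0, 0.1, n_hid),       # Main Task output
            "w_extra": rng.normal(0, 0.1, n_hid)}      # Extra Output

def train_step(p, x, t_main, t_extra, lr=0.1):
    h = sigmoid(x @ p["W1"])
    y_main = sigmoid(h @ p["w_main"])
    y_extra = sigmoid(h @ p["w_extra"])
    # Squared-error deltas from *both* outputs flow into the shared layer,
    # so the extra target biases the hidden representation.
    d_main = (y_main - t_main) * y_main * (1 - y_main)
    d_extra = (y_extra - t_extra) * y_extra * (1 - y_extra)
    d_hid = (d_main * p["w_main"] + d_extra * p["w_extra"]) * h * (1 - h)
    p["w_main"] -= lr * d_main * h
    p["w_extra"] -= lr * d_extra * h
    p["W1"] -= lr * np.outer(x, d_hid)
    return y_main     # only this output is evaluated and early-stopped
```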
The increased performance of STD+OUT over STD and STD+IN is not due to
STD+OUT reducing the capacity available for the main task F1(). All three nets
- STD, STD+IN, STD+OUT - perform better with more hidden units. (Because
larger capacity favors STD+OUT over STD and STD+IN, we report results for the
moderate sized 16 hidden unit nets to be fair to STD and STD+IN.)
3
Noisy Features
This section presents two problems where extra features are more useful as inputs
if they have low noise, but which become more useful as outputs as their noise
increases. Because the extra features are ideal features for these problems, this
demonstrates that what we observed in the previous section does not depend on
the extra features being contrived so that their correlation with the main task is
low - features with high correlation can still be more useful as outputs.
Once again, consider the main task from the previous section:
F1(A,B) = SIGMOID(A+B)
Now consider these extra features:
EF(A) = A + NOISE_SCALE * Noise1
EF(B) = B + NOISE_SCALE * Noise2
Noise1 and Noise2 are uniformly sampled on [-1,1]. If NOISE_SCALE is not too large, EF(A) and EF(B) are excellent input features for learning F1(A,B) because the net can avoid learning to decode the binary input representations. However, as NOISE_SCALE increases, EF(A) and EF(B) become less useful and it is better for the net to learn F1(A,B) from the binary inputs for A and B.
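Constructing the noisy features is a one-liner per feature; the sketch below (illustrative names only) produces the values that are appended to the 20 binary inputs for STD+IN, or supplied as two additional training targets for STD+OUT.

```python
import numpy as np

def noisy_extra_features(A, B, noise_scale, rng):
    """EF(A), EF(B): the true subfeatures plus uniform noise on [-1, 1]."""
    ef_a = A + noise_scale * rng.uniform(-1, 1, A.shape)
    ef_b = B + noise_scale * rng.uniform(-1, 1, B.shape)
    return ef_a, ef_b
```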
As before, we try using the extra features as either extra inputs or as extra outputs.
Again, the training sets have 50 patterns, and the halt and test sets have 1000
patterns. Unlike before, however, we ran preliminary tests to find the best net size.
The results showed 256 hidden units to be about optimal for the STD nets with
early stopping on this problem.
[Figure 2 plots, recoverable labels only: test-set RMSE against Feature Noise Scale (0.0 to 10.0) for "STD+IN", "STD+OUT", and "STD"; left panel for F1, right panel for F3.]
Figure 2: STD, STD+IN, and STD+OUT on F1 (left) and F3 (right)
Figure 2a plots the average performance of 50 trials of STD+IN and STD+OUT
as NOISE...sCALE varies from 0.0 to 10.0. The performance of STD, which does
not use EF(A) and EF(B), is shown as a horizontal line; it is independent of
NOISE_SCALE. Let's first examine the results of STD+IN which uses EF(A) and
EF(B) as extra inputs. As expected, when the noise is small, using EF(A) and
EF(B) as extra inputs improves performance considerably. As the noise increases,
however, this improvement decreases. Eventually there is so much noise in EF(A)
and EF(B) that they no longer help the net if used as inputs. And, if the noise
increases further, using EF(A) and EF(B) as extra inputs actually hurts. Finally,
as the noise gets very large, performance asymptotes back towards the baseline.
Using EF(A) and EF(B) as extra outputs yields quite different results. When the
noise is low, they do not help as much as they did as extra inputs. As the noise
increases, however, at some point they help more as extra outputs than as extra
inputs, and never hurt performance the way the noisy extra inputs did.
Why does noise cause STD+IN to perform worse than STD? With a finite training
sample, correlations between noisy inputs and the main task cause the network to
use the noisy inputs. To the extent that the main task is a function of the noisy
inputs, it must pass the noise to the output, causing the output to be noisy. Also,
as the net comes to depend on the noisy inputs, it depends less on the noise-free
binary inputs. The noisy inputs explain away some of the training signal, so less is
available to encourage learning to decode the binary inputs.
Why does noise not hurt STD+OUT as much as it hurts STD+IN? As outputs, the
net is learning the mapping from the regular inputs to EF(A) and EF(B). Early
in training, the net learns to interpolate through the noise and thus learns smooth
functions for EF(A) and EF(B) that have reasonable fidelity to the true mapping.
This makes learning less sensitive to the noise added to these features.
3.1 Another Problem
F1(A,B) is only mildly nonlinear because A and B do not go far into the tails of
the SIGMOID. Do the results depend on this smoothness? To check, we modified
F1(A,B) to make it more nonlinear. Consider this function:
F3(A,B) = SIGMOID(EXPAND(SIGMOID(A) - SIGMOID(B)))
where EXPAND scales the inputs from (SIGMOID(A)-SIGMOID(B)) to the range
[-12.5,12.5], and A and B are drawn from [-12.5,12.5]. F3(A,B) is significantly more nonlinear than F1(A,B) because the expanded scales of A and B, and expanding
the difference to [-12.5 ,12.5] before passing it through another sigmoid, cause much
of the data to fall in the tails of either the inner or outer sigmoids.
Consider these extra features:
EF(A) = SIGMOID(A) + NOISE_SCALE * Noise1
EF(B) = SIGMOID(B) + NOISE_SCALE * Noise2
where Noises are sampled as before. Figure 2B shows the results of using extra
features EF(A) and EF(B) as extra inputs or as extra outputs. The trend is similar
to that in Figure 2A but the benefit of STD+OUT is even larger at low noise. The
data for 2a and 2b are generated using different seeds, 2a used steepest descent and
Mitre's Aspirin simulator, 2b used conjugate gradient and Toronto's Xerion simulator, and F1 and F3 do not behave as similarly as their definitions might suggest.
The similarity between the two graphs is due to the ubiquity of the phenomena, not
to some small detail of the test functions or how the experiments were run.
4
A Classification Problem
This section presents a problem that combines feature correlation (Section 2) and feature noise (Section 3) into one problem. Consider the 1-D classification problem,
shown in Figure 3, of separating two Gaussian distributions with means 0 and 1,
and standard deviations of 1. This problem is simple to learn if the 1-D input
is coded as a single, continuous input but can be made harder by embedding it
non-linearly in a higher dimensional space. Consider encoding input values defined
on [0.0, 15.0] using an interpolated 4-D Gray code (GC); integer values are mapped
to a 4-D binary Gray code and intervening non-integers are mapped linearly to
intervening 4-D vectors between the binary Gray codes for the bounding integers.
As the Gray code flips only one bit between neighboring integers this involves simply
interpolating along the one dimension in the 4-D unit cube that changes. Thus 3.4 is
encoded as .4(GC(4) - GC(3)) + GC(3).
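A sketch of this encoder is below, assuming the standard reflected binary Gray code (the paper does not state which 4-bit Gray code it uses); since GC(n) and GC(n+1) differ in exactly one bit, the interpolation moves along a single edge of the 4-D unit cube.

```python
import numpy as np

def gray_code(n, bits=4):
    """Reflected binary Gray code of integer n as a 0/1 vector."""
    g = n ^ (n >> 1)
    return np.array([(g >> k) & 1 for k in reversed(range(bits))], float)

def interp_gray(x, bits=4):
    """Interpolated Gray coding: e.g. 3.4 -> 0.4*(GC(4) - GC(3)) + GC(3)."""
    lo = int(np.floor(x))
    frac = x - lo
    if frac == 0.0:
        return gray_code(lo, bits)
    return frac * (gray_code(lo + 1, bits) - gray_code(lo, bits)) + gray_code(lo, bits)

print(interp_gray(3.4))   # [0.  0.4 1.  0. ] with this bit ordering
```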
The extra feature is a 1-D value correlated (with correlation p) with the original
unencoded regular input, X. The extra feature is drawn from a Gaussian distribution
with mean p(X - 0.5) + 0.5 and standard deviation √(1 - p²). Examples of the
distributions of the unencoded original dimension and the extra feature for various
correlations are shown in Figure 3. This problem has been carefully constructed so
that the optimal classification boundary does not change as p varies.
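Sampling one case then looks as follows (a minimal sketch with our own names); as the text states, the construction leaves the optimal classification boundary unchanged as p varies.

```python
import numpy as np

def sample_case(p, rng):
    """One training case: 1-D input X, correlated extra feature, class label."""
    label = int(rng.integers(2))
    X = rng.normal(loc=float(label), scale=1.0)   # class means 0 and 1, std 1
    extra = rng.normal(p * (X - 0.5) + 0.5,       # correlation p with X
                       np.sqrt(1.0 - p**2))
    return X, extra, label

rng = np.random.default_rng(0)
print(sample_case(0.75, rng))
```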
Consider the extreme cases. At p = 1, the extra feature is exactly an unencoded
[Figure 3 plots, recoverable labels only: the extra feature (y-axis) against the unencoded input X (x-axis) for p = 0, p = 0.5, and p = 1.]
Figure 3: Two Overlapped Gaussian Classes (left), and An Extra Feature (y-axis)
Correlated Different Amounts (p = 0: no correlation, p = 1: perfect correlation)
With the unencoded version of the Regular Input (x-axis)
version of the regular input. A STD+IN net using this feature as an extra input
could ignore the encoded inputs and solve the problem using this feature alone. An
STD+OUT net using this extra feature as an extra output would have its hidden
layer biased towards representations that decode the Gray code, which is useful to
the main classification task. At the other extreme (p = 0), we expect nets using the
extra feature to learn no better than one using just the regular inputs because there
is no useful information provided by the uncorrelated extra feature. The interesting
case is between the two extremes. We can imagine a situation where as an output,
the extra feature is still able to help STD+OUT by guiding it to decode the Gray
code but does not help STD+IN because of the high level of noise.
[Figure 4 plot, recoverable labels only: performance (y-axis, roughly 0.60 to 0.65) against Correlation of Extra Feature (0.00 to 1.00) for "STD+IN", "STD+OUT", and "STD".]
Figure 4: STD, STD+IN, and STD+OUT vs. p on the Classification Problem
The class output unit uses a sigmoid transfer function and cross-entropy error measure. The output unit for the correlated extra feature uses a linear transfer function
and squared error measure. Figure 4 shows the average performance of 50 trials of
STD, STD+IN, and STD+OUT as a function of p using networks with 20 hidden
units, 70 training patterns, and halt and test sets of 1000 patterns each. As in the
previous section, STD+IN is much more sensitive to changes in the extra feature
than STD+OUT, so that by p = 0.75 the curves cross and for p less than 0.75, the
dimension is actually more useful as an output dimension than an extra input.
5
Discussion
Are the benefits of using some features as extra outputs instead of as inputs large
enough to be interesting? Yes. Using only 1 or 2 features as extra outputs instead of
as inputs reduced error 2.5% on the problem in Section 2, more than 5% in regions of the graphs in Section 3, and more than 2.5% in regions of the graph in Section 4. In domains where many features might be moved, the net effect may be larger.
Are some features more useful as outputs than as inputs only on contrived problems?
No. In this paper we used the simplest problems we could devise where a few
features worked better as outputs than as inputs. But our findings explain a result
we noted previously, but did not understand, when applying multitask learning to
pneumonia risk prediction[4]. There, we had the choice of using lab tests that would
be unavailable on future patients as extra outputs, or using poor (i.e., noisy) predictions of them as extra inputs. Using the lab tests as extra outputs worked
better. If one compares the zero noise points for STD+OUT (there's no noise in a
feature when used as an output because we use the values in the training set, not
predicted values) with the high noise points for STD+IN in the graphs in Section 3, it is easy to see why STD+OUT could perform much better.
This paper shows that the benefit of using a feature as an extra output is different
from the benefit of using that feature as an input. As an input, the net has access
to the values on the training and test cases to use for prediction. As an output,
however, the net is instead biased to learn a mapping from the other inputs in the
training set to that output. From the graphs it is clear that some features help
when used either as an input, or as an output. Given that the benefit of using a
feature as an extra output is different from that of using it as an input, can we
get both benefits? Our early results with techniques that reap both benefits by
allowing some features to be used simultaneously as both inputs and outputs while
preventing learning direct feed through identity mappings are promising.
Acknowledgements
R. Caruana was supported in part by ARPA grant F33615-93-1-1330, NSF grant
BES-9315428, and Agency for Health Care Policy and Research grant HS06468. V.
de Sa was supported by postdoctoral fellowships from NSERC (Canada) and the
Sloan Foundation. We thank Mitre Group for the Aspirin/Migraines Simulator and
The University of Toronto for the Xerion Simulator.
References
[1] Y.S. Abu-Mostafa, "Learning From Hints in Neural Networks," Journal of Complexity
6:2, pp. 192-198, 1989.
[2] S. Baluja and D.A. Pomerleau, "Using the Representation in a Neural Network's
Hidden Layer for Task-Specific Focus of Attention". In C. Mellish (ed.) The International Joint Conference on Artificial Intelligence 1995 (IJCAI-95): Montreal, Canada.
IJCAII & Morgan Kaufmann. San Mateo, CA. pp 133-139, 1995.
[3] S. Becker and G. E. Hinton, "A self-organizing neural network that discovers surfaces
in random-dot stereograms," Nature 355 pp. 161-163, 1992.
[4] R. Caruana, S. Baluja, and T. Mitchell, "Using the Future to Sort Out the Present:
Rankprop and Multitask Learning for Pneumnia Risk Prediction," Advances in Neural
Information Processing Systems 8, 1996.
[5] R. Caruana, "Learning Many Related Tasks at the Same Time With Backpropagation," Advances in Neural Information Processing Systems 7, 1995.
[6] V. R. de Sa, "Learning classification with unlabeled data," Advances in Neural Information Processing Systems 6, pp. 112-119, Morgan Kaufmann, 1994.
[7] P. Simard, B. Victorri, Y. Le Cun, and J. Denker, "Tangent prop - a formalism for
specifying selected invariances in an adaptive network," Advances in Neural Information Processing Systems 4, pp. 895-903, Morgan Kaufmann, 1992.
[8] S. Thrun and T. Mitchell, "Learning One More Thing," CMU TR: CS-94-184, 1994.
256 | 1,232 | Gaussian Processes for Bayesian
Classification via Hybrid Monte Carlo
David Barber and Christopher K. I. Williams
Neural Computing Research Group
Department of Computer Science and Applied Mathematics
Aston University, Birmingham B4 7ET, UK
d.barber@aston.ac.uk
c.k.i.williams@aston.ac.uk
Abstract
The full Bayesian method for applying neural networks to a prediction problem is to set up the prior/hyperprior structure for the
net and then perform the necessary integrals. However, these integrals are not tractable analytically, and Markov Chain Monte Carlo
(MCMC) methods are slow, especially if the parameter space is
high-dimensional. Using Gaussian processes we can approximate
the weight space integral analytically, so that only a small number
of hyperparameters need be integrated over by MCMC methods.
We have applied this idea to classification problems, obtaining excellent results on the real-world problems investigated so far.
1
INTRODUCTION
To make predictions based on a set of training data, fundamentally we need to
combine our prior beliefs about possible predictive functions with the data at hand.
In the Bayesian approach to neural networks a prior on the weights in the net induces
a prior distribution over functions. This leads naturally to the idea of specifying our
beliefs about functions more directly. Gaussian Processes (GPs) achieve just that,
being examples of stochastic process priors over functions that allow the efficient
computation of predictions. It is also possible to show that a large class of neural
network models converge to GPs in the limit of an infinite number of hidden units
(Neal, 1996). In previous work (Williams and Rasmussen, 1996) we have applied GP
priors over functions to the problem of predicting a real-valued output, and found
that the method has comparable performance to other state-of-the-art methods.
This paper extends the use of GP priors to classification problems.
The GPs we use have a number of adjustable hyperparameters that specify quantities like the length scale over which smoothing should take place. Rather than
optimizing these parameters (e.g. by maximum likelihood or cross-validation methods) we place priors over them and use a Markov Chain Monte Carlo (MCMC) method to obtain a sample from the posterior which is then used for making predictions. An important advantage of using GPs rather than neural networks arises
from the fact that the GPs are characterized by a few (say ten or twenty) hyperparameters, while the networks have a similar number of hyperparameters but many
(e.g. hundreds) of weights as well, so that MCMC integrations for the networks are
much more difficult.
We first briefly review the regression framework as our strategy will be to transform
the classification problem into a corresponding regression problem by dealing with
the input values to the logistic transfer function. In section 2.1 we show how to
use Gaussian processes for classification when the hyperparameters are fixed , and
then describe the integration over hyperparameters in section 2.3. Results of our
method as applied to some well known classification problems are given in section
3, followed by a brief discussion and directions for future research.
1.1
Gaussian Processes for regression
We outline the GP method as applied to the prediction of a real valued output
$y_* = y(x_*)$ for a new input value $x_*$, given a set of training data $\mathcal{D} = \{(x_i, t_i),\ i = 1 \ldots n\}$.
Given a set of inputs $x_*, x_1, \ldots, x_n$, a GP allows us to specify how correlated we expect their corresponding outputs $y = (y(x_1), y(x_2), \ldots, y(x_n))$ to be. We denote this prior over functions as $P(y)$, and similarly, $P(y_*, y)$ for the joint distribution including $y_*$. If we also specify $P(t|y)$, the probability of observing the particular values $t = (t_1, \ldots, t_n)^T$ given actual values $y$ (i.e. a noise model), then
$$P(y_*|t) = \int P(y_*, y|t)\, dy = \frac{1}{P(t)} \int P(y_*, y) P(t|y)\, dy \qquad (1)$$
Hence the predictive distribution for $y_*$ is found from the marginalization of the product of the prior and the noise model. If $P(t|y)$ and $P(y_*, y)$ are Gaussian then $P(y_*|t)$ is a Gaussian whose mean and variance can be calculated using matrix computations involving matrices of size $n \times n$. Specifying $P(y_*, y)$ to be a multidimensional Gaussian (for all values of $n$ and placements of the points $x_*, x_1, \ldots, x_n$)
means that the prior over functions is a GP. More formally, a stochastic process is a collection of random variables $\{Y(x) \mid x \in X\}$ indexed by a set $X$. In our case $X$ will be the input space with dimension $d$, the number of inputs. A GP is a stochastic process which can be fully specified by its mean function $\mu(x) = E[Y(x)]$ and its covariance function $C(x, x') = E[(Y(x) - \mu(x))(Y(x') - \mu(x'))]$; any finite set of $Y$-variables will have a joint multivariate Gaussian distribution. Below we consider GPs which have $\mu(x) \equiv 0$.
2
GAUSSIAN PROCESSES FOR CLASSIFICATION
For simplicity of exposition, we will present our method as applied to two class
problems as the extension to multiple classes is straightforward.
By using the logistic transfer function $\sigma$ to produce an output which can be interpreted as $\pi(x)$, the probability of the input $x$ belonging to class 1, the job of specifying a prior over functions $\pi$ can be transformed into that of specifying a prior over the input to the transfer function. We call the input to the transfer function the activation, and denote it by $y$, with $\pi(x) = \sigma(y(x))$. For input $x_i$, we will denote the corresponding probability and activation by $\pi_i$ and $y_i$ respectively.
D. Barber and C. K. l Williams
342
To make predictions when using fixed hyperparameters we would like to compute $\bar{\pi}_* = \int \pi_* P(\pi_*|t)\, d\pi_*$, which requires us to find $P(\pi_*|t) = P(\pi(x_*)|t)$ for a new input $x_*$. This can be done by finding the distribution $P(y_*|t)$ ($y_*$ is the activation of $\pi_*$) and then using the appropriate Jacobian to transform the distribution. Formally the equations for obtaining $P(y_*|t)$ are identical to equation 1. However, even if we use a GP prior so that $P(y_*, y)$ is Gaussian, the usual expression for $P(t|y) = \prod_i \pi_i^{t_i}(1 - \pi_i)^{1-t_i}$ for classification data (where the $t$'s take on values of 0 or 1), means that the marginalization to obtain $P(y_*|t)$ is no longer analytically tractable.
We will employ Laplace's approximation, i.e. we shall approximate the integrand $P(y_*, y|t, \theta)$ by a Gaussian distribution centred at a maximum of this function with respect to $y_*, y$, with an inverse covariance matrix given by $-\nabla\nabla \log P(y_*, y|t, \theta)$. The necessary integrations (marginalization) can then be carried out analytically (see, e.g. Green and Silverman (1994) §5.3) and we provide a derivation in the
following section.
2.1
Maximizing $P(y_*, y|t)$
Let $y_+$ denote $(y_*, y)$, the complete set of activations. By Bayes' theorem $\log P(y_+|t) = \log P(t|y) + \log P(y_+) - \log P(t)$, and let $\Psi_+ = \log P(t|y) + \log P(y_+)$. As $P(t)$ does not depend on $y_+$ (it is just a normalizing factor), the maximum of $P(y_+|t)$ is found by maximizing $\Psi_+$ with respect to $y_+$. We define $\Psi$ similarly in relation to $P(y|t)$. Using $\log P(t_i|y_i) = t_i y_i - \log(1 + e^{y_i})$, we obtain
$$\Psi_+ = t^T y - \sum_{i=1}^{n} \log(1 + e^{y_i}) - \frac{1}{2} y_+^T K_+^{-1} y_+ - \frac{1}{2}\log|K_+| - \frac{n+1}{2}\log 2\pi \qquad (2)$$
$$\Psi = t^T y - \sum_{i=1}^{n} \log(1 + e^{y_i}) - \frac{1}{2} y^T K^{-1} y - \frac{1}{2}\log|K| - \frac{n}{2}\log 2\pi \qquad (3)$$
where $K_+$ is the covariance matrix of the GP evaluated at $x_1, \ldots, x_n, x_*$. $K_+$ can be partitioned in terms of an $n \times n$ matrix $K$, an $n \times 1$ vector $k$ and a scalar $k_*$, viz.
$$K_+ = \begin{pmatrix} K & k \\ k^T & k_* \end{pmatrix} \qquad (4)$$
As $y_*$ only enters into equation 2 in the quadratic prior term and has no data point associated with it, maximizing $\Psi_+$ with respect to $y_+$ can be achieved by first maximizing $\Psi$ with respect to $y$ and then doing the further quadratic optimization to determine the posterior mean $\bar{y}_*$. To find a maximum of $\Psi$ we use the Newton-Raphson (or Fisher scoring) iteration $y^{new} = y - (\nabla\nabla\Psi)^{-1}\nabla\Psi$. Differentiating equation 3 with respect to $y$ we find
$$\nabla\Psi = (t - \pi) - K^{-1} y \qquad (5)$$
$$\nabla\nabla\Psi = -K^{-1} - W \qquad (6)$$
where $W = \mathrm{diag}(\pi_1(1-\pi_1), \ldots, \pi_n(1-\pi_n))$, which gives the iterative equation¹
$$y^{new} = K(I + WK)^{-1}(Wy + (t - \pi)) \qquad (7)$$
¹The complexity of calculating each iteration using standard matrix methods is O(n³).
In our implementation, however, we use conjugate gradient methods to avoid explicitly
inverting matrices. In addition, by using the previous iterate y as an initial guess for the
conjugate gradient solution to equation 7, the iterates are computed an order of magnitude
faster than using standard algorithms.
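A dense-algebra sketch of this mode-finding step is given below (our own minimal code; as the footnote notes, the paper's implementation uses conjugate-gradient solves with warm starts rather than the explicit solve shown here, and a fixed iteration count stands in for the convergence test on W).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def find_mode(K, t, n_iter=20):
    """Newton-Raphson iterates of equation 7:
    y_new = K (I + W K)^{-1} (W y + (t - pi))."""
    n = len(t)
    y = np.zeros(n)
    for _ in range(n_iter):
        pi = sigmoid(y)
        W = np.diag(pi * (1.0 - pi))
        y = K @ np.linalg.solve(np.eye(n) + W @ K, W @ y + (t - pi))
    return y

def predict_mean_activation(k_star, t, y_bar):
    """Posterior mean at a test point: y*_bar = k^T (t - pi_bar)."""
    return k_star @ (t - sigmoid(y_bar))
```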
Given a converged solution $\bar{y}$ for $y$, $\bar{y}_*$ can easily be found using $\bar{y}_* = k^T K^{-1}\bar{y} = k^T(t - \bar{\pi})$. $\mathrm{var}(y_*)$ is given by $[(K_+^{-1} + W_+)^{-1}]_{(n+1)(n+1)}$, where $W_+$ is the $W$ matrix with a zero appended in the $(n+1)$th diagonal position.

Given the (Gaussian) distribution of $y_*$ we then wish to find the mean of the distribution of $P(\pi_*|t)$, which is found from $\bar{\pi}_* = \int \sigma(y_*) P(y_*|t)\, dy_*$. We calculate this by approximating the sigmoid by a set of five cumulative normal densities (erf) that interpolate the sigmoid at chosen points. This leads to a very fast and accurate analytic approximation for the mean class prediction.
The justification of Laplace's approximation in our case is somewhat different from
the argument usually put forward, e.g. for asymptotic normality of the maximum
likelihood estimator for a model with a finite number of parameters. This is because
the dimension of the problem grows with the number of data points. However, if we consider the "infill asymptotics", where the number of data points in a bounded region increases, then a local average of the training data at any point $x$ will provide a tightly localized estimate for $\pi(x)$ and hence $y(x)$, so we would expect the distribution $P(y)$ to become more Gaussian with increasing data.
2.2
Parameterizing the covariance function
There are many reasonable choices for the covariance function. Formally, we are required to specify functions which will generate a non-negative definite covariance matrix for any set of points $(x_1, \ldots, x_k)$. From a modelling point of view we wish
to specify covariances so that points with nearby inputs will give rise to similar
predictions. We find that the following covariance function works well:
$$C(x, x') = v_0 \exp\left\{ -\frac{1}{2} \sum_{l=1}^{d} w_l (x_l - x'_l)^2 \right\} \qquad (8)$$
where $x_l$ is the $l$th component of $x$ and $\theta = \log(v_0, w_1, \ldots, w_d)$ plays the role of hyperparameters².
We define the hyperparameters to be the log of the variables in equation 8 since
these are positive scale-parameters. This covariance function has been studied by
Sacks et al (1989) and can be obtained from a network of Gaussian radial basis
functions in the limit of an infinite number of hidden units (Williams, 1996).
The $w_l$ parameters in equation 8 allow a different length scale on each input dimension. For irrelevant inputs, the corresponding $w_l$ will become small, and the
model will ignore that input. This is closely related to the Automatic Relevance
Determination (ARD) idea of MacKay and Neal (Neal, 1996). The $v_0$ variable gives the overall scale of the prior; in the classification case, this specifies if the $\pi$ values will typically be pushed to 0 or 1, or will hover around 0.5.
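Equation 8 translates directly into code; the sketch below (our own, vectorized over two sets of inputs) exponentiates θ to recover the positive scale parameters v0 and w_l.

```python
import numpy as np

def cov(theta, X1, X2):
    """C(x, x') = v0 * exp(-0.5 * sum_l w_l * (x_l - x'_l)**2), equation 8,
    with theta = log(v0, w_1, ..., w_d)."""
    v0, w = np.exp(theta[0]), np.exp(theta[1:])
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2 * w).sum(axis=-1)
    return v0 * np.exp(-0.5 * d2)

X = np.random.default_rng(0).normal(size=(5, 3))   # 5 points, d = 3 inputs
K = cov(np.full(4, -1.0), X, X)                    # theta has d + 1 entries
print(K.shape, bool(np.allclose(K, K.T)))          # (5, 5) True
```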
2.3
Integration over the hyperparameters
Given that the GP contains adjustable hyperparameters, how should they be
adapted given the data? Maximum likelihood or (generalized) cross-validation
methods are often used, but we will prefer a Bayesian solution. A prior distribution over the hyperparameters $P(\theta)$ is modified using the training data to obtain the posterior distribution $P(\theta|t) \propto P(t|\theta)P(\theta)$. To make predictions we integrate
²We call $\theta$ the hyperparameters rather than parameters as they correspond closely to
hyperparameters in neural networks.
the predicted probabilities over the posterior; for example, the mean value $\bar{\pi}(x_*)$ for test input $x_*$ is given by
$$\bar{\pi}(x_*) = \int \bar{\pi}(x_*|\theta) P(\theta|t)\, d\theta, \qquad (9)$$
where $\bar{\pi}(x_*|\theta)$ is the mean prediction for a fixed value of the hyperparameters, as
given in section 2.
For the regression problem $P(t|\theta)$ can be calculated exactly using $P(t|\theta) = \int P(t|y)P(y|\theta)\, dy$, but this integral is not analytically tractable for the classification problem. Again we use Laplace's approximation and obtain³
$$\log P(t|\theta) \simeq \Psi(\bar{y}) - \frac{1}{2}\log|K^{-1} + W| + \frac{n}{2}\log 2\pi \qquad (10)$$
where $\bar{y}$ is the converged iterate of equation 7. We denote the right-hand side of equation 10 by $\log P_a(t|\theta)$ (where $a$ stands for approximate).
The integration over θ-space also cannot be done analytically, and we employ a
Markov Chain Monte Carlo method. We have used the Hybrid Monte Carlo (HMC)
method of Duane et al (1987), with broad Gaussian hyperpriors on the parameters.
HMC works by creating a fictitious dynamical system in which the hyperparameters
are regarded as position variables, and augmenting these with momentum variables
p. The purpose of the dynamical system is to give the hyperparameters "inertia"
so that random-walk behaviour in θ-space can be avoided. The total energy, $\mathcal{H}$, of the system is the sum of the kinetic energy, $K = p^T p/2$, and the potential energy, $\Phi$. The potential energy is defined such that $P(\theta|\mathcal{D}) \propto \exp(-\Phi)$, i.e. $\Phi = -\log P(t|\theta) - \log P(\theta)$. In practice $\log P_a(t|\theta)$ is used instead of $\log P(t|\theta)$. We sample from the joint distribution for $\theta$ and $p$ given by $P(\theta, p) \propto \exp(-\Phi - K)$; the marginal of this distribution for $\theta$ is the required posterior. A sample of hyperparameters from the posterior can therefore be obtained by simply ignoring the momenta.
Sampling from the joint distribution is achieved by two steps: (i) finding new points in phase space with near-identical energies H by simulating the dynamical system using a discretised approximation to Hamiltonian dynamics, and (ii) changing the energy H by Gibbs sampling the momentum variables.
Hamilton's first order differential equations for H are approximated using the leapfrog method which requires the derivatives of E with respect to θ. Given a Gaussian prior on θ, log P(θ) is straightforward to differentiate. The derivative of log P_a(t|θ) is also straightforward, although implicit dependencies of ỹ (and hence π̂) on θ must be taken into account by using equation 5 at the maximum point to obtain ∂ỹ/∂θ = (I + KW)⁻¹ (∂K/∂θ)(t − π). The computation of the energy can be quite expensive as for each new θ, we need to perform the maximization required for Laplace's approximation, equation 10. The Newton-Raphson iteration was initialized each time with π = 0.5, and continued until the mean relative difference of the elements of W between consecutive iterations was less than 10⁻⁴.
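A minimal sketch of the resulting leapfrog/Metropolis loop follows; the `energy` and `grad_E` callables (the potential E and its derivatives) are assumed helpers of ours, not code from the paper:

```python
import numpy as np

def hmc_step(theta, energy, grad_E, eps=0.1, L=20, rng=np.random):
    """One Hybrid Monte Carlo proposal: L leapfrog steps of size eps."""
    p = rng.normal(size=theta.shape)          # Gibbs-sample the momenta
    H_old = energy(theta) + 0.5 * p @ p       # total energy H = E + K
    th, g = theta.copy(), grad_E(theta)
    for _ in range(L):                        # leapfrog discretisation
        p -= 0.5 * eps * g
        th += eps * p
        g = grad_E(th)
        p -= 0.5 * eps * g
    H_new = energy(th) + 0.5 * p @ p
    # Metropolis rule on the (discretisation-perturbed) final energy
    if rng.uniform() < np.exp(H_old - H_new):
        return th
    return theta
```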
The same step size ε is used for all hyperparameters, and should be as large as possible while keeping the rejection rate low. We have used a trajectory made up of L = 20 leapfrog steps, which gave a low correlation between successive states⁴. This proposed state is then accepted or rejected using the Metropolis rule depending on
³This requires O(n³) computation.
⁴In our experiments, where θ is only 7 or 8 dimensional, we found the trajectory length needed is much shorter than that for neural network HMC implementations.
the final energy H* (which is not necessarily equal to the initial energy H because of the discretization of Hamilton's equations).
The priors over hyperparameters were set to be Gaussian with a mean of −3 and a standard deviation of 3. In all our simulations a step size ε = 0.1 produced a very low rejection rate (≤ 5%). The hyperparameters corresponding to the w_l's were initialized to −2 and that for v₀ to 0. The sampling procedure was run for 200 iterations, and the first third of the run was discarded; this "burn-in" is intended to give the hyperparameters time to come close to their equilibrium distribution.
3 RESULTS
We have tested our method on two well known two-class classification problems, the Leptograpsus crabs and Pima Indian diabetes datasets, and the multiclass Forensic Glass dataset⁵. We first rescale the inputs so that they have mean zero and unit variance on the training set. Our Matlab implementations for the HMC simulations for both tasks each take several hours on an SGI Challenge machine (R10000), although good results can be obtained in less time. We also tried a standard Metropolis MCMC algorithm for the Crabs problem, and found similar results, although the sampling by this method is slower than that for HMC. Comparisons with other methods are taken from Ripley (1994) and Ripley (1996).
Our results for the two-class problems are presented in Table 1. In the Leptograpsus crabs problem we attempt to classify the sex of crabs on the basis of five anatomical attributes. There are 100 examples available for crabs of each sex, making a total of 200 labelled examples. These are split into a training set of 40 crabs of each sex, making 80 training examples, with the other 120 examples used as the test set. The performance of the GP is equal to the best of the other methods reported in Ripley (1994), namely a 2 hidden unit neural network with direct input to output connections and a logistic output unit which was trained with maximum likelihood (Network(1) in Table 1).
For the Pima Indians diabetes problem we have used the data as made available
by Prof. Ripley, with his training/test split of 200 and 332 examples respectively
(Ripley, 1996). The baseline error obtained by simply classifying each record as
coming from a diabetic gives rise to an error of 33%. Again, the GP method is
comparable with the best alternative performance, with an error of around 20%.
Table 1:

                            Pima   Crabs
  Neural Network(1)           -      3
  Neural Network(2)           -      3
  Neural Network(3)         75+      -
  Linear Discriminant        67      8
  Logistic regression        66      4
  MARS (degree = 1)          75      4
  PP (4 ridge functions)     75      6
  2 Gaussian Mixture         64      -
  Gaussian Process (HMC)     68      3
  Gaussian Process (MAP)     69      3

Table 2:

                            Forensic Glass
  Neural Network (4HU)          23.8%
  Linear Discriminant           36%
  MARS (degree = 1)             32.2%
  PP (5 ridge functions)        35%
  Gaussian Mixture              30.8%
  Decision Tree                 32.2%
  Gaussian Process (MAP)        23.3%
Table 1: Number of test errors for the Pima Indian diabetes and Leptograpsus crabs tasks.
Network(2) used two hidden units and the predictive approach (Ripley, 1994), which uses
Laplace's approximation to weight each network local minimum. Network(3) had one
hidden unit and was trained with maximum likelihood; the results were worse for nets
with two or more hidden units (Ripley, 1996). Table 2: Percentage classification error on
the Forensic Glass task.
⁵All available from http://markov.stats.ox.ac.uk/pub/PRNN.
Our method is readily extendable to multiple class problems by using the softmax function. The details of this work will be presented elsewhere, and we simply report here our initial findings on the Forensic Glass problem (Table 2). This is a 6 class problem, consisting of 214 examples containing 9 attributes. The performance is estimated using 10 fold cross validation. Computing the MAP estimate took ≈ 24 hours and gave a classification error of 23.3%, comparable with the best of the other presented methods.
4 DISCUSSION
We have extended the work of Williams and Rasmussen (1996) to classification
problems, and have demonstrated that it performs well on the datasets we have
tried so far. One of the main advantages of this approach is that the number of
parameters used in specifying the covariance function is typically much smaller than
the number of weights and hyperparameters that are used in a neural network , and
this greatly facilitates the implementation of Monte Carlo methods. Furthermore,
because the Gaussian Process is a prior on function space (albeit in the activation
function space), we are able to interpret our prior more readily than for a model
in which the priors are on the parametrization of the function space, as in neural
network models. Some of the elegance that is present using Gaussian Processes
for regression is lost due to the inability to perform the required marginalisation
exactly. Nevertheless, our simulation results suggest that Laplace's approximation
is accurate enough to yield good results in practice. As methods based on GPs
require the inversion of n x n matrices, where n is the number of training examples,
we are looking into methods such as query selection for large dataset problems.
Other future research directions include the investigation of different covariance
functions and improvements on the approximations employed.
We hope to make our MATLAB code available from http://www.ncrg.aston.ac.uk/.
Acknowledgements
We thank Prof. B. Ripley for making available the Leptograpsus crabs and Pima Indian
diabetes datasets. This work was partially supported by EPSRC grant GR/J75425, "Novel Developments in Learning Theory for Neural Networks".
References

Duane, S., A. D. Kennedy, B. J. Pendleton, and D. Roweth (1987). Hybrid Monte Carlo. Physics Letters B 195, 216-222.

Green, P. J. and Silverman, B. W. (1994). Nonparametric regression and generalized linear models. Chapman and Hall.

Neal, R. M. (1996). Bayesian Learning for Neural Networks. Springer. Lecture Notes in Statistics 118.

Ripley, B. (1996). Pattern Recognition and Neural Networks. Cambridge.

Ripley, B. D. (1994). Flexible Non-linear Approaches to Classification. In V. Cherkassy, J. H. Friedman, and H. Wechsler (Eds.), From Statistics to Neural Networks, pp. 105-126. Springer.

Sacks, J., W. J. Welch, T. J. Mitchell, and H. P. Wynn (1989). Design and analysis of computer experiments. Statistical Science 4(4), 409-435.

Williams, C. K. I. Computing with infinite networks. This volume.

Williams, C. K. I. and C. E. Rasmussen (1996). Gaussian processes for regression. In D. S. Touretzky, M. C. Mozer, and M. E. Hasselmo (Eds.), Advances in Neural Information Processing Systems 8, pp. 514-520. MIT Press.
marginalization:3 gave:2 idea:3 multiclass:1 expression:1 matlab:2 nonparametric:1 ten:1 induces:1 generate:1 specifies:1 http:2 percentage:1 estimated:1 anatomical:1 shall:1 group:1 nevertheless:1 changing:1 sum:1 tly:7 run:2 inverse:1 letter:1 extends:1 place:2 reasonable:1 decision:1 dy:3 prefer:1 comparable:3 pushed:1 fl:1 followed:1 fold:1 quadratic:2 ti9:7 adapted:1 placement:1 x2:1 nearby:1 integrand:1 argument:1 linea:2 department:1 belonging:1 conjugate:2 smaller:1 partitioned:1 wi:5 metropolis:2 making:4 taken:2 equation:14 needed:1 tractable:3 available:5 hyperpriors:1 appropriate:1 simulating:1 alternative:1 slower:1 include:1 newton:1 wechsler:1 calculating:1 especially:1 prof:2 approximating:1 quantity:1 strategy:1 usual:1 diagonal:1 gradient:2 thank:1 barber:5 discriminant:1 length:3 code:1 difficult:1 hmc:6 pima:5 negative:1 rise:2 implementation:4 design:1 adjustable:2 perform:3 twenty:1 markov:4 discarded:1 datasets:3 finite:2 extended:1 looking:1 rn:1 david:1 inverting:1 discretised:1 required:4 specified:1 namely:1 z1:1 connection:1 hour:2 able:1 below:1 usually:1 dynamical:3 pattern:1 challenge:1 including:1 green:2 belief:2 hybrid:6 predicting:1 forensic:3 normality:1 aston:4 brief:1 carried:1 prior:22 review:1 acknowledgement:1 asymptotic:1 relative:1 log27r:2 fully:1 expect:2 lecture:1 fictitious:1 var:1 localized:1 validation:3 integrate:1 degree:2 classifying:1 elsewhere:1 supported:1 rasmussen:3 keeping:1 side:1 allow:2 differentiating:1 calculated:2 xn:2 world:1 dimension:3 cumulative:1 stand:1 forward:1 collection:1 inertia:1 made:2 avoided:1 far:2 approximate:3 ignore:1 dealing:1 xi:4 ripley:10 iterative:1 diabetic:1 table:7 transfer:4 correlated:1 ignoring:1 obtaining:2 excellent:1 investigated:1 necessarily:1 diag:1 main:1 noise:2 hyperparameters:22 x1:1 slow:1 position:2 momentum:3 wish:2 xl:2 jacobian:1 third:1 ix:1 theorem:1 xt:1 normalizing:1 albeit:1 magnitude:1 rejection:2 lt:6 simply:3 partially:1 scalar:1 springer:2 duane:2 kinetic:1 exposition:1 labelled:1 fisher:1 xd2:1 infinite:3 total:2 accepted:1 formally:3 newtonraphson:1 arises:1 inability:1 relevance:1 indian:4 mcmc:5 tested:1 ex:1 |
257 | 1,233 | Fast Network Pruning and Feature
Extraction Using the Unit-OBS Algorithm
Achim Stahlberger and Martin Riedmiller
Institut für Logik, Komplexität und Deduktionssysteme
Universität Karlsruhe, 76128 Karlsruhe, Germany
email: stahlb@ira.uka.de, riedml@ira.uka.de
Abstract
The algorithm described in this article is based on the OBS algorithm by Hassibi, Stork and Wolff ([1] and [2]). The main disadvantage of OBS is its high complexity: OBS needs to calculate the inverse Hessian to delete only one weight (thus needing much time to prune a big net). A better algorithm should use this matrix to remove more than only one weight, because calculating the inverse Hessian takes the most time in the OBS algorithm.

The algorithm, called Unit-OBS, described in this article is a method to overcome this disadvantage. This algorithm only needs to calculate the inverse Hessian once to remove one whole unit, thus drastically reducing the time to prune big nets.

A further advantage of Unit-OBS is that it can be used to do a feature extraction on the input data. This can be helpful for the understanding of unknown problems.
1 Introduction
This article is based on the technical report [3] about speeding up the OBS algorithm. The main target of this work was to reduce the high complexity O(n²p) of the OBS algorithm in order to use it for big nets in a reasonable time. Two "exact" algorithms were developed which lead to exactly the same results as OBS but using less time. The first, with time O(n^1.8 p), makes use of Strassen's fast matrix multiplication algorithm. The second algorithm uses algebraic transformations to speed up calculation and needs time O(np²). This algorithm is faster than OBS in the special case of p < n.
To get a much higher speedup than these exact algorithms can do, an improved OBS algorithm was developed which reduces the runtime needed to prune a big network drastically. The basic idea is to use the inverse Hessian to remove a group of weights instead of only one, because the calculation of this matrix takes the most time in the OBS algorithm. This idea leads to an algorithm called Unit-OBS that is able to remove whole units.

Unit-OBS has two main advantages: First, it is a fast algorithm to prune big nets, because whole units are removed in every step instead of slow pruning weight by weight. On the other side, it can be used to do a feature extraction on the input data by removing unimportant input units. This is helpful for the understanding of unknown problems.
2 Optimal Brain Surgeon
This section gives a small summary of the OBS algorithm described by Hassibi, Stork and Wolff in [1] and [2]. As they showed, the increase in error (when changing weights by Δw) is

ΔE = (1/2) Δwᵀ H Δw    (1)
where H is the Hessian matrix. The goal is to eliminate weight w_q and minimize the increase in error given by Eq. 1. Eliminating w_q can be expressed by w_q + Δw_q = 0, which is equivalent to (w + Δw)ᵀ e_q = 0, where e_q is the unit vector corresponding to weight w_q (wᵀe_q = w_q). Solving this extremum problem with side condition using Lagrange's method leads to the solution
ΔE = w_q² / (2 [H⁻¹]_qq)    (2)

Δw = −(w_q / [H⁻¹]_qq) H⁻¹ e_q    (3)

[H⁻¹]_qq denotes the element (q, q) of matrix H⁻¹. For every weight w_q the minimal increase in error ΔE(w_q) is calculated, and the weight which leads to the overall minimum is removed, with all other weights adapted according to Eq. 3. Hassibi, Stork and Wolff also showed how to calculate H⁻¹ using time O(n²p), where n is the number of weights and p the number of patterns.
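One OBS step then amounts to scoring every weight with Eq. 2 and applying Eq. 3; a minimal sketch, assuming H⁻¹ has already been computed (names are ours):

```python
import numpy as np

def obs_step(w, H_inv):
    """Delete the single weight with smallest saliency (eq. 2),
    adapting the remaining weights with eq. 3."""
    saliency = w**2 / (2.0 * np.diag(H_inv))     # Delta E per candidate q
    q = int(np.argmin(saliency))
    delta_w = -(w[q] / H_inv[q, q]) * H_inv[:, q]
    w_new = w + delta_w                          # w_new[q] is (numerically) zero
    return w_new, q, saliency[q]
```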
The main disadvantage of the OBS algorithm is that it needs time O(n²p) to remove only one weight, thus needing much time to prune big nets. The basic idea to soften this disadvantage is to use H⁻¹ to remove more than only one weight! This generalized OBS algorithm is described in the next section.
3 Generalized OBS (G-OBS)
This section shows a generalized OBS algorithm (G-OBS) which can be used to delete m weights in one step with minimal increase in error. Like in the OBS algorithm, the increase in error is given by ΔE = (1/2) Δwᵀ H Δw. But the condition w_q + Δw_q = 0 is replaced by the generalized condition

Mᵀ (w + Δw) = 0    (4)
where M = (e_{q1} e_{q2} ... e_{qm}) is the selection matrix (selecting the weights to be removed) and q1, q2, ..., qm are the indices of the weights that will be removed. Solving this extremum problem with side condition using Lagrange's method leads to the solution

ΔE = (1/2) wᵀ M (Mᵀ H⁻¹ M)⁻¹ Mᵀ w    (5)

Δw = −H⁻¹ M (Mᵀ H⁻¹ M)⁻¹ Mᵀ w    (6)

Choosing M = e_q, Eqs. 5 and 6 reduce to Eqs. 2 and 3. This shows that OBS is (as expected) a special case of G-OBS. The problem of calculating H⁻¹ was already solved by Hassibi, Stork and Wolff ([1] and [2]).
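A direct transcription of Eqs. 5 and 6, again assuming H⁻¹ is given (a sketch under our own naming):

```python
import numpy as np

def gobs_delta(w, H_inv, idx):
    """Increase in error (eq. 5) and weight update (eq. 6) for
    deleting the weights listed in idx simultaneously."""
    n, m = len(w), len(idx)
    M = np.zeros((n, m))
    M[idx, np.arange(m)] = 1.0                 # selection matrix of eq. 4
    A = np.linalg.inv(M.T @ H_inv @ M)         # (M^T H^-1 M)^-1, an m x m system
    Mw = M.T @ w
    delta_E = 0.5 * Mw @ A @ Mw
    delta_w = -H_inv @ M @ A @ Mw
    return delta_E, delta_w
```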
4 Analysis of G-OBS
Hassibi, Stork and Wolff ([1] and [2]) showed that the time to calculate H⁻¹ is in O(n²p). The calculation of ΔE referring to Eq. 5 needs time O(m³)†, where m is the number of weights to be removed. The calculation of Δw (Eq. 6) needs time O(nm + m³).

The problem within this solution consists of not knowing which weights should be deleted, and thus ΔE has to be calculated for all possible combinations to find the global minimum in error increase. Choosing m weights out of n can be done in (n choose m) ways. This takes time (n choose m)·O(m³) to find the minimum. Therefore the total runtime of the generalized OBS algorithm to remove m weights (with minimal increase in error) is
T_{G-OBS} = O( n²p + (n choose m)·m³ )

The problem is that for m > 3 the term (n choose m)·m³ dominates and T_{G-OBS} is in O(n⁴) already for m = 4. In other words, G-OBS can be used only to remove a maximum of three weights in one step. But this means little advantage over OBS.
To overcome this problem the set of possible combinations has to be restricted to a small subset of combinations that seem to be "good" combinations. This reduces the term (n choose m)·m³ to a reasonable amount. One way to do this is to assume that a good combination consists of all outgoing connections of a unit. This reduces the number of combinations to the number of units! The basic idea for that subset is: If all outgoing connections of a unit can be removed, then the whole unit can be deleted because it can not influence the net output anymore. Therefore choosing this subset leads to an algorithm called Unit-OBS that is able to remove whole units without the need to recalculate H⁻¹.
5 Special Case of G-OBS: Unit-OBS
With the results of the last sections we can now describe an algorithm called Unit-OBS to remove whole units.

1. Train a network to minimum error.
†M is a matrix of special type and thus the calculation of (Mᵀ H⁻¹ M) needs only O(m²) operations!
2. Compute H⁻¹.
3. For each unit u:
   (a) Compute the indices q1, q2, ..., q_{m(u)} of the outgoing connections of unit u, where m(u) is the number of outgoing connections of unit u.
   (b) M := (e_{q1} e_{q2} ... e_{q_{m(u)}})
   (c) ΔE(u) := (1/2) wᵀ M (Mᵀ H⁻¹ M)⁻¹ Mᵀ w
4. Find the unit u₀ that gives the smallest increase in error ΔE(u₀).
5. M := M(u₀) (refer to steps 3.(a) and 3.(b))
6. Δw := −H⁻¹ M (Mᵀ H⁻¹ M)⁻¹ Mᵀ w
7. Remove unit u₀ and use Δw to update all weights.
8. Repeat steps 2 to 7 until a break criterion is reached.
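Schematically, steps 2-7 reduce to the following loop, reusing the `gobs_delta` sketch above; `outgoing`, `compute_H_inv` and `remove_unit` are hypothetical helpers of ours, not interfaces from the paper:

```python
def unit_obs_step(w, units, outgoing, compute_H_inv, remove_unit):
    """One pass of Unit-OBS: score every unit with eq. 5, prune the best.

    outgoing(u)      : indices of unit u's outgoing weights (assumed helper)
    compute_H_inv(w) : inverse Hessian of the current net (assumed helper)
    remove_unit(w,u) : drop unit u from the weight vector (assumed helper)
    """
    H_inv = compute_H_inv(w)            # computed once per removed unit
    best = min(units,
               key=lambda u: gobs_delta(w, H_inv, outgoing(u))[0])
    _, delta_w = gobs_delta(w, H_inv, outgoing(best))
    return remove_unit(w + delta_w, best), best
```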
Following the analysis of G-OBS, the time to remove one unit is

T_{Unit-OBS} = O(n²p + u·m³)    (7)

where u is the number of units in the network and m is the maximum number of outgoing connections. If m is much smaller than n we can neglect the term u·m³ and the main problem is to calculate H⁻¹. Therefore, if m is small, we can say that Unit-OBS needs the same time to remove a whole unit as OBS needs to remove a single weight. The speedup when removing units with an average of s outgoing connections should then be s.
6 Simulation results

6.1 The Monk-1 benchmark
Unit-OBS was applied to the MONK's problems because the underlying logical rules are well known and it is easy to say which input units are important to the problem and which input units can be removed. The simulations showed that in no case Unit-OBS removed a wrong unit and that it has the ability to remove all unimportant input units.

Figure 1 shows a MONK-1 net pruned with Unit-OBS. This net is the minimal network that can be found by Unit-OBS. Table 1 shows the speedup of Unit-OBS compared to OBS to find an equal-size network for the MONK-1 problem.
The network shown in Fig. 1 is only minimal in the number of units but not minimal with respect to the number of weights. Hassibi, Stork and Wolff ([1] and [2]) found a network with only 14 weights by applying OBS (Fig. 3). In the framework of Unit-OBS, OBS can be used to do further pruning on the network after all possible units have been pruned. The advantage lies in the fact that now the time-consuming OBS algorithm is applied to a much smaller network (22 weights instead of 58). The result of this combination of Unit-OBS and OBS is a network with only 14 weights (Fig. 2) which also has 100% accuracy like the minimal net found by OBS (see Table 1).
[Figure 1 here; inputs labelled Attribute 1 to Attribute 6.]
Figure 1: MONK-1 net pruned with Unit-OBS, 22 weights. All unimportant units are removed and this net needs fewer units than the minimal network found by OBS!
[Figure 2 here; inputs labelled Attribute 1 to Attribute 6.]
Figure 2: Minimal network (14 weights) for the MONK-1 problem found by the combination of Unit-OBS with OBS. The logical rule for the MONK-1 problem is more evident in this network than in the minimal network found by OBS (comp. Fig. 3).
[Figure 3 here; inputs labelled Attribute 1 to Attribute 6.]
Figure 3: Minimal network (14 weights) for the MONK-1 problem found by OBS (see [1] and [2]).
  algorithm        # weights   topology   speedup†   perf. train   perf. test
  no pruning          58        17-3-1       -          100%          100%
  OBS                 14         6-3-1      1.0         100%          100%
  Unit-OBS            22         5-3-1      2.8         100%          100%
  Unit-OBS + OBS      14         5-3-1      2.6         100%          100%

Table 1: The Monk-1 problem
For the initial Monk-1 network the maximum number of outgoing connections (m in Eq. 7) is 3 and this is much smaller than the number of weights. The average number of outgoing connections of the removed units is 3 and therefore we expect a speedup by factor 3 (compare Table 1).

By comparing the two minimal nets found by Unit-OBS/OBS (Fig. 2) and OBS (Fig. 3) it can be seen that the underlying logical rule (out = 1 ⟺ Attribute1 = Attribute2 or Attribute5 = 1) is more evident in the network found by Unit-OBS/OBS. The other advantage of Unit-OBS is that it needs only 38% of the time OBS needs to find this minimal network. This advantage makes it possible to apply Unit-OBS to big nets for which OBS is not useful because of its long computation time.
6.2 The Thyroid Benchmark
The following describes the application of pruning on a medical classification problem. The task is to classify measured data values of patients into three categories. The output of the three-layered feed-forward network therefore consists of three neurons indicating the corresponding class. The input consists of 21 signals, both continuous and binary.
The task was first described in [4]. The results obtained there are shown in the first
row of Table 2. The initially used network has 21 input neurons, 10 hidden and 3
output neurons, which are fully connected using shortcut connections.
When applying OBS to prune the network weights, more than 90 % of the weights
can be pruned. However, over 8 hours of cpu-time on a sparc workstation are used
to do so (row 2 in Table 2). The solution finally found by OBS uses only 8 of
the originally 21 input features. The pruned network shows a slightly improved
classification rate on the test set.
Unit-OBS finds a solution with 41 weights in only 76 minutes of cpu-time. In
comparison to the original OBS algorithm, Unit-OBS is about 8 times as fast when
deleting the same number of weights. Also another important fact can be seen from
the result: The Unit-OBS network considers only 7 of the originally 21 inputs, 1 less
than the weight-focused OBS- algorithm. The number of hidden units is reduced to
2 units, 5 units less than the OBS network uses.
When further looking for an absolute minimum in the number of used weights, the Unit-OBS network can be additionally pruned using OBS. This finally leads to an optimized network with only 24 weights. The classification performance of this very

†Compared to OBS deleting the same number of weights.
small network is 98.5%, which is even slightly better than that obtained by the much bigger initial net.
  algorithm        # weights   topology   speedup   cpu-time   perf. test
  no pruning         316        21-10-3      -          -         98.4%
  OBS                 28         8-7-3      1.0     511 min.      98.5%
  Unit-OBS            41         7-2-3      7.8      76 min.      98.4%
  Unit-OBS + OBS      24         7-2-3       -      137 min.      98.5%

Table 2: The thyroid benchmark
7 Conclusion
The article describes an improvement of the OBS algorithm introduced in [1], called Generalized OBS (G-OBS). The underlying idea is to exploit second order information to delete multiple weights at once. The aim to reduce the number of different weight groups leads to the formulation of the Unit-OBS algorithm, which considers the outgoing weights of one unit as a group of candidate weights: When all the weights of a unit can be deleted, the unit itself can be pruned. The new Unit-OBS algorithm has two major advantages: First, it considerably accelerates pruning by a speedup factor which lies in the range of the average number of outgoing weights of each unit. Second, deleting complete units is especially interesting to determine the input features which really contribute to the computation of the output. This information can be used to get more insight in the underlying problem structure, e.g. to facilitate the process of rule extraction.
References

[1] B. Hassibi, D. G. Stork: Second Order Derivatives for Network Pruning: Optimal Brain Surgeon. Advances in Neural Information Processing Systems 5, Morgan Kaufmann, 1993, pages 164-171.

[2] B. Hassibi, D. G. Stork, G. J. Wolff: Optimal Brain Surgeon and general Network Pruning. IEEE International Conference on Neural Networks, 1993, Volume 1, pages 293-299.

[3] A. Stahlberger: OBS - Verbesserungen und neue Ansätze. Diplomarbeit, Universität Karlsruhe, Institut für Logik, Komplexität und Deduktionssysteme, 1996.

[4] W. Schiffmann, M. Joost, R. Werner: Optimization of the Backpropagation Algorithm for Training Multilayer Perceptrons. Technical Report, University of Koblenz, Institute of Physics, 1993.
258 | 1,234 | A Constructive Learning Algorithm for
Discriminant Tangent Models
Diego Sona
Alessandro Sperduti
Antonina Starita
Dipartimento di Informatica, Università di Pisa
Corso Italia, 40, 56125 Pisa, Italy
email: {sona,perso,starita}@di.unipi.it
Abstract
To reduce the computational complexity of classification systems
using tangent distance, Hastie et al. (HSS) developed an algorithm to devise rich models for representing large subsets of the
data which computes automatically the "best" associated tangent subspace. Schwenk & Milgram proposed a discriminant modular classification system (Diabolo) based on several autoassociative
multilayer perceptrons which use tangent distance as error reconstruction measure.
We propose a gradient based constructive learning algorithm for
building a tangent subspace model with discriminant capabilities
which combines several of the advantages of both HSS and
Diabolo: devised tangent models hold discriminant capabilities,
space requirements are improved with respect to HSS since our
algorithm is discriminant and thus it needs fewer prototype models,
dimension of the tangent subspace is determined automatically by
the constructive algorithm, and our algorithm is able to learn new
transformations.
1 Introduction
Tangent distance is a well known technique used for transformation invariant pattern recognition. State-of-the-art accuracy can be achieved on an isolated handwritten character task using tangent distance as the classification metric within a
nearest neighbor algorithm [SCD93]. However, this approach has a quite high computational complexity, owing to the inefficient search and large number of Euclidean
and tangent distances that need to be calculated. Different researchers have shown
how such time complexity can be reduced [Sim94, SS95] at the cost of increased
space complexity.
A different approach to the problem was used by Hastie et al. [HSS95] and Schwenk & Milgram [SM95b, SM95a]. Both of them used learning algorithms for reducing the classification time and space requirements, while trying to preserve the same accuracy. Hastie et al. [HSS95] developed rich models for representing large subsets of the prototypes. These models are learned from a training set through a Singular Value Decomposition based algorithm which minimizes the average 2-sided tangent distance from a subset of the training images. A nice feature of this algorithm is that it computes automatically the "best" tangent subspace associated with the prototypes. Schwenk & Milgram [SM95b] proposed a modular classification system (Diabolo) based on several autoassociative multilayer perceptrons which use tangent distance as the error reconstruction measure. This original model was then improved by adding discriminant capabilities to the system [SM95a].

Comparing the Hastie et al. algorithm (HSS) versus the discriminant version of Diabolo, we observe that: Diabolo seems to require less memory than HSS, however, learning is faster in HSS; Diabolo is discriminant while HSS is not; the number of hidden units to be used in Diabolo's autoassociators must be decided heuristically through a trial and error procedure, while the dimension of the tangent subspaces in HSS can be controlled more easily; Diabolo uses predefined transformations, while HSS is able to learn new transformations (like style transformations).
In this paper, we introduce the tangent distance neuron (TD-neuron), which implements the 1-sided version of the tangent distance, and we devise a gradient based constructive learning algorithm for building a tangent subspace model with discriminant capabilities. In this way, we are able to combine the advantages of both HSS and Diabolo: the model holds discriminant capabilities, learning is just a bit slower than HSS, space requirements are improved with respect to HSS since the TD-neuron is discriminant and thus it needs fewer prototype models, the dimension of the tangent subspace is determined automatically by the constructive algorithm, and the TD-neuron is able to learn new transformations.
2 Tangent Distance
In several pattern recognition problems Euclidean distance fails to give a satisfactory solution since it is unable to account for invariant transformations of the patterns. Simard et al. [SCD93] suggested dealing with this problem by generating a parameterized 7-dimensional manifold for each image, where each parameter accounts for one such invariance. The underlying idea consists in approximating the considered transformations locally through a linear model.

For the sake of exposition, consider rotation. Given a digitalized image X_i of a pattern i, the rotation operation can be approximated by X_i(θ) = X_i + T_{X_i} θ, where θ is the rotation angle, and T_{X_i} is the tangent vector to the rotation curve generated by the rotation operator for X_i. The tangent vector T_{X_i} can easily be computed by finite difference. Now, instead of measuring the distance between two images as D(X_i, X_j) = ‖X_i − X_j‖ for any norm ‖·‖, Simard et al. proposed using the tangent distance D_T(X_i, X_j) = min_{θ_i, θ_j} ‖X_i(θ_i) − X_j(θ_j)‖.

If k types of transformations are considered, there will be k different tangent vectors per pattern. If ‖·‖ is the Euclidean norm, computing the tangent distance is a simple least-squares problem. A solution for this problem¹ can be found in Simard et al. [SCD93], where the authors used D_T to drive a 1-NN classification rule.
¹A special case of tangent distance, i.e., the one-sided tangent distance D_T^{1-sided}(X_i, X_j) = min_{θ_i} ‖X_i(θ_i) − X_j‖, can be computed more efficiently [SS95].
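With an ortho-normal set of tangent vectors, the one-sided distance of the footnote reduces to removing the projections onto the tangent plane; a minimal sketch (names ours):

```python
import numpy as np

def one_sided_tangent_dist_sq(x, w, T):
    """Squared 1-sided tangent distance between pattern x and the
    model (w, T), with T an orthonormal basis (rows = tangent vectors)."""
    d = x - w
    gamma = T @ d                    # projections of d on each tangent vector
    return d @ d - gamma @ gamma     # ||d||^2 minus the in-plane part
```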
Figure 1: Geometric interpretation of equation 1. Note that net = (D_T^{1-sided})².
Unfortunately, 1-NN is expensive. To reduce the complexity of the above approach, Hastie et al. [HSS95] proposed an algorithm for the generation of rich models representing large subsets of patterns. This algorithm computes for each class a prototype (the centroid), and an associated subspace (described by the tangent vectors), such that the total tangent distance of the centroid with respect to the prototypes in the training set is minimised. Note that the associated subspace is not predefined as in the case of standard tangent distance, but is computed on the basis of the training set.
3 Tangent Distance Neuron
In this section we define the Tangent Distance neuron (TD-neuron), which is the computational model studied in this paper. A TD-neuron is characterized by a set of n + 1 vectors, of the same dimension as the input vectors (in our case, images). One of these vectors, W, is used as reference vector (centroid), while the remaining vectors, T_i (i = 1, ..., n), are used as tangent vectors. Moreover, the set of tangent vectors constitutes an ortho-normal basis.
Given an input vector I, the input net of the TD-neuron is computed as the square of the 1-sided tangent distance between I and the tangent model {W, T_1, ..., T_n} (see Figure 1)

net = dᵀd − Σ_{i=1}^{n} γ_i²,  with d = I − W and γ_i = dᵀT_i,    (1)

where we have used the fact that the tangent vectors constitute an ortho-normal basis. For the sake of notation, d denotes the difference between the input pattern and the centroid, and the projection of d over the i-th tangent vector is denoted by γ_i. Note that, by definition, net is non-negative.
The output o of the TD-neuron is then computed by transforming the net through a nonlinear monotone function f. In our experiments, we have used the following function

o = f(α, net) = 1 / (1 + α·net)    (2)

where α controls the steepness of the function. Note that o is positive and within the range (0, 1], since net is non-negative.
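Putting equations 1 and 2 together, a TD-neuron forward pass could be sketched as follows (variable names are our own, not from the paper):

```python
import numpy as np

def td_neuron_output(I, W, T, alpha):
    """Output o = f(alpha, net) of eq. 2, with net from eq. 1.

    I : input pattern, W : centroid, T : orthonormal tangent
    vectors (one per row), alpha : steepness of the nonlinearity.
    """
    d = I - W
    gamma = T @ d                        # gamma_i = d . T_i
    net = d @ d - gamma @ gamma          # squared 1-sided tangent distance
    return 1.0 / (1.0 + alpha * net)     # in (0, 1], since net >= 0
```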
4 Learning
The TD-neuron can be trained to discriminate between patterns belonging to two different classes through a gradient descent technique. Thus, given a training set {(I¹, t₁), ..., (I^N, t_N)}, where t_i ∈ {0, 1} is the i-th desired output, and N is the total number of patterns in the training set, we can define the error function as

E = (1/2) Σ_{k=1}^{N} (t_k − o_k)²    (3)

where o_k is the output of the TD-neuron for the k-th input pattern.
Using equations (1-2), it is trivial to compute the changes for the tangent vectors, the centroid and α:

ΔT_i = −η (∂E/∂T_i) = Σ_{k=1}^{N} 2 η α (t_k − o_k) o_k² γ_i^{(k)} d^{(k)}    (4)

ΔW = −η (∂E/∂W) = Σ_{k=1}^{N} 2 η α (t_k − o_k) o_k² ( d^{(k)} − Σ_{i=1}^{n} γ_i^{(k)} T_i )    (5)

Δα = −η_α (∂E/∂α) = −Σ_{k=1}^{N} η_α net_k (t_k − o_k) o_k²    (6)

where η and η_α are learning parameters.
The learning algorithm initializes the centroid W to the average of the patterns with target 1, i.e., W = (1/N₁) Σ_{k=1}^{N₁} I^k, where N₁ is the number of patterns with target equal to 1, and the tangent vectors to random vectors with small modulus. Then α, the centroid W and the tangent vectors T_i are changed according to equations (4-6). Moreover, since the tangent vectors must constitute an ortho-normal basis, after each epoch of training the vectors T_i are ortho-normalized.
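A sketch of one batch epoch implementing equations 4-6 and the ortho-normalization step; the learning rates and the QR-based Gram-Schmidt are our own choices, not values from the paper:

```python
import numpy as np

def td_epoch(patterns, targets, W, T, alpha, eta=0.01, eta_a=0.001):
    """Batch update of tangent vectors, centroid and alpha (eqs. 4-6)."""
    dT, dW, da = np.zeros_like(T), np.zeros_like(W), 0.0
    for I, t in zip(patterns, targets):
        d = I - W
        gamma = T @ d
        net = d @ d - gamma @ gamma
        o = 1.0 / (1.0 + alpha * net)
        err = (t - o) * o**2                             # common factor of eqs. 4-6
        dT += 2 * eta * alpha * err * np.outer(gamma, d)  # eq. 4
        dW += 2 * eta * alpha * err * (d - T.T @ gamma)   # eq. 5
        da += -eta_a * err * net                          # eq. 6
    T, W, alpha = T + dT, W + dW, alpha + da
    # re-impose the ortho-normal basis after each epoch (Gram-Schmidt via QR)
    T = np.linalg.qr(T.T)[0].T
    return W, T, alpha
```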
5 The Constructive Algorithm
Before training the TD-neuron using equations (4-6), we have to set the tangent subspace dimension. The same problem is present in HSS and Diabolo (i.e., number of hidden units). To solve this problem we have developed a constructive algorithm which adds tangent vectors one by one according to the computational needs.

The key idea is based on the observation that a typical run of the learning algorithm described in Section 4 leads to the sequential convergence of the vectors according to their relative importance. This means that the tangent vectors all remain random vectors while the centroid converges first. Then one of the tangent vectors converges to the most relevant transformation (while the remaining tangent vectors are still immature), and so on till all the tangent vectors converge, one by one, to less and less relevant transformations.

This behavior suggests starting the training using only the centroid (i.e., without tangent vectors) and allowing it to converge. Then, as in other constructive algorithms, the centroid is frozen and one random tangent vector T_1 is added. Learning is resumed till changes in T_1 become irrelevant. During learning, however, T_1 is normalized after each epoch. At convergence, T_1 is frozen, a new random tangent vector T_2 is added, and learning is resumed. New tangent vectors are iteratively added till changes in the classification accuracy become irrelevant.
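Schematically, the constructive procedure is a loop around the training step above; `train_until_converged` and `accuracy` are hypothetical helpers standing in for the learning of section 4 and a validation measure:

```python
import numpy as np

def constructive_td(patterns, targets, dim, max_tangents, tol=1e-3):
    rng = np.random.default_rng(0)
    W = patterns[targets == 1].mean(axis=0)   # centroid = mean of class-1 patterns
    T = np.empty((0, dim))
    # train_until_converged / accuracy are assumed helpers, not from the paper
    W, T, alpha = train_until_converged(patterns, targets, W, T, 1.0)
    acc = accuracy(patterns, targets, W, T, alpha)
    while T.shape[0] < max_tangents:
        T = np.vstack([T, 1e-3 * rng.normal(size=dim)])   # new small random tangent
        # freeze = number of leading (already converged) tangent vectors held fixed
        W, T, alpha = train_until_converged(patterns, targets, W, T, alpha,
                                            freeze=T.shape[0] - 1)
        new_acc = accuracy(patterns, targets, W, T, alpha)
        if new_acc - acc < tol:               # stop when gains become irrelevant
            break
        acc = new_acc
    return W, T, alpha
```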
                    HSS                TD-neuron
  # Tang.    % Cor    % Err     % Cor    % Rej    % Err
  0          78.74    21.26     73.78     7.24    18.98
  1          79.10    20.90     72.06    10.?8    17.46
  2          79.94    20.06     77.99     8.05    13.96
  3          81.47    18.53     81.14     7.17    11.69
  4          76.87    23.13     82.68     6.8.    10.48
  5          71.29    28.71     84.25     5.63    10.12
  6            -        -       85.21     5.14     9.65
  7            -        -       86.16     4.76     9.08
  8            -        -       86.37     4.89     8.74

Table 1: The results obtained by the HSS algorithm and the TD-neuron.
6 Results
We have tested our constructive algorithm versus the HSS algorithm (which uses the 2-sided tangent distance) on 10587 binary digits from the NIST-3 dataset. The binary 128x128 digits were transformed into a 64-grey level 16x16 format by a simple local counting procedure. No other pre-processing transformation was performed. The training set consisted of 3000 randomly chosen digits, while the remaining digits were used in the test set. A single tangent model for each class of digit was computed using both algorithms. The classification of the test digits was performed using the label of the closest model for HSS and the output of the TD-neurons for our system. The TD-neurons used a rejection criterion with parameters adapted during training.
In Table 1 we have reported the performances on the test set of both HSS and our system. Different numbers of tangent vectors were tested for both of them. From the results it is clear that the models generated by HSS reach a peak in performance with 4 tangent vectors and then a sharp degradation of the generalization is observed by adding more tangent vectors. On the contrary, the TD-neurons are able to steadily increase the performance with an increasing number of tangent vectors. The improvement in the performance, however, seems to saturate when using many tangent vectors. Table 2 presents the confusion matrix obtained by the TD-neurons with 8 tangent vectors.
For comparison, we display some of the tangent models computed by HSS and by our algorithm in Figure 2. Note how tangent models developed by the HSS algorithm tend to be more blurred than the ones developed by our algorithm. This is due to the lack of discriminant capabilities of the HSS algorithm and it is the main cause of the degradation in performance observed when using more than 4 tangent vectors.
It must be pointed out that, for a fixed number of tangent vectors, the HSS algorithm is faster than ours, because it needs only a fraction of the training examples (only one class). However, our algorithm is remarkably more efficient when a family of tangent models with an increasing number of tangent vectors must be generated². Moreover, since a TD-neuron uses the one-sided tangent distance, it is faster in computing the output.
Figure 2: The tangent models obtained for digits '1' and '3' by the HSS algorithm (rows 1 and 3, respectively) and our TD-neuron (rows 2 and 4, respectively). The centroids are shown in the first column.

²The tangent model computed by HSS depends on the number of tangent vectors.

7 Conclusion

We introduced the tangent distance neuron (TD-neuron), which implements the 1-sided version of the tangent distance, and gave a constructive learning algorithm for building a tangent subspace with discriminant capabilities. As stated in the introduction, there are many advantages of using the proposed computational model
versus other techniques like HSS and Diabolo. Specifically, we believe that the
proposed approach is particularly useful in those applications where it is very important to have a classification system which is both discriminant and semantically
transparent, in the sense that it is very easy to understand how it works. One
among these applications is the classification of ancient book scripts. In fact, the
description, the comparison , and the classification of forms are the main tasks of
paleographers. Until now, however, these tasks have been generally performed without the aid of a universally accepted and quantitatively based method or technique.
Consequently, very often it is impossible to reach a definitive date attribution of a
document to within 50 years. In this field, it is very important to have a system
which is both discriminant and explanatory, so that paleographers can learn from it
which are the relevant features of the script of a given epoch. These requirements
rule out systems like Diabolo, which is not easily interpretable, and also tangent
models developed by HSS, which are not discriminant. In Figure 3 we have reported
some preliminary results we obtained within this field.
Perhaps most importantly, our work suggests a number of research avenues. We
used just a single TD-neuron; presumably having several neurons arranged as an
adaptive pre-processing layer within a standard feed-forward neural network can
yield a remarkable increase in the transformation invariant features of the network.
[Table 2 here: the 10-class confusion matrix of the TD-neurons with 8 tangent vectors, with per-class % Cor, % Rej and % Err columns. Overall: 86.37% correct, 4.89% rejected, 8.74% errors.]

Table 2: The confusion matrix for the TD-neurons with 8 tangent vectors.
determined:2 typical:1 reducing:1 specifically:1 semantically:1 degradation:2 total:3 discriminate:1 invariance:1 accepted:1 perceptrons:2 constructive:13 tested:2 |
259 | 1,235 | Blind separation of delayed and convolved
sources.
Te-Won Lee
Max-Planck-Society, GERMANY,
AND Interactive Systems Group
Carnegie Mellon University
Pittsburgh, PA 15213, USA
tewon@cs.cmu.edu
Anthony J. Bell
Computational Neurobiology,
The Salk Institute
10010 N. Torrey Pines Road
La Jolla, California 92037, USA
tony@salk.edu
Russell H. Lambert
Dept of Electrical Engineering
University of Southern California, USA
rlambert@sipi.usc.edu
Abstract
We address the difficult problem of separating multiple speakers
with multiple microphones in a real room. We combine the work
of Torkkola and Amari, Cichocki and Yang, to give Natural Gradient information maximisation rules for recurrent (IIR) networks,
blindly adjusting delays, separating and deconvolving mixed signals. While they work well on simulated data, these rules fail
in real rooms, which usually involve non-minimum phase transfer functions that are not invertible using stable IIR filters. An approach that
sidesteps this problem is to perform infomax on a feedforward architecture in the frequency domain (Lambert 1996). We demonstrate
real-room separation of two natural signals using this approach.
1 The problem.
In the linear blind signal processing problem ([3, 2] and references therein), N signals, s(t) = [s₁(t) ... s_N(t)]ᵀ, are transmitted through a medium so that an array of N sensors picks up a set of signals x(t) = [x₁(t) ... x_N(t)]ᵀ, each of which
has been mixed, delayed and filtered as follows:

x_i(t) = Σ_{j=1}^{N} Σ_{k=0}^{M-1} a_{ijk} s_j(t − D_{ij} − k)    (1)

(Here D_{ij} are entries in a matrix of delays and there is an M-point filter, a_{ij}, between the jth source and the ith sensor.) The problem is to invert this mixing without knowledge of it, thus recovering the original signals, s(t).
2 Architectures.
The obvious architecture for inverting eq. 1 is the feedforward one:

u_i(t) = Σ_{j=1}^{N} Σ_{k=0}^{M-1} w_{ijk} x_j(t − d_{ij} − k)    (2)
which has filters, w_{ij}, and delays, d_{ij}, which supposedly reproduce, at the u_i, the original uncorrupted source signals, s_i. This was the architecture implicitly assumed in [2]. However, it cannot solve the delay-compensation problem, since in eq. 1 each delay, D_{ij}, delays a single source, while in eq. 2 each delay, d_{ij}, is associated with a mixture, x_j.
Torkkola [8] has addressed the problem of solving the delay-compensation problem with a feedback architecture. Such an architecture can, in principle, solve this problem, as shown earlier by Platt & Faggin [7]. Torkkola [9] also generalised the feedback architecture to remove dependencies across time, to achieve the deconvolution of mixtures which have been filtered, as in eq. 1.

Here we propose a slightly different architecture than Torkkola's ([9], eq. 15). His architecture could fail since it is missing feedback cross-weights for t = 0, i.e.: w_{ij0}. A full feedback system looks like:

u_i(t) = x_i(t) − Σ_{j=1}^{N} Σ_{k=0}^{M-1} w_{ijk} u_j(t − d_{ij} − k)    (3)

and is illustrated in Fig. 1. Because terms in u_i(t) appear on both sides, we rewrite this in vector terms: u(t) = x(t) − W₀u(t) − Σ_{k=1}^{M-1} W_k u(t − k), in order to solve it as follows:

u(t) = (I + W₀)⁻¹ ( x(t) − Σ_{k=1}^{M-1} W_k u(t − k) )    (4)

In these equations, there is a feedback unmixing matrix, W_k, for each time point of the filter, but the 'leading matrix', W₀, has a special status in solving for u(t). The delay terms are useful since one metre of distance in air at an 8kHz sampling rate corresponds to a whole 25 zero-taps of a filter. Reintroducing them gives us:
u(t) = (I + W₀)⁻¹ ( x(t) − net(t) ),   net_i(t) = Σ_{j=1}^{N} Σ_{k=1}^{M-1} w_{ijk} u_j(t − d_{ij} − k)    (5)
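A minimal sketch of the forward pass of eq. 5, with the delays d_{ij} set to zero for brevity (names ours):

```python
import numpy as np

def feedback_outputs(x, W):
    """u(t) per eq. 5 for a mixture sequence x (T x N) and feedback
    filters W (M x N x N); W[0] is the leading matrix W_0."""
    T_len, N = x.shape
    M = W.shape[0]
    u = np.zeros_like(x)
    inv = np.linalg.inv(np.eye(N) + W[0])
    for t in range(T_len):
        net = np.zeros(N)
        for k in range(1, min(M, t + 1)):    # feedback from past outputs only
            net += W[k] @ u[t - k]
        u[t] = inv @ (x[t] - net)
    return u
```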
[Figure 1 here: sources s₁, s₂ pass through the mixing system A(z) to give x₁, x₂, which are unmixed by W(z) into u₁, u₂; the objective is to maximize the joint entropy H(y).]
Figure 1: The feedback neural architecture of eq. 9, which is used to separate and deconvolve signals. Each box represents a causal filter and each circle denotes a time delay.
3 Algorithms.
Learning in this architecture is performed by maximising the joint entropy, H(y(t)), of the random vector y(t) = g(u(t)), where g is a bounded monotonic nonlinear function (a sigmoid function). The success of this for separating sources depends on four assumptions: (1) that the sources are statistically independent, (2) that each source is white, i.e.: there are no dependencies between time points, (3) that the non-linearity, g, has a derivative which has higher kurtosis than the probability density functions (pdf's) of the sources, and (4) that a stable IIR (feedback) inverse of the mixing exists; i.e.: that A is minimum phase (see section 5).
Assumption (1) is reasonable and Assumption (3) allows some tailoring of our algorithm to fit data of different types. Assumption (2), on the other hand, is not true
for natural signals. Our algorithm will whiten: it will remove dependencies across
time which already existed in the original source signals, s_i. However, it is possible
to restore the characteristic autocorrelations (amplitude spectra) of the sources by
post-processing. For the reasoning behind Assumption (3) see [2]. We will discuss
Assumption 4 in section 5.
In the static feedback case of eq. 5, when M = 1, the learning rule for the feedback weights W₀ is just a co-ordinate transform of the rule for feedforward weights, Ŵ₀, in the equivalent architecture of u(t) = Ŵ₀x(t). Since Ŵ₀ = (I + W₀)⁻¹, we have W₀ = Ŵ₀⁻¹ − I, which, due to the quotient rule for matrix differentiation, differentiates as:

ΔW₀ = −Ŵ₀⁻¹ ΔŴ₀ Ŵ₀⁻¹    (6)
The best way to maximise entropy in the feedforward system is not to follow the entropy gradient, as in [2], but to follow its 'natural' gradient, as reported by Amari et al [1]:

ΔŴ₀ ∝ (∂H(y)/∂Ŵ₀) Ŵ₀ᵀ Ŵ₀    (7)

This is an optimal rescaling of the entropy gradient [1, 3]. It simplifies the learning
rule and speeds convergence considerably. Evaluated, it gives [2]:

\Delta\hat{W}_0 \propto (I + \hat{y} u^T) \hat{W}_0,    (8)

where \hat{y}_i = \frac{\partial}{\partial u_i} \ln \left| \frac{\partial y_i}{\partial u_i} \right|.
Substituting eq.8 into eq.6 gives the natural gradient rule for the static feedback weights:

\Delta W_0 \propto -(I + W_0)(I + \hat{y} u^T)    (9)
This reasoning may be extended to networks involving filters. For the feedforward filter architecture u(t) = \sum_{k=0}^{M-1} \hat{W}_k x(t-k), we derive a natural gradient rule (for k > 0) of:

\Delta\hat{W}_k \propto \hat{y}_t u_{t-k}^T \hat{W}_k    (10)

where, for convenience, time has become subscripted. Performing the same co-ordinate transforms as for W_0 above gives the rule:

\Delta W_k \propto -(I + W_0) \, \hat{y}_t u_{t-k}^T    (11)
(We note that learning rules similar to these have been independently derived by Cichocki et al [4]). Finally, for the delays in eq.5, we derive [2, 8]:

\Delta d_{ij} \propto \frac{\partial H(y)}{\partial d_{ij}} = -\hat{y}_i \sum_{k=1}^{M-1} w_{ijk} \, \frac{\partial}{\partial t} u_j(t - d_{ij} - k)    (12)
This rule is different from that in [8] because it uses the collected temporal gradient
information from all the taps. The algorithms of eq.9, eq.11 and eq.12 are the ones
we use in our experiments on the architecture of eq.5.
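As a concrete illustration, the following Python sketch applies one sample's worth of the three updates. The sign conventions and buffer handling follow our reconstruction of eqs.9, 11 and 12 above; for the logistic nonlinearity, the score is \hat{y} = 1 - 2y. All names are ours:

    import numpy as np

    def natural_gradient_updates(u_t, y_t, u_lag, du_lag, W0, W, d, lr=1e-4):
        # u_lag[k]: u(t - k); du_lag[tau]: time derivative of u at lag tau.
        N = u_t.shape[0]
        y_hat = 1.0 - 2.0 * y_t                     # score for logistic y
        I = np.eye(N)
        dW0 = -lr * (I + W0) @ (I + np.outer(y_hat, u_t))        # eq.9
        dW = [-lr * (I + W0) @ np.outer(y_hat, u_lag[k])          # eq.11
              for k in range(1, len(W) + 1)]
        dd = np.zeros_like(d, dtype=float)                        # eq.12
        for i in range(N):
            for j in range(N):
                dd[i, j] = -lr * y_hat[i] * sum(
                    W[k - 1][i, j] * du_lag[d[i, j] + k][j]
                    for k in range(1, len(W) + 1))
        return dW0, dW, dd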
4 Simulation results for the feedback architecture
To test the learning rules in eq.9, eq.11 and eq.12 we used an IIR filter system to recover two sources which had been mixed and delayed as follows (in Z-transform notation):

A_{11}(z) = 0.9 + 0.5z^{-1} + 0.3z^{-2}          A_{12}(z) = 0.5z^{-5} + 0.3z^{-6} + 0.2z^{-7}
A_{21}(z) = -0.7z^{-5} - 0.3z^{-6} - 0.2z^{-7}   A_{22}(z) = 0.8 - 0.1z^{-1}    (13)
The mixing system, A(z), is a minimum-phase system with all its zeros inside the unit circle. Hence, A(z) can be inverted using a stable causal IIR system since all poles of the inverting system are also inside the unit circle. For this experiment, we chose an artificially-generated source: a white process with a Laplacian distribution [f_x(x) = exp(-|x|)]. In the frequency domain the deconvolving system looks as follows:

U(z) = \frac{1}{D(z)} \begin{pmatrix} W_{11}(z) & W_{12}(z) \\ W_{21}(z) & W_{22}(z) \end{pmatrix} X(z)    (14)

where D(z) = W_{11}(z)W_{22}(z) - W_{12}(z)W_{21}(z). This leads to the following solution for the weight filters:

W_{11}(z) = A_{22}(z)    W_{12}(z) = -A_{12}(z)
W_{21}(z) = -A_{21}(z)   W_{22}(z) = A_{11}(z)    (15)
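Since eq.15 is stated up to the common factor 1/D(z), it is easy to check numerically that the product W(z)A(z) is diagonal. A small sketch of our own, using polynomial multiplication via np.convolve:

    import numpy as np

    # Filter taps as coefficients of z^0, z^-1, ...; convolution multiplies polynomials.
    A11 = np.array([0.9, 0.5, 0.3])
    A12 = np.array([0, 0, 0, 0, 0, 0.5, 0.3, 0.2])
    A21 = np.array([0, 0, 0, 0, 0, -0.7, -0.3, -0.2])
    A22 = np.array([0.8, -0.1])

    W11, W12, W21, W22 = A22, -A12, -A21, A11    # eq.15, up to 1/D(z)

    def padd(a, b):    # add two coefficient arrays of different lengths
        n = max(len(a), len(b))
        return np.pad(a, (0, n - len(a))) + np.pad(b, (0, n - len(b)))

    diag = padd(np.convolve(W11, A11), np.convolve(W12, A21))   # equals D(z)
    off = padd(np.convolve(W11, A12), np.convolve(W12, A22))    # all zeros
    print(np.round(diag, 4), np.round(off, 4))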
The learning rules we used were those of eq.9 and eq.11 with the logistic non-linearity, y_i = 1/(1 + exp(-u_i)). Fig.2A shows the four filters learnt by our IIR algorithm. The bottom row shows the inverting system convolved with the mixing system, proving that W * A is approximately the identity mapping. Delay learning is not demonstrated here, though for periodic signals like speech we observed that it is subject to local minima problems [8, 9].
[Figure 2: Top two rows: learned unmixing filters for (A) IIR learning on minimum-phase mixing, and (B) FIR frequency-domain learning on non-minimum-phase mixing. Bottom row: the convolved mixing and unmixing systems. The delta-like response indicates successful blind unmixing. In (B) this occurs acausally with a time-shift.]
5 Back to the feedforward architecture.
The feedback architecture is elegant but limited. It can only invert minimum-phase mixing (all zeros are inside the unit circle, meaning that all poles of the inverting system are as well). Unfortunately, real room acoustics usually involves non-minimum-phase mixing.

There does exist, however, a stable non-causal feedforward (FIR) inverse for non-minimum-phase mixing systems. The learning rules for such a system can be formulated using the FIR polynomial matrix algebra as described by Lambert [5]. This may be performed in the time or frequency domain, the only requirements being that the inverting filters are long enough and that their main energy occurs more or less in their centre. This allows for the non-causal expansion of the non-minimum-phase roots, causing the roughly symmetrical "flanged" appearance of the filters in Fig.2B.
For convenience, we formulate the infomax and natural gradient infomax rules [2, 1] in the frequency domain:

\Delta W \propto W^{-H} + fft(\hat{y}) X^H    (16)

\Delta W \propto (I + fft(\hat{y}) U^H) W    (17)

where the H superscript denotes the Hermitian transpose (complex conjugate).
In these rules, as in eq.14, W is a matrix of filters and U and X are blocks of multi-sensor signal in the frequency domain. Note that the nonlinearity \hat{y}_i = \frac{\partial}{\partial u_i} \ln \left| \frac{\partial y_i}{\partial u_i} \right| still operates in the time domain and the fft is applied at the output.
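A sketch of one block update under eq.17, in Python. The block segmentation, the use of the logistic score \hat{y} = 1 - 2y, and the per-bin loop are our assumptions about how the frequency-domain rule is applied:

    import numpy as np

    def fir_infomax_step(W_f, X_f, lr=1e-3):
        # W_f: complex unmixing filters per frequency bin, shape (F, N, N)
        # X_f: FFT of one block of the N sensor signals, shape (F, N)
        U_f = np.einsum('fij,fj->fi', W_f, X_f)     # unmix per frequency bin
        u_t = np.fft.ifft(U_f, axis=0).real         # nonlinearity acts in time
        y_hat = 1.0 - 2.0 / (1.0 + np.exp(-u_t))    # logistic score function
        Y_f = np.fft.fft(y_hat, axis=0)             # fft applied at the output
        I = np.eye(W_f.shape[1])
        for f in range(W_f.shape[0]):               # eq.17 per frequency bin
            W_f[f] += lr * (I + np.outer(Y_f[f], U_f[f].conj())) @ W_f[f]
        return W_f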
6 Simulation results for the feedforward architecture
To show the learning rule in eq.17 working, we altered the transfer function in eq.13
as follows:
(18)
This system is now non-minimum phase, having zeros outside the unit circle. The
inverse system can be approximated by stable non-causal FIR filters. These were
learnt using the learning rule of eq.17 (again, with the logistic non-linearity). The
resulting learnt filters are shown in Fig.2B where the leading weights were chosen
to be at half the filter size (M/2). Non-causality of the filters can be clearly observed for W12 and W21, where there are non-zero coefficients before the maximum
amplitude weights. The bottom row of Fig.2B shows the successful separation by
plotting the complete unmixing/mixing transfer function: W * A.
7 Experiments with real recordings
To demonstrate separation in a real room, we set up two microphones and recorded firstly two people speaking and then one person speaking with music in the background. The microphones and the sources were both 60cm apart and 60cm from each other (arranged in a square), and the sampling rate was 16kHz. Fig.3A shows the two recordings of a person saying the digits "one" to "ten" while loud music plays in the background. The IIR system of eq.5, eq.9 and eq.11 was unable to separate these signals, presumably due to the non-minimum-phase nature of the room transfer functions. However, the algorithm of eq.17 converged after 30 passes through the 10 second recordings. The filter lengths were 256 (corresponding to 16ms). The separated signals are shown in Fig.3B. Listening to them conveys a sense of almost-clean separation, though interference is audible. The results on the two people speaking were similar.
An important application is in spontaneous speech recognition tasks where the best recognizer may fail completely in the presence of background music or competing speakers (as in the teleconferencing problem). To test this application, we fed into a speech recognizer ten sentences recorded with loud music in the background and ten sentences recorded with simultaneous speaker interference. After separation, the recognition rate increased considerably for both cases. These results are reported in detail in [6].
8 Conclusions
Starting with 'Natural gradient infomax' IIR learning rules for blind time delay
adjustment, separation and deconvolution, we showed how these worked well on
minimum-phase mixing, but not on non-minimum-phase mixing, as usually occurs
in rooms. This led us to an FIR frequency domain infomax approach suggested
by Lambert [5]. The latter approach shows much better separation of speech and
music mixed in a real-room. Based on these techniques, it should now be possible
to develop real-world applications.
[Figure 3: Real-room separation/deconvolution. (A) recorded mixtures, (B) separated speech (spoken digits 1-10) and music.]
Acknowledgments

T.W.L. is supported by the Daimler-Benz-Fellowship, and A.J.B. by a grant from the Office of Naval Research. We are grateful to Kari Torkkola for sharing his results with us, and to Jürgen Fritsch, Terry Sejnowski and Alex Waibel for discussions and comments.
References

[1] Amari, S.-I., Cichocki, A. & Yang, H.H. 1996. A new learning algorithm for blind signal separation, Advances in Neural Information Processing Systems 8, MIT Press.
[2] Bell, A.J. & Sejnowski, T.J. 1995. An information maximisation approach to blind separation and blind deconvolution, Neural Computation, 7, 1129-1159.
[3] Cardoso, J.-F. & Laheld, B. 1996. Equivariant adaptive source separation, IEEE Trans. on Signal Proc., Dec. 1996.
[4] Cichocki, A., Amari, S.-I. & Cao, J. 1996. Blind separation of delayed and convolved signals with self-adaptive learning rate, in Proc. Intern. Symp. on Nonlinear Theory and Applications (NOLTA '96), Kochi, Japan.
[5] Lambert, R. 1996. Multichannel blind deconvolution: FIR matrix algebra and separation of multipath mixtures, PhD Thesis, University of Southern California, Department of Electrical Engineering, May 1996.
[6] Lee, T.-W. & Orglmeister, R. 1997. Blind source separation of real-world signals, submitted to Proc. ICNN, Houston, USA.
[7] Platt, J.C. & Faggin, F. 1992. Networks for the separation of sources that are superimposed and delayed, in Moody, J.E. et al (eds), Advances in Neural Information Processing Systems 4, Morgan Kaufmann.
[8] Torkkola, K. 1996. Blind separation of delayed sources based on information maximisation, Proc. IEEE ICASSP, Atlanta, May 1996.
[9] Torkkola, K. 1996. Blind separation of convolved sources based on information maximisation, Proc. IEEE Workshop on Neural Networks and Signal Processing, Kyoto, Japan, Sept. 1996.
Neural Models for Part-Whole Hierarchies
Maximilian Riesenhuber
Peter Dayan
Department of Brain & Cognitive Sciences
Massachusetts Institute of Technology
Cambridge, MA 02139
{max,dayan}@ai.mit.edu
Abstract

We present a connectionist method for representing images that explicitly addresses their hierarchical nature. It blends data from neuroscience about whole-object viewpoint-sensitive cells in inferotemporal cortex [8] and attentional basis-field modulation in V4 [3] with ideas about hierarchical descriptions based on microfeatures [5, 11]. The resulting model makes critical use of bottom-up and top-down pathways for analysis and synthesis [6]. We illustrate the model with a simple example of representing information about faces.
1 Hierarchical Models
Images of objects constitute an important paradigm case of a representational hierarchy, in which 'wholes', such as faces, consist of 'parts', such as eyes, noses and mouths. The representation and manipulation of part-whole hierarchical information in fixed hardware is a heavy millstone around connectionist necks, and has consequently been the inspiration for many interesting proposals, such as Pollack's RAAM [11].
We turned to the primate visual system for clues. Anterior inferotemporal cortex
(IT) appears to construct representations of visually presented objects. Mouths and
faces are both objects, and so require fully elaborated representations, presumably
at the level of anterior IT, probably using different (or possibly partially overlapping) sets of cells. The natural way to represent the part-whole relationship between
mouths and faces is to have a neuronal hierarchy, with connections bottom-up from
the mouth units to the face units so that information about the mouth can be used
to help recognize or analyze the image of a face, and connections top-down from
the face units to the mouth units expressing the generative or synthetic knowledge
that if there is a face in a scene, then there is (usually) a mouth too. There is little
We thank Larry Abbott, Geoff Hinton, Bruno Olshausen, Tomaso Poggio, Alex Pouget,
Emilio Salinas and Pawan Sinha for discussions and comments.
empirical support for or against such a neuronal hierarchy, but it seems extremely
unlikely on the grounds that arranging for one with the correct set of levels for all
classes of objects seems to be impossible.
There is recent evidence that activities of cells in intermediate areas in the visual processing hierarchy (such as V4) are influenced by the locus of visual attention [3].
This suggests an alternative strategy for representing part-whole information, in
which there is an interaction, subject to attentional control, between top-down
generative and bottom-up recognition processing. In one version of our example,
activating units in IT that represent a particular face leads, through the top-down
generative model, to a pattern of activity in lower areas that is closely related to
the pattern of activity that would be seen when the entire face is viewed. This
activation in the lower areas in turn provides bottom-up input to the recognition
system. In the bottom-up direction, the attentional signal controls which aspects of
that activation are actually processed, for example, specifying that only the activity
reflecting the lower part of the face should be recognized. In this case, the mouth
units in IT can then recognize this restricted pattern of activity as being a particular
sort of mouth. Therefore, we have provided a way by which the visual system can
represent the part-whole relationship between faces and mouths.
This describes just one of many possibilities. For instance, attentional control could
be mainly active during the top-down phase instead. Then it would create in VI (or
indeed in intermediate areas) just the activity corresponding to the lower portion
of the face in the first place. Also the focus of attention need not be so ineluctably
spatial.
The overall scheme is based on an hierarchical top-down synthesis and bottom-up analysis model for visual processing, as in the Helmholtz machine [6] (note that "hierarchy" here refers to a processing hierarchy rather than the part-whole hierarchy discussed above) with a synthetic model forming the effective map:

'object' ⊗ 'attentional eye-position' → 'image'    (1)
(shown in cartoon form in figure 1) where 'image' stands in for the (probabilities
over the) activities of units at various levels in the system that would be caused by
seeing the aspect of the 'object' selected by placing the focus and scale of attention
appropriately. We use this generative model during synthesis in the way described
above to traverse the hierarchical description of any particular image. We use the
statistical inverse of the synthetic model as the way of analyzing images to determine
what objects they depict. This inversion process is clearly also sensitive to the
attentional eye-position - it actually determines not only the nature of the object
in the scene, but also the way that it is depicted (ie its instantiation parameters)
as reflected in the attentional eye position.
In particular, the bottom-up analysis model exists in the connections leading to
the 2D viewpoint-selective image cells in IT reported by Logothetis et al [8] which
form population codes for all the represented images (mouths, noses, etc). The
top-down synthesis model exists in the connections leading in the reverse direction.
In generalizations of our scheme, it may, of course, not be necessary to generate an
image all the way down in VI.
The map (1) specifies a top-down computational task very like the bottom-up one
addressed using a multiplicatively controlled synaptic matrix in the shifter model
of Olshausen et al [9].

[Figure 1: Cartoon of the model, showing the three layers (input activities p in layer 1, middle-layer activities m in layer 2, top-layer activities o in layer 3) and the attentional eye position e = (e^x, e^y, e^s). In the top-down, generative, direction, the model generates images of faces, eyes, mouths or noses based on an attentional eye position and a selection of a single top-layer unit; the bottom-up, recognition, direction is the inverse of this map. The response of the neurons in the middle layer is modulated sigmoidally (as illustrated by the graphs shown inside the circles representing the neurons in the middle layer) by the attentional eye position. See section 2 for more details.]

Our solution emerges from the control that the attentional eye position exerts at various levels of processing, most relevantly modulating activity
in V4 [3]. Equivalent modulation in the parietal cortex based on actual (rather than attentional) eye position [1] has been characterized by Pouget & Sejnowski [13] and Salinas & Abbott [15] in terms of basis fields. They showed that these basis fields can be used to solve the same tasks as the shifter model but with neuronal rather than synaptic multiplicative modulation. In fact, eye-position modulation almost certainly occurs at many levels in the system, possibly including V1 [17]. Our scheme clearly requires that the modulating attentional eye-position must be able to become detached from the spatial eye-position - Connor et al [3] collected evidence for part of this hypothesis; although the coordinate system(s) of the modulation is not entirely clear from their data.
Bottom-up and top-down mappings are learned taking the eye-position modulation into account. In the experiments below, we used a version of the wake-sleep
algorithm [6], for its conceptual and computational simplicity. This requires learning the bottom-up model from generated imagery (during sleep) and learning the top-down model from assigned explanations (during observation of real input during
wake). In the current version, for simplicity, the eye position is set correctly during
recognition, but we are also interested in exploring automatic ways of doing this.
2 Results
We have developed a simple model that illustrates the feasibility of the scheme
presented above in the context of recognizing and generating cartoon drawings of
a face and its parts. Recognition involves taking an image of a face or a part
thereof (the mouth, nose or one of the eyes) at an arbitrary position on the retina, and setting the appropriate top level unit to 1 (and the remaining units to zero). Generation involves imaging either a whole face or one of its parts (selected by the active unit in the top layer) at an arbitrary position on the retina.

[Figure 2: a) Recognition: the left column of each pair shows the stimuli; the right shows the resulting activations in the top layer (ordered as face, mouth, nose and eye). The stimuli are faces at random positions in the retina. Recognition is performed by setting the attentional eye position in the image and setting the attentional scale, which creates a window of attention around the attended-to position, shown by a circle of corresponding size and position. b) Generation: each panel shows the output of the generative pathway for a randomly chosen attentional eye position on activating each of the top layer units in turn. The focus of attention is marked by a circle whose size reflects the attentional scale. The name of the object whose neuronal representation in the top layer was activated is shown above each panel.]
The model (figure 1) consists of three layers. The lowest layer is a 32 x 32 'retina'.
In the recognition direction, the retina feeds into a layer of 500 hidden units. These
project to the top layer, which has four neurons. In the generative direction, the
connectivity is reversed. The network is fully connected in both directions. The
activity of each neuron based on input from the preceding (for recognition) or the
following layer (for generation) is a linear function (weight matrices W_r, V_r in the recognition and V_g, W_g in the generative direction). The attentional eye position influences activity through multiplicative modulation of the neuronal responses in the hidden layer. The linear response r_i = (W_r p)_i or r_i = (V_g o)_i of each neuron i in the middle layer based on the bottom-up or top-down connections is multiplied by ξ_i = φ_i^x(e^x) φ_i^y(e^y) φ_i^s(e^s), where the φ_i^{x,y,s} are the tuning curves in each dimension of the attentional eye position e = (e^x, e^y, e^s), coding the x- and y-coordinates and the scale of the focus of attention, respectively. Thus, for the activity m_i of hidden neuron i we have m_i = (W_r p)_i · ξ_i in the recognition pathway and m_i = (V_g o)_i · ξ_i in the generative pathway. The tuning curves are chosen to be sigmoid with random centers c_i and random directions d_i ∈ {-1, 1}, eg φ_i^s = σ(4 · d_i^s · (e^s - c_i^s)).
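In code, the modulated hidden activity of the recognition pathway is simply an elementwise product. A sketch with our own names; the gain of 4 follows the tuning-curve example above:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def hidden_activity(p, W_r, e, centers, directions):
        # p: retinal activity; e = (e_x, e_y, e_s);
        # centers, directions: shape (n_hidden, 3), directions in {-1, +1}.
        r = W_r @ p                                            # linear response
        xi = np.prod(sigmoid(4.0 * directions * (e - centers)), axis=1)
        return r * xi                                          # m_i = r_i * xi_i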
In other implementations, we have also used Gaussian tuning functions. In fact,
the only requirement regarding the shape of the tuning functions is that through a
superposition of them one can construct functions that show a peaked dependence
on the attentional eye position. In the recognition direction, the attentional eye
position also has an influence on the activity in the input layer by defining a 'window
of attention' [7], which we implemented using a Gaussian window centered at the
attentional eye position with its size given by the attentional scale. This is to allow
the system to learn models of parts based on experience with images of whole faces.
To train the model, we employ a variant of the unsupervised wake-sleep algorithm [6].
In this algorithm, the generative pathway is trained during a wake-phase, in which
stimuli in the input layer (the retina, in our case) cause activation of the neurons
in the network through the recognition pathway, providing an error signal to train
the generative pathway using the delta rule. Conversely, in the sleep-phase, random
activation of a top layer unit (in conjunction with a randomly chosen attentional
eye-position) leads, via the generative connections, to the generation of activation
in the middle layer and consequently an image in the input layer that is then used to
adapt the recognition weights, again using the delta rule. Although the delta rule in
wake-sleep is fine for the recognition direction, it leads to a poor generative model
- in our simple case, generation is much more difficult than recognition. As an
interim solution, we therefore train the generative weights using back-propagation,
which uses the activity in the top layer created by the recognition pathway as the
input and the retinal activation pattern as the target signal. Hence, learning is
still unsupervised (except that appropriate attentional eye-positions are always set
during recognition). We have also experimented with a system in which the weights
W_r and W_g are preset and only the weights between layers 2 and 3 are trained.
For this model, training could be done with the standard wake-sleep algorithm, ie
using the local delta-rule for both sets of weights.
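For that preset-weights variant, a sleep-phase step reduces to a local delta rule. A sketch under our own naming, with xi the modulation vector for the sampled eye position and o a one-hot top-layer vector:

    import numpy as np

    def sleep_step(Wg, Vg, Wr, Vr, xi, o, lr=0.01):
        # One sleep-phase fantasy; with Wr, Wg preset, only Vr learns here.
        m_gen = (Vg @ o) * xi                  # middle layer driven top-down
        p = Wg @ m_gen                         # fantasy image on the retina
        m_rec = (Wr @ p) * xi                  # recognition pass over the fantasy
        o_rec = Vr @ m_rec
        Vr += lr * np.outer(o - o_rec, m_rec)  # local delta rule toward the
        return Vr                              # unit that generated the fantasy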
Figure 2a shows several examples of the performance of the recognition pathway for
the different stimuli after 300,000 iterations. The network is able to recognize the
stimuli accurately at different positions in the visual field. Figure 2b shows several
examples of the output of the generative model, illustrating its capacity to produce
images of faces or their parts at arbitrary locations. By imaging a whole face and
then focusing the attention on eg an area around its center, which activates the
'nose' unit through the recognition pathway, the relationship that, eg a nose is part
of a face can be established in a straightforward way.
3 Discussion
Representing hierarchical structure is a key problem for connectionism. Visual
images offer a canonical example for which it seems possible to elucidate some of
the underlying neural mechanisms. The theory is based on 2D view object selective
cells in anterior IT, and attentional eye-position modulation of the firing of cells in
V4. These work in the context of analysis by synthesis or recognition and generative
models such that the part-whole hierarchy of an object such as a face (which contains
eyes, which contain pupils, etc) can be traversed in the generative direction by
choosing to view the object through a different effective eye-position, and in the
recognition direction by allowing the real and the attentional eye-positions to be
decoupled to activate the requisite 2D view selective cells.
The scheme is related to Pollack's Recursive Auto-Associative Memory (RAAM) system [11]. RAAM provides a way of representing tree-structured information - for instance, to learn an object whose structure is {{A,B},{C,D}}, a standard three-layer auto-associative net would be taught AB, leading to a pattern of hidden unit activations α; then it would learn CD leading to β; and finally αβ leading to γ, which would itself be the representation of the whole object. The compression operation (AB → α) and its expansion inverse are required as explicit methods for manipulating tree structure.
Our scheme for representing hierarchical information is similar to RAAM, using
the notion of an attentional eye-position to perform its compression and expansion
operations. However, whereas RAAM normally constructs its own codes for intermediate levels of the trees that it is fed, here, images of faces are as real and as
available as those, for instance, of their associated mouths. This not only changes
the learning task, but also renders sensible a notion of direct recognition without
repeated RAAMification of the parts.
Various aspects of our scheme require comment: the way that eye position affects
recognition; the coding of different instances of objects; the use of top-down information during bottom-up recognition; variants of the scheme for objects that are
too big or too geometrically challenging to 'fit' in one go into a single image; and
hierarchical objects other than images. We are also working on a more probabilistically correct version, taking advantage of the statistical soundness of the Helmholtz
machine.
Eye position information is ubiquitous in visual processing areas [12], including the LGN and V1 [17] and the parietal cortex [1] and V4 [3]. Further, it can be revealed as having a dramatic effect on perception, as in Ramachandran et al's [14] study on intermittent exotropes. This is a form of squint in which the two eyes are normally aligned, but in which the exotropic eye can deviate (voluntarily or involuntarily) by as much as 60°. The study showed that even if an image is 'burnt' on the retina in
this eye as an afterimage, and so is fixed in retinal coordinates, at least one component of the percept moves as the eye moves. This argues that information about eye
position dramatically affects visual processing in a manner that is consistent with the model presented here of shifts based on modulation. This is also required by Bridgeman et al's [2] theory of perceptual stability across fixations, which essentially builds up an impression of a scene in exactly the form of mapping (1).
In general, there will be many instances for an object, e.g., many different faces. In
this general case, the top level would implement a distributed code for the identity
and instantiation parameters of the objects. We are currently investigating methods
of implementing this form of representation into the model.
A key feature of the model is the interaction of the synthesis and analysis pathways when traversing the part-whole hierarchies. This interaction between the two
pathways can also aid the system when performing image analysis by integrating
information across the hierarchy. Just as in RAAM, the extra feature required
when traversing a hierarchy is short term memory. For RAAM, the memory stores
information about the various separate sub-trees that have already been decoded
(or encoded). For our system, the memory is required during generative traversal
to force 'whole' activity on lower layers to persist even after the activity on upper
layers has ceased, to free these upper units to recognize a 'part'. Memory during
recognition traversal is necessary in marginal cases to accumulate information across
separate 'parts' as well as the 'whole'. This solution to hierarchical representation
inevitably gives up the computational simplicity of the naive neuronal hierarchical
scheme described in the introduction which does not require any such accumulation.
Knowledge of images that are too large to fit naturally in a single view [4] at a canonical location and scale, or that theoretically cannot fit in a view (like 360° information about a room) can be handled in a straightforward extension of the scheme. All this
requires is generalizing further the notion of eye-position. One can explore one's
generative model of a room in the same way that one can explore one's generative
model of a face.
We have described our scheme from the perspective of images. This is convenient
because of the substantial information available about visual processing. However,
images are not the only examples of hierarchical structure - this is also very relevant
to words, music and also inferential mechanisms. We believe that our mechanisms
are also more general - proving this will require the equivalent of the attentional
eye-position that lies at the heart of the method.
References
[1] Andersen, R, Essick, GK & Siegel, RM (1985). Encoding of spatial location by posterior parietal neurons. Science, 230, 456-458.
[2] Bridgeman, B, van der Hejiden, AHC & Velichkovsky, BM (1994). A theory of visual
stability across saccadic eye movements. Behavioral and Brain Sciences, 17, 247-292.
[3] Connor, CE, Gallant, JL, Preddie, DC & Van Essen, DC (1996). Responses in area
V4 depend on the spatial relationship between stimulus and attention. Journal of
Neurophysiology, 75, 1306-1308.
[4] Feldman, JA (1985). Four frames suffice: A provisional model of vision and space.
The Behavioral and Brain Sciences, 8, 265-289.
[5] Hinton, GE (1981). Implementing semantic networks in parallel hardware. In GE
Hinton & JA Anderson, editors, Parallel Models of Associative Memory. Hillsdale,
NJ: Erlbaum, 161-188.
[6] Hinton, GE, Dayan, P, Frey, BJ & Neal, RM (1995). The wake-sleep algorithm for
unsupervised neural networks. Science, 268, 1158-1160.
[7] Koch, C & Ullman, S (1985). Shifts in selective visual attention: towards the underlying neural circuitry. Human Neurobiology, 4, 219-227.
[8] Logothetis, NK, Pauls, J, & Poggio, T (1995). Shape representation in the inferior
temporal cortex of monkeys. Current Biology, 5, 552-563.
[9] Olshausen, BA, Anderson, CH & Van Essen, DC (1993). A neurobiological model
of visual attention and invariant pattern recognition based on dynamic routing of
information. Journal of Neuroscience, 13, 4700-4719.
[10] Pearl, J (1988). Probabilistic Reasoning in Intelligent Systems: Networks of Plausible
Inference. San Mateo, CA: Morgan Kaufmann.
[11] Pollack, JB (1990). Recursive distributed representations. Artificial Intelligence, 46,
77-105.
[12] Pouget, A, Fisher, SA & Sejnowski, TJ (1993). Egocentric spatial representation in
early vision. Journal of Cognitive Neuroscience, 5, 150-161.
[13] Pouget, A & Sejnowski, TJ (1995) . Spatial representations in the parietal cortex may
use basis functions. In G Tesauro, DS Touretzky & TK Leen, editors, Advances in
Neural Information Processing Systems 7, 157-164.
[14] Ramachandran, VS, Cobb, S & Levi, L (1994). The neural locus of binocular rivalry
and monocular diplopia in intermittent exotropes. Neuroreport, 5, 1141-1144.
[15] Salinas, E & Abbott LF (1996). Transfer of coded information from sensory to motor
networks. Journal of Neuroscience, 15, 6461-6474.
[16] Sung, K & Poggio, T (1995). Example based learning for view-based human face detection. AI Memo 1521, CBCL paper 112, Cambridge, MA: MIT.
[17] Trotter, Y, Celebrini, S, Stricanne, B, Thorpe, S & Imbert, M (1992). Modulation of
neural stereoscopic processing in primate area VI by the viewing distance. Science,
257, 1279-1281.
PART II
NEUROSCIENCE
Support Vector Regression Machines
Harris Drucker*, Chris J.C. Burges**, Linda Kaufman**,
Alex Smola**, Vladimir Vapnik+
*Bell Labs and Monmouth University
Department of Electronic Engineering
West Long Branch, NJ 07764
**Bell Labs   +AT&T Labs
Abstract
A new regression technique based on Vapnik's concept of support
vectors is introduced. We compare support vector regression (SVR)
with a committee regression technique (bagging) based on regression
trees and ridge regression done in feature space. On the basis of these
experiments, it is expected that SVR will have advantages in high
dimensionality space because SVR optimization does not depend on the
dimensionality of the input space.
1. Introduction
In the following, lower case bold characters represent vectors and upper case bold characters represent matrices. Superscript "t" represents the transpose of a vector. y represents either a vector (in bold) or a single observance of the dependent variable in the presence of noise. y^(p) indicates a predicted value due to the input vector x^(p) not seen in the training set.
Suppose we have an unknown function G(x) (the "truth") which is a function of a vector x (termed input space). The vector x^t = [x_1, x_2, ..., x_d] has d components where d is termed the dimensionality of the input space. F(x, w) is a family of functions parameterized by w. ŵ is that value of w that minimizes a measure of error between G(x) and F(x, ŵ). Our objective is to estimate ŵ by observing the N training instances v_j, j = 1, ..., N. We will develop two approximations for the truth G(x). The first one is F_1(x, ŵ), which we term a feature space representation. One (of many) such feature vectors is:

z^t = [x_1^2, ..., x_d^2, x_1 x_2, ..., x_i x_j, ..., x_{d-1} x_d, x_1, ..., x_d, 1]

which is a quadratic function of the input space components. Using the feature space representation, F_1(x, ŵ) = z^t ŵ, that is, F_1(x, ŵ) is linear in feature space although
it is quadratic in input space. In general, for a p'th order polynomial and d'th dimensional input space, the feature dimensionality f of ŵ is

f = \sum_{i=d-1}^{p+d-1} C_{d-1}^{i},   where   C_k^n = \frac{n!}{k!(n-k)!}
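This count can be checked directly; a small sketch of our own (math.comb is the binomial coefficient):

    from math import comb

    def feature_dim(p, d):
        # f: number of monomial features for degree p in d inputs.
        return sum(comb(i, d - 1) for i in range(d - 1, p + d))

    print(feature_dim(2, 10))   # 66, as quoted below for Friedman #1
    print(feature_dim(2, 4))    # 15, as quoted below for Friedman #2 and #3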
The second representation is a support vector regression (SVR) representation that was developed by Vladimir Vapnik (1995):

F_2(x, w) = \sum_{i=1}^{N} (α_i^* - α_i)(v_i^t x + 1)^p + b
F_2 is an expansion explicitly using the training examples. The rationale for calling it a support vector representation will be clear later, as will the necessity for having both an α and an α^* rather than just one multiplicative constant. In this case we must choose the 2N+1 values of α_i, α_i^* and b. If we expand the term raised to the p'th power, we find f coefficients that multiply the various powers and cross product terms of the components of x. So, in this sense F_1 looks very similar to F_2 in that they have the same number of terms. However F_1 has f free coefficients while F_2 has 2N+1 coefficients that must be determined from the N training vectors.
We let α represent the 2N values of α_i and α_i^*. The optimum values for the components of ŵ or α depend on our definition of the loss function and the objective function. Here the primal objective function is:

U \sum_{j=1}^{N} L[y_j - F(v_j, w)] + \|w\|^2

where L is a general loss function (to be defined later) and F could be F_1 or F_2, y_j is the observation of G(x) in the presence of noise, and the last term is a regularizer. The regularization constant is U which in typical developments multiplies the regularizer but is placed in front of the first term for reasons discussed later.
If the loss function is quadratic, i.e., L[·] = [·]^2, and we let F = F_1, i.e., the feature space representation, the objective function may be minimized by using linear algebra techniques since the feature space representation is linear in that space. This is termed ridge regression (Miller, 1990). In particular let V be a matrix whose i'th row is the i'th training vector represented in feature space (including the constant term "1" which represents a bias). V is a matrix where the number of rows is the number of examples (N) and the number of columns is the dimensionality of feature space f. Let E be the f×f diagonal matrix whose elements are 1/U. y is the N×1 column vector of observations of the dependent variable. We then solve the following matrix formulation for ŵ using a linear technique (Strang, 1986) with a linear algebra package (e.g., MATLAB):

V^t y = [V^t V + E] ŵ
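A minimal sketch of this solve in Python (names are ours):

    import numpy as np

    def ridge_fit(V, y, U):
        # Solve V^t y = [V^t V + E] w_hat, with E = (1/U) I.
        f = V.shape[1]
        return np.linalg.solve(V.T @ V + np.eye(f) / U, V.T @ y)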
The rationale for the regularization term is to trade off mean square error (the first term) in the objective function against the size of the ŵ vector. If U is large, then essentially we are minimizing the mean square error on the training set, which may give poor generalization to a test set. We find a good value of U by varying U to find the best performance on a validation set and then applying that U to the test set.
Let us now define a different type of loss function termed an ε-insensitive loss (Vapnik, 1995):

L = 0                           if |y_j - F_2(x_j, w)| < ε
L = |y_j - F_2(x_j, w)| - ε     otherwise

This defines an ε tube (Figure 1) so that if the predicted value is within the tube the loss is zero, while if the predicted point is outside the tube, the loss is the magnitude of the difference between the predicted value and the radius ε of the tube.
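In code, the loss is a simple hinge on the residual (a sketch):

    import numpy as np

    def eps_insensitive(y, y_pred, eps):
        # Zero inside the eps tube; linear in the excess outside it.
        return np.maximum(np.abs(y - y_pred) - eps, 0.0)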
Specifically, we minimize:

U \left( \sum_{i=1}^{N} ξ_i^* + \sum_{i=1}^{N} ξ_i \right) + \frac{1}{2}(w^t w)
where ξ_i or ξ_i^* is zero if the sample point is inside the tube. If the observed point is "above" the tube, ξ_i is the positive difference between the observed value and ε and α_i will be nonzero. Similarly, ξ_i^* will be nonzero if the observed point is below the tube and in this case α_i^* will be nonzero. Since an observed point cannot be simultaneously on both sides of the tube, either α_i or α_i^* will be nonzero, unless the point is within the tube, in which case both constants will be zero. If U is large, more emphasis is placed on the error while if U is small, more emphasis is placed on the norm of the weights, leading to (hopefully) a better generalization. The constraints are (for all i, i = 1, ..., N):

y_i - (w^t v_i) - b ≤ ε + ξ_i
(w^t v_i) + b - y_i ≤ ε + ξ_i^*
ξ_i^* ≥ 0
ξ_i ≥ 0
The corresponding Lagrangian is:

L = \frac{1}{2}(w^t w) + U \left( \sum_{i=1}^{N} ξ_i^* + \sum_{i=1}^{N} ξ_i \right) - \sum_{i=1}^{N} α_i^* [y_i - (w^t v_i) - b + ε + ξ_i^*]
    - \sum_{i=1}^{N} α_i [(w^t v_i) + b - y_i + ε + ξ_i] - \sum_{i=1}^{N} (γ_i^* ξ_i^* + γ_i ξ_i)

where the γ_i and α_i are Lagrange multipliers.
We find a saddle point of L (Vapnik, 1995) by differentiating with respect to w_i, b, and ξ, which results in the equivalent maximization of the (dual space) objective function:

W(α, α^*) = -ε \sum_{i=1}^{N} (α_i^* + α_i) + \sum_{i=1}^{N} y_i (α_i^* - α_i) - \frac{1}{2} \sum_{i,j=1}^{N} (α_i^* - α_i)(α_j^* - α_j)(v_i^t v_j + 1)^p

with the constraints:

0 ≤ α_i ≤ U,  0 ≤ α_i^* ≤ U,   i = 1, ..., N
\sum_{i=1}^{N} α_i^* = \sum_{i=1}^{N} α_i
We must find N Lagrange multiplier pairs (α_i, α_i^*). We can also prove that the product of α_i and α_i^* is zero, which means that at least one of these two terms is zero. A v_i corresponding to a non-zero α_i or α_i^* is termed a support vector. There can be at most N support vectors. Suppose now we have a new vector x^(p); then the corresponding prediction of y^(p) is:

y^(p) = \sum_{i=1}^{N} (α_i^* - α_i)(v_i^t x^(p) + 1)^p + b
Maximizing W is a quadratic programming problem, but the above expression for W is not in standard form for use in quadratic programming packages (which usually do minimization). If we let

β_i = α_i^*,   β_{i+N} = α_i,   i = 1, ..., N

then we minimize:

c^t β + \frac{1}{2} β^t Q β

subject to the constraints

\sum_{i=1}^{N} β_i = \sum_{i=N+1}^{2N} β_i   and   0 ≤ β_i ≤ U,  i = 1, ..., 2N

where

c^t = [ε - y_1, ε - y_2, ..., ε - y_N, ε + y_1, ε + y_2, ..., ε + y_N]

Q = \begin{bmatrix} D & -D \\ -D & D \end{bmatrix},   d_{ij} = (v_i^t v_j + 1)^p,   i, j = 1, ..., N

We use an active set method (Bunch and Kaufman, 1980) to solve this quadratic programming problem.
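Assembling this QP is straightforward; a Python sketch of our own that builds c and Q (the box- and equality-constrained solve itself is left to any standard QP routine):

    import numpy as np

    def svr_qp(V, y, eps, p):
        # Build c and Q for: minimize c^t beta + 0.5 beta^t Q beta,
        # subject to 0 <= beta_i <= U and sum(beta[:N]) == sum(beta[N:]).
        N = V.shape[0]
        D = (V @ V.T + 1.0) ** p                 # d_ij = (v_i^t v_j + 1)^p
        Q = np.block([[D, -D], [-D, D]])
        c = np.concatenate([eps - y, eps + y])
        return c, Q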
2. Nonlinear Experiments
We tried three artificial functions from (Friedman, 1991) and a problem (Boston
Housing) from the UCI database. Because the first three problems are artificial, we
know both the observed values and the truths. Boston Housing has 506 cases with the
dependent variable being the median price of housing in the Boston area. There are
twelve continuous predictor variables. This data was obtained from the UCI database (anonymous ftp at ftp.ics.uci.edu in directory /pub/machine-learning-databases). In this
case, we have no "truth", only the observations.
In addition to the input space representation and the SVR representation, we also tried
bagging. Bagging is a technique that combines regressors, in this case regression trees
(Breiman, 1994). We used this technique because we had a local version available. In the
case of regression trees, the validation set was used to prune the trees.
Suppose we have test points with input vectors x_i^(p), i = 1, ..., M, and make a prediction y_i^(p) using any procedure discussed here. Suppose y_i is the actually observed value, which is the truth G(x_i) plus noise. We define the prediction error (PE) and the modeling error (ME):

ME = \frac{1}{M} \sum_{i=1}^{M} (y_i^{(p)} - G(x_i))^2        PE = \frac{1}{M} \sum_{i=1}^{M} (y_i^{(p)} - y_i)^2
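Both quantities are plain mean squared errors against different targets; in code (a sketch):

    import numpy as np

    def modeling_error(y_pred, g_true):     # ME: against the noiseless truth
        return np.mean((y_pred - g_true) ** 2)

    def prediction_error(y_pred, y_obs):    # PE: against the noisy observations
        return np.mean((y_pred - y_obs) ** 2)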
For the three Friedman functions we calculated both the prediction error and modeling
error. For Boston Housing, since the "truth" was not known, we calculated the prediction
error only. For the three Friedman functions, we generated (for each experiment) 200
training set examples and 40 validation set examples. The validation set examples were
used to find the optimum regularization constant in the feature space representation. The
following procedure was followed. Train on the 200 members of the training set with a
choice of regularization constant and obtain the prediction error on the validation set.
Now repeat with a different regularization constant until a minimum of prediction error
occurs on the validation set. Now, use that regularizer constant that minimizes the
validation set prediction error and test on a 1000 example test set. This experiment was
repeated for 100 different training sets of size 200 and validation sets of size 40 but one
test set of size 1000. Different size polynomials were tried (maximum power 3). Second
order polynomials fared best. For Friedman function #1, the dimensionality of feature space is 66 while for the last two problems the dimensionality of feature space was 15 (both for second order polynomials). Thus the size of the feature space is smaller than the number of examples and we would expect that a feature space representation should do well.
A similar procedure was followed for the SVR representation except that the regularizer constant U, ε and power p were varied to find the minimum validation prediction error.
In the majority of cases p=2 was the optimum choice of power.
For the Boston Housing data, we picked randomly from the 506 cases using a training set
of size 401, a validation set of size 80 and a test set of size 25. This was repeated 100
times. The optimum power as picked by the validation set varied between p=4 and p=5.
3. Results of experiments
The first experiments we tried were bagging regression trees versus support vector regression (Table I).

Table I. Modeling error and prediction error on the three Friedman problems (100 trials).

      bagging ME   SVR ME   bagging PE   SVR PE   # trials SVR better
#1    2.26         .67      3.36         1.75     100
#2    10,185       4,944    66,077       60,424   92
#3    .0302        .0261    .0677        .0692    46
Rather than report the standard error, we did a comparison for each training set. That is,
for the first experiment we tried both SVR and bagging on the same training, validation,
and test set. If SVR had a better modeling error on the test set, it counted as a win. Thus
for Friedman #1, SVR was always better than bagging on the 100 trials. There is no clear
winner for Friedman function #3.
Subsequent to our comparison of bagging to SVR, we attempted working directly in feature space. That is, we used F_1 as our approximating function with square loss and a second degree polynomial. The results of this ridge regression (Table II) are better than SVR. In retrospect, this is not surprising since the dimensionality of feature space is small (f=66 for Friedman #1 and f=15 for the two remaining functions) in relation to the number of training examples (200). This was due to the fact that the best approximating polynomial is second order. The other advantages of the feature space representation in
this particular case are that both PE and ME are mean squared error and the loss function
is mean squared error also.
Table II. Modeling error for SVR and feature space polynomial approximation.

                 #1    #2      #3
SVR              .67   4,944   .0261
feature space    .61   3,051   .0176
We now ask the question whether U and ε are important in SVR by comparing the results in Table I with the results obtained by setting ε to zero and U to 100,000, making the regularizer insignificant (Table III). On Friedman #2 (and less so on Friedman #3), the proper choice of ε and U is important.
Table III. Comparing the results above with those obtained by setting ε to zero and U to 100,000 (labeled suboptimum).

                  #1    #2       #3
optimum ME        .67   4,944    .0261
suboptimum ME     .70   34,506   .0395
For the case of Boston Housing, the prediction error using bagging was 12.4 while for
SVR we obtained 7.2 and SVR was better than bagging on 71 out of 100 trials. The
optimum power seems to be about five. We never were able to get the feature
representation to work well because the number of coefficients to be determined (6885)
was much larger than the number of training examples (401).
4. Conclusions
Support vector regression was compared to bagging and a feature space representation on
four nonlinear problems. On three of these problems a feature space representation was
best, bagging was worst, and SVR came in second. On the fourth problem, Boston
Housing, SVR was best and we were unable to construct a feature space representation
because of the high dimensionality required of the feature space. For the two linear problems we tried at varying signal-to-noise ratios, forward subset selection seems to be the method of choice.
In retrospect, the problems we decided to test on were too simple. SVR probably has its
greatest use when the dimensionality of the input space and the order of the
approximation create a feature space whose dimensionality is much larger than
the number of examples. This was not the case for the problems we considered.
We thus need real-life examples that fulfill these requirements.
5. Acknowledgements
This project was supported by ARPA contract number N00014-94-C-1086.
6. References
Leo Breiman, "Bagging Predictors", Technical Report 421, September 1994, Department
of Statistics, University of California Berkeley, CA Also at anonymous ftp site:
ftp.stat.berkeley.edulpub/tech-reports/421.ps.Z.
Jame R. Bunch and Linda C. Kaufman, " A Computational Method of the Indefinite
Quadratic Programming Problem", Linear Algebra and Its Applications, Elsevier-North
Holland, 1980.
Jerry Friedman, "Multivariate Adaptive Regression Splines", Annal Of Statistics, vol 19,
No.1, pp. 1-141
Alan J. Miller, Subset Selection in Regression, Chapman and Hall, 1990.
Gilbert Strang, Introduction to Applied Mathematics, Wellesley Cambridge Press, 1986.
VladimirN. Vapnik, The Nature of Statistical Learning Theory, Springer, 1995.
Figure 1: The parameters for the support vector regression.
Multilayer neural networks:
one or two hidden layers?
G. Brightwell
Dept of Mathematics
LSE, Houghton Street
London WC2A 2AE, U.K.
C. Kenyon, H. Paugam-Moisy
LIP, URA 1398 CNRS
ENS Lyon, 46 allée d'Italie
F69364 Lyon cedex, FRANCE
Abstract
We study the number of hidden layers required by a multilayer neural network with threshold units to compute a function f from R^d
to {0,1}. In dimension d = 2, Gibson characterized the functions
computable with just one hidden layer, under the assumption that
there is no "multiple intersection point" and that f is only defined
on a compact set. We consider the restriction of f to the neighborhood of a multiple intersection point or of infinity, and give necessary and sufficient conditions for it to be locally computable with
one hidden layer. We show that adding these conditions to Gibson's assumptions is not sufficient to ensure global computability
with one hidden layer, by exhibiting a new non-local configuration,
the "critical cycle", which implies that f is not computable with
one hidden layer.
1 INTRODUCTION
The number of hidden layers is a crucial parameter for the architecture of multilayer
neural networks. Early research, in the 60's, addressed the problem of exactly realizing Boolean functions with binary networks or binary multilayer networks. On the
one hand, more recent work focused on approximately realizing real functions with
multilayer neural networks with one hidden layer [6, 7, 11] or with two hidden units
[2]. On the other hand , some authors [1, 12] were interested in finding bounds on
the architecture of multilayer networks for exact realization of a finite set of points.
Another approach is to search the minimal architecture of multilayer networks for
exactly realizing real functions, from R^d to {0,1}. Our work, of the latter kind, is a
continuation of the effort of [4, 5, 8, 9] towards characterizing the real dichotomies
which can be exactly realized with a single hidden layer neural network composed
of threshold units.
1.1 NOTATIONS AND BACKGROUND
A finite set of hyperplanes {H_i}_{1≤i≤h} defines a partition of the d-dimensional space
into convex polyhedral open regions, the union of the H_i's being neglected as a
subset of measure zero. A polyhedral dichotomy is a function f : R^d → {0,1},
obtained by associating a class, equal to 0 or to 1, to each of those regions. Thus both
f^{-1}(0) and f^{-1}(1) are unions of a finite number of convex polyhedral open regions.
The h hyperplanes which define the regions are called the essential hyperplanes of
f. A point P is an essential point if it is the intersection of some set of essential
hyperplanes.
In this paper, all multilayer networks are supposed to be feedforward neural networks of threshold units, fully interconnected from one layer to the next, without
skipping interconnections. A network is said to realize a function f : R^d → {0,1} if,
for an input vector x, the network output is equal to f(x), almost everywhere in R^d.
The functions realized by our multilayer networks are the polyhedral dichotomies.
By definition of threshold units, each unit of the first hidden layer computes a binary
function y_j of the real inputs (x_1, ..., x_d). Therefore, subsequent layers compute
a Boolean function. Since any Boolean function can be written in DNF form, two
hidden layers are sufficient for a multilayer network to realize any polyhedral dichotomy. Two hidden layers are sometimes also necessary, e.g. for realizing the
"four-quadrant" dichotomy which generalizes the XOR function [4].
For all j, the j-th unit of the first hidden layer can be seen as separating the space
by the hyperplane H_j : Σ_{i=1}^{d} w_{ij} x_i = θ_j. Hence the first hidden layer necessarily
contains at least one hidden unit for each essential hyperplane of f. Thus each
region R can be labelled by a binary number y = (y_1, ..., y_h) (see [5]). The j-th
digit y_j will be denoted by H_j(R).
Usually there are fewer than 2^h regions and not all possible labels actually exist.
The Boolean family B_f of a polyhedral dichotomy f is defined to be the set of all
Boolean functions on h variables which are equal to f on all the existing labels.
1.2 PREVIOUS RESULTS
It is straightforward that all polyhedral dichotomies which have at least one linearly
separable function in their Boolean family can be realized by a one-hidden-layer
network. However the converse is far from true. A counter-example was produced
in [5]: adding extra hyperplanes (i.e. extra units on the first hidden layer) can
eliminate the need for a second hidden layer. Hence the problem of finding a minimal
architecture for realizing dichotomies cannot be reduced to the neural computation
of Boolean functions . Finding a generic description of all the polyhedral dichotomies
which can be realized exactly by a one-hidden-layer network is still an open problem.
This paper is a new step towards its resolution.
One approach consists of finding geometric configurations which imply that a function is not realizable with a single hidden layer. There are three known such geometric configurations: the XOR-situation, the XOR-bow-tie and the XOR-at-infinity
(see Figure 1) .
A polyhedral dichotomy is said to be in an XOR-situation iff one of its essential
hyperplanes H is inconsistent, i.e. if there are four regions B, B', W, W' such that
B and B' are in class 1, W and W' are in class 0, B and W' are on one side of H,
B' and W are on the other side of H, and B and W are adjacent along H, as well
as B' and W'.
Given a point P, two regions containing P in their closure are called opposite with
respect to P if they are in different halfspaces w.r.t. all essential hyperplanes going
through P. A polyhedral dichotomy is said to be in an XOR-bow-tie iff there exist
four distinct regions B, B', W, W', such that B and B', both in class 1 (resp. W
and W', both in class 0), are opposite with respect to point P.
The third configuration is the XOR-at-infinity, which is analogous to the XOR-bow-tie at a point ∞ added to R^d. There exist four distinct unbounded regions B, B'
(in class 1), W, W' (in class 0) such that, for every essential hyperplane H, either
all of them are on the same side of H (e.g. the horizontal line), or B and B' are on
opposite sides of H, and W and W' are on opposite sides of H (see [3]).
Figure 1: Geometrical representation of XOR-situation, XOR-bow-tie and XOR-at-infinity in the plane (black regions are in class 1, grey regions are in class 0).
Theorem 1 If a polyhedral dichotomy f, from R^d to {0,1}, can be realized by a
one-hidden-layer network, then it cannot be in an XOR-situation, nor in an XOR-bow-tie, nor in an XOR-at-infinity.
The proof can be found in [5] for the XOR-situation, in [13] for the XOR-bow-tie,
and in [5] for the XOR-at-infinity.
Another research direction, implying a function is realizable by a single hidden
layer network, is based on the universal approximator property of one-hidden-layer
networks, applied to intermediate functions obtained constructively adding extra
hyperplanes to the essential hyperplanes of f. This direction was explored by Gibson
[9], but there are virtually no results known beyond two dimensions. Gibson's result
can be reformulated as follows:
Theorem 2 If a polyhedral dichotomy f is defined on a compact subset of R^2, if f
is not in an XOR-situation, and if no three essential hyperplanes (lines) intersect,
then f is realizable with a single hidden layer network.
Unfortunately Gibson's proof is not constructive, and extending it to remove some
of the assumptions or to go to higher dimensions seems challenging. Both XORbow-tie and XOR-at-infinity are excluded by his assumptions of compactness and
no multiple intersections. In the next section, we explore the two cases which are
excluded by Gibson's assumptions. We prove that, in R^2, the XOR-bow-tie and
the XOR-at-infinity are the only restrictions to local realizability.
2 LOCAL REALIZATION IN R^2

2.1 MULTIPLE INTERSECTION
Theorem 3 Let f be a polyhedral dichotomy on R^2 and let P be a point of multiple
intersection. Let C_P be a neighborhood of P which does not intersect any essential
hyperplane other than those going through P. The restriction of f to C_P is realizable
by a one-hidden-layer network iff f is not in an XOR-bow-tie at P.
The proof is in three steps: first, we reorder the hyperplanes in the neighborhood
of P, so as to get a nice looking system of inequalities; second, we apply Farkas'
lemma; third, we show how an XOR-bow-tie can be deduced.
Proof: Let P be the intersection of k ≥ 3 essential hyperplanes of f. All the
hyperplanes which intersect at P can be renumbered and re-oriented so that the
intersecting hyperplanes are totally ordered. Thus the label of the regions which
have the point P in their closure is very regular. If one drops all the digits corresponding to the essential hyperplanes of f which do not contain P, the remaining
part of the region labels are exactly like those of Figure 2.
Figure 2: Labels of the regions in the neighborhood of P, and matrix A.
The problem of finding a one-hidden-layer network which realizes f can be rewritten
as a system of inequalities. The unknown variables are the weights w_i and the threshold
θ of the output unit. Let (S) denote the subsystem of inequalities obtained from
the 2k regions which have the point P in their closure. The regular numbering of
these 2k regions allows us to write the system as follows:
(S)    Σ_{m=1}^{i} w_m < θ        if region i is in class 0        (1 ≤ i ≤ k)
       Σ_{m=1}^{i} w_m > θ        if region i is in class 1

       Σ_{m=i-k+1}^{k} w_m < θ    if region i is in class 0        (k+1 ≤ i ≤ 2k)
       Σ_{m=i-k+1}^{k} w_m > θ    if region i is in class 1

The system (S) can be rewritten in the matrix form Ax ≤ b, where

x^T = [w_1, w_2, ..., w_k, θ]   and   b^T = [b_1, b_2, ..., b_k, b_{k+1}, ..., b_{2k}],

where b_i = -ε for all i, and ε is an arbitrarily small positive number. Matrix A can
be seen in Figure 2, where ε_j = +1 or -1 depending on whether region j is in class 0
or 1. The next step is to apply Farkas' lemma, or an equivalent version [10], which
gives a necessary and sufficient condition for finding a solution of Ax ≤ b.
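Before turning to Farkas' lemma, note that such a system is a plain linear feasibility problem that can also be checked mechanically. The following Python sketch is our own encoding (strict inequalities replaced by a small margin eps); it uses a generic LP solver to decide whether weights and a threshold exist for a given set of region labels and classes.

import numpy as np
from scipy.optimize import linprog

def realizable(labels, classes, eps=1e-3):
    """labels: list of 0/1 tuples (one per region); classes: list of 0/1."""
    h = len(labels[0])
    A_ub, b_ub = [], []
    for y, c in zip(labels, classes):
        row = np.append(np.asarray(y, float), -1.0)  # variables: w_1..w_h, theta
        if c == 1:
            A_ub.append(-row)   # w.y - theta >= eps
        else:
            A_ub.append(row)    # w.y - theta <= -eps
        b_ub.append(-eps)
    res = linprog(c=np.zeros(h + 1), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(None, None)] * (h + 1))
    return res.status == 0      # status 0: a feasible point was found

# XOR on two hyperplanes is not realizable with a single hidden layer:
print(realizable([(0, 0), (0, 1), (1, 0), (1, 1)], [0, 1, 1, 0]))  # False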
Lemma 1 (Farkas lemma) There exists a vector x ∈ R^n such that Ax ≤ b iff
there does not exist a vector y ∈ R^m such that y^T A = 0, y ≥ 0 and y^T b < 0.
Assume that Ax ≤ b is not solvable. Then, by Lemma 1 with n = k + 1 and m = 2k,
a vector y can be found such that y ≥ 0. Since in addition y^T b = -ε Σ_{m=1}^{2k} y_m, the
condition y^T b < 0 implies (∃ j_1) y_{j_1} > 0. But y^T A = 0 is equivalent to the system
(E) of k + 1 equations:

E_i :      Σ_{m=i}^{i+k-1} y_m/class 0  =  Σ_{m=i}^{i+k-1} y_m/class 1        (1 ≤ i ≤ k)
E_{k+1} :  Σ_{m=1}^{2k} y_m/class 0  =  Σ_{m=1}^{2k} y_m/class 1,

where y_m/class c indicates that the sum runs only over indices m such that region m
is in class c.
Since (∃ j_1) y_{j_1} > 0, the last equation (E_{k+1}) of system (E) implies that
(∃ j_2 with class(region j_2) ≠ class(region j_1)) y_{j_2} > 0. Without loss of generality,
assume that j_1 and j_2 are less than k, and that region j_1 is in class 0 and region j_2
is in class 1. Comparing two successive equations of (E), for i < k, we can write

(∀λ ∈ {0,1})   Σ_{(E_{i+1})} y_m/class λ = Σ_{(E_i)} y_m/class λ − y_i/class λ + y_{i+k}/class λ.

Since y_{j_1} > 0 and region j_1 is in class 0, the transition from E_{j_1} to E_{j_1+1} implies
that y_{j_1+k} = y_{j_1} > 0 and region j_1+k, which is opposite to region j_1, is also
in class 0. Similarly, the transition from E_{j_2} to E_{j_2+1} implies that both opposite
regions j_2+k and j_2 are in class 1. These conditions are necessary for the system
(E) to have a non-negative solution, and they correspond exactly to the definition
of an XOR-bow-tie at point P. The converse comes from Theorem 1. ∎
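The XOR-bow-tie condition itself is easy to test once the 2k sectors around P are listed in cyclic order, since sector i and sector i+k (mod 2k) are opposite. The small sketch below is our own encoding of that test.

def has_xor_bow_tie(sector_classes):
    """sector_classes: classes (0/1) of the 2k sectors around P, in cyclic order."""
    two_k = len(sector_classes)
    k = two_k // 2
    # classes of opposite pairs that agree (sector i vs sector i+k)
    opposite_same = [c for i, c in enumerate(sector_classes[:k])
                     if c == sector_classes[i + k]]
    # a bow-tie needs an opposite pair in class 1 AND another in class 0
    return 0 in opposite_same and 1 in opposite_same

# Four lines through P, classes alternating in opposite pairs:
print(has_xor_bow_tie([1, 0, 1, 0, 1, 0, 1, 0]))  # True  (k = 4)
print(has_xor_bow_tie([1, 1, 0, 0, 0, 0, 0, 0]))  # False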
2.2 UNBOUNDED REGIONS
If no two essential hyperplanes are parallel, the case of unbounded regions is exactly
the same as a multiple intersection. All the unbounded regions can be labelled as
on figure 2. The same argument holds for proving that, if the local system (S)
Ax ::; b is not solvable, then there exists an XOR-at-infinity. The case of parallel
hyperplanes is more intricate because matrix A is more complex. The proof requires
a heavy case-by-case analysis and cannot be given in full in this paper (see [3]) .
Theorem 4 Let f be a polyhedral dichotomy on R^2. Let C_∞ be the complementary
region of the convex hull of the essential points of f. The restriction of f to C_∞ is
realizable by a one-hidden-layer network iff f is not in an XOR-at-infinity.
From Theorems 3 and 4 we can deduce that a polyhedral dichotomy is locally realizable in R^2 by a one-hidden-layer network iff f has no XOR-bow-tie and no XOR-at-infinity. Unfortunately this result cannot be extended to the global realization
of f in R^2, because more intricate distant configurations can involve contradictions
in the complete system of inequalities. The object of the next section is to point
out such a situation by producing a new geometric configuration, called a critical
cycle, which implies that f cannot be realized with one hidden layer.
3 CRITICAL CYCLES
In contrast to section 2, the results of this section hold for any dimension d ≥ 2.
We first need some definitions. Consider a pair of regions {T, T'} in the same class
and which both contain an essential point P in their closure. This pair is called
critical with respect to P and H if there is an essential hyperplane H going through
P such that T' is adjacent along H to the region opposite to T . Note that T and
T' are both on the same side of H.
We define a graph G whose nodes correspond to the critical pairs of regions of f.
There is a red edge between {T, T'} and {U, U'} if the pairs, in different classes,
are both critical with respect to the same point (e.g., {B_P, B'_P} and {W_P, W'_P} in
Figure 3). There is a green edge between {T, T'} and {U, U'} if the pairs are both
critical with respect to the same hyperplane H, and either the two pairs are on the
same side of H, but in different classes (e.g., {W_P, W'_P} and {B_Q, B'_Q}), or they
are on different sides of H, but in the same class (e.g., {B_P, B'_P} and {B_R, B'_R}).
Definition 1 A critical cycle is a cycle in graph G, with alternating colors.
Figure 3: Geometrical configuration and graph of a critical cycle, in the plane (red
edges join pairs critical with respect to the same point; green edges join pairs critical
with respect to the same hyperplane). Note that one can augment the figure in such
a way that there is no XOR-situation, no XOR-bow-tie, and no XOR-at-infinity.
Theorem 5 If a polyhedral dichotomy f, from R^d to {0,1}, can be realized by a
one-hidden-layer network, then it cannot have a critical cycle.
Proof: For the sake of simplicity, we will restrict ourselves to doing the proof for a
case similar to the example of Figure 3, with notation as given in that figure, but without any restriction on the dimension d of f. Assume, for a contradiction, that f has
a critical cycle and can be realized by a one-hidden-layer network. Consider the sets
of regions {B_P, B'_P, B_Q, B'_Q, B_R, B'_R} and {W_P, W'_P, W_Q, W'_Q, W_R, W'_R}. Consider
the regions defined by all the hyperplanes associated to the hidden layer units (in
general, these hyperplanes are a large superset of the essential hyperplanes). There
is a region b_P ⊆ B_P whose border contains P and a (d − 1)-dimensional subset
of H_1. Similarly we can define b'_P, ..., b'_R, w_P, ..., w'_R. Let B be the set of such
regions which are in class 1 and W be the set of such regions in class 0.
Let H be the hyperplane associated to one of the hidden units. For T a region, let
H(T) be the digit label of T w.r.t. H, i.e. H(T) = 1 or 0 according to whether T
is above or below H (cf. section 1.1). We do a case-by-case analysis.
If H does not go through P, then H(b_P) = H(b'_P) = H(w_P) = H(w'_P); similar
equalities hold for hyperplanes not going through Q or R. If H goes through P but is
not equal to H_1 or to H_2, then, from the viewpoint of H, things are as if b'_P was
opposite to b_P, and w'_P was opposite to w_P, so the two regions of each pair are on
different sides of H, and so H(b_P) + H(b'_P) = H(w_P) + H(w'_P) = 1; similar equalities
hold for hyperplanes going through Q or R. If H = H_1, then we use the fact that
there is a green edge between {W_P, W'_P} and {B_Q, B'_Q}, meaning in the case of
the figure that all four regions are on the same side of H_1 but in different classes.
Then H(b_P) + H(b'_P) + H(b_Q) + H(b'_Q) = H(w_P) + H(w'_P) + H(w_Q) + H(w'_Q). In
fact, this equality would also hold in the other case, as can easily be checked. Thus
for all H, we have Σ_{b∈B} H(b) = Σ_{w∈W} H(w). But such an equality is impossible:
since each b is in class 1 and each w is in class 0, this implies a contradiction in the
system of inequalities, and f cannot be realized by a one-hidden-layer network.
Obviously there can exist cycles of length longer than 3, but the extension of the
proof is straightforward. ∎
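For completeness, here is an illustrative search for a critical cycle: a closed walk whose edge colors alternate corresponds to a directed cycle in a state graph on (node, color-of-last-edge) pairs. Building the colored graph from f is problem-specific; only the alternating-cycle test is sketched below, on the configuration of Figure 3, and the encoding is our own.

def has_alternating_cycle(nodes, edges):
    """edges: iterable of (u, v, color) with color in {'r', 'g'}."""
    succ = {(v, c): [] for v in nodes for c in 'rg'}
    for u, v, c in edges:
        other = 'g' if c == 'r' else 'r'
        succ[(u, other)].append((v, c))   # arrive at v having just used color c
        succ[(v, other)].append((u, c))
    # depth-first search for a directed cycle in the state graph
    WHITE, GREY, BLACK = 0, 1, 2
    color = {s: WHITE for s in succ}
    def dfs(s):
        color[s] = GREY
        for t in succ[s]:
            if color[t] == GREY or (color[t] == WHITE and dfs(t)):
                return True
        color[s] = BLACK
        return False
    return any(color[s] == WHITE and dfs(s) for s in succ)

# The 6-pair configuration of Figure 3 (pairs at P, Q, R):
nodes = ['BP', 'WP', 'BQ', 'WQ', 'BR', 'WR']
edges = [('BP', 'WP', 'r'), ('BQ', 'WQ', 'r'), ('BR', 'WR', 'r'),
         ('WP', 'BQ', 'g'), ('WQ', 'BR', 'g'), ('WR', 'BP', 'g')]
print(has_alternating_cycle(nodes, edges))  # True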
4 CONCLUSION AND PERSPECTIVES
This paper makes partial progress towards characterizing functions which can be realized by a one-hidden-layer network, with a particular focus on dimension 2. Higher
dimensions are more challenging, and it is difficult even to propose a conjecture:
new cases of inconsistency emerge in subspaces of intermediate dimension. Gibson
gives an example of an inconsistent line (dimension 1) resulting from its intersection
with two hyperplanes (dimension 2) which are not inconsistent in R^3.
The principle of using Farkas' lemma for proving local realizability still holds, but
the matrix A becomes more and more complex. In R^d, even for d = 3, the labelling
of the regions, for instance around a point P of multiple intersection, can become
very complex.
In conclusion, it seems that neither the topological method of Gibson, nor our
algebraic point of view, can easily be extended to higher dimensions. Nevertheless,
we conjecture that in dimension 2, a function can be realized by a one-hidden-layer network iff it does not have any of the four forbidden types of configurations:
XOR-situation, XOR-bow-tie, XOR-at-infinity, and critical cycle.
Acknowledgements
This work was supported by European Esprit III Project no. 8556, NeuroCOLT.
References
[1] E. B. Baum. On the capabilities of multilayer perceptrons. Journal of Complexity,
4:193-215, 1988.
[2] E. K. Blum and L. K. Li. Approximation theory and feedforward networks. Neural
Networks, 4(4):511-516, 1991.
[3] G. Brightwell, C. Kenyon, and H. Paugam-Moisy. Multilayer neural networks: one or
two hidden layers? Research Report 96-37, LIP, ENS Lyon, 1996.
[4] M. Cosnard, P . Koiran, and H. Paugam-Moisy. Complexity issues in neural network
computations. In I. Simon, editor, Proc. of LATIN'92, volume 583 of LNCS, pages
530-544. Springer Verlag, 1992.
[5] M . Cosnard, P. Koiran, and H. Paugam-Moisy. A step towards the frontier between
one-hidden-layer and two-hidden-layer neural networks. In Proc. of IJCNN'93-Nagoya,
volume 3, pages 2292-2295, 1993.
[6] G. Cybenko. Approximation by superpositions of a sigmoidal function. Math. Control,
Signals, and Systems, 2:303-314, October 1988.
[7] K. Funahashi. On the approximate realization of continuous mappings by neural
networks. Neural Networks, 2(3):183-192, 1989.
[8] G . J. Gibson. A combinatorial approach to understanding perceptron decision regions.
IEEE Trans. Neural Networks, 4:989-992, 1993.
[9] G. J . Gibson. Exact classification with two-layer neural nets. Journal of Computer
and System Science, 52(2):349-356, 1996.
[10] M. Grötschel, L. Lovász, and A. Schrijver. Geometric Algorithms and Combinatorial
Optimization. Springer-Verlag, Berlin, Heidelberg, 1988.
[11] K. Hornik, M. Stinchcombe, and H. White. Multilayer feedforward networks are
universal approximators. Neural Networks, 2(5):359-366, 1989.
[12] S.-C. Huang and Y.-F. Huang. Bounds on the number of hidden neurons in multilayer
perceptrons. IEEE Trans. Neural Networks, 2:47-55, 1991.
[13] P. J. Zweitering. The complexity of multi-layered perceptrons. PhD thesis, Technische
Universiteit Eindhoven, 1994.
LINEAR LEARNING: LANDSCAPES AND ALGORITHMS
Pierre Baldi
Jet Propulsion Laboratory
California Institute of Technology
Pasadena, CA 91109
What follows extends some of our results of [1] on learning from examples in layered feed-forward networks of linear units. In particular we examine what happens when the number of layers is large or
when the connectivity between layers is local, and we investigate some
of the properties of an autoassociative algorithm. Notation will be
as in [1], where additional motivations and references can be found.
It is usual to criticize linear networks because "linear functions do
not compute" and because several layers can always be reduced to
one by the proper multiplication of matrices. However this is not the
point of view adopted here. It is assumed that the architecture of the
network is given (and could perhaps depend on external constraints)
and the purpose is to understand what happens during the learning
phase, what strategies are adopted by a synaptic weights modifying
algorithm, ... [see also Cottrell et al. (1988) for an example of an application and the work of Linsker (1988) on the emergence of feature
detecting units in linear networks].
Consider first a two layer network with n input units, n output units
and p hidden units (p < n). Let (Xl, YI), ... , (XT, YT) be the set of
centered input-output training patterns. The problem is then to find
two matrices of weights A and B minimizing the error function E:
E(A, B) = Σ_{1≤t≤T} ||y_t − ABx_t||².        (1)
Let Σ_XX, Σ_XY, Σ_YY, Σ_YX denote the usual covariance matrices. The
main result of [1] is a description of the landscape of E, characterized
by a multiplicity of saddle points and an absence of local minima.
More precisely, the following four facts are true.
Fact 1: For any fixed n × p matrix A the function E(A, B) is convex
in the coefficients of B and attains its minimum for any B satisfying
the equation

A'AB Σ_XX = A' Σ_YX.        (2)

If in addition Σ_XX is invertible and A is of full rank p, then E is
strictly convex and has a unique minimum reached when

B = B̂(A) = (A'A)^{-1} A' Σ_YX Σ_XX^{-1}.        (3)
Fact 2: For any fixed p × n matrix B the function E(A, B) is convex
in the coefficients of A and attains its minimum for any A satisfying
the equation

AB Σ_XX B' = Σ_YX B'.        (4)

If in addition Σ_XX is invertible and B is of full rank p, then E is
strictly convex and has a unique minimum reached when

A = Â(B) = Σ_YX B' (B Σ_XX B')^{-1}.        (5)
Fact 3: Assume that Σ_XX is invertible. If two matrices A and B
define a critical point of E (i.e. a point where ∂E/∂a_ij = ∂E/∂b_ij = 0),
then the global map W = AB is of the form

W = P_A Σ_YX Σ_XX^{-1},        (6)

where P_A denotes the matrix of the orthogonal projection onto the
subspace spanned by the columns of A, and A satisfies

P_A Σ = P_A Σ P_A = Σ P_A,        (7)

with Σ = Σ_YX Σ_XX^{-1} Σ_XY. If A is of full rank p, then A and B define
a critical point of E if and only if A satisfies (7) and B = B̂(A), or
equivalently if and only if A and W satisfy (6) and (7). Notice that
in (6), the matrix Σ_YX Σ_XX^{-1} is the slope matrix for the ordinary least
squares regression of Y on X.
Fact 4: Assume that Σ is of full rank with n distinct eigenvalues λ_1 >
... > λ_n. If I = {i_1, ..., i_p} (1 ≤ i_1 < ... < i_p ≤ n) is any ordered p-index set, let U_I = [u_{i_1}, ..., u_{i_p}] denote the matrix formed
by the orthonormal eigenvectors of Σ associated with the eigenvalues
λ_{i_1}, ..., λ_{i_p}. Then two full rank matrices A and B define a critical
point of E if and only if there exist an ordered p-index set I and an
invertible p × p matrix C such that

A = U_I C        (8)

B = C^{-1} U_I' Σ_YX Σ_XX^{-1}.        (9)

For such a critical point we have

W = P_{U_I} Σ_YX Σ_XX^{-1}        (10)

E(A, B) = tr(Σ_YY) − Σ_{i∈I} λ_i.        (11)
Therefore a critical point W of rank p is always the product of the
ordinary least squares regression matrix followed by an orthogonal
projection onto the subspace spanned by p eigenvectors of Σ. The map
W associated with the index set {1, 2, ..., p} is the unique local and
global minimum of E. The remaining (n choose p) − 1 p-index sets correspond
to saddle points. All additional critical points defined by matrices
A and B which are not of full rank are also saddle points and can
be characterized in terms of orthogonal projections onto subspaces
spanned by q eigenvectors with q < p.
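Facts 3 and 4 are easy to check numerically. The following sketch (our own, with synthetic data and illustrative sizes) builds Σ = Σ_YX Σ_XX^{-1} Σ_XY, forms the optimal rank-p map of equation (10), and verifies the error value of equation (11).

import numpy as np

rng = np.random.default_rng(1)
n, T, p = 6, 500, 2
X = rng.normal(size=(n, T))
Y = rng.normal(size=(n, n)) @ X + 0.1 * rng.normal(size=(n, T))

Sxx = X @ X.T / T
Syx = Y @ X.T / T
Sigma = Syx @ np.linalg.inv(Sxx) @ Syx.T

lam, U = np.linalg.eigh(Sigma)               # ascending eigenvalues
Up = U[:, ::-1][:, :p]                       # top-p eigenvectors
W_opt = Up @ Up.T @ Syx @ np.linalg.inv(Sxx)    # equation (10)

E = np.sum((Y - W_opt @ X) ** 2) / T
print(E, np.trace(Y @ Y.T / T) - lam[::-1][:p].sum())   # both sides of (11) agree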
Deep Networks
Consider now the case of a deep network with a first layer of n input
units, an (m + 1)-th layer of n output units, and m − 1 hidden layers,
with an error function given by

E(A_1, ..., A_m) = Σ_{1≤t≤T} ||y_t − A_1 A_2 ⋯ A_m x_t||².        (12)
It is worth noticing that, as in Facts 1 and 2 above, if we fix any m − 1
of the m matrices A_1, ..., A_m, then E is convex in the remaining matrix
of connection weights. Let p (p < n) denote the number of units in
the smallest layer of the network (several hidden layers may have p
units). In other words, the network has a bottleneck of size p. Let i
be the index of the corresponding layer and set

A = A_1 A_2 ⋯ A_{m−i+1},   B = A_{m−i+2} ⋯ A_m.        (13)
When we let A_1, ..., A_m vary, the only restriction they impose on A
and B is that they be of rank at most p. Conversely, any two matrices A and B of rank at most p can always be decomposed (and
in many ways) into products of the form (13). It follows that any
local minimum of the error function of the deep network yields a
local minimum of the corresponding "collapsed" three-layer network
induced by (13), and vice versa. Therefore E(A_1, ..., A_m) does not
have any local minima, and the global minimal map W* = A_1 A_2 ⋯ A_m
is unique and given by (10) with index set {1, 2, ..., p}. Notice that
of course there is a large number of ways of decomposing W* into
a product of the form A_1 A_2 ⋯ A_m. Also, a saddle point of the error
function E(A, B) does not necessarily generate a saddle point of the
corresponding E(A_1, ..., A_m), for the expressions corresponding to the
two gradients are very different.
Forced Connections. Local Connectivity
Assume now an error function of the form

E(A) = Σ_{1≤t≤T} ||y_t − Ax_t||²        (14)

for a two-layer network, but where the value of some of the entries
of A may be externally prescribed. In particular this includes the
case of local connectivity, described by relations of the form a_ij = 0
for any output unit i and any input unit j which are not connected.
Clearly the error function E(A) is convex in A. Every constraint of
the form a_ij = cst defines a hyperplane in the space of all possible A.
The intersection of all these constraints is therefore a convex set. Thus
minimizing E under the given constraints is still a convex optimization
problem, and so there are no local minima. It should be noticed that,
in the case of a network with even only three constrained layers, with
two matrices A and B and a set of constraints of the form a_ij = cst
on A and b_kl = cst on B, the set of admissible matrices of the form
W = AB is, in general, not convex anymore. It is not unreasonable
to conjecture that local minima may then arise, though this question
needs to be investigated in greater detail.
Algorithmic Aspects
One of the nice features of the error landscapes described so far is
the absence of local minima and the existence, up to equivalence,
of a unique global minimum which can be understood in terms of
principal component analysis and least square regression. However
the landscapes are also characterized by a large number of saddle
points which could constitute a problem for a simple gradient descent
algorithm during the learning phase. The proof in [1] shows that
the lower the E value corresponding to a saddle point, the more
difficult it is to escape from it, because of a reduction in the possible
number of directions of escape (see also [Chauvin, 1989] in a context of
Hebbian learning).
Hebbian learning). To assert how relevant these issues are for practical
implementations requires further simulation experiments. On a more
speculative side, it remains also to be seen whether, in a problem
of large size, the number and spacing of saddle points encountered
during the first stages of a descent process could not be used to "get
a feeling" for the type of terrain being descended and, as a result, to
adjust the pace (i.e. the learning rate).
We now turn to a simple algorithm for the auto-associative case in a
three-layer network, i.e. the case where the presence of a teacher
can be avoided by setting y_t = x_t, thereby trying to achieve a
compression of the input data in the hidden layer. This technique
is related to principal component analysis, as described in [1]. If
y_t = x_t, it is easy to see from equations (8) and (9) that, if we take
the matrix C to be the identity, then at the optimum the matrices
A and B are transposes of each other. This heuristically suggests a
possible fast algorithm for auto-association, where at each iteration a
gradient descent step is applied only to one of the connection matrices
while the other is updated in a symmetric fashion using transposition
and avoiding back-propagation of the error in one of the layers (see
[Williams, 1985] for a similar idea). More formally, the algorithm
could be concisely described by
A(0) = random
B(0) = A'(0)
A(k+1) = A(k) − η ∂E/∂A        (15)
B(k+1) = A'(k+1)

Obviously a similar algorithm can be obtained by setting B(k+1) =
B(k) − η ∂E/∂B and A(k+1) = B'(k+1). It may actually even be
better to alternate the gradient step, one iteration with respect to A
and one iteration with respect to B.
A simple calculation shows that (15) can be rewritten as

A(k+1) = A(k) + η (I − W(k)) Σ_XX A(k)
B(k+1) = B(k) + η B(k) Σ_XX (I − W(k))        (16)
where W(k) = A(k)B(k). It is natural from what we have already
seen to examine the behavior of this algorithm on the eigenvectors of
Σ_XX. Assume that u is an eigenvector of both Σ_XX and W(k), with
eigenvalues λ and μ(k). Then it is easy to see that u is an eigenvector
of W(k+1) with eigenvalue

μ(k+1) = μ(k) [1 + ηλ(1 − μ(k))]².        (17)
For the algorithm to converge to the optimal W, μ(k+1) must converge to 0 or 1. Thus one has to look at the iterates of the function f(x) = x[1 + ηλ(1 − x)]². This can be done in detail and
we shall only describe the main points. First of all, f(x) = 0 iff
x = 0 or x = x_a = 1 + 1/(ηλ); f'(x) = 0 iff x = x_a or x = x_b = 1/3 + 1/(3ηλ); and
f''(x) = 0 iff x = x_c = 2/3 + 2/(3ηλ) = 2x_b. For the fixed points,
f(x) = x iff x = 0, x = 1 or x = x_d = 1 + 2/(ηλ). Notice also
that f(x_a) = 0. Points corresponding to the values 0, 1, x_a, x_d of the x variable can readily be
positioned on the curve of f, but the relative position of x_b (and x_c)
depends on the value assumed by ηλ with respect to 1/2. Obviously
if μ(0) = 0, 1 or x_d then μ(k) = 0, 1 or x_d; if μ(0) < 0 then μ(k) → −∞;
and if μ(0) > x_d then μ(k) → +∞. Therefore the algorithm can converge
only for 0 < μ(0) < x_d. When the learning rate is too large, i.e.
when ηλ > 1/2, then even if μ(0) is in the interval (0, x_d) one can see
that the algorithm does not converge and may even exhibit complex
oscillatory behavior. However, when ηλ < 1/2, if 0 < μ(0) < x_a then
μ(k) → 1; if μ(0) = x_a then μ(k) = 0; and if x_a < μ(0) < x_d then
μ(k) → 1.
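These regimes are easy to reproduce by iterating f directly; the values of ηλ and the starting points in the sketch below are illustrative.

def iterate(mu0, eta_lam, steps=200):
    # iterate f(x) = x * (1 + eta*lam*(1 - x))**2 from mu0
    x = mu0
    for _ in range(steps):
        x = x * (1 + eta_lam * (1 - x)) ** 2
    return x

for eta_lam in (0.1, 0.4):          # both below the 1/2 threshold
    x_a = 1 + 1 / eta_lam
    print(eta_lam, iterate(0.05, eta_lam), iterate(x_a, eta_lam))
    # converges to 1 from a small start, and to 0 exactly from x_a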
In conclusion, we see that if the algorithm is to be tested, the
learning rate should be chosen so that it does not exceed 1/(2λ), where
λ is the largest eigenvalue of Σ_XX. Even more so than back-propagation, it can encounter problems in the proximity of saddle points.
Once a non-principal eigenvector of Σ_XX is learnt, the algorithm
rapidly incorporates a projection along that direction which cannot
be escaped at later stages. Simulations are required to examine the
effects of "noisy gradients" (computed after the presentation of only
a few training examples), multiple starting points, variable learning
rates, momentum terms, and so forth.
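A direct implementation of iteration (16) is only a few lines. The sketch below is ours, on synthetic data with illustrative sizes; it takes the learning rate safely below the 1/(2λ) bound just mentioned and checks how close AA' gets to the projector onto the top-p eigenspace of Σ_XX.

import numpy as np

rng = np.random.default_rng(2)
n, T, p = 8, 400, 3
X = rng.normal(size=(n, T))
Sxx = X @ X.T / T
eta = 0.4 / np.linalg.eigvalsh(Sxx)[-1]      # safely below 1/(2*lambda_max)

A = 0.1 * rng.normal(size=(n, p))
for _ in range(3000):
    W = A @ A.T                              # W(k) = A(k) B(k) with B = A'
    A = A + eta * (np.eye(n) - W) @ Sxx @ A  # equation (16)

U = np.linalg.eigh(Sxx)[1][:, ::-1][:, :p]   # top-p eigenvectors of Sigma_XX
print(np.linalg.norm(A @ A.T - U @ U.T))     # small when the principal subspace is found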
Acknowledgement
Work supported by NSF grant DMS-8800323 and in part by ONR
contract 411P006-01.
References
(1) Baldi, P. and Hornik, K. (1988) Neural Networks and Principal
Component Analysis: Learning from Examples without Local Minima. Neural Networks, Vol. 2, No. 1.
(2) Chauvin, Y. (1989) Another Neural Model as a Principal Component Analyzer. Submitted for publication.
(3) Cottrell, G. W., Munro, P. W. and Zipser, D. (1988) Image Compression by Back Propagation: a Demonstration of Extensional Programming. In: Advances in Cognitive Science, Vol. 2, Sharkey, N. E.
ed., Norwood, NJ: Ablex.
(4) Linsker, R. (1988) Self-Organization in a Perceptual Network.
Computer 21 (3), 105-117.
(5) Williams, R. J. (1985) Feature Discovery Through Error-Correction
Learning. ICS Report 8501, University of California, San Diego.
MODELS OF OCULAR DOMINANCE
COLUMN FORMATION: ANALYTICAL AND
COMPUTATIONAL RESULTS
Kenneth D. Miller
UCSF Dept. of Physiology
SF, CA 94143-0444
ken@phyb.ucsf.edu

Joseph B. Keller
Mathematics Dept., Stanford

Michael P. Stryker
Physiology Dept., UCSF
ABSTRACT
We have previously developed a simple mathematical model for formation of ocular dominance columns in
mammalian visual cortex. The model provides a common framework in which a variety of activity-dependent
biological mechanisms can be studied. Analytic and computational results together now reveal the following: if
inputs specific to each eye are locally correlated in their
firing, and are not anticorrelated within an arbor radius,
monocular cells will robustly form and be organized by
intra-cortical interactions into columns. Broader correlations within each eye, or anti-correlations between the
eyes, create a more purely monocular cortex; positive correlation over an arbor radius yields an almost perfectly
monocular cortex. Most features of the model can be understood analytically through decomposition into eigenfunctions and linear stability analysis. This allows prediction of the widths of the columns and other features from
measurable biological parameters.
INTRODUCTION
In the developing visual system in many mammalian species, there is initially a uniform, overlapping innervation of layer 4 of the visual cortex by inputs representing
the two eyes. Subsequently, these inputs segregate into patches or stripes that are
largely or exclusively innervated by inputs serving a single eye, known as ocular
dominance patches. The ocular dominance patches are on a small scale compared
to the map of the visual world, so that the initially continuous map becomes two
interdigitated maps, one representing each eye. These patches, together with the
layers of cortex above and below layer 4, whose responses are dominated by the
eye innervating the corresponding layer 4 patch, are known as ocular dominance
columns.
The discoveries of this system of ocular dominance and many of the basic features
of its development were made by Hubel and Wiesel in a series of pioneering studies
in the 1960s and 1970s (e.g. Wiesel and Hubel (1965), Hubel, Wiesel and LeVay
(1977)). A recent brief review is in Miller and Stryker (1989).
The segregation of patches depends on local correlations of neural activity that are
very much greater within neighboring cells in each eye than between the two eyes.
Forcing the eyes to fire synchronously prevents segregation, while forcing them to
fire more asynchronously than normally causes a more complete segregation than
normal. The segregation also depends on the activity of cortical cells. Normally, if
one eye is closed in a young kitten during a critical period for developmental plasticity ("monocular deprivation"), the more active, open eye comes to dominantly or
exclusively drive most cortical cells, and the inputs and influence of the closed eye
become largely confined to small islands of cortex. However, when cortical cells are
inhibited from firing, the opposite is the case: the less active eye becomes dominant,
suggesting that it is the correlation between pre- and post-synaptic activation that
is critical to synaptic strengthening.
We have developed and analyzed a simple mathematical model for formation of
ocular dominance patches in mammalian visual cortex, which we briefly review
here (Miller, Keller, and Stryker, 1986). The model provides a common framework
in which a variety of activity-dependent biological models, including Hebb synapses
and activity-dependent release and uptake of trophic factors, can be studied. The
equations are similar to those developed by Linsker (1986) to study the development
of orientation selectivity in visual cortex. We have now extended our analysis and
also undertaken extensive simulations to achieve a more complete understanding of
the model. Many results have appeared, or will appear, in more detail elsewhere
(Miller, Keller and Stryker, 1989; Miller and Stryker, 1989; Miller, 1989).
EQUATIONS
Consider inputs carrying information from two eyes and co-innervating a single cortical sheet. Let S^L(x, α, t) and S^R(x, α, t), respectively, be the left-eye and right-eye synaptic weight from eye-coordinate α to cortical coordinate x at time t. Consideration of simple activity-dependent models of synaptic plasticity, such as Hebb
synapses or activity-dependent release and uptake of trophic or modification factors,
leads to equations for the time development of S^L and S^R:
∂_t S^J(x, α, t) = λ A(x − α) Σ_{y,β,K} I(x − y) C^{JK}(α − β) S^K(y, β, t) − γ S^J(x, α, t) − ε.        (1)
J, K are variables which each may take on the values L, R. A(x − α) is a connectivity
function, giving the number of synapses from α to x (we assume an identity mapping
from eye coordinates to cortical coordinates). C^{JK}(α − β) measures the correlation
in firing of inputs from eyes J and K when the inputs are separated by the distance
α − β. I(x − y) is a model-dependent spread of influence across cortex, telling how
two synapses which fire synchronously, separated by the distance x − y, will influence
one another's growth. This influence incorporates lateral synaptic interconnections
in the case of Hebb synapses (for linear activation, I = (1 − B)^{-1}, where 1 is
the identity matrix and B is the matrix of cortico-cortical synaptic weights), and
incorporates the effects of diffusion of trophic or modification factors in models
involving such factors. λ, γ and ε are constants. Constraints to conserve or limit
the total synaptic strength supported by a single cell, and nonlinearities to keep left-
and right-eye synaptic weights positive and less than some maximum, are added.
Subtracting the equation for S^R from that for S^L yields a model equation for the
difference, S^D(x, α, t) = S^L(x, α, t) − S^R(x, α, t):

∂_t S^D(x, α, t) = λ A(x − α) Σ_{y,β} I(x − y) C^D(α − β) S^D(y, β, t) − γ S^D(x, α, t).        (2)
Here, C^D = C^{SameEye} − C^{OppEye}, where C^{SameEye} = C^{LL} = C^{RR} and
C^{OppEye} = C^{LR} = C^{RL}, and we have assumed statistical equality of the two eyes.
SIMULATIONS
The development of equation (1) was studied in simulations using three 25 x 25
grids for the two input layers, one representing each eye, and a single cortical layer.
Each input cell connects to a 7 x 7 square arbor of cortical cells centered on the
corresponding grid point (A(x) = 1 on the square of ±3 grid points, 0 otherwise).
Initial synaptic weights are randomly assigned from a uniform distribution between
0.8 and 1.2. Synapses are allowed to decrease to 0 or increase to a weight of 8.
Synaptic strength over each cortical cell is conserved by subtracting after each
iteration from each active synapse the average change in synaptic strength on that
cortical cell. Periodic boundary conditions on the three grids are used.
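The following Python sketch (ours) implements an explicit update of the linearized S^D dynamics of equation (2) on periodic grids, including the per-cell conservation rule. The grid size, rates, and function shapes are illustrative stand-ins for the values quoted here, and the decay term −γS^D and the saturating nonlinearities of the full model are omitted.

import numpy as np

N, R, U = 16, 2, 2            # cortical grid; arbor radius; interaction radius
rng = np.random.default_rng(3)
S = 0.1 * rng.uniform(-1, 1, (N, N, 2*R+1, 2*R+1))   # S^D(x, r), r = x - alpha

def on_grid(radius, sigma, amp2=0.0, sigma2=1.0):
    # difference-of-Gaussians helper for correlation / interaction functions
    d = np.arange(-radius, radius + 1)
    r2 = d[:, None]**2 + d[None, :]**2
    return np.exp(-r2 / sigma**2) - amp2 * np.exp(-r2 / sigma2**2)

I = on_grid(U, 1.0, amp2=1/9, sigma2=3.0)   # Mexican-hat cortical interaction
CDbig = on_grid(U + 2*R, 2.8)               # same-eye correlations, no opposite-eye term
off = U + 2*R                               # index offset into CDbig

def step(S, lam=0.05):
    drive = np.zeros_like(S)
    rr = np.arange(-R, R + 1)
    for u1 in range(-U, U + 1):
        for u2 in range(-U, U + 1):
            Sy = np.roll(S, (u1, u2), axis=(0, 1))           # S^D(x - u, r')
            for r1 in range(-R, R + 1):
                for r2_ in range(-R, R + 1):
                    C = CDbig[u1 - r1 + rr[:, None] + off,
                              u2 - r2_ + rr[None, :] + off]  # C^D(u - r + r')
                    drive[:, :, r1 + R, r2_ + R] += \
                        I[u1 + U, u2 + U] * np.einsum('xyab,ab->xy', Sy, C)
    drive -= drive.mean(axis=(2, 3), keepdims=True)  # conserve per-cell strength
    return S + lam * drive

for _ in range(5):
    S = step(S)
print(np.sign(S.sum(axis=(2, 3))))   # cortical ocular dominance map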
A typical time development of the cortical pattern of ocular dominance is shown
in figure 1. For this simulation, correlations within each eye decrease with distance
to zero over 4-5 grid points (circularly symmetric gaussian with parameter 2.8
grid points). There are no opposite-eye correlations. The cortical interaction function is a "Mexican hat" function, excitatory between nearest neighbors and weakly
inhibitory more distantly (I(x) = exp(−(x/λ₁)²) − (1/9) exp(−(x/(3λ₁))²), λ₁ = 0.93). Individual cortical cell receptive fields refine in size and become monocular (innervated
exclusively by a single eye), while individual input arbors refine in size and become
confined to alternating cortical stripes (not shown).
Dependence of these results on the correlation function is shown in figure 2. Wider
ranging correlations within each eye, or addition of opposite-eye anticorrelations,
increase the monocularity of cortex. Same-eye anticorrelations decrease monocularity, and if significant within an arbor radius (i.e. within ?3 grid points for the
7 x 7 square arbors) tend to destory the monocular organization, as seen at the
lower right. Other simulations (not shown) indicate that same-eye correlation over
nearest neighbors is sufficient to give the periodic organization of ocular dominance,
while correlation over an arbor radius gives an essentially fully monocular cortex.
Figure 1. Time development of cortical ocular dominance. Results shown after 0,
10, 20, 30, 40, 80 iterations. Each pixel represents ocular dominance (Σ_α S^D(x, α))
of a single cortical cell. 40 X 40 pixels are shown, repeating 15 columns and rows of
the cortical grid, to reveal the pattern across the periodic boundary conditions.
Simulation of time development with varying cortical interaction and arbor functions shows complete agreement with the analytical results presented below. The
model also reproduces the experimental effects of monocular deprivation, including
the presence of a critical developmental period for this effect.
EIGENFUNCTION ANALYSIS
Consider an initial condition for which S^D ≈ 0, and near which equation (2)
linearizes some more complex, nonlinear biological reality. S^D = 0 is a time-independent solution of equation (2). Is this solution stable to small perturbations,
so that equality of the two eyes will be restored after perturbation, or is it unstable,
so that a pattern of ocular dominance will grow? If it is unstable, which pattern
will initially grow the fastest? These are inherently linear questions: they depend
only on the behavior of the equations when SD is small, so that nonlinear terms
are negligible.
To solve this problem, we find the characteristic, independently growing modes of
equation (2). These are the eigenfunctions of the operator on the right side of that
equation. Each mode grows exponentially with growth rate given by its eigenvalue.
If any eigenvalue is positive (they are real), the corresponding mode will grow. Then
the S^D = 0 solution is unstable to perturbation.
Figure 2. Cortical ocular dominance after 200 iterations for 6 choices of correlation
functions. Top left is the simulation of Figure 1. Top and bottom rows correspond to
gaussian same-eye correlations with parameter 2.8 and 1.4 grid points, respectively.
Middle column shows the effect of adding weak, broadly ranging anticorrelations
between the two eyes (gaussian with parameter 9 times larger than, and amplitude
−1/9 that of, the same-eye correlations). Right column shows the effect of instead
adding the anticorrelation to the same-eye correlation function.
ANALYTICAL CHARACTERIZATION OF EIGENFUNCTIONS
Change variables in equation (2) from cortex and inputs, (x, α), to cortex and receptive field, (x, r), with r = x − α. Then equation (2) becomes a convolution in the
cortical variable. The result (assume a continuum; results on a grid are similar)
is that eigenfunctions are of the form S^D_{m,ξ}(x, α, t) = e^{im·x} RF_{m,ξ}(r). RF_{m,ξ} is a
characteristic receptive field, representing the variation of the eigenfunction as r
varies while cortical location x is fixed. m is a pair of real numbers specifying a two-dimensional wavenumber of cortical oscillation, and ξ is an additional index enumerating RFs for a given m. The eigenfunctions can also be written e^{im·α} ARB_{m,ξ}(r),
where ARB_{m,ξ}(r) = e^{im·r} RF_{m,ξ}(r). ARB is a characteristic arbor, representing
the variation of the eigenfunction as r varies while input location α is fixed. While
these functions are complex, one can construct real eigenfunctions from them with
similar properties (Miller and Stryker, 1989). A monocular (real) eigenfunction is
illustrated in Figure 3.
Figure 3. One of the set (identical but for rotations and reflections) of fastest-growing eigenfunctions for the functions used in Figure 1, showing the characteristic
receptive field and the characteristic arbor. The monocular receptive fields of
synaptic differences S^D at different cortical locations, the oscillation across
cortex, and the corresponding arbors are illustrated.
Modes with RFs dominated by one eye (Σ_r RF_{m,ξ}(r) ≠ 0) will oscillate in dominance with wavelength 2π/|m| across cortex. A monocular mode is one for which RF
does not change sign. The oscillation of monocular fields, between domination by
one eye and domination by the other, yields ocular dominance columns. The fastest
growing mode in the linear regime will dominate the final pattern: if its receptive
field is monocular, its wavelength will determine the width of the final columns.
One can characterize the eigenfunctions analytically in various limiting cases. The
general conclusion is as follows. The fastest growing mode's receptive field RF
is largely determined by the correlation function C^D. If the peak of the Fourier
transform of C^D corresponds to a wavelength much larger than an arbor diameter,
the mode will be monocular; if it corresponds to a wavelength smaller than an arbor
diameter, the mode will be binocular. If C^D selects a monocular mode, a broader
C^D (more sharply peaked Fourier spectrum about wavenumber 0) will increase the
dominance in growth rate of the monocular mode over other modes; in the limit
in which C^D is constant with distance, only the monocular modes grow and all
other modes decay. If the mode is monocular, the peak of the Fourier transform of
other modes decay. If the mode is monocular, the peak of the fourier transform of
the cortical interaction function selects the wavelength of the cortical oscillation,
and thus selects the wavelength of ocular dominance organization. In the limit in
which correlations are broad with respect to an arbor, one can calculate that the
growth rate of monocular modes, as a function of the wavenumber of oscillation m, is
proportional to Σ_l Ĩ(m − l) C̃(l) Ã²(l) (where X̃ denotes the Fourier transform of X). In
this limit, only l's which are close to 0 can contribute to the sum, so the peak will
occur at or near the m which maximizes Ĩ(m).
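This spectrum is cheap to evaluate numerically, since the sum over l is a circular convolution of Ĩ with C̃ Ã². The 1-D sketch below uses our own illustrative parameter choices; when correlations are broad, the peak of the computed spectrum should track the peak of Ĩ.

import numpy as np

n = 64
x = np.arange(n)
d = np.minimum(x, n - x)                                    # periodic distance
I = np.exp(-(d / 2.0)**2) - (1/9) * np.exp(-(d / 6.0)**2)   # Mexican-hat interaction
C = np.exp(-(d / 8.0)**2)                                   # broad same-eye correlations
A = (d <= 3).astype(float)                                  # square arbor, radius 3

It, Ct, At = (np.fft.fft(f).real for f in (I, C, A))        # real since f is even
growth = np.fft.ifft(np.fft.fft(It) * np.fft.fft(Ct * At**2)).real   # sum over l
print(np.argmax(growth[:n // 2]), np.argmax(It[:n // 2]))   # peaks roughly agree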
There is an exception to the above results if constraints conserve, or limit the change
in, the total synaptic strength over the arbor of an input cell. Then monocular
modes with wavelength longer than an arbor diameter are suppressed in growth
rate, since individual inputs would have to gain or lose strength throughout their
arborization. Given a correlation function that leads to monocular cells, a purely
excitatory cortical interaction function would lead a single eye to take over all
of cortex; however, if constraints conserve synaptic strength over an input arbor,
the wavelength will instead be about an arbor diameter, the largest wavelength
whose growth rate is not suppressed. Thus, ocular dominance segregation can occur
with a purely excitatory cortical interaction function, though this is a less robust
phenomenon. Analytically, a constraint conserving strength over afferent arbors,
implemented by subtracting the average change in strength over an arbor at each
iteration from each synapse in the arbor, transforms the previous expression for the
growth rates to Σ_l Ĩ(m − l) C̃(l) Ã²(l) (1 − Ã(m)/Ã(0)).
COMPUTATION OF EIGENFUNCTIONS
Eigenfunctions are computed on a grid, and the resulting growth rates as a function
of wavelength are compared to the analytical expression above, in the absence of
constraints on afferents. The results, for the parameters used in figure 2, are
shown in figure 4. The grey level indicates monocularity of the modes, defined as
Σ_r RF(r) normalized on a scale between 0 and 1 (described in Miller and Stryker
(1989)). The analytical expression for the growth rate, whose peak coincides in
every case with the peak of Ĩ(m), accurately predicts the growth rate of monocular
modes, even far from the limiting case in which the expression was derived. Broader
correlations or opposite-eye anticorrelations enhance the monocularity of modes
and the growth rate of monocular modes, while same-eye anticorrelations have the
opposite effects. When same-eye anticorrelations are short range compared to an
arbor radius, the fastest growing modes are binocular.
Results obtained for calculations in the presence of constraints on afferents are
also as predicted. With an excitatory cortical interaction function, the spectrum
is radically changed by constraints, selecting a mode with a wavelength equal to
an arbor diameter rather than one with a wavelength as wide as cortex. With the
Mexican hat cortical interaction function used in the simulations, the constraints
suppress the growth of long-wavelength monocular modes but do not alter the basic
[Figure 4 appears here. Panel labels: SAME-EYE CORRELATIONS; + OPP-EYE ANTI-CORR; + SAME-EYE ANTI-CORR. Numeric tick labels on the growth-rate and inverse-wavelength axes are omitted.]
Figure 4. Growth rate (vertical axis) as a function of inverse wavelength (horizontal
axis) for the six sets of functions used in figure 2, computed on the same grids. Grey
level codes maximum monocularity of modes with the given wavelength and growth
rate, from fully monocular (white) to fully binocular (black). The black curve
indicates the prediction for relative growth rates of monocular modes given in the
limit of broad correlations, as described in the text.
structure or peak of the spectrum.
CONNECTIONS TO OTHER MODELS
The model of Swindale (1980) for ocular dominance segregation emerges as a limiting case of this model when correlations are constant over a bit more than an arbor diameter. Swindale's model assumed an effective interaction between synapses
depending only on eye of origin and distance across cortex. Our model gives a
biological underpinning to this effective interaction in the limiting case, allows consideration of more general correlation functions, and allows examination of the
development of individual arbors and receptive fields and their relationships as well
as of overall ocular dominance.
Equation 2 is very similar to equations studied by others (Linsker, 1986, 1988;
Sanger, this volume). There are several important differences in our results. First,
in this model synapses are constrained to remain positive. Biological synapses are
either exclusively positive or exclusively negative, and in particular the projection of
visual input to visual cortex is purely excitatory. Even if one is modelling a system
in which there are both excitatory and inhibitory inputs, these two populations will
almost certainly be statistically distinct in their activities and hence not treatable as
a single population whose strengths may be either positive or negative. SD, on the
other hand, is a biological variable which starts near 0 and may be either positive
or negative. This allows for a linear analysis whose results will remain accurate in
the presence of nonlinearities, which is crucial for biology.
Second, we analyze the effect of intracortical synaptic interactions. These have two
impacts on the modes: first, they introduce a phase variation or oscillation across
cortex. Second, they typically enhance the growth rate of monocular modes relative
to modes whose sign varies across the receptive field.
Acknowledgements
Supported by an NSF predoctoral fellowship and by grants from the McKnight
Foundation and the System Development Foundation. Simulations were performed
at the San Diego Supercomputer Center.
References
Hubel, D.H., T.N. Wiesel and S. LeVay, 1977. Plasticity of ocular dominance
columns in monkey striate cortex, Phil. Trans. R. Soc. Lond. B. 278:377-409.
Linsker, R., 1986. From basic network principles to neural architecture, Proc. Natl.
Acad. Sci. USA 83:7508-7512, 8390-8394, 8779-8783.
Linsker, R., 1988. Self-Organization in a Perceptual Network. IEEE Computer
21:105-117.
Miller, K.D., 1989. Correlation-based models of neural development, to appear in
Neuroscience and Connectionist Theory (M.A. Gluck & D.E. Rumelhart, Eds.), Hillsdale, NJ: Lawrence Erlbaum Associates.
Miller, K.D., J.B. Keller & M.P. Stryker, 1986. Models for the formation of ocular
dominance columns solved by linear stability analysis, Soc. Neurosc. Abst.
12:1373.
Miller, K.D., J.B. Keller & M.P. Stryker, 1989. Ocular dominance column development: analysis and simulation. Submitted for publication.
Miller, K.D. & M.P. Stryker, 1989. The development of ocular dominance columns:
mechanisms and models, to appear in Connectionist Modeling and Brain
Function: The Developing Interface (S. J. Hanson & C. R. Olson, Eds.),
MIT Press/Bradford.
Sanger, T.D., 1989. An optimality principle for unsupervised learning, this volume.
Swindale, N. V., 1980. A model for the formation of ocular dominance stripes, Proc.
R. Soc. Lond. B. 208:265-307.
Wiesel, T.N. & D.H. Hubel, 1965. Comparison of the effects of unilateral and
bilateral eye closure on cortical unit responses in kittens, J. Neurophysiol.
28:1029-1040.
264 | 1,240 | Predicting Lifetimes in Dynamically
Allocated Memory
David A. Cohn
Adaptive Systems Group
Harlequin, Inc.
Menlo Park, CA 94025
Satinder Singh
Department of Computer Science
University of Colorado
Boulder, CO 80309
cohn@harlequin.com
baveja@cs.colorado.edu
Abstract
Predictions of lifetimes of dynamically allocated objects can be used
to improve time and space efficiency of dynamic memory management in computer programs. Barrett and Zorn [1993] used a simple
lifetime predictor and demonstrated this improvement on a variety
of computer programs. In this paper, we use decision trees to do
lifetime prediction on the same programs and show significantly
better prediction . Our method also has the advantage that during
training we can use a large number of features and let the decision
tree automatically choose the relevant subset.
1 INTELLIGENT MEMORY ALLOCATION
Dynamic memory allocation is used in many computer applications. The application requests blocks of memory from the operating system or from a memory
manager when needed and explicitly frees them up after use. Typically, all of these
requests are handled in the same way, without any regard for how or for how long
the requested block will be used . Sometimes programmers use runtime profiles to
analyze the typical behavior of their program and write special purpose memory
management routines specifically tuned to dominant classes of allocation events.
Machine learning methods offer the opportunity to automate the process of tuning
memory management systems.
In a recent study, Barrett and Zorn [1993] used two allocators: a special allocator
for objects that are short-lived, and a default allocator for everything else. They
tried a simple prediction method on a number of public-domain , allocation-intensive
programs and got mixed results on the lifetime prediction problem. Nevertheless,
they showed that for all the cases where they were able to predict well, their strategy
of assigning objects predicted to be short-lived to the special allocator led to savings
in program running times. Their results imply that if we could predict well in all
cases we could get similar savings for all programs. We concentrate on the lifetime
prediction task in this paper and show that using axis-parallel decision trees does
indeed lead to significantly better prediction on all the programs studied by Zorn and
Grunwald and some others that we included. Another advantage of our approach
is that we can use a large number of features about the allocation requests and let
the decision tree decide on their relevance.
There are a number of advantages of using lifetime predictions for intelligent memory management. It can improve CPU usage, by using special-purpose allocators,
e.g., short-lived objects can be allocated in small spaces by incrementing a pointer
and deallocated together when they are all dead. It can decrease memory fragmentation, because the short-lived objects do not pollute the address space of long lived
objects. Finally, it can improve program locality, and thus program speed, because
the short-lived objects are all allocated in a small part of the heap.
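As a sketch of what such a special-purpose allocator looks like, the C fragment below implements the pointer-bumping idea just described. The arena size, alignment and names are our own illustrative choices, not code from any of the systems discussed here.

    /* Minimal bump allocator: short-lived objects are carved out of one
     * region by advancing a pointer, and the whole region is recycled at
     * once when they are all dead. */
    #include <stddef.h>

    #define ARENA_SIZE (1 << 20)          /* 1 MB region; assumed size */

    static char   arena[ARENA_SIZE];
    static size_t arena_top = 0;

    void *arena_alloc(size_t n)
    {
        n = (n + 7) & ~(size_t)7;         /* keep 8-byte alignment */
        if (arena_top + n > ARENA_SIZE)
            return NULL;                  /* caller falls back to the default allocator */
        void *p = arena + arena_top;
        arena_top += n;                   /* "incrementing a pointer" */
        return p;
    }

    void arena_reset(void)                /* deallocate all short-lived objects together */
    {
        arena_top = 0;
    }

Because no per-block headers or free lists are maintained, allocation costs a few instructions and deallocation is a single assignment.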
The advantages of prediction must be weighed against the time required to examine
each request and make that prediction about its intended use. It is frequently
argued that, as computers and memory become faster and cheaper, we need to
be less concerned about the speed and efficiency of machine learning algorithms.
When the purpose of the algorithm is to save space and computation, however,
these concerns are paramount.
1.1 RELATED WORK
Traditionally, memory management has been relegated to a single, general-purpose
allocator. When performance is critical, software developers will frequently build a
custom memory manager which they believe is tuned to optimize the performance
of the program. Not only is this hand construction inefficient in terms of the programming time required, this "optimization" may seriously degrade the program's
performance if it does not accurately reflect the program's use [Wilson et al., 1995].
Customalloc [Grunwald and Zorn, 1992] monitors program runs on benchmark inputs to determine the most commonly requested block sizes. It then produces a
set of memory allocation routines which are customized to efficiently allocate those
block sizes. Other memory requests are still handled by a general purpose allocator.
Barrett and Zorn [1993] studied lifetime prediction based on benchmark inputs. At
each allocation request, the call graph (the list of nested procedure/function calls in
effect at the time) and the object size was used to identify an allocation site. If all
allocations from a particular site were short-lived on the benchmark inputs, their
algorithm predicted that future allocations would also be short-lived. Their method
produced mixed results at lifetime prediction, but demonstrated the savings that
such predictions could bring.
In this paper, we discuss an approach to lifetime prediction which uses learned
decision trees . In the next section, we first discuss the identification of relevant
state features by a decision tree. Section 3 discusses in greater detail the problem
of lifetime prediction. Section 4 describes the empirical results of applying this
approach to several benchmark programs, and Section 5 discusses the implications
of these results and directions for future work.
2 FEATURE SELECTION WITH A DECISION TREE
Barrett and Zorn's approach captures state information in the form of the program's
call graph at the time of an allocation request, which is recorded to a fixed predetermined depth. This graph, plus the request size, specifies an allocation "site";
statistics are gathered separately for each site. A drawback of this approach is that
it forces a division for each distinct call graph, preventing generalization across irrelevant features. Computationally, it requires maintaining an explicit call graph
(information that the program would not normally provide), as well as storing a
potentially large table of call sites from which to make predictions. It also ignores
other potentially useful information, such as the parameters of the functions on the
call stack, and the contents of heap memory and the program registers at the time
of the request.
Ideally, we would like to examine as much of the program state as possible at the
time of each allocation request, and automatically extract those pieces of information that best allow predicting how the requested block will be used. Decision tree
algorithms are useful for this sort of task. A decision tree divides inputs on the basis
of how each input feature improves "purity" of the tree's leaves. Inputs that are
statistically irrelevant for prediction are not used in any splits; the tree's final set
of decisions examine only input features that improve its predictive performance.
Regardless of the parsimony of the final tree however, training a tree with the entire
program state as a feature vector is computationally infeasible. In our experiments,
detailed below, we arbitrarily used the top 20 words on the stack, along with the
request size, as an approximate indicator of program state. On the target machine
(a Sparcstation) , we found that including program registers in the feature set made
no significant difference, and so dropped them from consideration for efficiency.
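A minimal sketch of this instrumentation in C follows. Walking raw stack words is inherently platform-dependent (and not strictly portable C), so the snippet illustrates the shape of the feature vector rather than reproducing the tracing code actually used; all names are ours.

    /* Record the feature vector for one allocation request: the request
     * size followed by the top 20 machine words of the stack.  The stack
     * walk below is a platform-dependent illustration only. */
    #include <stdio.h>

    #define STACK_WORDS 20

    void log_features(FILE *trace, size_t request_size)
    {
        unsigned long anchor;                  /* its address marks the current stack top */
        unsigned long *sp = &anchor;

        fprintf(trace, "%zu", request_size);
        for (int i = 0; i < STACK_WORDS; i++)
            fprintf(trace, " %lu", sp[i]);     /* raw words adjacent to the stack top */
        fputc('\n', trace);
    }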
3 LIFETIME PREDICTION
The characteristic of memory requests that we would like to predict is the lifetime
of the block - how long it will be before the requested memory is returned to the
central pool. Accurate lifetime prediction lets one segregate memory into short-term, long-term and permanent storage. To this end, we have used a decision tree
learning algorithm to derive rules that distinguish "short-lived" and "permanent"
allocations from the general pool of allocation requests.
For short-lived blocks, one can create a very simple and efficient allocation scheme
[Barrett and Zorn, 1993]. For "permanent" blocks, allocation is also simple and
cheap , because the allocator does not need to compute and store any of the information that would normally be required to keep track of the block and return it to
the pool when freed .
One complication is that of unequal loss for different types of incorrect predictions.
An appropriately routed memory request may save dozens of instruction cycles, but
an inappropriately routed one may cost hundreds. The cost in terms of memory
may also be unequal: a short-lived block that is incorrectly predicted to be "permanent" will permanently tie up the space occupied by the block (if it is allocated
via a method that can not be freed). A "permanent" block, however, that is incorrectly predicted to be short-lived may pollute the allocator's short-term space
by preventing a large segment of otherwise free memory from being reclaimed (see
Barrett and Zorn for examples).
These risks translate into a time-space tradeoff that depends on the properties of
the specific allocators used and the space limitations of the target machine. For our
experiments, we arbitrarily defined false positives and false negatives to have equal
loss, except where noted otherwise. Other cases may be handled by reweighting
the splitting criterion, or by rebalancing the training inputs (as described in the
following section).
4 EXPERIMENTS
We conducted two types of experiments. The first measured the ability of learned
decision trees to predict allocation lifetimes. The second incorporated these learned
trees into the target applications and measured the change in runtime performance.
4.1 PREDICTIVE ACCURACY
We used the OC1 decision tree software (designed by Murthy et al. [1994]) and
considered only axis-parallel splits, in effect, conditioning each decision on a single
stack feature. We chose the sum minority criterion for splits, which minimizes the
number of training examples misclassified after the split. For tree pruning, we used
the cost complexity heuristic, which holds back a fraction (in our case 10%) of
the data set for testing, and selects the smallest pruning of the original tree that is
within one standard error squared of the best tree [Breiman et al. 1984]. The details
of these and other criteria may be found in Murthy et al. [1994] and Breiman et al.
[1984]. In addition to the automatically-pruned trees, we also examined trees that
had been truncated to four leaves, in effect examining no more than two features
before making a decision.
OC1 includes no provisions for explicitly specifying a loss function for false positive
and false negative classifications. It would be straightforward to incorporate this
into the sum minority splitting criterion; we chose instead to incorporate the loss
function into the training set itself, by duplicating training examples to match
the target ratios (in our case, forcing an equal number of positive and negative
examples).
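The duplication scheme can be as simple as the following C sketch, which replicates each minority-class example until the two classes contribute roughly equal counts. The Example layout and the integer replication ratio are illustrative assumptions.

    #include <stddef.h>

    typedef struct { double features[21]; int label; } Example; /* 20 stack words + size */

    /* Copy the training set into `out`, repeating each minority-class
     * example `reps` times; `out` must hold n + n_minor*(reps-1) entries. */
    size_t rebalance(const Example *in, size_t n, size_t n_minor,
                     int minor_label, Example *out)
    {
        size_t reps = (n - n_minor) / n_minor;  /* integer ratio; assumed >= 1 */
        size_t m = 0;
        for (size_t i = 0; i < n; i++) {
            size_t k = (in[i].label == minor_label) ? reps : 1;
            while (k--)
                out[m++] = in[i];
        }
        return m;                               /* new training-set size */
    }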
In our experiments, we used the set of benchmark applications reported on by
Barrett and Zorn: Ghostscript, a PostScript interpreter; Espresso, a PLA logic
optimizer; Cfrac, a program for factoring large numbers; Gawk, an AWK programming language interpreter; and Perl, a report extraction language. We also
examined Gcc, a public-domain C compiler, based on our company's specific interest in compiler technology.
The experimental procedure was as follows: We linked the application program with
a modified malloc routine which, in addition to allocating the requested memory,
wrote to a trace file the size of the requested block, and the top 20 machine words
on the program stack. Calls to free allowed tagging the existing allocations, which,
following Barrett and Zorn, were labeled according to how many bytes had been
allocated during their lifetime.[1]
It is worth noting that these experiments were run on a Sparcstation, which fre-
quently optimizes away the traditional stack frame. While it would have been
possible to force the system to maintain a traditional stack, we wished to work
from whatever information was available from the program "in the wild" , without
overriding system optimizations.
[1] We have also examined, with comparable success, predicting lifetimes in terms of the
number of intervening calls to malloc, which may be argued to be an equally useful measure.
We focus on number of bytes for the purposes of comparison with the existing literature.
Input files were taken from the public ftp archive made available by Zorn and
Grunwald [1993]. Our procedure was to take traces of three of the files (typically
the largest three for which we could store an entire program trace). Two of the
traces were combined to form a training set for the decision tree , and the third was
used to test the learned tree.
Ghostscript training files: manual.ps and large.ps; test file: ud-doc.ps
Espresso training files: cps and mlp4; test file: Z5xp1
Cfrac training inputs: 41757646344123832613190542166099121 and 327905606740421458831903; test input: 417576463441248601459380302877
Gawk training file: adj.awk/words-small.awk; test file: adj.awk/words-large.awk [2]
Perl training files: endsort.perl (endsort.perl as input), hosts.perl (hosts-data.perl as input); test file: adj.perl (words-small.awk as input)
Gcc training files: cse.c and combine.c; test file: expr.c
4.1.1 SHORT-LIVED ALLOCATIONS
First, we attempted to distinguish short-lived allocations from the general pool.
For comparison with Barrett and Zorn [1993], we defined "short-lived" allocations
as those that were freed before 32k subsequent bytes had been allocated. The
experimental results of this section are summarized in Table 1.
application     Barrett & Zorn                  OC1
                false pos %    false neg %      false pos %       false neg %
ghostscript     25.2           0.006            0.13  (0.72)      1.7   (13.5)
espresso        72             3.65             6.58  (14.9)      0.38  (1.39)
cfrac           52.7           - [3]            16.9  (19.4)      2.5   (0.49)
gawk            0              1.11             0.092 (0.092)     0.34  (0.34)
perl            78.6           ?                5.32  (10.8)      33.8  (34.3)
gcc             -              -                0.85  (2.54)      31.1  (31.0)

Table 1: Prediction errors for "short-lived" allocations, in percentages of misallocated bytes. Values in parentheses are for trees that have been truncated to two
levels. Barrett and Zorn's results are included for comparison where available.
4.1.2 "PERMANENT" ALLOCATIONS
We then attempted to distinguish "permanent" allocations from the general pool
(Barrett and Zorn only consider the short-lived allocations discussed in the previous
section). "Permanent" allocations were those that were not freed until the program
terminated. Note that there is some ambiguity in these definitions - a "permanent"
block that is allocated near the end of the program's lifetime may also be "short-lived". Table 2 summarizes the results of these experiments.
We have not had the opportunity to examine the function of each of the "relevant
features" in the program stacks; this is a subject for future work .
[2] For Gawk, we varied the training to match that used by Barrett and Zorn. They used
as training input a single gawk program file run with one data set, and tested on the same
gawk program run with another.
[3] We were unable to compute Barrett and Zorn's exact results here, although it appears
that their false negative rate was less than 1%.
application     false pos %    false neg %
ghostscript     0              0.067
espresso        0              1.27
cfrac           0.019          3.3
gcc             0.35           19.5

Table 2: Prediction errors for "permanent" allocations (% misallocated bytes).
4.2 RUNTIME PERFORMANCE
The raw error rates we have presented above indicate that it is possible to make
accurate predictions about the lifetime of allocation requests, but not whether those
predictions are good enough to improve program performance. To address that
question, we have incorporated predictive trees into three of the above applications
and measured the effect on their runtimes.
We used a hybrid implementation, replacing the single monolithic decision tree with
a number of simpler, site-specific trees. A "site" in this case was a lexical instance
of a call to malloc or its equivalent. When allocations from a site were exclusively
short-lived or permanent, we could directly insert a call to one of the specialized
allocators (in the manner of Barrett and Zorn). When allocations from a site were
mixed, a site-specific tree was put in place to predict the allocation lifetime.
Requests predicted to be short-lived were routed to a "quick malloc" routine similar
to the one described by Barrett and Zorn; those predicted to be permanent were
routed to another routine specialized for the purpose. On tests with random data
these specialized routines were approximately four times faster than "malloc".
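A sketch of this routing in C is given below. The two-level stump mirrors the truncated trees of Section 4.1; the specialized allocators are assumed to exist (one was sketched in Section 1), and all thresholds, feature indices and names are illustrative.

    #include <stddef.h>
    #include <stdlib.h>

    void *quick_malloc(size_t n);      /* short-lived arena; assumed available */
    void *permanent_malloc(size_t n);  /* skips bookkeeping for free(); assumed */

    typedef enum { SHORT_LIVED, PERMANENT, MIXED } Lifetime;

    typedef struct {                   /* one axis-parallel split (a two-leaf tree) */
        int           feature;         /* index into the stack-word feature vector */
        unsigned long threshold;
        Lifetime      below, above;
    } Stump;

    static Lifetime predict(const Stump *t, const unsigned long *stack_words)
    {
        return stack_words[t->feature] <= t->threshold ? t->below : t->above;
    }

    /* Allocation entry point for one lexical call site. */
    void *site_malloc(const Stump *site_tree, const unsigned long *stack_words,
                      size_t n)
    {
        switch (predict(site_tree, stack_words)) {
        case SHORT_LIVED: return quick_malloc(n);
        case PERMANENT:   return permanent_malloc(n);
        default:          return malloc(n);   /* unsure: use the general allocator */
        }
    }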
Our experiments targeted three applications with varying degrees of predictive accuracy: ghostscript, gcc, and cfrac. The results are encouraging (see Table 3). For
ghostscript and gcc, which have the best predictive accuracies on the benchmark
data (from Section 4.1), we had a clear improvement in performance. For cfrac,
with much lower accuracy, we had mixed results: for shorter runs, the runtime performance was improved, but on longer runs there were enough missed predictions
to pollute the short-lived memory area and degrade performance.
5 DISCUSSION
The application of machine learning to computer software and operating systems
is a largely untapped field with promises of great benefit. In this paper we have
described one such application, producing efficient and accurate predictions of the
lifetimes of memory allocations.
Our data suggest that, even with a feature set as large as a runtime program stack,
it is possible to characterize and predict the memory usage of a program after only
a few benchmark runs. The exceptions appear to be programs like Perl and gawk
which take both a script and a data file. Their memory usage depends not only
upon characterizing typical scripts, but the typical data sets those scripts act upon.[4]
[4] Perl's generalization performance is significantly better when tested on the same script
with different data. We have reported the results using different scripts for comparison
with Barrett and Zorn.
application / test file                    benchmark test error (training set)       run time
                                           short        permanent     long           normal    predictive
ghostscript, trained on ud-doc.ps; 7 sites, 1 tree
  manual.ps                                16/256432    0/3431        0/0            96.29     95.43
  large.ps                                                                           17.22     16.75
  thesis.ps                                                                          40.27     37.57
gcc, trained on combine, cse, c-decl; 17 sites, 4 trees
  expr.c                                   0/11988      2786/11998    301/536875     12.59     12.40
  loop.c                                                                             5.16      5.16
  reload1.c                                                                          7.02      6.81
cfrac, trained on 100...057; 8 sites, 4 trees
  327...903                                24/7970099   13172/22332   106/271        7.75      7.23
  417...771                                                                          67.93     74.57
  417...121                                                                          225.31    245.64

Table 3: Running times in seconds for applications with site-specific trees. Times
shown are averages over 24-40 runs, and with the exception of loop.c, are statistically
significant with probability greater than 99%.
Our ongoing research in memory management is pursuing a number of other continuations of the results described here, including lifetime clustering and intelligent
garbage collection.
REFERENCES
D. Barrett and B. Zorn (1993) Using lifetime predictors to improve memory
allocation performance. SIGPLAN'93 - Conference on Programming Language
Design and Implementation, June 1993, Albuquerque, New Mexico, pp. 187-196.
L. Breiman, J. Friedman, R. Olshen and C. Stone (1984) Classification and
Regression Trees, Wadsworth International Group, Belmont, CA.
D. Grunwald and B. Zorn (1992) CUSTOMALLOC: Efficient synthesized memory allocators. Technical Report CU-CS-602-92, Dept. of Computer Science, University of Colorado.
S. Murthy, S. Kasif and S. Salzberg (1994) A system for induction of oblique
decision trees. Journal of Artificial Intelligence Research 2:1-32.
P. Wilson, M. Johnstone, M. Neely and D. Boles (1995) Dynamic storage
allocation: a survey and critical review. Proc. 1995 Intn'l Workshop on Memory
Management, Kinross, Scotland, Sept. 27-29, Springer Verlag.
B. Zorn and D. Grunwald (1993) A set of benchmark inputs made publicly
available, in ftp archive ftp.cs.colorado.edu:/pub/misc/malloc-benchmarks/.
265 | 1,241 | Ordered Classes and Incomplete Examples
in Classification
Mark Mathieson
Department of Statistics, University of Oxford
1 South Parks Road, Oxford OX1 3TG, UK
E-mail: mathies@stats.ox.ac.uk
Abstract
The classes in classification tasks often have a natural ordering, and the
training and testing examples are often incomplete. We propose a nonlinear ordinal model for classification into ordered classes. Predictive,
simulation-based approaches are used to learn from past and classify future incomplete examples. These techniques are illustrated by making
prognoses for patients who have suffered severe head injuries.
1 Motivation
Jennett et al. (1979) reported data on patients with severe head injuries. For each patient
some of the information in Table 1 was available shortly after injury. The objective is to
predict the degree of recovery attained within six months as measured by outcome. This
problem exhibits two characteristics that are common in classification tasks: allocation of
examples into classes which have a natural ordering, and learning from past and classifying
future incomplete examples.
2 A Flexible Model for Ordered Classes
The Bayes decision rule (see, for example, Ripley, 1996) depends on the loss L(j, k) incurred in assigning to class k an object belonging to class j. When better information is
unavailable, for unordered or nominal classes we treat every mis-classification as equally
serious: L(j, k) is 0 when j = k and 1 otherwise. For ordered classes, when the K classes
are numbered from 1 to K in their natural order, a better default choice is L(j, k) = |j − k|.
A class is then given support by its position in the ordering, and the Bayes rule will sometimes assign patterns to classes that do not have maximum posterior probability to avoid
making a serious error.
Table 1: Definition of variables with proportion missing.

Variable   Definition                                                           Missing %
age        Age in decades (1=0-9, 2=10-19, ..., 8=70+).
emv        Measure of eye, motor and verbal response to stimulation (1-7).      41
motor      Motor response patterns for all limbs (1-7).                         33
change     Change in neurological function over the first 24 hours (-1,0,+1).   78
eye        Eye indicant. 1 (bad), 2 (impaired), 3 (good).                       65
pupils     Pupil reaction to light. 1 (non-reacting), 2 (reacting).             30
outcome    Recovery after six months based on Glasgow Outcome Scale:
           1 (dead/vegetative), 2 (severe disability), 3 (moderate/good recovery).
If the classes in a classification problem are ordered the ordering should also be reflected
in the probability model. Methods for nominal tasks can certainly be used for ordinal
problems, but an ordinal model should have a simpler parameterization than comparable
nominal models, and interpretation will be easier. Suppose that an example represented by
a row vector X belongs to class C = C(X). To make the Bayes-optimal classification it
is sufficient to know the posterior probabilities p(C = k | X = x). The ordinal logistic regression (OLR) model for K ordered classes models the cumulative posterior class
probabilities p(C ≤ k | X = x) by

    log [ p(C ≤ k | X = x) / (1 − p(C ≤ k | X = x)) ] = φ_k − η(x),    k = 1, ..., K−1,    (1)

for some function η. We impose the constraints φ_1 ≤ ... ≤ φ_{K−1} on the cut-points to
ensure that p(C ≤ k | X = x) increases with k. If φ_0 = −∞ and φ_K = ∞ then (1) gives

    p(C = k | X = x) = σ(φ_k − η(x)) − σ(φ_{k−1} − η(x)),    k = 1, ..., K,

where σ(x) = 1/(1 + e^{−x}). McCullagh (1980) proposed linear OLR where η(x) = xβ.
The posterior probabilities depend on the patterns x only through η, and high values of
η(x) correspond to higher predicted classes (Figure 1a). This can be useful for interpreting
the fitted model. However, linear OLR is rather inflexible since the decision boundaries are
always parallel hyperplanes. Departures from linearity can be accommodated by allowing
η to be a non-linear function of the feature space. We extend OLR to non-linear ordinal
logistic regression (NOLR) by letting η(x) be the single linear output of a feed-forward
neural network with input vector x, having skip-layer connections and sigmoid transfer
functions in the hidden layer (Figure 1b). Then for weights w_ij and biases b_j we have

    η(x) = Σ_{i→o} w_{io} x_(i) + Σ_{j→o} w_{jo} σ( b_j + Σ_{i→j} w_{ij} x_(i) ),

where Σ_{i→j} denotes the sum over i such that node i is connected to node j, and node
o is the single output node. The usual output-unit bias is incorporated in the cut-points.
Observe that OLR is the special case of NOLR with no hidden nodes. Although the network
component of NOLR is a universal approximator the NOLR model cannot approximate all
probability densities arbitrarily well (unlike 'softmax', the most similar nominal method).
The likelihood for the cut-points φ = (φ_1, ..., φ_{K−1}) and network parameters w given a
training set T = {(x_i, c_i) | i = 1, ..., n} of n correctly classified examples is

    L(w, φ) = Π_{i=1}^n p(c_i | x_i) = Π_{i=1}^n [ σ(φ_{c_i} − η(x_i; w)) − σ(φ_{c_i−1} − η(x_i; w)) ].    (2)
[Figure 1 appears here: (a) curves p(1|η) through p(5|η) plotted against the network output η; (b) a shaded map over age (years).]
Figure 1: (a) p(k | η) plotted against η for an OLR model with K = 5 classes and φ =
(−7, −6, −3, −1). (b) The network output η(x) from a NOLR model used to predict change given
all other variables (except outcome) predicts that young patients with high emv scores are likely to
improve over the first 24 hours. While age and emv are varied, other variables are fixed. Dark shading
denotes low values of η(x). The Bayes decision boundaries are shown for loss L(j, k) = |j − k|.
If we estimate the classifier by substituting the maximum likelihood estimates we must
maximize (2) whilst constraining the cut-points to be increasing (Mathieson, 1996). To
avoid over-fitting we regularize both by weight decay (which is equivalent to putting independent Gaussian priors on the network weights) and by imposing independent Gamma priors on the differences between adjacent cut-points. The minimand is now −log L(w, φ) + λ D(w) + E(φ; t, α) with hyperparameters λ > 0, t, α (to be chosen by cross-validation,
for example, or averaged over under a Bayesian scheme), where D(w) = Σ_{i,j} w_{ij}² and

    E(φ) = Σ_{i=2}^{K−1} [ t(φ_i − φ_{i−1}) + (1 − α) log(φ_i − φ_{i−1}) ].
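The forward computation of the fitted model is compact. The C sketch below evaluates p(C = k | x) from a network output η(x) and the cut-points, exactly as in the display following (1); the three-class cut-point values in main() are illustrative only, and η itself would come from the trained network.

    /* NOLR/OLR class probabilities:
     * p(C = k | x) = sigma(phi_k - eta) - sigma(phi_{k-1} - eta),
     * with phi_0 = -infinity and phi_K = +infinity. */
    #include <stdio.h>
    #include <math.h>

    #define K 3                                   /* number of ordered classes */

    static double sigma(double z) { return 1.0 / (1.0 + exp(-z)); }

    static double cum(const double *phi, double eta, int k)
    {   /* p(C <= k | x); phi holds the K-1 increasing cut-points */
        if (k <= 0) return 0.0;
        if (k >= K) return 1.0;
        return sigma(phi[k - 1] - eta);
    }

    static void class_probs(const double *phi, double eta, double *p)
    {
        for (int k = 1; k <= K; k++)
            p[k - 1] = cum(phi, eta, k) - cum(phi, eta, k - 1);
    }

    int main(void)
    {
        double phi[K - 1] = { -2.0, 1.0 };        /* illustrative; must be increasing */
        double p[K];
        class_probs(phi, 0.5, p);                 /* eta(x) = 0.5, say */
        for (int k = 0; k < K; k++)
            printf("p(C=%d|x) = %.3f\n", k + 1, p[k]);
        return 0;
    }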
3 Classification and Incomplete Examples
We now consider simulation-based methods for training diagnostic paradigm classifiers
from incomplete examples, and classifying future incomplete examples. To avoid modelling the missing data we assume that the missing data mechanism is independent of the
missing values given the observed values (missing at random) and that the missing data and
data generation mechanisms are independent (ignorable) (Little & Rubin, 1987). This assumption is rarely true but is usually less damaging than adopting crude ad hoc approaches
to missing values.
3.1 Learning from Incomplete Examples
The training set is T = {(x_i^o, c_i) | i = 1, ..., n} where x_i^o, x_i^u are the observed and
unobserved parts of the i-th example, which belongs to class c_i. Define X^o = {x_i^o |
i = 1, ..., n} and X^u = {x_i^u | i = 1, ..., n}, and use C to denote all the classes, so
T = (X^o, C). We assume that C is fully observed. Under the diagnostic paradigm (which
includes logistic regression and its non-linear and ordinal variants such as 'softmax' and
NOLR) we model p(C | x) by p(C | x; θ), giving the conditional likelihood

    L(θ) = Π_{i=1}^n p(c_i | x_i^o; θ) = Π_{i=1}^n E_{x_i^u|x_i^o} p(c_i | x_i^o, x_i^u; θ)
         = E_{X^u|X^o} Π_{i=1}^n p(c_i | x_i^o, x_i^u; θ)    (3)
when the examples are independent. The model for p(C | x) contains no information about
p(x) and so we construct a model for p(X^u | X^o) separately using T (Section 3.2). Once we
can sample x_{i1}^u, ..., x_{im}^u from p(x_i^u | x_i^o, c_i), a Monte Carlo approximation for L(θ) based
on the last expression of (3) is obtained by averaging over repeated imputations of the missing values
in the training set (Little & Rubin, 1987, and earlier):
    log L(θ) ≈ log ( (1/m) Σ_{j=1}^m Π_{i=1}^n p(c_i | x_i^o, x_{ij}^u; θ) ).    (4)
Existing algorithms for finding maximum likelihood estimates for θ allow maximization of
the individual summands in (4) with respect to θ, but in general the software will require
extensive modification in order to maximize the average. This problem can be avoided if
we approximate the arithmetic average over the imputations by a geometric one, so that
L(θ) ≈ ( Π_j Π_i p(c_i | x_i^o, x_{ij}^u; θ) )^{1/m}. Now the log-posterior averages over the log of the
likelihoods of the completed training sets, so standard estimation algorithms can be used
on a training set formed by pooling all completions of the training set, giving each weight
1/m. The approximation log (1/m) Σ_j P_j ≈ (1/m) Σ_j log P_j has been made, where we define
P_j(θ) = Π_i p(c_i | x_i^o, x_{ij}^u; θ), although in fact log (1/m) Σ_j P_j ≥ (1/m) Σ_j log P_j everywhere.
Suppose that the P_j are well approximated by some function P̄ for the region of interest in
the parameter space. Then in this region

    log (1/m) Σ_j P_j − (1/m) Σ_j log P_j ≈ (1/2m) Σ_j (P_j − P̄)²/P̄² − (1/2m²) Σ_{i,j} (P_i − P̄)(P_j − P̄)/P̄²    (5)
and so the approximation will be reasonable when the imputed values have little effect
on the likelihood of the completed training sets. Note that the approximation cannot be
improved by increasing m; (5) does not tend to zero as m → ∞. The relative effects on
the likelihood of making this approximation and the Monte Carlo approximation (4) will
be problem specific and dependent on m.
The predictive approach (Ripley, 1996, for example) incorporates uncertainty in θ by estimating p(c | x) as p̂(c | x) = E_{θ|T} p(c | x; θ). Changing the order of integration gives

    p̂(c | x) = ∫ p(c | x; θ) p(θ | T) dθ
             ∝ ∫ p(c | x; θ) p(θ) Π_{i=1}^n E_{x_i^u|x_i^o} p(c_i | x_i^o, x_i^u; θ) dθ
             = E_{X^u|X^o} ∫ p(c | x; θ) p(θ) Π_{i=1}^n p(c_i | x_i^o, x_i^u; θ) dθ    (6)
This justifies applying standard techniques for complete data to build a separate classifier
using each completed training set, and then averaging the posterior class probabilities that
they predict. The integral over θ in (6) will usually require approximation; in particular we
could average over plug-in estimates to obtain p̂(c | x) ≈ (1/m) Σ_{j=1}^m p(c | x; θ̂_j), where θ̂_j is
the MAP estimate of θ based only on the j-th imputed training set. A more subtle approach
Table 2: Classifier performance on 301 complete test examples. See Section 4.

Training set                                                                Test set loss
40 complete training examples only                                          132
40 complete + 206 incomplete training examples:
  * Median imputation (in each variable, substitute the median for
    missing values whenever they occur)                                     149
  * Averaging predicted probabilities over 1000 completions of T
    generated by:
      - Unconditional imputation (sample missing values from the
        empirical distribution of each variable in the training set)        133
      - Gibbs sampling from p(X^u | X^o; ψ̂)                                 118
  * Pool the 1000 completions from the line above to form a single
    training set                                                            117
(Ripley, 1994) approximates each posterior by a mixture of Gaussians centred at the local
maxima θ̂_{j1}, ..., θ̂_{jR_j} of p(θ | T, X_j^u) to give

    p(θ | T, X_j^u) ≈ (Σ_r w_{jr})^{-1} Σ_r w_{jr} N(θ; θ̂_{jr}, −H_{jr}^{-1}),    (7)

where N(·; μ, Σ) is the Gaussian density function with mean μ and covariance matrix
Σ, the Hessian H_{jr} = ∂² log p(θ | T, X_j^u)/∂θ∂θᵀ is evaluated at θ̂_{jr} and, using Laplace's
approximation, w_{jr} = p(θ̂_{jr} | T, X_j^u) |H_{jr}|^{−1/2}. We can average over the maxima to get
p̂(c | x) ≈ (Σ_{j,r} w_{jr})^{-1} Σ_{j,r} w_{jr} p(c | x; θ̂_{jr}), but the full-blooded approach samples from
the 'mixture of mixtures' approximation to p(θ | T) and also uses importance sampling to
compute the predictive estimates p̂.
3.2 The Imputation Model
We need samples from p(x_i^u | x_i^o, c_i) for each i. When many patterns of missing values occur it is not practical to model p(x^u | x^o, c) for each pattern, but Markov chain
Monte Carlo methods can be employed. The Gibbs sampler is convenient and in its most
basic form requires models for the distribution of each element of x given the others,
that is p(x_(j) | x_(−j), c) where x_(−j) = (x_(1), ..., x_(j−1), x_(j+1), ..., x_(p)). We model
these full conditionals parametrically as p(x_(j) | x_(−j), c; ψ) and assume here that the parameters for each of the full conditionals are disjoint, so p(x_(j) | x_(−j), c; ψ_(j)) where
ψ = (ψ_(1), ..., ψ_(p)). When x_(j) takes discrete values this is a classification task, and
for continuous values a regression problem. Under certain conditions the chain of dependent samples of x^u converges in distribution to p(x^u | x^o, ψ) and the ergodic average
of p(c | x^o, x^u) converges as required to the predictive estimate p̂(c | x^o). We usually
take every w-th sample to provide a cover of the space in fewer samples, reducing the computation required to learn the classifier. It is essential to check convergence of the Gibbs
sampler although we do not give details here.
If we have sufficient complete examples we might use them to estimate ψ by ψ̂ and
Gibbs sample from p(x^u | x^o; ψ̂). Otherwise, in the Bayesian framework, incorporate ψ
into the sampling scheme by Gibbs sampling from p(ψ, x^u | x^o) (the solution suggested
by Li, 1988). In the head injury example we report results using the former approach. (The
latter was found to make little improvement and requires considerably more computation
time.)
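Structurally the sampler is a loop over the missing coordinates. The C sketch below shows one such scheme with the fitted full conditionals supplied as callbacks; burn-in length, thinning and all names are illustrative assumptions, and convergence checking is omitted.

    /* One Gibbs sweep: redraw each missing x_(j) from its full
     * conditional p(x_(j) | x_(-j), c; psi_(j)), supplied as a callback. */
    typedef double (*CondSampler)(const double *x, int j, int c);

    static void gibbs_sweep(double *x, const int *is_missing, int p, int c,
                            const CondSampler *sample)
    {
        for (int j = 0; j < p; j++)
            if (is_missing[j])
                x[j] = sample[j](x, j, c);   /* uses current values of x_(-j) */
    }

    /* Run `burn` sweeps, then emit every w-th of the next m*w sweeps,
     * giving m thinned imputations of the missing part of x. */
    void gibbs_impute(double *x, const int *is_missing, int p, int c,
                      const CondSampler *sample, int burn, int w, int m,
                      void (*emit)(const double *x))
    {
        for (int t = 0; t < burn + m * w; t++) {
            gibbs_sweep(x, is_missing, p, c, sample);
            if (t >= burn && (t - burn + 1) % w == 0)
                emit(x);
        }
    }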
Table 3: Predictive approximations for a NOLR model fitted to a single completion T, X^u of the
training set. The likelihood maxima at θ̂_1 and θ̂_2 account for over 0.99 of the posterior probability.

                                                   θ̂_1      θ̂_2      Predictive
Posterior probability                              0.929    0.071
−log p(θ̂_i | T, X^u)                               176.10   174.65
Test set loss:
  using the plug-in classifier p(c | x; θ̂_i)       128      149      126
  averaging over 10,000 samples from Gaussian      120      137      119

3.3 Classifying Incomplete Examples
We could build a separate classifier for each pattern of missing data that occurs, but this
can be computationally expensive, will lose information and the classifiers need not make
consistent predictions. We know that p(c | x^o) = E_{x^u|x^o} p(c | x^o, x^u), so it seems better
to classify x^o by averaging over repeated imputations of x^u from the imputation model.
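The classification step is then a plain Monte Carlo average, as in the C sketch below; the imputation sampler and the fitted classifier are passed in as callbacks, and everything else is an illustrative assumption.

    /* Classify an incomplete example: average p(c | x^o, x^u) over m
     * imputations of x^u drawn from the imputation model. */
    typedef void (*ImputeFn)(const double *x_obs, double *x_full);       /* fills in x^u */
    typedef void (*ClassProbFn)(const double *x_full, double *p, int K); /* p(c | x) */

    void predict_incomplete(const double *x_obs, int K, int m,
                            ImputeFn impute, ClassProbFn class_prob,
                            double *x_full, double *p, double *p_avg)
    {
        for (int k = 0; k < K; k++) p_avg[k] = 0.0;
        for (int j = 0; j < m; j++) {
            impute(x_obs, x_full);              /* draw x^u | x^o */
            class_prob(x_full, p, K);
            for (int k = 0; k < K; k++)
                p_avg[k] += p[k] / m;           /* running average over imputations */
        }
    }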
4 Prognosis After Head Injury
We now return to the head injury prognosis example to learn a NOLR classifier from a
training set containing 40 complete and 206 incomplete examples. The NOLR architecture (4 nodes, skip-layer connections and λ = 0.01) was selected by cross-validation on
a single imputation of the training set, and we use a predictive approximation.[1] Table 2
shows the performance of this classifier on a test set of 301 complete examples and loss
L(j, k) = |j − k| for different strategies for dealing with the missing values. For imputation
by Gibbs sampling we modelled each of the full conditionals using NOLR because all variables in this dataset are ordinal. Categorical inputs to models are put in as level indicators,
so change corresponds to two indicators taking values (0,0), (1,0) and (1,1). Throughout
this example we predict age, emv and motor as categorical variables but treat them as continuous inputs to models. Models were selected by cross-validation based on the complete
training examples only and used the predictive approximation described above. Several full
conditionals benefited from a non-linear model.
We now classify 199 incomplete test examples using the classifier found in the last line
of Table 2. Median imputation of missing values in the test set incurs loss 132 whereas
unconditional imputation incurs loss 106. The Gibbs sampling imputation model incurs
loss 91 and is predicting probabilities accurately (Figure 2). Michie et al. (1994) and
references therein give alternative analyses of the head injury data.
NOLR has provided an interpretable network model for ordered classes, the missing data
strategy successfully learns from incomplete training examples and classifies incomplete
future examples, and the predictive approach is beneficial.
[1] For each completion T, X_j^u of the training set we form a mixture approximation (7) to p(θ |
T, X_j^u), sample from this 10,000 times and average the predicted probabilities. These predictions are
averaged over completions. Maxima were found by running the optimizer 50 times from randomized
starting weights. Up to 26 distinct maxima were found and approximately 5 generally accounted
for over 95% of the posterior probability in most cases. Table 3 gives an example: averaging over
maxima has greater effect than sampling around them, although both are useful. The cut-points φ
in the NOLR model must satisfy order constraints, so we rejected samples of θ where these did not
hold. However, the parameters were sufficiently well determined that this occurred in less than 0.5%
of samples.
[Figure 2 appears here; visible corner labels include "severe disability" and "good recovery".]
Figure 2: (a) Test set calibration for median imputation (dashed) and conditional imputation (solid).
For predictions by conditional imputation we average p(c | x^o, x^u) over 100 pseudo-independent
samples from p(x^u | x^o). Ticks on the lower (upper) axis denote predicted probabilities for the
test examples using median (conditional) imputation. (b) In 100 pseudo-independent conditional
imputations of the missing parts x^u of a particular incomplete test example, eight distinct values x_i^u
(i = 1, ..., 8) occur. (Recall that all components of x are discrete.) For each distinct imputation
we plot a circle with centre corresponding to (p(1 | x^o, x_i^u), p(2 | x^o, x_i^u), p(3 | x^o, x_i^u)) and
area proportional to the number of occurrences of x_i^u in the 100 imputations. The prediction by
median imputation is located by x; the average prediction over conditional imputations is located
by the second marker. Actual outcome is 'good recovery'. The conditional method correctly classifies the example
and shows that the example is close to the Bayes decision boundary under loss L(j, k) = |j − k|
(dashed). Median imputation results in a confident and incorrect classification.
Software: A software library for fitting NOLR models in S-Plus is available at URL
http://www.stats.ox.ac.uk/~mathies
Acknowledgements: The author thanks Brian Ripley for productive discussions of this
work and Gordon Murray for permission to use the head injury dataset. This research was
funded by the UK EPSRC and investment managers GMO Woolley Ltd.
References
Jennett, B., Teasdale, G., Braakman, R., Minderhoud, J., Heiden, J. & Kurze, T. (1979)
Prognosis of patients with severe head injury. Neurosurgery, 4, 782-790.
Li, K.-H. (1988) Imputation using Markov chains. Journal of Statistical Computation and
Simulation, 30, 57-79.
Little, R. & Rubin, D. B. (1987) Statistical Analysis with Missing Data. (Wiley, New York).
Mathieson, M. J. (1996) Ordinal models for neural networks. In Neural Networks in Financial Engineering, eds A.-P. Refenes, Y. Abu-Mostafa, J. Moody and A. S. Weigend
(World Scientific, Singapore) 523-536.
McCullagh, P. (1980) Regression models for ordinal data. Journal of the Royal Statistical
Society Series B, 42, 109-142.
Michie, D., Spiegelhalter, D . J. & Taylor, C. C. (eds) (1994) Machine Learning, Neural and
Statistical Classification. (Ellis Horwood, New York).
Ripley, B. D. (1994) Flexible non-linear approaches to classification. In From Statistics
to Neural Networks. Theory and Pattern Recognition Applications, eds V. Cherkassky,
J. H. Friedman and H. Wechsler (Springer Verlag, New York) 108-126.
Ripley, B. D. (1996) Pattern Recognition and Neural Networks. (Cambridge University
Press, Cambridge).
Unsupervised Learning by
Convex and Conic Coding
D. D. Lee and H. S. Seung
Bell Laboratories, Lucent Technologies
Murray Hill, NJ 07974
{ddlee|seung}@bell-labs.com
Abstract
Unsupervised learning algorithms based on convex and conic encoders are proposed. The encoders find the closest convex or conic
combination of basis vectors to the input. The learning algorithms
produce basis vectors that minimize the reconstruction error of the
encoders. The convex algorithm develops locally linear models of
the input, while the conic algorithm discovers features. Both algorithms are used to model handwritten digits and compared with
vector quantization and principal component analysis. The neural
network implementations involve feedback connections that project
a reconstruction back to the input layer.
1 Introduction
Vector quantization (VQ) and principal component analysis (PCA) are two widely
used unsupervised learning algorithms, based on two fundamentally different ways
of encoding data. In VQ, the input is encoded as the index of the closest prototype
stored in memory. In PCA, the input is encoded as the coefficients of a linear
superposition of a set of basis vectors. VQ can capture nonlinear structure in
input data, but is weak because of its highly localized or "grandmother neuron"
representation. Many prototypes are typically required to adequately represent the
input data when the number of dimensions is large. On the other hand, PCA uses a
distributed representation so it needs only a small number of basis vectors to model
the input. Unfortunately, it can only model linear structures.
Learning algorithms based on convex and conic encoders are introduced here. These
encoders are less constrained than VQ but more constrained than peA. As a result,
Figure 1: The affine, convex, and conic hulls for two basis vectors.
they are able to produce sparse distributed representations that are efficient to compute. The resulting learning algorithms can be understood as approximate matrix
factorizations and can also be implemented as neural networks with feedforward
and feedback connections between neurons.
2 Affine, convex, conic, and point encoding
Given a set of basis vectors {w_a}, the linear combination Σ_{a=1}^r v_a w_a is called affine if Σ_a v_a = 1; convex if Σ_a v_a = 1 and v_a ≥ 0; and conic if v_a ≥ 0.
The complete set of affine, convex, and conic combinations are called respectively the
affine, convex, and conic hulls of the basis. These hulls are geometrically depicted in
Figure 1. The convex hull contains only interpolations of the basis vectors, whereas
the affine hull contains not only the convex hull but also linear extrapolations. The
conic hull also contains the convex hull but is not constrained to stay within the
set La Va = 1. It extends to any nonnegative combination of the basis vectors and
forms a cone in the vector space.
Four encoders are considered in this paper. The convex and conic encoders are
novel, and find the nearest point to the input x in the convex and conic hull of the
basis vectors. These encoders are compared with the well-known affine and point
encoders. The affine encoder finds the nearest point to x in the affine hull and is
equivalent to the encoding in PCA, while the point encoder or VQ finds the nearest
basis vector to the input. All of these encoders minimize the reconstruction error:

    ‖x − Σ_a v_a w_a‖²    (1)
The constraints on Va for the convex, conic, and affine encoders were described
above. Point encoding can be thought of as a heavily constrained optimization of
Eq. (1): a single v_a must equal unity while all the rest vanish.
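The four encoders can be sketched as follows (not the paper's code; a simple projected-gradient loop stands in for a proper quadratic-programming routine, and the convex case uses a crude clip-and-renormalize step rather than an exact simplex projection):

    import numpy as np

    def point_encode(x, W):
        # VQ: index of the nearest basis vector; one v_a = 1, the rest 0
        a = np.argmin(((W - x[:, None]) ** 2).sum(axis=0))
        v = np.zeros(W.shape[1])
        v[a] = 1.0
        return v

    def affine_encode(x, W):
        # minimize ||x - W v||^2 subject to sum(v) = 1 (KKT linear system)
        r = W.shape[1]
        A = np.zeros((r + 1, r + 1))
        A[:r, :r] = W.T @ W
        A[:r, r] = A[r, :r] = 1.0
        rhs = np.concatenate([W.T @ x, [1.0]])
        return np.linalg.lstsq(A, rhs, rcond=None)[0][:r]

    def conic_encode(x, W, steps=500):
        # projected gradient on ||x - W v||^2 with v >= 0
        v = np.zeros(W.shape[1])
        lr = 1.0 / (np.linalg.norm(W, 2) ** 2 + 1e-12)   # 1 / ||W^T W||
        for _ in range(steps):
            v = np.maximum(v - lr * (W.T @ (W @ v - x)), 0.0)
        return v

    def convex_encode(x, W, steps=500):
        # as conic, plus sum(v) = 1: clip to >= 0 and renormalize each step
        r = W.shape[1]
        v = np.full(r, 1.0 / r)
        lr = 1.0 / (np.linalg.norm(W, 2) ** 2 + 1e-12)
        for _ in range(steps):
            v = np.maximum(v - lr * (W.T @ (W @ v - x)), 0.0)
            s = v.sum()
            v = v / s if s > 0 else np.full(r, 1.0 / r)
        return v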
Efficient algorithms exist for computing all of these encodings. The affine and point
encoders are the fastest. Affine encoding is simply a linear transformation of the
input vector. Point encoding is a nonlinear operation, but is computationally simple
since it involves only a minimum distance computation. The convex and conic
encoders require solving a quadratic programming problem. These encodings are
more computationally demanding than the affine and point encodingsj nevertheless,
polynomial time algorithms do exist. The tractability of these problems is related
to the fact that the cost function in Eq. (1) has no local minima on the convex
domains in question. These encodings should be contrasted with computationally
inefficient ones. A natural modification of the point encoder with combinatorial
expressiveness can be obtained by allowing v to be any vector of zeros and ones [1, 2]. Unfortunately, with this constraint the optimization of Eq. (1) becomes an integer programming problem and is quite inefficient to solve.
The convex and conic encodings of an input generally contain coefficients v_a that
vanish, due to the nonnegativity constraints in the optimization of Eq. (1). This
method of obtaining sparse encodings is distinct from the method of simply truncating a linear combination by discarding small coefficients [3].
3 Learning
There correspond learning algorithms for each of the encoders described above that
minimize the average reconstruction error over an ensemble of inputs. If a training
set of m examples is arranged as the columns of a N x m matrix X, then the learning
and encoding minimization can be expressed as:
    min_{W,V} ‖X − WV‖²    (2)

where ‖X‖² is the summed squares of the elements of X. Learning and encoding
can thus be described as the approximate factorization of the data matrix X into a
N x r matrix W of r basis vectors and a r X m matrix V of m code vectors.
Assuming that the input vectors in X have been scaled to the range [0,1], the
constraints on the optimizations in Eq. (2) are given by:
    Affine:  0 ≤ W_ia ≤ 1,  Σ_a V_aμ = 1
    Convex:  0 ≤ W_ia ≤ 1,  Σ_a V_aμ = 1,  V_aμ ≥ 0
    Conic:   0 ≤ W_ia ≤ 1,  V_aμ ≥ 0
The nonnegativity constraints on W and V prevent cancellations from occurring in
the linear combinations, and their importance will be seen shortly. The upper bound
on W is chosen such that the basis vectors are normalized in the same range as the
inputs X. We noted earlier that the computation for encoding is tractable since
the cost function Eq. (2) is a quadratic function of V. However, when considered
as a function of both W and V, the cost function is quartic and finding its global
minimum for learning can be very difficult. The issue of local minima is discussed
in the following example.
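Before turning to the example, the alternating minimization of Eq. (2) under the Conic constraints can be sketched as follows (iteration counts are illustrative; step sizes are set from spectral norms to keep the projected-gradient steps stable):

    import numpy as np

    def conic_learn(X, r, epochs=50, inner=20, seed=0):
        # X: N x m data matrix scaled to [0, 1]; W: N x r basis; V: r x m codes
        rng = np.random.default_rng(seed)
        N, m = X.shape
        W = rng.uniform(0.0, 1.0, size=(N, r))
        V = rng.uniform(0.0, 0.1, size=(r, m))
        for _ in range(epochs):
            lv = 1.0 / (np.linalg.norm(W, 2) ** 2 + 1e-12)
            for _ in range(inner):                  # encode: V >= 0
                V = np.maximum(V - lv * (W.T @ (W @ V - X)), 0.0)
            lw = 1.0 / (np.linalg.norm(V, 2) ** 2 + 1e-12)
            W = np.clip(W - lw * ((W @ V - X) @ V.T), 0.0, 1.0)   # learn: 0 <= W <= 1
        return W, V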
4 Example: modeling handwritten digits
We applied Affine, Convex, Conic, and VQ learning to the USPS database [4],
which consists of examples of handwritten digits segmented from actual zip codes.
Each of the 7291 training and 2007 test images were normalized to a 16 x 16 grid
VQ
Affine (PCA)
Convex
Conic
Figure 2: Basis vectors for "2" found by VQ, Affine, Convex, and Conic learning.
with pixel intensities in the range [0,1]. There were noticeable segmentation errors
resulting in unrecognizable digits, but these images were left in both the training
and test sets. The training examples were segregated by digit class and separate
basis vectors were trained for each of the classes using the four encodings. Figure 2
shows our results for the digit class "2" with r = 25 basis vectors.
The k-means algorithm was used to find the VQ basis vectors shown in Figure 2.
Because the encoding is over a discontinuous and highly constrained space, there
exist many local minima to Eq. (2). In order to deal with this problem, the algorithm
was restarted with various initial conditions and the best solution was chosen. The
resulting basis vectors look like "2" templates and are blurry because each basis
vector is the mean of a large number of input images.
Affine determines the affine space that best models the input data. As can be seen
in the figure, the individual basis vectors have no obvious interpretation. Although
the space found by Affine is unique, its representation by basis vectors is degenerate. Any set of r linearly independent vectors drawn from the affine space can be
used to represent it. This is due to the fact that the product WV is invariant under
the transformation W → WS and V → S⁻¹V.¹
Convex finds the r basis vectors whose convex hull best fits the input data. The
optimization was performed by alternating between projected gradient steps of W
and V. The constraint that the column sums of V equal unity was implemented
¹Affine is essentially equivalent to PCA, except that they represent the affine space in different ways. Affine represents it with r points chosen from the space. PCA represents the affine space with a single point from the space and r − 1 orthonormal directions. This is still a degenerate representation, but PCA fixes it by taking the point to be the sample
mean and the r - 1 directions to be the eigenvectors of the covariance matrix of X with
the largest eigenvalues.
[Figure 3 panels: Input Image; Convex Coding; Convex Reconstructions; Conic Coding; Conic Reconstructions.]
Figure 3: Activities and reconstructions of a "2" using conic and convex coding.
by adding a quadratic penalty term. In contrast to Affine, the basis vectors are
interpretable as templates and are less blurred than those found by VQ . Many of
the elements of W and also of V are zero at the minimum. This eliminates many invariant transformations S, because they would violate the nonnegativity constraints
on W and V. From our simulations, it appears that most of the degeneracy seen in
Affine is lifted by the nonnegativity constraints.
Conic finds basis vectors whose conic hull best models the input images. The
learning algorithm is similar to Convex except there is no penalty term on the sum
of the activities. The Conic representation allows combinations of basis vectors, not
just interpolations between them. As a result, the basis vectors found are features
rather than templates, as seen in Figure 2. In contrast to Affine, the nonnegativity
constraint leads to features that are interpretable as correlated strokes. As the
number of basis vectors r increases, these features decrease in size until they become
individual pixels.
These models were used to classify novel test images. Recognition was accomplished
by separately reconstructing the test images with the different digit models and associating the image with the model having the smallest reconstruction error. Figure
3 illustrates an example of classifying a "2" using the conic and convex encodings.
The basis vectors are displayed weighted by their activites Va and the sparsity in
the representations can be clearly seen. The bottom part of the figure shows the
different reconstructions generated by the various digit models.
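In code, the recognition rule reads (a sketch; encode can be any of the encoders sketched above):

    import numpy as np

    def classify_by_reconstruction(x, class_bases, encode):
        # class_bases: dict mapping digit label -> basis matrix W for that class;
        # pick the class whose model reconstructs x with the smallest error
        def loss(W):
            v = encode(x, W)
            return np.sum((x - W @ v) ** 2)
        return min(class_bases, key=lambda label: loss(class_bases[label]))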
With r = 25 patterns per digit class, Convex incorrectly classified 113 digits out of
the 2007 test examples for an overall error rate of 5.6%. This is virtually identical
to the performance of k = 1 nearest neighbor (112 errors) and linear r = 25 PCA
models (111 errors). However, scaling up the convex models to r = 100 patterns
results in an error rate of 4.4% (89 errors). This improvement arises because the
larger convex hulls can better represent the overall nonlinear nature of the input
distributions. This is good performance relative to other methods that do not
use prior knowledge of invariances, such as the support vector machine (4.0% [5]).
However, it is not as good as methods that do use prior knowledge, such as nearest
neighbor with tangent distance (2.6% [6]).
On the other hand, Conic coding with r = 25 results in an error rate of 6.8%
(138 errors). With larger basis sets r > 50, Conic shows worse performance as the
features shrink to small spots. These results indicate that by itself, Conic does not
yield good models; non-trivial correlations still remain in the Va and also need to
be taken into account. For instance, while the conic basis for "9" can fit some "7" 's
quite well with little reconstruction error, the codes v_a are distinct from when it fits "9"'s.
5 Neural network implementation
Conic and Convex were described above as matrix factorizations. Alternatively,
the encoding can be performed by a neural network dynamics [7] and the learning
by a synaptic update rule. We describe here the implementation for the Conic
network; the Convex network is similar. The Conic network has a layer of N
error neurons e_i and a layer of r encoding neurons v_a. The fixed point of the
encoding dynamics
    dv_a/dt + v_a = [Σ_{i=1}^N e_i W_ia + v_a]⁺    (3)

    de_i/dt + e_i = x_i − Σ_{a=1}^r W_ia v_a,    (4)
optimizes Eq. (1), finding the best conic encoding of the input x_i. The rectification nonlinearity [x]⁺ = max(x, 0) enforces the nonnegativity constraint. The error neurons subtract the reconstruction from the input x_i. The excitatory connection from e_i to v_a is equal and opposite to the inhibitory connection from v_a back to e_i.
The Hebbian synaptic weight update

    ΔW_ia = η e_i v_a    (5)

is made following convergence of the encoding dynamics for each input, while respecting the bound constraints on W_ia. This performs stochastic gradient descent on the ensemble reconstruction error with learning rate η.
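A discrete-time sketch of Eqs. (3)-(5) (Euler integration with an illustrative step size dt; not the paper's simulation parameters):

    import numpy as np

    def conic_network_step(x, W, eta=0.01, dt=0.1, settle=2000):
        # relax the encoding dynamics (3)-(4) to a fixed point, then apply the
        # Hebbian update (5) while respecting the bound constraints on W
        N, r = W.shape
        v = np.zeros(r)
        e = np.zeros(N)
        for _ in range(settle):
            v = v + dt * (-v + np.maximum(e @ W + v, 0.0))   # Eq. (3)
            e = e + dt * (-e + x - W @ v)                    # Eq. (4)
        W = np.clip(W + eta * np.outer(e, v), 0.0, 1.0)      # Eq. (5)
        return W, v, e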
6 Discussion
Convex coding is similar to other locally linear models [8, 9, 10, 11]. Distance to
a convex hull was previously used in nearest neighbor classification [12], though no
learning algorithm was proposed. Conic coding is similar to the noisy OR [13, 14]
and harmonium [15] models. The main difference is that these previous models
contain discrete binary variables, whereas Conic uses continuous ones. The use of
analog rather than binary variables makes the encoding computationally tractable
and allows for interpolation between basis vectors.
Here we have emphasized the geometrical interpretation of Convex and Conic coding. They can also be viewed as probabilistic hidden variable models. The inputs x_i are visible while the v_a are hidden variables, and the reconstruction error in Eq. (1) is related to the log likelihood, log P(x_i | v_a). No explicit model P(v_a) for the hidden
variables was used, which limited the quality of the Conic models in particular.
The feature discovery capabilities of Conic, however, make it a promising tool for
building hierarchical representations. We are currently working on extending these
new coding schemes and learning algorithms to multilayer networks.
We acknowledge the support of Bell Laboratories. We thank C. Burges, C. Cortes,
and Y. LeCun for providing us with the USPS database. We are also grateful to K.
Clarkson, R. Freund, L. Kaufman, L. Saul, and M. Wright for helpful discussions.
References
[1] Hinton, GE & Zemel, RS (1994) . Autoencoders, minimum description length and
Helmholtz free energy. Advances in Neural Information Processing Systems 6, 3- 10.
[2] Ghahramani, Z (1995). Factorial learning and the EM algorithm. Advances in Neural
Information Processing Systems 7, 617-624.
[3] Olshausen, BA & Field, DJ (1996). Emergence of simple-cell receptive field properties
by learning a sparse code for natural images. Nature 381, 607-609.
[4] Le Cun, Y et al. (1989). Backpropagation applied to handwritten zip code recognition.
Neural Comput. 1, 541-551.
[5] Scholkopf, B, Burges, C, & Vapnik, V (1995). Extracting support data for a given
task. KDD-95 Proceedings, 252-257.
[6] Simard, P, Le Cun Y & Denker J (1993). Efficient pattern recognition using a new
transformation distance. Advances in Neural Information Processing Systems 5, 50-58.
[7] Tank, DW & Hopfield, JJ (1986). Simple neural optimization networks; an A/D converter, signal decision circuit, and a linear programming circuit. IEEE Trans. Circ. Syst. CAS-33, 533-541.
[8] Bezdek, JC, Coray, C, Gunderson, R & Watson J (1981). Detection and characterization of cluster substructure. SIAM J. Appl. Math. 40, 339- 357; 358- 372.
[9] Bregler, C & Omohundro, SM (1995). Nonlinear image interpolation using manifold
learning. Advances in Neural Information Processing Systems 7, 973-980.
[10] Hinton, GE, Dayan, P & Revow M (1996). Modeling the manifolds of images of
handwritten digits. IEEE Trans. Neural Networks, submitted.
[11] Hastie, T, Simard, P & Sackinger E (1995). Learning prototype models for tangent
distance. Advances in Neural Information Processing Systems 7, 999-1006.
[12] Haas, HPA, Backer, E & Boxma, I (1980). Convex hull nearest neighbor rule. Fifth
Intl. Conf. on Pattern Recognition Proceedings, 87-90.
[13] Dayan, P & Zemel, RS (1995). Competition and multiple cause models. Neural Comput. 7, 565- 579.
[14] Saund, E (1995). A multiple cause mixture model for unsupervised learning. Neural
Comput. 7, 51-7l.
[15] Freund, Y & Haussler, D (1992). Unsupervised learning of distributions on binary
vectors using two layer networks. Advances in Neural Information Processing Systems
4, 912-919.
Learning with Noise and Regularizers in Multilayer Neural Networks

David Saad
Dept. of Comp. Sci. & App. Math.
Aston University
Birmingham B4 7ET, UK
D.Saad@aston.ac.uk

Sara A. Solla
AT&T Research Labs
Holmdel, NJ 07733, USA
solla@research.att.com
Abstract
We study the effect of noise and regularization in an on-line
gradient-descent learning scenario for a general two-layer student
network with an arbitrary number of hidden units. Training examples are randomly drawn input vectors labeled by a two-layer
teacher network with an arbitrary number of hidden units; the examples are corrupted by Gaussian noise affecting either the output
or the model itself. We examine the effect of both types of noise
and that of weight-decay regularization on the dynamical evolution of the order parameters and the generalization error in various
phases of the learning process.
1 Introduction
One of the most powerful and commonly used methods for training large layered
neural networks is that of on-line learning, whereby the internal network parameters
{J} are modified after the presentation of each training example so as to minimize
the corresponding error. The goal is to bring the map fJ implemented by the
network as close as possible to a desired map j that generates the examples. Here
we focus on the learning of continuous maps via gradient descent on a differentiable
error function.
Recent work [1]-[4] has provided a powerful tool for the analysis of gradient-descent
learning in a very general learning scenario [5]: that of a student network with N
input units, K hidden units, and a single linear output unit, trained to implement a continuous map from an N-dimensional input space onto a scalar ζ. Examples of the target task f̃ are in the form of input-output pairs (ξ^μ, ζ^μ). The output labels ζ^μ to independently drawn inputs ξ^μ are provided by a teacher network of similar
architecture, except that its number M of hidden units is not necessarily equal to
K.
Here we consider the possibility of a noise process ρ^μ that corrupts the teacher
output. Learning from corrupt examples is a realistic and frequently encountered
scenario. Previous analysis of this case have been based on various approaches:
Bayesian [6], equilibrium statistical physics [7], and nonequilibrium techniques for
analyzing learning dynamics [8]. Here we adapt our previously formulated techniques [2] to investigate the effect of different noise mechanisms on the dynamical
evolution of the learning process and the resulting generalization ability.
2 The model
We focus on a soft committee machine [1], for which all hidden-to-output weights
are positive and of unit strength. Consider the student network: hidden unit i receives information from input unit r through the weight J_ir, and its activation under presentation of an input pattern ξ = (ξ_1, ..., ξ_N) is x_i = J_i · ξ, with J_i = (J_i1, ..., J_iN) defined as the vector of incoming weights onto the i-th hidden unit. The output of the student network is σ(J, ξ) = Σ_{i=1}^K g(J_i · ξ), where g is the activation function of the hidden units, taken here to be the error function g(x) ≡ erf(x/√2), and J ≡ {J_i}_{1≤i≤K} is the set of input-to-hidden adaptive weights.
The components of the input vectors ξ^μ are uncorrelated random variables with zero mean and unit variance. Output labels ζ^μ are provided by a teacher network of similar architecture: hidden unit n in the teacher network receives input information through the weight vector B_n = (B_n1, ..., B_nN), and its activation under presentation of the input pattern ξ^μ is y_n^μ = B_n · ξ^μ. In the noiseless case the teacher output is given by ζ^μ = Σ_{n=1}^M g(B_n · ξ^μ). Here we concentrate on the architecturally matched case M = K, and consider two types of Gaussian noise: additive output noise that results in ζ^μ = ρ^μ + Σ_{n=1}^K g(B_n · ξ^μ), and model noise introduced as fluctuations in the activations y_n^μ of the hidden units, ζ^μ = Σ_{n=1}^K g(ρ_n^μ + B_n · ξ^μ). The random variables ρ^μ and ρ_n^μ are taken to be Gaussian with zero mean and variance σ².
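For concreteness, the two corruption mechanisms can be sketched as follows (an illustrative sketch, not code from the paper; function and parameter names are ours):

    import numpy as np
    from math import erf

    def g(x):
        # hidden-unit transfer function g(x) = erf(x / sqrt(2))
        return np.vectorize(erf)(np.asarray(x, dtype=float) / np.sqrt(2.0))

    def teacher_output(xi, B, sigma2, noise="output", rng=None):
        # xi: input vector of length N; B: K x N matrix of teacher weights
        rng = rng if rng is not None else np.random.default_rng()
        y = B @ xi                                   # teacher activations y_n
        if noise == "output":                        # zeta = rho + sum_n g(y_n)
            return g(y).sum() + rng.normal(0.0, np.sqrt(sigma2))
        # model noise: zeta = sum_n g(rho_n + y_n)
        return g(y + rng.normal(0.0, np.sqrt(sigma2), size=y.shape)).sum()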
The error made by a student with weights J on a given input ξ is given by the quadratic deviation

    ε(J, ξ) = ½ [σ(J, ξ) − Σ_{n=1}^M g(B_n · ξ)]²,    (1)

measured with respect to the noiseless teacher (it is also possible to measure performance as deviations with respect to the actual output ζ provided by the noisy teacher). Performance on a typical input defines the generalization error ε_g(J) ≡ ⟨ε(J, ξ)⟩_ξ through an average over all possible input vectors ξ, to be performed implicitly through averages over the activations x = (x_1, ..., x_K) and y = (y_1, ..., y_K). These averages can be performed analytically [2] and result in a compact expression for ε_g in terms of order parameters: Q_ik ≡ J_i · J_k, R_in ≡ J_i · B_n, and T_nm ≡ B_n · B_m, which represent student-student, student-teacher, and teacher-teacher overlaps, respectively. The parameters T_nm are characteristic of the task to be learned and remain fixed during training, while the overlaps Q_ik among student hidden units and R_in between a student and a teacher hidden unit are determined by the student weights J and evolve during training.
A gradient descent rule on the error made with respect to the actual output provided by the noisy teacher results in J_i^{μ+1} = J_i^μ + (η/N) δ_i^μ ξ^μ for the update of the student weights, where the learning rate η has been scaled with the input size N, and δ_i^μ depends on the type of noise. The time evolution of the overlaps R_in and Q_ik can be written in terms of similar difference equations. We consider the large N limit, and introduce a normalized number of examples α = μ/N to be interpreted as a continuous time variable in the N → ∞ limit. The time evolution of R_in and Q_ik is thus described in terms of first-order differential equations.
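One online step of this rule can be sketched as follows (a sketch under the assumption that δ_i^μ = g′(x_i)[ζ^μ − σ(J, ξ^μ)], the standard squared-error gradient factor for this architecture, with g′(x) = √(2/π) e^{−x²/2}; names are illustrative):

    import numpy as np
    from math import erf

    def student_step(J, xi, zeta, eta):
        # J: K x N student weights; one online update on the example (xi, zeta)
        g = np.vectorize(erf)
        x = J @ xi                                       # student activations x_i
        sigma_out = g(x / np.sqrt(2.0)).sum()            # student output sigma(J, xi)
        gprime = np.sqrt(2.0 / np.pi) * np.exp(-x ** 2 / 2.0)
        delta = gprime * (zeta - sigma_out)              # delta_i^mu
        return J + (eta / xi.size) * np.outer(delta, xi)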
3 Output noise

The resulting equations of motion for the student-teacher and student-student overlaps are given in this case by:

    dR_in/dα = η ⟨δ_i y_n⟩,
    dQ_ik/dα = η ⟨δ_i x_k⟩ + η ⟨δ_k x_i⟩ + η² ⟨δ_i δ_k⟩ + η² σ² ⟨g′(x_i) g′(x_k)⟩,    (2)

with δ_i ≡ g′(x_i) [ζ − σ(J, ξ)], where each term is to be averaged over all possible ways in which an example ξ could be chosen at a given time step. These averages have been performed using the techniques developed for the investigation of the noiseless case [2]; the only difference due to the presence of additive output noise is the need to evaluate the fourth term in the equation of motion for Q_ik, proportional to both η² and σ².
We focus on isotropic uncorrelated teacher vectors: T_nm = T δ_nm, and choose T = 1 in our numerical examples. The time evolution of the overlaps R_in and Q_ik follows from integrating the equations of motion (2) from initial conditions determined by a random initialization of the student vectors {J_i}_{1≤i≤K}. Random initial norms Q_ii for the student vectors are taken here from a uniform distribution in the [0, 0.5] interval. Overlaps Q_ik between independently chosen student vectors J_i and J_k, or R_in between J_i and an unknown teacher vector B_n, are small numbers of order 1/√N for N ≫ K, and taken here from a uniform distribution in the [0, 10⁻¹²] interval.
We show in Figures 1.a and 1.b the evolution of the overlaps for a noise variance σ² = 0.3 and learning rate η = 0.2. The example corresponds to M = K = 3. The qualitative behavior is similar to the one observed for M = K in the noiseless case extensively analyzed in [2]. A very short transient is followed by a long plateau characterized by lack of differentiation among student vectors: all student vectors have the same norm Q_ii = Q, the overlap between any two different student vectors takes a unique value Q_ik = C for i ≠ k, and the overlap R_in between an arbitrary student vector i and a teacher vector n is independent of i (as student vectors are indistinguishable in this regime) and of n (as the teacher is isotropic), resulting in R_in = R. This phase is characterized by an unstable symmetric solution; the perturbation introduced through the nonsymmetric initialization of the norms Q_ii and overlaps R_in eventually takes over in a transition that signals the onset of specialization.
This process is driven by a breaking of the uniform symmetry of the matrix of student-teacher overlaps: each student vector acquires an increasingly dominant overlap R with a specific teacher vector which it begins to imitate, and a gradually decreasing secondary overlap S with the remaining teacher vectors. In the example of Figure 1.b the assignment corresponds to i = 1 → n = 1, i = 2 → n = 3, and i = 3 → n = 2. A relabeling of the student hidden units allows us to identify R with the diagonal elements and S with the off-diagonal elements of the matrix of student-teacher overlaps.
Figure 1: Dependence of the overlaps and the generalization error on the normalized number of examples α for a three-node student learning corrupted examples generated by an isotropic three-node teacher. (a) Student-student overlaps Q_ik and (b) student-teacher overlaps R_in for σ² = 0.3. The generalization error is shown in (c) for different values of the noise variance σ², and in (d) for different powers of the polynomial learning rate decay, focusing on α > α₀ (asymptotic regime).
Asymptotically the secondary overlaps S decay to zero, while R_in → √(Q_ii) indicates full alignment for T_nn = 1. As specialization proceeds, the student weight vectors grow in length and become increasingly uncorrelated. It is interesting to observe that in the presence of noise the student vectors grow asymptotically longer than the teacher vectors: Q_ii → Q_∞ > 1, and acquire a small negative correlation with each other. Another detectable difference in the presence of noise is a larger gap between the values of Q and C in the symmetric phase. Larger norms for the student vectors result in larger generalization errors: as shown in Figure 1.c, the generalization error increases monotonically with increasing noise level, both in the symmetric and asymptotic regimes.

For an isotropic teacher, the teacher-student and student-student overlaps can thus be fully characterized by four parameters: Q_ik = Q δ_ik + C(1 − δ_ik) and R_in = R δ_in + S(1 − δ_in). In the symmetric phase the additional constraint R = S reflects the lack of differentiation among student vectors and reduces the number of parameters to three.
The symmetric phase is characterized by a fixed point solution to the equations
of motion (2) whose coordinates can be obtained analytically in the small noise approximation: R* = 1/√(K(2K − 1)) + η σ² r_s, Q* = 1/(2K − 1) + η σ² q_s, and C* = 1/(2K − 1) + η σ² c_s, with r_s, q_s, and c_s given by relatively simple functions of K. The generalization error in this regime is given by:

    ε_g* = (K/π) [π/6 − K arcsin(1/(2K))] + η σ² (2K − 1)^{3/2} / (2π (2K + 1)^{1/2});    (3)

note its increase over the corresponding noiseless value, recovered for σ² = 0.
The asymptotic phase is characterized by a fixed point solution with R* ≠ S*. The coordinates of the asymptotic fixed point can also be obtained analytically in the small noise approximation: R* = 1 + η σ² r_a, S* = −η σ² s_a, Q* = 1 + η σ² q_a, and C* = −η σ² c_a, with r_a, s_a, q_a, and c_a given by rational functions of K with corrections of order η. The asymptotic generalization error is given by

    ε_g* = (√3/(6π)) η σ² K.    (4)

Explicit expressions for the coefficients r_s, q_s, c_s, r_a, s_a, q_a, and c_a will not be given here for lack of space; suffice it to say that the fixed point coordinates predicted on the basis of the small noise approximation are found to be in excellent agreement with the values obtained from the numerical integration of the equations of motion for σ² ≤ 0.3.
It is worth noting in Figure 1.c that in the small noise regime the length of the symmetric plateau decreases with increasing noise. This effect can be investigated analytically by linearizing the equations of motion around the symmetric fixed point and identifying the positive eigenvalue responsible for the escape from the symmetric phase. This calculation has been carried out in the small noise approximation, to obtain λ = (2/π) K (2K − 1)^{−1/2} (2K + 1)^{−3/2} + λ_u η σ², where λ_u is positive and increases monotonically with K for K > 1. A faster escape from the symmetric plateau is explained by this increase of the positive eigenvalue. The calculation is valid for η σ² ≪ 1; we observe experimentally that the trend is reversed as σ² increases. A small level of noise assists in the process of differentiation among student vectors, while larger levels of noise tend to keep student vectors equally ignorant about the task to be learned.
The asymptotic value (4) for the generalization error indicates that learning at finite η will result in asymptotically suboptimal performance for σ² > 0. A monotonic decrease of the learning rate is necessary to achieve optimal asymptotic performance with ε_g* = 0. Learning at small η results in long trapping times in the symmetric phase; we therefore suggest starting the training process with a relatively large value of η and switching to a decaying learning rate at α = α₀, after specialization begins. We propose η = η₀ for α ≤ α₀ and η = η₀/(α − α₀)^z for α > α₀. Convergence to the asymptotic solution requires z ≤ 1. The value z = 1 corresponds to the fastest decay for η(α); the question of interest is to determine the value of z which results in fastest decay for ε_g(α). Results shown in Figure 1.d for α > α₀ = 4000 correspond to M = K = 3, η₀ = 0.7, and σ² = 0.1. Our numerical results indicate optimal decay of ε_g(α) for z = 1/2. A rigorous justification of this result remains to be found.
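The proposed schedule is simple to state in code (a sketch using the values quoted for Figure 1.d):

    def learning_rate(alpha, eta0=0.7, alpha0=4000.0, z=0.5):
        # eta = eta0 up to alpha0, then eta0 / (alpha - alpha0)^z afterwards
        if alpha <= alpha0:
            return eta0
        return eta0 / (alpha - alpha0) ** z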
4 Model noise

Figure 2: Left - the generalization error for different values of the noise variance σ²; training examples are corrupted by model noise. Right - the critical regularization strength γ_max/η (see Section 5) as a function of K.

The resulting equations of motion for the student-teacher and student-student overlaps can also be obtained analytically in this case; they exhibit a structure remarkably similar to those for the noiseless case reported in [2], except for some changes
in the relevant covariance matrices.
A numerical investigation of the dynamical evolution of the overlaps and generalization error reveals qualitative and quantitative differences with the case of additive
output noise: 1) The sensitivity to noise is much higher for model noise than for
output noise. 2) The application of independent noise to the individual teacher
hidden units results in an effective anisotropic teacher and causes fluctuations in
the symmetric phase; the various student hidden units acquire some degree of differentiation and the symmetric phase can no longer be fully characterized by unique
values of Q and C. 3) The noise level does not affect the length of the symmetric
phase.
The effect of model noise on the generalization error is illustrated in Figure 2 for M = K = 3, η = 0.2, and various noise levels. The generalization error increases monotonically with increasing noise level, both in the symmetric and asymptotic regimes, but there is no modification in the length of the symmetric phase. The dynamical evolution of the overlaps, not shown here for the case of model noise, exhibits qualitative features quite similar to those discussed in the case of additive output noise: we observe again a noise-induced widening of the gap between Q and C in the symmetric phase, while the asymptotic phase exhibits an enhancement of the norm of the student vectors and a small degree of negative correlation between them.
Approximate analytic expressions based on a small noise expansion have been obtained for the coordinates of the fixed point solutions which describe the symmetric and asymptotic phases. In the case of model noise the expansions for the symmetric solution are independent of η and depend only on σ² and K. The coordinates of the asymptotic fixed point can be expressed as: R* = 1 + σ² r_a, S* = −σ² s_a, Q* = 1 + σ² q_a, C* = −σ² c_a, with coefficients r_a, s_a, q_a, and c_a given by rational functions of K with corrections of order η. The important difference with the output noise case is that the asymptotic fixed point is shifted from its noiseless position even for η = 0. It is therefore not possible to achieve optimal asymptotic performance even if a decaying learning rate is utilized. The asymptotic generalization error is given by

    ε_g* = (√3/(12π)) σ² (2 + η σ² K).    (5)
Note that the asymptotic generalization error remains finite even as η → 0.

5 Regularizers
A method frequently used in real world training scenarios to overcome the effects of
noise and parameter redundancy (K > M) is the use of regularizers such as weight
decay (for a review see [6]).
Weight-decay regularization is easily incorporated within the framework of on-line learning; it leads to a rule for the update of the student weights of the form J_i^{μ+1} = J_i^μ + (η/N) (δ_i^μ ξ^μ − γ J_i^μ). The corresponding equations of motion for the dynamical evolution of the teacher-student and student-student overlaps can again be obtained analytically and integrated numerically from random initial conditions.
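Relative to the plain online step sketched in Section 2, the update simply gains a decay term (again a sketch; the δ factor is the same assumed squared-error gradient as before):

    import numpy as np
    from math import erf

    def student_step_weight_decay(J, xi, zeta, eta, gamma):
        # J -> J + (eta / N) * (delta xi^T - gamma * J), with
        # delta_i = g'(x_i) (zeta - sigma(J, xi)) as in the plain update
        g = np.vectorize(erf)
        x = J @ xi
        sigma_out = g(x / np.sqrt(2.0)).sum()            # student output
        gprime = np.sqrt(2.0 / np.pi) * np.exp(-x ** 2 / 2.0)
        delta = gprime * (zeta - sigma_out)
        return J + (eta / xi.size) * (np.outer(delta, xi) - gamma * J)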
The picture that emerges is basically similar to that described for the noisy case: the
dynamical evolution of the learning process goes through the same stages, although
specific values for the order parameters and generalization error at the symmetric
phase and in the asymptotic regime are changed as a consequence of the modification
in the dynamics.
Our numerical investigations have revealed no scenario, either when training from
noisy data or in the presence of redundant parameters, where weight decay improves the system performance or speeds up the training process. This lack of
effect is probably a generic feature of on-line learning, due to the absence of an
additive, stationary error surface defined over a finite and fixed training set. In
off-line (batch) learning, regularization leads to improved performance through the
modification of such error surface. These observations are consistent with the absence of 'overfitting' phenomena in on-line learning. One of the effects that arises
when weight-decay regularization is introduced in on-line learning is a prolongation of the symmetric phase, due to a decrease in the positive eigenvalue that controls the onset of specialization. This positive eigenvalue, which signals the instability of the symmetric fixed point, decreases monotonically with increasing regularization strength γ, and crosses zero at a critical value γ_max proportional to η. The dependence of γ_max/η on K is shown in Figure 2; for γ > γ_max the symmetric fixed point is stable and the system remains trapped there forever.
The work reported here focuses on an architecturally matched scenario, with M = K. Over-realizable cases with K > M show a rich behavior that is rather less amenable to generic analysis. It will be of interest to examine the effects of different types of noise and regularizers in this regime.
Acknowledgement: D.S. acknowledges support from EPSRC grant GR/L19232.
References

[1] M. Biehl and H. Schwarze, J. Phys. A 28, 643 (1995).
[2] D. Saad and S.A. Solla, Phys. Rev. E 52, 4225 (1995).
[3] D. Saad and S.A. Solla, preprint (1996).
[4] P. Riegler and M. Biehl, J. Phys. A 28, L507 (1995).
[5] G. Cybenko, Math. Control Signals and Systems 2, 303 (1989).
[6] C.M. Bishop, Neural Networks for Pattern Recognition (Oxford University Press, Oxford, 1995).
[7] T.L.H. Watkin, A. Rau, and M. Biehl, Rev. Mod. Phys. 65, 499 (1993).
[8] K.R. Muller, M. Finke, N. Murata, K. Schulten, and S. Amari, Neural Computation 8, 1085 (1996).
A neural model of visual contour
integration
Zhaoping Li
Computer Science, Hong Kong University of Science and Technology
Clear Water Bay, Hong Kong
zhaoping@uxmail.ust.hk¹
Abstract
We introduce a neurobiologically plausible model of contour integration from visual inputs of individual oriented edges. The model
is composed of interacting excitatory neurons and inhibitory interneurons, receives visual inputs via oriented receptive fields (RFs)
like those in V1. The RF centers are distributed in space. At each
location, a finite number of cells tuned to orientations spanning
180° compose a model hypercolumn. Cortical interactions modify
neural activities produced by visual inputs, selectively amplifying
activities for edge elements belonging to smooth input contours. Elements within one contour produce synchronized neural activities.
We show analytically and empirically that contour enhancement
and neural synchrony increase with contour length, smoothness
and closure, as observed experimentally. This model gives testable
predictions, and in addition, introduces a feedback mechanism allowing higher visual centers to enhance, suppress, and segment
contours.
1. Introduction
The visual system must group local elements in its input into meaningful global
features to infer the visual objects in the scene. Sometimes local features group into
regions, as in texture segmentation; at other times they group into contours which
may represent object boundaries. Although much is known about the processing
steps that extract local features such as oriented input edges, it is still unclear how
local features are grouped into global ones more meaningful for objects. In this
¹I would very much like to thank Jochen Braun for introducing me to the topic, and
Peter Dayan for many helpful conversations and comments on the drafts. This work was
supported by the Hong Kong Research Grant Council.
study, we model the neural mechanisms underlying the grouping of edge elements
into contours - contour integration.
Recent psychophysical and physiological observations[14, 8] demonstrate a decrease
in detection threshold of an edge element, by human observers or a primary cortical
cell, if there are aligned neighboring edge elements. Changes in neural responses by
visual stimuli presented outside their RFs have been observed physiologically[9, 8].
Human observers easily identify a smooth curve composed of individual, even disconnected, Gabor "edge" elements distributed among many similar elements scattered in the background[4]. Horizontal neural connections observed in the primary
visual cortex[5], and the finding that these connections preferentially link cells tuned to similar orientations[5], provide a likely neural basis underlying primitive visual grouping phenomena such as contour integration. These findings suggest that simple and local neural interactions even in V1 could contribute to grouping.
However, it has been difficult to model contour integration using only V1 elements and operations. Most existing models[15, 18] of contour integration lack well-founded biological bases. More neurally based models, e.g., the one by Grossberg and Mingolla[7], require operations that are beyond V1 or biologically questionable. It is thus desirable to find out whether contour enhancement can indeed occur within V1 or has to be attributed to top-down feedback. We introduce a V1 model of contour
integration, using orientation selective cells, local cortical circuits, and horizontal
connections. This model captures the essentials of the contour integration behavior.
More details of the model can be found in a longer paper[12].
2. The Model
2.1 Model outline
K neuron pairs at each spatial location i model a hypercolumn in V1 (figure 1). Each neuron has a receptive field center i and an optimal orientation θ = kπ/K for k = 1, 2, ..., K. A neuron pair consists of a connected excitatory neuron and an inhibitory neuron, which are denoted by the index (iθ) for their receptive field center and preferred orientation, and are referred to as an edge segment. An edge segment receives the visual input via the excitatory cell, whose output quantifies the saliency of the edge segment and projects to higher visual centers. The inhibitory cells are treated as interneurons. When an input image contains an edge at i oriented at θ_o, the edge segment iθ receives input I_{iθ} ∝ φ(θ − θ_o), where φ(θ) = e^{−|θ|/(π/8)} is a cell's orientation tuning curve.
[Figure 1 graphic: visual space with hypercolumns and neural edge segments; a hypercolumn at location i, a neural edge segment iθ as an interconnected excitatory-inhibitory neuron pair, and edge outputs projecting to higher visual areas.]
Figure 1: Model visual space, hypercolumn, edge segments, neural elements, visual inputs,
and neural connections. The input space is a discrete hexagonal or Manhattan grid.
[Figure 2 graphic: panels showing the model visual input, the model output, and the output after removing edges with activities below a fixed fraction of the most active edge.]
Figure 2: Model neural connections and performance for contour enhancement and noise reduction. The top graph depicts connections J_{iθ,jθ'} and W_{iθ,jθ'} respectively between the center (white) horizontal edge and other edges. The whiteness and blackness of each edge is proportional to the connection strength J_{iθ,jθ'} and W_{iθ,jθ'} respectively. The bottom row plots visual input and the maximum model outputs. The edge thickness in this row is proportional to the value of edge input or activity. The same format applies to the other figures in this paper.
Let x_{iθ} and y_{iθ} be the cell membrane potentials for the excitatory and inhibitory cells respectively in the edge segment; then
$$\dot{x}_{i\theta} = -\alpha_x x_{i\theta} - g_y(y_{i\theta}) + J_o\, g_x(x_{i\theta}) + \sum_{j\theta' \neq i\theta} J_{i\theta, j\theta'}\, g_x(x_{j\theta'}) + I_{i\theta} + I_o \qquad (1)$$
$$\dot{y}_{i\theta} = -\alpha_y y_{i\theta} + g_x(x_{i\theta}) + \sum_{j\theta' \neq i\theta} W_{i\theta, j\theta'}\, g_x(x_{j\theta'}) + I_c \qquad (2)$$
where g_x(x_{iθ}) and g_y(y_{iθ}) are the firing rates from the excitatory and inhibitory cells respectively, 1/α_x and 1/α_y are the membrane time constants, J_o is the self-excitatory connection weight, J_{iθ,jθ'} and W_{iθ,jθ'} are synaptic weights between neurons, and I_o and I_c are the background inputs to the excitatory and inhibitory cells. Without loss of generality, we take α_x = α_y = 1 and g_x(x) as threshold linear with saturation and a unit gain g_x'(x) = 1 in the linear range.
The synaptic connections J_{iθ,jθ'} and W_{iθ,jθ'} are local, and translation and rotation invariant (Figure 2). J_{iθ,jθ'} increases with the smoothness (small curvature) of the curve that best connects (iθ) and (jθ'), and edge elements inhibit each other via W_{iθ,jθ'} when they are alternative choices in a smooth curve route. Given an input pattern I_{iθ}, the network approaches a dynamic state after several membrane time constants. As in Figure 2, the neurons with relatively higher final activities are those belonging to smooth curves in the input.
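To make the dynamics concrete, the sketch below Euler-integrates a simplified one-dimensional version of equations (1) and (2) for a chain of collinear segments (the W terms drop out, as in the line analysis below). This is a minimal illustration, not the authors' simulation code: the connection strength, activation thresholds, and input levels are illustrative assumptions.

```python
import numpy as np

def g(u, threshold=1.0, saturation=2.0):
    # Threshold-linear activation with saturation; unit gain in the linear range.
    return np.clip(u - threshold, 0.0, saturation)

def simulate_line(n_segments=20, steps=2000, dt=0.01,
                  J_o=0.8, J_neighbor=0.6, I_o=0.2, I_c=0.1, I_input=1.5):
    """Euler integration of the excitatory/inhibitory pair dynamics for a
    1-D chain of edge segments (eqs. 1-2 with alpha_x = alpha_y = 1 and the
    W terms dropped, as for collinear segments on a line)."""
    x = np.zeros(n_segments)   # excitatory membrane potentials
    y = np.zeros(n_segments)   # inhibitory membrane potentials
    # Nearest-neighbor excitatory coupling along the line (illustrative profile).
    J = J_neighbor * (np.eye(n_segments, k=1) + np.eye(n_segments, k=-1))
    for _ in range(steps):
        gx, gy = g(x), g(y)
        dx = -x - gy + J_o * gx + J @ gx + I_input + I_o
        dy = -y + gx + I_c
        x += dt * dx
        y += dt * dy
    return g(x)  # segment saliencies

if __name__ == "__main__":
    # Interior segments receive more recurrent excitation than the two ends.
    print(simulate_line())
```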
2.2 Model analysis
Ignoring neural connections between edge segments, the neuron in edge segment iθ has input sensitivity
$$\frac{\partial g_x(x_o)}{\partial I_{i\theta}} \propto \frac{1}{1 + g_y'(y_o)\, g_x'(x_o) - J_o\, g_x'(x_o)} \qquad (3)$$
$$\frac{\partial g_x(x_o)}{\partial I_c} \propto \frac{-\, g_y'(y_o)\, g_x'(x_o)}{1 + g_y'(y_o)\, g_x'(x_o) - J_o\, g_x'(x_o)} \qquad (4)$$
where g_x(x_o) and g_y(y_o) are roughly the average neural activities (omitting iθ for simplicity). Thus the edge activity increases with I_{iθ} and decreases with I_c (in cases that interest us, g_y'(y_o) g_x'(x_o) > J_o g_x'(x_o) − 1). The resulting input-output function given I_c, g_x(x_o) vs. I_{iθ}, corresponds well with physiological data.
By effectively increasing I_{iθ} or I_c, the edge element (jθ') can excite or inhibit the element (iθ) with excitatory-to-excitatory input J_{iθ,jθ'} g_x(x_{jθ'}) and excitatory-to-inhibitory input W_{iθ,jθ'} g_x(x_{jθ'}) respectively. Contour enhancement (Fig. 2) is thus achieved. In the simplest example, when the visual input has equally spaced, equal strength edges from a line and all other edge segments are silent, we can treat the line system as one dimensional, omit θ, and take i as locations along the line. A lack
of inhibition between line segments gives:
$$\dot{x}_i = -\alpha_x x_i - g_y(y_i) + J_o\, g_x(x_i) + \sum_{j \neq i} J_{ij}\, g_x(x_j) + I_o + I_{\text{line-input}} \qquad (5)$$
$$\dot{y}_i = -\alpha_y y_i + g_x(x_i) + I_c \qquad (6)$$
If the line is infinite, by symmetry, each edge segment has the same average activity g_x(x_i) ≈ g_x(x_o) for all i. This system can then be seen[12] either as a giant edge with self-excitatory connection (J_o + Σ_{j≠i} J_{ij}), or as a single edge with extra external input ΔI = (Σ_{j≠i} J_{ij}) g_x(x_o). Either way, activities g_x(x_o) are enhanced for each edge element in the line (figures 2 and 3).
This analysis is also applicable to constant curvature curves[12]. It can be shown that, in the linear range of g_x(·), the response ratio between a curve segment and an isolated segment is (g_y'(y_o) + 1 − J_o)/(g_y'(y_o) + 1 − J_o − Σ_{j≠i} J_{ij}). Since Σ_{j≠i} J_{ij} decreases with increasing curvature, so does the response enhancement. Translation
invariance along the curve breaks down in a finite length curve near its two ends, where activity enhancement decays owing to the decreased excitation from fewer neighboring segments. This suggests that a closed or longer curve has higher saliency than an open or shorter one (figure 3). This prediction is expected to hold also for curves of non-constant curvature, and should play a significant role in the psychophysical observation[10] showing a decreased detection threshold for closed curves from that of the open ones.
Further analysis[12] shows that the edge segments in a curve normally exhibit neural
oscillations around their mean activity levels with near-zero phase delays from each
other. The model predicts that, like the contour enhancement, the oscillation is
stronger for longer, smoother, and closed curves than open and shorter ones, and
tapers off near curve endings where oscillation synchrony also deteriorates (figure 3).
2.3 Central feedback control for contour enhancement, suppression, filling in, and segmentation
[Figure 3 graphic: rows showing the model visual inputs, the model outputs, and the neural signals in time for several input curves.]
Figure 3: Model performance for input curves and noises. Each column is dedicated to
one input condition. The top row is the visual input; the middle row is the maximum
neural responses, and the bottom row the segment outputs as a function of time. The
neural signals in the bottom row are shown superposed. The solid curves plot outputs
for the segments away from the curve endings, the dash-dotted curves for segments near
the curve endings, and the dashed curve in each plot, usually the lowest lying one, is an
example from a noise segment. Note the decrease in neural signal synchrony between the
curve segments, in the oscillation amplitudes, and average neural activities for the curve
segments, as the curve becomes open, shorter, and more curled or when the segment is
near the curve endings. Because the model employs a discrete grid input space, the figure
'8' curve in the right column is almost not a smooth curve, hence the contour is only very
weakly enhanced. The enhancement for this curve could be increased by a denser input
grid or a multiscale input space.
The model assumes that higher visual centers send inputs I_c to the inhibitory cells, to influence neural activities by equation (4). Take I_c = I_{c,background} + I_{c,control} ≥ 0, where I_{c,background} is the same for all edge segments and is useful for modulating the overall visual alertness level. By setting I_{c,control} differently for different edge segments, higher centers can selectively suppress or enhance activities for some visual objects (contours) (compare figures 4D and 4G), and even effectively achieve contour segmentation (figure 4H) by silencing segments from a given curve. A similar feedback control mechanism was used in an olfactory model for odor sensitivity control and odor segmentation[11]. Nevertheless, the feedback cannot completely substitute for the visual input I_{iθ}. By equation (4), I_c is effective only when g_y'(y_o) g_x'(x_o) ≠ 0. With insufficient background input I_o, some visual input or excitation from other segments is needed to have g_x(x_o) > 0, even when all the inhibition from I_c is removed by central control. However, a segment with weak visual input or excitation from aligned neighboring segments can increase its activity or become active from subthreshold. Therefore, this model central feedback can enhance a weak contour or fill in an incomplete one (compare figures 4F and 4I), but cannot enhance or "hallucinate" any contour not existing at least partially in the visual input (figure 4E).
3. Summary and Discussion
[Figure 4 graphic: visual inputs A-C and model responses D-I under different central feedback conditions; see the caption below.]
Figure 4: Central feedback control. The visual inputs are in A, B and C. The rest are the model responses under different feedback conditions. D: model response for input A without central control. G: model response for input A with line suppression by Ic,control = 0.25 Ic,background on the line segments, and circle enhancement by Ic,control = -0.29 Ic,background on the circle elements. H: Same as G except the line suppression signal Ic,control is doubled. E: Model response to input B with enhancement for the line and the circle by central feedback Ic,control = -0.29 Ic,background. F: Response to input C with no central control. I: Response to input C with line and circle enhancement by Ic,control = -0.29 Ic,background. Note how the input gaps are partially filled in F and almost completely filled in I. Note that the apparent gaps in the circle are caused by the underlying discrete grid in the model input space, and hence no gap actually exists, and no filling in is needed, for this circle. Also, with the wrap around boundary condition, the line is actually a closed or infinitely long line, and is thus naturally more salient in this model without feedback.
We have presented a model which accounts for the phenomena of contour enhancement using neurally plausible elements in V1. It is shown analytically and empirically that both the contour enhancement and the neural oscillation amplitudes are
stronger for longer, closed, and smaller curvature curves, agreeing with experimental
observations [8, 4, 10, 6, 2]. The model predicts that horizontal connections target
preferentially excitatory or inhibitory post-synaptic cells when the linked edges are
aligned or less aligned (Fig. 2). In addition, we introduce a possible feedback mechanism by which higher visual centers could selectively enhance or suppress contour
activities, and achieve contour segmentation. This feedback mechanism has the desirable property that while the higher centers can enhance or complete an existing
weak and/or fragmented input contour, they cannot enhance a non-existent contour
in the input, thus preventing "hallucination". This property could be exploited by
higher visual centers for hypothesis testing and object reconstruction by cooperating with lower visual centers. Analogous computational mechanisms have been used
in an olfactory model to achieve odor segmentation and sensitivity modulation[11].
It will be interesting to explore the universality of such computational mechanisms
across sensory modalities.
The organization of the model is based on various experimental findings[17, 5, 1,
8, 14]: recurrent excitatory-inhibitory interactions; excitatory and inhibitory linking of edge elements with similar orientation preferences; and neural connection
patterns. At the cost of analytical tractability without essential changes in model
performance, one can relax the model's idealization of a 1:1 ratio in the excitatory
and inhibitory cell numbers, the lack of connections between the inhibitory cells,
and the excitatory cells as the exclusive recipients of visual input. While abundant
feedback connections are observed from higher visual centers to the primary visual
cortex, there is as yet no clear indication of cell types of their targets [3, 16]. It is
desirable to find out whether the feedback is indeed directed to the inhibitory cells
as predicted.
This model can be extended to stereo, temporal, and chromatic dimensions, by
linking edge segments aligned in orientation, depth, motion direction and color.
V1 cells have receptive field tuning in all these dimensions, and cortical connections are indeed observed to link cells of similar receptive field properties[5]. This
model does not model many other apparently non-contour related visual phenomena such as receptive field adaptations[5]. It is also beyond this model to explain
how the higher visual centers decide which segments belong to one contour in order
to achieve feedback control, although it has been hypothesized that phase locked
neural oscillations and neural correlations can play such a role[13].
References
[1] Douglas R. J. and Martin K. A. "Neocortex" in Synaptic Organization of the Brain, 3rd Edition, Ed. G. M. Shepherd, Oxford University Press 1990.
[2] Eckhorn R., Bauer R., Jordan W., Brosch M., Kruse W., Munk M. and Reitboeck H. J. 1988. Biol. Cybern. 60:121-130.
[3] van Essen D. in Cerebral Cortex Eds. A Peters and E G Jones, Plenum Press, New York, 1985. p. 259-329
[4] Field DJ; Hayes A; Hess RF. Vision Res. 1993 Jan; 33(2): 173-93
[5] Gilbert CD. Neuron. 1992 Jul; 9(1): 1-13
[6] Gray C.M. and Singer W. 1989 Proc. Natl. Acad. Sci. USA 86: 1698-1702.
[7] Grossberg S; Mingolla E Percept Psychophys. 1985 Aug; 38(2): 141-71
[8] Kapadia MK; Ito M; Gilbert CD; Westheimer G. Neuron. 1995 Oct; 15(4): 843-56
[9] Knierim J. J. and van Essen D. C. 1992, J. Neurophysiol. 67, 961-980.
[10] Kovacs I; Julesz B Proc Natl Acad Sci USA . 1993 Aug 15; 90(16): 7495-7
[11] Li Zhaoping, Biological Cybernetics, 62/4 (1990), P. 349-361
[12] Li Zhaoping, "A neural model of contour integration in the primary visual cortex," manuscript submitted for publication.
[13] von der Malsburg C. 1981 "The correlation theory of brain function." Internal report,
Max-Planck-Institute for Biophysical Chemistry, Gottingen, West Germany.
[14] Polat U; Sagi D. Vision Res. 1994 Jan; 34(1): 73-8
[15] Shashua A. and Ullman S. 1988 Proceedings of the International Conference on Computer Vision. Tampa, Florida, 482-488.
[16] Valverde F. in Cerebral Cortex Eds. A Peters and E G Jones, Plenum Press, New
York, 1985. p. 207-258.
[17] White E . L. Cortical circuits 46-82, Birkhauser, Boston, 1989
[18] Zucker S. W., David C., Dobbins A, and Iverson L. in Second international conference
on computer vision pp. 568-577, IEEE computer society press, 1988.
269 | 1,245 | Sequential Tracking in Pricing Financial
Options using Model Based and Neural
Network Approaches
Mahesan Niranjan
Cambridge University Engineering Department
Cambridge CB2 1PZ, England
niranjan@eng.cam.ac.uk
Abstract
This paper shows how the prices of option contracts traded in financial markets can be tracked sequentially by means of the Extended
Kalman Filter algorithm . I consider call and put option pairs with
identical strike price and time of maturity as a two output nonlinear system. The Black-Scholes approach popular in Finance literature and the Radial Basis Functions neural network are used in
modelling the nonlinear system generating these observations. I
show how both these systems may be identified recursively using
the EKF algorithm. I present results of simulations on some FTSE
100 Index options data and discuss the implications of viewing the
pricing problem in this sequential manner.
1
INTRODUCTION
Data from the financial markets has recently been of much interest to the neural
computing community. The complexity of the underlying macro-economic system
and how traders react to the flow of information leads to highly nonlinear relationships between observations. Further, the underlying system is essentially time
varying, making any analysis both difficult and interesting. A number of problems, including forecasting a univariate time series from past observations, rating
credit risk, optimal selection of portfolio components and pricing options have been
thrown at neural networks recently.
The problem addressed in this paper is that of sequential estimation, applied to
pricing of options contracts. In a nonstationary environment, such as financial
markets, sequential estimation is the natural approach to modelling. This is because
data arrives at the modeller sequentially, and there is the need to build and apply the
best possible model with available data. At the next point in time, some additional
data is available and the task becomes one of optimally updating the model to
account for the new data. This can either be done by reestimating the model with
a moving window of data or by sequentially propagating the estimates of model
parameters and some associated information (such as the error covariance matrices
in the Kalman filtering framework discussed in this paper).
2
SEQUENTIAL ESTIMATION
Sequential estimation of nonlinear models via the Extended Kalman Filter algorithm is well known (e.g. Candy, 1986; Bar-Shalom & Li, 1993). This approach
has also been widely applied to the training of Neural Network architectures (e.g.
Kadirkamanathan & Niranjan, 1993; Puskorius & Feldkamp, 1994). In this section,
I give the necessary equations for a second order EKF, i.e. Taylor series expansion
of the nonlinear output equations, truncated at order two, for the state space model
simplified to the system identification framework considered here.
The parameter vector or state vector, 0, is assumed to have the following simple
random walk dynamics.
(t(n
+ 1)
= (t(n)
+
.Y(n) ,
where .Y( n) is a noise term, known as process noise . .Y( n) is of the same dimensionality as the number of states used to represent the system. The process noise gives
a random walk freedom to the state dynamics facilitating the tracking behaviour
desired in non stationary environments. In using the Kalman filtering framework,
we assume the covariance matrix of this noise process, denoted Q, is known. In
practice, we set Q to some small diagonal matrix.
The observations from the system are given by the equation
$$\underline{y}(n) = f(\underline{\theta}, U) + \underline{w}(n),$$
where the vector y(n) is the output of the system consisting of the call and put
option prices at time n. U denotes the input information. In the problem considered
here, U consists of the price of the underlying asset and the time to maturity if the
option. w is known as the measurement noise, covariance matrix of which, denoted
R, is also assumed to be known. Setting the parameters Rand Q is done by trial
and error and knowledge about the noise processes. In the estimation framework
considered here, Q and R determine the tracking behaviour of the system. For the
experiments reported in this paper, I have set these by trial and error, but more
systematic approaches involving multiple models is possible (Niranjan et al, 1994).
The prior estimates at time (n + 1), using all the data up to time (n) and the model of the dynamical system, i.e. the prediction phase of the Kalman algorithm, are given by the equations:
$$\hat{\underline{\theta}}(n+1|n) = \hat{\underline{\theta}}(n|n)$$
$$P(n+1|n) = P(n|n) + Q(n)$$
$$\hat{\underline{y}}(n+1|n) = f\big(\hat{\underline{\theta}}(n+1|n)\big) + \frac{1}{2} \sum_{i=1}^{n_o} \underline{e}_i \, \mathrm{tr}\big( H_f^i(n+1)\, P(n+1|n) \big)$$
where f_θ and H_f^i are the Jacobian and Hessians of the output f; also n_o = 2. The e_i are unit vectors in direction i, and tr(·) denotes the trace of a matrix. The posterior estimates, i.e. the correction phase of the Kalman algorithm, are given by the equations:
$$S(n+1) = f_\theta(n+1)\, P(n+1|n)\, f_\theta'(n+1) + R + \frac{1}{2} \sum_{i=1}^{n_o} \sum_{j=1}^{n_o} \underline{e}_i \underline{e}_j' \, \mathrm{tr}\big( H_f^i(n+1)\, P(n+1|n)\, H_f^j(n+1)\, P(n+1|n) \big)$$
$$K(n+1) = P(n+1|n)\, f_\theta'(n+1)\, S^{-1}(n+1)$$
$$\underline{\nu}(n+1) = \underline{y}(n+1) - \hat{\underline{y}}(n+1|n)$$
$$\hat{\underline{\theta}}(n+1|n+1) = \hat{\underline{\theta}}(n+1|n) + K(n+1)\, \underline{\nu}(n+1)$$
$$P(n+1|n+1) = \big( I - K(n+1) f_\theta(n+1) \big)\, P(n+1|n)\, \big( I - K(n+1) f_\theta(n+1) \big)' + K(n+1)\, R\, K(n+1)'$$
Here, K(n+1) is the Kalman gain matrix and ν(n+1) is the innovation signal.
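As an illustration, a first-order version of this predict/correct cycle might be coded as below; the second-order Hessian trace terms carried in the equations above are omitted for brevity, and the function and variable names are assumptions, not part of the paper.

```python
import numpy as np

def ekf_step(theta, P, y, f, jacobian, Q, R):
    """One predict/correct cycle of a first-order EKF for the random-walk
    state model theta(n+1) = theta(n) + v(n)."""
    # Prediction: random-walk dynamics leave the state estimate unchanged.
    theta_prior = theta
    P_prior = P + Q
    y_pred = f(theta_prior)
    F = jacobian(theta_prior)             # output Jacobian, here 2 x dim(theta)
    # Correction.
    S = F @ P_prior @ F.T + R             # innovation covariance
    K = P_prior @ F.T @ np.linalg.inv(S)  # Kalman gain
    nu = y - y_pred                       # innovation
    theta_post = theta_prior + K @ nu
    I_KF = np.eye(len(theta)) - K @ F
    P_post = I_KF @ P_prior @ I_KF.T + K @ R @ K.T  # Joseph-form update
    return theta_post, P_post
```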
3
BLACK-SCHOLES MODEL
The Black-Scholes equation for calculating the price of a European style call option (Hull, 1993) is
$$C = S\, N(d_1) - X e^{-r t_m} N(d_2),$$
where
$$d_1 = \frac{\ln(S/X) + (r + \sigma^2/2)\, t_m}{\sigma \sqrt{t_m}}, \qquad d_2 = d_1 - \sigma \sqrt{t_m}.$$
Here, C is the price of the call option, S the underlying asset price, X the strike price of the option at maturity, t_m the time to maturity, and r is the risk free interest rate. σ is a term known as volatility and may be seen as an instantaneous variance of the time variation of the asset price. N(·) is the cumulative normal function. For a derivation of the formula and the assumptions upon which it is based see Hull, 1993. Readers unfamiliar with financial terms only need to know that all the quantities in the above equation, except σ, can be directly observed. σ is usually estimated from a small moving window of data of about 50 trading days.
The equivalent formula for the price of a put option is given by
$$P = -S\, N(-d_1) + X e^{-r t_m} N(-d_2).$$
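For concreteness, the two pricing formulas translate directly into code; the sketch below uses Python's standard-library normal CDF, and the example numbers in the comment are purely illustrative.

```python
from math import log, sqrt, exp
from statistics import NormalDist

N = NormalDist().cdf  # cumulative normal function N(.)

def black_scholes(S, X, r, sigma, t_m):
    """European call and put prices from the Black-Scholes formulas above."""
    d1 = (log(S / X) + (r + 0.5 * sigma**2) * t_m) / (sigma * sqrt(t_m))
    d2 = d1 - sigma * sqrt(t_m)
    call = S * N(d1) - X * exp(-r * t_m) * N(d2)
    put = -S * N(-d1) + X * exp(-r * t_m) * N(-d2)
    return call, put

# Example with made-up inputs:
# black_scholes(S=3100.0, X=3125.0, r=0.06, sigma=0.18, t_m=0.5)
```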
For recursive estimation of the option prices with this model, I assume that the instantaneous picture given by the Black-Scholes model is correct. The state vector is two dimensional and consists of the volatility σ and the interest rate r. The Jacobian and Hessians required for applying the EKF algorithm are
$$f_\theta = \begin{pmatrix} \partial C/\partial\sigma & \partial C/\partial r \\ \partial P/\partial\sigma & \partial P/\partial r \end{pmatrix}, \qquad
H_f^C = \begin{pmatrix} \partial^2 C/\partial\sigma^2 & \partial^2 C/\partial\sigma\,\partial r \\ \partial^2 C/\partial\sigma\,\partial r & \partial^2 C/\partial r^2 \end{pmatrix}, \qquad
H_f^P = \begin{pmatrix} \partial^2 P/\partial\sigma^2 & \partial^2 P/\partial\sigma\,\partial r \\ \partial^2 P/\partial\sigma\,\partial r & \partial^2 P/\partial r^2 \end{pmatrix}.$$
Expressions for the terms in these matrices are given in Table 1.
Table 1: First and Second Derivatives of the Black-Scholes Model

$$\frac{\partial C}{\partial\sigma} = \frac{\partial P}{\partial\sigma} = S \sqrt{t_m}\, N'(d_1)$$
$$\frac{\partial C}{\partial r} = X t_m e^{-r t_m} N(d_2), \qquad \frac{\partial P}{\partial r} = -X t_m e^{-r t_m} N(-d_2)$$
$$\frac{\partial^2 C}{\partial\sigma^2} = \frac{\partial^2 P}{\partial\sigma^2} = \frac{S \sqrt{t_m}\, d_1 d_2}{\sigma}\, N'(d_1)$$
$$\frac{\partial^2 C}{\partial r^2} = -X t_m e^{-r t_m} \Big( t_m N(d_2) - \frac{\sqrt{t_m}}{\sigma} N'(d_2) \Big), \qquad \frac{\partial^2 P}{\partial r^2} = X t_m e^{-r t_m} \Big( t_m N(-d_2) - \frac{\sqrt{t_m}}{\sigma} N'(-d_2) \Big)$$
$$\frac{\partial^2 C}{\partial\sigma\,\partial r} = \frac{\partial^2 P}{\partial\sigma\,\partial r} = -\frac{S\, t_m\, d_1}{\sigma}\, N'(d_1)$$
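A sketch of the first-derivative (Jacobian) entries from the reconstructed Table 1 follows; the second derivatives can be coded in the same pattern. The function and variable names are assumptions for illustration.

```python
from math import log, sqrt, exp, pi
from statistics import NormalDist

N = NormalDist().cdf  # cumulative normal N(.)

def n_pdf(x):
    # Standard normal density N'(x).
    return exp(-0.5 * x * x) / sqrt(2.0 * pi)

def bs_jacobian(S, X, r, sigma, t_m):
    """Rows of f_theta: first derivatives of the call and put prices
    with respect to (sigma, r), transcribed from Table 1."""
    d1 = (log(S / X) + (r + 0.5 * sigma**2) * t_m) / (sigma * sqrt(t_m))
    d2 = d1 - sigma * sqrt(t_m)
    vega = S * sqrt(t_m) * n_pdf(d1)          # dC/dsigma = dP/dsigma
    dC_dr = X * t_m * exp(-r * t_m) * N(d2)
    dP_dr = -X * t_m * exp(-r * t_m) * N(-d2)
    return [[vega, dC_dr], [vega, dP_dr]]
```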
4
NEURAL NETWORK MODELS
The data driven neural network model considered here is the Radial Basis Functions Network (RBF). Following Hutchinson et al., I use the following architecture:
$$f(U) = \sum_{j=1}^{m} A_j\, \phi\big( (U - \underline{\mu}_j)'\, \Sigma^{-1}\, (U - \underline{\mu}_j) \big),$$
where U is the two dimensional input data vector consisting of the asset price and time to maturity. The asset price S is normalised by the strike price of the option X. The time to maturity, t_m, is also normalised such that the full lifetime of the option gets a value 1.0. These normalisations are the reason for considering options in pairs with the same strike price and time of maturity in this study. The nonlinear function φ(·) is set to φ(a) = √a, and m = 4. With the nonlinear part of the network fixed, Kalman filter training of the RBF model is straightforward (see Kadirkamanathan & Niranjan, 1993). In the simulations studied in this paper, I used two approaches to fix the nonlinear functions. The first was to use the μ_j's and the Σ published in Hutchinson et al. The second was to select the μ_j terms as random subsets of the training data and set Σ to I. The estimation problem is now linear and hence the Kalman filter equations become much simpler than the EKF equations used in the training of the Black-Scholes model.
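Under the reconstruction above, the output of the fixed-basis RBF can be sketched as follows; the names, and the absence of a bias term, are assumptions rather than details taken from the paper.

```python
import numpy as np

def rbf_predict(U, mus, A, Sigma_inv):
    """RBF output f(U) = sum_j A_j * phi(q_j), with phi(a) = sqrt(a) and
    q_j the Sigma-weighted squared distance of input U to center mu_j."""
    diffs = mus - U  # shape (m, 2): center-to-input differences
    # Quadratic forms q_j = diffs[j] @ Sigma_inv @ diffs[j] for each center.
    q = np.einsum('ji,ik,jk->j', diffs, Sigma_inv, diffs)
    return float(np.sqrt(q) @ A)
```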
In addition to training by EKF, I also implemented a batch training of the RBF
model in which a moving window of data was used, training on data from (n − 50) to n days and testing on day (n + 1). Since it is natural to assume that data
closer to the test day is more appropriate than data far back in time, I incorporated
a weighting function to weight the errors linearly, in the minimisation process. The
least squares solution, with a weighting function, is given by the modified pseudo-inverse
$$\underline{\lambda} = (Y' W Y)^{-1}\, Y' W\, \underline{t}.$$
Table 2: Comparison of the Approximation Errors for Different Methods

Strike Price   Trivial   RBF Batch   RBF Kalman   BS Historic   BS Kalman
2925           0.0790    0.0632      0.0173       0.0845        0.0180
3025           0.0999    0.1109      0.0519       0.1628        0.0440
3125           0.0764    0.0455      0.0193       0.0343        0.0112
3225           0.1116    0.0819      0.0595       0.0885        0.0349
Matrix W is a diagonal matrix, consisting of the weighting function in its diagonal elements, t is the vector of target values of the option prices, and λ is the vector containing the unknown coefficients A_1, ..., A_m. The elements of Y are given by Y_{ij} = φ_j(U_i), with j = 1, ..., m and i = n − 50, ..., n.
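The weighted pseudo-inverse solution amounts to a single linear solve; a minimal sketch, assuming a linearly increasing weight vector over the 51-day window (the names are illustrative):

```python
import numpy as np

def weighted_coefficients(Y, t, w):
    """Solve lambda = (Y' W Y)^{-1} Y' W t for the RBF coefficients."""
    W = np.diag(w)
    return np.linalg.solve(Y.T @ W @ Y, Y.T @ W @ t)

# Example weighting: most recent day weighted most heavily.
# w = np.linspace(1.0, 2.0, 51)
```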
5
SIMULATIONS
The data set for the experiments consisted of call and put option contracts on the
FTSE-100 Index, during the period February 1994 to December 1994. The date of
maturity of all contracts was December 1994. Five pairs (call and put) of contracts at strike prices of 2925, 3025, 3125, 3225, and 3325 were used.
The tracking behaviour of the EKF for one of the pairs is shown in Fig. 1 for a
call/put pair with strike price 3125. Fig. 2 shows the trajectories of the underlying
state vector for four different call/put option pairs. Table 2 shows the squared errors
in the approximation errors computed over the last 100 days of data (allowing for
an initial period of convergence of the recursive algorithms) .
6
DISCUSSION
This paper presents a sequential approach to tracking the price of options contracts.
The sequential approach is based on the Extended Kalman Filter algorithm, and I
show how it may be used to identify a parametric model of the underlying nonlinear
system. The model based approach of the finance community and the data driven
approach of neural computing community lead to good estimates of the observed
price of the options contracts when estimated in this manner.
In the state space formulation of the Black-Scholes model, the volatility and interest rate are estimated from the data. I trust the instantaneous picture presented
by the model based approach, but reestimate the underlying parameters. This is
different from conventional wisdom, where the risk free interest rate is set to some
figure observed in the bond markets. The value of volatility that gives the correct
options price through Black Scholes equation is called option implied volatility, and
is usually different for different options. Option traders often use the differences
in implied volatility to take trading positions. In the formulation presented here,
there is an extra freedom coming in the form of what one might call implied interest rates. Its difference from the interest rates observed in the markets might explain
trader speculation about risk associated with a particular currency.
The derivatives of the RBF model output with respect to its inputs are easy to
compute. Hutchinson et a1 use this to define a highly relevant performance measure
Sequential Tracking of Financial Option Prices
965
(a) Estimated Call Option Price
0.25 \
---True
0.2
~~Estimaie
~ 0.15
0.1
0.05
O~--~----L---~----~--~----~----L----L----~--~----~
20
40
60
80
100
120
140
160
180
200
220
(b) Estimated Put Option Price
0.1.---.----.----.----.----.----.----.----.----.----,----. .
0.08
~
---True
.:Estimate
.
. .
0.06
0.04
0.02
20
40
60
80
100
120
140
160
180
200
220
time
Figure 1: Tracking Black-Scholes Model with EKF; Estimates of Call and Put Prices
suitable to this particular application, namely the tracking error of a delta neutral
portfolio. This is an evaluation that is somewhat unfair to the RBF model since
at the time of training, the network is not shown the derivatives. An interesting
combination of the work presented in this paper and Hutchinson et al's performance
measure is to train the neural network to approximate the observed option prices
and simultaneously force the derivative network to approximate the delta observed
in the markets.
References
Bar-Shalom, Y. & Li, X-R. (1993), 'Estimation and Tracking: Principles, Techniques and Software', Artech House, London.
Candy, J. V. (1986), 'Signal Processing: The Model Based Aproach', McGraw-Hill,
New York.
Hull, J. (1993), 'Options, Futures and Other Derivative Securities', Prentice Hall,
NJ.
Hutchinson, J. M., Lo, A. W . & Poggio, T. (1994), 'A Nonparametric Approach to
Pricing and Hedging Derivative Securities Via Learning Networks', The Journal of
Finance, Vol XLIX, No.3., 851-889.
[Figure 2 graphic: "Trajectory of State Vector" — interest rate plotted against volatility for the contracts with strike prices 2925, 3025, 3125, and 3225.]
Figure 2: Tracking Black-Scholes Model with EKF; Estimates of Call and Put Prices
and the Trajectory of the State Vector
Kadirkamanathan, V. & Niranjan, M. (1993), 'A Function Estimation Approach to Sequential Learning with Neural Networks', Neural Computation 5, pp. 954-975.
Lowe, D. (1995), 'On the use of Nonlocal and Non Positive Definite Basis Functions in Radial Basis Function Networks', Proceedings of the IEE Conference on Artificial Neural Networks, IEE Conference Publication No. 409, pp. 206-211.
Niranjan, M., Cox, I. J., Hingorani, S. (1994), 'Recursive Estimation of Formants in Speech', Proceedings of the International Conference on Acoustics, Speech and Signal Processing, ICASSP '94, Adelaide.
Puskorius, G.V. & Feldkamp, L.A. (1994), 'Neurocontrol of Nonlinear Dynamical Systems with Kalman Filter-Trained Recurrent Networks', IEEE Transactions on Neural Networks, 5 (2), pp. 279-297.
270 | 1,247 | Genetic Algorithms and Explicit Search Statistics
Shumeet Baluja
baluja@cs.cmu.edu
Justsystem Pittsburgh Research Center &
School of Computer Science, Carnegie Mellon University
Abstract
The genetic algorithm (GA) is a heuristic search procedure based on mechanisms
abstracted from population genetics. In a previous paper [Baluja & Caruana, 1995],
we showed that much simpler algorithms, such as hillclimbing and Population-Based Incremental Learning (PBIL), perform comparably to GAs on an optimization problem custom designed to benefit from the GA's operators. This paper
extends these results in two directions. First, in a large-scale empirical comparison
of problems that have been reported in GA literature, we show that on many problems, simpler algorithms can perform significantly better than GAs. Second, we
describe when crossover is useful, and show how it can be incorporated into PBIL.
1 IMPLICIT VS. EXPLICIT SEARCH STATISTICS
Although there has recently been controversy in the genetic algorithm (GA) community as
to whether GAs should be used for static function optimization, a large amount of research
has been, and continues to be, conducted in this direction [De Jong, 1992]. Since much of
GA research focuses on optimization (most often in static environments), this study examines the performance of GAs in these domains.
In the standard GA, candidate solutions are encoded as fixed length binary vectors. The initial group of potential solutions is chosen randomly. At each generation, the fitness of each
solution is calculated; this is a measure of how well the solution optimizes the objective
function. The subsequent generation is created through a process of selection, recombination, and mutation. Recombination operators merge the information contained within pairs
of selected "parents" by placing random subsets of the information from both parents into
their respective positions in a member of the subsequent generation. The fitness proportional selection works as selective pressure; higher fitness solution strings have a higher
probability of being selected for recombination. Mutations are used to help preserve diversity in the population by introducing random changes into the solution strings. The GA uses
the population to implicitly maintain statistics about the search space. The selection, crossover, and mutation operators can be viewed as mechanisms of extracting the implicit statistics from the population to choose the next set of points to sample. Details of GAs can be
found in [Goldberg, 1989] [Holland, 1975].
Population-based incremental learning (PBIL) is a combination of genetic algorithms and
competitive learning [Baluja, 1994]. The PBIL algorithm attempts to explicitly maintain
statistics about the search space to decide where to sample next. The object of the algorithm
is to create a real valued probability vector which, when sampled, reveals high quality solution vectors with high probability. For example, if a good solution can be encoded as a
string of alternating 0's and 1's, a suitable final probability vector would be 0.01, 0.99,
0.01, 0.99, etc. The PBIL algorithm and parameters are shown in Figure 1.
Initially, the values of the probability vector are initialized to 0.5. Sampling from this vector yields random solution vectors because the probability of generating a 1 or 0 is equal.
As search progresses, the values in the probability vector gradually shift to represent high
***** Initialize Probability Vector *****
for i := 1 to LENGTH do P[i] = 0.5;
while (NOT termination condition)
***** Generate Samples *****
for i := 1 to SAMPLES do
sample_vectors[i] := generate_sample_vector_according_to_probabilities (P);
evaluations[i] := evaluate(sample_vectors[i]);
best_vector := find_vector_with_best_evaluation (sample_vectors, evaluations);
worst_vector := find_vector_with_worst_evaluation (sample_vectors, evaluations);
***** Update Probability Vector Towards Best Solution *****
for i := 1 to LENGTH do
P[i] := P[i] * (1.0 - LA) + best_vector[i] * (LA);

PBIL: USER DEFINED CONSTANTS (Values Used in this Study):
SAMPLES: the number of vectors generated before update of the probability vector (100).
LA: the learning rate, how fast to exploit the search performed (0.1).
NEGATIVE_LA: negative learning rate, how much to learn from negative examples (PBIL1 = 0.0, PBIL2 = 0.075).
LENGTH: the number of bits in a generated vector (problem specific).
Figure 1: PBIL/PBIL2 algorithm for a binary alphabet. PBIL2 includes the shaded region. Mutations not shown.
evaluation solution vectors through the following process. A number of solution vectors
are generated based upon the probabilities specified in the probability vector. The probability vector is pushed towards the generated solution vector with the highest evaluation.
After the probability vector is updated, a new set of solution vectors is produced by sampling from the updated probability vector, and the cycle is continued. As the search
progresses, entries in the probability vector move away from their initial settings of 0.5
towards either 0.0 or 1.0.
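The generate-evaluate-update cycle just described can be sketched in a few lines; the sketch below corresponds to the PBIL1 variant of Figure 1 without mutations, and the helper names and the toy objective are assumptions for illustration.

```python
import random

def pbil(evaluate, length, samples=100, lr=0.1, generations=2000):
    """Minimal PBIL loop: sample solution vectors from the probability
    vector, then shift the vector toward the best sample of the generation."""
    p = [0.5] * length
    best, best_eval = None, float("-inf")
    for _ in range(generations):
        pop = [[1 if random.random() < p[i] else 0 for i in range(length)]
               for _ in range(samples)]
        evals = [evaluate(v) for v in pop]
        gen_best = pop[max(range(samples), key=lambda k: evals[k])]
        if max(evals) > best_eval:
            best, best_eval = gen_best, max(evals)
        for i in range(length):
            p[i] = p[i] * (1.0 - lr) + gen_best[i] * lr
    return best

# e.g. pbil(lambda v: sum(v), length=40) drifts toward the all-ones vector.
```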
One key feature of the early generations of genetic optimization is the parallelism in the
search; many diverse points are represented in the population of points during the early
generations. When the population is diverse, crossover can be an effective means of
search, since it provides a method to explore novel solutions by combining different members of the population. Because PBIL uses a single probability vector, it may seem to have
less expressive power than a GA using a full population, since a GA can represent a large
number of points simultaneously. A traditional single population GA, however, would not
be able to maintain a large number of points. Because of sampling errors, the population
will converge around a single point. This phenomenon is summarized below:
"... the theorem [Fundamental Theorem of Genetic Algorithms [Goldberg, 1989]], assumes
an infinitely large population size. In a finite size population, even when there is no selective
advantage for either of two competing alternatives ... the population will converge to one
alternative or the other in finite time (De Jong, 1975; [Goldberg & Segrest, 1987]). This
problem of finite populations is so important that geneticists have given it a special name,
genetic drift. Stochastic errors tend to accumulate, ultimately causing the population to converge to one alternative or another" [Goldberg & Richardson, 1987].
Diversity in the population is crucial for GAs. By maintaining a population of solutions,
the GA is able, in theory at least, to maintain samples in many different regions. Crossover is used to merge these different solutions. A necessary (although not sufficient) condition for crossover to work well is diversity in the population. When diversity is lost,
crossover begins to behave like a mutation operator that is sensitive to the convergence of
the value of each bit [Eshelman, 1991]. If all individuals in the population converge at
some bit position, crossover leaves those bits unaltered. At bit positions where individuals
have not converged, crossover will effectively mutate values in those positions. Therefore,
crossover creates new individuals that differ from the individuals it combines only at the
bit positions where the mated individuals disagree. This is analogous to PBIL which creates new trials that differ mainly in positions where prior good performers have disagreed.
As an example of how the PBIL algorithm works, we can examine the values in the probability vector through multiple generations. Consider the following maximization problem: 1.0/|366503875925.0 − X|, 0 ≤ X < 2^40. Note that 366503875925 is represented in
binary as a string of 20 pairs of alternating '01'. The evolution of the probability vector is
shown in Figure 2. Note that the most significant bits are pinned to either 0 or 1 very
quickly, while the least significant bits are pinned last. This is because during the early
portions of the search, the most significant bits yield more information about high evaluation regions of the search space than the least significant bits.
Figure 2: Evolution of the probability vector over successive generations. White represents a high probability of generating a 1, black represents a high probability of generating a 0. Intermediate grey represents probabilities close to 0.5 - equal chances of generating a 0 or 1. Bit 0 is the most significant, bit 40 the least.
2 AN EMPIRICAL COMPARISON
This section provides a summary of the results obtained from a large scale empirical comparison of seven iterative and evolution-based optimization heuristics. Thirty-four static
optimization problems, spanning six sets of problem classes which are commonly
explored in the genetic algorithm literature, are examined. The search spaces in these
problems range from 2^128 to 2^2040. The results indicate that, on many problems, using
standard GAs for optimizing static functions does not yield a benefit, in terms of the final
answer obtained, over simple hillclimbing or PBIL. Recently, there have been other studies which have examined the performance of GAs in comparison to hillclimbing on a few
problems; they have shown similar results [Davis, 1991][Juels & Wattenberg, 1996].
Three variants of Multiple-Restart Stochastic Hillclimbing (MRSH) are explored in this
paper. The first version, MRSH-1, maintains a list of the positions of the bit flips which were attempted without improvement. These bit flips are not attempted again until a better solution is found. When a better solution is found, the list is emptied. If the list becomes as large as the solution encoding, MRSH-1 is restarted at a random solution with an empty list. MRSH-2 and MRSH-3 allow moves to regions of higher and equal evaluation. In MRSH-2, the number of evaluations before restart depends upon the length of the encoded solution. MRSH-2 allows 10*(length of solution) evaluations without improvement
before search is restarted. When a solution with a higher evaluation is found, the count is
reset. In MRSH-3, after the total number of iterations is specified, restart is forced 5 times
during search, at equally spaced intervals.
Two variants of the standard GA are tested in this study. The first, termed SGA, has the
following parameters: Two-Point crossover, with a crossover rate of 100% (% of times
crossover occurs, otherwise the individuals are copied without crossover), mutation probability of 0.001 per bit, population size of 100, and elitist selection (the best solution in
generation N replaces the worst solution in generation N+ 1). The second GA used, termed
GA-Scale, uses the same parameters except: uniform crossover with a crossover rate of
80% and the fitness of the worst member in a generation is subtracted from the fitnesses of
each member of the generation before the probabilities of selection are determined.
Two variants of PBIL are tested. Both move the probability vector towards the best example in each generated population. PBIL2 also moves the probability vector away from the
worst example in each generation. Both variants are shown in Figure 1. A small mutation,
analogous to the mutation used in genetic algorithms, is also used in both PBILs. The
mutation is directly applied to the probability vector.
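One common formulation of PBIL2's additional move away from the worst example (an assumption here, since the shaded region of Figure 1 is not fully reproduced above) updates a bit only where the best and worst vectors of a generation disagree:

```python
def pbil2_update(p, best_v, worst_v, lr=0.1, neg_lr=0.075):
    """Toward-best update plus the negative-learning step: where the best
    and worst samples disagree, also push the probability toward the best
    (i.e., away from the worst) with the negative learning rate."""
    for i in range(len(p)):
        p[i] = p[i] * (1.0 - lr) + best_v[i] * lr
        if best_v[i] != worst_v[i]:
            p[i] = p[i] * (1.0 - neg_lr) + best_v[i] * neg_lr
    return p
```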
The results obtained in this study should not be considered to be state-of-the-art. The
problem encodings were chosen to be easily reproducible and to allow easy comparison
with other studies. Alternate encodings may yield superior results. In addition, no problem-specific information was used for any of the algorithms. Problem-specific information, when available, could help all of the algorithms examined.
All of the variables in the problems were encoded in binary, either with standard Gray-code or base-2 representation. The variables were represented in non-overlapping, contiguous regions within the solution encoding. The results reported are the best evaluations
found through the search of each algorithm, averaged over at least 20 independent runs
per algorithm per problem; the results for GA-SCALE and PBIL2 algorithms are the average of at least 50 runs. All algorithms were given 200,000 evaluations per run. In each run,
the GA and PBIL algorithms were given 2000 generations, with 100 function evaluations
per generation. In each run, the MRSH algorithms were restarted in random locations as
many times as needed until 200,000 evaluations were performed. The best answer found
in the 200,000 evaluations was returned as the answer found in the run.
Brief notes about the encodings are given below. Since the numerical results are not useful
without the exact problems, relative results are provided in Table I. For most of the problems, exact results and encodings are in [Baluja, 1995]. To measure the significance of the
difference between the results obtained by PBIL2 and GA-SCALE, the Mann-Whitney
test is used. This is a non-parametric equivalent to the standard two-sample pooled t-tests.
• TSP: 128, 200 & 255 city problems were tried. The "sort" encoding [Syswerda, 1989] was used. The last problem was tried with the encoding in binary and Gray-Code.
• Jobshop: Two standard JS problems were tried with two encodings. The first encoding is described in [Fang et al., 1993]. The second encoding is described in [Baluja, 1995]. An additional, randomly generated, problem was also tried with the second encoding.
•
Knapsack: Problem 1&2: a unique element is represented by each bit. Problem 3&4: there
are 8 and 32 copies of each element respectively. The encoding specified the number of copies of
each element to include. Each element is assigned a "value" and "weight". Object: maximize
value while staying under pre-specified weight.
• Bin-Packing/Equal Piles: The solution is encoded in a bit vector of length M * log2N (N
bins, M elem.). Each element is assigned a substring of length log2N, which specifies a bin.
Object: pack the given bins as tightly as possible. Because of the large variation in results which is
found by varying the number of bins and elements, the results from 8 problems are reported.
• Neural-Network Weight Optimization: Problem 1&2: identify the parity of 7 inputs. Problem 3&4: determine whether a point falls within the middle of 3 concentric squares. For problems
3&4, 5 extra inputs, which contained noise, were used. The networks had 8 inputs (including
bias), 5 hidden units, and 1 output. The network was fully connected between sequential layers.
• Numerical Function Optimization (F1-F3): Problems 1&2: the variables in the first portions of the solution string have a large influence on the quality of the rest of the solution. In the
third problem, each variable can be set independently. See [Baluja, 1995] for details.
• Graph Coloring: Select 1 of 4 colors for nodes of a partially connected graph such that connected nodes are not the same color. The graphs used were not necessarily planar.
Table I: Summary of Empirical Results - Relative Ranks (1=best, 7=worst).
3 EXPLICITLY PRESERVING DIVERSITY
Although the results in the previous section showed that PBIL often outperformed GAs
and hillclimbing, PBIL may not surpass GAs at all population sizes. As the population
size increases, the observed behavior of a GA more closely approximates the ideal behavior predicted by theory [Holland, 1975]. The population may contain sufficient samples
from distinct regions for crossover to effectively combine "building blocks" from multiple
solutions. However, the desire to minimize the total number of function evaluations often
prohibits the use of large enough populations to make crossover behave ideally.
One method of avoiding the cost of using a very large population is to use a parallel GA
(pGA). Many studies have found pGAs to be very effective for preserving diversity for
function optimization [Cohoon et al., 1988][Whitley et al., 1990]. In the pGA, a collection
of independent GAs, each maintaining separate populations, communicate with each other
via infrequent inter-population (as opposed to intra-population) matings. pGAs suffer less
from premature convergence than single population GAs. Although the individual populations typically converge, different populations converge to different solutions, thus preserving diversity across the populations. Inter-population mating permits crossover to
combine solutions found in different regions of the search space.
We would expect that employing multiple PBIL evolutions, parallel PBIL (pPBIL), has
the potential to yield performance improvements similar to those achieved in pGAs. Multiple PBIL evolutions are simulated by using multiple probability vectors to generate solutions. To keep the evolutions independent, each probability vector is only updated with
solutions which are generated by sampling it.
The benefit of parallel populations (beyond just multiple runs) is in using crossover to
combine dissimilar solutions. There are many ways of introducing crossover into PBIL.
The method which is used here is to sample two probability vectors for the creation of
each solution vector; see Figure 3. The figure shows the algorithm with uniform crossover, although many other crossover operators can be used.
The randomized nature of crossover often yields unproductive results. If crossover is to be
used, it is important to simulate the crossover operation many times. Therefore, crossover
is used to create each member of the population (this is in contrast to crossing over the
probability vectors once, and generating the entire population from the newly created
probability vector). More details on integrating crossover and PBIL, and its use in combinatorial problems in robotic surgery can be found in [Baluja & Simon, 1996].
Results with using pPBIL in comparison to PBIL, GA, and pGA are shown in Table II.
For many of the problems explored here, parallel versions of GAs and PBIL work better
than the sequential versions, and the parallel PBIL models work better than the parallel
GA models. In each of these experiments, the parameters were hand-tuned for each algorithm. In every case, the GA was given at least twice as many function evaluations as
PBIL. The crossover operator was chosen by trying several operators on the GA, and
selecting the best one. The same crossover operator was then used for PBIL. For the pGA
and pPBIL experiments, 10 subpopulations were always used.
..... Generate Samples With Two Probability Vectors .....
for i := 1 to SAMPLES do
    vector1 := generate_sample_vector_with_probabilities (P1);
    vector2 := generate_sample_vector_with_probabilities (P2);
    for j := 1 to LENGTH do
        if (random (2) = 0) sample_vector[i][j] := vector1[j]
        else sample_vector[i][j] := vector2[j]
    evaluations[i] := Evaluate_Solution (sample_vector[i]);
best_vector := best_evaluation (sample_vectors, evaluations);
..... Update Both Probability Vectors Towards Best Solution .....
for i := 1 to LENGTH do
    P1[i] := P1[i] * (1.0 - LR) + best_vector[i] * (LR);
    P2[i] := P2[i] * (1.0 - LR) + best_vector[i] * (LR);
Figure 3: Generating samples based on two probability vectors, shown with uniform crossover [Syswerda, 1989] (50% chance of using probability vector 1 or vector 2 for each bit position). Every 100 generations, each population makes a local copy of another population's probability vector (to replace vector2). In these experiments, there are a total of 10 subpopulations.
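For concreteness, the following is a minimal Python sketch of the two-vector sampling scheme of Figure 3. It assumes binary solution vectors and a user-supplied evaluation function; the names and the simple best-of-population update are illustrative, not the exact code used in these experiments.

    import random

    def ppbil_step(p1, p2, evaluate, num_samples=100, lr=0.1):
        """One generation of two-vector PBIL with uniform crossover.

        p1, p2   -- probability vectors (lists of floats in [0, 1])
        evaluate -- maps a bit vector to a fitness (higher is better)
        """
        length = len(p1)
        samples, scores = [], []
        for _ in range(num_samples):
            # Each bit is drawn from p1 or p2 with equal probability
            # (uniform crossover at the probability-vector level).
            bits = [int(random.random() <
                        (p1[j] if random.random() < 0.5 else p2[j]))
                    for j in range(length)]
            samples.append(bits)
            scores.append(evaluate(bits))
        best = samples[scores.index(max(scores))]
        # Move both probability vectors toward the best sample.
        for j in range(length):
            p1[j] = p1[j] * (1.0 - lr) + best[j] * lr
            p2[j] = p2[j] * (1.0 - lr) + best[j] * lr
        return best, max(scores)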
Table II: Sequential & Parallel, GA & PBIL, Avg. of 25 runs. The rows of the table are: TSP - 200 city (minimize tour length); Numerical Optim., Highly Correlated Parameters - Base-2 Code (max); Numerical Optim., Highly Correlated Parameters - Gray Code (max); Numerical Optim., Independent Parameters - Base-2 Code (max); Numerical Optim., Independent Parameters - Gray Code (max); Problem with many maxima, see [Baluja, 1994] (max).
4 SUMMARY & CONCLUSIONS
PBIL was examined on a very large set of problems drawn from the GA literature. The
effectiveness of PBIL for finding good solutions for static optimization functions was
compared with a variety of GA and hillclimbing techniques. Second, Parallel-PBIL was
introduced. pPBIL is designed to explicitly preserve diversity by using multiple parallel
evolutions. Methods for reintroducing crossover into pPBIL were given.
With regard to the empirical results, it should be noted that it is incorrect to say that one
procedure will always perform better than another. The results do not indicate that PBIL
will always outperform a GA. For example, we have presented problems on which GAs
work better. Further, on problems such as bin-packing, the relative results can change drastically depending upon the number of bins and elements. The conclusion that should be
reached from these results is that algorithms, like PBIL and MRSH, which are much simpler than GAs, can outperform standard GAs on many problems of interest.
The PBIL algorithm presented here is very simple and should serve as a prototype for
future study. Three directions for future study are presented here. First, the most obvious
extension to PBIL is to track more detailed statistics, such as pair-wise covariances of bit
positions in high-evaluation vectors. Preliminary work in this area has been conducted,
and the results are very promising. Second, another extension is to quickly determine
which probability vectors, in the pPBIL model, are unlikely to yield promising answers;
methods such as Hoeffding Races may be adapted here [Maron & Moore, 1994]. Third,
the manner in which the updates to the probability vector occur is similar to the weight
update rules used in Learning Vector Quantization (LVQ). Many of the heuristics used in
LVQ can be incorporated into the PBIL algorithm.
Perhaps the most important contribution of the PBIL algorithm is a novel way of examining GAs. In many previous studies of the GA, the GA was examined at a micro-level, analyzing the preservation of building blocks and frequency of sampling hyperplanes. In this
study, the statistics at the population level were examined. In the standard GA, the population serves to implicitly maintain statistics about the search space. The selection and crossover mechanisms are ways of extracting these statistics from the population. PBIL's
population does not maintain the information that is carried from one generation to the
next. The statistics of the search are explicitly kept in the probability vector.
References
Baluja, S. (1995) "An Empirical Comparison of Seven Iterative and Evolutionary Function Optimization Heuristics," CMU-CS-95-193. Available via http://www.cs.cmu.edu/~baluja.
Baluja, S. (1994) "Population-Based Incremental Learning". Carnegie Mellon University, Technical Report CMU-CS-94-163.
Baluja, S. & Caruana, R. (1995) "Removing the Genetics from the Standard Genetic Algorithm", International Conference on Machine Learning 12.
Baluja, S. & Simon, D. (1996) "Evolution-Based Methods for Selecting Point Data for Object Localization: Applications to Computer Assisted Surgery". CMU-CS-96-183.
Cohoon, J., Hedge, S., Martin, W., Richards, D. (1988) "Distributed Genetic Algorithms for the Floor Plan Design Problem," School of Engineering and Applied Science, Computer Science Dept., University of Virginia, TR-88-12.
Davis, L. (1991) "Bit-Climbing, Representational Bias and Test Suite Design". International Conference on Genetic Algorithms 4.
De Jong, K. (1975) An Analysis of the Behavior of a Class of Genetic Adaptive Systems. Ph.D. Dissertation.
De Jong, K. (1993) "Genetic Algorithms are NOT Function Optimizers". In Whitley (ed.) Foundations of GAs-2. 5-17.
Eshelman, L.J. (1991) "The CHC Adaptive Search Algorithm," in Rawlins (ed.) Foundations of GAs-1. 265-283.
Fang, H.-L., Ross, P., Corne, D. (1993) "A Promising Genetic Algorithm Approach to Job-Shop Scheduling, Rescheduling, and Open-Shop Scheduling Problems". In Forrest, S. (ed.) International Conference on Genetic Algorithms 5.
Goldberg, D.E. (1989) Genetic Algorithms in Search, Optimization, and Machine Learning. Addison-Wesley.
Goldberg, D.E. & Richardson, J. (1987) "Genetic Algorithms with Sharing for Multimodal Function Optimization". Proceedings of the Second International Conference on Genetic Algorithms.
Holland, J.H. (1975) Adaptation in Natural and Artificial Systems. Ann Arbor: The University of Michigan Press.
Juels, A. & Wattenberg, M. (1994) "Stochastic Hillclimbing as a Baseline Method for Evaluating Genetic Algorithms". NIPS 8.
Maron, O. & Moore, A. (1994) "Hoeffding Races: Accelerating Model Selection for Classification and Function Approximation". NIPS 6.
Mitchell, M., Holland, J. & Forrest, S. (1994) "When Will a Genetic Algorithm Outperform Hill Climbing?". NIPS 6.
Syswerda, G. (1989) "Uniform Crossover in Genetic Algorithms". International Conference on Genetic Algorithms 3. 2-9.
Whitley, D. & Starkweather, T. (1990) "GENITOR II: A Distributed Genetic Algorithm". Journal of Experimental and Theoretical Artificial Intelligence 2: 189-214.
Neural Learning in Structured Parameter Spaces
Natural Riemannian Gradient
Shun-ichi Amari
RIKEN Frontier Research Program, RIKEN,
Hirosawa 2-1, Wako-shi 351-01, Japan
amari@zoo.riken.go.jp
Abstract
The parameter space of neural networks has a Riemannian metric structure. The natural Riemannian gradient should be used
instead of the conventional gradient, since the former denotes the
true steepest descent direction of a loss function in the Riemannian
space. The stochastic gradient learning algorithm is much more effective if the natural gradient is used. The present
paper studies the information-geometrical structure of perceptrons
and other networks, and proves that the on-line learning method
based on the natural gradient is asymptotically as efficient as the
optimal batch algorithm. Adaptive modification of the learning
constant is proposed and analyzed in terms of the Riemannian measure and is shown to be efficient. The natural gradient is finally
applied to blind separation of mixed independent signal sources.
1 Introduction
Neural learning takes place in the parameter space of modifiable synaptic weights
of a neural network. The role of each parameter is different in the neural network
so that the parameter space is structured in this sense. The Riemannian structure
which represents a local distance measure is introduced in the parameter space by
information geometry (Amari, 1985).
On-line learning is mostly based on the stochastic gradient descent method, where
the current weight vector is modified in the gradient direction of a loss function.
However, the ordinary gradient does not represent the steepest direction of a loss
function in the Riemannian space. A geometrical modification is necessary, and it is
called the natural Riemannian gradient. The present paper studies the remarkable
effects of using the natural Riemannian gradient in neural learning.
S. Amari
128
We first study the asymptotic behavior of on-line learning (Opper, NIPS'95 Workshop). Batch learning uses all the examples at any time to obtain the optimal weight
vector, whereas on-line learning uses an example once, when it is observed. Hence, in general, the target weight vector is estimated more accurately in the case of batch learning. However, we prove that, when the Riemannian gradient is used, on-line learning is asymptotically as efficient as optimal batch learning.
On-line learning is useful when the target vector fluctuates slowly (Amari, 1967). In this case, we need to modify the learning constant $\eta_t$ depending on how far the current weight vector is located from the target. We show an algorithm that adaptively changes the learning constant based on the Riemannian criterion, and prove that it gives asymptotically optimal behavior. This is a generalization of the idea of Sompolinsky et al. [1995].
We then answer the question of what Riemannian structure should be introduced in the parameter space of synaptic weights. We answer this problem from the point of view of information geometry (Amari [1985, 1995], Amari et al. [1992]). The explicit form of the Riemannian metric and its inverse matrix are given in the case of simple perceptrons.
We finally show how the Riemannian gradient is applied to blind separation of mixed independent signal sources. Here, the mixing matrix is unknown, so the parameter space is the space of matrices. The Riemannian structure is introduced in it. The natural Riemannian gradient is computationally much simpler and more effective than the conventional gradient.
2 Stochastic Gradient Descent and On-Line Learning
Let us consider a neural network which is specified by a vector parameter $w = (w_1, \ldots, w_n) \in \mathbf{R}^n$. The parameter $w$ is composed of modifiable connection weights and thresholds. Let us denote by $l(x, w)$ the loss when input signal $x$ is processed by a network having parameter $w$. In the case of multilayer perceptrons, a desired
output or teacher signal $y$ is associated with $x$, and a typical loss is given by
$$l(x, y, w) = \frac{1}{2}\,\| y - f(x, w) \|^2, \qquad (1)$$
where $z = f(x, w)$ is the output from the network.
When input $x$, or input-output training pair $(x, y)$, is generated from a fixed probability distribution, the expected loss $L(w)$ of the network specified by $w$ is
$$L(w) = E[\,l(x, y; w)\,], \qquad (2)$$
where $E$ denotes the expectation. A neural network is trained by using training examples $(x_1, y_1), (x_2, y_2), \ldots$ to obtain the optimal network parameter $w^*$ that minimizes $L(w)$. If $L(w)$ is known, the gradient method is described by
$$w_{t+1} = w_t - \eta_t \nabla L(w_t), \qquad t = 1, 2, \ldots,$$
where $\eta_t$ is a learning constant depending on $t$ and $\nabla L = \partial L / \partial w$. Usually $L(w)$ is unknown. The stochastic gradient learning method
$$w_{t+1} = w_t - \eta_t \nabla l(x_t, y_t; w_t) \qquad (3)$$
was proposed in an early paper (Amari [1967]). This method has become popular since Rumelhart et al. [1986] rediscovered it. It is expected that, when $\eta_t$ converges to 0 in a certain manner, the above $w_t$ converges to $w^*$. The dynamical behavior of
(3) was studied by Amari [1967], Heskes and Kappen [1992] and many others when $\eta_t$ is a constant.
It was also shown in Amari [1967] that the rule
$$w_{t+1} = w_t - \eta_t C\, \nabla l(x_t, y_t; w_t) \qquad (4)$$
works well for any positive-definite matrix $C$, in particular for the metric $G$. Geometrically speaking, $\partial l/\partial w$ is a covariant vector, while $\Delta w_t = w_{t+1} - w_t$ is a contravariant vector. Therefore, it is natural to use a (contravariant) metric tensor $C^{-1}$ to convert the covariant gradient into the contravariant form
$$\tilde{\nabla} l = C^{-1} \frac{\partial l}{\partial w} = \Bigl( \sum_j g^{ij} \frac{\partial l}{\partial w_j}(w) \Bigr), \qquad (5)$$
where $C^{-1} = (g^{ij})$ is the inverse matrix of $C = (g_{ij})$. The present paper studies how the metric tensor $C$ should be defined in neural learning and how effective the new gradient learning rule
$$w_{t+1} = w_t - \eta_t C^{-1} \nabla l(x_t, y_t; w_t) \qquad (6)$$
is.
3 Gradient in Riemannian spaces
Let $S = \{w\}$ be the parameter space and let $l(w)$ be a function defined on $S$. When $S$ is a Euclidean space and $w$ is an orthonormal coordinate system, the squared length of a small incremental vector $dw$ connecting $w$ and $w + dw$ is given by
$$|dw|^2 = \sum_{i=1}^{n} (dw_i)^2. \qquad (7)$$
However, when the coordinate system is non-orthonormal or the space $S$ is Riemannian, the squared length is given by a quadratic form
$$|dw|^2 = \sum_{i,j} g_{ij}(w)\, dw_i\, dw_j = dw'\, G\, dw. \qquad (8)$$
Here, the matrix $G = (g_{ij})$ depends in general on $w$ and is called the metric tensor. It reduces to the identity matrix $I$ in the Euclidean orthonormal case. It will be shown soon that the parameter space $S$ of neural networks has a Riemannian structure (see Amari et al. [1992], Amari [1995], etc.).
The steepest descent direction of a function $l(w)$ at $w$ is defined by the vector $dw$ that minimizes $l(w + dw)$ under the constraint $|dw|^2 = \varepsilon^2$ (see eq. 8) for a sufficiently small constant $\varepsilon$.

Lemma 1. The steepest descent direction of $l(w)$ in a Riemannian space is given by
$$-\tilde{\nabla} l(w) = -C^{-1}(w)\, \nabla l(w).$$
We call
$$\tilde{\nabla} l(w) = C^{-1}(w)\, \nabla l(w)$$
the natural gradient of $l(w)$ in the Riemannian space. It shows the steepest descent direction of $l$, and is nothing but the contravariant form of $\nabla l$ in the tensor notation. When the space is Euclidean and the coordinate system is orthonormal, $C$ is the unit matrix $I$, so that $\tilde{\nabla} l = \nabla l$.
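As an illustration, here is a minimal Python sketch of a single natural-gradient step, contrasted with the ordinary gradient step; the loss-gradient and metric callbacks are placeholders, and in this paper the metric is the Fisher information of the network.

    import numpy as np

    def ordinary_step(w, grad, eta):
        """Ordinary gradient step: w <- w - eta * grad(w)."""
        return w - eta * grad(w)

    def natural_step(w, grad, metric, eta):
        """Natural-gradient step: w <- w - eta * G(w)^{-1} grad(w)."""
        G = metric(w)  # Riemannian metric tensor at w
        return w - eta * np.linalg.solve(G, grad(w))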
4 Natural gradient gives efficient on-line learning
Let us begin with the simplest case of noisy multilayer analog perceptrons. Given input $x$, the network emits output $z = f(x, w) + n$, where $f$ is a differentiable deterministic function of the multilayer perceptron with parameter $w$, and $n$ is a noise subject to the normal distribution $N(0, 1)$. The probability density of an input-output pair $(x, z)$ is given by
$$p(x, z; w) = q(x)\, p(z \mid x; w),$$
where $q(x)$ is the probability distribution of input $x$, and
$$p(z \mid x; w) = \frac{1}{\sqrt{2\pi}} \exp\Bigl\{ -\frac{1}{2}\, \| z - f(x, w) \|^2 \Bigr\}.$$
The squared error loss function (1) can be written as
$$l(x, z, w) = -\log p(x, z; w) + \log q(x) - \log \sqrt{2\pi}.$$
Hence, minimizing the loss is equivalent to maximizing the likelihood function $p(x, z; w)$.
Let $D_T = \{(x_1, z_1), \ldots, (x_T, z_T)\}$ be $T$ independent input-output examples generated by the network having the parameter $w^*$. Then the maximum likelihood estimator $\hat{w}_T$ minimizes the log loss $l(x, z; w) = -\log p(x, z; w)$ over the training data $D_T$; that is, it minimizes the training error
$$L_T(w) = \frac{1}{T} \sum_{t=1}^{T} l(x_t, z_t; w). \qquad (9)$$
The maximum likelihood estimator is efficient (or Fisher-efficient), implying that it is the best consistent estimator, satisfying the Cramér-Rao bound asymptotically:
$$\lim_{T \to \infty} T\, E[(\hat{w}_T - w^*)(\hat{w}_T - w^*)'] = G^{-1}, \qquad (10)$$
where $G^{-1}$ is the inverse of the Fisher information matrix $G = (g_{ij})$, defined in component form by
$$g_{ij} = E\left[ \frac{\partial \log p(x, z; w)}{\partial w_i}\, \frac{\partial \log p(x, z; w)}{\partial w_j} \right]. \qquad (11)$$
Information geometry (Amari, 1985) proves that the Fisher information $G$ is the only invariant metric to be introduced in the space $S = \{w\}$ of the parameters of probability distributions.
Examples $(x_1, z_1), (x_2, z_2), \ldots$ are given one at a time in the case of on-line learning. Let $\tilde{w}_t$ be the estimated value at time $t$. At the next time $t + 1$, the estimator $\tilde{w}_t$ is modified to give a new estimator $\tilde{w}_{t+1}$ based on the observation $(x_{t+1}, z_{t+1})$. The old observations $(x_1, z_1), \ldots, (x_t, z_t)$ cannot be reused to obtain $\tilde{w}_{t+1}$, so that the learning rule is written as $\tilde{w}_{t+1} = m(x_{t+1}, z_{t+1}, \tilde{w}_t)$. The process $\{\tilde{w}_t\}$ is hence Markovian. Whatever learning rule $m$ we choose, the behavior of the estimator $\tilde{w}_t$ is never better than that of the optimal batch estimator $\hat{w}_t$, because of this restriction. The conventional on-line learning rule is given by the following gradient form: $\tilde{w}_{t+1} = \tilde{w}_t - \eta_t \nabla l(x_{t+1}, z_{t+1}; \tilde{w}_t)$. When $\eta_t$ satisfies a certain condition, say $\eta_t = c/t$, stochastic approximation guarantees that $\tilde{w}_t$ is a consistent estimator converging to $w^*$. However, it is not efficient in general.
There arises the question of whether there exists an on-line learning rule that gives an efficient estimator. If it exists, the asymptotic behavior of on-line learning is equivalent to
that of the batch estimation method. The present paper answers the question by giving an efficient on-line learning rule:
$$\tilde{w}_{t+1} = \tilde{w}_t - \frac{1}{t}\, \tilde{\nabla} l(x_{t+1}, z_{t+1}; \tilde{w}_t). \qquad (12)$$
Theorem 1. The natural gradient on-line learning rule gives a Fisher-efficient estimator; that is,
$$\lim_{T \to \infty} T\, E[(\tilde{w}_T - w^*)(\tilde{w}_T - w^*)'] = G^{-1}. \qquad (13)$$
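Rule (12) can be sketched in Python as follows; the Fisher-matrix callback is a stand-in for the exact metric, so this is an illustration of the update rather than the estimator analyzed in Theorem 1.

    import numpy as np

    def online_natural_gradient(stream, grad_l, fisher, w0):
        """Run rule (12): w <- w - (1/t) G(w)^{-1} grad l(x, z; w).

        stream -- iterable of (x, z) training pairs
        grad_l -- gradient of the per-example loss at (x, z, w)
        fisher -- Fisher information matrix G(w)
        """
        w = np.asarray(w0, dtype=float)
        for t, (x, z) in enumerate(stream, start=1):
            w = w - (1.0 / t) * np.linalg.solve(fisher(w), grad_l(x, z, w))
        return w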
5 Adaptive modification of the learning constant
We have proved that $\eta_t = 1/t$, with the coefficient matrix $C^{-1} = G^{-1}$, is the asymptotically best choice for on-line learning. However, when the target parameter $w^*$ is not fixed but fluctuates or changes suddenly, this choice is not good, because the learning system cannot follow the changes if $\eta_t$ is too small. It was proposed in Amari [1967] to choose $\eta_t$ adaptively, such that $\eta_t$ becomes larger when the current target $w^*$ is far from $w_t$ and becomes smaller when it is close. However, no definite scheme was analyzed there. Sompolinsky et al. [1995] proposed an excellent scheme for an adaptive choice of $\eta_t$ for deterministic dichotomy neural networks. We extend their idea to be applicable to stochastic cases, where the Riemannian structure plays a role.

We assume that $l(x, z; w)$ is differentiable with respect to $w$. (The non-differentiable case is usually more difficult to analyze; Sompolinsky et al. [1995] treated this case.) We moreover treat a realizable teacher, so that $L(w^*) = 0$.
We propose the following learning scheme:
$$w_{t+1} = w_t - \eta_t\, \tilde{\nabla} l(x_{t+1}, z_{t+1}; w_t), \qquad (14)$$
$$\eta_{t+1} = \eta_t + \alpha \eta_t \bigl[ \beta\, l(x_{t+1}, z_{t+1}; w_t) - \eta_t \bigr], \qquad (15)$$
where $\alpha$ and $\beta$ are constants. We analyze the dynamical behavior of learning by using the continuous-time version of the algorithm,
$$\frac{d}{dt}\, w_t = -\eta_t\, \tilde{\nabla} l(x_t, z_t; w_t), \qquad (16)$$
$$\frac{d}{dt}\, \eta_t = \alpha \eta_t \bigl[ \beta\, l(x_t, z_t; w_t) - \eta_t \bigr]. \qquad (17)$$
In order to show the dynamical behavior of $(w_t, \eta_t)$, we use the version of the above equations averaged over the current input-output pair $(x_t, z_t)$. We introduce the squared error variable
$$e_t = \frac{1}{2} (w_t - w^*)'\, G^*\, (w_t - w^*). \qquad (18)$$
Using the averaged continuous-time version
$$\dot{w}_t = -\eta_t\, G^{-1}(w_t) \Bigl\langle \frac{\partial}{\partial w} l(x_t, z_t; w_t) \Bigr\rangle,$$
$$\dot{\eta}_t = \alpha \eta_t \bigl[ \beta\, \langle l(x_t, z_t; w_t) \rangle - \eta_t \bigr],$$
where the dot denotes $d/dt$ and $\langle \cdot \rangle$ the average over the current $(x, z)$, we have
$$\dot{e}_t = -2 \eta_t e_t, \qquad (19)$$
$$\dot{\eta}_t = \alpha \beta\, \eta_t e_t - \alpha \eta_t^2. \qquad (20)$$
The behavior of the above equations is interesting: the origin $(0, 0)$ is an attractor, but its basin of attraction has a fractal boundary. Starting from an adequate initial value, the system has a solution of the form
$$e_t \approx \frac{1}{2\beta} \Bigl( 1 - \frac{2}{\alpha} \Bigr) \frac{1}{t}, \qquad (21)$$
$$\eta_t \approx \frac{1}{2t}. \qquad (22)$$
This proves the $1/t$ convergence rate of the generalization error, which is optimal in order for any estimator $w_t$ converging to $w^*$.
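A minimal Python sketch of the adaptive scheme (14)-(15) is given below; the constants alpha and beta and the loss and gradient callbacks are illustrative placeholders, not tuned values.

    import numpy as np

    def adaptive_natural_gradient(stream, loss, grad_l, fisher, w0,
                                  eta0=0.1, alpha=2.5, beta=1.0):
        """Natural-gradient learning with the adaptive rate of eqs. (14)-(15)."""
        w, eta = np.asarray(w0, dtype=float), eta0
        for x, z in stream:
            cur_loss = loss(x, z, w)  # l(x_{t+1}, z_{t+1}; w_t)
            g = np.linalg.solve(fisher(w), grad_l(x, z, w))
            w = w - eta * g                                    # eq. (14)
            eta = eta + alpha * eta * (beta * cur_loss - eta)  # eq. (15)
        return w, eta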
6 Riemannian structures of simple perceptrons
We first study the parameter space $S$ of simple perceptrons to obtain an explicit form of the metric $G$ and its inverse $G^{-1}$. This suggests how to calculate the metric in the parameter space of multilayer perceptrons.

Let us consider a simple perceptron with input $x$ and output $z$. Let $w$ be its connection weight vector. For the analog stochastic perceptron, the input-output behavior is described by $z = f(w' x) + n$, where $n$ denotes a random noise subject to the normal distribution $N(0, \sigma^2)$ and $f$ is the hyperbolic tangent,
$$f(u) = \frac{1 - e^{-u}}{1 + e^{-u}}.$$
In order to calculate the metric $G$ explicitly, let $e_w$ be the unit column vector in the direction of $w$ in the Euclidean space $\mathbf{R}^n$,
$$e_w = \frac{w}{\|w\|},$$
where $\|w\|$ is the Euclidean norm. We then have the following theorem.
Theorem 2. The Fisher metric $G$ and its inverse $G^{-1}$ are given by
$$G(w) = c_1(w)\, I + \{ c_2(w) - c_1(w) \}\, e_w e_w', \qquad (23)$$
$$G^{-1}(w) = \frac{1}{c_1(w)}\, I + \Bigl( \frac{1}{c_2(w)} - \frac{1}{c_1(w)} \Bigr)\, e_w e_w', \qquad (24)$$
where $w = \|w\|$ (the Euclidean norm) and $c_1(w)$ and $c_2(w)$ are given by
$$c_1(w) = \frac{1}{4\sqrt{2\pi}\,\sigma^2} \int \{ f^2(w\varepsilon) - 1 \}^2 \exp\Bigl\{ -\frac{1}{2}\varepsilon^2 \Bigr\}\, d\varepsilon, \qquad (25)$$
$$c_2(w) = \frac{1}{4\sqrt{2\pi}\,\sigma^2} \int \{ f^2(w\varepsilon) - 1 \}^2 \varepsilon^2 \exp\Bigl\{ -\frac{1}{2}\varepsilon^2 \Bigr\}\, d\varepsilon. \qquad (26)$$
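As a numerical illustration of Theorem 2, the following Python sketch evaluates $c_1(w)$ and $c_2(w)$ by quadrature and assembles $G(w)$; the normalization follows equations (25)-(26) as reconstructed above, so it should be taken as a sketch rather than a verified implementation.

    import numpy as np
    from scipy.integrate import quad

    def fisher_metric(w_vec, sigma=1.0):
        """Fisher metric G(w) of the simple analog perceptron (Theorem 2)."""
        w = np.linalg.norm(w_vec)
        f = lambda u: np.tanh(u / 2.0)  # equals (1 - e^{-u}) / (1 + e^{-u})
        c = 1.0 / (4.0 * np.sqrt(2.0 * np.pi) * sigma**2)
        c1 = quad(lambda e: c * (f(w * e)**2 - 1.0)**2
                  * np.exp(-0.5 * e**2), -np.inf, np.inf)[0]
        c2 = quad(lambda e: c * (f(w * e)**2 - 1.0)**2 * e**2
                  * np.exp(-0.5 * e**2), -np.inf, np.inf)[0]
        e_w = w_vec / w
        n = len(w_vec)
        return c1 * np.eye(n) + (c2 - c1) * np.outer(e_w, e_w)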
Theorem 3. The Jeffreys prior is given by
$$\sqrt{|G(w)|} = \sqrt{ c_2(w)\, \{ c_1(w) \}^{n-1} }. \qquad (27)$$

7 The natural gradient for blind separation of mixed signals
Let $s = (s_1, \ldots, s_n)$ be $n$ source signals which are $n$ independent random variables. We assume that their $n$ mixtures $x = (x_1, \ldots, x_n)$,
$$x = A s, \qquad (28)$$
are observed. Here, $A$ is a matrix. When $s$ is a time series, we observe $x(1), \ldots, x(t)$. The problem of blind separation is to estimate $W = A^{-1}$ adaptively from $x(t)$, $t = 1, 2, 3, \ldots$, without knowing $s(t)$ or $A$. We can then recover the original $s$ by
$$y = W x \qquad (29)$$
=
when W = A-I. Let W E Gl(n), that is a nonsingular n x n-matrix, and ?>(W)
be a scalar function. This is given by a measure of independence such as ?>(W) =
f{ L[P(y);p(y?)' which is represented by the expectation of a loss function . We define
the natural gradient of ?>(W).
Now we return to our manifold $\mathrm{GL}(n)$ of matrices. It has a Lie group structure: any $A \in \mathrm{GL}(n)$ maps $\mathrm{GL}(n)$ to $\mathrm{GL}(n)$ by $W \to W A$. We impose the requirement that the Riemannian structure be invariant under this operation. We can then prove that the natural gradient in this case is
$$\tilde{\nabla} \varphi = \nabla \varphi\, W' W. \qquad (30)$$
The natural gradient works surprisingly well for adaptive blind signal separation; see Amari et al. [1996] and Cardoso and Laheld [1996].
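Applied to a standard independence measure, equation (30) leads to the well-known multiplicative update sketched below in Python; the nonlinearity g and the learning rate are common illustrative choices from the blind-separation literature, not values prescribed by this paper.

    import numpy as np

    def natural_grad_ica_step(W, x, eta=0.01, g=np.tanh):
        """One natural-gradient blind-separation update:
        dW = eta * (I - g(y) y') W, with y = W x (eq. 29)."""
        y = W @ x
        n = len(y)
        return W + eta * (np.eye(n) - np.outer(g(y), y)) @ W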
References
[1] S. Amari. Theory of adaptive pattern classifiers. IEEE Trans., EC-16, No. 3, 299-307, 1967.
[2] S. Amari. Differential-Geometrical Methods in Statistics. Lecture Notes in Statistics, vol. 28, Springer, 1985.
[3] S. Amari. Information geometry of the EM and em algorithms for neural networks. Neural Networks, 8, No. 9, 1379-1408, 1995.
[4] S. Amari, A. Cichocki and H. H. Yang. A new learning algorithm for blind signal separation. In NIPS'95, vol. 8, 1996, MIT Press, Cambridge, Mass.
[5] S. Amari, K. Kurata and H. Nagaoka. Information geometry of Boltzmann machines. IEEE Trans. on Neural Networks, 3, 260-271, 1992.
[6] J. F. Cardoso and B. Laheld. Equivariant adaptive source separation. To appear in IEEE Trans. on Signal Processing, 1996.
[7] T. M. Heskes and B. Kappen. Learning processes in neural networks. Physical Review A, 44, 2718-2726, 1991.
[8] D. Rumelhart, G. E. Hinton and R. J. Williams. Learning internal representations. In Parallel Distributed Processing: Explorations in the Microstructure of Cognition, 1, Foundations, MIT Press, Cambridge, MA, 1986.
[9] H. Sompolinsky, N. Barkai and H. S. Seung. On-line learning of dichotomies: algorithms and learning curves. In Neural Networks: The Statistical Mechanics Perspective, Proceedings of the CTP-PBSRI Joint Workshop on Theoretical Physics, J.-H. Oh et al., eds., 105-130, 1995.
An Analog Implementation of the
Constant Statistics Constraint
For Sensor Calibration
John G. Harris and Yu-Ming Chiang
Computational Neuro-Engineering Laboratory
Department of Computer and Electrical Engineering
University of Florida
Gainesville, FL 32611
Abstract
We use the constant statistics constraint to calibrate an array of
sensors that contains gain and offset variations. This algorithm has
been mapped to analog hardware and designed and fabricated with
a 2um CMOS technology. Measured results from the chip show that
the system achieves invariance to gain and offset variations of the
input signal.
1 Introduction
Transistor mismatches and parameter variations cause unavoidable nonuniformities from sensor to sensor. A one-time calibration procedure is normally used to counteract the effect of these fixed variations between components. Unfortunately, many of these variations fluctuate with time, either with operating point (such as data-dependent variations) or with external conditions (such as temperature). Calibrating these sensors only once at the "factory" is therefore not sufficient: much more frequent calibration is required. The sensor calibration problem becomes more challenging as
an increasing number of different types of sensors are integrated onto VLSI chips at
higher and higher integration densities. Ullman and Schechtman studied a simple gain adjustment algorithm, but their method provides no mechanism for canceling additive offsets [10]. Scribner has addressed this nonuniformity correction problem in software using a neural network technique, but it would be difficult to integrate this complex solution into analog hardware [9]. A number of researchers have studied sensors that output the time derivative of the signal [9][4]. A simple time derivative
cancels any additive offset in the signal, but it also removes the DC component and most of the low-frequency temporal information present. The offset-correction method proposed by this paper is, in effect, a time derivative with an extremely long time constant, thereby preserving much of the low-frequency information present in the signal. Moreover, even if an ideal time-derivative approximation is used to cancel out additive offsets, the standard-deviation process described in this paper can still be used to factor out gain variations.
We hope to obtain some clues for sensory adaptation from neurobiological systems
which possess a tremendous ability to adapt to the surrounding environment at
multiple time-scales and at multiple stages of processing. Consider the following
experiments:
• After staring at a single curved line for ten minutes, human subjects report that the amount of curvature perceived appears to decrease. Immediately after training, the subjects were shown a straight line and perceived it as slightly curved in the opposite direction [5].
• After staring long enough at an object in continuous motion, the motion seems to decrease with time. Immediately after adaptation, subjects perceive motion in the opposite direction when looking at stationary objects. This experiment is called the waterfall effect [2].
• Colors tend to look less saturated over time. Color after-images are perceived as containing exactly the opponent colors of the original scene [1].
Though the purpose of these biological adaptation mechanisms is not clear, some theories suggest that they allow for fine-tuning the visual system through long-term averaging of measured visual parameters [10]. We will apply such continuous-calibration procedures to VLSI sensor calibration.
The real-world variable x(t) is transduced by a nonlinear response curve into a
measured variable y(t). For a single operating point, the linear approximation can
be written as:
y(t) = ax(t) + b
(1)
with a and b being the multiplicative gain and additive offset respectively. The gain
and offset values vary from pixel to pixel and may vary slowly over time. Current
infrared focal plane arrays (IRFPAs) are limited by their inability to calibrate out component variations [3]. Typically, off-board digital calibration is used to correct nonuniformities in these detector arrays: special calibration images are used to calibrate the system at startup. One-time calibration procedures such as these do not take other operating points into account and will fail to recalibrate for any drift
in the parameters.
2 Implementing Natural Constraints
A continuous calibration system must take advantage of natural constraints available during the normal operation of the sensors. One theory holds that biological systems adapt to the long-term average of the stimulus. For example, the three psychophysical adaptation effects mentioned above (curvature, motion and color) may rely on the following constraints:
• The average line is straight.
• The average motion is zero.
• The average color is gray.
The system adapts over time in the direction of this average, where the average
must be taken over a very long time: from minutes to hours. We use two additional
constraints for offset/gain normalization, namely:
• The average pixel intensities are identical.
• The variances of the input for each pixel are all identical.
Each of these constraints assumes that the photoarray is periodically moving in the real world and that the average statistics each pixel sees should be constant when averaged over a very long time. In pathological situations where humans or machines are forced to stare at a single static scene for a long time, this assumption is violated.
We estimate the time-varying mean and variance by using an exponentially shaped window into the past. The equations for the mean and variance are
$$m(t) = \frac{1}{\tau} \int_0^{\infty} y(t - \Delta)\, e^{-\Delta/\tau}\, d\Delta \qquad (2)$$
and
$$s(t) = \frac{1}{\tau} \int_0^{\infty} | y(t - \Delta) - m(t - \Delta) |\, e^{-\Delta/\tau}\, d\Delta. \qquad (3)$$
The $m(t)$ and $s(t)$ in Equations 2 and 3 can be expressed as low-pass filters with inputs $y(t)$ and $|y(t) - m(t)|$, respectively. To simplify the hardware implementation further, we chose the L1 (absolute value) definition of variance instead of the more usual L2 definition. The L1 definition is an equally acceptable definition of signal variation in terms of the complete calibration system. Using this definition, no squares or square roots need be calculated. An added benefit of the L1 norm is that it provides robustness to outliers in the estimation.
A zero-mean, unity-variance¹ signal can then be produced with the following shift/normalization formula:
$$x(t) = \frac{y(t) - m(t)}{s(t)}. \qquad (4)$$
Equation 2, Equation 3 and Equation 4 constitute a new algorithm for continuously calibrating systems with gain and offset variations. Note that without additional a priori knowledge about the values of the gains and offsets, it is impossible to recover the true value of the signal $x(t)$ given an infinite history of $y(t)$. This is an ill-posed problem even with fixed but unknown gain and offset parameters for each sensor. All that can be done is to calibrate each sensor output to have zero offset and unity variance. After calibration, all sensors would therefore have the same offset and variance when averaged over a long time. The fundamental assumption embedded

¹For simplicity, the signal s(t) will be called the variance estimate throughout the rest of this paper, even though technically s(t) is neither the variance nor the standard deviation.
Figure 1: Left: block diagram of the continuous-time calibration system. Right: schematic of the divider circuit.
in this algorithm is that each sensor measures real-world quantities with the same statistical properties (e.g., mean and variance). For example, this would mean that all pixels in a camera should eventually see the same average intensity when integrated over a long enough time. This assumption leads to other system-level constraints; in this case, the camera must be periodically moving.
We have successfully demonstrated this algorithm in software for the case of nonuniformity (gain and offset) correction of images [6]. Since there may be potentially thousands of sensors per chip, it is desirable to build the calibration circuitry using subthreshold analog MOS technology to achieve ultra-low power consumption [8]. The next section describes the analog VLSI implementation of this algorithm.
3 Continuous-time calibration circuit
The block diagram of the continuous-time gain and offset calibration circuit is shown in Figure 1a. This system includes three building blocks: a mean estimation circuit, a variance estimation circuit and a divider circuit. As is shown, the mean of the signal can easily be extracted by an RC low-pass filter circuit.
Figure 2 shows the schematic of the variance estimation circuit. A full-wave rectifier [8] operating in the subthreshold region is used to obtain the absolute value of the difference between the input and its mean. In the linear region, the current $I_{out}$ is proportional to $|y(t) - m(t)|$. As indicated in Equation 3, $I_{out}$ has to be low-pass filtered to obtain $s(t)$. In Figure 2, transconductance amplifiers $A_3$, $A_4$ and capacitor $C_2$ are used to form a current-mode low-pass filter. For signals in the linear region, we can derive the Laplace transform of $V_1$ as
$$V_1(s) = \frac{R}{R C_2 s + 1}\, I_{out}(s), \qquad (5)$$
which is a first-order low-pass filter for $I_{out}$. The value of $R$ is a function of several
Figure 2: Variance estimation circuit. The triangle symbols represent 5-transistor
transconductance amplifiers that output a current proportional to the difference of
their inputs.
fabrication constants and an adjustable bias current. Figure 3(a) shows the expected linear relationship between the measured variance s(t) and the peak-to-peak
amplitude of the sine-wave input.
The third building block in the calibration system is the divider circuit shown in Figure 1b. A fed-back multiplier is used to enforce the constraint that $y(t) - m(t)$ is proportional to $x(t)\, s(t)$, which results in a scaled version of Equation 4. The characteristics of the divider have been measured and are shown in Figure 3(b). With a fixed $V_{b6}$ and $m(t)$, we sweep $y(t)$ from $m(t) - 0.3$ V to $m(t) + 0.3$ V and measure the change of the output. A family of input/output characteristics with $s(t)$ = 20, 25, 30, 40, 50, 60 and 70 nA is shown in Figure 3(b). The divider circuit has been tested up to frequencies of 45 kHz.
The first version of the calibration circuit has been designed and fabricated in a
2-um CMOS technology. The chip includes the major parts of this calibration
circuit: the variance estimation circuit and divider circuit. In our initial design, the
mean estimation circuit, which is simply an RC low-pass filter, was built off-chip.
However, it can be easily integrated on-chip using a transconductance amplifier and
a capacitor.
The calibration results for a signal with gain and offset variations are shown in Figure 4. The input signal is a sine wave with a severe gain and offset jump, as shown at the top of Figure 4. The middle of Figure 4 illustrates the convergence of the variance estimate: it takes a short time for the circuit to converge after any change in the mean or variance of the input signal. At the bottom of Figure 4, we show the calibrated signal produced by the chip. The output eventually converges to a zero-mean, constant-height sine wave, independent of the values of the DC offset and amplitude of the input sine wave. Additional experiments have shown that with the input amplitude changing from 20 mV to 90 mV, the measured output amplitude varies by less than 3 mV. Similarly, when the DC offset is varied from 1.5 V to 3.5 V, the amplitude of the output varies by less than 5 mV. These
Figure 3: Left (a): characteristics of the measured variance s(t) vs. peak-to-peak input voltage. Right (b): characteristics of the divider with different values of s(t).
results demonstrate that the system is invariant to gain and offset variations of the input.
4 Conclusions
The calibration circuit has been demonstrated with time constants on the order of 100 ms. In many applications, much longer time constants will be necessary, and these cannot be reached with on-chip capacitors even with subthreshold CMOS operation. We expect to use floating-gate techniques, where essentially arbitrarily long time constants can be achieved. Mead has demonstrated a novel adaptive silicon retina that requires UV light for adaptation to occur [7]; this adaptive retina implemented the constant average brightness constraint. The unoptimized layout area of one of our calibration circuits is about 250 x 300 um^2 in 2-um CMOS technology. A future challenge will be to reduce this area and replace the large on-chip capacitors with floating gates.
Acknowledgments
The authors would like to acknowledge an NSF CAREER Award and Office of
Naval Research contract #N00014-94-1-0858.
References
[1] M. Akita, C. Graham, and Y. Hsia. Maintaining an absolute hue in the presence of different background colors. Vision Research, 4:539-556, 1964.
[2] V.R. Carlson. Adaptation in the perception of visual velocity. J. Exp. Psychol., 64(2):192-197, 1962.
[3] D.A. Scribner, M.R. Kruer and J.M. Killiany. Infrared focal plane array technology. Proc. IEEE, 79(1):66-85, 1991.
Figure 4: Calibrating a signal with offset and gain variations. Top: the input signal, y(t). Middle: the computed signal variance s(t). Bottom: the output signal, x(t).
[4] T. Delbrück. An electronic photoreceptor sensitive to small changes. In D. Touretzky, editor, Advances in Neural Information Processing Systems, Volume 1, pages 720-727. Morgan Kaufmann, Palo Alto, CA, 1989.
[5] J. Gibson. Adaptation, after-effect and contrast in the perception of curved lines. J. Exp. Psychol., 16:1-16, 1933.
[6] J.G. Harris. Continuous-time calibration of VLSI sensors for gain and offset variations. In SPIE International Symposium on Aerospace Sensing and Dual-Use Photonics: Smart Focal Plane Arrays and Focal Plane Array Testing, pages 23-33, Orlando, FL, April 1995.
[7] C. Mead. Adaptive retina. In C. Mead and M. Ismail, editors, Analog VLSI Implementation of Neural Systems, pages 239-246. Kluwer Academic Publishers, 1989.
[8] C. Mead. Analog VLSI and Neural Systems. Addison-Wesley, 1989.
[9] D.A. Scribner, K.A. Sarkady, M.R. Kruer, J.T. Caulfield, J.D. Hunt, M. Colbert, and M. Descour. Adaptive retina-like preprocessing for imaging detector arrays. In Proc. of the IEEE International Conference on Neural Networks, pages 1955-1960, San Francisco, CA, Feb. 1993.
[10] S. Ullman and G. Schechtman. Adaptation and gain normalization. Proc. R. Soc. Lond. B, 216:299-313, 1982.
TRAINING A 3-NODE NEURAL NETWORK
IS NP-COMPLETE
Avrim Blum*
MIT Lab. for Computer Science
Cambridge, Mass. 02139 USA
Ronald L. Rivest†
MIT Lab. for Computer Science
Cambridge, Mass. 02139 USA
ABSTRACT
We consider a 2-layer, 3-node, n-input neural network whose nodes
compute linear threshold functions of their inputs. We show that it
is NP-complete to decide whether there exist weights and thresholds
for the three nodes of this network so that it will produce output consistent with a given set of training examples. We extend the result
to other simple networks. This result suggests that those looking for
perfect training algorithms cannot escape inherent computational
difficulties just by considering only simple or very regular networks.
It also suggests the importance, given a training problem, of finding
an appropriate network and input encoding for that problem. It is
left as an open problem to extend our result to nodes with non-linear
functions such as sigmoids.
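To make the object of the result concrete, the following Python sketch evaluates the 2-layer, 3-node network in question: two hidden linear threshold units feeding one output threshold unit. The training problem asks whether weights and thresholds exist that make such a network agree with every training example; the code merely evaluates one candidate setting and is illustrative only.

    def threshold(weights, theta, inputs):
        """Linear threshold unit: fires iff the weighted sum reaches theta."""
        return 1 if sum(w * x for w, x in zip(weights, inputs)) >= theta else 0

    def net(w1, t1, w2, t2, w3, t3, x):
        """2-layer, 3-node network: two hidden units feed one output unit."""
        h1 = threshold(w1, t1, x)
        h2 = threshold(w2, t2, x)
        return threshold(w3, t3, [h1, h2])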
INTRODUCTION
One reason for the recent surge in interest in neural networks is the development of the "back-propagation" algorithm for training neural networks. The
ability to train large multi-layer neural networks is essential for utilizing neural
networks in practice, and the back-propagation algorithm promises just that.
In practice, however, the back-propagation algorithm runs very slowly, and the
question naturally arises as to whether there are necessarily intrinsic computational difficulties associated with training neural networks, or whether better
training algorithms might exist. This paper provides additional support for the
position that training neural networks is intrinsically difficult.
A common method of demonstrating a problem to be intrinsically hard is to
show the problem to be "NP-complete". The theory of NP-complete problems
is well-understood (Garey and Johnson, 1979), and many infamous problems, such as the traveling salesman problem, are now known to be NP-complete.
While NP-completeness does not render a problem totally unapproachable in
*Supported by an NSF graduate fellowship.
†This paper was prepared with support from NSF grant DCR-8607494, ARO Grant DAAL03-86-K-0171, and the Siemens Corporation.
practice, it usually implies that only small instances of the problem can be solved
exactly, and that large instances can at best only be solved approximately, even
with large amounts of computer time.
The work in this paper is inspired by Judd (Judd, 1987) who shows the following
problem to be NP-complete:
"Given a neural network and a set of training examples, does there
exist a set of edge weights for the network so that the network produces the correct output for all the training examples?"
Judd also shows that the problem remains NP-complete even if it is only required
that a network produce the correct output for two-thirds of the training examples,
which implies that even approximately training a neural network is intrinsically
difficult in the worst case. Judd produces a class of networks and training examples for those networks such that any training algorithm will perform poorly
on some networks and training examples in that class. The results, however,
do not specify any particular "hard network"-that is, any single network hard
for all algorithms. Also, the networks produced have a number of hidden nodes
that grows with the number of inputs, as well as a quite irregular connection
pattern.
We extend his result by showing that it is NP-complete to train a specific very
simple network, having only two hidden nodes and a regular interconnection
pattern. We also present classes of regular 2-layer networks such that for all
networks in these classes, the training problem is hard in the worst case (in
that there exists some hard sets of training examples). The NP-completeness
proof also yields results showing that finding approximation algorithms that
make only one-sided error or that approximate closely the minimum number
of hidden-layer nodes needed for a network to be powerful enough to correctly
classify the training data, is probably hard, in that these problems can be related
to other difficult (but not known to be NP-complete) approximation problems.
Our results, like Judd's, are described in terms of "batch"-style learning algorithms that are given all the training examples at once. It is worth noting that
training with an "incremental" algorithm that sees the examples one at a time
such as back-propagation is at least as hard. Thus the NP-completeness result
given here also implies that incremental training algorithms are likely to run
slowly.
Our results state that given a network of the classes considered, for any training
algorithm there will be some types of training problems such that the algorithm
will perform poorly as the problem size increases. The results leave open the
possibility that given a training problem that is hard for some network, there
might exist a different network and encoding of the input that make training
easy.
Figure 1: The three node neural network.
THE NEURAL NETWORK TRAINING PROBLEM
The multilayer network that we consider has n binary inputs and three nodes:
N1, N2, N3. All the inputs are connected to nodes N1 and N2. The outputs of hidden nodes N1 and N2 are connected to output node N3, which gives the
output of the network.
Each node Ni computes a linear threshold function fi on its inputs. If Ni has input x = (x1, ..., xm), then for some constants a0, ..., am,

    fi(x) = +1 if a0 + a1 x1 + ... + am xm > 0, and fi(x) = -1 otherwise.

The aj's (j >= 1) are typically viewed as weights on the incoming edges and a0 as the threshold.
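For concreteness, a minimal Python sketch of such a network (the helper names and any weights are illustrative, not from the paper):

    import numpy as np

    def ltf(a0, a, x):
        # Linear threshold function: +1 if a0 + a . x > 0, else -1.
        return 1 if a0 + np.dot(a, x) > 0 else -1

    def three_node_net(x, n1, n2, n3):
        # n1, n2, n3 are (a0, a) weight pairs for hidden nodes N1, N2
        # and output node N3; N3 sees only the two hidden outputs.
        h1 = ltf(*n1, x)
        h2 = ltf(*n2, x)
        return ltf(*n3, np.array([h1, h2]))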
The training algorithm for the network is given a set of training examples. Each
is either a positive example (an input for which the desired network output is +1)
or a negative example (an input for which the desired output is -1). Consider
the following problem. Note that we have stated it as a decision ("yes" or "no")
problem, but that the search problem (finding the weights) is at least equally
hard.
TRAINING A 3-NODE NEURAL NETWORK:
Given: A set of O(n) training examples on n inputs.
Question: Do there exist linear threshold functions
f1, f2, f3 for nodes N1, N2, N3
such that the network of figure 1 produces outputs consistent with the training
set?
Theorem:
Training a 3-node neural network is NP-complete.
We also show (proofs omitted here due to space requirements) NP-completeness
results for the following networks:
1. The 3-node network described above, even if any or all of the weights for
one hidden node are required to equal the corresponding weights of the
other, so possibly only the thresholds differ, and even if any or all of the
weights are forced to be from {+1, -1}.
2. Any k-hidden-node, for k bounded by some polynomial in n (e.g., k = n^2),
two-layer fully-connected network with linear threshold function nodes
where the output node is required to compute the AND function of its
inputs.
3. The 2-layer, 3-node n-input network with an XOR output node, if ternary
features are allowed.
In addition we show (proof omitted here) that any set of positive and negative
training examples classifiable by the 3-node network with XOR output node (for
which training is NP-complete) can be correctly classified by a perceptron with
O(n^2) inputs which consist of the original n inputs and all products of pairs of
the original n inputs (for which training can be done in polynomial-time using
linear programming techniques).
THE GEOMETRIC POINT OF VIEW
A training example can be thought of as a point in n-dimensional space, labeled
'+' or '-' depending on whether it is a positive or negative example. The points
are vertices of the n-dimensional hypercube. The zeros of the functions f1 and f2 for the hidden nodes can be thought of as (n - 1)-dimensional hyperplanes in this space. The planes P1 and P2 corresponding to the functions f1 and f2 divide the space into four quadrants according to the four possible pairs of outputs for nodes N1 and N2. If the planes are parallel, then one or two of the
quadrants is degenerate (non-existent). Since the output node receives as input
only the outputs of the hidden nodes N1 and N2, it can only distinguish between
points in different quadrants. The output node is also restricted to be a linear
function. It may not, for example, output "+1" when its inputs are (+1,+1)
and (-1, -1), and output "-1" when its inputs are (+1, -1) and (-1, +1).
So, we may reduce our question to the following: given O(n) points in {0,1}^n,
each point labeled '+' or '-', does there exist either
1. a single plane that separates the '+' points from the '-' points, or
2. two planes that partition the points so that either one quadrant contains
all and only '+' points or one quadrant contains all and only '-' points.
We first look at the restricted question of whether there exist two planes that
partition the points such that one quadrant contains all and only the '+' points.
This corresponds to having an "AND" function at the output node. We will call
this problem: "2-Linear Confinement of Positive Boolean Examples". Once we
have shown this to be NP-complete, we will extend the proof to the full problem
by adding examples that disallow the other possibilities at the output node.
Megiddo (Megiddo, 1986) has shown that for O(n) arbitrary '+' and '-' points
in n-dimensional Euclidean space, the problem of whether there exist two hyperplanes that separate them is NP-complete. His proof breaks down, however,
when one restricts the coordinate values to {0, 1} as we do here. Our proof
turns out to be of a quite different style.
SET SPLITTING
The following problem was proven to be NP-complete by Lovasz (Garey and
Johnson 1979).
SET-SPLITTING:
Given: A finite set S and a collection C of subsets Ci of S.
Question: Do there exist disjoint sets S1, S2 such that S1 ∪ S2 = S and, for all i, Ci ⊄ S1 and Ci ⊄ S2.
The Set-Splitting problem is also known as 2-non-Monotone Colorability. Our
use of this problem is inspired by its use by Kearns, Li, Pitt, and Valiant to
show that learning k-term DNF is NP-complete (Kearns et al. 1987) and the
style of the reduction is similar.
THE REDUCTION
Suppose we are given an instance of the Set-Splitting problem:
Create the following signed points on the n-dimensional hypercube {0,1}^n:
- Let the origin 0^n be labeled '+'.
- For each si, put a point labeled '-' at the neighbor to the origin that has a 1 in the ith bit; that is, at (0...010...0) with the 1 in position i. Call this point pi.
Figure 2: An example.
- For each cj = {sj1, ..., sjkj}, put a point labeled '+' at the location whose bits are 1 at exactly the positions j1, j2, ..., jkj; that is, at pj1 + ... + pjkj.

For example, let S = {s1, s2, s3}, C = {c1, c2}, c1 = {s1, s2}, c2 = {s2, s3}. So, we create '-' points at (0 0 1), (0 1 0), (1 0 0) and '+' points at (0 0 0), (1 1 0), (0 1 1) in this reduction (see Figure 2).
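A short Python sketch of this point construction (the function name is ours; a sketch of the reduction, not code from the paper):

    def set_splitting_to_points(n, subsets):
        # subsets: list of lists of element indices in {0, ..., n-1}.
        # Returns (point, label) pairs on the hypercube {0,1}^n.
        points = [([0] * n, '+')]                 # the origin is positive
        for i in range(n):                        # one negative neighbor per element
            p = [0] * n
            p[i] = 1
            points.append((p, '-'))
        for c in subsets:                         # one positive point per subset
            points.append(([1 if i in c else 0 for i in range(n)], '+'))
        return points

    # The example above: S = {s1, s2, s3}, c1 = {s1, s2}, c2 = {s2, s3}
    print(set_splitting_to_points(3, [[0, 1], [1, 2]]))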
Claim: The given instance of the Set-Splitting problem has a solution if and only if the
constructed instance of the 2-Linear Confinement of Positive Boolean Examples
problem has a solution.
Proof: (⇒)
Given S1 from the solution to the Set-Splitting instance, create the plane P1: a1x1 + ... + anxn = -1/2, where ai = -1 if si ∈ S1, and ai = n if si ∉ S1. Let the vectors a = (a1, ..., an) and x = (x1, ..., xn).
This plane separates from the origin exactly the '-' points corresponding to si ∈ S1 and no '+' points. Notice that for each si ∈ S1, a · pi = -1, and that for each si ∉ S1, a · pi = n. For each '+' point p, a · p > -1/2, since either p is the origin or else p has a 1 in a bit i such that si ∉ S1.

Similarly, create the plane P2 from S2.
(⇐)
Let S1 be the set of points separated from the origin by P1 and S2 be those points separated by P2. Place any points separated by both planes in either S1 or S2 arbitrarily. Sets S1 and S2 cover S since all '-' points are separated from the origin by at least one of the planes. Consider some cj = {sj1, ..., sjkj}
Figure 3: The gadget.
and the corresponding '-' points pj1, ..., pjkj. If, say, cj ⊆ S1, then P1 must separate all the pji from the origin. Therefore, P1 must separate pj1 + ... + pjkj from the origin. Since that point is the '+' point corresponding to cj, the '+' points are not all confined to one quadrant, contradicting our assumptions. So, no cj can be contained in S1. Similarly, no cj can be contained in S2. ∎
We now add a "gadget" consisting of 6 new points to handle the other possibilities at the output node. The gadget forces that the only way in which two
planes could linearly separate the '+' points from the '-' points would be to
confine the '+' points to one quadrant. The gadget consists of extra points and
three new dimensions. We add three new dimensions, xn+1, xn+2, and xn+3,
and put '+' points in locations (0...0 101), (0...0 011) and '-' points in locations (0...0 100), (0...0 010), (0...0 001), (0...0 111).
(See figure 3.)
The '+' points of this cube can be separated from the '-' points by appropriate settings of the weights of planes P1 and P2 corresponding to the three new dimensions. Given planes P1': a1x1 + ... + anxn = -1/2 and P2': b1x1 + ... + bnxn = -1/2 which solve a 2-Linear Confinement of Positive Boolean Examples instance in n dimensions, expand the solution to handle the gadget by setting P1 to

    a1x1 + ... + anxn + xn+1 + xn+2 - xn+3 = -1/2

and P2 to

    b1x1 + ... + bnxn - xn+1 - xn+2 + xn+3 = -1/2
(P1 separates the '-' point (0...0 001) from the '+' points and P2 separates the
other three '-' points from the '+' points). Also, notice that there is no way
in which just one plane can separate the '+' points from the '-' points in the
cube and also no way for two planes to confine all the negative points in one
quadrant. Thus we have proved the theorem.
CONCLUSIONS
Training a 3-node neural network whose nodes compute linear threshold functions is NP-complete.
An open problem is whether the NP-completeness result can be extended to
neural networks that use sigmoid functions. We believe that it can because the
use of sigmoid functions does not seem to alter significantly the expressive power
of a neural network. Note that Judd (Judd 1987), for the networks he considers,
shows NP-completeness for a wide variety of node functions including sigmoids.
References
James A. Anderson and Edward Rosenfeld, editors. Neurocomputing: Foundations of Research. MIT Press, 1988.
M. Garey and D. Johnson. Computers and Intractability: A Guide to the
Theory of NP-Completeness. W. H. Freeman, San Francisco, 1979.
J. Stephen Judd. Learning in networks is hard. In Proceedings of the First
International Conference on Neural Networks, pages 685-692, I.E.E.E.,
San Diego, California June 1987.
J. Stephen Judd. Neural Network Design and the Complexity of Learning.
PhD thesis, Computer and Information Science dept., University of Massachusetts, Amherst, Mass. U.S.A., 1988.
Michael Kearns, Ming Li, Leonard Pitt, and Leslie Valiant. On the learn ability
of boolean formulae. In Proceedings of the Nineteenth Annual ACM Symposium on Theory of Computing, pages 285-295, New York, New York,
May 1987.
Nimrod Megiddo. On The Complexity of Polyhedral Separability. Technical
Report RJ 5252, IBM Almaden Research Center, August 1986.
Marvin Minsky and Seymour Papert. Perceptrons: An Introduction to Computational Geometry. The MIT Press, 1969.
David E. Rumelhart and James 1. McClelland, editors. Parallel Distributed
Processing (Volume I: Foundations). MIT Press, 1986.
| 125 | (bag-of-words feature vector omitted) |
274 | 1,250 | Effective Training of a Neural Network
Character Classifier for Word Recognition
Larry Yaeger
Apple Computer
5540 Bittersweet Rd.
Morgantown, IN 46160
larryy@apple.com
Richard Lyon
Apple Computer
1 Infinite Loop, MS301-3M
Cupertino, CA 95014
lyon@apple.com
Brandyn Webb
The Future
4578 Fieldgate Rd.
Oceanside, CA 92056
brandyn@brainstorm.com
Abstract
We have combined an artificial neural network (ANN) character
classifier with context-driven search over character segmentation, word
segmentation, and word recognition hypotheses to provide robust
recognition of hand-printed English text in new models of Apple
Computer's Newton MessagePad. We present some innovations in the
training and use of ANNs as character classifiers for word recognition,
including normalized output error, frequency balancing, error emphasis,
negative training, and stroke warping. A recurring theme of reducing a
priori biases emerges and is discussed.
1 INTRODUCTION
We have been conducting research on bottom-up classification techniques based on
trainable artificial neural networks (ANNs), in combination with comprehensive but
weakly-applied language models. To focus our work on a subproblem that is tractable
enough to lead to usable products in a reasonable time, we have restricted the domain to
hand-printing, so that strokes are clearly delineated by pen lifts. In the process of
optimizing overall performance of the recognizer, we have discovered some useful
techniques for architecting and training ANNs that must participate in a larger recognition
process. Some of these techniques, especially the normalization of output error, frequency balancing, and error emphasis, suggest a common theme of significant value derived by reducing the effect of a priori biases in training data to better represent low frequency, low probability samples, including second and third choice probabilities.
There is ample prior work in combining low-level classifiers with various search strategies to provide integrated segmentation and recognition for writing (Tappert et al. 1990) and speech (Renals et al. 1992). And there is a rich background in the use of ANNs as classifiers, including their use as a low-level character classifier in a higher-level word recognition system (Bengio et al. 1995). But many questions remain regarding optimal
strategies for deploying and combining these methods to achieve acceptable (to a real
user) levels of performance. In this paper, we survey some of our experiences in
exploring refinements and improvements to these techniques.
2 SYSTEM OVERVIEW
Our recognition system, the Apple-Newton Print Recognizer (ANPR), consists of three
conceptual stages: Tentative Segmentation, Classification, and Context-Driven Search.
The primary data upon which we operate are simple sequences of (x,y) coordinate pairs,
plus pen-up/down information, thus defining stroke primitives. The Segmentation stage decides which strokes will be combined to produce segments (the tentative groupings of strokes that will be treated as possible characters) and produces a sequence of these
segments together with legal transitions between them. This process builds an implicit
graph which is then scored in the Classification stage and examined for a maximum
likelihood interpretation in the Search stage.
Figure 1: A Simplified Block Diagram of Our Hand-Print Recognizer.
3 TRAINING THE NEURAL NETWORK CLASSIFIER
Except for an integrated multiple-representations architecture (Yaeger et al. 1996) and the training specifics detailed here, a fairly standard multi-layer perceptron trained with BP provides the ANN character classifier at the heart of ANPR. Training an ANN character classifier for use in a word recognition system, however, has different constraints than
would training such a system for stand-alone character recognition. All of the techniques
below, except for the annealing schedule, at least modestly reduce individual character
recognition accuracy, yet dramatically increase word recognition accuracy.
A large body of prior work exists to indicate the general applicability of ANN technology
as classifiers providing good estimates of a posteriori probabilities of each class given the
input (Gish 1990, Richard and Lippmann 1991, Renals and Morgan 1992, Lippmann
1994, Morgan and Bourlard 1995, and others cited herein).
3.1 NORMALIZING OUTPUT ERROR
Despite their ability to provide good first choice a posteriori probabilities, we have found
that ANN classifiers do a poor job of representing second and third choice probabilities when trained in the classic way: minimizing mean squared error for target vectors that are all 0's, except for a single 1 corresponding to the target class. This results in erratic word recognition failures as the net fails to accurately represent the legitimate ambiguity between characters. We speculated that reducing the "pressure towards 0" relative to the "pressure towards 1" as seen at the output units, and thus reducing the large bias towards 0 in target vectors, might permit the net to better model these inherent ambiguities.
We implemented a technique for "normalizing output error" (NormOutErr) by reducing the BP error for non-target classes relative to the target class by a factor that normalizes
the total non-target error seen at a given output unit relative to the total target error seen
at that unit. Assuming a training set with equal representation of classes, this
normalization should then be based on the number of non-target versus target classes in a
typical training vector, or, simply, the number of output units (minus one). Hence for
non-target output units, we scale the error at each unit by a constant:
    e' = A e

where e is the error at an output unit, and A is defined to be:

    A = 1 / [d (Noutputs - 1)]

where Noutputs is the number of output units, and d is a user-adjusted tuning parameter,
typically ranging from 0.1 to 0.2. Error at the target output unit is unchanged. Overall,
this raises the activation values at the output units, due to the reduced pressure towards
zero, particularly for low-probability samples. Thus the learning algorithm no longer
converges to a minimum mean-squared error (MMSE) estimate of P(class|input), but to an MMSE estimate of a nonlinear function f(P(class|input), A) depending on the factor
A by which we reduced the error pressure toward zero.
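A minimal sketch of this error scaling in Python; the function and variable names are ours, not the paper's:

    import numpy as np

    def norm_out_err(y, target_index, d=0.15):
        # y: output activations; target vector is 1 at target_index, 0 elsewhere.
        n_outputs = len(y)
        A = 1.0 / (d * (n_outputs - 1))   # scale factor for non-target error
        t = np.zeros(n_outputs)
        t[target_index] = 1.0
        e = t - y                         # raw output error
        non_target = np.arange(n_outputs) != target_index
        e[non_target] *= A                # target-unit error is unchanged
        return e                          # error fed back through BP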
Using a simple version of the technique of Bourlard and Wellekens (1990), we worked
out what that resulting nonlinear function is. The net will attempt to converge to
minimize the modified quadratic error function
    <e^2> = p(1 - y)^2 + A(1 - p)y^2
by setting its output y for a particular class to
    y = p / (A - Ap + p)
where p = P(class|input), and A is as defined above. The inverse function is

    p = yA / (yA + 1 - y)
We verified the fit of this function by looking at histograms of character-level empirical
percentage-correct versus y, as in Figure 2.
Figure 2: Empirical p vs. y histogram for a net trained with A=0.11 (d=0.1), with the corresponding theoretical curve.
Note that the lower-probability samples have their output activations raised significantly,
relative to the 45-degree line that A = 1 yields.
The primary benefit derived from this technique is that the net does a much better job of
representing second and third choice probabilities, and low probabilities in general.
Despite a small drop in top choice character accuracy when using NormOutErr, we obtain
a very significant increase in word accuracy by this technique. Figure 3 shows an
exaggerated example of this effect, for an atypically large value of d (0.8), which overly
penalizes character accuracy; however, the 30% decrease in word error rate is normal for
this technique. (Note: These data are from a multi-year-old experiment, and are not
necessarily representative of current levels of performance on any absolute scale.)
Figure 3: Character and word error rates for two different values of NormOutErr (d). A value of 0.0 disables NormOutErr, yielding normal BP. The unusually high value of 0.8 (A=0.013) produces nearly equal pressures towards 0 and 1.
3.2 FREQUENCY BALANCING
Training data from natural English words and phrases exhibit very non-uniform priors for
the various character classes, and ANNs readily model these priors. However, as with
NormOutErr, we find that reducing the effect of these priors on the net, in a controlled way, and thus forcing the net to allocate more of its resources to low-frequency, low-probability classes is of significant benefit to the overall word recognition process. To this end, we explicitly (partially) balance the frequencies of the classes during training. We do this by probabilistically skipping and repeating patterns, based on a precomputed
repetition factor. Each presentation of a repeated pattern is "warped" uniquely, as
discussed later.
To compute the repetition factor for a class i, we first compute a normalized frequency of that class:

    Fi = Si / S_bar

where Si is the number of samples in class i, and S_bar is the average number of samples over all classes, computed in the obvious way:

    S_bar = (1/C) * sum over i = 1..C of Si

with C being the number of classes. Our repetition factor is then defined to be:

    Ri = (a / Fi)^b
with a and b being user controls over the amount of skipping vs. repeating and the degree
of prior normalization, respectively. Typical values of a range from 0.2 to 0.8, while b
ranges from 0.5 to 0.9. The factor a < 1 lets us do more skipping than repeating; e.g., for a = 0.5, classes with relative frequency equal to half the average will neither skip nor repeat; more frequent classes will skip, and less frequent classes will repeat. A value of 0.0 for b would do nothing, giving Ri = 1.0 for all classes, while a value of 1.0 would provide "full" normalization. A value of b somewhat less than one seems to be the best choice, letting the net keep some bias in favor of classes with higher prior probabilities.
This explicit prior-bias reduction is related to Lippmann's (1994) and Morgan and
Bourlard's (1995) recommended method for converting from the net's estimate of
posterior probability, p(class|input), to the value needed in an HMM or Viterbi search, p(input|class), which is to divide by p(class) priors. Besides eliminating potentially noisy estimates of low probability classes and a possible need for renormalization, our approach forces the net to actually learn a better model of these lower frequency classes.
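A small sketch of this scheme in Python; the probabilistic rounding of Ri into whole skips and repeats is our illustrative reading, not spelled out in the text:

    import numpy as np

    def repetition_factors(class_counts, a=0.5, b=0.7):
        counts = np.asarray(class_counts, dtype=float)
        f = counts / counts.mean()        # normalized frequencies Fi = Si / S_bar
        return (a / f) ** b               # Ri = (a / Fi)^b

    def n_presentations(r, rng):
        # Probabilistically round Ri: e.g., r = 0.3 presents the pattern with
        # probability 0.3; r = 2.3 presents it 2 or 3 times.
        base = int(r)
        return base + int(rng.random() < (r - base))

Note that with a = 0.5, a class at half the average frequency gets Ri = 1 and so neither skips nor repeats, matching the behavior described above.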
3.3 ERROR EMPHASIS
While frequency balancing corrects for under-represented classes, it cannot account for
under-represented writing styles. We utilize a conceptually related probabilistic skipping
of patterns, but this time for just those patterns that the net correctly classifies in its forward/recognition pass, as a form of "error emphasis", to address this problem. We define a correct-train probability (0.1 to 1.0) that is used as a biased coin to determine
whether a particular pattern, having been correctly classified, will also be used for the
backward/training pass or not. This only applies to correctly segmented, or "positive"
patterns, and misclassified patterns are never skipped.
Especially during early stages of training, we set this parameter fairly low (around 0.1),
thus concentrating most of the training time and the net's learning capability on patterns
that are more difficult to correctly classify. This is the only way we were able to get the net to learn to correctly classify unusual character variants, such as a 3-stroke "5" as written by only one training writer.
Variants of this scheme are possible in which misclassified patterns would be repeated, or
different learning rates would apply to correctly and incorrectly classified patterns. It is
also related to techniques that use a training subset, from which easily-classified patterns are replaced by randomly selected patterns from the full training set (Guyon et al. 1992).
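A sketch of the biased-coin gating (the function name is ours):

    def should_backprop(correctly_classified, correct_train_prob, rng):
        # Misclassified patterns always train; correctly classified ones
        # train only with probability correct_train_prob.
        if not correctly_classified:
            return True
        return rng.random() < correct_train_prob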
3.4 NEGATIVE TRAINING
Our recognizer's tentative segmentation stage necessarily produces a large number of
invalid segments, due to inherent ambiguities in the character segmentation process.
During recognition, the network must classify these invalid segments just as it would any valid segment, with no knowledge of which are valid or invalid. A significant increase in word-level recognition accuracy was obtained by performing negative training with these
invalid segments. This consists of presenting invalid segments to the net during training,
with all-zero target vectors. We retain control over the degree of negative training in two
ways. First is a negative-training factor (0.2 to 0.5) that modulates the learning rate
(equivalently by modulating the error at the output layer) for these negative patterns.
This reduces the impact of negative training on positive training, and modulates the
impact on characters that specifically look like elements of multi-stroke characters (e.g.,
I, 1, I, 0, 0, 0). Secondly, we control a negative-training probability (0.05 to 0.3), which
determines the probability that a particular negative sample will actually be trained on
(for a given presentation). This both reduces the overall impact of negative training, and
significantly reduces training time, since invalid segments are more numerous than valid segments. As with NormOutErr, this modification hurts character-level accuracy a little
bit, but helps word-level accuracy a lot.
3.5 STROKE WARPING
During training (but not during recognition), we produce random variations in stroke
data, consisting of small changes in skew, rotation, and x and y linear and quadratic
scalings. This produces alternate character forms that are consistent with stylistic
variations within and between writers, and induces an explicit aspect-ratio and rotation invariance within the framework of standard back-propagation. The amounts of each
distortion to apply were chosen through cross-validation experiments, as just the amount
needed to yield optimum generalization. We also examined a number of such samples by
eye to verify that they represent a natural range of variation. A small set of such
variations is shown in Figure 4.
Figure 4: A Few Random Stroke Warpings of the Same Original "m" Data.
Our stroke warping scheme is somewhat related to the ideas of Tangent Dist and Tangent
Prop (Simard et al. 1992, 1993), in terms of the use of predetermined families of
transformations, but we believe it is much easier to implement. It is also somewhat
distinct in applying transformations on the original coordinate data, as opposed to using
distortions of images. The voice transformation scheme of Chang and Lippmann (1995)
is also related, but they use a static replication of the training set through a small number
of transformations, rather than dynamic random transformations of infinite variety.
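A sketch of one such random warp applied to (x, y) stroke coordinates; it covers the linear skew, rotation, and scaling terms only (the paper also uses quadratic scalings), and the distortion ranges are illustrative placeholders rather than the cross-validated values:

    import numpy as np

    def warp_stroke(points, rng, max_rot=0.1, max_skew=0.1, max_scale=0.1):
        # points: (N, 2) array of (x, y) coordinates, roughly centered.
        th = rng.uniform(-max_rot, max_rot)            # small rotation (radians)
        sk = rng.uniform(-max_skew, max_skew)          # small x-skew
        sx = 1.0 + rng.uniform(-max_scale, max_scale)  # x scale
        sy = 1.0 + rng.uniform(-max_scale, max_scale)  # y scale
        rot = np.array([[np.cos(th), -np.sin(th)],
                        [np.sin(th),  np.cos(th)]])
        skew = np.array([[1.0, sk],
                         [0.0, 1.0]])
        scale = np.diag([sx, sy])
        return points @ (rot @ skew @ scale).T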
3.6 ANNEALING & SCHEDULING
Many discussions of back-propagation seem to assume the use of a single fixed learning
rate. We view the stochastic back-propagation process as more of a simulated annealing,
with a learning rate starting very high and decreasing only slowly to a very low value.
We typically start with a rate near 1.0 and reduce the rate by a decay factor of 0.9 until it
gets down to about 0.001. The rate decay factor is applied following any epoch for which
the total squared error increased on the training set. Repeated tests indicate that this
approach yields better results than low (or even moderate) initial learning rates, which we
speculate to be related to a better ability to escape local minima.
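A minimal sketch of this decay rule, assuming a per-epoch training error is available:

    def update_learning_rate(rate, epoch_error, prev_error, decay=0.9, floor=0.001):
        # Decay the rate only after an epoch whose total squared error increased.
        if prev_error is not None and epoch_error > prev_error:
            rate = max(rate * decay, floor)
        return rate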
In addition, we find that we obtain best overall results when we also allow some of our
many training parameters to change over the course of a training run. In particular, the
correct train probability needs to start out very low to give the net a chance to learn
unusual character styles, but it should finish up at 1.0 in order to not introduce a general
posterior probability bias in favor of classes with lots of ambiguous examples. We
typically train a net in four "phases" according to parameters such as in Figure 5.
    Phase | Epochs | Learning Rate | Correct Train Prob | Negative Train Prob
    ------|--------|---------------|--------------------|--------------------
      1   |   25   | 1.0 - 0.5     | 0.1                | 0.05
      2   |   25   | 0.5 - 0.1     | 0.25               | 0.1
      3   |   50   | 0.1 - 0.01    | 0.5                | 0.18
      4   |   30   | 0.01 - 0.001  | 1.0                | 0.3

Figure 5: A Typical Multi-Phase Schedule of Learning Rates and Other Parameters for Training a Character-Classifier Net.
4 DISCUSSION AND FUTURE DIRECTIONS
The normalization of output error, frequency balancing, and error emphasis network-training methods discussed previously share a unifying theme: Reducing the effect of a
priori biases in the training data on network learning significantly improves the network's
performance in an integrated recognition system, despite a modest reduction in the
network's accuracy for individual characters. Normalization of output error prevents
over-represented non-target classes from biasing the net against under-represented target classes. Frequency balancing prevents over-represented target classes from biasing the net against under-represented target classes. And error emphasis prevents over-represented writing styles from biasing the net against under-represented writing styles.
One could even argue that negative training eliminates an absolute bias towards properly
segmented characters, and that stroke warping reduces the bias towards those writing
styles found in the training data, although these techniques provide wholly new
information to the system as well.
Though we've offered arguments for why each of these techniques, individually, helps the
overall recognition process, it is unclear why prior-bias reduction, in general, should be
so consistently valuable. The general effect may be related to the technique of dividing
out priors, as is sometimes done to convert from p(class|input) to p(input|class). But we
also believe that forcing the net, during learning, to allocate resources to represent less
frequent sample types may be directly beneficial. In any event, it is clear that paying
attention to such biases and taking steps to modulate them is a vital component of
effective training of a neural network serving as a classifier in a maximum likelihood
recognition system.
We speculate that a method of modulating the learning rate at each output unit (based on a measure of its accuracy relative to the other output units) may be possible, and that
such a method might yield the combined benefits of several of these techniques, with
fewer user-controllable parameters.
Acknowledgements
This work was done in collaboration with Bill Stafford, Apple Computer, and Les Vogel,
Angel Island Technologies. We are also indebted to our many colleagues in the
connectionist community and at Apple Computer.
Some techniques in this paper have pending U.S. and foreign patent applications.
References
Y. Bengio, Y. LeCun, C. Nohl, and C. Burges, "LeRec: A NN/HMM Hybrid for On-Line
Handwriting Recognition," Neural Computation, Vol. 7, pp. 1289-1303, 1995.
H. Bourlard and C. J. Wellekens, "Links between Markov Models and Multilayer Perceptrons,"
IEEE Trans. PAMI, Vol. 12, pp. 1167-1178, 1990.
E. I. Chang and R. P. Lippmann, "Using Voice Transformations to Create Additional Training
Talkers for Word Spotting," in Advances in Neural Information Processing Systems 7, Tesauro et
al. (eds.), pp. 875-882, MIT Press, 1995.
H. Gish, "A Prohahilistic Appmach to Understanding and Training of Neural Network Classifiers,"
Proc. IEEE Inri. Conf on Acoustic.\?, Speech, and Signal Processing (Albuquerque, NM), pp. 13611364,1990.
I. Guyon, D. Henderson, P. Albrecht, Y. LeCun, and J. Denker, "Writer independent and writer
adaptive neural network for on-line character recognition," in From pixels to features III, S.
Impedovo (ed.), pp. 493-506, Elsevier, Amsterdam, 1992.
R. A. Jacobs, M. I. Jordan, S. J. Nowlan, and G. E. Hinton, "Adaptive Mixtures of Local Experts," Neural Computation, Vol. 3, pp. 79-87, 1991.
R. P. Lippmann, "Neural Networks, Bayesian a posteriori Probahilities, and Pattern Classification,"
pp. 83-104 in: From Statistics to Neural Networks: Theory and Pattern Recognition Applications,
V. Cherkassky, J. H. Friedman, and H. Wechsler (eds.), Springer-Verlag, Berlin, 1994.
N. Morgan and H. Bourlard, "Continuous Speech Recognition: An introduction to the hybrid HMM/connectionist approach," IEEE Signal Processing Mag., Vol. 13, no. 3, pp. 24-42, May 1995.
S. Renals and N. Morgan, "Connectionist Prohahility Estimation in HMM Speech Recognition,"
TR-92-081, International Computer Science Institute, 1992.
S. Renals, N. Morgan, M. Cohen, and H. Franco, "Connectionist Probability Estimation in the
Decipher Speech Recognition System," Proc. IEEE Intl. Conf. on Acoustics, Speech, and Signal Processing (San Francisco), pp. I-601 - I-604, 1992.
M. D. Richard and R. P. Lippmann, "Neural Network Classifiers Estimate Bayesian a Posteriori Probabilities," Neural Computation, Vol. 3, pp. 461-483, 1991.
P. Simard, B. Victorri, Y. LeCun and J. Denker, "Tangent Prop: A Formalism for Specifying
Selected Invariances in an Adaptive Network," in Advances in Neural Information Processing
Systems 4, Moody et al. (eds.), pp. 895-903, Morgan Kaufmann, 1992.
P. Simard, Y. LeCun and J. Denker, "Efficient Pattern Recognition Using a New Transformation Distance," in Advances in Neural Information Processing Systems 5, Hanson et al. (eds.), pp. 50-58, Morgan Kaufmann, 1993.
C. C. Tappert, C. Y. Suen, and T. Wakahara, "The State of the Art in On-Line Handwriting Recognition," IEEE Trans. PAMI, Vol. 12, pp. 787-808, 1990.
L. Yaeger, B. Webb, and R. Lyon, "Combining Neural Networks and Context-Driven Search for On-Line, Printed Handwriting Recognition" (unpublished).
PART VII
VISUAL PROCESSING
| 1250 | (bag-of-words feature vector omitted) |
275 | 1,251 | Multi-effect Decompositions
for Financial Data Modeling
Lizhong Wu & John Moody
Oregon Graduate Institute, Computer Science Dept.,
PO Box 91000, Portland, OR 97291
also at:
Nonlinear Prediction Systems,
PO Box 681, University Station, Portland, OR 97207
Abstract
High frequency foreign exchange data can be decomposed into three
components: the inventory effect component, the surprise information (news) component and the regular information component. The presence of the inventory effect and news can make analysis of trends due to the diffusion of information (regular information component) difficult.
We propose a neural-net-based independent component analysis to separate high frequency foreign exchange data into these three components.
Our empirical results show that our proposed multi-effect decomposition
can reveal the intrinsic price behavior.
1 Introduction
Tick-by-tick, high frequency foreign exchange rates are extremely noisy and volatile, but
they are not simply pure random walks (Moody & Wu 1996). The price movements are
characterized by a number of "stylized facts", including the following two properties:
(1) short term, weak oscillations on a time scale of several ticks and (2) erratic occurrence
of turbulence lasting from minutes to tens of minutes. Property (1) is most likely caused
by the market makers' inventory effect (O'Hara 1995), and property (2) is due to surprise
information, such as news, rumors, or major economic announcements. The price changes
due to property (1) are referred to as the inventory effect component, and the changes due
to property (2) are referred to as the surprise infonnation component. The price changes
due to other infonnation is referred to as the regular infonnation component.
1. This terminology is borrowed from the financial economics literature. For additional properties of high frequency foreign exchange price series, see (Guillaume, Dacorogna, Dave, Muller, Olsen & Pictet 1994).
Due to the inventory effect, price changes show strong negative correlations on short time
scales (Moody & Wu 1995). Because of the surprise information effect, distributions of price changes are non-normal (Mandelbrot 1963). Since both the inventory effect and the surprise information effect are short term and temporary, their corresponding price components are independent of the fundamental price changes. However, their existence will seriously affect data analysis and modeling (Moody & Wu 1995). Furthermore, the most reliable component of price changes, for forecasting purposes, is the long term trend.
The presence of high frequency oscillations and short periods of turbulence makes it difficult
to identify and predict the changes in such trends, if they occur.
In this paper, we propose a novel approach with the following price model:
    q(t) = c1 p1(t) + c2 p2(t) + c3 p3(t) + e(t).    (1)

In this model, q(t) is the observed price series and p1(t), p2(t) and p3(t) correspond respectively to the regular information component, the surprise information component and the inventory effect component. p1(t), p2(t) and p3(t) are mutually independent and may individually be either iid or correlated. e(t) is process noise, and c1, c2 and c3 are scale constants. Our goal is to find p1(t), p2(t) and p3(t) given q(t).
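A toy simulation of model (1) in Python; every component process, scale, and parameter below is an illustrative stand-in, not an estimate from the data:

    import numpy as np

    rng = np.random.default_rng(0)
    T = 1000
    p1 = np.cumsum(rng.normal(0, 0.01, T))      # slow trend: regular information
    jumps = np.where(rng.random(T) < 0.01, rng.normal(0, 0.5, T), 0.0)
    p2 = np.cumsum(jumps)                       # sparse bursts: surprise information
    p3 = rng.normal(0, 0.05, T)                 # short-term bounce: inventory effect
    e = rng.normal(0, 0.01, T)                  # process noise
    q = p1 + p2 + p3 + e                        # observed price, eq. (1), c1=c2=c3=1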
The outline of the paper is as follows. We describe our approach for multi-effect decomposition in Section 2. In Section 3, we analyze the decomposed price components obtained
for the high frequency foreign exchange rates and characterize their stochastic properties.
We conclude and discuss the potential applications of our multi-effect decomposition in
Section 4.
2 Multi-effect Decomposition
2.1 Independent Source Separation
The task of decomposing the observed price quotes into a regular information component, a surprise information component and an inventory effect component can be exactly fitted
into the framework of independent source separation. Independent source separation can
be described as follows:
Assume that X = {xi, i = 1, 2, ..., n} are the sensor outputs which are some superposition of unknown independent sources S = {si, i = 1, 2, ..., m}. The task of independent source separation is to find a mapping Y = f(X), so that Y ≈ AS, where A is an m x m matrix in which each row and column contains only one non-zero element.
Approaches to separate statistically-independent components in the inputs include
- Blind source separation (Jutten & Herault 1991),
- Information maximization (Linsker 1989), (Bell & Sejnowski 1995),
- Independent component analysis (Comon 1994), (Amari, Cichocki & Yang 1996),
- Factorial coding (Barlow 1961).
All of these approaches can be implemented by artificial neural networks. The network
architectures can be linear or nonlinear, multi-layer perceptrons, recurrent networks or other
context sensitive networks (Pearlmutter & Parra 1997). We can choose a training criterion to minimize the energy in the output units, to maximize the information transferred in
the network, to reduce the redundancies between the outputs, or to use the Edgeworth
expansion or Gram-Charlier expansion of a probability distribution, which leads to an
analytic expression of the entropy in terms of measurable higher order cumulants.
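As one concrete instance, here is a sketch of the natural-gradient ICA update of Amari, Cichocki & Yang (1996); the tanh nonlinearity and learning rate are illustrative choices:

    import numpy as np

    def ica_step(W, x_batch, lr=0.01):
        # W: (m, m) unmixing matrix; x_batch: (m, T) observations.
        y = W @ x_batch
        g = np.tanh(y)              # odd nonlinearity for super-Gaussian sources
        T = x_batch.shape[1]
        dW = (np.eye(W.shape[0]) - (g @ y.T) / T) @ W   # natural gradient
        return W + lr * dW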
Figure 1: System diagram of multi-effect decomposition for high frequency foreign exchange rates. q(t) are original price quotes, ri(t) are the reference inputs, and pi(t) are the decomposed components.
For our price decomposition problem, the non-Gaussian nature of price series requires that
the transfer function of the decomposition system be nonlinear. In general, the nonlinearities
in the transfer function are able to pick up higher order moments of the input distributions
and perform higher order statistical redundancy reduction between outputs.
2.2 Reference input selection
In traditional approaches to blind source separation, nothing is assumed to be known about
the inputs, and the systems adapt on-line and without a supervisor. This works only if the
number of sensors is not less than the number of independent sources. If the number of
sensors is less than that of sources, the sources can, in theory, be separated into disjoint
groups (Cao & Liu 1996). However, the problem is ill-conditioned for most of the above
practical approaches which only consider the case where the number of sensors is equal to
the number of sources.
In our task to decompose the multiple components of price quotes, the problem can be
divided into two cases. If the prices are sampled at regular intervals, we can use price
quotes observed in different markets, and have the number of sensors be equal to the
number of price components. However, in the high frequency markets, the price quotes
are not regularly spaced in time. Price quotes from different markets will not appear at the
same time, so we cannot apply the price quotes from different markets to the system. In
this case, other reference inputs are needed.
Motivated by the use of reference inputs for noise canceling (Widrow, Glover, McCool,
Kaunitz, Williams, Hearn, Zeidler, Dong & Goodlin 1975), we generate three reference
inputs from original price quotes. They are the estimates of the three desired components.
In the following, we briefly describe our procedure for generating the reference inputs.
By modeling the price quotes using a "True Price" state space model (Moody & Wu 1996)
    q(t) = r1(t) + r3(t),    (2)

where r1(t) is an estimate of the information component (True Price) and r3(t) is an estimate of the inventory effect component (additive noise), and by assuming that the True Price r1(t) is a fractional Brownian motion (Mandelbrot & Van Ness 1968), we can estimate r1(t) and r3(t) with given q(t) (Moody & Wu 1996) as

    r1(t) = sum over m,n of S(m, theta) Q_n^m psi_n^m(t),    (3)

    r3(t) = q(t) - r1(t),    (4)
Figure 2: Multi-effect decompositions for two segments of the DEM/USD (log prices)
extracted from September 1995. The three panels in each segment display the observed
prices (the dotted curve in upper panel), the regular information component (solid curve
in upper panel), the surprise information component (mid panel) and the inventory effect
component (lower panel).
where psi_n^m(t) is an orthogonal wavelet function, Q_n^m is the coefficient of the wavelet transform of q(t), m is the index of the scales and n is the time index of the components in the wavelet transform, S(m, theta) is a smoothing function, and its parameters can be estimated using the EM algorithm (Wornell & Oppenheim 1992).
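A sketch of this style of wavelet-domain smoothing using PyWavelets; the fixed per-scale attenuation below is an illustrative stand-in for the EM-estimated smoothing function S(m, theta):

    import numpy as np
    import pywt

    def wavelet_smooth(q, wavelet='db4', level=5):
        coeffs = pywt.wavedec(q, wavelet, level=level)
        # Attenuate detail coefficients more strongly at finer scales.
        smoothed = [coeffs[0]] + [c * (0.5 ** (i + 1))
                                  for i, c in enumerate(coeffs[1:])]
        r1 = pywt.waverec(smoothed, wavelet)[:len(q)]   # estimate of True Price
        r3 = q - r1                                     # additive-noise estimate
        return r1, r3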
We then estimate the surprise information component as the residual between the information component and its moving average:
    r2(t) = r1(t) - s(t)    (5)

s(t) is an exponential moving average of r1(t) and

    s(t) = (1 + a) r1(t) - a s(t - 1)    (6)
where a is a factor. Although it can be optimized based on the training data, we set
a = -0.9 in our current work.
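A sketch of equations (5)-(6), using a = -0.9 as in the text:

    import numpy as np

    def surprise_reference(r1, a=-0.9):
        s = np.empty_like(r1)
        s[0] = r1[0]
        for t in range(1, len(r1)):
            s[t] = (1 + a) * r1[t] - a * s[t - 1]   # EMA, eq. (6)
        return r1 - s                               # residual r2(t), eq. (5)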
Our system diagram for multi-effect decomposition is shown in Figure 1. Using multi-scale decomposition Eqn. (3) and smoothing techniques Eqn. (6), we obtain three reference
inputs. We can then separate the reference inputs into three independent components via
independent component analysis using an artificial neural network. Figure 2 presents multi-effect decompositions for two segments of the DEM/USD rates. The first segment contains
some impulses, and the corresponding surprise information component is able to catch
such volatile movements. The second segment is basically down trending, so its surprise
information component is comparatively flat.
3 Empirical Analysis

3.1 Mutually Independent Analysis
Mutual independence of the variables is satisfied if the joint probability density function equals the product of the marginal densities or, equivalently, if the second (log) characteristic function splits into the sum of marginal characteristic functions: g(x) = Σ_{i=1}^{n} g_i(x_i). Taking the Taylor expansion of both sides of this equation, the coefficients of products between different variables x_i on the left-hand side must be zero, since no such terms appear on the right-hand side.
Table 1: Comparisons between the correlation coefficients ρ (normalized) and the cross-cumulants Γ (unnormalized) of order 4 before and after independent component analysis (ICA). The DEM/USD quotes for September 1995 are divided into 147 subsets of 1024 ticks. The results presented here are the median values. The last column is the absolute ratio of the value before ICA to that after ICA. We note that all ratios are greater than 1, indicating that after ICA the components become more independent.
Component pair    Quantity   Before ICA   After ICA   Absolute ratio
p1(t) ~ p2(t)     ρ_12       0.56         0.14        4.1
                  Γ_13       2.7e-14      7.8e-17     342.2
                  Γ_22       -5.6e-15     9.2e-16     6.0
                  Γ_31       2.0e-11      1.3e-13     148.5
p1(t) ~ p3(t)     ρ_13       0.15         0.03        4.7
                  Γ_13       2.1e-15      1.6e-17     128.9
                  Γ_22       -2.0e-15     -4.5e-16    4.5
                  Γ_31       5.9e-12      6.9e-14     84.5
p2(t) ~ p3(t)     ρ_23       0.17         0.04        4.3
                  Γ_13       9.1e-16      -5.0e-19    1806.0
                  Γ_22       1.2e-15      4.9e-17     24.3
                  Γ_31       3.6e-15      3.0e-17     119.6
We observe the cross-cumulants of order 4:

    Γ_13 = M_13 - 3 M_02 M_11,                                     (7)
    Γ_22 = M_22 - M_20 M_02 - 2 M_11^2,                            (8)
    Γ_31 = M_31 - 3 M_20 M_11,                                     (9)
where M_kl = E{x_i^k x_j^l} denotes the moment of order k + l. If x_i and x_j are independent, then their cross-cumulants must be zero (Comon 1994). Table 1 compares the cross-cumulants before and after independent component analysis (ICA) for the DEM/USD in September 1995. For reference, the correlation coefficients before and after ICA are also listed in the table. We see that after ICA the components have become less correlated and thus more independent.
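Under our reading of Eqns (7)-(9), the statistics of Table 1 can be computed directly from two centered component series; the sketch below is illustrative, and the function name is ours.

```python
import numpy as np

def cross_stats(x, y):
    """Correlation and the order-4 cross-cumulants of Eqns (7)-(9)."""
    x = x - x.mean()
    y = y - y.mean()
    M = lambda k, l: np.mean(x**k * y**l)       # moment M_kl = E{x^k y^l}
    gamma13 = M(1, 3) - 3 * M(0, 2) * M(1, 1)   # Eqn (7)
    gamma22 = M(2, 2) - M(2, 0) * M(0, 2) - 2 * M(1, 1) ** 2   # Eqn (8)
    gamma31 = M(3, 1) - 3 * M(2, 0) * M(1, 1)   # Eqn (9)
    rho = M(1, 1) / np.sqrt(M(2, 0) * M(0, 2))  # normalized correlation
    return rho, gamma13, gamma22, gamma31
```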
3.2 Autocorrelation Analysis
Figure 3 depicts the autocorrelation functions of the changes in individual components
and compares them to the original returns. We compute the short-run autocorrelations for
the lags up to 50. Figure 3 gives the means and standard deviations for September 1995.
From the figure, we can see that both the inventory effect component and the original
returns show very similar autocorrelation functions, which are dominated by the significant
negative, first-order autocorrelations. The mean values for the other orders are basically
equal to zero. The autocorrelations of the regular information component and the surprise
information component show positive correlations except at first order. These non-zero
autocorrelations are hidden by noise in the original series. The autocorrelation function
of the surprise information component decays faster than that of the regular information
component. On average, it is below the 95% confidence band for lags larger than 20 ticks.
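A minimal numpy sketch of the short-run autocorrelation computation, with the white-noise 95% band of Figure 3 approximated as ±1.96/sqrt(N); the series used here is synthetic and the names are ours, purely for illustration.

```python
import numpy as np

def acf(x, max_lag=50):
    """Sample autocorrelations of a series, lags 1..max_lag."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    denom = np.dot(x, x)
    return np.array([np.dot(x[:-k], x[k:]) / denom
                     for k in range(1, max_lag + 1)])

# 95% confidence band under a white-noise null, as in Figure 3.
returns = np.diff(np.random.default_rng(1).standard_normal(1024).cumsum())
band = 1.96 / np.sqrt(len(returns))
print(np.abs(acf(returns)) > band)   # which lags look significant
```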
The above autocorrelation analysis suggests the following. (1) Price changes due to the information effects are slightly trending on tick-by-tick time scales. The trend in the surprise information component is shorter term than that in the regular information component.
Figure 3: Comparison of autocorrelation functions of the changes in the original observed prices (upper-left panel), the inventory effect component (lower-left panel), the regular information component (upper-right panel) and the surprise information component (lower-right panel). The results presented are means and standard deviations, and the horizontal dotted lines represent the 95% confidence band. The DEM/USD in September 1995 is divided into 293 subsets of 1024 ticks with an overlap of 512 ticks.
(2) The autocorrelation function of the original returns reflects only the price changes due to the inventory effect. This further confirms that the existence of short-term memory can mislead the analysis of dependence on longer time scales. Subsequently, we can see the usefulness of the multi-effect decomposition. Our empirical results should be viewed as preliminary, since they may depend upon the choice of True Price model. Additional studies are ongoing.
4 Conclusion and Discussion
We have developed a neural-net-based independent component analysis (ICA) for the multi-effect decomposition of high frequency financial data. Empirical results with foreign exchange rates have demonstrated that the decomposed components are mutually independent. The obtained regular information component has recovered the trending behavior of the intrinsic price movements.
Potential applications for multi-effect decompositions include:
(1) Outlier detection and filtering: Filtering techniques for removing various noisy effects and identifying long term trends have been widely studied (see, for example, Assimakopoulos (1995)). Multi-effect decompositions provide us with an alternative approach. As demonstrated in Section 3, the regular information component can, in most cases, catch relatively stable and longer term trends originally embedded in the price quotes.
(2) Devolatilization: Price series are heteroscedastic (Bollerslev, Chou & Kroner 1992). Devolatilization has been widely studied (see, for example, Zhou (1995)). The regular information component obtained from our multi-effect decomposition appears less volatile and, furthermore, its volatility changes more smoothly compared to the original prices.
(3) Mixture of local experts modeling: In most cases, one might be interested in only stable, long term trends of price movements. However, the surprise information and inventory effect components are not totally useless. By decomposing the price series into three mutually independent components, the prices can be modeled by a mixture of local experts (Jacobs, Jordan & Barto 1990), and better modeling performances can be expected.
References
Amari, S., Cichocki, A. & Yang, H. (1996), A new learning algorithm for blind signal separation, in D. Touretzky, M. Mozer & M. Hasselmo, eds, 'Advances in Neural Information Processing Systems 8', MIT Press: Cambridge, MA.
Assimakopoulos, V. (1995), 'A successive filtering technique for identifying long-term trends', Journal of Forecasting 14, 35-43.
Barlow, H. (1961), Possible principles underlying the transformation of sensory messages, in W. Rosenblith, ed., 'Sensory Communication', MIT Press: Cambridge, MA, pp. 217-234.
Bell, A. & Sejnowski, T. (1995), 'An information-maximization approach to blind separation and blind deconvolution', Neural Computation 7(6), 1129-1159.
Bollerslev, T., Chou, R. & Kroner, K. (1992), 'ARCH modelling in finance: A review of the theory and empirical evidence', Journal of Econometrics 8, 5-59.
Cao, X. & Liu, R. (1996), 'General approach to blind source separation', IEEE Transactions on Signal Processing 44(3), 562-569.
Comon, P. (1994), 'Independent component analysis, a new concept?', Signal Processing 36, 287-314.
Guillaume, D., Dacorogna, M., Dave, R., Muller, U., Olsen, R. & Pictet, O. (1994), From the bird's eye to the microscope, a survey of new stylized facts of the intra-daily foreign exchange markets, Technical Report DMG.1994-04-06, Olsen & Associates, Zurich, Switzerland.
Jacobs, R., Jordan, M. & Barto, A. (1990), Task decomposition through competition in a modular connectionist architecture: The what and where vision tasks, Technical Report COINS 90-27, Department of Brain & Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA.
Jutten, C. & Herault, J. (1991), 'Blind separation of sources, part I: An adaptive algorithm based on neuromimetic architecture', Signal Processing 24(1), 1-10.
Linsker, R. (1989), An application of the principle of maximum information preservation to linear systems, in D. Touretzky, ed., 'Advances in Neural Information Processing Systems 1', Morgan Kaufmann Publishers, San Francisco, CA.
Mandelbrot, B. (1963), 'The variation of certain speculative prices', Journal of Business 36, 394-419.
Mandelbrot, B. & Van Ness, J. (1968), 'Fractional Brownian motions, fractional noises and applications', SIAM Review 10.
Moody, J. & Wu, L. (1995), Statistical analysis and forecasting of high frequency foreign exchange rates, in 'The First International Conference on High Frequency Data in Finance', Zurich, Switzerland.
Moody, J. & Wu, L. (1996), What is the 'True Price'? - State space models for high frequency financial data, in S. Amari, L. Xu, L. Chan, I. King & K. Leung, eds, 'Progress in Neural Information Processing (Proceedings of ICONIP*96, Hong Kong)', Springer-Verlag, Singapore, Vol. 2, pp. 697-704.
O'Hara, M. (1995), Market Microstructure Theory, Blackwell Business.
Pearlmutter, B. A. & Parra, L. (1997), Maximum likelihood blind source separation: A context-sensitive generalization of ICA, in M. Mozer, M. Jordan & T. Petsche, eds, 'Advances in Neural Information Processing Systems 9', MIT Press: Cambridge, MA.
Widrow, B., Glover, J., McCool, J., Kaunitz, J., Williams, C., Hearn, R., Zeidler, J., Dong, E. & Goodlin, R. (1975), 'Adaptive noise cancelling: Principles and applications', Proceedings of the IEEE 63(12), 1692-1716.
Wornell, G. & Oppenheim, A. (1992), 'Estimation of fractal signals from noisy measurements using wavelets', IEEE Transactions on Signal Processing 40(3), 611-623.
Zhou, B. (1995), Forecasting foreign exchange rates series subject to de-volatilization, in 'The First International Conference on High Frequency Data in Finance', Zurich, Switzerland.
PART IX: CONTROL, NAVIGATION AND PLANNING
| 1251 |@word kong:1 briefly:1 r13:4 confirms:1 decomposition:21 jacob:2 pick:1 solid:1 reduction:1 moment:2 liu:2 series:7 contains:2 seriously:1 hearn:1 current:1 recovered:1 si:1 must:2 john:1 additive:1 analytic:1 tenn:8 short:6 successive:1 glover:2 c2:1 mandelbrot:4 become:2 m22:1 autocorrelation:7 ica:9 expected:1 market:7 behavior:2 planning:1 multi:17 brain:1 decomposed:4 totally:1 ofica:1 underlying:1 panel:9 what:2 developed:1 perfonn:1 finance:3 exactly:1 control:1 unit:1 appear:1 before:5 positive:1 local:2 lev:1 might:1 bird:1 studied:2 suggests:1 heteroscedastic:1 graduate:1 statistically:1 practical:1 edgeworth:1 hara:2 procedure:1 empirical:5 bell:2 confidence:2 regular:14 cannot:1 selection:1 turbulence:1 context:1 measurable:1 demonstrated:2 williams:2 economics:1 survey:1 mislead:1 identifying:2 pure:1 financial:7 variation:1 associate:1 trend:8 element:1 zeidler:2 econometrics:1 observed:5 wornell:1 news:3 movement:4 mozer:2 depend:1 segment:5 upon:1 po:2 stylized:2 joint:1 various:1 rumor:1 boiler:1 separated:1 describe:2 sejnowski:2 artificial:2 lag:2 larger:1 widely:2 modular:1 furthennore:2 amari:3 gi:1 transform:1 noisy:3 net:2 propose:2 product:2 cancelling:1 cao:2 goodlin:2 competition:1 bollerslev:1 r1:1 generating:1 volatility:1 recurrent:1 widrow:2 progress:1 borrowed:1 strong:1 p2:5 implemented:1 switzerland:3 stochastic:1 subsequently:1 exchange:10 microstructure:1 generalization:1 decompose:1 preliminary:1 announcement:1 parra:2 pl:2 mm:1 mapping:1 predict:1 major:1 purpose:1 estimation:1 dmg:1 infonnation:25 maker:1 quote:11 superposition:1 individually:1 sensitive:1 r31:2 hasselmo:1 reflects:1 mit:3 sensor:5 gaussian:1 e7:1 zhou:2 barto:2 portland:2 modelling:1 likelihood:1 chou:2 foreign:10 leung:1 hidden:1 interested:1 nonnal:1 ill:1 herault:2 smoothing:3 ness:2 mutual:1 marginal:2 equal:4 linsker:2 report:2 connectionist:1 individual:1 trending:3 detection:1 message:1 intra:1 mixture:2 navigation:1 lizhong:1 daily:1 lh:1 shorter:1 orthogonal:2 taylor:1 walk:1 desired:1 p13:1 fitted:1 column:2 modeling:8 cumulants:3 maximization:2 deviation:2 usefulness:1 supervisor:1 characterize:1 krone:2 density:2 fundamental:1 siam:1 international:2 dong:2 moody:11 satisfied:1 choose:1 cognitive:1 expert:2 return:3 potential:2 nonlinearities:1 de:1 coding:1 coefficient:3 oregon:1 caused:1 blind:8 analyze:1 minimize:1 kaufmann:1 characteristic:2 correspond:1 identify:1 spaced:1 weak:1 iid:1 basically:2 dave:2 oppenheim:2 touretzky:2 canceling:1 ed:5 rosenblith:1 energy:1 frequency:13 pp:2 sampled:1 massachusetts:1 fractional:3 appears:1 higher:3 originally:1 mutter:1 box:2 arch:1 correlation:4 hand:2 eqn:2 horizontal:1 nonlinear:3 multiscale:1 overlapping:1 autocorrelations:4 jutten:2 reveal:1 impulse:1 lei:1 effect:34 concept:1 true:5 barlow:2 criterion:1 hong:1 outline:1 pearlmutter:1 motion:2 p12:1 novel:1 volatile:3 speculative:1 rl:6 significant:1 measurement:1 cambridge:4 moving:2 stable:2 longer:2 brownian:2 chan:1 certain:1 verlag:1 tenns:1 muller:2 morgan:1 additional:2 greater:1 mccool:2 maximize:1 period:1 signal:6 preservation:1 multiple:1 technical:2 faster:1 characterized:1 adapt:1 cross:2 long:4 divided:3 prediction:1 vision:1 represent:1 microscope:1 interval:1 diagram:2 median:1 source:14 publisher:1 subject:1 regularly:1 transfonnation:1 jordan:3 yang:2 presence:2 split:1 affect:1 independence:1 charlier:1 architecture:3 economic:1 reduce:1 expression:1 motivated:1 forecasting:4 fractal:1 listed:1 factorial:1 mid:1 ten:1 band:2 generate:1 
singapore:1 dotted:2 estimated:1 disjoint:1 r22:2 group:1 redundancy:2 terminology:1 diffusion:1 sum:1 run:1 contextsensitive:1 wu:11 p3:5 oscillation:2 separation:11 layer:1 display:1 occur:1 ri:1 flat:1 dominated:1 extremely:1 relatively:1 transferred:1 department:1 lca:1 slightly:1 em:1 lasting:1 comon:3 outlier:1 equation:1 mutually:4 zurich:3 discus:1 r3:4 needed:1 ge:3 neuromimetic:1 decomposing:2 apply:1 observe:1 petsche:1 occurrence:1 alternative:1 coin:1 existence:2 original:8 include:2 comparatively:1 dependence:1 traditional:1 september:5 separate:3 p23:1 assuming:1 index:2 useless:1 modeled:1 ratio:3 equivalently:1 difficult:2 pie:1 negative:2 unknown:1 upper:4 nonnalized:1 communication:1 station:1 pair:1 blackwell:1 c3:1 optimized:1 temporary:1 pearl:1 able:2 below:1 including:1 reliable:1 erratic:1 memory:1 business:1 residual:1 technology:1 eye:1 catch:2 cichocki:2 review:2 literature:1 embedded:1 filtering:3 principle:3 pi:3 row:1 last:1 tick:9 side:3 institute:2 taking:1 absolute:2 van:2 curve:2 gram:1 sensory:2 adaptive:2 san:1 transaction:2 perfonnances:1 olsen:3 ml:1 conclude:1 assumed:1 francisco:1 xi:3 table:3 nature:1 transfer:3 ca:1 inventory:14 expansion:3 m13:1 kaunitz:2 noise:6 nothing:1 xu:1 referred:3 depicts:1 sub:2 exponential:1 wavelet:4 ix:1 minute:2 down:1 removing:1 r2:2 decay:1 evidence:1 deconvolution:1 intrinsic:2 conditioned:1 surprise:16 entropy:1 smoothly:1 simply:1 likely:1 springer:1 extracted:1 ma:4 goal:1 viewed:1 king:1 dacorogna:2 price:44 change:13 except:1 vo1:1 ithis:1 perceptrons:1 indicating:1 ongoing:1 dept:1 correlated:2 |
276 | 1,252 | A comparison between neural networks and other statistical techniques for modeling the relationship between tobacco and alcohol and cancer
Tony Plate
BC Cancer Agency
601 West 10th Ave, Epidemiology
Vancouver BC Canada V5Z 1L3
tap@comp.vuw.ac.nz
Pierre Band
BC Cancer Agency
601 West 10th Ave, Epidemiology
Vancouver BC Canada V5Z 1L3
Joel Bert
Dept of Chemical Engineering
University of British Columbia
2216 Main Mall
Vancouver BC Canada V6T 1Z4
John Grace
Dept of Chemical Engineering
University of British Columbia
2216 Main Mall
Vancouver BC Canada V6T 1Z4
Abstract
Epidemiological data is traditionally analyzed with very simple
techniques. Flexible models, such as neural networks, have the
potential to discover unanticipated features in the data. However,
to be useful, flexible models must have effective control on overfitting. This paper reports on a comparative study of the predictive
quality of neural networks and other flexible models applied to real
and artificial epidemiological data. The results suggest that there
are no major unanticipated complex features in the real data, and
also demonstrate that MacKay's [1995] Bayesian neural network
methodology provides effective control on overfitting while retaining the ability to discover complex features in the artificial data.
1 Introduction
Traditionally, very simple statistical techniques are used in the analysis of epidemiological studies. The predominant technique is logistic regression, in which the
effects of predictors are linear (or categorical) and additive on the log-odds scale.
An important virtue of logistic regression is that the relationships identified in the
data can be interpreted and explained in simple terms, such as "the odds of developing lung cancer for males who smoke between 20 and 29 cigarettes per day are
increased by a factor of 11.5 over males who do not smoke". However, because of
their simplicity, it is difficult to use these models to discover unanticipated complex
relationships, i.e., non-linearities in the effect of a predictor or interactions between
predictors. Interactions and non-linearities can of course be introduced into logistic
regressions, but must be pre-specified, which tends to be impractical unless there
are only a few variables or there are a priori reasons to test for particular effects.
Neural networks have the potential to automatically discover complex relationships.
There has been much interest in using neural networks in biomedical applications;
witness the recent series of articles in The Lancet, e.g., Wyatt [1995] and Baxt
[1995]. However, there are not yet sufficient comparisons or theory to come to firm
conclusions about the utility of neural networks in biomedical data analysis. To
date, comparison studies, e.g., those by Michie, Spiegelhalter, and Taylor [1994], Burke, Rosen, and Goodman [1995], and Lippmann, Lee, and Shahian [1995], have had mixed results, and Jefferson et al.'s [1995] complaint that many "successful"
applications of neural networks are not compared against standard techniques appears to be justified. The intent of this paper is to contribute to the body of useful
comparisons by reporting a study of various neural-network and statistical modeling
techniques applied to an epidemiological data analysis problem.
2 The data
The original data set consisted of information on 15,463 subjects from a study conducted by the Division of Epidemiology and Cancer Prevention at the BC Cancer
Agency. In this study, a detailed questionnaire recorded personal information, lifetime tobacco and alcohol use, and lifetime employment history for each subject.
The subjects were cancer patients in BC with diagnosis dates between 1983 and
1989, as ascertained by the population-based registry at the BC Cancer Agency.
Six different tobacco and alcohol habits were included: cigarette (C), cigar (G), and pipe (P) smoking, and beer (B), wine (W), and spirit drinking (S). The models reported in this paper used up to 27 predictor variables: age at first diagnosis (AGE),
and 26 variables related to alcohol and tobacco consumption. These included four
variables for each habit: total years of consumption (CYR etc), consumption per
day or week (CDAY, BWK etc), years since quitting (CYQUIT etc), and a binary variable indicating any indulgence (CSMOKE, BDRINK etc) . The remaining two binary
variables indicated whether the subject ever smoked tobacco or drank alcohol. All
the binary variables were non-linear (threshold) transforms of the other variables.
Variables not applicable to a particular subject were zero, e.g., number of years of
smoking for a non-smoker, or years since quitting for a smoker who did not quit.
Of the 15,463 records, 5901 had missing information in some of the fields related
to tobacco or alcohol use. These were not used, as there are no simple methods
for dealing with missing data in neural networks. Of the 9,562 complete records, a
randomly selected 3,195 were set aside for testing, leaving 6,367 complete records
to be used in the modeling experiments.
There were 28 binary outcomes: the 28 sites at which a subject could have cancer
(subjects had cancers at up to 3 different sites). The number of cases for each site
varied, e.g., for LUNGSQ (Lung Squamous) there were 694 cases among the complete
records, for ORAL (Oral Cavity and Pharynx) 306, and for MEL (Melanoma) 464.
All sites were modeled individually using carefully selected subjects as controls.
This is common practice in cancer epidemiology studies, due to the difficulty of
collecting an unbiased sample of non-cancer subjects for controls. Subjects with
cancers at a site suspected of being related to tobacco usage were not used as
controls. This eliminated subjects with any sites other than COLON, RECTUM, MEL
(Melanoma), NMSK (Non-melanoma skin), PROS (Prostate), NHL (Non-Hodgkin's
lymphoma), and MMY (Multiple-Myeloma), and resulted in between 2959 and 3694
controls for each site. For example, the model for LUNGSQ (lung squamous cell)
cancer was fitted using subjects with LUNGSQ as the positive outcomes (694 cases),
and subjects all of whose sites were among COLON, RECTUM, MEL, NMSK, PROS, NHL,
or MMY as negative outcomes (3694 controls).
3 Statistical methods
A number of different types of statistical methods were used to model the data.
These ranged from the non-flexible (logistic regression) through partially flexible
(Generalized Additive Models or GAMs) to completely flexible (classification trees
and neural networks). Each site was modeled independently, using the log likelihood of the data under the binomial distribution as the fitting criterion. All of the
modeling, except for the neural networks and ridge regression, was done using the
S-plus statistical software package [StatSci 1995].
For several methods, we used Breiman's [1996] bagging technique to control overfitting. To "bag" a model, one fits a set of models independently on bootstrap
samples. The bagged prediction is then the average of the predictions of the models
in the set. Breiman suggests that bagging will give superior predictions for unstable
models (such as stepwise selection, pruned trees, and neural networks).
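A minimal sketch of the bagging procedure just described; `fit` and `predict` are hypothetical placeholders for whatever unstable base learner (tree, stepwise model, network) is being bagged.

```python
import numpy as np

def bag_predictions(fit, predict, X, y, X_test, n_boot=50, rng=None):
    """Bagging: fit n_boot models on bootstrap resamples of the training
    data and average their predicted probabilities on the test set."""
    rng = rng or np.random.default_rng(0)
    n = len(y)
    preds = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)    # sample with replacement
        model = fit(X[idx], y[idx])
        preds.append(predict(model, X_test))
    return np.mean(preds, axis=0)           # bagged probability estimate
```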
Preliminary analysis revealed that the predictive power of non-flexible models could
be improved by including non-linear transforms of some variables, namely AGESQ
and the binary indicator variables SMOKE, DRINK, CSMOKE, etc. Flexible models
should be able to discover useful non-linear transforms for themselves, so these derived variables were not included in the flexible models. In order to allow comparisons to test this, one of the non-flexible models (ONLYLIN-STEP) also did not use any of these derived variables.
Null model: (NULL) The predictions of the null model are just the frequency of
the outcome in the training set.
Logistic regression: The FULL model used the full set of predictor variables,
including a quadratic term for age: AGESQ.
Stepwise logistic regression: A number of stepwise regressions were fitted, differing in the set of variables considered. Outcome-balanced 10-fold cross-validation was used to select the model size giving best generalization. The models were as follows: AGE-STEP (AGE and AGESQ); CYR-AGE-STEP (CYR, AGE and AGESQ); ALC-CYR-AGE-STEP (all alcohol variables, CYR, AGE and AGESQ); FULL-STEP (all variables including AGESQ); and ONLYLIN-STEP (all variables except for the derived binary indicator variables SMOKE, CSMOKE, etc, and only a linear AGE term).
Ridge regression: (RIDGE) Ridge regression penalizes a logistic regression model
by the sum of the squared parameter values in order to control overfitting. The
evidence framework [MacKay 1995] was used to select seven shrinkage parameters:
one for each of the six habits, and one for SMOKE, DRINK, AGE and AGESQ.
Generalized Additive Models: GAMs [Hastie and Tibshirani 1990] fit a smoothing spline to each parameter. GAMs can model non-linearities, but not interactions. A stepwise procedure was used to select the degree (0, 1, 2, or 4) of the smoothing spline for each parameter. The procedure started with a model having a smoothing spline of degree 2 for each parameter, and stopped when the AIC statistic could
not be reduced any further. Two stepwise GAM models were fitted: GAM-FULL used the full set of variables, while GAM-CIG used the cigarette variables and AGE.
Classification trees: [Breiman et al. 1984] The same cross-validation procedure as used with stepwise regression was used to select the best size for TREE, using the implementation in S-plus and the function shrink.tree() for pruning. A bagged
version with 50 replications, TREE-BAGGED, was also used. After constructing a tree
for the data in a replication, it was pruned to perform optimally on the training
data not included in that replication.
Ordinary neural networks: The neural network models had a single hidden layer of tanh functions and a small weight penalty (0.01) to prevent parameters going to infinity. A conjugate-gradient procedure was used to optimize weights. For the NN-ORD-H2 model, which had no control on complexity, a network with two hidden units was trained three times from different small random starting weights. Of these three, the one with best performance on the training data was selected as "the model". The NN-ORD-HCV model used a common method for controlling overfitting in neural networks: 10-fold CV for selecting the optimal number of hidden units. Three random starting points for each partition were used to calculate the average generalization error for networks with one, two and three hidden units. Three networks with the best number of hidden units were trained on the entire set of training data, and the network having the lowest training error was chosen.
Bagged neural networks with early stopping: Bagging and early stopping
(terminating training before reaching a minimum on training set error in order to
prevent overfitting) work naturally together. The training examples omitted from
each bootstrap replication provide a validation set to decide when to stop, and with
early stopping, training is fast enough to make bagging practical. 100 networks
with two hidden units were trained on separate bootstrap replications, and the
best 50 (by their performance on the omitted examples) were included in the final
bagged model, NN-ESTOP-BAGGED. For comparison purposes, the mean individual
performance of these early-stopped networks is reported as NN-ESTOP-AVG.
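A sketch of the pairing of bagging with early stopping described above, assuming a hypothetical `train_net` routine that early-stops on the supplied validation patterns and reports its validation error; all names are ours.

```python
import numpy as np

def bag_with_early_stopping(train_net, X, y, n_boot=100, n_keep=50, rng=None):
    """Train one early-stopped network per bootstrap replicate, using the
    out-of-bootstrap patterns as the stopping set, then keep the n_keep
    networks with the best validation score as an equally weighted bag."""
    rng = rng or np.random.default_rng(0)
    n = len(y)
    fitted = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)          # bootstrap replicate
        oob = np.setdiff1d(np.arange(n), idx)     # omitted patterns
        net, val_err = train_net(X[idx], y[idx], X[oob], y[oob])
        fitted.append((val_err, net))
    fitted.sort(key=lambda t: t[0])
    return [net for _, net in fitted[:n_keep]]    # committee: average outputs
```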
Neural networks with Bayesian regularization: MacKay's [1995] Bayesian
evidence framework was used to control overfitting in neural networks. Three random starts for networks with 1, 2, 3 or 4 hidden units and three different sets of
regularization (penalty) parameters were used, giving a total of 36 networks for each
site. The three possibilities for regularization parameters were: (a) three penalty
parameters - one for each of input to hidden, bias to hidden, and hidden to output;
(b) partial Automatic Relevance Determination (ARD) [MacKay 1995] with seven
penalty parameters controlling the input to hidden weights - one for each habit and
one for AGE ; and (c) full ARD, with one penalty parameter for each of the 19 inputs. The "evidence" for each network was evaluated and the best 18 networks were
selected for the equally-weighted committee model NN-BAYES-CMTT. NN-BAYES-BEST
was the single network with the maximum evidence.
4 Results and Discussion
Models were compared based on their performance on the held-out test data, so
as to avoid overfitting bias in evaluation. While there are several ways to measure
performance, e.g., 0-1 classification error, or area under the ROC curve (as in Burke,
Rosen and Goodman [1995]), we used the test-set deviance as it seems appropriate
to compare models using the same criterion as was used for fitting. Reporting
performance is complicated by the fact that there were 28 different modeling tasks
(i.e., sites), and some models did better on some sites and worse on others. We
report some overall performance figures and some pairwise comparisons of models.
[Figure 1 is a dot plot with one row per model (NULL, AGE-STEP, CYR-AGE-STEP, ALC-CYR-AGE-STEP, ONLYLIN-STEP, FULL-STEP, FULL, RIDGE, GAM-CIG, GAM-FULL, TREE, TREE-BAGGED, NN-ORD-H2, NN-ORD-HCV, NN-ESTOP-AVG, NN-ESTOP-BAGGED, NN-BAYES-BEST, NN-BAYES-CMTT); the panels reproduced here include ALL (6618/3299) and MEL (322/142), with horizontal axes running from -10 to 20 percent.]
Figure 1: Percent improvement in deviance on test data over the null model.
Figure 1 shows aggregate deviances across sites (i.e., the sum of the test deviance for
one model over the 28 sites) and deviances for selected sites. The horizontal scale
in each column indicates the percentage reduction in deviance over the null model.
Zero percent (the dotted line) is the same performance as the null model, and 100%
would be perfect predictions. Numbers below the column labels are the number of
positive outcomes in the training and test sets, respectively. The best predictions
for LUNGSQ can reduce the null deviance by just over 25%. It is interesting to note
that much of the information is contained in AGE and CYR: The CYR-AGE-STEP
model achieved a 7.1% reduction in overall deviance, while the maximum reduction
(achieved by NN-BAYES-CMTT) was only 8.3%.
There is no single threshold at which differences in test-set deviance are "significant", because of strong correlations between predictions of different models. However, the general patterns of superiority apparent in Figure 1 were repeated across
the other sites, and various other tests indicate they are reliable indicators of general performance. For example, the best five models, both in terms of aggregate
deviance across all sites and median rank of performance on individual sites, were,
in order NN-BAYES-CMTT, RIDGE, NN-ESTOP-BAGGED, GAM-CIG, and FULL-STEP. The
ONLYLIN-STEP model ranked sixth in median rank, and tenth in aggregate deviance.
Although the differences between the best flexible models and the logistic models
were slight, they were consistent. For example, NN-BAYES-CMTT did better than
FULL-STEP on 21 sites, and better than ONLYLIN-STEP on 23 sites, while FULL-STEP
drew with ONLYLIN-STEP on 14 sites and did better on 9. If the models had no
effective difference, there was only a 1.25% chance of one model doing better than
the other 21 or more times out of 28. Individual measures of performance were
also consistent with these findings. For example, for LUNGSQ a bootstrap test of
test-set deviance revealed that the predictions of NN-BAYES-CMTT were on average
better than those of ONLYLIN-STEP in 99.82% of resampled test sets (out of 10,000),
while the predictions of NN-BAYES-CMTT beat FULL-STEP in 93.75% of replications
and FULL-STEP beat ONLYLIN-STEP in 98.48% of replications.
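The bootstrap comparison of test-set deviances can be sketched as follows; the function name and interface are ours, and the deviance is the binomial deviance used as the fitting criterion in Section 3.

```python
import numpy as np

def bootstrap_win_rate(p_a, p_b, y, n_rep=10_000, rng=None):
    """Fraction of resampled test sets on which model A has lower binomial
    deviance than model B. p_a and p_b are predicted probabilities for the
    positive class; y holds the 0/1 outcomes."""
    rng = rng or np.random.default_rng(0)
    eps = 1e-12
    dev = lambda p, t: -2 * (t * np.log(p + eps) + (1 - t) * np.log(1 - p + eps))
    d = dev(p_a, y) - dev(p_b, y)          # per-pattern deviance difference
    n = len(y)
    wins = 0
    for _ in range(n_rep):
        idx = rng.integers(0, n, size=n)   # resample the test patterns
        wins += d[idx].sum() < 0           # A beats B on this replicate
    return wins / n_rep
```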
These results demonstrate that good control on overfitting is essential for this task.
Ordinary neural networks with no control on overfitting do worse than guessing (i.e.,
the null model). Even when the number of hidden units is chosen by cross-validation,
the performance is still worse than a simple two-variable stepwise logistic regression
(CYR-AGE-STEP). The inadequacy of the simple AIC-based stepwise procedure for choosing the complexity of the GAMs is illustrated by the poor performance of the
GAM-FULL model (the more restricted GAM-CIG model does quite well).
The effective methods for controlling overfitting were bagging and Bayesian regularization. Bagging improved the performance of trees and early-stopped neural
networks to good levels. Bayesian regularization worked very well with neural networks and with ridge regression. Furthermore, examination of the performance of
individual networks indicates that networks with fine-grained ARD were frequently
superior to those with coarser control on regularization.
5 Artificial sites with complex relationships
The very minor improvement achieved by neural networks and trees over logistic
models provokes the following question: are complex relationships really relatively unimportant in this data, or is the strong control on overfitting preventing the identification of complex relationships? In order to answer this question, we created six artificial "sites" for the subjects. These were designed to have very similar
properties to the real sites, while possessing non-linear effects and interactions.
The risk models for the artificial sites possessed an underlying trend equal to half that of a good logistic model for LUNGSQ, and one of three more complex effects:
FREQ, a frequent non-linear (threshold) effect (BWK > 1) affecting 4,334 of the 9,562
subjects; RARE, a rare threshold effect (BWK > 10), affecting 1,550 subjects; and
INTER, an interaction (BYR × GYR) affecting 482 subjects. For three of the artificial
sites the complex effect was weak (LO), and for the other three it was strong (HI). For
each subject and each artificial site, a random choice as to whether that subject was
a positive case for that site was made, based on probability given by the model for
the artificial site. Models were fitted to these sites in the same way as to other sites
and only subjects without cancer at a smoking related site were used as controls.
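A sketch of how such artificial outcomes can be drawn, assuming a per-subject logit from a fitted logistic trend model; the names and the additive form of the complex effect are our illustrative choices.

```python
import numpy as np

def simulate_site(trend_logit, effect, strength, rng=None):
    """Draw synthetic case/control outcomes for one artificial site.

    trend_logit : per-subject logit from a fitted logistic model (here,
                  half the LUNGSQ trend);
    effect      : 0/1 indicator of the complex feature (e.g. BWK > 10,
                  or a BYR * GYR interaction term);
    strength    : LO or HI coefficient for the complex effect."""
    rng = rng or np.random.default_rng(0)
    logit = 0.5 * trend_logit + strength * effect
    p = 1.0 / (1.0 + np.exp(-logit))       # risk model probability
    return rng.random(len(p)) < p          # random positive-case indicator
```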
[Figure 2 plots one dot per model (NULL, FREQ-TRUE, RARE-TRUE, INTER-TRUE, ONLYLIN-STEP, FULL-STEP, TREE-BAGGED, NN-ESTOP-BAGGED, NN-BAYES-CMTT) in six panels, FREQ-LO, FREQ-HI, RARE-LO, RARE-HI, INTER-LO and INTER-HI, each labeled with its training/test case counts (e.g. FREQ-LO 263/128, FREQ-HI 440/210); axes run from 0 to 60 percent.]
Figure 2: Percent improvement in deviance on test data for the artificial sites.
For comparison purposes, logistic models containing the true set of variables, including non-linearities and interactions, were fitted to the artificial data. For example, the model RARE-TRUE contained the continuous variables AGE, AGESQ, CDAY, CYR, and CYQUIT, and the binary variables SMOKE and BWK > 10.
Figure 2 shows performance on the artificial data. The neural networks and bagged
trees were very effective at detecting non-linearities and interactions. Their performance was at the same level as the appropriate true models, while the performance
of simple models lacking the ability to fit the complexities (e.g., FULL-STEP) was
considerably worse.
6 Conclusions
For predicting the risk of cancer in our data, neural networks with Bayesian estimation of regularization parameters to control overfitting performed consistently
but only slightly better than logistic regression models. This appeared to be due
to the lack of complex relationships in the data: on artificial data with complex relationships they performed markedly better than logistic models. Good control
of overfitting is essential for this task, as shown by the poor performance of neural
networks with the number of hidden units chosen by cross-validation.
Given their ability to not overfit while still identifying complex relationships we
expect that neural networks could prove useful in epidemiological data-analysis
by providing a method for checking that a simple statistical model is not missing
important complex relationships.
Acknowledgments
This research was funded by grants from the Workers Compensation Board of
British Columbia, NSERC, and IRIS, and conducted at the BC Cancer Agency.
References
Baxt, W. G. 1995. Application of artificial neural networks to clinical medicine. The Lancet, 346:1135-1138.
Breiman, L. 1996. Bagging predictors. Machine Learning, 26(2):123-140.
Breiman, L., Friedman, J., Olshen, R., and Stone, C. 1984. Classification and Regression Trees. Wadsworth, Belmont, CA.
Burke, H., Rosen, D., and Goodman, P. 1995. Comparing the prediction accuracy of artificial neural networks and other statistical methods for breast cancer survival. In Tesauro, G., Touretzky, D. S., and Leen, T. K., editors, Advances in Neural Information Processing Systems 7, pages 1063-1067, Cambridge, MA. MIT Press.
Hastie, T. J. and Tibshirani, R. J. 1990. Generalized Additive Models. Chapman and Hall, London.
Jefferson, M. F., Pendleton, N., Lucas, S., and Horan, M. A. 1995. Neural networks (letter). The Lancet, 346:1712.
Lippmann, R., Lee, Y., and Shahian, D. 1995. Predicting the risk of complications in coronary artery bypass operations using neural networks. In Tesauro, G., Touretzky, D. S., and Leen, T. K., editors, Advances in Neural Information Processing Systems 7, pages 1055-1062, Cambridge, MA. MIT Press.
MacKay, D. J. C. 1995. Probable networks and plausible predictions - a review of practical Bayesian methods for supervised neural networks. Network: Computation in Neural Systems, 6:469-505.
Michie, D., Spiegelhalter, D., and Taylor, C. 1994. Machine Learning, Neural and Statistical Classification. Ellis Horwood, Hertfordshire, UK.
StatSci 1995. S-Plus Guide to Statistical and Mathematical Analyses, Version 3.3. StatSci, a division of MathSoft, Inc., Seattle.
Wyatt, J. 1995. Nervous about artificial neural networks? (commentary). The Lancet, 346:1175-1177.
| 1252 |@word version:2 seems:1 reduction:3 series:1 selecting:1 bc:10 amp:1 comparing:1 yet:1 must:2 john:1 belmont:1 additive:4 partition:1 designed:1 aside:1 half:1 selected:5 nervous:1 record:4 provides:1 detecting:1 contribute:1 complication:1 five:1 mathematical:1 replication:7 prove:1 fitting:2 pairwise:1 inter:4 ra:1 themselves:1 frequently:1 automatically:1 discover:5 linearity:5 underlying:1 null:10 lowest:1 interpreted:1 differing:1 finding:1 impractical:1 collecting:1 complaint:1 uk:1 control:18 unit:8 grant:1 superiority:1 positive:3 before:1 engineering:2 tends:1 melanoma:3 plus:3 nz:1 suggests:1 statsci:3 practical:2 acknowledgment:1 testing:1 practice:1 epidemiological:5 bootstrap:4 procedure:5 habit:4 area:1 pre:1 deviance:13 suggest:1 selection:1 risk:3 optimize:1 missing:3 starting:2 independently:2 simplicity:1 identifying:1 population:1 traditionally:2 controlling:3 mathsoft:1 trend:1 michie:2 coarser:1 v6t:2 drank:1 calculate:1 balanced:1 agency:5 questionnaire:1 complexity:3 personal:1 employment:1 trained:3 terminating:1 oral:2 predictive:2 division:2 completely:1 various:2 fast:1 effective:5 london:1 artificial:15 aggregate:3 outcome:6 choosing:1 pendleton:1 firm:1 lymphoma:1 whose:1 apparent:1 quite:1 plausible:1 ability:3 statistic:1 final:1 cig:4 interaction:7 jefferson:2 frequent:1 hcv:2 baxt:2 date:2 artery:1 seattle:1 comparative:1 perfect:1 ac:1 ard:3 minor:1 strong:3 come:1 indicate:1 generalization:2 really:1 preliminary:1 probable:1 quit:1 drinking:1 burke:3 considered:1 hall:1 week:1 major:1 early:5 omitted:2 wine:1 purpose:2 estimation:1 applicable:1 bag:1 label:1 tanh:1 individually:1 weighted:1 mit:2 reaching:1 avoid:1 shrinkage:1 breiman:5 derived:3 improvement:3 consistently:1 rank:2 likelihood:1 indicates:2 ave:2 am:1 colon:2 stopping:3 nn:19 entire:1 hidden:13 going:1 overall:2 among:2 flexible:11 classification:5 priori:1 retaining:1 prevention:1 lucas:1 smoothing:3 mackay:5 wadsworth:1 bagged:12 field:1 equal:1 having:2 eliminated:1 chapman:1 rosen:3 report:2 prostate:1 spline:3 others:1 few:1 randomly:1 resulted:1 individual:4 shahian:2 friedman:1 interest:1 possibility:1 joel:1 evaluation:1 predominant:1 analyzed:1 male:2 provokes:1 held:1 partial:1 worker:1 ascertained:1 unless:1 tree:13 taylor:2 penalizes:1 re:1 fitted:5 stopped:3 increased:1 column:2 modeling:5 elli:1 vuw:1 wyatt:2 ordinary:2 rare:5 predictor:6 successful:1 conducted:2 optimally:1 reported:3 answer:1 considerably:1 eur:1 epidemiology:7 lee:2 together:1 squared:1 containing:1 worse:4 potential:2 inc:1 performed:2 doing:1 start:1 lung:3 bayes:10 complicated:1 accuracy:1 who:3 weak:1 bayesian:7 identification:1 comp:1 history:1 touretzky:2 sixth:1 against:1 frequency:1 naturally:1 stop:1 carefully:1 appears:1 day:2 supervised:1 methodology:1 improved:2 leen:2 done:1 shrink:1 evaluated:1 lifetime:2 just:2 biomedical:2 furthermore:1 correlation:1 overfit:1 horizontal:1 smoke:6 lack:1 logistic:14 quality:1 indicated:1 cigarette:3 usage:1 effect:8 consisted:1 unbiased:1 ranged:1 true:6 regularization:7 chemical:2 freq:4 illustrated:1 mel:4 iris:1 criterion:2 generalized:3 stone:1 plate:4 ridge:7 demonstrate:2 complete:3 pro:2 percent:3 possessing:1 common:2 superior:2 slight:1 significant:1 cambridge:2 ai:1 cv:1 automatic:1 z4:2 had:6 funded:1 l3:2 etc:6 recent:1 tesauro:2 binary:7 hertfordshire:1 minimum:1 commentary:1 ale:2 multiple:1 full:17 determination:1 clinical:1 cross:4 equally:1 prediction:11 regression:16 breast:1 patient:1 achieved:3 cell:1 justified:1 myeloma:1 
affecting:3 fine:1 median:2 leaving:1 goodman:3 markedly:1 subject:20 spirit:1 odds:2 revealed:2 enough:1 fit:3 hastie:2 identified:1 registry:1 reduce:1 whether:2 six:3 utility:1 inadequacy:1 penalty:5 useful:4 detailed:1 unimportant:1 transforms:3 band:4 reduced:1 percentage:1 dotted:1 per:2 tibshirani:2 diagnosis:2 four:1 threshold:4 nhl:2 prevent:2 tenth:1 year:4 sum:2 package:1 letter:1 hodgkin:1 reporting:2 decide:1 layer:1 drink:2 hi:4 fold:2 quadratic:1 infinity:1 worked:1 software:1 pruned:2 relatively:1 developing:1 poor:2 conjugate:1 across:3 slightly:1 explained:1 restricted:1 committee:1 horwood:1 operation:1 gam:11 appropriate:2 pierre:1 original:1 bagging:7 binomial:1 remaining:1 tony:1 medicine:1 giving:2 skin:1 question:2 grace:4 guessing:1 gradient:1 separate:1 consumption:3 seven:2 rectum:2 unstable:1 reason:1 modeled:2 relationship:11 providing:1 difficult:1 olshen:1 negative:1 intent:1 implementation:1 perform:1 ord:3 possessed:1 compensation:1 beat:2 witness:1 ever:1 bwk:4 unanticipated:3 varied:1 bert:4 canada:4 introduced:1 smoking:3 namely:1 specified:1 pipe:1 tap:1 able:1 below:1 pattern:1 appeared:1 including:4 reliable:1 mall:2 power:1 difficulty:1 ranked:1 examination:1 predicting:2 indicator:3 squamous:2 alcohol:7 spiegelhalter:2 mmy:2 started:1 created:1 categorical:1 columbia:3 review:1 checking:1 vancouver:4 lacking:1 expect:1 mixed:1 interesting:1 coronary:1 age:20 validation:5 h2:2 degree:2 sufficient:1 beer:1 consistent:2 article:1 lancet:4 suspected:1 editor:2 bypass:1 lo:6 cancer:21 course:1 bias:2 allow:1 guide:1 curve:1 cmtt:8 preventing:1 made:1 avg:2 pruning:1 lippmann:2 cavity:1 dealing:1 overfitting:14 continuous:1 ca:1 complex:13 cigar:1 constructing:1 did:5 main:2 repeated:1 body:1 site:33 west:2 roc:1 board:1 grained:1 british:3 virtue:1 evidence:4 survival:1 essential:2 stepwise:8 alc:1 drew:1 smoker:2 led:1 contained:2 nserc:1 partially:1 chance:1 ma:2 included:5 except:2 total:2 indicating:1 select:4 relevance:1 dept:2 |
277 | 1,253 | Improving the Accuracy and Speed of Support Vector Machines

Chris J. C. Burges
Bell Laboratories
Lucent Technologies, Room 3G429
101 Crawford's Corner Road
Holmdel, NJ 07733-3030
burges@bell-labs.com

Bernhard Schölkopf*
Max-Planck-Institut für biologische Kybernetik
Spemannstr. 38
72076 Tübingen, Germany
bs@mpik-tueb.mpg.de
Abstract
Support Vector Learning Machines (SVM) are finding application
in pattern recognition, regression estimation, and operator inversion for ill-posed problems. Against this very general backdrop,
any methods for improving the generalization performance, or for
improving the speed in test phase, of SVMs are of increasing interest. In this paper we combine two such techniques on a pattern
recognition problem. The method for improving generalization performance (the "virtual support vector" method) does so by incorporating known invariances of the problem. This method achieves
a drop in the error rate on 10,000 NIST test digit images of 1.4%
to 1.0%. The method for improving the speed (the "reduced set"
method) does so by approximating the support vector decision surface. We apply this method to achieve a factor of fifty speedup in
test phase over the virtual support vector machine. The combined
approach yields a machine which is both 22 times faster than the
original machine, and which has better generalization performance,
achieving 1.1 % error. The virtual support vector method is applicable to any SVM problem with known invariances. The reduced
set method is applicable to any support vector machine.
1 INTRODUCTION
Support Vector Machines are known to give good results on pattern recognition
problems despite the fact that they do not incorporate problem domain knowledge.
*Part of this work was done while B.S. was with AT&T Research, Holmdel, NJ.
However, they exhibit classification speeds which are substantially slower than those
of neural networks (LeCun et al., 1995).
The present study is motivated by the above two observations. First, we shall
improve accuracy by incorporating knowledge about invariances of the problem at
hand. Second, we shall increase classification speed by reducing the complexity of
the decision function representation. This paper thus brings together two threads
explored by us during the last year (Scholkopf, Burges & Vapnik, 1996; Burges,
1996).
The method for incorporating invariances is applicable to any problem for which
the data is expected to have known symmetries. The method for improving the
speed is applicable to any support vector machine. Thus we expect these methods
to be widely applicable to problems beyond pattern recognition (for example, to
the regression estimation problem (Vapnik, Golowich & Smola, 1996)).
After a brief overview of Support Vector Machines in Section 2, we describe how
problem domain knowledge was used to improve generalization performance in Section 3. Section 4 contains an overview of a general method for improving the
classification speed of Support Vector Machines. Results are collected in Section 5.
We conclude with a discussion.
2 SUPPORT VECTOR LEARNING MACHINES
This Section summarizes those properties of Support Vector Machines (SVM) which
are relevant to the discussion below. For details on the basic SVM approach, the
reader is referred to (Boser, Guyon & Vapnik, 1992; Cortes & Vapnik, 1995; Vapnik,
1995). We end by noting a physical analogy.
Let the training data be elements x_i ∈ C, C ⊆ R^d, i = 1, ..., ℓ, with corresponding class labels y_i ∈ {±1}. An SVM performs a mapping Φ : C → H, x ↦ x̄, into a high (possibly infinite) dimensional Hilbert space H. In the following, vectors in H will be denoted with a bar. In H, the SVM decision rule is simply a separating hyperplane: the algorithm constructs a decision surface with normal Ψ̄ ∈ H which separates the x̄_i into two classes:

    Ψ̄ · x̄_i + b ≥ k_0 - ξ_i,   y_i = +1,                          (1)
    Ψ̄ · x̄_i + b ≤ k_1 + ξ_i,   y_i = -1,                          (2)

where the ξ_i are positive slack variables, introduced to handle the non-separable case (Cortes & Vapnik, 1995), and where k_0 and k_1 are typically defined to be +1 and -1, respectively. Ψ̄ is computed by minimizing the objective function
    (Ψ̄ · Ψ̄)/2 + C (Σ_{i=1}^{ℓ} ξ_i)^p,                            (3)
subject to (1), (2), where C is a constant, and we choose p = 2. In the separable case,
the SVM algorithm constructs that separating hyperplane for which the margin between the positive and negative examples in H is maximized. A test vector x ∈ C is then assigned a class label {+1, -1} depending on whether Ψ̄ · Φ(x) + b is greater or less than (k_0 + k_1)/2. Support vectors s_j ∈ C are defined as training samples for which one of Equations (1) or (2) is an equality. (We name the support vectors s to distinguish them from the rest of the training data.) The solution Ψ̄ may be expressed as

    Ψ̄ = Σ_{j=1}^{N_s} α_j y_j Φ(s_j),                              (4)
?
where Cl:j ~ are the positive weights, determined during training , Yj E {?1} the
class labels of the Sj , and N s the number of support vectors. Thus in order to
classify a test point x one must compute
Ns
Ns
q, . X = 2 :' Cl:jYj Sj . x = 2: Cl:jYj4>(Sj)
j=l
i=l
Ns
.
4>(x)
= 2: Cl:jYj J?Sj, x) .
(5)
j=l
One of the key properties of support vector machines is the use of the kernel K to compute dot products in H without having to explicitly compute the mapping Φ. It is interesting to note that the solution has a simple physical interpretation in the high dimensional space H. If we assume that each support vector s_j exerts a perpendicular force of size α_j and sign y_j on a solid plane sheet lying along the hyperplane Ψ̄ · x̄ + b = (k_0 + k_1)/2, then the solution satisfies the requirements of mechanical stability. At the solution, the α_j can be shown to satisfy Σ_{i=1}^{N_s} α_i y_i = 0, which translates into the forces on the sheet summing to zero; and Equation (4) implies that the torques also sum to zero.
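For concreteness, a minimal sketch of evaluating the decision rule of Eqns (4)-(5); the degree-5 polynomial kernel anticipates the experiments of Section 5, and the function and argument names are ours.

```python
import numpy as np

def svm_decide(x, support_vectors, alpha, y_sv, b, k0=1.0, k1=-1.0):
    """Evaluate Eqn (5) and threshold at (k0 + k1)/2.

    support_vectors : array (N_s, d); alpha, y_sv : the weights and labels
    of Eqn (4)."""
    K = lambda S, x: (S @ x) ** 5                    # K(s, x) = (s . x)^5
    f = np.sum(alpha * y_sv * K(support_vectors, x)) + b
    return +1 if f > (k0 + k1) / 2 else -1
```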
3 IMPROVING ACCURACY
This section follows the reasoning of (Schölkopf, Burges, & Vapnik, 1996). Problem domain knowledge can be incorporated in two different ways: the knowledge can be directly built into the algorithm, or it can be used to generate artificial training examples ("virtual examples"). The latter significantly slows down training times, due to both correlations in the artificial data and to the increased training set size (Simard et al., 1992); however it has the advantage of being readily implemented for any learning machine and for any invariances. For instance, if instead of Lie groups of symmetry transformations one is dealing with discrete symmetries, such as the bilateral symmetries of Vetter, Poggio, & Bülthoff (1994), then derivative-based methods (e.g. Simard et al., 1992) are not applicable.
For support vector machines, an intermediate method which combines the advantages of both approaches is possible. The support vectors characterize the solution to the problem in the following sense: if all the other training data were removed, and the system retrained, then the solution would be unchanged. Furthermore, those support vectors s_i which are not errors are close to the decision boundary in H, in the sense that they either lie exactly on the margin (ξ_i = 0) or close to it (ξ_i < 1). Finally, different types of SVM, built using different kernels, tend to produce the same set of support vectors (Schölkopf, Burges, & Vapnik, 1995). This suggests the following algorithm: first, train an SVM to generate a set of support vectors {s_1, ..., s_{N_s}}; then, generate the artificial examples (virtual support vectors) by applying the desired invariance transformations to {s_1, ..., s_{N_s}}; finally,
train another SVM on the new set. To build a ten-class classifier, this procedure is
carried out separately for ten binary classifiers.
Apart from the increase in overall training time (by a factor of two, in our experiments), this technique has the disadvantage that many of the virtual support vectors become support vectors for the second machine, increasing the number of summands in Equation (5) and hence decreasing classification speed. However, the latter problem can be solved with the reduced set method, which we describe next.
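A sketch of the virtual support vector procedure for one binary digit classifier on 28x28 images, using scikit-learn's SVC as a stand-in for the original SVM implementation (sklearn's polynomial kernel includes a scaling factor, so it only approximates (x · y)^5) and numpy rolls as one-pixel translations; all names are ours.

```python
import numpy as np
from sklearn.svm import SVC

def virtual_sv_machine(X, y, shifts=((0, 1), (0, -1), (1, 0), (-1, 0))):
    """Train, extract the support vectors, translate them by one pixel in
    four directions, and retrain on the enlarged set (one binary task)."""
    base = SVC(kernel="poly", degree=5, C=10.0).fit(X, y)

    # Keep only the support vectors and generate translated copies.
    sv, sv_y = base.support_vectors_, y[base.support_]
    imgs = sv.reshape(-1, 28, 28)
    vsv = [sv]
    for dr, dc in shifts:
        # np.roll wraps at the border; acceptable for digits with empty margins.
        vsv.append(np.roll(np.roll(imgs, dr, axis=1), dc, axis=2)
                     .reshape(len(sv), -1))
    X_vsv = np.vstack(vsv)
    y_vsv = np.tile(sv_y, len(shifts) + 1)

    # Retrain on the invariance-augmented support vector set.
    return SVC(kernel="poly", degree=5, C=10.0).fit(X_vsv, y_vsv)
```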
4 IMPROVING CLASSIFICATION SPEED
The discussion in this Section follows that of (Burges, 1996). Consider a set of vectors z_k ∈ C, k = 1, ..., N_z, and corresponding weights r_k ∈ R for which

    Ψ̄' = Σ_{k=1}^{N_z} r_k Φ(z_k)                                  (6)

minimizes (for fixed N_z) the Euclidean distance to the original solution:

    ρ = ||Ψ̄ - Ψ̄'||.                                               (7)

Note that ρ, expressed here in terms of vectors in H, can be expressed entirely in terms of functions (using the kernel K) of vectors in the input space C. The set {(r_k, z_k) | k = 1, ..., N_z} is called the reduced set. To classify a test point x, the expansion in Equation (5) is replaced by the approximation

    Ψ̄' · x̄ = Σ_{k=1}^{N_z} r_k z̄_k · x̄ = Σ_{k=1}^{N_z} r_k K(z_k, x).    (8)
The goal is then to choose the smallest N_z ≪ N_s, and corresponding reduced set, such that any resulting loss in generalization performance remains acceptable. Clearly, by allowing N_z = N_s, ρ can be made zero. Interestingly, there are nontrivial cases where N_z < N_s and ρ = 0, in which case the reduced set leads to an increase in classification speed with no loss in generalization performance. Note that reduced set vectors are not support vectors, in that they do not necessarily lie on the separating margin and, unlike support vectors, are not training samples.
While the reduced set can be found exactly in some cases, in general an unconstrained conjugate gradient method is used to find the z_k (while the corresponding optimal r_k can be found exactly, for all k). The method for finding the reduced set is computationally very expensive (the final phase constitutes a conjugate gradient descent in a space of (d + 1) · N_z variables, which in our case is typically of order 50,000).
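The exactness of the optimal r_k for fixed z_k follows from setting the gradient of ρ² to zero, which yields the linear system K_zz r = K_zs a with a_j = α_j y_j. A minimal sketch, in our own notation:

```python
import numpy as np

def optimal_reduced_weights(S, a, Z, kernel):
    """Given fixed reduced set vectors Z, the weights r minimizing
    rho = ||Psi - Psi'|| solve K_zz r = K_zs a, where a_j = alpha_j * y_j.

    S : (N_s, d) support vectors; Z : (N_z, d) reduced set vectors."""
    K_zz = kernel(Z, Z)          # (N_z, N_z) Gram matrix in H
    K_zs = kernel(Z, S)          # (N_z, N_s) cross Gram matrix
    return np.linalg.solve(K_zz, K_zs @ a)

# Degree-5 polynomial kernel as used in the experiments.
poly5 = lambda A, B: (A @ B.T) ** 5
```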
5 EXPERIMENTAL RESULTS
In this Section, by "accuracy" we mean generalization performance, and by "speed"
we mean classification speed. In our experiments, we used the MNIST database of
60000+ 10000 handwritten digits, which was used in the comparison investigation
of LeCun et al (1995). In that study, the error rate record of 0.7% is held by a
boosted convolutional neural network ("LeNet4").
We start by summarizing the results of the virtual support vector method. We trained ten binary classifiers using C = 10 in Equation (3). We used a polynomial kernel K(x, y) = (x · y)^5. Combining classifiers then gave 1.4% error on the 10,000 test set; this system is referred to as ORIG below. We then generated new training data by translating the resulting support vectors by one pixel in each of four directions, and trained a new machine (using the same parameters). This machine, which is referred to as VSV below, achieved 1.0% error on the test set. The results for each digit are given in Table 1.
Note that the improvement in accuracy comes at a cost in speed of approximately
a factor of 2. Furthermore, the speed of ORIG was comparatively slow to start
with (LeCun et al., 1995), requiring approximately 14 million multiply-adds for one classification (this can be reduced by caching results of repeated support vectors (Burges, 1996)).
Table 1: Generalization performance improvement by incorporating invariances. N_E and N_SV are the number of errors and number of support vectors, respectively; "ORIG" refers to the original support vector machine, "VSV" to the machine trained on virtual support vectors.
Digit   N_E ORIG   N_E VSV   N_SV ORIG   N_SV VSV
0       17         15        1206        2938
1       15         13        757         1887
2       34         23        2183        5015
3       32         21        2506        4764
4       30         30        1784        3983
5       29         23        2255        5235
6       30         18        1347        3328
7       43         39        1712        3968
8       47         35        3053        6978
9       56         40        2720        6348
Table 2: Dependence of performance of the reduced set system on the threshold. The numbers in parentheses give the corresponding number of errors on the test set. Note that Thrsh Test gives a lower bound for these numbers.
Digit   Thrsh VSV        Thrsh Bayes      Thrsh Test
0       1.39606  (9)     1.48648  (8)     1.54696  (7)
1       3.98722  (24)    4.43154  (12)    4.32039  (10)
2       1.27175  (31)    1.33081  (30)    1.26466  (29)
3       1.26518  (29)    1.42589  (27)    1.33822  (26)
4       2.18764  (37)    2.3727   (35)    2.30899  (33)
5       2.05222  (33)    2.21349  (27)    2.27403  (24)
6       0.95086  (25)    1.06629  (24)    0.790952 (20)
7       3.0969   (59)    3.34772  (57)    3.27419  (54)
8       -1.06981 (39)    -1.19615 (40)    -1.26365 (37)
9       1.10586  (40)    1.10074  (40)    1.13754  (39)
In order to become competitive with systems with comparable
accuracy, we will need approximately a factor of fifty improvement in speed. We
therefore approximated VSV with a reduced set system RS with a factor of fifty
fewer vectors than the number of support vectors in VSV.
Since the reduced set method computes an approximation to the decision surface in
the high dimensional space, it is likely that the accuracy of RS could be improved
by choosing a different threshold b in Equations (1) and (2). We computed that
threshold which gave the empirical Bayes error for the RS system, measured on
the training set. This can be done easily by finding the maximum of the difference
between the two un-normalized cumulative distributions of the values of the dot
products Ψ̄ · x̄_i, where the x_i are the original training data. Note that the effects of
bias are reduced by the fact that VSV (and hence RS) was trained only on shifted
data, and not on any of the original data. Thus, in the absence of a validation
set, the original training data provides a reasonable means of estimating the Bayes
threshold. This is a serendipitous bonus of the VSV approach. Table 2 compares
results obtained using the threshold generated by the training procedure for the
VSV system; the estimated Bayes threshold for the RS system; and, for comparison
Table 3: Speed Improvement Using the Reduced Set method. The second through
fourth columns give numbers of errors on the test set for the original system, the
virtual support vector system, and the reduced set system. The last three columns
give, for each system, the number of vectors whose dot product must be computed
in test phase.

Digit   ORIG Err   VSV Err   RS Err   ORIG #SV   VSV #SV   #RSV
0       17         15        18       1206       2938      59
1       15         13        12       757        1887      38
2       34         23        30       2183       5015      100
3       32         21        27       2506       4764      95
4       30         30        35       1784       3983      80
5       29         23        27       2255       5235      105
6       30         18        24       1347       3328      67
7       43         39        57       1712       3968      79
8       47         35        40       3053       6978      140
9       56         40        40       2720       6348      127
purposes only (to see the maximum possible effect of varying the threshold), the
Bayes error computed on the test set.
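A minimal sketch of the empirical Bayes threshold selection described above: the
threshold minimizing the number of training errors of sign(score - b) is found by
scanning the sorted, un-thresholded outputs. All names are hypothetical; only the
cumulative-distribution idea comes from the text.

import numpy as np

def empirical_bayes_threshold(scores, labels):
    # scores: un-thresholded SVM outputs on the training data
    # labels: +1 / -1 class labels
    order = np.argsort(scores)
    s, y = scores[order], labels[order]
    # errors if the threshold is placed just above the k-th smallest score:
    # positives at or below the cut plus negatives above it are misclassified
    pos_below = np.concatenate(([0], np.cumsum(y == 1)))
    neg_above = np.sum(y == -1) - np.concatenate(([0], np.cumsum(y == -1)))
    errors = pos_below + neg_above
    k = int(np.argmin(errors))
    return s[k - 1] if k > 0 else s[0] - 1.0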
Table 3 compares results on the test set for the three systems, where the Bayes
threshold (computed with the training set) was used for RS. The results for all ten
digits combined are 1.4% error for ORIG, 1.0% for VSV (with roughly twice as
many multiply adds) and 1.1% for RS (with a factor of 22 fewer multiply adds than
ORIG).

The reduced set conjugate gradient algorithm does not reduce the objective function
ρ² (Equation (7)) to zero. For example, for the first 5 digits, ρ² is only reduced
on average by a factor of 2.4 (the algorithm is stopped when progress becomes too
slow). It is striking that, nevertheless, good results are achieved.
6
DISCUSSION
The only systems in LeCun et al (1995) with better than 1.1% error are LeNet5
(0.9% error, with approximately 350K multiply-adds) and boosted LeNet4 (0.7%
error, approximately 450K multiply-adds). Clearly SVMs are not in this league yet
(the RS system described here requires approximately 650K multiply-adds).

However, SVMs present clear opportunities for further improvement. (In fact, we
have since trained a VSV system with 0.8% error, by choosing a different kernel.)
More invariances (for example, for the pattern recognition case, small rotations,
or varying ink thickness) could be added to the virtual support vector approach.
Further, one might use only those virtual support vectors which provide new information about the decision boundary, or use a measure of such information to keep
only the most important vectors. Known invariances could also be built directly
into the SVM objective function.
Viewed as an approach to function approximation, the reduced set method is currently restricted in that it assumes a decision function with the same functional
form as the original SVM. In the case of quadratic kernels, the reduced set can be
computed both analytically and efficiently (Burges, 1996). However, the conjugate
gradient descent computation for the general kernel is very inefficient. Perhaps relaxing the above restriction could lead to analytical methods which would apply to
more complex kernels also.
Acknowledgements
We wish to thank V. Vapnik, A. Smola and H. Drucker for discussions. C. Burges
was supported by ARPA contract N00014-94-C-0186. B. Schölkopf was supported
by the Studienstiftung des deutschen Volkes.
References
[1] Boser, B. E., Guyon, I. M., Vapnik, V., A Training Algorithm for Optimal
Margin Classifiers, Fifth Annual Workshop on Computational Learning Theory,
Pittsburgh ACM (1992) 144-152.
[2] Bottou, L., Cortes, C., Denker, J. S., Drucker, H., Guyon, I., Jackel, L. D., Le
Cun, Y., Muller, U. A., Sackinger, E., Simard, P., Vapnik, V., Comparison of
Classifier Methods: a Case Study in Handwritten Digit Recognition, Proceedings of the 12th International Conference on Pattern Recognition and Neural
Networks, Jerusalem (1994)
[3] Burges, C. J. C., Simplified Support Vector Decision Rules, 13th International
Conference on Machine Learning (1996), pp. 71 - 77.
[4] Cortes, C., Vapnik, V., Support Vector Networks, Machine Learning 20 (1995)
pp. 273 - 297
[5] LeCun, Y., Jackel, L., Bottou, L., Brunot, A., Cortes, C., Denker, J., Drucker,
H., Guyon, I., Muller, U., Sackinger, E., Simard, P., and Vapnik, V., Comparison of Learning Algorithms for Handwritten Digit Recognition, International
Conference on Artificial Neural Networks, Ed. F. Fogelman, P. Gallinari, pp.
53-60, 1995.
[6] Schölkopf, B., Burges, C.J.C., Vapnik, V., Extracting Support Data for a Given
Task, in Fayyad, U. M., Uthurusamy, R. (eds.), Proceedings, First International
Conference on Knowledge Discovery & Data Mining, AAAI Press, Menlo Park,
CA (1995)
[7] Schölkopf, B., Burges, C.J.C., Vapnik, V., Incorporating Invariances in Support
Vector Learning Machines, in Proceedings ICANN'96 - International Conference on Artificial Neural Networks. Springer Verlag, Berlin, (1996)
[8] Simard, P., Victorri, B., Le Cun, Y., Denker, J., Tangent Prop - a Formalism
for Specifying Selected Invariances in an Adaptive Network, in Moody, J. E.,
Hanson, S. J., Lippmann, R. P., Advances in Neural Information Processing
Systems 4, Morgan Kaufmann, San Mateo, CA (1992)
[9] Vapnik, V., Estimation of Dependences Based on Empirical Data, [in Russian]
Nauka, Moscow (1979); English translation: Springer Verlag, New York (1982)
[10] Vapnik, V., The Nature of Statistical Learning Theory, Springer Verlag, New
York (1995)
[11] Vapnik, V., Golowich, S., and Smola, A., Support Vector Method for Function
Approximation, Regression Estimation, and Signal Processing, Submitted to
Advances in Neural Information Processing Systems, 1996
[12] Vetter, T., Poggio, T., and Bulthoff, H., The Importance of Symmetry and Virtual Views in Three-Dimensional Object Recognition, Current Biology 4 (1994)
18-23
278 | 1,254 | The effect of correlated input data on the
dynamics of learning
Søren Halkjær and Ole Winther
CONNECT, The Niels Bohr Institute
Blegdamsvej 17
2100 Copenhagen, Denmark
halkjaer,winther@connect.nbi.dk
Abstract
The convergence properties of the gradient descent algorithm in the
case of the linear perceptron may be obtained from the response
function. We derive a general expression for the response function
and apply it to the case of data with simple input correlations. It
is found that correlations severely may slow down learning. This
explains the success of PCA as a method for reducing training time.
Motivated by this finding we furthermore propose to transform the
input data by removing the mean across input variables as well as
examples to decrease correlations. Numerical findings for a medical
classification problem are in fine agreement with the theoretical
results.
1
INTRODUCTION
Learning and generalization are important areas of research within the field of neural networks. Although good generalization is the ultimate goal in feed-forward
networks (perceptrons), it is of practical importance to understand the mechanisms
which control the amount of time required for learning, i.e. the dynamics of learning. This is of course particularly important in the case of a large data set. An exact
analysis of this mechanism is possible for the linear perceptron and as usual it is
hoped that the results to some extend may be carried over to explain the behaviour
of non-linear perceptrons.
We consider N-dimensional input vectors x ∈ R^N and scalar output y. The linear
perceptron is parametrized by the weight vector w ∈ R^N:

    y(x) = (1/√N) w^T x                                            (1)
Let the training set be {(x^μ, y^μ), μ = 1, …, p} and the training error be the usual
squared error, E(w) = (1/2) Σ_μ (y^μ − y(x^μ))². We will use the well-known gradient
descent algorithm¹ w(k+1) = w(k) − η∇E(w(k)) to estimate the minimum points
w* of E. Here η denotes the learning parameter. Collecting the input examples
in the N × p matrix X and the corresponding output in y, the error function is
written E(w) = (1/2)(w^T R w − 2 q^T w + c), where R ≡ (1/N) Σ_μ x^μ(x^μ)^T,
q = (1/√N) X y and c = y^T y. As in (Le Cun et al., 1991) the convergence
properties of the minimum points w* are examined in the coordinate system where
R is diagonal. Let U denote the matrix whose columns are the eigenvectors of R
and Λ = diag(λ₁, …, λ_N) the diagonal matrix containing the eigenvalues of R. The
new coordinates then become v = U^T (w − w*) with corresponding error function²

    E(v) = (1/2) v^T Λ v + E₀ = (1/2) Σᵢ λᵢ vᵢ² + E₀               (2)
where E₀ = E(w*). Gradient descent now leads to the decoupled equations

    vᵢ(k+1) = (1 − ηλᵢ) vᵢ(k) = (1 − ηλᵢ)^k vᵢ(0)                  (3)
with i = 1, …, N. Clearly, v → 0 requires |1 − ηλᵢ| < 1 for all i, so that η must
be chosen in the interval 0 < η < 2/λ_max. In the extreme case λᵢ = λ we will
have convergence in one step for η = 1/λ. However, in the usual case of unequal λᵢ
the convergence for large k will be exponential, vᵢ(k) = exp(−ηλᵢk) vᵢ(0); (ηλᵢ)⁻¹
therefore defines the time constant of the i'th equation, giving a slowest time constant
(ηλ_min)⁻¹. A popular choice for the learning parameter is η = 1/λ_max, resulting in
a slowest time constant λ_max/λ_min, called the learning time τ in the following. The
convergence properties of the gradient descent algorithm are thus characterized by
τ. In the case of a singular matrix R, one or more of the eigenvalues will be zero,
and there will be no convergence along the corresponding eigendirections. This has
however no influence on the error according to (2). Thus, λ_min will in the following
denote the smallest non-zero eigenvalue.
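The decoupled dynamics (3) are easy to simulate directly; the following sketch
(with made-up eigenvalues) illustrates how, for η = 1/λ_max, the mode with the
smallest eigenvalue dominates the convergence time λ_max/λ_min:

import numpy as np

def eigenmode_trajectories(eigvals, v0, eta, steps):
    # v_i(k) = (1 - eta * lambda_i)^k v_i(0), Eq. (3)
    k = np.arange(steps + 1)[:, None]
    return (1.0 - eta * eigvals) ** k * v0

lam = np.array([0.1, 1.0, 10.0])          # illustrative eigenvalues
eta = 1.0 / lam.max()                      # popular choice eta = 1/lambda_max
traj = eigenmode_trajectories(lam, np.ones(3), eta, 200)
print(np.abs(traj[-1]))                    # the lambda = 0.1 mode decays slowest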
We will in this article calculate the eigenvalue spectrum of R in order to obtain the
learning time of the gradient descent algorithm. This may be done by introducing
the response function

    G_L ≡ G(L, H) = (1/N) Tr L (1 − RH)⁻¹ ≡ ⟨L (1 − RH)⁻¹⟩         (4)
where L, H are arbitrary N × N matrices and ⟨·⟩ denotes the normalized trace
(1/N) Tr. Using a standard representation of the Dirac δ-function (Krogh, 1992)
we may write the eigenvalue spectrum of R as

    ρ(λ) = (1/π) Im G(λ⁻¹, λ⁻¹) ≡ (1/π) Im G_{1/λ}                 (5)
¹The Newton-Raphson method, w(k+1) = w(k) − ∇E(w(k)) (∇²E(w(k)))⁻¹, is of
course much more effective in the linear case since it gives convergence in one step. This
method however requires an inversion of the Hessian matrix.
²Note that this analysis is valid for any part of an error surface in which a quadratic
approximation is valid. In the general case R should be exchanged with the Hessian
∇∇E(w*).
where). has an infinitesimal imaginary part which is set equal to zero at the end of
the calculation.
In the 'thermodynamic limit' N -+ 00 keeping a = Il constant and finite, G
(and thus the eigenvalue spectrum) is a self-averaging quantity (Sollich, 1996) i.
e. G - G = O(N- 1 ), where G is defined as the response function averaged over
the input distribution. Previously G has been calculated for independent input
variables (Hertz et al., 1989; Sollich, 1996). In section 2 we derive an implicit
equation for the averaged response function for arbitrary correlations using random
matrix techniques (Brody et al., 1981). This equation is solved showing that simple
input correlations may slow down learning significantly. Based on this finding we
propose in section 3 data transformations for improving the learning speed and test
the transformation numerically on a medical classification problem in section 4. We
conclude in section 5 with a discussion of the results.
2
THE RESPONSE FUNCTION
The method for deriving the averaged response function is based on the fact that
the response function (4) may be written as a geometric series Ḡ_L = Σ_{r=0}^∞ ⟨L(RH)^r⟩.
We will assume that the input examples x^μ are drawn independently from a
Gaussian distribution with means mᵢ and covariances Cᵢⱼ (the average of xᵢxⱼ minus
mᵢmⱼ), so that the average of x^μ(x^ν)^T is δ_μν Z and R̄ = αZ, where Z ≡ C + mm^T.
The Gaussian distribution has the property that the average of products of x's can
be calculated by making all possible pair correlations, e.g. the average of xᵢxⱼxₖxₗ
equals ZᵢⱼZₖₗ + ZᵢₖZⱼₗ + ZᵢₗZⱼₖ. To take the average of ⟨L(RH)^r⟩, we must therefore
make all possible pairs of the x's and exchange each pair xᵢxⱼ with Zᵢⱼ. This
combinatorial problem is solved below in a recursive fashion, leading to an implicit
equation for Ḡ_L. Pairing the first x^μ either with its partner in the same factor of
R or with an x^ν occurring later in the product, we get for r ≥ 2

    ⟨L(RH)^r⟩ = α ⟨LZH(RH)^{r−1}⟩ + Σ_{s=0}^{r−2} ⟨L(RH)^{r−s−1}⟩ ⟨ZH(RH)^s⟩    (6)
Resumming this we get the response function

    Ḡ_L = ⟨L⟩ + α Σ_{r=0}^∞ ⟨LZH(RH)^r⟩ + Σ_{r=2}^∞ Σ_{s=0}^{r−2} ⟨L(RH)^{r−s−1}⟩ ⟨ZH(RH)^s⟩    (7)

Exchanging the order of summation in the last term we can write everything in
terms of response functions,

    Ḡ_L = ⟨L⟩ + α Ḡ_{LZH} + Σ_{s=0}^∞ ⟨ZH(RH)^s⟩ Σ_{r=s+2}^∞ ⟨L(RH)^{r−s−1}⟩
        = ⟨L⟩ + α Ḡ_{LZH} + (Ḡ_L − ⟨L⟩) Ḡ_{ZH}
        = ⟨L⟩ + α Ḡ_{LZH} / (1 − Ḡ_{ZH})                           (8)
Using (8) recursively, setting L equal to LZH, L(ZH)², etc., one obtains

    Ḡ_L = Σ_{r=0}^∞ ⟨L (αZH/(1 − Ḡ_{ZH}))^r⟩ = ⟨L (1 − αZH/(1 − Ḡ_{ZH}))⁻¹⟩    (9)
This is our main result for the response function. To get the response function
Ḡ_{1/λ} = G(λ⁻¹, λ⁻¹) requires two steps: first set L = ZH and solve for Ḡ_{ZH},
and then solve for Ḡ_{1/λ}. If Z has a particularly simple form, (9) may be solved
analytically, but in general it must be solved numerically.
In the following we will calculate the eigenvalue spectrum for a correlation matrix
of the form C = nn^T + rI and general mean m. To ensure that C is positive
semi-definite, r ≥ 0 and |n|² + r ≥ 0, where |n|² ≡ n·n. The eigenvalues of
Z = nn^T + mm^T + rI are straightforwardly shown to be a₁ = r (with multiplicity
N − 2), a₂ = r + [|n|² + |m|² − √D]/2 and a₃ = r + [|n|² + |m|² + √D]/2, with
D = (|n|² − |m|²)² + 4(n·m)².
Carrying out the trace in eq. (9) we get

    Ḡ_{1/λ} = (N−2)/N · 1/(λ − αa₁/(1 − Ḡ_{ZH}))
            + 1/N · 1/(λ − αa₂/(1 − Ḡ_{ZH}))
            + 1/N · 1/(λ − αa₃/(1 − Ḡ_{ZH}))                       (10)

This expression suggests that we may solve Ḡ_{1/λ} in powers of 1/N (see e.g.
(Sollich, 1996)). However, for purposes of the discussion of learning times the only
1/N-term that will be of importance is the last term above. We therefore only need
to solve for Ḡ_{ZH} (setting L = ZH in (9)) to leading order
    Ḡ_{ZH} = [λ + a₁(1−α) − √((λ + a₁(1−α))² − 4λa₁)] / (2λ)       (11)

Note that Ḡ_{ZH} will vanish for large λ, implying that the last term in (10) to
leading order is singular for λ = αa₃. Inserting the result in (10) gives
    Ḡ_{1/λ} = (1/(2λa₁)) [λ + a₁(1−α) − √((λ + a₁(1−α))² − 4λa₁)]
            + (1/N) · 1/(λ − αa₃)                                  (12)
According to (5), the eigenvalue spectrum is determined by the imaginary part and
poles of Ḡ_{1/λ}. Ḡ_{1/λ} has an imaginary part for λ₋ < λ < λ₊, where λ± = a₁(1 ± √α)²,
and poles at λ = 0 and λ = αa₃. The poles contribute each with a δ-function, such
that the eigenvalue spectrum, up to corrections of order 1/N, becomes

    ρ(λ) = (1 − α) θ(1 − α) δ(λ) + (1/N) δ(λ − αa₃)
         + (1/(2πλa₁)) √((λ₊ − λ)(λ − λ₋))                         (13)
where θ(x) = 1 for x > 0 and 0 otherwise. The first term expresses the trivial fact
that for p < N the whole input space is not spanned and R will have a fraction of
1 − α zero eigenvalues. The continuous spectrum (the root term) only contributes
for λ₋ < λ < λ₊. Numerical simulations have been performed to test the validity
of the spectrum (13) (Halkjær, 1996). They are in good agreement with predicted
results, indicating that finite size effects are unimportant. The continuous spectrum
(13) has also been calculated using the replica method (Halkjær, 1996).
From the spectrum the learning time τ may be read off directly:

    τ = max(λ₊/λ₋, αa₃/λ₋) = max( ((1 + √α)/(1 − √α))², αa₃/(a₁(1 − √α)²) )    (14)
To illustrate how input correlations and bias may affect learning time, consider
simple correlations Cᵢⱼ = δᵢⱼ v(1−c) + vc and mᵢ = m. With this special choice of
correlations, τ = αN(m² + vc)/(v(1−c)(1−√α)²). For m² + vc > 0, i.e. for non-zero
mean or positive correlations, the convergence time will blow up by a factor proportional to N. The input bias effect has previously been observed by (Le Cun et al.,
1991; Wendemuth et al., 1993). In the next section we will consider transformations
to remove the large eigenvalue and thus to speed up learning.
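A small numerical check of Eq. (14) for this correlated model (with illustrative
parameter values of our own choosing) makes the N-proportional blow-up explicit:

import numpy as np

def learning_time(alpha, a1, a3):
    # tau = max(lambda_+/lambda_-, alpha*a3/lambda_-), Eq. (14)
    sqa = np.sqrt(alpha)
    return max(((1 + sqa) / (1 - sqa)) ** 2,
               alpha * a3 / (a1 * (1 - sqa) ** 2))

# C_ij = delta_ij v(1-c) + v c with bias m: a1 = v(1-c), and the large
# eigenvalue of Z grows as a3 ~ N(m^2 + v c) for large N.
v, c, m, alpha = 1.0, 0.1, 0.5, 0.5
for N in (100, 1000, 10000):
    a1 = v * (1 - c)
    a3 = a1 + N * (m ** 2 + v * c)   # leading behaviour for large N
    print(N, learning_time(alpha, a1, a3))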
3
DATA TRANSFORMATIONS FOR INCREASING
LEARNING SPEED
In this section we consider two data transformations for minimizing the learning
time τ of a data set, based on the results obtained in the previous sections.
The PCA transformation (Jackson, 1991) is a data transformation often used in data
analysis. Let U be the matrix whose columns are the eigenvectors of the sample
covariance matrix and let x_mean denote the sample average vector (see below). It
is easy to check that the transformed data set

    x̃^μ = U^T (x^μ − x_mean)                                       (15)

has uncorrelated (zero-mean) variables. However, the new PCA variables will often
have a large spread in variances, which might result in slow convergence. A simple
rescaling of the new variables will remove this problem, such that according to (14)
a PCA transformed data set with rescaled variables will have optimal convergence
properties.
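A minimal sketch of the PCA transformation (15) with the variable rescaling
suggested above (the small epsilon guarding against zero variances is our addition):

import numpy as np

def pca_rescaled(X, eps=1e-12):
    # X: p x N data matrix (one example per row)
    Xc = X - X.mean(axis=0)                  # remove variable means
    cov = Xc.T @ Xc / X.shape[0]             # sample covariance matrix
    eigval, U = np.linalg.eigh(cov)
    Z = Xc @ U                               # uncorrelated PCA variables
    return Z / np.sqrt(np.maximum(eigval, eps))  # rescale to unit variance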
The other transformation, which we will call double centering, is based on the removal of the observation means and the variable means. However, whereas the
PCA transformation doesn't care about the initial distribution, this transformation
is optimal for a data set generated from the matrix Zᵢⱼ = δᵢⱼ v(1−c) + vc + mᵢmⱼ
studied earlier. Define xᵢ^mean = (1/p) Σ_μ xᵢ^μ (mean of the i'th variable),
x^μ_mean = (1/N) Σᵢ xᵢ^μ (mean of the μ'th example) and x_mean^mean = (1/pN) Σ_{μ,i} xᵢ^μ
(grand mean). Consider first the transformed data set

    x̃ᵢ^μ = xᵢ^μ − xᵢ^mean − x^μ_mean + x_mean^mean

The new variables are readily seen to have zero mean, variance ṽ = v(1−c) − (v/N)(1−c)
and correlation c̃ = −1/(N−1). Since ṽ(1 − c̃) = v(1 − c), we immediately get from
(13) that the continuous eigenvalue spectrum is unchanged by this transformation.
Furthermore, the 'large' eigenvalue αa₃ is equal to zero and therefore uninteresting.
Thus the learning time becomes τ = (1 + √α)²/(1 − √α)². This transformation
however removes perhaps important information from the data set, namely the
observation means. Motivated by these findings, we create a new data set {x̂^μ}
where this information is added as an extra component:
    x̂^μ = (x̃₁^μ, …, x̃_N^μ, x^μ_mean − x_mean^mean)                (16)
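In code, the double centering with the retained observation mean of Eq. (16)
amounts to the following (a sketch; variable names are ours):

import numpy as np

def double_center(X):
    # X: p x N matrix of examples
    var_mean = X.mean(axis=0, keepdims=True)   # per-variable means
    obs_mean = X.mean(axis=1, keepdims=True)   # per-example means
    grand = X.mean()                           # grand mean
    Xc = X - var_mean - obs_mean + grand       # doubly centered variables
    extra = obs_mean - grand                   # kept as component N+1
    return np.hstack([Xc, extra])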
Table 1: Required number of iterations and corresponding learning times for different data transformations. 'Raw' is the original data set, 'Var. cent.' indicates the
variable centered (mᵢ = 0) data set, 'Doub. cent.' denotes (16), while the two last
columns concern the PCA transformed data set (15) without and with rescaled
variables.

                Raw      Var. cent.   Doub. cent.   PCA     PCA (res.)
    Iterations  ∞        300          50            630     7
    τ           161190   3330         237           3330    1
The matrix R̃ resulting from this data set is identical to the above case except that
a column and a row have been added. We therefore conclude that the eigenvalue
spectrum of this data set consists of a continuous spectrum equal to the above
and a single eigenvalue, which is found to be λ = (1/N) v(1−c) + cv. For c ≠ 0 we
will therefore have a learning time τ of order one, indicating fast convergence. For
independent variables (c = 0) the transformation results in a learning time of order
N, but in this case a simple removal of the variable means will be optimal. After
training, when an (N+1)-dimensional parameter set w has been obtained, it is possible to
transform back to the original data set using the parameter transformation
w̃ᵢ = wᵢ + (1/N) w_{N+1} − (1/N) Σⱼ₌₁^N wⱼ.

4
NUMERICAL INVESTIGATIONS
The suggested transformations for improving the convergence properties have been
tested on a medical classification problem. The data set consisted of 40 regional
values of cerebral glucose metabolism from 85 patients, 48 HIV-negatives and 37
HIV-positives. A simple perceptron with sigmoidal tanh output was trained using
gradient descent on the entropic error function to diagnose the 85 patients correctly. The choice of an entropic error function was due to it's superior convergence
properties compared to the quadratic error function considered in the analysis. The
learning was stopped once the perceptron was able to diagnose all patients correctly.
Table 1 shows the average number of required iterations for each of the transformed
data sets (see legend) as well as the ratio τ = λ_max/λ_min for the corresponding
matrix R. The 'raw' data set could not be learned within the allowed 1000 iterations, which is indicated by an ∞. Overall, there is fine agreement between the order
of calculated learning times and the corresponding order of required number of iterations. Note especially the superiority of the PCA transformation with rescaled
variables.
5
CONCLUSION
For linear networks the convergence properties of the gradient descent algorithm
may be derived from the eigenvalue spectrum of the covariance matrix of the input
data. The convergence time is controlled by the ratio between the largest and
smallest (non-zero) eigenvalue. In this paper we have calculated the eigenvalue
spectrum of a covariance matrix for correlated and biased inputs. It turns out that
correlation and bias give rise to an eigenvalue of the order of the input dimension, as well as
a continuous spectrum of order one. This explains why a PCA transformation (with
The Effect of Correlated Input Data on the Dynamics of Learning
175
a variable rescaling) may increase learning speed significantly. We have proposed
to center (setting equal to zero) the empirical mean both for each variable and
each observation in order to remove the large eigenvalue. We add an additional
component containing the observation mean to the input vector in order have this
information in the training set. At the end of training it is possible to transform
the solution back to the original representation. Numerical investigations are in fine
agreement with the theoretical analysis for improving the convergence properties.
6
ACKNOWLEDGMENTS
We would like to thank Sara A. Solla and Lars Kai Hansen for valuable comments
and discussions. Furthermore we wish to thank Ido Kanter for providing us with
notes on some of his previous work. This work has been supported by the Danish National Councils for the Natural and Technical Sciences through the Danish
Computational Neural Network Center CONNECT.
REFERENCES
Brody, T. A., Flores, J., French, J. B., Mello, P. A., Pandey, A., & Wong, S. S. (1981)
Random-matrix physics. Rev. Mod. Phys. 53:385.
Halkjær, S. (1996) Dynamics of learning in neural networks: application to the diagnosis of HIV and Alzheimer patients. Master's thesis, University of Copenhagen.
Hertz, J. A., Krogh, A. & Thorbergsson, G. I. (1989) Phase transitions in simple
learning. J. Phys. A 22:2133-2150.
Jackson, J. E. (1991) A User's Guide to Principal Components. John Wiley & Sons.
Krogh, A. (1992) Learning with noise in a linear perceptron. J. Phys. A 25:1119-1133.
Le Cun, Y., Kanter, I. & Solla, S.A. (1991) Second Order Properties of Error
Surfaces: Learning Time and Generalization. NIPS, 3:918-924.
Sollich, P. (1996) Learning in large linear perceptrons and why the thermodynamic
limit is relevant to the real world. NIPS, 7:207-214
Wendemuth, A., Opper, M. & Kinzel, W. (1993) The effect of correlations in neural
networks, J. Phys. A 26:3165.
279 | 1,255 | A Model of Recurrent Interactions in
Primary Visual Cortex
Emanuel Todorov, Athanassios Siapas and David Somers
Dept. of Brain and Cognitive Sciences
E25-526, MIT, Cambridge, MA 02139
Email: {emo, thanos,somers }@ai.mit.edu
Abstract
A general feature of the cerebral cortex is its massive interconnectivity - it has been estimated anatomically [19] that cortical
neurons receive upwards of 5,000 synapses, the majority of which
originate from other nearby cortical neurons. Numerous experiments in primary visual cortex (V1) have revealed strongly nonlinear interactions between stimulus elements which activate classical
and non-classical receptive field regions. Recurrent cortical connections likely contribute substantially to these effects. However,
most theories of visual processing have either assumed a feedforward processing scheme [7], or have used recurrent interactions to
account for isolated effects only [1, 16, 18]. Since nonlinear systems cannot in general be taken apart and analyzed in pieces, it
is not clear what one learns by building a recurrent model that
only accounts for one, or very few phenomena. Here we develop
a relatively simple model of recurrent interactions in V1 that reflects major anatomical and physiological features of intracortical
connectivity, and simultaneously accounts for a wide range of phenomena observed physiologically. All phenomena we address are
strongly nonlinear, and cannot be explained by linear feedforward
models.
1
The Model
We analyze the mean firing rates observed in oriented V1 cells in response to stimuli
consisting of an inner circular grating and an outer annular grating. Mean responses
of individual cells are modeled by single-valued "cellular" response functions, whose
arguments are the mean firing rates of the cell's inputs and their maximal synaptic
conductances.
1.1
Neuronal model
Each neuron is modeled as a single voltage compartment in which the membrane
potential V is given by:

    C_m dV(t)/dt = g_ex(t)(E_ex − V(t)) + g_inh(t)(E_inh − V(t))
                 + g_leak(E_leak − V(t)) + g_ahp(t)(E_ahp − V(t))

where C_m is the membrane capacitance, E_x is the reversal potential for current x,
and g_x is the conductance for that current. If the voltage exceeds a threshold V_θ,
a spike is generated, and afterhyperpolarizing currents are activated. The conductances for excitatory and inhibitory currents are modeled as sums of α-functions,
and the ahp conductance is modeled as a decaying exponential. The model consists
of two distinct cell types, excitatory and inhibitory, with realistic cellular parameters [13], similar to the ones used in [17]. To compute the response functions for
the two cell types, we simulated one cell of each type, receiving excitatory and inhibitory Poisson inputs. The synaptic strengths were held constant, while the rates
of the excitatory and inhibitory inputs were varied independently.
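A minimal Euler-integration sketch of this single-compartment model is given
below; the numerical values (time step, capacitance, reversal potentials, threshold)
are generic placeholders rather than the parameters of [13] used in the paper:

import numpy as np

def simulate_neuron(g_ex, g_inh, dt=1e-4, Cm=1e-9, g_leak=2e-8,
                    E_ex=0.0, E_inh=-0.07, E_leak=-0.065, E_ahp=-0.09,
                    V_theta=-0.05, g_ahp_step=2e-8, tau_ahp=0.01):
    # g_ex, g_inh: per-time-step synaptic conductance traces (siemens)
    V, g_ahp, n_spikes = E_leak, 0.0, 0
    for ge, gi in zip(g_ex, g_inh):
        I = (ge * (E_ex - V) + gi * (E_inh - V)
             + g_leak * (E_leak - V) + g_ahp * (E_ahp - V))
        V += dt * I / Cm                      # Euler step of the V equation
        g_ahp *= np.exp(-dt / tau_ahp)        # decaying ahp conductance
        if V > V_theta:                       # spike: reset and activate ahp
            n_spikes += 1
            V = E_leak
            g_ahp += g_ahp_step
    return n_spikes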
Although the driving forces for excitation and inhibition vary, we found that single
cell responses can be accurately modeled if incoming excitation and inhibition are
combined linearly, and the net input is passed through a response function that
is approximately threshold-linear, with some smoothing around threshold. This is
consistent with the results of intracellular experiments that show linear synaptic
interactions in visual cortex [5]. Note that the cellular functions are not sigmoids,
and thus saturating responses could be achieved only through network interactions.
1.2
Cortical connectivity
The visual cortex shares with many other cortical areas a similar pattern of intra-areal connections [12]. Excitatory cells make dense local projections, as well as
long-range horizontal projections that usually contact cells with similar response
properties. Inhibitory cells make only local projections, which are spread further in
space than the local excitatory connections [10]. We assume that cells with similar
response properties have a higher probability of connection, and that probability
falls down with distance in "feature" space. For simplicity, we consider only two
feature dimensions: orientation and RF center in visual space. Since we are dealing
with stimuli with radial symmetry, one spatial dimension is sufficient. The extension
to more dimensions, i.e. another spatial dimension, direction selectivity, ocularity,
etc., is straightforward.
We assume that the feature space is filled uniformly with excitatory and inhibitory
cells. Rather than modeling individual cells, we model a grid of locations, and
for each location we compute the mean firing rate of cells present there. The
connectivity is defined by two projection kernels Kex, Kin (one for each presynaptic
cell type) and weights Wee, Wei, Wie, Wii, corresponding to the number and strength
of synapses made onto excitatory/inhibitory cells. The excitatory projection has
sharper tuning in orientation space, and bigger spread in visual space (Figure 1).
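The following sketch shows one way such kernels could look; the Gaussian falloff
and the particular widths are our assumptions (the paper only states the qualitative
asymmetry: excitation sharper in orientation, wider in visual space):

import numpy as np

def kernel(d_theta, d_x, sigma_theta, sigma_x):
    # connection probability falling off with distance in orientation
    # (degrees) and in visual space
    return (np.exp(-0.5 * (d_theta / sigma_theta) ** 2) *
            np.exp(-0.5 * (d_x / sigma_x) ** 2))

K_ex = lambda dt, dx: kernel(dt, dx, sigma_theta=15.0, sigma_x=1.0)
K_in = lambda dt, dx: kernel(dt, dx, sigma_theta=30.0, sigma_x=0.5)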
1.3
Visual stimuli and thalamocortical input
The visual stimulus is defined by five parameters: diameter d of the inner grating
(the outer is assumed infinite), log contrasts c₁, c₂, and orientations θ₁, θ₂ of each
grating. The two gratings are always centered in the spatial center of the model.
Figure 1: Excitatory (solid) and Inhibitory (dashed) connectivity kernels.
Figure 2: LGN input for a stimulus with high contrast, orthogonal orientations of
center and surround gratings.
Each cortical cell receives LGN input which is the product of log contrast, orientation tuning, and convolution of the stimulus with a spatial receptive field. The
LGN input computed in this way is multiplied by LGN_ex, LGN_in and sent to the
cortical cells. Figure 2 shows an example of what the LGN input looks like.
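A sketch of this input construction (the Gaussian receptive field and tuning
widths are illustrative assumptions, not the paper's parameters):

import numpy as np

def lgn_input(stimulus, log_contrast, d_theta, rf, tuning_width=30.0):
    # product of log contrast, orientation tuning, and the spatial
    # convolution of the stimulus with the receptive field profile rf
    tuning = np.exp(-0.5 * (d_theta / tuning_width) ** 2)
    return log_contrast * tuning * np.convolve(stimulus, rf, mode='same')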
2
Results
For a given LGN input, we computed the steady-state activity in cortex iteratively (about 30 iterations were required). Since we studied the model for gradually
changing stimulus parameters, it was possible to use the solution for one set of
parameters as an initial guess for the next solution, which resulted in significant
speedup. The results presented here are for the excitatory population, since i) it
provides the output of the cortex; ii) contains four times more cells than the inhibitory population, and therefore is more likely to be recorded from. All results
were obtained for the same set of parameters.
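The steady state can be found by straightforward fixed-point iteration, sketched
here; recurrent_step stands for one pass through the cellular response functions
and connection kernels and is assumed to be supplied by the caller:

def steady_state(rates0, lgn, recurrent_step, n_iter=30):
    # about 30 iterations sufficed in the simulations reported here
    rates = rates0
    for _ in range(n_iter):
        rates = recurrent_step(rates, lgn)
    return rates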
2.1
Classical RF effects
First we simulate responses to a central grating (1 deg diameter) for increasing log-contrast levels. It has been repeatedly observed that although LGN firing increases
linearly with stimulus log-contrast, the contrast response functions observed in V1
saturate, and may even supersaturate or decline [11, 2]. The most complete and recent model of these phenomena [6, 3] assumes shunting inhibition, which contradicts
recent intracellular observations [4, 5]. Our model achieves saturation (Figure 3A)
Figure 3: Classical RF effects. A) Contrast response function for excitatory (solid)
and inhibitory (dashed) cells. B) Orientation tuning for different contrast levels
(solid); scaled LGN input (dashed).
and can easily achieve supersaturation for different parameter settings. Instead of
using shunting (divisive) inhibition, we suggest that inhibitory neurons have higher
contrast thresholds than excitatory neurons, i.e. the direct LGN input to inhibitory
cells is weaker. Note that we only need a subpopulation of inhibitory neurons with
that property; the rest can have the same response threshold as the excitatory
population.
Another well-known property of V1 neurons is that their orientation tuning is
roughly contrast-invariant [14]. The LGN input tuning is invariant, therefore this
property is easily obtained in models where V1 responses are almost linear, rather
than saturating [1, 16]. Achieving both contrast-invariant tuning and contrast saturation for the entire population (while single cell feedforward response functions
are non-saturating) is non-trivial. The problem is that invariant tuning requires the
responses at all orientations saturate simultaneously. This is the case in our model
(Figure 3B) - we found that the tuning (half width at half amplitude) varied within
a 5deg range as contrast increased.
2.2
Extraclassical RF effects
Next we consider stimuli which include both a center and a surround grating. In the
first set of simulations we held the diameter constant (at 1 deg) and varied stimulus
contrast and orientation. It has been observed [9, 15] that a high contrast iso-orientation surround stimulus facilitates responses to a low contrast, but suppresses
responses to a high contrast center stimulus. This behavior is captured very well
by our model (Figure 4A). The strong response to the surround stimulus alone is
partially due to direct thalamic input (i.e. the thalamocortical projective field is
larger than the classical receptive field of a V1 cell). The response to an orthogonal
surround is between the center and iso-orientation surround responses, as observed
in [9].
Many neurons in V1 respond optimally to a center grating with a certain diameter,
but their response decreases as the diameter increases (end-stopping). End-stopping
in the model is shown in Figure 4B - responses to increasing grating diameter
reach a peak and then decrease. In experiments it has been observed that the
Figure 4: Extraclassical RF effects. A) Contrast response functions for center
(solid), center + iso-orientation surround (dashed), center + orthogonal surround
(dash-dot). B) Length tuning for 4 center log contrast levels (1, .75, .5, .4), response
to surround of high contrast (dashed).
border between the excitatory and inhibitory regions shifts outward (rightward) as
stimulus contrast levels decline [8]. Our model also achieves this effect (for other
parameter settings it can shift more downward than rightward). Note also that a
center grating with maximal contrast reaches its peak response for a diameter value
which is 3 times smaller than the diameter for which responses to a surround grating
disappear. This is interesting because both the peak response to a central grating,
and the first response to a surround grating can be used to define the extent of the
classical RF - in this case clearly leading to very different definitions. This effect
(shown in Figure 4B) has also been recently observed in V1 [15].
3
Population Modeling and Variability
So far we have analyzed the population firing rates in the model, and compared them
to physiological observations. Unfortunately, in many cases the limited sample size,
or the variability in a given physiological experiment does not allow an accurate
estimate of what the population response might be. In such cases researchers only
describe individual cells, which are not necessarily representative. How can findings reported in this way be captured in a population model? The parameters of
the model could be modified in order to capture the behavior of individual cells on
the population level; or, further subdivisions into neuronal subpopulations may be
introduced explicitly. The approach we prefer is to increase the variance of the
number of connections made across individual cells, while maintaining the same
mean values. We consider variations in the amount of excitatory and inhibitory
synapses that a particular cell receives from other cortical neurons. Note that the
presynaptic origin of these inputs is still the "average" cortical cell, and is not
chosen from a subpopulation with special response properties.
The two examples we show have recently been reported in [15]. If we increase
the cortical input (excitation by 100%, inhibition by 30%), we obtain a completely
patch-suppressed cell, i.e. it does not respond to a center + iso-orientation surround
stimulus - Figure 5A. Figure 5B shows that this cell is an "orientation contrast
detector", i.e. it responds well to Odeg center + 90deg surround, and to 90deg
center + 0 deg surround. Interestingly, the cells with that property reported in [15]
were all patch-suppressed. Note also that our cell has a supersaturating center
response - we found that it always accompanies patch suppression in the model.
Figure 5: Orientation discontinuity detection for a strongly patch-suppressed cell.
A) Contrast response functions for center (solid) and center + iso-orientation surround (dashed). B) Cell's response for 0 deg center, 0-90 deg surround (solid) and
90 deg center, 90-0 deg surround.
Figure 6: Nonlinear summation for a non patch-suppressed cell. A) Contrast response functions for center (solid) and center + iso-orientation surround (dashed).
B) Cell's response for iso-orientation surround, orthogonal center, surround + center, center only.
The second example is a cell receiving 15% of the average cortical input. Not
surprisingly, its contrast response function does not saturate - Figure 6A. However,
this cell exhibits an interesting nonlinear property - it respond well to a combination
of an iso-orientation surround + orthogonal center, but does not respond to either
stimulus alone (Figure 6B). It. is not clear from [15] whether the cells with this
property had saturating contrast response functions.
4
Conclusion
We presented a model of recurrent computation in primary visual cortex that relies
on a limited set of physiologically plausible mechanisms. In particular, we used
the different cellular properties of excitatory and inhibitory neurons and the non-isotropic shape of lateral connections (Figure 1). Due to space limitations, we only
presented simulation results, rather than analyzing the effects of specific parameters.
A preliminary version of such analysis is given in [17] for local effects, and will be
developed elsewhere for lateral interaction effects.
Our goal here was to propose a framework for studying recurrent computation in
V1 that is relatively simple, yet rich enough to simultaneously account for a wide
range of physiological observations. Such a framework is needed if we are to analyse
systematically the fundamental role of recurrent interactions in neocortex.
References
[1] Ben-Yishai, R., Lev Bar-Or, R. & Sompolinsky, H. Proc. Natl. Acad. Sci. U.S.A.
92, 3844-3848 (1995).
[2] Bonds, A.B. Visual Neurosci. 6, 239-255 (1991).
[3] Carandini, M. & Heeger, D.J. Science 264, 1333-1336 (1994).
[4] Douglas, R.J., Martin, K.C., Whitteridge, D. An intracellular analysis of the visual
responses of neurones in cat visual cortex. J. Physiology 44, 659-696, 1991.
[5] Ferster, D. & Jagadeesh, B. J. Neurosci. 12, 1262-1274 (1992).
[6] Heeger, D.J. Visual Neurosci. 70, 181-197 (1992).
[7] Hubel, D.H. & Wiesel, T.N. J. Neurophysiol. 148, 574-591 (1959).
[8] Jagadeesh, B. & Ferster, D. Soc Neursci Abstr. 16 130.11 (1990).
[9] Knierim, J.J. & Van Essen, D.C. J. Neurophysiol. 67, 961-980 (1992).
[10] Kisvarday, Z.F., Martin, K.A.C., Freund, T.F., Magloczky, Z.F., Whitteridge,
D., and Somogyi, P. Exp. Brain Res. 64, 541-552.
[11] Li, C.Y. & Creutzfeldt, O.D. Pflügers Arch. 401, 304-314 (1984).
[12] Lund, J.S., Yoshioka, T., Levitt, J.B. Cereb. Cortex 3, 148-162.
[13] McCormick, D.A., Connors, B.W., Lighthall, J.W. & Prince, D.A. J. Neurophysiol. 54, 782-806 (1985).
[14] Sclar, G. & Freeman, R.D. Exp. Brain Res. 46, 457-461.
[15] Sillito, A.M., Grieve, K.L., Jones, H.E., Cudeiro, J., & Davis, J. Nature, Nov
1995.
[16] Somers, D.C., Nelson, S.B. & Sur, M. J. Neurosci. 15, 5448-5465 (1995).
[17] Siapas, A., Todorov, E., Somers, D. Computing the mean firing rates of ensembles
of realistic neurons. Soc. Neuroscience Abstracts, 1995.
[18] Stemmler, M., Usher, M. & Niebur, E. Science 269, 1877-1880 (1995).
[19] White, E.L. Cortical Circuits 46-82 (Birkhauser, Boston, 1989).
PART III: THEORY
280 | 1,256 | The Learning Dynamics of
a Universal Approximator
Ansgar H. L. West¹,²
David Saad¹
Ian T. Nabney¹
A.H.L.West@aston.ac.uk
D.Saad@aston.ac.uk
I.T.Nabney@aston.ac.uk
1 Neural
Computing Research Group, University of Aston
Birmingham B4 7ET, U.K.
http://www.ncrg.aston.ac.uk/
2Department of Physics, University of Edinburgh
Edinburgh EH9 3JZ, U.K.
Abstract
The learning properties of a universal approximator, a normalized
committee machine with adjustable biases, are studied for on-line
back-propagation learning. Within a statistical mechanics framework, numerical studies show that this model has features which
do not exist in previously studied two-layer network models without adjustable biases, e.g., attractive suboptimal symmetric phases
even for realizable cases and noiseless data.
1
INTRODUCTION
Recently there has been much interest in the theoretical breakthrough in the understanding of the on-line learning dynamics of multi-layer feedforward perceptrons
(MLPs) using a statistical mechanics framework. In the seminal paper (Saad &
Solla, 1995), a two-layer network with an arbitrary number of hidden units was
studied, allowing insight into the learning behaviour of neural network models whose
complexity is of the same order as those used in real world applications.
The model studied, a soft committee machine (Biehl & Schwarze, 1995), consists of
a single hidden layer with adjustable input-hidden, but fixed hidden-output weights.
The average learning dynamics of these networks are studied in the thermodynamic
limit of infinite input dimensions in a student-teacher scenario, where a student
network is presented serially with training examples (ξ^μ, ζ^μ) labelled by a teacher
network of the same architecture but possibly different number of hidden units.
The student updates its parameters on-line, i.e., after the presentation of each
example, along the gradient of the squared error on that example, an algorithm
usually referred to as back-propagation.
Although the above model is already quite similar to real world networks, the approach suffers from several drawbacks. First, the analysis of the mean learning
dynamics employs the thermodynamic limit of infinite input dimension - a problem which has been addressed in (Barber et al., 1996), where finite size effects have
been studied and it was shown that the thermodynamic limit is relevant in most
cases. Second, the hidden-output weights are kept fixed, a constraint which has
been removed in (Riegler & Biehl, 1995), where it was shown that the learning
dynamics are usually dominated by the input-hidden weights. Third, the biases of
the hidden units were fixed to zero, a constraint which is actually more severe than
fixing the hidden-output weights. We show in Appendix A that soft committee
machines are universal approximators provided one allows for adjustable biases in
the hidden layer.
In this paper, we therefore study the model of a normalized soft committee machine
with variable biases following the framework set out in (Saad & Solla, 1995). We
present numerical studies of a variety of learning scenarios which lead to remarkable
effects not present for the model with fixed biases.
2
DERIVATION OF THE DYNAMICAL EQUATIONS
The student network we consider is a normalized soft committee machine of K
hidden units with adjustable biases. Each hidden unit $i$ consists of a bias $\theta_i$ and a
weight vector $W_i$ which is connected to the $N$-dimensional input $\xi$. All hidden units
are connected to a linear output unit with arbitrary but fixed gain $\gamma$ by couplings
of fixed strength. The activation of any unit is normalized by the inverse square
root of the number of weight connections into the unit, which allows all weights to
be of $O(1)$ magnitude, independent of the input dimension or the number of hidden
units. The implemented mapping is therefore $f_W(\xi) = (\gamma/\sqrt{K}) \sum_{i=1}^{K} g(u_i - \theta_i)$,
where $u_i = W_i \cdot \xi / \sqrt{N}$ and $g(\cdot)$ is a sigmoidal transfer function. The teacher network to be learned is of the same architecture except for a possible difference in
the number of hidden units $M$ and is defined by the weight vectors $B_n$ and biases $\rho_n$ ($n = 1, \ldots, M$). Training examples are of the form $(\xi^\mu, \zeta^\mu)$, where the
input vectors $\xi^\mu$ are drawn from the normal distribution and the outputs are
$\zeta^\mu = (\gamma/\sqrt{M}) \sum_{n=1}^{M} g(v_n^\mu - \rho_n)$, where $v_n^\mu = B_n \cdot \xi^\mu / \sqrt{N}$.
The weights and biases are updated in response to the presentation of an example
$(\xi^\mu, \zeta^\mu)$, along the gradient of the squared error measure $\epsilon = \frac{1}{2}\left[\zeta^\mu - f_W(\xi^\mu)\right]^2$:

$$ W_i^{\mu+1} - W_i^{\mu} = \frac{\eta_w}{\sqrt{N}}\, \delta_i^{\mu}\, \xi^{\mu} \quad\text{and}\quad \theta_i^{\mu+1} - \theta_i^{\mu} = -\frac{\eta_\theta}{N}\, \delta_i^{\mu} \qquad (1) $$

with $\delta_i^\mu \equiv [\zeta^\mu - f_W(\xi^\mu)]\, g'(u_i^\mu - \theta_i)$. The two learning rates are $\eta_w$ for the weights
and $\eta_\theta$ for the biases. In order to analyse the mean learning dynamics resulting
from the above update equations, we follow the statistical mechanics framework in
(Saad & Solla, 1995) . Here we will only outline the main ideas and concentrate on
the results of the calculation.
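As a concrete reference for the model and the update rule (1), the following minimal Python sketch simulates a single on-line run of the normalized soft committee machine in a student-teacher setup. It is an illustration only, not the statistical-mechanics calculation: the values of $N$, $K$, $M$, the teacher biases and the seed are arbitrary choices, and the $\gamma/\sqrt{K}$ output prefactor is absorbed into the learning rates, matching the definition of $\delta_i^\mu$ above.

    import numpy as np
    from scipy.special import erf

    def g(x):
        return erf(x / np.sqrt(2.0))

    def g_prime(x):
        # derivative of erf(x / sqrt(2))
        return np.sqrt(2.0 / np.pi) * np.exp(-0.5 * x * x)

    def committee(xi, W, theta, gamma=1.0):
        # f(xi) = (gamma / sqrt(K)) * sum_i g(u_i - theta_i), u_i = W_i . xi / sqrt(N)
        u = W @ xi / np.sqrt(len(xi))
        return gamma / np.sqrt(W.shape[0]) * np.sum(g(u - theta)), u

    rng = np.random.default_rng(0)
    N, K, M = 100, 2, 2                    # input dimension, student/teacher hidden units
    eta_w = eta_theta = 2.0                # common learning rate, as in the text
    B = rng.standard_normal((M, N))        # teacher weights
    rho = np.array([-0.1, 0.1])            # small teacher biases
    W = 0.3 * rng.standard_normal((K, N))  # student weights, norms of order O(1)
    theta = np.array([0.0, 0.5])           # student bias initialization

    for mu in range(50 * N):               # alpha = mu / N runs up to 50
        xi = rng.standard_normal(N)
        zeta, _ = committee(xi, B, rho)
        out, u = committee(xi, W, theta)
        delta = (zeta - out) * g_prime(u - theta)         # delta_i of eq. (1)
        W += (eta_w / np.sqrt(N)) * np.outer(delta, xi)   # weight update, eq. (1)
        theta -= (eta_theta / N) * delta                  # bias update scaled by 1/N

    Q, R = W @ W.T / N, W @ B.T / N        # order parameters Q_ij and R_in
    print(Q, R, theta)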
As we are interested in the typical behaviour of our training algorithm, we average
over all possible instances of the examples $\xi$. We rewrite the update equations (1)
in $W_i$ as equations in the order parameters describing the overlaps between pairs
of student nodes $Q_{ij} = W_i \cdot W_j / N$, student and teacher nodes $R_{in} = W_i \cdot B_n / N$,
and teacher nodes $T_{nm} = B_n \cdot B_m / N$. The generalization error $\epsilon_g$, measuring the
typical performance, can be expressed solely in these variables and the biases $\theta_i$ and
$\rho_n$. The order parameters $Q_{ij}$, $R_{in}$ and the biases $\theta_i$ are the dynamical variables.
These quantities need to be self-averaging with respect to the randomness in the
training data in the thermodynamic limit ($N \to \infty$), which enforces two necessary
constraints on our calculation. First, the number of hidden units $K \ll N$, whereas
one needs $K \sim O(N)$ for the universal approximation proof to hold. Second, one
can show that the updates of the biases have to be of $O(1/N)$, i.e., the bias learning
rate has to be scaled by $1/N$, in order to make the biases self-averaging quantities,
a fact that is confirmed by simulations [see Fig. 1]. If we interpret the normalized
example number $\alpha = \mu/N$ as a continuous time variable, the update equations for
the order parameters and the biases become first order coupled differential equations:

$$ \frac{dQ_{ij}}{d\alpha} = \eta_w \langle \delta_i u_j + \delta_j u_i \rangle_\xi + \eta_w^2 \langle \delta_i \delta_j \rangle_\xi, \qquad \frac{dR_{in}}{d\alpha} = \eta_w \langle \delta_i v_n \rangle_\xi, \quad\text{and}\quad \frac{d\theta_i}{d\alpha} = -\eta_\theta \langle \delta_i \rangle_\xi. \qquad (2) $$
Choosing $g(x) = \mathrm{erf}(x/\sqrt{2})$ as the sigmoidal transfer function, most integrations in Eqs. (2)
can be performed analytically, except for single Gaussian integrals remaining for the $\eta_w^2$ terms and the generalization error. The exact form of the resulting dynamical
equations is quite complicated and will be presented elsewhere. Here we only remark that the gain $\gamma$ of the linear output unit, which determines the output scale,
merely rescales the learning rates with $\gamma^2$ and can therefore be set to one without
loss of generality. Due to the numerical integrations required, the differential equations can only be solved accurately in moderate times for smaller student networks
($K \leq 5$) but any teacher size $M$.
3
ANALYSIS OF THE DYNAMICAL EQUATIONS
The dynamical evolution of the overlaps $Q_{ij}$, $R_{in}$ and the biases $\theta_i$ follows from
integrating the equations of motion (2) from initial conditions determined by the
(random) initialization of the student weights $W_i$ and biases $\theta_i$. For random initialization the resulting norms $Q_{ii}$ of the student vectors will be of order $O(1)$, while
the overlaps $Q_{ij}$ between different student vectors, and student-teacher overlaps $R_{in}$,
will be only of order $O(1/\sqrt{N})$. A random initialization of the weights and biases can
therefore be simulated by initializing the norms $Q_{ii}$, the biases $\theta_i$ and the normalized
overlaps $\hat{Q}_{ij} = Q_{ij}/\sqrt{Q_{ii} Q_{jj}}$ and $\hat{R}_{in} = R_{in}/\sqrt{Q_{ii} T_{nn}}$ from uniform distributions
in the $[0,1]$, $[-1,1]$, and $[-10^{-12}, 10^{-12}]$ intervals respectively.
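A small Python sketch of this initialization scheme (the interval endpoints are the ones quoted above; everything else is an arbitrary illustration):

    import numpy as np

    def init_order_parameters(K, M, T, rng):
        # Norms Q_ii ~ U[0,1], biases theta_i ~ U[-1,1], and normalized overlaps
        # Qhat_ij, Rhat_in ~ U[-1e-12, 1e-12], mimicking random weights at large N.
        Qii = rng.uniform(0.0, 1.0, size=K)
        theta = rng.uniform(-1.0, 1.0, size=K)
        Q = np.diag(Qii)
        R = np.zeros((K, M))
        for i in range(K):
            for j in range(i + 1, K):
                Q[i, j] = Q[j, i] = rng.uniform(-1e-12, 1e-12) * np.sqrt(Qii[i] * Qii[j])
            for n in range(M):
                R[i, n] = rng.uniform(-1e-12, 1e-12) * np.sqrt(Qii[i] * T[n, n])
        return Q, R, theta

    rng = np.random.default_rng(1)
    T = np.eye(2)                          # isotropic teacher, T_nm = delta_nm
    Q0, R0, theta0 = init_order_parameters(K=2, M=2, T=T, rng=rng)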
We find that the results of the numerical integration are sensitive to these random initial values, which has not been the case to this extent for fixed biases.
Furthermore, the dynamical behaviour can become very complex even for realizable cases (K = M) and networks with three or four hidden units. For sake of
simplicity, we will therefore restrict our presentation to networks with two hidden
units ($K = M = 2$) and uncorrelated isotropic teachers, defined by $T_{nm} = \delta_{nm}$, although larger networks and graded teacher scenarios were investigated extensively
as well. We have further limited our scope by investigating a common learning
rate ($\eta \equiv \eta_\theta = \eta_w$) for biases and weights. To study the effect of different weight
initialization, we have fixed the initial values of the student-student overlaps $Q_{ij}$
and biases $\theta_i$, as these can be manipulated freely in any learning scenario. Only the
initial student-teacher overlaps $R_{in}$ are randomized as suggested above.
In Fig. 1 we compare the evolution of the overlaps, the biases and the generalization
error for the soft committee machine with and without adjustable bias learning a
similar realizable teacher task. The student denoted by * lacks biases, i.e., $\theta_i = 0$,
and learns to imitate an isotropic teacher with zero biases ($\rho_n = 0$). The other
student features adjustable biases, trained from an isotropic teacher with small
biases ($\rho_{1,2} = \mp 0.1$). For both scenarios, the learning rate and the initial conditions
were judiciously chosen to be $\eta = 2.0$, $Q_{11} = 0.1$, $Q_{22} = 0.2$, $R_{in} = Q_{12} = U[-10^{-12}, 10^{-12}]$, with $\theta_1 = 0.0$ and $\theta_2 = 0.5$ for the student with adjustable biases.
In both cases, the student weight vectors (Fig. 1a) are drawn quickly from their
initial values into a suboptimal symmetric phase, characterized by the lack of specialization of the student hidden units on a particular teacher hidden unit, as can
be seen from the similar values of $R_{in}$ in Fig. 1b. This symmetry is broken
[Figure 1 plots: four panels (a)-(d) showing the overlaps $Q_{ij}$, the overlaps $R_{in}$, the biases $\theta_i$, and the generalization error $\epsilon_g$ as functions of $\alpha$, for input dimensions $N = 10 \ldots 500$; see the caption below.]
Figure 1: The dynamical evolution of the student-student overlaps $Q_{ij}$ (a), and the
student-teacher overlaps $R_{in}$ (b) as a function of the normalized example number $\alpha$
is compared for two student-teacher scenarios: one student (denoted by *) has fixed
zero biases, the other has adjustable biases. The influence of the symmetry in the
initialization of the biases on the dynamics is shown for the student biases $\theta_i$ (c),
and the generalization error $\epsilon_g$ (d): $\theta_1 = 0$ is kept for all runs, but the initial value
of $\theta_2$ varies and is given in brackets in the legends. Finite size simulations for input
dimensions $N = 10 \ldots 500$ show that the dynamical variables are self-averaging.
almost immediately in the learning scenario with adjustable biases and the student
converges quickly to the optimal solution, characterized by the evolution of the
overlap matrices $Q$, $R$ and biases $\theta_i$ (see Fig. 1c) to their optimal values $T$ and
$\rho_n$ (up to the permutation symmetry due to the arbitrary labeling of the student
nodes). Likewise, the generalization error $\epsilon_g$ decays to zero in Fig. 1d. The student
with fixed biases is trapped for most of its training time in the symmetric phase
before it eventually converges.
Extensive simulations for input dimensions N = 10 ... 500 confirm that the dynamic
variables are self-averaging and show that variances decrease with liN. The mean
trajectories are in good agreement with the theoretical predictions even for very
small input dimensions (N = 10) and are virtually indistinguishable for N = 500.
The length of the symmetric phase for the isotropic teacher scenario is dominated
by the learning rate¹, but also exhibits a logarithmic dependence on the typical
¹The length of the symmetric phase is linearly dependent on $1/\eta$ for small learning rates.
[Figure 2 plots: panel (a) shows the biases $\theta_i$ versus $\alpha$ for different ranges of random bias initialization; panel (b) shows the normalized convergence time versus the initial value of $\theta_2$ for various learning rates; see the caption below.]
Figure 2: (a) The dynamical evolution of the biases $\theta_i$ for a student imitating an
isotropic teacher with zero biases reveals symmetric dynamics for $\theta_1$ and $\theta_2$. The
student was randomly initialized identically for the different runs, but for a change
in the range of the random initialization of the biases ($U[-b,b]$), with the value of
$b$ given in the legend. Above a critical value of $b$ the student remains stuck in a
suboptimal phase. (b) The normalized convergence time $\bar{\alpha}_c \equiv \eta\, \alpha_c$ is shown as a
function of the initialization of $\theta_2$ for various learning rates $\eta$ (see legend; $\eta^2 = 0$
symbolizes the dynamics neglecting the $\eta^2$ terms).
differences in the initial student-teacher overlaps $R_{in}$ (Biehl et al., 1996), which are
typically of order $O(1/\sqrt{N})$ and cannot be influenced in real scenarios without a
priori knowledge. The initialization of the biases, however, can be controlled by
the user and its influence on the learning dynamics is shown in Figs. lc and Id for
the biases and the generalization error respectively. For initially identical biases
($\theta_1 = \theta_2 = 0$), the evolution of the order parameters and hence the generalization
error is almost indistinguishable from the fixed biases case. A breaking of this
symmetry leads to a decrease of the symmetric phase linear in $\log(|\theta_1 - \theta_2|)$ until
it has all but disappeared. The dynamics are again slowed down for very large
initialization of the biases (see Fig. 1d), where the biases have to travel a long way to
their optimal values.
This suggests that for a given learning rate the biases have a dominant effect in
the learning process and strongly break existent symmetries in weight space. This
is arguably due to a steep minimum in the generalization error surface along the
direction of the biases. To confirm this, we have studied a range of other learning
scenarios including larger networks and non-isotropic teachers, e.g., graded teachers
with $T_{nm} = n\, \delta_{nm}$. Even when the norms of the teacher weight vectors are strongly
graded, which also breaks the weight symmetry and reduces the symmetric phase
significantly in the case of fixed biases, we have found that the biases usually have
the stronger symmetry breaking effect: the trajectories of the biases never cross,
provided that they were not initialized too symmetrically.
This would seem to promote initializing the biases of the student hidden units evenly
across the input domain, which has been suggested previously on a heuristic basis
(Nguyen & Widrow, 1990). However, this can lead to the student being stuck in a
suboptimal configuration. In Fig. 2a, we show the dynamics of the student biases $\theta_i$
when the teacher biases are symmetric ($\rho_n = 0$). We find that the student progress
is inversely related to the magnitude of the bias initialization and finally fails to
converge at all. It remains in a suboptimal phase characterized by biases of the same
large magnitude but opposite sign and highly correlated weight vectors. In effect,
the outputs of the two student nodes cancel out over most of the input domain. In
Fig. 2b, the influence of the learning rate in combination with the bias initialization
in determining convergence is illustrated. The convergence time $\alpha_c$, defined as the
example number at which the generalization error has decayed to a small value,
here judiciously chosen to be $10^{-8}$, is shown as a function of the initial value of $\theta_2$
for various learning rates $\eta$. For convenience, we have normalized the convergence
time with $\eta$. The initialization of the other order parameters is identical to
Fig. 1a. One finds that the convergence time diverges, for all learning rates, above
a critical initial value of $\theta_2$. For increasing learning rates, this transition becomes
sharper and occurs at smaller $\theta_2$, i.e., the dynamics become more sensitive to the
bias initialization.
4
SUMMARY AND DISCUSSION
This research has been motivated by recent progress in the theoretical study of
on-line learning in realistic two-layer neural network models - the soft-committee
machine, trained with back-propagation (Saad & Solla, 1995). The studies so far
have excluded biases to the hidden layers, a constraint which has been removed in
this paper, which makes the model a universal approximator. The dynamics of the
extended model turn out to be very rich and more complex than the original model.
In this paper, we have concentrated on the effect of initialization of student weights
and biases. We have further restricted our presentation for simplicity to realizable
cases and small networks with two hidden units, although larger networks were
studied for comparison. Even in these simple learning scenarios, we find surprising dynamical effects due to the adjustable biases. In the case where the teacher
network exhibits distinct biases, unsymmetric initial values of the student biases
break the node symmetry in weight space effectively and can speed up the learning
process considerably, suggesting that student biases should in practice be initially
spread evenly across the input domain if there is no a priori knowledge of the function to be learned. For degenerate teacher biases however such a scheme can be
counterproductive as different initial student bias values slow down the learning
dynamics and can even lead to the student being stuck in suboptimal fixed points,
characterized by student biases being grouped symmetrically around the degenerate
teacher biases and strong correlations between the associated weight vectors.
In fact, these attractive suboptimal fixed points exist even for non-degenerate
teacher biases, but the range of initial conditions attracted to these suboptimal
network configurations decreases in size. Furthermore, this domain is shifted to
very large initial student biases as the difference in the values of the teacher biases
is increased. We have found these effects also for larger network sizes, where the
dynamics and number of attractive suboptimal fixed points with different internal
symmetries increases. Although attractive suboptimal fixed points were also found
in the original model (Biehl et al., 1996), the basins of attraction of initial values
are in general very small and are therefore only of academic interest.
However, our numerical work suggests that a simple rule of thumb to avoid being
attracted to suboptimal fixed points is to always initialize the squared norm of a
weight vector larger than the magnitude of the corresponding bias. This scheme
will still support spreading of the biases across the main input domain in order to
encourage node symmetry breaking. This is somewhat similar to previous findings
(Nguyen & Widrow, 1990; Kim & Ra, 1991), the former suggesting spreading the
biases across the input domain, the latter relating the minimal initial size of each
weight with the learning rate. This work provides a more theoretical motivation for
these results and also distinguishes between the different roles of biases and weights.
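As an illustration of this rule of thumb, a hypothetical initializer might look as follows; the even bias spread and the 1.1 safety factor are our own choices for the sketch, not prescriptions from the analysis:

    import numpy as np

    def init_student(K, N, input_range=(-1.0, 1.0), rng=None):
        # Spread biases evenly across the input domain to break node symmetry,
        # then rescale each weight vector so its squared norm (per input, O(1)
        # scale) exceeds the magnitude of the corresponding bias.
        rng = rng or np.random.default_rng()
        theta = np.linspace(input_range[0], input_range[1], K)
        W = rng.standard_normal((K, N))
        for i in range(K):
            norm_sq = W[i] @ W[i] / N
            target = 1.1 * max(abs(theta[i]), 1e-3)   # safety margin (assumed)
            if norm_sq < target:
                W[i] *= np.sqrt(target / norm_sq)
        return W, theta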
In this paper we have addressed mainly one important issue for theoreticians and
practitioners alike: the initialization of the student network weights and biases.
Other important issues, notably the question of optimal and maximal learning rates
for different network sizes during convergence, will be reported elsewhere.
A
THEOREM
Let $S_g$ denote the class of neural networks defined by sums of the form $\sum_{i=1}^{K} n_i\, g(u_i - \theta_i)$,
where $K$ is arbitrary (representing an arbitrary number of hidden units), $\theta_i \in \mathbb{R}$ and $n_i \in \mathbb{Z}$
(i.e., integer weights). Let $\psi(x) \equiv \partial g(x)/\partial x$ and let $\Phi_\psi$ denote the class of networks defined
by sums of the form $\sum_{i=1}^{K} w_i\, \psi(u_i - \theta_i)$, where $w_i \in \mathbb{R}$. If $g$ is continuously differentiable and
if the class $\Phi_\psi$ are universal approximators, then $S_g$ is a class of universal approximators;
that is, such functions are dense in the space of continuous functions with the $L_\infty$ norm.
As a corollary, the normalized soft committee machine forms a class of universal approximators with both sigmoid and error transfer functions (since radial basis function networks
are universal (Park & Sandberg, 1993) and we need consider only the one-dimensional input case, as noted in the proof below). Note that some restriction on $g$ is necessary: if $g$ is
the step function, then with arbitrary hidden-output weights, the network is a universal
approximator, while with fixed hidden-output weights it is not.
A.1
Proof
By the arguments of (Hornik et al., 1990), which use the properties of trigonometric polynomials, it is sufficient to consider the case of one-dimensional input and output spaces.
Let $I$ denote a compact interval in $\mathbb{R}$ and let $f$ be a continuous function defined on $I$.
Because $\Phi_\psi$ is universal, given any $\epsilon > 0$ we can find weights $w_i$ and biases $\theta_i$ such that

$$ \left\| f - \sum_{i=1}^{K} w_i\, \psi(u - \theta_i) \right\|_\infty < \frac{\epsilon}{2}. \qquad \mathrm{(i)} $$

Because the rationals are dense in the reals, without loss of generality we can assume
that the weights $w_i \in \mathbb{Q}$. Since $\psi(x)$ is continuous and $I$ is compact, the convergence of
$[g(x + h) - g(x)]/h$ to $\partial g(x)/\partial x$ is uniform, and hence for all $n > n\!\left(\frac{\epsilon}{2 K w_i}\right)$ the following
inequality holds:

$$ \left| w_i\, \psi(u - \theta_i) - n w_i \left[ g\!\left(u + \tfrac{1}{n} - \theta_i\right) - g(u - \theta_i) \right] \right| < \frac{\epsilon}{2K}. \qquad \mathrm{(ii)} $$

Also note that for suitable $n_i > n\!\left(\frac{\epsilon}{2 K w_i}\right)$, $m_i = n_i w_i \in \mathbb{Z}$, as $w_i$ is a rational number.
Thus, by the triangle inequality,

$$ \left\| \sum_{i=1}^{K} m_i \left[ g\!\left(u + \tfrac{1}{n_i} - \theta_i\right) - g(u - \theta_i) \right] - \sum_{i=1}^{K} w_i\, \psi(u - \theta_i) \right\|_\infty < \frac{\epsilon}{2}. \qquad \mathrm{(iii)} $$

The result now follows from equations (i) and (iii) and the triangle inequality.
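The construction used in the proof can be checked numerically: integer-weighted differences of shifted transfer functions approximate the $\psi$-network term by term. The sketch below (our own illustration, with $g(x) = \mathrm{erf}(x/\sqrt{2})$ and arbitrarily chosen $w$, $\theta$ and interval) verifies step (ii) for a single unit; the error shrinks as $n$ grows, provided $n w$ is an integer:

    import numpy as np
    from scipy.special import erf

    g = lambda x: erf(x / np.sqrt(2.0))
    psi = lambda x: np.sqrt(2.0 / np.pi) * np.exp(-0.5 * x * x)  # psi = dg/dx

    u = np.linspace(-3.0, 3.0, 601)    # compact interval I
    w, theta = 0.75, 0.4               # one term w * psi(u - theta), w rational

    for n in (4, 40, 400):             # multiples of 4, so m = n * w is an integer
        m = round(n * w)               # integer weight m_i = n_i * w_i
        approx = m * (g(u + 1.0 / n - theta) - g(u - theta))
        print(n, np.max(np.abs(w * psi(u - theta) - approx)))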
References
Barber, D., Saad, D., & Sollich, P. 1996. Europhys. Lett., 34, 151-156.
Biehl, M., & Schwarze, H. 1995. J. Phys. A, 28, 643-656.
Biehl, M., Riegler, P., & Wöhler, C. 1996. University of Würzburg Preprint WUE-ITP-96-003.
Hornik, K., Stinchcombe, M., & White, H. 1990. Neural Networks, 3, 551-560.
Kim, Y. K., & Ra, J. B. 1991. Pages 2396-2401 of: International Joint Conference on
Neural Networks 91.
Nguyen, D., & Widrow, B. 1990. Pages C21-C26 of: IJCNN International Conference on
Neural Networks 90.
Park, J., & Sandberg, 1. W. 1993. Neural Computation, 5, 305-316.
Riegler, P., & Biehl, M. 1995. J. Phys. A, 28, L507-L513.
Saad, D., & Solla, S. A. 1995. Phys. Rev. E, 52, 4225-4243.
Removing Noise in On-Line Search using
Adaptive Batch Sizes
Genevieve B. Orr
Department of Computer Science
Willamette University
900 State Street
Salem, Oregon 97301
gorr@willamette.edu
Abstract
Stochastic (on-line) learning can be faster than batch learning.
However, at late times, the learning rate must be annealed to remove the noise present in the stochastic weight updates. In this
annealing phase, the convergence rate (in mean square) is at best
proportional to $1/T$ where $T$ is the number of input presentations.
An alternative is to increase the batch size to remove the noise. In
this paper we explore convergence for LMS using 1) small but fixed
batch sizes and 2) an adaptive batch size. We show that the best
adaptive batch schedule is exponential and has a rate of convergence which is the same as for annealing, i.e., at best proportional
to $1/T$.
1
Introduction
Stochastic (on-line) learning can speed learning over batch training, particularly
when data sets are large and contain redundant information [Møl93]. However, at
late times in learning, noise present in the weight updates prevents complete convergence from taking place. To reduce the noise, the learning rate is slowly decreased
(annealed) at late times. The optimal annealing schedule is asymptotically proportional to $\frac{1}{t}$ where $t$ is the iteration [Gol87, LO93, Orr95]. This results in a rate of
convergence (in mean square) that is also proportional to $\frac{1}{t}$.
An alternative method of reducing the noise is to simply switch to (noiseless) batch
mode when the noise regime is reached. However, since batch mode can be slow,
a better idea is to slowly increase the batch size starting with 1 (pure stochastic)
and slowly increasing it only "as needed" until it reaches the training set size (pure
batch). In this paper we 1) investigate the convergence behavior of LMS when
using small fixed batch sizes, 2) determine the best schedule when using an adaptive
batch size at each iteration, 3) analyze the convergence behavior of the adaptive
batch algorithm, and 4) compare this convergence rate to the alternative method
of annealing the learning rate.
Other authors have approached the problem of redundant data by also proposing
techniques for training on subsets of the data. For example, Plutowski [PW93] uses
active data selection to choose a concise subset for training. This subset is slowly
added to over time as needed. Møller [Møl93] proposes combining scaled conjugate
gradient descent (SCG) with what he refers to as blocksize updating. His algorithm
uses an iterative approach and assumes that the block size does not vary rapidly
during training. In this paper, we take the simpler approach of just choosing exemplars at random at each iteration. Given this, we then analyze in detail the
convergence behavior. Our results are more of theoretical than practical interest
since the equations we derive are complex functions of quantities such as the Hessian
that are impractical to compute.
2
Fixed Batch Size
In this section we examine the convergence behavior for LMS using a fixed batch
size. We assume that we are given a large but finite training set $\mathcal{T} \equiv \{z_i = (x_i, d_i)\}_{i=1}^{N}$,
where $x_i \in \mathbb{R}^m$ is the $i$th input and $d_i \in \mathbb{R}$ is the corresponding target.
We further assume that the targets are generated using a signal plus noise model
so that we can write

$$ d_i = w_*^T x_i + \epsilon_i \qquad (1) $$

where $w_* \in \mathbb{R}^m$ is the optimal weight vector and $\epsilon_i$ is zero mean noise. Since
the training set is assumed to be large, we take the averages of $\epsilon_i$ and $x_i \epsilon_i$ over the
training set to be approximately zero. Note that we consider only the problem of
optimization of $w$ over the training set and do not address the issue of obtaining
good generalization over the distribution from which the training set was drawn.
At each iteration, we assume that exactly $n$ samples are randomly drawn without
replacement from $\mathcal{T}$, where $1 \leq n \leq N$. We denote this batch of size $n$ drawn at
time $t$ by $B_n(t) \equiv \{z_k\}_{k=1}^{n}$. When $n = 1$ we have pure on-line training and when
$n = N$ we have pure batch. We choose to sample without replacement so that as
the batch size is increased, we have a smooth transition from on-line to batch.
For LMS, the squared error at iteration $t$ for a batch of size $n$ is

$$ \epsilon_{B_n(t)} = \frac{1}{2n} \sum_{z_i \in B_n(t)} \left( d_i - w_t^T x_i \right)^2 \qquad (2) $$

where $w_t \in \mathbb{R}^m$ is the current weight in the network. The weight update
equation is then $w_{t+1} = w_t - \mu\, \partial \epsilon_{B_n} / \partial w_t$, where $\mu$ is the fixed learning rate. Rewriting
this in terms of the weight error $v \equiv w - w_*$ and defining $g_{B_n,t} \equiv \partial \epsilon_{B_n(t)} / \partial v_t$, we
obtain

$$ v_{t+1} = v_t - \mu\, g_{B_n,t}. \qquad (3) $$
Convergence (in mean square) to $w_*$ can be characterized by the rate of change of
the average squared norm of the weight error $E[v^2]$, where $v^2 \equiv v^T v$. From (3) we
obtain an expression for $v_{t+1}^2$ in terms of $v_t$:

$$ v_{t+1}^2 = v_t^2 - 2\mu\, v_t^T g_{B_n,t} + \mu^2 g_{B_n,t}^2. \qquad (4) $$
To compute the expected value of $v_{t+1}^2$ conditioned on $v_t$, we can average the right
side of (4) over all possible ways that the batch $B_n(t)$ can be chosen from the $N$
training examples. In appendix A, we show that

$$ \langle g_{B_n,t} \rangle_B = \langle g_{i,t} \rangle_N \qquad (5) $$

$$ \langle g_{B_n,t}^2 \rangle_B = \frac{N - n}{n(N-1)} \langle g_{i,t}^2 \rangle_N + \frac{(n-1)N}{(N-1)n} \langle g_{i,t} \rangle_N^2 \qquad (6) $$

where $\langle \cdot \rangle_N$ denotes an average over all examples in the training set, $\langle \cdot \rangle_B$ denotes an average
over the possible batches drawn at time $t$, and $g_{i,t} \equiv \partial \epsilon(z_i) / \partial v_t$. The averages over
the entire training set are
$$ \langle g_{i,t} \rangle_N = \frac{1}{N} \sum_{i=1}^{N} \frac{\partial \epsilon(z_i)}{\partial v_t} = -\frac{1}{N} \sum_{i=1}^{N} \left( \epsilon_i x_i - (v_t^T x_i)\, x_i \right) = R\, v_t \qquad (7) $$

$$ \langle g_{i,t}^2 \rangle_N = \frac{1}{N} \sum_{i=1}^{N} \left( \epsilon_i x_i - (v_t^T x_i)\, x_i \right)^T \left( \epsilon_i x_i - (v_t^T x_i)\, x_i \right) = \sigma_\epsilon^2\, \mathrm{Tr}(R) + v_t^T S\, v_t \qquad (8) $$

where $R \equiv \langle x x^T \rangle_N$, $S \equiv \langle x x^T x x^T \rangle_N$ ¹, $\sigma_\epsilon^2 \equiv \langle \epsilon^2 \rangle$, and $\mathrm{Tr}(R)$ is the trace of $R$.
These equations together with (5) and (6) in (4) give the expected value of $v_{t+1}^2$
conditioned on $v_t$:

$$ \langle v_{t+1}^2 \mid v_t \rangle = v_t^T \left\{ I - 2\mu R + \mu^2 \left( \frac{N(n-1)}{(N-1)n} R^2 + \frac{N-n}{(N-1)n} S \right) \right\} v_t + \frac{\mu^2 \sigma_\epsilon^2\, \mathrm{Tr}(R)\, (N-n)}{n(N-1)}. \qquad (9) $$
Note that this reduces to the standard stochastic and batch update equations when
n = 1 and n = N, respectively.
2.0.1
Special Cases: 1-D solution and Spherically Symmetric
In 1 dimension we can average over $v_t$ in (9) to give

$$ \langle v_{t+1}^2 \rangle = \alpha \langle v_t^2 \rangle + \beta \qquad (10) $$

where

$$ \alpha = 1 - 2\mu R + \mu^2 \left( \frac{N(n-1)}{(N-1)n} R^2 + \frac{N-n}{(N-1)n} S \right), \qquad \beta = \frac{\mu^2 \sigma_\epsilon^2 R (N-n)}{n(N-1)}, \qquad (11) $$

and where $R$ and $S$ simplify to $R = \langle x^2 \rangle_N$, $S = \langle x^4 \rangle_N$. This is a difference equation
which can be solved exactly to give

$$ \langle v_t^2 \rangle = \alpha^{t - t_0} \langle v_{t_0}^2 \rangle + \frac{1 - \alpha^{t - t_0}}{1 - \alpha}\, \beta \qquad (12) $$
Figure la compares equation (12) with simulations of 1-D LMS with gaussian inputs
for N = 1000 and batch sizes n = 10, 100, and 500. As can be seen, the agreement is
good. Note that (v 2 ) decreases exponentially until flattening out. The equilibrium
value can be computed from (12) by setting $t = \infty$ (assuming $|\alpha| < 1$) to give

$$ \langle v^2 \rangle_\infty = \frac{\beta}{1 - \alpha} = \frac{\mu \sigma_\epsilon^2 R (N-n)}{2 R n (N-1) - \mu \left( N(n-1) R^2 + (N-n) S \right)}. \qquad (13) $$

Note that $\langle v^2 \rangle_\infty$ decreases as $n$ increases and is zero only if $n = N$.
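The recursion (10)-(13) can be checked directly. The following sketch (our own illustration; the parameter values match those quoted for Figure 1, but the number of runs and steps are arbitrary) simulates 1-D LMS with a fixed batch drawn without replacement and compares the averaged squared weight error with the closed form (12):

    import numpy as np

    rng = np.random.default_rng(0)
    N, n, mu, sigma_eps, v0 = 1000, 10, 0.1, 1.0, 1.0
    x = rng.standard_normal(N)                  # inputs, R = <x^2> close to 1
    eps = sigma_eps * rng.standard_normal(N)    # zero mean noise
    R, S = np.mean(x**2), np.mean(x**4)

    alpha = 1 - 2*mu*R + mu**2 * (N*(n-1)/((N-1)*n) * R**2 + (N-n)/((N-1)*n) * S)
    beta = mu**2 * sigma_eps**2 * R * (N-n) / (n*(N-1))

    steps, runs = 50, 2000
    v = np.full(runs, v0)
    for t in range(steps):
        for r in range(runs):
            idx = rng.choice(N, size=n, replace=False)        # batch without replacement
            gB = np.mean(-(eps[idx] - v[r]*x[idx]) * x[idx])  # g_{B_n,t}
            v[r] -= mu * gB                                   # update (3)

    theory = alpha**steps * v0**2 + (1 - alpha**steps) / (1 - alpha) * beta
    print(np.mean(v**2), theory)    # should agree within sampling noise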
¹For real zero mean gaussian inputs, we can apply the gaussian moment factoring
theorem [Hay91], which states that $\langle x_i x_j x_k x_l \rangle_N = R_{ij} R_{kl} + R_{ik} R_{jl} + R_{il} R_{jk}$, where the
subscripts on $x$ denote components of $x$. From this, we find that $S = (\mathrm{Tr}\, R)\, R + 2 R^2$.
Figure 1: Simulations (solid) vs. theoretical (dashed) predictions of the squared weight
error of 1-D LMS a) as a function of $t$, the number of batch updates (iterations), and
b) as a function of the number of input presentations, $T$. Training set size is $N = 1000$ and
batch sizes are $n = 10$, 100, and 500. Inputs were gaussian with $R = 1$, $\sigma_\epsilon = 1$ and $\mu = .1$.
Simulations used 10000 networks.
Equation (9) can also be solved exactly in multiple dimensions in the rather restrictive case where we assume that the inputs are spherically symmetric gaussians with
$R = aI$, where $a$ is a constant, $I$ is the identity matrix, and $m$ is the dimension.
The update equation and solution are the same as (10) and (12), respectively, but
where $\alpha$ and $\beta$ are now

$$ \alpha = 1 - 2\mu a + \mu^2 a^2 \left( \frac{N(n-1)}{(N-1)n} + \frac{N-n}{(N-1)n}(m+2) \right), \qquad \beta = \frac{\mu^2 \sigma_\epsilon^2\, m\, a\, (N-n)}{n(N-1)}. \qquad (14) $$

3
Adaptive Batch Size
The time it takes to compute the weight update in one iteration is roughly proportional to the number of input presentations, i.e., the batch size. To make the comparison of convergence rates for different batch sizes meaningful, we must compute the
change in squared weight error as a function of the number of input presentations,
$T$, rather than iteration number $t$.
For fixed batch size, $T = nt$. Figure 1b displays our 1-D LMS simulations plotted as
a function of $T$. As can be seen, training with a large batch size is slow but results
in a lower equilibrium value than obtained with a small batch size. This suggests
that we could obtain the fastest decrease of $\langle v^2 \rangle$ overall by varying the batch size at
each iteration. The batch size to choose for the current $\langle v^2 \rangle$ would be the smallest
$n$ that has yet to reach equilibrium, i.e., for which $\langle v^2 \rangle > \langle v^2 \rangle_\infty$.
To determine the best batch size, we take the greedy approach by demanding that
at each iteration the batch size is chosen so as to reduce the weight error at the
next iteration by the greatest amount per input presentation. This is equivalent to
asking what value of $n$ maximizes $h \equiv (\langle v_t^2 \rangle - \langle v_{t+1}^2 \rangle)/n$. Once we determine $n$ we
then express it as a function of $T$.

We treat the 1-D case, although the analysis would be similar for the spherically
symmetric case. From (10) we have $h = \frac{1}{n}\left( (1 - \alpha) \langle v_t^2 \rangle - \beta \right)$. Differentiating $h$ with
respect to $n$ and solving yields the batch size that decreases the weight error the
most:

$$ n_t = \min\left( N,\; \frac{2 \mu N \left( (S - R^2) \langle v_t^2 \rangle + \sigma_\epsilon^2 R \right)}{\left( 2R(N-1) + \mu (S - N R^2) \right) \langle v_t^2 \rangle + \mu \sigma_\epsilon^2 R} \right). \qquad (15) $$
We have $n_t$ exactly equal to $N$ when the current value of $\langle v_t^2 \rangle$ satisfies

$$ \langle v_t^2 \rangle < \kappa \equiv \frac{\mu \sigma_\epsilon^2 R}{2R(N-1) - \mu \left( R^2 (N-2) - S \right)} \qquad (n_t = N). \qquad (16) $$
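For concreteness, a direct transcription of (15) and (16) into Python (a sketch under the 1-D assumptions above; in practice $\langle v_t^2 \rangle$ is not observable, which is one reason the result is mainly of theoretical interest):

    def best_batch_size(v2, N, mu, R, S, sigma_eps2):
        # Greedy batch size n_t of eq. (15); returns N once v2 falls below
        # the threshold kappa of eq. (16).
        kappa = mu * sigma_eps2 * R / (2*R*(N-1) - mu*(R**2*(N-2) - S))
        if v2 < kappa:
            return N
        num = 2 * mu * N * ((S - R**2) * v2 + sigma_eps2 * R)
        den = (2*R*(N-1) + mu*(S - N*R**2)) * v2 + mu * sigma_eps2 * R
        return max(1, min(N, round(num / den)))   # integer batch, at least 1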
E[ v"2J
n
Adaptive Batch
1
theory
- - - Simulation
0 .1
Batch Size
100.
50 .
0.01
10.
5
0.001
(a) 0 10 20 30 40 50
(b) 1~~?=~~~~~~~~~
0
10
20
30
40
50
60
70
Figure 2: 1-D LMS: a) Comparison of the simulated and theoretically predicted (equation
(18)) squared weight error as a function of $t$ with $N = 1000$, $R = 1$, $\sigma_\epsilon = 1$, $\mu = .1$, and
10000 networks. b) Corresponding batch sizes used in the simulations.
Thus, after $\langle v_t^2 \rangle$ has decreased to $\kappa$, training will proceed as pure batch. When
$\langle v_t^2 \rangle > \kappa$, we have $n_t < N$ and we can put (15) into (10) to obtain

$$ \langle v_{t+1}^2 \rangle = \left( 1 - \mu R + \frac{\mu^2 (N R^2 - S)}{2(N-1)} \right) \langle v_t^2 \rangle - \frac{\mu^2 \sigma_\epsilon^2 R}{2(N-1)}. \qquad (17) $$

Solving (17) we get

$$ \langle v_t^2 \rangle = \alpha_1^{t - t_0} \langle v_{t_0}^2 \rangle + \frac{1 - \alpha_1^{t - t_0}}{1 - \alpha_1}\, \beta_1 \qquad (18) $$

where $\alpha_1$ and $\beta_1$ are the constants

$$ \alpha_1 = 1 - \mu R + \frac{\mu^2 (N R^2 - S)}{2(N-1)}, \qquad \beta_1 = -\frac{\mu^2 \sigma_\epsilon^2 R}{2(N-1)}. \qquad (19) $$
Figure 2a compares equation (18) with 1-D LMS simulations. The adaptive batch
size was chosen by rounding (15) to the nearest integer. Early in training, the
predicted $n_t$ is always smaller than 1, but the simulation always rounds up to 1 (we
can't have $n = 0$). Figure 2b displays the batch sizes that were used in the simulations. A
logarithmic scale is used to show that the batch size increases exponentially in $t$.
We next examine the batch size as a function of $T$.
3.1
Convergence Rate per Input Presentation
When we use (15) to choose the batch size, the number of input presentations will
vary at each iteration. Thus, $T$ is not simply a multiple of $t$. Instead, we have

$$ T(t) = T_0 + \sum_{i=t_0}^{t} n_i \qquad (20) $$

where $T_0$ is the number of inputs that have been presented by $t_0$. This can be
evaluated when $N$ is very large. In this case, equations (18) and (15) reduce to

$$ \langle v_t^2 \rangle = \langle v_{t_0}^2 \rangle\, \alpha_3^{t - t_0} \quad\text{where}\quad \alpha_3 \equiv 1 - \mu R + \frac{1}{2} \mu^2 R^2, \qquad (21) $$

$$ n_t = \frac{2\mu \left( (S - R^2) \langle v_t^2 \rangle + \sigma_\epsilon^2 R \right)}{(2R - \mu R^2) \langle v_t^2 \rangle} = \frac{2\mu (S - R^2)}{2R - \mu R^2} + \frac{2 \mu \sigma_\epsilon^2}{(2 - \mu R) \langle v_{t_0}^2 \rangle}\, \alpha_3^{-(t - t_0)}. \qquad (22) $$
Putting (22) into (20) and summing gives

$$ T(t) = \frac{2\mu (S - R^2)}{(2 - \mu R) R}\, \Delta t + \frac{2 \mu \sigma_\epsilon^2}{(2 - \mu R) \langle v_{t_0}^2 \rangle}\, \frac{\alpha_3^{-\Delta t} - \alpha_3}{1 - \alpha_3} \qquad (23) $$
Figure 3: 1-D LMS: a) Simulations of the squared weight error as a function of $T$, the
number of input presentations, for $N = 1000$, $R = 1$, $\sigma_\epsilon = 1$, $\mu = .1$, and 10000 networks.
Batch sizes are $n = 1$, 10, 100, and $n_t$ (see (15)). b) Comparison of simulation (dashed)
and theory (see (24)) using adaptive batch size. Simulation (long dash) using an annealed
learning rate with $n = 1$ and $\mu = R^{-1}$ is also shown.
where $\Delta t \equiv t - t_0$ and $\Delta T \equiv T - T_0$. Assuming that $|\alpha_3| < 1$, the term with $\alpha_3^{-\Delta t}$
will dominate at late times. Dropping the other terms and solving for $\alpha_3^{\Delta t}$ gives

$$ \langle v_t^2 \rangle = \langle v_{t_0}^2 \rangle\, \alpha_3^{\Delta t} \approx \frac{4 \sigma_\epsilon^2}{(2 - \mu R)^2 R\, (T - T_0)}. \qquad (24) $$
Thus, when using an adaptive batch size, $\langle v^2 \rangle$ converges at late times as $\frac{1}{T}$. Figure
3a compares simulations of $\langle v^2 \rangle$ with adaptive and constant batch sizes. As can
be seen, the adaptive-$n$ curve follows the $n = 1$ curve until just before the $n = 1$
curve starts to flatten out. Figure 3b compares (24) with the simulation. Curves
are plotted on a log-log plot to illustrate the $1/T$ relationship at late times (a straight
line with slope of $-1$).
4
Learning Rate Annealing vs Increasing Batch Size
With online learning ($n = 1$), we can reduce the noise at late times by annealing
the learning rate using a $\mu/t$ schedule. During this phase, $\langle v^2 \rangle$ decreases at a rate of
$1/T$ if $\mu > R^{-1}/2$ [LO93], and slower otherwise. In this paper, we have presented an
alternative method for reducing the noise by increasing the batch size exponentially
in $t$. Here, $\langle v^2 \rangle$ also decreases at a rate of $1/T$, so that, from this perspective, an
adaptive batch size is equivalent to annealing the learning rate. This is confirmed
in Figure 3b, which compares using an adaptive batch size with annealing.
An advantage of the adaptive batch size comes when $n$ reaches $N$. At this point $n$
remains constant so that $\langle v^2 \rangle$ decreases exponentially in $T$. However, with annealing,
the convergence rate of $\langle v^2 \rangle$ always remains proportional to $1/T$. A disadvantage,
though, occurs in multiple dimensions with nonspherical $R$, where the best choice of
$n_t$ would likely be different along different directions in weight space. Though
it is possible to have a different learning rate along different directions, it is not
possible to have different batch sizes.
5
Appendix A
In this appendix we use simple counting arguments to derive the two results in
equations (5) and (6). We first note that there are $M \equiv \binom{N}{n}$ ways of choosing $n$
examples out of a total of $N$ examples. Thus, (5) can be rewritten as

$$ \langle g_{B_n,t} \rangle_B = \frac{1}{M} \sum_{i=1}^{M} g_{B_n^{(i)},t} = \frac{1}{M} \sum_{i=1}^{M} \frac{1}{n} \sum_{z_j \in B_n^{(i)}} g_{j,t} \qquad (25) $$
where $B_n^{(i)}$ is the $i$th batch ($i = 1, \ldots, M$), and $g_{j,t} \equiv \partial \epsilon(z_j)/\partial v_t$ for $j = 1, \ldots, N$.
If we were to expand (25) we would find that there are exactly $nM$ terms. From
symmetry, and since there are only $N$ unique $g_{j,t}$, we conclude that each $g_{j,t}$ occurs
exactly $\frac{nM}{N}$ times. The above expression can then be written as

$$ \langle g_{B_n,t} \rangle_B = \frac{1}{Mn} \cdot \frac{nM}{N} \sum_{j=1}^{N} g_{j,t} = \langle g_{i,t} \rangle_N. \qquad (26) $$

Thus, we have equation (5). The second equation (6) is

$$ \langle g_{B_n,t}^2 \rangle_B = \frac{1}{M} \sum_{i=1}^{M} \Big( \frac{1}{n} \sum_{z_j \in B_n^{(i)}} g_{j,t} \Big)^{\!2} = \frac{1}{n^2 M} \sum_{i=1}^{M} \Big( \sum_{z_j \in B_n^{(i)}} g_{j,t}^2 + \sum_{\substack{z_j, z_k \in B_n^{(i)} \\ j \neq k}} g_{j,t}\, g_{k,t} \Big). \qquad (27) $$

By the same argument used to derive (5), the first term on the right is $\frac{1}{n} \langle g_{i,t}^2 \rangle_N$. In the
second term, there are a total of $n(n-1)M$ terms in the sum, of which only $N(N-1)$
are unique. Thus, a given $g_{j,t}\, g_{k,t}$ occurs exactly $n(n-1)M/(N(N-1))$ times, so
that

$$ \frac{1}{n^2 M} \sum_{i=1}^{M} \sum_{\substack{z_j, z_k \in B_n^{(i)} \\ j \neq k}} g_{j,t}\, g_{k,t} = \frac{1}{n^2 M} \cdot \frac{n(n-1)M}{N(N-1)} \sum_{\substack{j,k=1 \\ j \neq k}}^{N} g_{j,t}\, g_{k,t} = \frac{N(n-1)}{n(N-1)} \left( \frac{1}{N^2} \Big( \sum_{j=1}^{N} g_{j,t} \Big)^{\!2} - \frac{1}{N} \Big( \frac{1}{N} \sum_{j=1}^{N} g_{j,t}^2 \Big) \right) = \frac{N(n-1)}{n(N-1)} \langle g_{i,t} \rangle_N^2 - \frac{(n-1)}{n(N-1)} \langle g_{i,t}^2 \rangle_N. \qquad (28) $$
Putting the simplified first term and (28) into (27), we obtain our
second result, equation (6).
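The counting argument can be verified by brute force for small $N$ and $n$ by enumerating every batch (a sketch; the random values standing in for the $g_{j,t}$ are arbitrary):

    import numpy as np
    from itertools import combinations

    rng = np.random.default_rng(0)
    N, n = 7, 3
    g = rng.standard_normal(N)    # stand-ins for the g_{j,t}

    means = [np.mean(g[list(b)]) for b in combinations(range(N), n)]
    lhs1, lhs2 = np.mean(means), np.mean(np.square(means))

    rhs1 = np.mean(g)                                                           # eq. (5)
    rhs2 = (N-n)/(n*(N-1)) * np.mean(g**2) + (n-1)*N/((N-1)*n) * np.mean(g)**2  # eq. (6)
    print(np.isclose(lhs1, rhs1), np.isclose(lhs2, rhs2))    # both True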
References
[Gol87] Larry Goldstein. Mean square optimality in the continuous time Robbins-Monro
procedure. Technical Report DRB-306, Dept. of Mathematics, University of
Southern California, LA, 1987.
[Hay91] Simon Haykin. Adaptive Filter Theory. Prentice Hall, New Jersey, 1991.
[LO93] Todd K. Leen and Genevieve B. Orr. Momentum and optimal stochastic search.
In Advances in Neural Information Processing Systems, vol. 6, 1993. To appear.
[Møl93] Martin Møller. Supervised learning on large redundant training sets. International
Journal of Neural Systems, 4(1):15-25, 1993.
[Orr95] Genevieve B. Orr. Dynamics and Algorithms for Stochastic Learning. PhD thesis,
Oregon Graduate Institute, 1995.
[PW93] Mark Plutowski and Halbert White. Selecting concise training sets from clean
data. IEEE Transactions on Neural Networks, 4:305-318, 1993.
A Constructive RBF Network
for Writer Adaptation
John C. Platt and Nada P. Matic
Synaptics, Inc.
2698 Orchard Parkway
San Jose, CA 95134
platt@synaptics.com, nada@synaptics.com
Abstract
This paper discusses a fairly general adaptation algorithm which
augments a standard neural network to increase its recognition accuracy for a specific user . The basis for the algorithm is that the
output of a neural network is characteristic of the input, even when
the output is incorrect. We exploit this characteristic output by
using an Output Adaptation Module (OAM) which maps this output into the correct user-dependent confidence vector . The OAM
is a simplified Resource Allocating Network which constructs radial basis functions on-line. We applied the OAM to construct
a writer-adaptive character recognition system for on-line handprinted characters. The OAM decreases the word error rate on a
test set by an average of 45%, while creating only 3 to 25 basis
functions for each writer in the test set.
1
Introduction
One of the major difficulties in creating any statistical pattern recognition system
is that the statistics of the training set is often different from the statistics in
actual use. The creation of a statistical pattern recognizer is often considered as
a regression problem, where class probabilities are estimated from a fixed training
set. Statistical pattern recognizers tend to work well for typical data that is similar
to the training set data, but do not work well for atypical data that is not well
represented in the training set. Poor performance on atypical data is a problem for
human interfaces, because people tend to provide drastically non-typical data (for
example, see figure 1).
The solution to this difficulty is to create an adaptive recognizer, instead of treating recognition as a static regression problem. The recognizer must adapt to new
statistics during use. As applied to on-line handwriting recognition, an adaptive
Figure 1: When given atypical input data, the neural network produces a consistent
incorrect output pattern. The OAM recognizes the consistent pattern and produces
a corrected user-adaptive output.
recognizer improves the accuracy for a particular user by adapting the recognizer
to that user .
This paper proposes a novel method for creating an adaptive recognizer, which
we call the Output Adaptation Module or OAM. The OAM was inspired by the
development of a writer-independent neural network handwriting recognizer. We
noticed that the output of this neural network was characteristic of the input: if
a specific style of character was shown to the network , the network's output was
almost always consistent for that specific style, even when the output was incorrect.
To exploit the consistency of the incorrect outputs, we decided to add an OAM
on top of the network. The OAM learns to recognize these consistent incorrect
output vectors, and produces a more correct output vector (see figure 1). The
units of the OAM are radial basis functions (RBF) [5]. Adaptation of these RBF
units is performed using a simplified version of the Resource Allocating Network
(RAN) algorithm of Platt [4][2]. The number of units that RAN allocates scales
sub-linearly with the number of presented learning examples, in contrast to other
algorithms which allocate a new unit for every learned example.
The OAM has the following properties, which are useful for a user-adaptive recognIzer:
? The adaptation is very fast: the user need only provide a few additional
examples of his own data.
? There is very little recognition speed degradation .
? A modest amount of additional memory per user is required.
? The OAM is not limited to neural network recognizers.
? The output of the OAM is a corrected vector of confidences, which is more
useful for contextual post-processing than a single label.
1.1
Relationship to Previous Work
The OAM is related to previous work in user adaptation of neural recognizers for
both speech and handwriting.
A previous example of user adaptation of a neural handwriting recognizer employed
a Time Delay Neural Network (TDNN), where the last layer of a TDNN was replaced
with a tunable classifier that is more appropriate for adaptation [1][3]. In Guyon,
et al. [1], the last layer of a TDNN was replaced by a k-nearest neighbor classifier.
This work was further extended in Matic, et al. [3], where the last layer of the
TDNN was replaced with an optimal hyperplane classifier which is retrained for
adaptation purposes. The optimal hyperplane classifier retained the same accuracy
as the k-nearest neighbor classifier, while reducing the amount of computation and
memory required for adaptation.
The present work improves upon these previous user-adaptive handwriting systems
in three ways. First, the OAM does not require the retraining and storage of the
entire last layer of the network. The OAM thus further reduces both CPU and
memory requirements. Second, the OAM produces an output vector of confidences,
instead of simply an output label. This vector of confidences can be used effectively
by a contextual post-processing step, while a label cannot. Third, our adaptation
experiments are performed on a neural network which recognizes a full character
set. These previous papers only experimented with neural networks that recognized
character subsets, which is a less difficult adaptation problem.
The OAM is related to stacking [6]. In stacking, outputs of multiple recognizers
are combined via training on partitions of the training set. With the OAM, the
multiple outputs of a recognizer are combined using memory-based learning. The
OAM is trained on the idiosyncratic statistics of actual use, not on a pre-defined
training set partition.
2
The Output Adaptation Module (OAM)
Section 2 of this paper describes the OAM in detail, while section 3 describes its
application to create a user-adaptive handwriting recognizer.
The OAM maps the output of a neural network $V_i$ into a user-adapted output $O_i$,
by adding an adaptation vector $A_i$:

$$ O_i = V_i + A_i. \qquad (1) $$

Depending on the neural network training algorithm used, both the output of the
neural network $V_i$ and the user-adapted output $O_i$ can estimate a posteriori class
probabilities, suitable for further post-processing.
The goal of the OAM is to bring the output $O_i$ closer to an ideal response $T_i$. In
our experiments, the target $T_i$ is 0.9 for the neuron corresponding to the correct
character and 0.1 for all other neurons.
The adaptation vector $A_i$ is computed by a radial basis function network that takes
$V$ as an input:

$$ O_i = V_i + A_i = V_i + \sum_j C_{ij}\, \Phi_j(V), \qquad (2) $$

$$ \Phi_j(V) = f\!\left( \frac{d(V, M_j)}{R_j} \right), \qquad (3) $$
where $M_j$ is the center of the $j$th radial basis function, $d$ is a distance metric between
$V$ and $M_j$, $R_j$ is a parameter that controls the width of the $j$th basis function, $f$ is
Figure 2: The architecture of the OAM.
a decreasing function that controls the shape of the basis functions, and $C_{ij}$ is the
amount of correction that the $j$th basis function adds to the output. We call $M_j$
the memories in the adaptation module, and we call $C_j$ the correction vectors (see
figure 2).
The function $f$ is a decreasing polynomial function:

$$ f(x) = \begin{cases} (1 - x^2)^2, & \text{if } x < 1; \\ 0, & \text{otherwise}. \end{cases} \qquad (4) $$
The distance function d is a Euclidean distance metric that first clips both of its
input vectors to the range [0.1,0.9] in order to reduce spurious noise.
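A minimal sketch of the forward computation (2)-(4), including the clipping in the distance metric (our own Python rendering; the variable names are ours):

    import numpy as np

    def f(x):
        return (1.0 - x*x)**2 if x < 1.0 else 0.0            # eq. (4)

    def dist(a, b):
        a, b = np.clip(a, 0.1, 0.9), np.clip(b, 0.1, 0.9)    # clip to [0.1, 0.9]
        return np.linalg.norm(a - b)

    def oam_output(V, memories, corrections, radii):
        # O = V + sum_j C_j * f(d(V, M_j) / R_j); eqs. (2)-(3)
        O = V.copy()
        for M_j, C_j, R_j in zip(memories, corrections, radii):
            O = O + C_j * f(dist(V, M_j) / R_j)
        return O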
The algorithm for constructing the radial basis functions is a simplification of the
RAN algorithm [4][2]. The OAM starts with no memories or corrections. When the
user corrects a recognition error, the OAM finds the distance $d_{\min}$ from the nearest
memory to the vector $V$. If the distance $d_{\min}$ is greater than a threshold $\delta$, then a
new RBF unit is allocated, with a new memory that is set to the vector $V$, and a
corresponding correction vector that is set to correct the error with a step size $a$:

$$ C_{ik} = a (T_i - O_i). \qquad (5) $$

If the distance $d_{\min}$ is less than $\delta$, no new unit is allocated: the correction vector of
the nearest memory to $V$ is updated to correct the error with a step size $b$:

$$ C_{ik} = C_{ik} + b (T_i - O_i)\, \Phi_k(V). \qquad (6) $$
For our experiments, we set $\delta = 0.1$, $a = 0.25$, and $b = 0.2$. The values $a$ and $b$ are
chosen to be less than 1, sacrificing learning speed to gain learning stability.
The number of radial basis functions grows sub-linearly with the number of errors,
because units are only allocated for novel errors that the OAM has not seen before.
For errors similar to those the OAM has seen before, the algorithm updates one of
the correction vectors using a simplified LMS rule (eq. 6).
In the computation of the nearest memory, we always consider an additional phantom memory: the target $Q$ that corresponds to the highest output in $V$. This phantom memory is considered in order to prevent the OAM from allocating memories
when the neural network output is unambiguous. The phantom memory prevents
the OAM from affecting the output for neatly written characters.
The adaptation algorithm used is described as pseudo-code, below:
For every character shown to the network {
    If the user indicates an error {
        T = target vector of the correct character
        Q = target vector of the highest element in V
        d_min = min( min_j d(V, M_j), d(V, Q) )
        If d_min > delta {    // allocate a new memory
            k = index of the new memory
            C_k = a (T - O)
            M_k = V
            R_k = d_min
        }
        else if memories exist and min_j d(V, M_j) < d(V, Q) {
            k = argmin_j d(V, M_j)
            C_k = C_k + b (T - O) Phi_k(V)
        }
    }
}
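A runnable Python rendering of this pseudo-code, reusing the helpers f, dist and oam_output (and numpy as np) from the forward-pass sketch above. The hyperparameters are the values quoted in the text ($\delta = 0.1$, $a = 0.25$, $b = 0.2$); this is our own transcription, not the authors' code:

    def adapt(V, target_index, memories, corrections, radii,
              n_classes=72, delta=0.1, a=0.25, b=0.2):
        # One OAM adaptation step after the user flags a recognition error.
        T = np.full(n_classes, 0.1); T[target_index] = 0.9   # target confidences
        Q = np.full(n_classes, 0.1); Q[np.argmax(V)] = 0.9   # phantom memory
        O = oam_output(V, memories, corrections, radii)

        d_mem = [dist(V, M_j) for M_j in memories]
        d_min = min(d_mem + [dist(V, Q)])
        if d_min > delta:                                    # allocate a new RBF unit
            memories.append(V.copy())
            corrections.append(a * (T - O))
            radii.append(d_min)
        elif d_mem and min(d_mem) < dist(V, Q):              # update nearest memory
            k = int(np.argmin(d_mem))
            corrections[k] = corrections[k] + b * (T - O) * f(d_mem[k] / radii[k])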
3
Experiments and Results
To test the effectiveness of the OAM, we used it to create a writer-adaptive handwriting recognition system. The OAM was connected to the outputs of a writerindependent neural network trained to recognize characters hand-printed in boxes.
This neural network was a carefully tuned multilayer feed-forward network, trained
with the back-propagation algorithm . The network has 510 inputs, 200 hidden
units, and 72 outputs.
The input to the OAM was a vector of 72 confidences, one per class. These confidences were in the range [0,1]. There is one input for every upper case character,
lower case character, and digit. There is also one input for each member of a subset
of punctuation characters ( !$&',-:;=? ).
The OAM was tested interactively. Tests of the OAM were performed by five writers
disjoint from the training writers for the writer-independent neural network . These
writers had atypical writing styles which were difficult for the network to recognize.
Test characters were entered word-by-word on a tablet. The writers were instructed
to write more examples of characters that reflected their atypical writing style. The
words that these writers used were not taken from a word list, and could consist
of any combination of the 72 available characters. Users were shown the results of
the OAM, combined into words and further processed by a dictionary. Whenever
the system failed to recognize a word correctly, all misclassified characters and their
corresponding desired labels were used by the OAM to adapt the system to the
Figure 3: The cumulative number of word errors for the writer "nm" and the writer
"tw" , with and without adaptation
Writer   % word error   % word error   Memories stored   Words written
         no OAM         OAM            during test       during test
rag      25%            17%             6                93
mfe      62%            36%            12                53
qm       42%            31%            25                80
nm       54%            21%             4                39
tw       28%            10%             3                60

Table 1: Quantitative test results for the OAM.
Figure 3 shows the performance of the OAM for the writer "nm" and for the writer
"tw". The total number of word errors since adaptation started is plotted against
the number of words shown to the OAM. The baseline cumulative error without the
OAM is also given. The slope of each curve gives the estimate of the instantaneous
error rate. The OAM causes the slope for both writers to decrease dramatically
as the adaptation progresses. Over the last third of the test set, the word error
rate for writer "nm" was 0%, while the word error rate for writer "tw" was 5%.
These examples show that the OAM can substantially improve the accuracy of a
writer-independent neural network.
Quantitative results are shown in table 1, where the word error rates obtained with
the OAM are compared to the baseline word error rates without the OAM. The
right two columns contain the number of stored basis functions and the number of
words tested by the OAM.
The OAM corrects an average of 45% of the errors in the test set. The accuracy
rates with the OAM were taken for the entire test run, and therefore count the
errors that were made while adaptation was taking place. By the end of the test,
the true error rates for these writers would be even lower than those shown in table
1, as can be seen in figure 3.
These experiments showed that the OAM adapts very quickly and requires a small
amount of additional memory and computation. For most writers, only 2-3 presentations of a writer-dependent variant of a character were sufficient for the OAM
to adapt. The maximum number of stored basis functions was 25 in these experiments. The OAM did not substantially affect the recognition speed of the system.
4 Conclusions
We have designed a widely applicable Output Adaptation Module (OAM) to place
on top of standard neural networks. The OAM takes the output of the network as
input, and determines an additional adaptation vector to add to the output. The
adaptation vector is computed via a radial basis function network, which is learned
with a simplification of the RAN algorithm. The OAM has many nice properties:
only a few examples are needed to learn atypical inputs, the number of stored
memories grows sub-linearly with the number of errors, the recognition rate of the
writer-independent neural network is unaffected by adaptation and the output of
the module is a confidence vector suitable for further post-processing.
The OAM addresses the difficult problem of creating a high-perplexity adaptive recognizer. We applied the OAM to create a writer-adaptive handwriting recognition
system. On a test set of five difficult writers, the adaptation module decreased the
error rate by 45%, and only stored between 3 and 25 basis functions per writer.
5 Acknowledgements
We wish to thank Steve Nowlan for helpful suggestions during the development of
the OAM algorithm and for his work on the writer-independent neural network. We
would also like to thank Joe Decker for his work on the writer-independent neural
network.
References
[1] I. Guyon, D. Henderson, P. Albrecht, Y. Le Cun, and J. Denker. Writer independent and writer adaptive neural network for on-line character recognition.
In S. Impedovo, editor, From Pixels to Features III, Amsterdam, 1992. Elsevier.
[2] V. Kadirkamanathan and M. Niranjan. A function estimation approach to
sequential learning with neural networks. Neural Computation, 5(6):954- 976,
1993.
[3] N. Matic, I. Guyon, J. Denker, and V. Vapnik. Writer adaptation for on-line
handwritten character recognition. In ICDAR93, Tokyo, 1993. IEEE Computer
Society Press.
[4] J. Platt. A resource-allocating network for function interpolation. Neural Computation, 3(2):213-225, 1991.
[5] M. Powell. Radial basis functions for multivariate interpolation: A review. In
J. C. Mason and M. G. Cox, editors, Algorithms for Approximation, Oxford,
1987. Clarendon Press.
[6] D. Wolpert. Stacked generalization. Neural Networks, 5(2):241-260, 1992.
283 | 1,259 | Are Hopfield Networks Faster Than
Conventional Computers?
Ian Parberry* and Hung-Li Tseng†
Department of Computer Sciences
University of North Texas
P.O. Box 13886
Denton, TX 76203-6886
Abstract
It is shown that conventional computers can be exponentially faster
than planar Hopfield networks: although there are planar Hopfield
networks that take exponential time to converge, a stable state of an
arbitrary planar Hopfield network can be found by a conventional
computer in polynomial time. The theory of PLS-completeness
gives strong evidence that such a separation is unlikely for nonplanar Hopfield networks, and it is demonstrated that this is also the
case for several restricted classes of nonplanar Hopfield networks,
including those whose interconnection graphs are the class of bipartite graphs, graphs of degree 3, the dual of the knight's graph, the 8-neighbor mesh, the hypercube, the butterfly, the cube-connected
cycles, and the shuffle-exchange graph.
1 Introduction
Are Hopfield networks faster than conventional computers? This apparently
straightforward question is complicated by the fact that conventional computers
are universal computational devices, that is, they are capable of simulating any
discrete computational device including Hopfield networks. Thus , a conventional
computer could in a sense cheat by imitating the fastest Hopfield network possible.
* Email: ian@cs.unt.edu. URL: http://hercule.csci.unt.edu/ian.
† Email: htseng@ponder.csci.unt.edu.
But the question remains, is it faster for a computer to imitate a Hopfield network, or to use other computational methods? Although the answer is likely to be different for different benchmark problems, and even for different computer architectures,
we can make our results meaningful in the long term by measuring scalability, that
is, how the running time of Hopfield networks and conventional computers increases
with the size of any benchmark problem to be solved.
Stated more technically, we are interested in the computational complexity of the
stable state problem for Hopfield networks, which is defined succinctly as follows:
given a Hopfield network, determine a stable configuration. As previously stated,
this stable configuration can be determined by imitation, or by other means. The
following results are known about the scalability of Hopfield network imitation. Any
imitative algorithm for the stable state problem must take exponential time on some
Hopfield networks, since there exist Hopfield networks that require exponential time
to converge (Haken and Luby [4], Goles and Martinez [2]). It is unlikely that even
non-imitative algorithms can solve the stable state problem in polynomial time, since the latter is PLS-complete (Papadimitriou, Schaffer, and Yannakakis [9]).
However, the stable state problem is more difficult for some classes of Hopfield
networks than others. Hopfield networks will converge in polynomial time if their
weights are bounded in magnitude by a polynomial of the number of nodes (for
an expository proof see Parberry [11, Corollary 8.3.4]). In contrast, the stable state problem for Hopfield networks whose interconnection graph is bipartite is PLS-complete (this can be proved easily by adapting techniques from Bruck and
Goodman [1]) which is strong evidence that it too requires superpolynomial time
to solve even with a nonimitative algorithm.
We show in this paper that although there exist planar Hopfield networks that take exponential time to converge in the worst case, the stable state problem for planar
Hopfield networks can be solved in polynomial time by a non-imitative algorithm.
This demonstrates that imitating planar Hopfield networks is exponentially slower
than using non-imitative algorithmic techniques. In contrast, we discover that the stable state problem remains PLS-complete for many simple classes of nonplanar Hopfield networks, including bipartite networks, networks of degree 3, and some
networks that are popular in neurocomputing and parallel computing.
The main part of this manuscript is divided into four sections. Section 2 contains
some background definitions and references. Section 3 contains our results about
planar Hopfield networks. Section 4 describes our PLS-completeness results, based
on a pivotal lemma about a nonstandard type of graph embedding.
2 Background
This section contains some background which are included for completeness but
may be skipped on a first reading. It is divided into two subsections , the first on
Hopfield networks, and the second on PeS-completeness.
2.1 Hopfield Networks
A Hopfield network [6] is a discrete neural network model with symmetric connections. Each processor in the network computes a hard binary weighted threshold
function. Only one processor is permitted to change state at any given time. That
processor becomes active if its excitation level exceeds its threshold, and inactive
otherwise. A Hopfield network is said to be in a stable state if the states of all of
its processors are consistent with their respective excitation levels. It is well-known
that all Hopfield networks converge to a stable state. The proof defines a measure
called energy, and demonstrates that energy is positive but decreases with every
computation step. Essentially then, a Hopfield network finds a local minimum in
some energy landscape.
2.2 PLS-completeness
While the theory of NP-completeness measures the complexity of global optimization, the theory of PLS-completeness developed by Johnson, Papadimitriou, and
Yannakakis [7] measures the complexity of local optimization. It is similar to the
theory of NP-completeness in that it identifies a set of difficult problems known
collectively as PLS-complete problems. These are difficult in the sense that if a fast algorithm can be developed for any PLS-complete problem, then it can be
used to give fast algorithms for a substantial number of other local optimization
problems including many important problems for which no fast algorithms are currently known. Recently, Papadimitriou, Schaffer, and Yannakakis [9] proved that
the problem of finding stable states in Hopfield networks is PLS-complete.
3 Planar Hopfield Networks
A planar Hopfield network is one whose interconnection graph is planar, that is, can
be drawn on the Euclidean plane without crossing edges. Haken and Luby [4] describe a planar Hopfield network that provably takes exponential time to converge,
and hence any imitative algorithm for the stable state problem must take exponential time on some Hopfield network. Yet there exists a nonimitative algorithm for
the stable state problem that runs in polynomial time on all Hopfield networks:
Theorem 3.1 The stable state problem for Hopfield networks with planar interconnection pattern can be solved in polynomial time.
PROOF: (Sketch.) The prooffollows from the fact that the maximal cut in a planar
graph can be found in polynomial time (see , for example, Hadlock [3]), combined
with results of Papadimitriou, Schaffer, and Yannakakis [9]. 0
4 PLS-completeness Results
Our PLS-completeness results are a straightforward consequence of a new result
that characterizes the difficulty of the stable state problem of an arbitrary class
of Hopfield networks based on a graph-theoretic property of their interconnection
patterns. Let G = (V, E) and H = (V', E') be graphs. An embedding of G into H
is a function f: V → 2^V' such that the following properties hold. (1) For all v ∈ V,
the subgraph of H induced by f(v) is connected. (2) For all (u, v) ∈ E, there exists a path (which we will denote f(u, v)) in H from a member of f(u) to a member of f(v). (3) Each vertex w ∈ H is used at most once, either as a member of f(v)
I. Parberry and H. Tseng
242
for some v ∈ V, or as an internal vertex in a path f(u, v) for some u, v ∈ V. The
graph G is called the guest graph, and H is called the host graph. Our definition
of embedding is different from the standard notion of embedding (see, for example,
Hong, Mehlhorn, and Rosenberg [5]) in that we allow the image of a single guest
vertex to be a set of host vertices, and we insist in properties (2) and (3) that the
images of guest edges be distinct paths. The latter property is crucial to our results,
and forms the major difficulty in the proofs.
Let S, T be sets of graphs. S is said to be polynomial-time embeddable into T, written S ≤e T, if there exist polynomials p1(n), p2(n) and a function f with the following properties: (1) f can be computed in time p1(n), and (2) for every G ∈ S with n vertices, there exists H ∈ T with at most p2(n) vertices such that G can be embedded into H by f. A set S of graphs is said to be pliable if the set of all graphs is polynomial-time embeddable into S.
Lemma 4.1 If S is pliable, then the problem of finding a stable state in Hopfield networks with interconnection graphs in S is PLS-complete.
PROOF: (Sketch.) Let S be a set of graphs with the property that the set of all graphs is polynomial-time embeddable into S. By the results of Papadimitriou, Schaffer, and Yannakakis [9], it is enough to show that the max-cut problem for graphs in S is PLS-complete.
Let G be an arbitrary labeled graph. Suppose G is embedded into H ∈ S under the
polynomial-time embedding. For each edge e in G of cost c, select one edge from
the path connecting the vertices in f(e) and assign it cost c. We call this special edge f'(e). Assign all other edges in the path cost -∞. For all v ∈ V, assign the edges linking the vertices in f(v) a cost of -∞. Assign all other edges of H a cost
of zero.
It can be shown that every cut in G induces a cut of the same cost in H, as follows.
Suppose C ⊆ E is a cut in G, that is, a set of edges that, if removed from G, disconnects it into two components containing vertices V1 and V2 respectively. Then, removing the edges f'(C) and all zero-cost edges from H will disconnect it into two components containing vertices f(V1) and f(V2) respectively. Furthermore, each cut of positive cost in H induces a cut of the same cost in G, since a positive cost cut in H cannot contain any edges of cost -∞, and hence must consist only of f'(e) for some edges e ∈ E. Therefore, every max-cost cut in H induces in polynomial time a max-cost cut in G. □
We can now present our PLS-completeness results. A graph has degree 3 if all
vertices are connected to at most 3 other vertices each.
Theorem 4.2 The problem of finding stable states in Hopfield networks of degree
3 is PLS-complete.
PROOF: (Sketch.) By Lemma 4.1, it suffices to prove that the set of degree-3 graphs is pliable. Suppose G = (V, E) is an arbitrary graph. Replace each degree-k vertex x ∈ V by a path consisting of k vertices, and attach each edge incident with
v by a new edge incident with one of the vertices in the path. Figure 1 shows an
example of this embedding. □
Figure 1: A guest graph of degree 5 (left), and the corresponding host of degree 3
(right). Shading indicates the high-degree nodes that were embedded into paths.
All other nodes were embedded into single nodes.
Figure 2: An 8-neighbor mesh with 25 vertices (left), and the 8 × 8 knight's graph superimposed on an 8 × 8 board (right).
The 8-neighbor mesh is the degree-8 graph G = (V, E) defined as follows: V = {1, 2, ..., m} × {1, 2, ..., n}, and vertex (u, v) is connected to vertices (u, v ± 1), (u ± 1, v), (u ± 1, v ± 1). Figure 2 shows an 8-neighbor mesh with 25 vertices.
Theorem 4.3 The problem of finding stable states in Hopfield networks on the 8-neighbor mesh is PLS-complete.
PROOF: (Sketch.) By Lemma 4.1, it suffices to prove that the 8-neighbor mesh is
pliable. An arbitrary graph can be embedded on an 8-neighbor mesh by mapping
each node to a set of consecutive nodes in the bottom row of the grid, and mapping
edges to disjoint rectilinear paths which use the diagonal edges of the grid for
crossovers. □
The knight's graph for an n × n chessboard is the graph G = (V, E) where V = {(i, j) | 1 ≤ i, j ≤ n}, and E = {((i, j), (k, l)) | {|i - k|, |j - l|} = {1, 2}}. That is,
there is a vertex for every square of the board and an edge between two vertices
exactly when there is a knight's move from one to the other. For example, Figure 2
shows the knight's graph for the 8 x 8 chessboard. Takefuji and Lee [15] (see also
Parberry [12]) use the dual of the knight's graph for a Hopfield-style network to
solve the knight's tour problem. That is, they have a vertex Ve for each edge e of
the knight's graph, and an edge between two vertices Vd and Ve when d and e share
a common vertex in the knight's graph.
Theorem 4.4 The problem of finding stable states in Hopfield networks on the dual of the knight's graph is PLS-complete.
PROOF: (Sketch.) By Lemma 4.1, it suffices to prove that the dual of the knight's graph is pliable. It can be shown that the knight's graph is pliable using the technique of Theorem 4.3. It can also be proved that if a set S of graphs is pliable, then the set consisting of the duals of graphs in S is also pliable. □
The hypercube is the graph with 2^d nodes for some d, labelled with the binary representations of the d-bit natural numbers, in which two nodes are connected by
an edge iff their labels differ in exactly one bit. The hypercube is an important
graph for parallel computation (see, for example, Leighton [8], and Parberry [10]).
Theorem 4.5 The problem of finding stable states in Hopfield networks on the
hypercube is PLS-complete.
PROOF: (Sketch.) By Lemma 4.1, it suffices to prove that the hypercube is pliable. Since the "≤e" relation is transitive, it further suffices by Theorem 4.2 to show that the set of degree-3 graphs is polynomial-time embeddable into the hypercube. To embed a degree-3 graph G into the hypercube, first break it into a degree-1 graph G1 and a degree-2 graph G2. Since G2 consists of cycles, paths, and disconnected vertices, it can easily be embedded into a hypercube (since a hypercube is rich in cycles). G1 can be viewed as a permutation of vertices in G and can hence be realized using a hypercube implementation of Waksman's permutation network [16]. □
We conclude by stating PLS-completeness results for three more graphs that are important in the parallel computing literature: the butterfly (see, for example, Leighton [8]), the cube-connected cycles (Preparata and Vuillemin [13]), and the shuffle-exchange (Stone [14]). The proofs use Lemma 4.1 and Theorem 4.5, and are
omitted for conciseness.
Theorem 4.6 The problem of finding stable states in Hopfield networks on the
butterfly, the cube-connected cycles, and the shuffle-exchange is PLS-complete.
Conclusion
Are Hopfield networks faster than conventional computers? The answer seems to be
that it depends on the interconnection graph of the Hopfield network. Conventional
nonimitative algorithms can be exponentially faster than planar Hopfield networks.
The theory of PLS-completeness shows us that such an exponential separation
result is unlikely not only for nonplanar graphs, but even for simple nonplanar
graphs such as bipartite graphs, graphs of degree 3, the dual of the knight's graph, the 8-neighbor mesh, the hypercube, the butterfly, the cube-connected cycles, and
the shuffle-exchange graph.
Acknowledgements
The research described in this paper was supported by the National Science Foundation under grant number CCR-9302917, and by the Air Force Office of Scientific
Research, Air Force Systems Command, USAF, under grant number F49620-93-1-0100.
References
[1] J. Bruck and J. W. Goodman. A generalized convergence theorem for neural
networks. IEEE Transactions on Information Theory, 34(5):1089-1092, 1988.
[2] E. Goles and S. Martinez. Exponential transient classes of symmetric neural
networks for synchronous and sequential updating. Complex Systems, 3:589-597, 1989.
[3] F. Hadlock. Finding a maximum cut of a planar graph in polynomial time.
SIAM Journal on Computing, 4(3):221-225, 1975.
[4] A. Haken and M. Luby. Steepest descent can take exponential time for symmetric connection networks. Complex Systems, 2:191-196, 1988.
[5] J.-W. Hong, K. Mehlhorn, and A.L. Rosenberg. Cost tradeoffs in graph embeddings. Journal of the ACM, 30:709-728, 1983.
[6] J. J. Hopfield. Neural networks and physical systems with emergent collective computational abilities. Proc. National Academy of Sciences, 79:2554-2558,
April 1982.
[7] D. S. Johnson, C. H. Papadimitriou, and M. Yannakakis. How easy is local search? In 26th Annual Symposium on Foundations of Computer Science, pages 39-42. IEEE Computer Society Press, 1985.
[8] F. T. Leighton. Introduction to Parallel Algorithms and Architectures: Arrays, Trees, Hypercubes. Morgan Kaufmann, 1992.
[9] C. H. Papadimitriou, A. A. Schaffer, and M. Yannakakis. On the complexity
of local search. In Proceedings of the Twenty Second Annual ACM Symposium
on Theory of Computing, pages 439-445. ACM Press, 1990.
[10] I. Parberry. Parallel Complexity Theory. Research Notes in Theoretical Computer Science. Pitman Publishing, London, 1987.
[11] I. Parberry. Circuit Complexity and Neural Networks. MIT Press, 1994.
[12] I. Parberry. Scalability of a neural network for the knight's tour problem. Neurocomputing, 12:19-34, 1996.
[13] F. P. Preparata and J. Vuillemin. The cube-connected cycles: A versatile network for parallel computation. Communications of the ACM, 24(5):300-309, 1981.
[14] H. S. Stone. Parallel processing with the perfect shuffle. IEEE Transactions
on Computers, C-20(2):153-161, 1971.
[15] Y. Takefuji and K. C. Lee. Neural network computing for knight's tour problems. Neurocomputing, 4(5):249-254, 1992.
[16] A. Waksman. A permutation network. Journal of the ACM, 15(1):159-163,
January 1968.
284 | 126 |
COMPUTER MODELING OF ASSOCIATIVE LEARNING
DANIEL L. ALKON 1
FRANCIS QUEK 2a
THOMAS P. VOGL 2b
1. Laboratory for Cellular and Molecular
Neurobiology, NINCDS, NIH, Bethesda, MD 20892
2. Environmental Research Institute of Michigan
a) P.O. Box 8618, Ann Arbor, MI 48107
b) 1501 Wilson Blvd., Suite 1105, Arlington, VA 22209
INTRODUCTION
Most of the current neural networks use models which have only tenuous connections to the biological neural systems on which they purport to be based, and negligible input from the neuroscience/biophysics communities. This paper describes an ongoing effort which approaches neural net research in a program of close collaboration of neuroscientists and engineers. The effort is designed to elucidate associative learning in the marine snail Hermissenda crassicornis, in which Pavlovian conditioning has been observed. Learning has been isolated in the four neuron network at the convergence of the visual and vestibular pathways in this animal, and biophysical changes, specific to learning, have been observed in the membrane of the photoreceptor B cell. A basic charging capacitance model of a neuron is used and enhanced with biologically plausible mechanisms that are necessary to replicate the effect of learning at the cellular level. These mechanisms are non-linear and are, primarily, instances of second order control systems (e.g., fatigue, modulation of membrane resistance, time dependent rebound), but also include shunting and random background firing. The output of the model of the four-neuron network displays changes in the temporal variation of membrane potential similar to those observed in electrophysiological measurements.
NEUROPHYSIOLOGICAL BACKGROUND
Alkon [1] showed that Hermissenda crassicornis, a marine
snail, is capable of associating two stimuli in a fashion
which exhibits all the characteristics of classical
Pavlovian conditioning (acquisition, retention, extinction, and savings) [2]. In these experiments, Hermissenda
were trained to associate a visual with a vestibular
stimulus. In its normal environment, Hermissenda moves
toward light; in turbulence, the animal increases the area
of contact of its foot with the surface on which it is
moving, reducing its forward velocity. Alkon showed that
the snail can be conditioned to associate these stimuli
through repeated exposures to ordered pairs (light
followed by turbulence).
When the snails are exposed to light (the conditioned stimulus) followed by turbulence (the unconditioned stimulus) after varying time intervals, the snails transfer to the light their unconditioned response to turbulence (increased area of foot contact); i.e., when presented with light alone, they respond with an increased area of foot contact. The effect of such training lasts for several weeks. It was further shown that the learning was maximized when rotation followed light by a fixed interval of about one second, and that such learning exhibits all the characteristics of classical conditioning observed in higher animals.
The relevant neural interconnections of Hermissenda have been mapped by Alkon, and learning has been isolated in the four neuron sub-network (Figure 1) at the convergence of the visual and vestibular pathways of this animal. Light generates signals in the B cells while turbulence is transduced into signals by the statocyst's hair cells, the animal's vestibular organs. The optic ganglion cell mediates the interconnections between the two sensory pathways.

The effects of learning also have been observed at the cellular level. Alkon et al. [3] have shown that biophysical changes associated with learning occur in the photoreceptor B cell of Hermissenda. The signals in the neurons take the form of voltage dependent ion
Figure 1. The four neuron network at the convergence of the visual and vestibular pathways of Hermissenda crassicornis. All filled endings indicate inhibitory synapses; all open endings indicate excitatory synapses.
(a) Convergence of synaptic inhibition from the photoreceptor B cell and caudal hair cells onto the optic ganglion cell.
(b) Positive synaptic feedback onto the type B photoreceptor: 1, direct synaptic excitation; 2, indirect excitation -- the ganglion excites the cephalic hair cell that inhibits the caudal hair cell, and thus disinhibits the type B cell; 3, indirect excitation -- the ganglion inhibits the caudal hair cell and thus disinhibits the type B cell.
(c) Intra- and intersensory inhibition. The cephalic and caudal hair cells are mutually inhibitory. The B cell inhibits mainly the cephalic hair cell.
From: Tabata, M., and Alkon, D. L. Positive synaptic feedback in visual system of nudibranch mollusk Hermissenda crassicornis. J. of Neurophysiology 48:174-191 (1982).
currents, and learning is reflected in biophysical changes
in the membrane of the B cell.
The effects of ion
currents can be observed in the form of time variations in
membrane potential recorded by means of microelectrodes.
It is the variation in membrane potential resulting from
associative learning that is the focus of this research.
Our goal is to model those properties of biological
neurons sufficient (and necessary) to demonstrate associative learning at the neural network level. In order to
understand the effect and necessity of each component of
the model, a minimalist approach was adopted. Nothing was
added to the model which was not necessary to produce a
required effect, and then only when neurophysiologists,
biophysicists, electrical engineers, and computer scientists agreed that the addition was reasonable from the
perspective of their disciplines.
METHOD
Following Kuffler and Nicholas [4], the model is described
in terms of circuit elements. It must be emphasized
however, that this is simply a recognition of the fact
that the chemical and physical processes occurring in the
neuron can be described by (partial) differential equations, as can electronic circuits. The equivalent circuit
of the charging and discharging of the neuron
membrane is shown in Figure 2. The model was constructed
using the P3 network simulation shell developed by Zipser and Rabin [5]. The P3 strip-chart capability was
particularly useful in facilitating interdisciplinary
interactions. Figure 3 shows the response of the basic
model of the neuron as the frequency of input pulses is
varied.
Our aim, however, is not to model an individual neuron.
Rather, we consistently focus on properties of the neural
network that are necessary and sufficient to demonstrate
associative learning. Examination of the behavior of
biological neurons reveals additional common properties
that express themselves differently depending on the
function of the individual neuron. These properties include background firing, second order controls, and
shunting. Their inclusion in the model is necessary for
the simulation of associative learning, and their implementation is described below.
[Figure 2 circuit diagram: input switches S1, ..., Sk in series with resistances Rs1, Rs2, ..., Rsk feed the capacitor C, whose voltage is the membrane potential; a decay branch Rdecay with switch Sdecay discharges C.]
S1, ..., Sk: closes when there are input EPSPs to the cell; otherwise, open.
Sdecay: closes when there are no input EPSPs to the cell; otherwise, open.
Figure 2. Circuit model of the basic neuron. The lines from the left are the inputs to the neuron from its dendritic connections to presynaptic neurons. The Rsk are the resistances that determine the magnitude of the effect (voltage) of the pulses from presynaptic neurons. The gap indicated by open circles is a high impedance coupler. R1 through Rk, together with C, determine the rise time for the kth input of the potential across the capacitor C, which represents the membrane. Rdecay controls the discharge time constant of the capacitor C. When the membrane potential (across C) exceeds the threshold potential, the neuron fires a pulse into its axon (to all output connections) and the charge on C is reduced by the "discharge quantum" (see text).
BACKGROUND FIRING IN ALL NEURONS
Background firing, i.e., spontaneous firing by neurons without any input pulses from other neurons, has been
observed in all neurons. The fundamental importance of
background firing is exemplified by the fact that in the
four neuron network under study, the optic ganglion does
Figure 3. Response of the basic model of a single neuron
to a variety of inputs. The four horizontal strips, from
top to bottom, show: 1) the input stream; 2) the resulting
membrane potential; 3) the resulting stream of output
pulses; and 4) the composite output of pulses superimposed
on the membrane potent i a1, ernul at i ng the correspond i ng
electrophysiological measurement.
The four vertical
sections, from left to right, indicate: a) an extended
input, simulating exposure of the B cell to light; b) a
presynaptic neuron firing at maximum frequency; c) a
presynaptic neuron firing at an intermediate frequency; d)
a presynaptic neuron firing at a frequency insufficient to
cause the neuron to fire ?but sufficient to maintain the
neuron at a membrane potential just below firing
threshold.
not have any synapses that excite it (all its inputs are
inhibitory). However, the optic ganglion provides the
only two excitatory synapses in the entire network (one on
the photoreceptor B cell and the other on the cephalad
statocyst hair cell). Hence, without back-ground firing,
i.e., when there is no external stimuli of the neurons,
all activity in the network would cease.
Further, without background firing, any stimulus to either
the vestibular or the visual receptors will completely
swamp the response of the network.
Background firing is incorporated in our model by applying
random pulses to a 'virtual' excitatory synapse. By
altering the mean frequency of the random pulses, various
levels of 'internal' homeostatic neuronal activity can be
simulated. Physiologically, this pulse source yields
results similar to an ion pump or other energy source,
e.g., cAMP, in the biological system.
SECOND ORDER CONTROL IN NEURONS
Second order controls, i.e., the modulation of cellular
parameters as the result of the past history of the
neuron, appear in all biological neurons and play an
essential role in their behavior. The ability of the cell
to integrate its internal parameters (membrane potential
in particular) over time turns out to be vital not only in
understanding neural behavior but, more specifically, in
providing the mechanisms that permit temporally specific
associative learning. In the course of this investigation, a number of different second order control
mechanisms, essential to the proper performance of the
model, were elucidated. These mechanisms share a dependence on the time integral of the difference between the
instantaneous membrane potential and some reference
potential.
The particular second order control mechanisms incorporated into the model are: 1) overshoot in the light response of the photoreceptor B cell; 2) maintenance of a post-stimulus state in the B cell subsequent to prolonged stimulation; 3) modulation of the discharge resistance of the B cell; 4) fatigue in the statocysts and the optical ganglion; and 5) time dependent rebound in the optical ganglion. In addition to these second order control effects, the model required the inclusion
of the observed shunting of competing inputs to the B cell
during light exposure. The consequence of the interaction
of these mechanisms with the basic model of the neurons in
the four neuron network is the natural emergence of
temporally specific associative learning.
OVERSHOOT IN THE LIGHT RESPONSE OF THE PHOTORECEPTOR B CELL
Under strong light exposure, the membrane potential of an
isolated photoreceptor B cell experiences an initial
'overshoot' and then settles at a rapidly firing level
far above the usual firing potential of the neuron (see
Figure 4a). (We refer to the elevated membrane potential
of the B cell during illumination as the "active firing
membrane potential"). The initial overshoot (and slight
ringing) observed in the potential of the biological B
cell (F i gure 4a) is the signature of an integral second
order control system at work. This control was realized
in the model by altering the quantity of charge removed
from the cell (the discharge quantum) each time the cell
fires. (The biological cell loses charge whenever it
fires and the quantity of charge lost varies with the
membrane potential.) The discharge quantum is modulated
by the definite integral of the difference between the
membrane potential and the active firing membrane potential as follows:
Qdischarge(t) = K × ∫₀ᵗ {Potmembrane(t) - Potactive firing(t)} dt
As the membrane potential rises above the active firing
membrane potential, the value of the integral rises. The
magnitude of the discharge quantum rises with the integral. This increased discharge retards the membrane
depolarization, until at some point, the size of the
discharge quantum outstrips the charging effect of light
on the membrane potential, and the potential falls. As
the membrane potential falls below the active firing
membrane potential, the magnitude of the discharge quantum
begins to decrease (i.e., the value of the integral
falls). This, in turn, causes the membrane potential to
rise when the charging owing to light input once again
overcomes the declining discharge quantum.
This process repeats with each subsequent swing in the
membrane potential becoming smaller until steady state is
reached at the active firing membrane potential. The
response of the model to simulated light exposure is shown
in Figure 4b. Note that only a single overshoot is
obvious and that steady state is rapidly reached.
MAINTAINING THE POST-STIMULUS STATE IN THE B CELL
During exposure to light, the B cell is strongly
depolarized, and the membrane potential is maintained
substantially above the firing threshold potential. When
the light stimulus is removed, one would expect the cell
to fire at its maximum rate so as to bring its membrane
Figure 4. Response of the B cell and the model to a light
pulse.
(a) Electrophysiological recording of the response of the photoreceptor B cell to light. Note the initial overshoot and one cycle of oscillation before the membrane potential settles at the "active firing potential." From: Alkon, D.L. Memory Traces in the Brain. Cambridge University Press, London (1987), p. 58.
(b) Response of the model to a light pulse.
potential below the firing threshold (by releasing a
discharge quantum with each output pulse). This does not happen in Hermissenda; there is, however, a change in
the amount of charge released with each output pulse when
the cell is highly depolarized.
Note that the discharge quantum is modulated post-exposure
in a manner analogous to that occurring during exposure as
discussed above: It is modulated by the magnitude of the
membrane potential above the firing threshold. The result
of this modulation is that the more positive the membrane
potential, the smaller the discharge quantum, (subject to
a non-negative minimum value). The average value of the
interval between pulses is also modulated by the magnitude
of the discharge quantum. This modulation persists until
the membrane potential returns to the fi ri ng threshold
after cessation of light exposure.
This mechanism is particular to the B cell. Upon cessation of vestibular stimulation, hair cells fire rapidly
until their membrane potentials are below the firing
threshold, just as the basic model predicts.
MODULATION OF DISCHARGE RESISTANCE IN THE B CELL
The duration of the post-stimulus membrane potential is
determined by the magnitude of the discharge resistance
of the B cell. In the model, the discharge resistance
changes exponentially toward a predetermined maximum
value, Rmax, when the membrane potential exceeds the firing threshold. Rbase is the baseline value. That is,

Rdisch(t - t0) = Rmax - {Rmax - Rdisch(t0)} exp{-(t - t0)/τrise}
when the membrane potential is above the firing threshold,
and
Rdisch(t - t0) = Rbase - {Rbase - Rdisch(t0)} exp{-(t - t0)/τdecay}
when the membrane potential is below the firing threshold.
FATIGUE IN STATOCYST HAIR CELLS
In Hermissenda, caudal cell activity actually decreases
immediately after it has fired strongly, rather than
returning to its normal background level of firing. This
effect, which results from the tendency of membrane
potential to "fatigue" toward its resting potential, is
incorporated into our model of the statocyst hair cells
using the second order control mechanism previously
described. I.e., when the membrane potential of a hair
cell is above the firing threshold (e.g., during vestibular stimulation), the shunting resistance of the cell
decays exponentially with time toward zero as long as the
hair cell membrane potential is above the firing
threshold. This resistance is allowed to recover exponentially to its usual value when the membrane potential
falls below the firing threshold.
FATIGUE OF THE OPTICAL GANGLION CELL DURING HYPERPOLARIZATION
In Hermissenda the optical ganglion undergoes hyperpolarization at the beginning of the light pulse and/or
vestibular stimulus. Contrary to what one might expect,
it then recovers and is close to the firing threshold by
the time the stimuli cease. This effect is incorporated
into the model by fatigue induced by hyperpolarization.
As above, this fatigue is implemented by allowing the
shunting resistance in the ganglion cell to decrease
exponentially toward a minimum value, while the membrane
potential is below the firing threshold by a prespecified
amount. The value of the minimum shunting resistance is
modulated by the magnitude of hyperpolarization (potential
difference between the membrane potential and the firing
threshold).
The shunting resistance recovers
exponentially from its hyperpolarized value, once the
membrane potential returns to its firing threshold as a
result of background firing input.
The effect of this decrease is that the ganglion cell will
temporarily remain relatively insensitive to the enhanced
post-stimulus firing of the B cell until the shunting
resistance recovers. Once the membrane potential of the
ganglion cell recovers, the pulses from the ganglion cell
will excite the B cell and maintain its prolongation
effect. (See Figure 1.)
The modulation of the minimum shunting resistance by the magnitude of hyperpolarization introduces the first stimulus pairing dependent component in the post-stimulus behavior of the B cell because the degree of
hyperpolarization is higher under paired stimulus
conditions.
TIME DEPENDENT REBOUND IN THE OPTICAL GANGLION CELL
Experimental evidence with Hermissenda indicates that the
rebound of the optical ganglion is much stronger than is
possible if the usual background activity were the sole
cause of this rebound. Furthermore, rebound in the ganglion cell is stronger when the light exposure precedes vestibular stimulus by the optimal inter-stimulus interval (ISI).
Since the ganglion cell has no
excitatory input synapses, the increased rebound must
result from a mechanism internal to the cell that
heightens its background activity during pairing at the
optimal ISI. The mechanism must be time dependent and
must be able to distinguish between the inhibitory signal
which comes from the B cell and that which comes from the
caudal hair cell. To achieve this result, two mechanisms
must interact.
The first mechanism enhances the inhibitory effect of the
caudal hair cell on the ganglion cell. This "caudal inhibition enhancer", CIE, is triggered by pulses from the B cell. The CIE has the property that it rises
exponentially toward 1.0 when a pulse is seen at the
synapse from the B cell and decays toward zero when no
such pulses are received.
The second mechanism provides an increase in the background activity of the optic ganglion when the cell is
hyperpolarized; it is a fatigue effect at the synapse from
the caudal hair cell. This synapse specific fatigue (SSF)
rises toward 1.0 as any of the inhibitory synapses onto
the ganglion receive a pulse, and decays toward zero when
there is no incoming inhibitory pulse. Note that this
second order control causes fatigue at the synapse between
the caudal hair cell and the ganglion whenever any
inhibitory pulse is incident on the ganglion.
Control of the ISI resides in the interaction of these
two mechanisms. The efficacy of an inhibitory pulse from
the caudal cell upon the ganglion cell is determined by the product of CIE and (1 - SSF), the "ISI convolver."
With light exposure alone or when caudal stimulation
follows light, the CIE rises toward 1.0 along with the SSF. Initially, (1 - SSF) is close to 1.0 and the CIE term dominates the convolver function. As CIE approaches 1.0, the (1 - SSF) term brings the convolver toward 0. At some intermediate time, the ISI convolver is at a maximum.
When vestibular stimulus precedes light exposure, the SSF
rises at the start of the vestibular stimulus while the
CIE remains at 0. When light exposure then begins, the CIE rises, but by then (1 - SSF) is approaching zero, and
the convolver does not reach any significant value.
The result of this interaction is that when caudal
stimulation follows light by the optimal ISI, the inhibition of the ganglion will be maximal. This causes heightened background activity in the ganglion. Upon
cessation of stimulus, the heightened background activity
will express itself by rapidly depolarizing the ganglion
membrane, thereby bringing about the desired rebound
firing.
SHUNTING OF THE PHOTORECEPTOR B CELL DURING EXPOSURE TO
LIGHT
In experiments in which light and vestibular stimulus are
paired, both the B cell and the caudal hair statocyst cell
fire strongly. There is an inhibitory synapse from the
hair cell to the B cell (see Figure 1). Without shunting,
the hair cell output pulses interfere with the effect of
light on the B cell and prevent it from arriving at a
level of depolarization necessary for learning. This is
contrary to experimental data which shows that the
response of the B cell to light (during the light pulse)
is constant whether or not vestibular stimulus is present.
Biological experiments have determined that while light is
on, the B cell shunting resistance is very low making the
cell insensitive to incoming pulses.
Figures 5-8 summarize the current performance of the
model. Figures 6, 7, and 8 present the response to a
light pulse of the untrained, sham trained (unpaired light
and turbulence), and trained (paired light and turbulence)
model of the four neuron network.
DISCUSSION
The model developed here is more complex than those
generally employed in neural network research because the
mechanisms invoked are primarily second order controls.
Furthermore, while we operated under a paradigm of minimal
commitment (no new features unless needed), the functional
requirements of the network demanded that differentiating
features be added to the cells. The model reproduces the
electrophysiological measurements in Hermissenda that
are indicative of associative learning. These results
call into question the notion that linear and quasi-linear
summing elements are capable of emulating neural activity
and the learning inherent in neural systems.
This preliminary modeling effort has already resulted in
a greater understanding of biological systems by 1)
modeling experiments which cannot be performed in vivo,
2) testing theoretical constructs on the model, and 3)
developing hypotheses and proposing neurophysiological
experiments. The effort has also significantly assisted
in the development of neural network algorithms by uncovering the necessary and sufficient components for
learning at the neurophysiological level.
Acknowledgements
The authors wish to acknowledge the contribution of Peter
Tchoryk, Jr. for assistance in performing the simulation
experiments and Kim T. Blackwell for many fruitful
discussions of the work and the manuscript. This work was
supported in part by ONR contract N00014-88-K-0659.
References
1. Alkon, D.L. Memory Traces in the Brain. Cambridge University Press, London, 1987, and publications referenced therein.

2. Alkon, D.L. Learning in a Marine Snail. Scientific American, 249:70-84 (1983).

3. Alkon, D.L., Sakakibara, M., Forman, M., Harrigan, R., Lederhendler, J., and Farley, J. Reduction of two voltage-dependent K+ currents mediates retention of learning association. Behavioral and Neural Biol. 44:278-300 (1985).

4. Kuffler, S.W., and Nicholas, J.G. From Neuron to Brain: A Cellular Approach to the Function of the Nervous System. Sinauer Assoc., Publ., Sunderland, MA (1986).
Figure 5. Prolongation of B cell post-stimulus membrane
depolarization consequent to learning (exposure to paired
stimuli).
From: West, A., Barnes, E., Alkon, D.L. Primary changes
of voltage responses during retention of associative
learning. J. of Neurophysiol. 48:1243-1255 (1982). Note
the increase in size of the shaded area which is the
effect of learning.
Figure 6. Naive model: response of untrained ("control"
in Fig. 5) model to light.
Figure 7. Sham training: response of model to light
following presentation of 26 randomly alternating ("unpaired" in Fig. 5) light and turbulence inputs.
References (Continued)
5. Zipser, D., and Rabin, D. P3: A Parallel Network Simulating System. In Parallel Distributed Processing, Vol. I, Chapter 13. Rumelhart, McClelland, and the PDP Group, Eds. MIT Press (1986).

6. Buhmann, J., and Schulten, K. Influence of Noise on the Function of a "Physiological" Neural Network. Biological Cybernetics 56:313-328 (1987).
Figure 8. Trained network: response of network to light
following presentation of 13 light and turbulence inputs at optimum ISI. The top trace of this figure is the B cell response to light alone. Note that an increased
firing frequency and active membrane potential is maintained after the cessation of light, compared to Figures
6 and 7. This is analogous to what may be seen in
Hermissenda, Figure 5. Note also that the optic ganglion
and the cephalad hair cell (traces 2 and 3 of this figure)
show a decreased post-stimulus firing rate compared with
that of Figures 6 and 7.
285 | 1,260 | Clustering via Concave Minimization
P. S. Bradley and O. L. Mangasarian
Computer Sciences Department
University of Wisconsin
1210 West Dayton Street
Madison, WI 53706
email: paulb@cs.wisc.edu, olvi@cs.wisc.edu
W. N. Street
Computer Science Department
Oklahoma State University
205 Mathematical Sciences
Stillwater, OK 74078
email: nstreet@cs.okstate.edu
Abstract
The problem of assigning m points in the n-dimensional real space
Rn to k clusters is formulated as that of determining k centers in
Rn such that the sum of distances of each point to the nearest
center is minimized. If a polyhedral distance is used, the problem
can be formulated as that of minimizing a piecewise-linear concave
function on a polyhedral set which is shown to be equivalent to
a bilinear program: minimizing a bilinear function on a polyhedral set. A fast finite k-Median Algorithm consisting of solving
few linear programs in closed form leads to a stationary point of
the bilinear program. Computational testing on a number of realworld databases was carried out. On the Wisconsin Diagnostic
Breast Cancer (WDBC) database, k-Median training set correctness was comparable to that of the k-Mean Algorithm, however its
testing set correctness was better. Additionally, on the Wisconsin
Prognostic Breast Cancer (WPBC) database, distinct and clinically important survival curves were extracted by the k-Median
Algorithm, whereas the k-Mean Algorithm failed to obtain such
distinct survival curves for the same database.
1
Introduction
The unsupervised assignment of elements of a given set to groups or clusters of
like points is the objective of cluster analysis. There are many approaches to this
problem, including statistical [9], machine learning [7], integer and mathematical
programming [18,1]. In this paper we concentrate on a simple concave minimization
formulation of the problem that leads to a finite and fast algorithm. Our point of
departure is the following explicit description of the problem: given m points in the
n-dimensional real space $R^n$, and a fixed number k of clusters, determine k centers in
Rn such that the sum of "distances" of each point to the nearest center is minimized.
If the 1-norm is used, the problem can be formulated as the minimization of a
piecewise-linear concave function on a polyhedral set. This is a hard problem to
solve because a local minimum is not necessarily a global minimum. However, by
converting this problem to a bilinear program, a fast successive-linearization k-Median Algorithm terminates after a few linear programs (each explicitly solvable
in closed form) at a point satisfying the minimum principle necessary optimality
condition for the problem. Although there is no guarantee that such a point is a
global solution to our original problem, numerical tests on five real-world databases
indicate that the k-Median Algorithm is comparable to or better than the k-Mean
Algorithm [18, 9, 8]. This may be due to the fact that outliers have less influence
on the k-Median Algorithm, which utilizes the 1-norm distance. In contrast, the k-Mean Algorithm uses squares of 2-norm distances to generate cluster centers, which
may be inaccurate if outliers are present. We also note that clustering algorithms
based on statistical assumptions that minimize some function of scatter matrices
do not appear to have convergence proofs [8, pp. 508-515]; however, convergence to
a partial optimal solution is given in [18] for k-Mean type algorithms.
We outline now the contents of the paper. In Section 2, we formulate the clustering
problem for a fixed number of clusters, as that of minimizing the sum of the 1-norm
distances of each point to the nearest cluster center. This piecewise-linear concave
function minimization on a polyhedral set turns out to be equivalent to a bilinear
program [3]. We use an effective linearization of the bilinear program proposed in
[3, Algorithm 2.1] to solve our problem by solving a few linear programs. Because
of the simple structure, these linear programs can be explicitly solved in closed
form, thus leading to the finite k-Median Algorithm 2.3 below. In Section 3 we give
computational results on five real-world databases. Section 4 concludes the paper.
A word about our notation now. All vectors are column vectors unless otherwise
specified. For a vector $x \in R^n$, $x_i$, $i = 1,\dots,n$, will denote its components. The
norm $\|\cdot\|_p$ will denote the p-norm, $1 \le p \le \infty$, while $A \in R^{m \times n}$ will signify a real
$m \times n$ matrix. For such a matrix, $A^T$ will denote the transpose, and $A_i$ will denote
row $i$. A vector of ones in a real space of arbitrary dimension will be denoted by $e$.
2
Clustering as Bilinear Programming
Given a set $\mathcal{A}$ of $m$ points in $R^n$ represented by the matrix $A \in R^{m \times n}$ and a number
$k$ of desired clusters, we formulate the clustering problem as follows. Find cluster
centers $C_\ell$, $\ell = 1,\dots,k$, in $R^n$ such that the sum of the minima over $\ell \in \{1,\dots,k\}$
of the 1-norm distance between each point $A_i$, $i = 1,\dots,m$, and the cluster centers
$C_\ell$, $\ell = 1,\dots,k$, is minimized. More specifically, we need to solve the following
mathematical program:
$$\min_{C,D}\ \sum_{i=1}^{m}\ \min_{\ell=1,\dots,k}\{e^T D_{i\ell}\} \quad \text{subject to} \quad -D_{i\ell} \le A_i^T - C_\ell \le D_{i\ell},\quad i = 1,\dots,m,\ \ell = 1,\dots,k. \tag{1}$$
Here $D_{i\ell} \in R^n$ is a dummy variable that bounds the components of the difference
$A_i^T - C_\ell$ between point $A_i^T$ and center $C_\ell$, and $e$ is a vector of ones in $R^n$. Hence
$e^T D_{i\ell}$ bounds the 1-norm distance between $A_i$ and $C_\ell$. We note immediately that
since the objective function of (1) is the sum of minima of k linear (and hence
concave) functions, it is a piecewise-linear concave function [13, Corollary 4.1.14].
If the 2-norm or p-norm, $p \ne 1, \infty$, is used, the objective function will be neither
concave nor convex. Nevertheless, minimizing a piecewise-linear concave function
on a polyhedral set is NP-hard, because the general linear complementarity problem, which is NP-complete [4], can be reduced to such a problem [11, Lemma 1].
Given this fact we try to look for effective methods for processing this problem. We
propose reformulation of problem (1) as a bilinear program. Such reformulations
have been very effective in computationally solving NP-complete linear complementarity problems [14] as well as other difficult machine learning [12] and optimization
problems with equilibrium constraints [12]. In order to carry out this reformulation
we need the following simple lemma.
Lemma 2.1 Let $a \in R^k$. Then
$$\min_{1 \le \ell \le k}\{a_\ell\} \;=\; \min_{t \in R^k}\left\{\sum_{\ell=1}^{k} a_\ell t_\ell \;\middle|\; \sum_{\ell=1}^{k} t_\ell = 1,\ t_\ell \ge 0,\ \ell = 1,\dots,k\right\} \tag{2}$$
Proof This essentially obvious result follows immediately upon writing the dual of
the linear program appearing on the right-hand side of (2), which is
$$\max_{h \in R}\{h \mid h \le a_\ell,\ \ell = 1,\dots,k\}. \tag{3}$$
Obviously, the maximum of this dual problem is $h = \min_{1 \le \ell \le k}\{a_\ell\}$. By linear
programming duality theory, this maximum equals the minimum of the primal
linear program in the right-hand side of (2). This establishes the equality of (2). □
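Lemma 2.1 is easy to verify numerically with any LP solver; the sketch below uses scipy's linprog, and the particular vector a is an illustrative choice of ours, not from the paper:

```python
# Check Lemma 2.1: min_l a_l equals the LP minimum of a^T t over the unit simplex.
import numpy as np
from scipy.optimize import linprog

a = np.array([3.0, -1.2, 0.7, 5.0])
res = linprog(c=a,                                  # minimize a^T t
              A_eq=np.ones((1, a.size)), b_eq=[1.0],  # e^T t = 1
              bounds=[(0, None)] * a.size)            # t >= 0
assert np.isclose(res.fun, a.min())                 # equality (2) holds
```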
By defining $a_\ell^i = e^T D_{i\ell}$, $i = 1,\dots,m$, $\ell = 1,\dots,k$, Lemma 2.1 can be used to
reformulate the clustering problem (1) as a bilinear program as follows.
Proposition 2.2 Clustering as a Bilinear Program The clustering problem
(1) is equivalent to the following bilinear program:
$$\begin{array}{rl}
\displaystyle\min_{C_\ell \in R^n,\, D_{i\ell} \in R^n,\, T_{i\ell} \in R} & \displaystyle\sum_{i=1}^{m}\sum_{\ell=1}^{k} e^T D_{i\ell}\, T_{i\ell} \\
\text{subject to} & -D_{i\ell} \le A_i^T - C_\ell \le D_{i\ell}, \quad i = 1,\dots,m,\ \ell = 1,\dots,k \\
 & \displaystyle\sum_{\ell=1}^{k} T_{i\ell} = 1,\ T_{i\ell} \ge 0, \quad i = 1,\dots,m,\ \ell = 1,\dots,k
\end{array} \tag{4}$$
Note that the constraints of (4) are uncoupled in the variables (C, D) and the variable T. Hence the Uncoupled Bilinear Program Algorithm UBPA [3, Algorithm
2.1] is applicable. Simply stated, this algorithm alternates between solving a linear
program in the variable T and a linear program in the variables (C, D). The algorithm terminates in a finite number of iterations at a stationary point satisfying
the minimum principle necessary optimality condition for problem (4) [3, Theorem
2.1]. We note, however, that because of the simple structure of the bilinear program (4),
the two linear programs can be solved explicitly in closed form. This leads to the
following algorithmic implementation.
Algorithm 2.3 k-Median Algorithm Given $C_1^j,\dots,C_k^j$ at iteration $j$, compute $C_1^{j+1},\dots,C_k^{j+1}$ by the following two steps:
(a) Cluster Assignment: For each $A_i^T$, $i = 1,\dots,m$, determine $\ell(i)$ such that $C_{\ell(i)}^j$ is closest to $A_i^T$ in the 1-norm.
(b) Cluster Center Update: For $\ell = 1,\dots,k$ choose $C_\ell^{j+1}$ as a median of all $A_i^T$ assigned to $C_\ell^j$.
Stop when $C_\ell^{j+1} = C_\ell^j$, $\ell = 1,\dots,k$.
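The two steps admit a direct implementation. Below is a minimal numpy sketch of Algorithm 2.3; the random initialization, the tie-breaking in step (a), and the handling of empty clusters are illustrative choices of ours, since the algorithm statement leaves them open:

```python
# A sketch of the k-Median Algorithm 2.3 on an m x n data matrix A.
import numpy as np

def k_median(A, k, max_iter=100, rng=np.random.default_rng(0)):
    m, n = A.shape
    C = A[rng.choice(m, size=k, replace=False)].copy()  # assumed initialization
    labels = np.zeros(m, dtype=int)
    for _ in range(max_iter):
        # (a) Cluster Assignment: nearest center in the 1-norm
        d = np.abs(A[:, None, :] - C[None, :, :]).sum(axis=2)  # m x k distances
        labels = d.argmin(axis=1)
        # (b) Cluster Center Update: coordinate-wise median of assigned points
        C_new = C.copy()
        for l in range(k):
            if np.any(labels == l):
                C_new[l] = np.median(A[labels == l], axis=0)
        if np.array_equal(C_new, C):    # stop when the centers repeat
            break
        C = C_new
    return C, labels
```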
Although the k-Median Algorithm is similar to the k-Mean Algorithm wherein the
2-norm distance is used [18, 8, 9], it differs from it computationally and theoretically. In fact, the underlying problem (1) of the k-Median Algorithm is a concave
minimization on a polyhedral set, while the corresponding problem for the p-norm,
$p \ne 1$, is:
$$\min_{C,D}\ \sum_{i=1}^{m}\ \min_{\ell=1,\dots,k} \|D_{i\ell}\|_p \quad \text{subject to} \quad -D_{i\ell} \le A_i^T - C_\ell \le D_{i\ell},\quad i = 1,\dots,m,\ \ell = 1,\dots,k. \tag{5}$$
This is not a concave minimization on a polyhedral set, because the minimum of
a set of convex functions is not in general concave. The concave minimization
problem of [18] is not in the original space of the problem variables, that is, the
cluster center variables, (C, D), but merely in the space of variables T that assign
points to clusters. We also note that the k-Mean Algorithm finds a stationary point
not of problem (5) with p = 2, but of the same problem with $\|D_{i\ell}\|_2$
replaced by $\|D_{i\ell}\|_2^2$. Without this squared distance term, the subproblem of the
k-Mean Algorithm becomes the considerably harder Weber problem [17, 5] which
locates a center in Rn closest in sum of Euclidean distances (not their squares!) to a
finite set of given points. The Weber problem has no closed form solution. However,
using the mean as a cluster center of points assigned to the cluster, minimizes the
sum of the squares of the distances from the cluster center to the points. It is
precisely the mean that is used in the k-Mean Algorithm subproblem.
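The distinction just drawn is easy to check numerically. In the sketch below (synthetic data and names of ours, not from the paper), the coordinate-wise median beats the mean on the sum of 1-norm distances, while the mean wins on the sum of squared 2-norm distances of the k-Mean subproblem:

```python
# Median minimizes sum of 1-norm distances; mean minimizes sum of squared 2-norms.
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(200, 3))                     # hypothetical cluster of points
med, mean = np.median(A, axis=0), A.mean(axis=0)

sum_l1 = lambda c: np.abs(A - c).sum()            # k-Median step (b) criterion
sum_sq = lambda c: ((A - c) ** 2).sum()           # k-Mean subproblem criterion
assert sum_l1(med) <= sum_l1(mean)
assert sum_sq(mean) <= sum_sq(med)
```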
Because there is no guaranteed way to ensure global optimality of the solution
obtained by either the k-Median or k-Mean Algorithms, different starting points
can be used to initiate the algorithm. Random starting cluster centers or some
other heuristic can be used such as placing k initial centers along the coordinate
axes at densest, second densest, ... , k densest intervals on the axes.
3
Computational Results
An important computational issue is how to measure the correctness of the results
obtained by the proposed algorithm. We decided on the following three ways.
Remark 3.1 Training Set Correctness The k-Median algorithm (k = 2) is
applied to a database with two known classes to obtain centers. Training correctness
is measured by the ratio of the sum of the number of examples of the majority class in
each cluster to the total number of points in the database. The k-Median training
set correctness is compared to that of the k-Mean Algorithm as well as the training
correctness of a supervised learning method, a perceptron trained by robust linear
programming [2]. Table 1 shows results averaged over ten random starts for the
publicly available Wisconsin Diagnostic Breast Cancer (WDBC) database as well as
three others [15, 16]. We note that for two of the databases k-Median outperformed
k-Mean, and for the other two k-Mean was better.
Algorithm               WDBC    Cleveland   Votes   Star/Galaxy-Bright
Unsupervised k-Median   93.2%   80.6%       84.6%   87.6%
Unsupervised k-Mean     91.1%   83.1%       85.5%   85.6%
Supervised Robust LP    100%    86.5%       95.6%   99.7%

Table 1: Training set correctness using the unsupervised k-Median
and k-Mean Algorithms and the supervised Robust LP on four databases
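As an illustration of the correctness measure of Remark 3.1, the following sketch computes the majority-class ratio from cluster assignments and known class labels (function and variable names are ours, not the paper's):

```python
# Training set correctness: fraction of points belonging to the majority
# class of their own cluster (Remark 3.1).
import numpy as np

def training_correctness(labels, y):
    correct = 0
    for l in np.unique(labels):
        _, counts = np.unique(y[labels == l], return_counts=True)
        correct += counts.max()       # majority-class count in cluster l
    return correct / len(y)
```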
Remark 3.2 Testing Set Correctness The idea behind this approach is that supervised learning
may be costly due to problem size, difficulty in obtaining true classification, etc.; hence
the importance of good performance of an unsupervised learning algorithm on a testing
subset of a database. The WDBC database [15] is split into training and testing subsets
of different proportions. The k-Median and k-Mean Algorithms (k = 2) are applied to
the training subset. The centers are given class labels determined by the majority class
of training subset points assigned to the cluster. Class labels are assigned to the testing
subset by the label of the closest center. Testing correctness is determined by the number of points in the testing
subset correctly classified by this assignment. This is compared to the correctness
of a supervised learning method, a perceptron trained via robust linear programming
[2], using the leave-one-out strategy applied to the testing subset only. This comparison is then carried out for various sizes of the testing subset. Figure 1 shows
the results averaged over 50 runs for each of 7 testing subset sizes. As expected,
the performance of the supervised learning algorithm (Robust LP) improves as the
size of the testing subset increases. The k-Median Algorithm test set correctness remained fairly constant in the range of 92.3% to 93.5%, while the k-Mean Algorithm
test set correctness was lower and more varied, in the range 88.0% to 91.3%.

Figure 1: Correctness on variable-size test set of unsupervised k-Median & k-Mean Algorithms versus correctness of the supervised Robust LP on WDBC. [Plot: testing set correctness vs. testing set size (% of original), with curves for k-Median, k-Mean, and Robust LP.]
Remark 3.3 Separability of Survival Curves In mining medical databases,
survival curves [10] are important prognostic tools. We applied the k-Median and
k-Mean (k = 3) Algorithms, as knowledge discovery in database (KDD) tools [6],
to the Wisconsin Prognostic Breast Cancer Database (WPBC) [15] using only two
features: tumor size and lymph node status.
Figure 2: Survival curves for the 3 clusters obtained by the k-Median and k-Mean
Algorithms. [Panels: (a) k-Median; (b) k-Mean.]

Survival curves were constructed for
each cluster, representing expected percent of surviving patients as a function of
time, for patients in that cluster. Figure 2(a) depicts the survival curves from
clusters obtained from the k-Median Algorithm, while Figure 2(b) depicts curves for the
k-Mean Algorithm. The key observation to make here is that curves in Figure 2(a)
are well separated, and hence the clusters can be used as prognostic indicators. In
contrast, the curves in Figure 2(b) are poorly separated, and hence are not useful
for prognosis.
4
Conclusion
We have proposed a new approach for assigning points to clusters based on a simple
concave minimization model. Although a global solution to the problem cannot be
guaranteed, a finite and simple k-Median Algorithm quickly locates a very useful
stationary point. Utility of the proposed algorithm lies in its ability to handle large
databases, and hence it would be a useful tool for data mining. Comparing it with
the k-Mean Algorithm, we have exhibited instances where the k-Median Algorithm
is superior, and hence preferable. Further research is needed to pinpoint types of
problems for which the k-Median Algorithm is best.
5
Acknowledgements
Our colleague Jude Shavlik suggested the testing set strategy used in Remark 3.2.
This research is supported by National Science Foundation Grants CCR-9322479
and National Institutes of Health NRSA Fellowship 1 F32 CA 68690-01.
References
[1] K. AI-Sultan. A Tabu search approach to the clustering problem. Pattern
Recognition, 28(9):1443-1451, 1995.
[2] K. P. Bennett and O. L. Mangasarian. Robust linear programming discrimination of two linearly inseparable sets. Optimization Methods and Software,
1:23-34, 1992.
[3] K. P. Bennett and O. L. Mangasarian. Bilinear separation of two sets in n-space. Computational Optimization & Applications, 2:207-227, 1993.
[4] S.-J. Chung. NP-completeness of the linear complementarity problem. Journal
of Optimization Theory and Applications, 60:393-399, 1989.
[5] F. Cordellier and J. Ch. Fiorot. On the Fermat-Weber problem with convex
cost functionals. Mathematical Programming, 14:295-311, 1978.
[6] U. Fayyad, G. Piatetsky-Shapiro, and P. Smyth. The KDD process for extracting useful knowledge from volumes of data. Communications of the ACM,
39:27-34, 1996.
[7] D. Fisher. Knowledge acquisition via incremental conceptual clustering. Machine Learning, 2:139-172, 1987.
[8] K. Fukunaga. Statistical Pattern Recognition. Academic Press, NY, 1990.
[9] A. K. Jain and R. C. Dubes. Algorithms for Clustering Data. Prentice-Hall,
Inc, Englewood Cliffs, NJ, 1988.
[10] E. L. Kaplan and P. Meier. Nonparametric estimation from incomplete observations. J. Am. Stat. Assoc., 53:457-481, 1958.
[11] O. L. Mangasarian. Characterization of linear complementarity problems as
linear programs. Mathematical Programming Study, 7:74-87, 1978.
[12] O. L. Mangasarian. Misclassification minimization. Journal of Global Optimization, 5:309-323, 1994.
[13] O. L. Mangasarian. Nonlinear Programming. SIAM, Philadelphia, PA, 1994.
[14] O. L. Mangasarian. The linear complementarity problem as a separable bilinear
program. Journal of Global Optimization, 6:153-161, 1995.
[15] P. M. Murphy and D. W. Aha. UCI repository of machine learning databases.
Department of Information and Computer Science, University of California,
Irvine, www.ics.uci.edu/AI/ML/MLDBRepository.html, 1992.
[16] S. Odewahn, E. Stockwell, R. Pennington, R. Hummphreys, and W. Zumach.
Automated star/galaxy discrimination with neural networks. Astronomical
Journal, 103(1):318-331, 1992.
[17] M. L. Overton. A quadratically convergent method for minimizing a sum of
euclidean norms. Mathematical Programming, 27:34-63, 1983.
[18] S. Z. Selim and M. A. Ismail. K-Means-Type algorithms: a generalized convergence theorem and characterization of local optimality. IEEE Transactions
on Pattern Analysis and Machine Intelligence, PAMI-6:81-87, 1984.
286 | 1,261 | Continuous sigmoidal belief networks
trained using slice sampling
Brendan J. Frey
Department of Computer Science, University of Toronto
6 King's College Road, Toronto, Canada M5S 1A4
Abstract
Real-valued random hidden variables can be useful for modelling
latent structure that explains correlations among observed variables. I propose a simple unit that adds zero-mean Gaussian noise
to its input before passing it through a sigmoidal squashing function. Such units can produce a variety of useful behaviors, ranging
from deterministic to binary stochastic to continuous stochastic. I
show how "slice sampling" can be used for inference and learning
in top-down networks of these units and demonstrate learning on
two simple problems.
1
Introduction
A variety of unsupervised connectionist models containing discrete-valued hidden
units have been developed. These include Boltzmann machines (Hinton and Sejnowski 1986), binary sigmoidal belief networks (Neal 1992) and Helmholtz machines (Hinton et al. 1995; Dayan et al. 1995). However, some hidden variables,
such as translation or scaling in images of shapes, are best represented using continuous values. Continuous-valued Boltzmann machines have been developed (Movellan
and McClelland 1993), but these suffer from long simulation settling times and the
requirement of a "negative phase" during learning. Tibshirani (1992) and Bishop et
al. (1996) consider learning mappings from a continuous latent variable space to a
higher-dimensional input space. MacKay (1995) has developed "density networks"
that can model both continuous and categorical latent spaces using stochasticity at
the top-most network layer. In this paper I consider a new hierarchical top-down
connectionist model that has stochastic hidden variables at all layers; moreover,
these variables can adapt to be continuous or categorical.
The proposed top-down model can be viewed as a continuous-valued belief network, which can be simulated by performing a quick top-down pass (Pearl 1988).
Work done on continuous-valued belief networks has focussed mainly on Gaussian
random variables that are linked linearly such that the joint distribution over all
Figure 1: (a) shows the inner workings of the proposed unit. (b) to (e) illustrate
four quite different modes of behavior: (b) deterministic mode; (c) stochastic linear
mode; (d) stochastic nonlinear mode; and (e) stochastic binary mode (note the
different horizontal scale). For the sake of graphical clarity, the density functions
are normalized to have equal maxima and the subscripts are left off the variables.
variables is also Gaussian (Pearl 1988; Heckerman and Geiger 1995). Lauritzen
et al. (1990) have included discrete random variables within the linear Gaussian
framework. These approaches infer the distribution over unobserved unit activities
given observed ones by "probability propagation" (Pearl 1988). However, this procedure is highly suboptimal for the richly connected networks that I am interested
in. Also, these approaches tend to assume that all the conditional Gaussian distributions represented by the belief network can be easily derived using information
elicited from experts. Hofmann and Tresp (1996) consider the case of inference
and learning in continuous belief networks that may be richly connected. They use
mixture models and Parzen windows to implement conditional densities.
My main contribution is a simple, but versatile, continuous random unit that can
operate in several different modes ranging from deterministic to binary stochastic
to continuous stochastic. This spectrum of behaviors is controlled by only two
parameters. Whereas the above approaches assume a particular mode for each
unit (Gaussian or discrete), the proposed units are capable of adapting in order to
operate in whatever mode is most appropriate.
2
Description of the unit
The proposed unit is shown in Figure 1a. It is similar to the deterministic sigmoidal
unit used in multilayer perceptrons, except that Gaussian noise is added to the total
input, I-'i, before the sigmoidal squashing function is applied. 1 The probability
density over presigmoid activity Xi for unit i is
$$p(x_i \mid \mu_i, \sigma_i^2) = \exp[-(x_i - \mu_i)^2 / 2\sigma_i^2]\,\big/\,\sqrt{2\pi\sigma_i^2}, \tag{1}$$
where $\mu_i$ and $\sigma_i^2$ are the mean and variance for unit $i$. A postsigmoid activity, $y_i$, is
obtained by passing the presigmoid activity through a sigmoidal squashing function:
$$y_i = \Phi(x_i). \tag{2}$$
Including the transformation Jacobian, the postsigmoid distribution for unit $i$ is
$$p(y_i \mid \mu_i, \sigma_i^2) = \frac{\exp[-(\Phi^{-1}(y_i) - \mu_i)^2 / 2\sigma_i^2]}{\Phi'(\Phi^{-1}(y_i))\,\sqrt{2\pi\sigma_i^2}}. \tag{3}$$
¹Geoffrey Hinton suggested this unit as a way to make factor analysis nonlinear.
I use the cumulative Gaussian squashing function:
$$\Phi(x) = \int_{-\infty}^{x} \frac{e^{-z^2/2}}{\sqrt{2\pi}}\, dz, \qquad \Phi'(x) = \phi(x) = \frac{e^{-x^2/2}}{\sqrt{2\pi}}. \tag{4}$$
Both $\Phi(\cdot)$ and $\Phi^{-1}(\cdot)$ are nonanalytic, so I use the C-library erf() function to implement $\Phi(\cdot)$ and table lookup with quadratic interpolation to implement $\Phi^{-1}(\cdot)$.
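As a sketch of how such a unit can be simulated (function names and the σ = 150 illustration are ours; scipy's norm.cdf plays the role of Φ):

```python
# Sample the unit of Figure 1a: Gaussian noise is added to the mean, then the
# cumulative-Gaussian squashing of eq. (4) is applied.
import numpy as np
from scipy.stats import norm

def sample_unit(mu, sigma, size=100000, rng=np.random.default_rng(0)):
    x = mu + sigma * rng.standard_normal(size)   # presigmoid activity, eq. (1)
    return norm.cdf(x)                           # postsigmoid activity y = Phi(x)

y = sample_unit(mu=0.0, sigma=150.0)             # stochastic binary mode
print((y > 0.5).mean())                          # approx Phi(mu/sigma) = 0.5
print(((y > 0.1) & (y < 0.9)).mean())            # < 1% of mass in (0.1, 0.9)
```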
Networks of these units can represent a broad range of structures, including deterministic multilayer perceptrons, binary sigmoidal belief networks (aka. stochastic
multilayer perceptrons), mixture models, mixture of expert models, hierarchical
mixture of expert models, and factor analysis models. This versatility is brought
about by a range of significantly different modes of behavior available to each unit.
Figures 1b to 1e illustrate these modes.
Deterministic mode: If the noise variance of a unit is very small, the postsigmoid
activity will be a practically deterministic sigmoidal function of the mean. This
mode is useful for representing deterministic nonlinear mappings such as those found
in deterministic multilayer perceptrons and mixture of expert models.
Stochastic linear mode: For a given mean, if the squashing function is approximately linear over the span of the added noise, the postsigmoid distribution will
be approximately Gaussian with the mean and standard deviation linearly transformed. This mode is useful for representing Gaussian noise effects such as those
found in mixture models, the outputs of mixture of expert models, and factor analysis models.
Stochastic nonlinear mode: If the variance of a unit in the stochastic linear
mode is increased so that the squashing function is used in its nonlinear region, a
variety of distributions are producible that range from skewed Gaussian to uniform
to bimodal.
Stochastic binary mode: This is an extreme case of the stochastic nonlinear
mode. If the variance of a unit is very large, then nearly all of the probability mass
will lie near the ends of the interval (0,1) (see figure 1e). Using the cumulative
Gaussian squashing function and a standard deviation of 150, less than 1% of the
mass lies in (0.1,0.9). In this mode, the postsigmoid activity of unit i appears to
be binary with probability of being "on" (i.e., $y_i > 0.5$ or, equivalently, $x_i > 0$):
$$p(i\ \text{on} \mid \mu_i, \sigma_i^2) = \int_0^\infty \frac{\exp[-(x - \mu_i)^2 / 2\sigma_i^2]}{\sqrt{2\pi\sigma_i^2}}\, dx = \int_{-\infty}^{\mu_i} \frac{\exp[-x^2 / 2\sigma_i^2]}{\sqrt{2\pi\sigma_i^2}}\, dx = \Phi\!\left(\frac{\mu_i}{\sigma_i}\right). \tag{5}$$
This sort of stochastic activation is found in binary sigmoidal belief networks
(Jaakkola et al. 1996) and in the decision-making components of mixture of expert models and hierarchical mixture of expert models.
3
Continuous sigmoidal belief networks
If the mean of each unit depends on the activities of other units and there are
feedback connections, it is difficult to relate the density in equation 3 to a joint
distribution over all unit activities, and simulating the model would require a great
deal of computational effort. However, when a top-down topology is imposed on
the network (making it a directed acyclic graph), the densities given in equations 1
and 3 can be interpreted as conditional distributions and the joint distribution over
all units can be expressed as
$$p(\{x_i\}) = \prod_{i=1}^{N} p(x_i \mid \{x_j\}_{j<i}) \quad \text{or} \quad p(\{y_i\}) = \prod_{i=1}^{N} p(y_i \mid \{y_j\}_{j<i}), \tag{6}$$
where $N$ is the number of units. $p(x_i \mid \{x_j\}_{j<i})$ and $p(y_i \mid \{y_j\}_{j<i})$ are the presigmoid
and postsigmoid densities of unit i conditioned on the activities of units with lower
indices. This ordered arrangement is the foundation of belief networks (Pearl, 1988).
I let the mean of each unit be determined by a linear combination of the postsigmoid
activities of preceding units:
$$\mu_i = \sum_{j<i} w_{ij}\, y_j, \tag{7}$$
where $y_0 \equiv 1$ is used to implement biases. The variance for each unit is independent
of unit activities. A single sample from the joint distribution can be obtained by
using the bias as the mean for unit 1, randomly picking a noise value for unit 1,
applying the squashing function, computing the mean for unit 2, picking a noise
value for unit 2, and so on in a simple top-down pass.
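The top-down pass just described might be sketched as follows, assuming the weights are stored in a lower-triangular matrix whose column 0 holds the biases (an illustrative layout of ours, not specified in the paper):

```python
# Ancestral (top-down) sampling of the belief network, eqs. (1), (2) and (7).
import numpy as np
from scipy.stats import norm

def top_down_sample(w, sigma, rng=np.random.default_rng(0)):
    """w: (N+1)x(N+1) lower-triangular weights; sigma: per-unit noise std
    (index 0 of both arrays belongs to the constant bias 'unit')."""
    N = w.shape[0] - 1
    y = np.empty(N + 1)
    y[0] = 1.0                                   # bias, y_0 = 1
    for i in range(1, N + 1):
        mu = w[i, :i] @ y[:i]                    # eq. (7)
        x = mu + sigma[i] * rng.standard_normal()
        y[i] = norm.cdf(x)                       # eq. (2)
    return y[1:]
```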
Inference by slice sampling
Given the activities of a set of visible (observed) units, V, inferring the distribution
over the remaining set of hidden (unobserved) units, H, is in general a difficult task.
The brute force procedure proceeds by obtaining the posterior density using Bayes
theorem:
$$p(\{y_i\}_{i\in H} \mid \{y_i\}_{i\in V}) = p(\{y_i\}_{i\in H}, \{y_i\}_{i\in V}) \Big/ \int p(\{y_i\}_{i\in H}, \{y_i\}_{i\in V}) \prod_{i\in H} dy_i. \tag{8}$$
However, computing the integral in the denominator exactly is computationally
intractable for any more than a few hidden units. The combinatorial explosion
encountered in the corresponding sum for discrete-valued belief networks pales in
comparison to this integral; not only is it combinatorial, but it is a continuous
integral with a multimodal integrand whose peaks may be broad in some dimensions
but narrow in others, depending on what modes the units are in.
An alternative to explicit integration is to sample from the posterior distribution
using Markov chain Monte Carlo. Given a set of observed activities, this procedure
produces a state sequence, $\{y_i\}^{(0)}_{i\in H}, \{y_i\}^{(1)}_{i\in H}, \dots, \{y_i\}^{(t)}_{i\in H}, \dots$, that is guaranteed to
converge to the posterior distribution. Each successive state is randomly selected
based on knowledge of only the previous state. To simplify these random choices,
I consider changing only one unit at a time when making a state transition. Ideally, the new activity of unit i would be drawn from the conditional distribution
$p(y_i \mid \{y_j\}_{j\ne i})$ (Gibbs sampling). However, it is difficult to sample from this distribution because it may have many peaks that range from broad to narrow.
I use a new Markov chain Monte Carlo method called "slice sampling" (Neal 1997)
to pick a new activity for each unit. Consider the problem of drawing a value y from
a univariate distribution P(y) - in this application, P(y) is the conditional distribution $p(y_i \mid \{y_j\}_{j\ne i})$. Slice sampling does not directly produce values distributed
according to P(y), but instead produces a sequence of values that is guaranteed to
converge to P(y). At each step in the sequence, the old value Yold is used as a guide
for where to pick the new value Ynew.
To perform slice sampling, all that is needed is an efficient way to evaluate a function
f(y) that is proportional to P(y) - in this application, the easily computed value
$p(y_i, \{y_j\}_{j\ne i})$ suffices, since $p(y_i, \{y_j\}_{j\ne i}) \propto p(y_i \mid \{y_j\}_{j\ne i})$. Figure 2a shows an
example of a univariate distribution, P(y). The version of slice sampling that I
use requires that all of the density lies within a bounded interval as shown. To
obtain Ynew from Yold, f(Yold) is first computed and then a uniform random value
is drawn from [0, f(Yold)]. The distribution is then horizontally "sliced" at this
value, as shown in figure 2a. Any y for which f(y) is greater than this value is
considered to be part of the slice, as indicated by the bold line segments in the
picture shown at the top of figure 2b. Ideally, Ynew would now be drawn uniformly
from the slice. However, determining the line segments that comprise the slice is
not easy, for although it is easy to determine whether a particular y is in the slice,
Figure 2: After obtaining a random slice from the density (a), random values are
drawn until one is accepted. (b) and (c) show two such sequences.
it is much more difficult to determine the line segment boundaries, especially if the
distribution is multimodal. Instead, a uniform value is drawn from the original
interval as shown in the second picture of figure 2b. If this value is in the slice it is
accepted as Ynew (note that this decision requires an evaluation of f(y)). Otherwise
either the left or the right interval boundary is moved to this new value, while
keeping Yold in the interval. This procedure is repeated until a value is accepted.
For the sequence in figure 2b, the new value is in the same mode as the old one,
whereas for the sequence in figure 2c, the new value is in the other mode. Once
Ynew is obtained, it is used as Yold for the next step.
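One such slice-sampling step can be sketched as follows, following the shrinkage procedure above; f is any function proportional to the target density, and the interface is an assumption of ours:

```python
# One slice-sampling step on the bounded interval (lo, hi).
import numpy as np

def slice_step(f, y_old, lo, hi, rng=np.random.default_rng(0)):
    height = rng.uniform(0.0, f(y_old))   # random horizontal "slice"
    left, right = lo, hi
    while True:
        y_new = rng.uniform(left, right)  # uniform draw from current interval
        if f(y_new) > height:             # accepted: y_new lies in the slice
            return y_new
        if y_new < y_old:                 # otherwise shrink the interval,
            left = y_new                  # keeping y_old inside it
        else:
            right = y_new
```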
If the top-down influence causes there to be two very narrow peaks in $p(y_i \mid \{y_j\}_{j\ne i})$
(corresponding to a unit in the stochastic binary mode) the slices will almost always
consist of two very short line segments and it will be very difficult for the above
procedure to switch from one mode to another. To fix this problem, slice sampling
is performed in a new domain, $z_i = \Phi\big((x_i - \mu_i)/\sigma_i\big)$. In this domain the top-down
distribution $p(z_i \mid \{y_j\}_{j<i})$ is uniform on (0,1), so $p(z_i \mid \{y_j\}_{j\ne i}) = p(z_i \mid \{y_j\}_{j>i})$, and
I use the following function for slice sampling:
$$f(z_i) = \exp\Big[-\sum_{k=i+1}^{N} \big\{x_k - \mu_k^{\setminus i} - w_{ki}\,\Phi\big(\sigma_i \Phi^{-1}(z_i) + \mu_i\big)\big\}^2 \Big/ 2\sigma_k^2\Big], \tag{9}$$
where $\mu_k^{\setminus i} = \sum_{j<k,\, j\ne i} w_{kj}\, y_j$. Since $x_i$, $y_i$ and $z_i$ are all deterministically related,
sampling from the distribution of Zi will give appropriately distributed values for
the other two. Many slice sampling steps could be performed to obtain a reliable
sample from the conditional distribution for unit i, before moving on to the next
unit. Instead, only one slice sampling step is performed for unit i before moving
on. The latter procedure often converges to the correct distribution more quickly
than the former. The Markov chain Monte Carlo procedure I use in the simulations
below thus consists of sweeping a prespecified number of times through the set of
hidden units, while updating each unit using slice sampling.
Learning
Given training examples indexed by $\tau$, I use on-line stochastic gradient ascent to
perform maximum likelihood learning, i.e., maximize $\prod_\tau p(\{x_i\}_{i\in V})$. This consists of sweeping through the training set and for each case $\tau$ following the gradient
of $\ln p(\{x_i\})$, while sampling hidden unit values from $p(\{x_i\}_{i\in H} \mid \{x_i\}_{i\in V})$ using the
sampling algorithm described above. From equations 1, 6 and 7,
$$\Delta w_{jk} = \eta\, \partial \ln p(\{x_i\}) / \partial w_{jk} = \eta \Big(x_j - \sum_{l<j} w_{jl}\, y_l\Big) y_k \big/ \sigma_j^2, \tag{10}$$
$$\Delta \ln \sigma_j^2 = \eta\, \partial \ln p(\{x_i\}) / \partial \ln \sigma_j^2 = \eta \Big[\Big(x_j - \sum_{l<j} w_{jl}\, y_l\Big)^2 \big/ \sigma_j^2 - 1\Big] \Big/ 2, \tag{11}$$
where $\eta$ is the learning rate.
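One on-line update implementing equations (10) and (11) might look as follows, reusing the array layout of the top-down sketch above (the in-place interface and names are ours):

```python
# Gradient step for one training case, after the hidden activities x
# (and y = Phi(x)) have been sampled by the Markov chain.
import numpy as np

def learn_step(w, log_var, x, y, eta=0.001):
    """w: (N+1)x(N+1) lower-triangular weights; log_var: per-unit ln(sigma^2);
    x, y: length N+1 activities with the bias in slot 0 (x[0] unused)."""
    N = w.shape[0] - 1
    for j in range(1, N + 1):
        resid = x[j] - w[j, :j] @ y[:j]          # x_j minus its top-down mean
        var_j = np.exp(log_var[j])
        w[j, :j] += eta * resid * y[:j] / var_j            # eq. (10)
        log_var[j] += eta * (resid**2 / var_j - 1.0) / 2.0  # eq. (11)
```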
Figure 3: For each experiment (a) and (b), contours show the distribution of the
2-dimensional training cases. The inferred postsigmoid activities of the two hidden
units after learning are shown in braces for several training cases, marked by x.
4
Experiments
I designed two experiments meant to elicit the four modes of operation described in
section 2. Both experiments were based on a simple network with one hidden layer
containing two units and one visible layer containing two units. Training data was
obtained by carefully selecting model parameters so as to induce various modes of
operation and then generating 10,000 two-dimensional examples. Before training,
the weights and biases were initialized to uniformly random values between -0.1 and
0.1; log-variances were initialized to 10.0. Training consisted of 100 epochs using
a learning rate of 0.001 and 20 sweeps of slice sampling to complete each training
case. Each task required roughly five minutes on a 200 MHz MIPS R4400 processor.
The distribution of the training cases in visible unit space (Yvis1 - Yvis2) for the
first experiment is shown by the contours in figure 3a. After training the network,
I ran the inference algorithm for each of ten representative training cases. The
postsigmoid activities of the two hidden units are shown beside the cases in figure 3a;
clearly, the network has identified four classes that it labels (0,0) ... (1,1). Based on a
30x30 histogram, the Kullback-Leibler divergence between the training set and data
generated from the trained network is 0.02 bits. Figure 3b shows a similar picture
for the second experiment, using different training data. In this case, the network
has identified two categories that it labels using the first postsigmoid activity. The
second postsigmoid activity indicates how far along the respective "ridge" the data
point lies. The Kullback-Leibler divergence in this case is 0.04 bits.
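The histogram-based evaluation just described might look as follows; the smoothing constant is an assumption of ours to avoid empty bins, since the paper does not state how zero counts were handled:

```python
# KL divergence (in bits) between two 2-d samples, estimated from 30x30 histograms.
import numpy as np

def kl_bits(data_p, data_q, bins=30, eps=1e-6):
    r = [[0.0, 1.0], [0.0, 1.0]]      # postsigmoid activities live in (0, 1)
    p, _, _ = np.histogram2d(data_p[:, 0], data_p[:, 1], bins=bins, range=r)
    q, _, _ = np.histogram2d(data_q[:, 0], data_q[:, 1], bins=bins, range=r)
    p = (p + eps) / (p + eps).sum()   # smoothed, normalized bin probabilities
    q = (q + eps) / (q + eps).sum()
    return float((p * np.log2(p / q)).sum())
```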
5
Discussion
The proposed continuous-valued nonlinear random unit is meant to be a useful
atomic element for continuous belief networks in much the same way as the sigmoidal
deterministic unit is a useful atomic element for multi-layer perceptrons. Four
operational modes available to each unit allow small networks of these units to
exhibit complex stochastic behaviors. The new "slice sampling" method that I
employ for inference and learning in these networks uses easily computed local
information.
The above experiments illustrate how the same network can be used to model two
quite different types of data. In contrast, a Gaussian mixture model would require
many more components for the second task as compared to the first. Although the
methods due to Tibshirani and Bishop et al. would nicely model each submanifold
in the second task, they would not properly distinguish between categories of data in
either task. MacKay's method may be capable of extracting both the sub manifolds
and the categories, but I am not aware of any results on such a dual problem.
It is not difficult to conceive of models for which naive Markov chain Monte Carlo
procedures will become fruitlessly slow. In particular, if two units have very highly
correlated activities, the procedure of changing one activity at a time will converge
extremely slowly. Also, the Markov chain method may be prohibitive for larger
networks. One approach to avoiding these problems is to use the Helmholtz machine
(Hinton et al. 1995) or mean field methods (Jaakkola et al. 1996).
Other variations on the theme presented in this paper include the use of other types
of distributions for the hidden units (e.g., Poisson variables may be more biologically
plausible) and different ways of parameterizing the modes of behavior.
Acknowledgements
I thank Radford Neal and Geoffrey Hinton for several essential suggestions and I
also thank Peter Dayan and Tommi Jaakkola for helpful discussions. This research
was supported by grants from ITRC, IRIS, and NSERC.
References
Bishop, C. M., Svensen, M., and Williams, C. K. I. 1996. EM optimization of latent-variable
density models. In D. Touretzky, M. Mozer, and M. Hasselmo (editors), Advances in Neural
Information Processing Systems 8, MIT Press, Cambridge, MA.
Dayan, P., Hinton, G. E., Neal, R. M., and Zemel, R. S. 1995. The Helmholtz machine.
Neural Computation 7, 889-904.
Heckerman, D., and Geiger, D. 1994. Learning Bayesian networks: a unification for discrete and Gaussian domains. In P. Besnard and S. Hanks (editors), Proceedings of the
Eleventh Conference on Uncertainty in Artificial Intelligence, Morgan Kaufmann, San
Francisco, CA, 274-284.
Hinton, G. E., Dayan, P., Frey, B. J ., and Neal, R. M. 1995. The wake-sleep algorithm for
unsupervised neural networks. Science 268, 1158-1161.
Hinton, G. E., and Sejnowski, T. J. 1986. Learning and relearning in Boltzmann machines. In D. E. Rumelhart and J. L. McClelland (editors), Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Volume 1: Foundations. MIT Press,
Cambridge, MA.
Hofmann, R., and Tresp, V. 1996. Discovering structure in continuous variables using
Bayesian networks. In D. Touretzky, M. Mozer, and M. Hasselmo (editors), Advances in
Neural Information Processing Systems 8, MIT Press, Cambridge, MA.
Jaakkola, T., Saul, L. K., and Jordan, M. I. 1996. Fast learning by bounding likelihoods
in sigmoid type belief networks. In D. Touretzky, M. Mozer and M. Hasselmo (editors),
Advances in Neural Information Processing Systems 8, MIT Press, Cambridge, MA.
Lauritzen, S. L., Dawid, A. P., Larsen, B. N., and Leimer, H. G. 1990. Independence
properties of directed Markov fields. Networks 20, 491-505.
MacKay, D. J. C. 1995. Bayesian neural networks and density networks. Nuclear Instruments and Methods in Physics Research, A 354, 73-80.
Movellan, J. R., and McClelland, J . L. 1992. Learning continuous probability distributions
with symmetric diffusion networks. Cognitive Science 17, 463-496.
Neal, R. M. 1992. Connectionist learning of belief networks. Artificial Intelligence 56,
71-113.
Neal, R. M. 1997. Markov chain Monte Carlo methods based on "slicing" the density
function. In preparation.
Pearl, J. 1988. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann, San Mateo, CA.
Tibshirani, R. (1992). Principal curves revisited. Statistics and Computing 2, 183-190.
287 | 1,262 | Temporal Low-Order Statistics of Natural
Sounds
H. Attias? and C.E. Schreinert
Sloan Center for Theoretical Neurobiology and
W.M. Keck Foundation Center for Integrative Neuroscience
University of California at San Francisco
San Francisco, CA 94143-0444
Abstract
In order to process incoming sounds efficiently, it is advantageous
for the auditory system to be adapted to the statistical structure of
natural auditory scenes. As a first step in investigating the relation
between the system and its inputs, we study low-order statistical
properties in several sound ensembles using a filter bank analysis.
Focusing on the amplitude and phase in different frequency bands,
we find simple parametric descriptions for their distribution and
power spectrum that are valid for very different types of sounds.
In particular, the amplitude distribution has an exponential tail
and its power spectrum exhibits a modified power-law behavior,
which is manifested by self-similarity and long-range temporal correlations. Furthermore, the statistics for different bands within a
given ensemble are virtually identical, suggesting translation invariance along the cochlear axis. These results show that natural
sounds are highly redundant, and have possible implications to the
neural code used by the auditory system.
1
Introduction
The capacity of the auditory system to represent the auditory scene is restricted by
the finite number of cells and by intrinsic noise. This fact limits the ability of the
organism to discriminate between different sounds with similar spectro-temporal
*Corresponding author. E-mail: hagai@phy.ucsf.edu.
†E-mail: chris@phy.ucsf.edu.
characteristics. However, it is possible to enhance the discrimination ability by
a suitable choice of the encoding procedure used by the system, namely of the
transformation of sounds reaching the cochlea to neural spike trains generated in
successive processing stages in response to these sounds. In general, the choice of
a good encoding procedure requires knowledge of the statistical structure of the
sound ensemble.
For the visual system, several investigations of the statistical properties of image
ensembles and their relations to neuronal response properties have recently been
performed (Field 1987, Atick and Redlich 1990, Ruderman and Bialek 1994). In
particular, receptive fields of retinal ganglion and LGN cells were found to be consistent with an optimal-code prediction formulated within information theory (Atick
1992, Dong and Atick 1995), suggesting that the visual periphery may be designed
so as to take advantage of simple statistical properties of visual scenes.
In order to investigate whether the auditory system is similarly adapted to the
statistical structure of its own inputs, a good characterization of auditory scenes is
necessary. In this paper we take a first step in this direction by studying low-order
statistical properties of several sound ensembles. The quantities we focus on are
the spectro-temporal amplitude and phase defined as follows. For the sound s(t),
let s_ν(t) denote its components at the set of frequencies ν, obtained by filtering it
through a bandpass filter bank centered at those frequencies. Then

s_ν(t) = x_ν(t) cos(νt + φ_ν(t))     (1)

where x_ν(t) ≥ 0 and φ_ν(t) are the spectro-temporal amplitude (STA) and phase
(STP), respectively. A complete characterization of a sound ensemble with respect
to a given filter bank must be given by the joint distribution of amplitudes and
phases at all times, P(x_ν1(t_1), φ_ν1(t_1'), ..., x_νn(t_n), φ_νn(t_n')). In this paper, however,
we restrict ourselves to second-order statistics in the time domain and examine the
distribution and power spectrum of the stochastic processes x_ν(t) and φ_ν(t).
Note that the STA and STP are quantities directly relevant to auditory processing.
The different stages of the auditory system are organized in topographic frequency
maps, so that cells tuned to the same sound frequency ν are organized in stripes
perpendicular to the direction of frequency progression (see, e.g., Pickles 1988).
The neuronal responses are thus determined by x_ν and φ_ν, and by x_ν alone when
phase-locking disappears above 4-5 kHz.
2
Methods
Since it is difficult to obtain a reliable sample of an animal's auditory scene over a
sufficiently long time, we chose instead to analyze several different sound ensembles,
each consisting of a 15min sound of a certain type. We used cat vocalizations, bird
songs, wolf cries, environmental sounds, symphonic music, jazz, pop music, and
speech. The sounds were obtained from commercially available compact discs and
from recordings of animal vocalizations in two laboratories. No attempt has been
made to manipulate the recorded sounds in any way (e.g., by removing noise).
Each sound ensemble was loaded into the computer in 30 sec segments at a sampling rate of f_s = 44.1 kHz. After decimating to f_s/2, we performed the following frequency-band analysis.
[Figure 1: four panels (symphonic music, speech, cat vocalizations, environmental sounds), each plotting log10 P(a) against a = log10(x) for several frequency bands.]
Figure 1: Amplitude probability distribution in different frequency bands for four
sound ensembles.
Each segment was passed through a bandpass filter bank with impulse responses h_ν(t) to get the narrow-band component signals
s_ν(t) = s(t) * h_ν(t). We used square, non-overlapping filters with center frequencies ν logarithmically spaced within the range of 100 - 11025 Hz. The filters were
usually 1/8-octave wide, but we experimented with larger bandwidths as well. The
amplitude and phase in band ν were then obtained via the Hilbert transform

H[s_ν(t)] = s_ν(t) + (i/π) ∫ dt' s_ν(t')/(t - t') = x_ν(t) e^{i(νt + φ_ν(t))} .     (2)

The frequency content of x_ν is bounded by 0 and by the bandwidth of h_ν (Flanagan
1980), so keeping the latter below ν guarantees that the low frequencies in s_ν are
all contained in x_ν, confirming its interpretation as the amplitude modulator of
the carrier cos νt suggested by (1). The phase φ_ν, being time-dependent, produces
frequency modulation. For a given ν the results were averaged over all segments.
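A minimal Python sketch of this band analysis, assuming scipy is available; a Butterworth bandpass stands in for the square 1/8-octave filters, and all names and values are illustrative:

import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def band_amplitude_phase(s, fs, f_lo, f_hi):
    # narrow-band component s_nu(t), then the analytic signal as in eq. (2)
    b, a = butter(4, [f_lo / (fs / 2), f_hi / (fs / 2)], btype="band")
    s_nu = filtfilt(b, a, s)
    analytic = hilbert(s_nu)                      # s_nu(t) + i H[s_nu](t)
    x_nu = np.abs(analytic)                       # spectro-temporal amplitude
    total_phase = np.unwrap(np.angle(analytic))   # nu*t + phi_nu(t)
    return x_nu, total_phase

fs = 22050.0                                      # 44.1 kHz decimated by 2
t = np.arange(int(fs)) / fs
s = np.sin(2 * np.pi * 800 * t) * (1 + 0.5 * np.sin(2 * np.pi * 4 * t))
x_nu, phase = band_amplitude_phase(s, fs, 750.0, 850.0)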
3
Amplitude Distribution
We first examined the STA distribution in different frequency bands ν. Fig. 1
presents histograms of P(log10 x_ν) on a logarithmic scale for four different sound
ensembles. In order to facilitate a comparison among different bands and ensembles,
we normalized the variable to have zero mean and unit variance, ⟨log10 x_ν(t)⟩ =
0, ⟨(log10 x_ν(t))²⟩ = 1, corresponding to a linear gain control.
[Figure 2: two panels (symphonic music, speech), each plotting log10 P(a) against a = log10(x) for several values of n.]
Figure 2: n-point averaged amplitude distributions for ν = 800 Hz in two sound
ensembles, using n = 1,20,50,100,200. The speech ensemble is different from the
one used in Fig. 1.
As shown in the figure, within a given ensemble, the histograms corresponding to
different bands lie atop one another. Furthermore, although curves from different
ensembles are not identical, we found that they could all be fitted accurately to the
same parametric functional form, given by

p(x_ν) ∝ e^{-γ x_ν} / (b_0² + x_ν²)^{β/2}     (3)

with parameter values roughly in the range of 0.1 ≤ γ ≤ 1, 0 ≤ β ≤ 2.5, and
0.1 ≤ b_0 ≤ 0.6. In some cases, a mixture of two distributions of the form (3) was
necessary, suggesting the presence of two types of sound sources; see, e.g., the slight
bimodality in the lower parts of Fig. 1. Details of the fitting procedure will be given
in a longer paper. We found the form (3) to be preserved as the filter bandwidths
increased.
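For illustration, the parametric form (3) can be evaluated and normalized numerically; the parameter values in this sketch are arbitrary points inside the quoted ranges, not fitted values from the paper:

import numpy as np

def sta_density(x, gamma=0.5, beta=1.5, b0=0.3):
    # unnormalized p(x) = exp(-gamma*x) / (b0^2 + x^2)^(beta/2), as in eq. (3)
    return np.exp(-gamma * x) / (b0**2 + x**2) ** (beta / 2)

x = np.linspace(1e-3, 20.0, 4000)
p = sta_density(x)
p /= np.trapz(p, x)          # normalize on the sampled grid
print(p[0] > p[-1])          # heavy weight at soft sounds, exponential tail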
Whereas this distribution decays exponentially fast at high amplitudes (p ∝
e^{-γ x_ν}/x_ν^β), it does not vanish at low amplitudes, indicating a finite probability
for the occurrence of arbitrarily soft sounds. In contrast, the STA of a Gaussian
noise signal can be shown to be distributed according to p ∝ x_ν e^{-λ x_ν²}, which vanishes at x_ν = 0 and decays faster than (3) at large x_ν. Hence, the origin of the large
dynamic range usually associated with audio signals can be traced to the abundance
of soft sounds rather than of loud ones.
4
Amplitude Self-Similarity
An interesting probe of the STA temporal correlations is the property of scale
invariance (also called statistical self-similarity). The process x_ν(t) is scale-invariant
when any statistical quantity on a given scale (e.g., at a given temporal resolution,
determined by the sampling rate) does not change as that scale is varied. To
observe this property we examined the STA distribution p(x_ν) at different temporal
resolutions, by defining the n-point averaged amplitude

x_ν^(n)(t) = (1/n) Σ_{k=0}^{n-1} x_ν(t + kΔ)     (4)
[Figure 3: four panels (speech, symphonic music, cat vocalizations, environmental sounds), each plotting log10 S(f) against log10(f) for several frequency bands.]
Figure 3: Amplitude power spectrum in different frequency bands for four sound
ensembles.
(Δ = 1/f_s) and computing its distribution. Fig. 2 displays the histograms of
P(log10 x_ν^(n)) for the ν = 800 Hz frequency band in two sound ensembles on a logarithmic scale, using n = 1, 20, 50, 100, 200, which correspond to a temporal resolution
range of 0.75 - 150 msec. Remarkably, the histogram remains unmodified even for
n = 200. Had the x_ν(t + kΔ) been statistically independent variables, the central
limit theorem would have predicted a Gaussian p(x_ν^(n)) for large n. The fact that
this non-Gaussian distribution preserves its form as n increases implies the presence
of temporal STA correlations over long periods.
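A sketch of this self-similarity test, with a stand-in amplitude series in place of real band data:

import numpy as np

def n_point_average(x_nu, n):
    # x_nu^(n)(t) = (1/n) * sum_{k=0}^{n-1} x_nu(t + k*Delta), as in eq. (4)
    return np.convolve(x_nu, np.ones(n) / n, mode="valid")

rng = np.random.default_rng(0)
x_nu = np.exp(rng.standard_normal(100_000))       # stand-in STA series
for n in (1, 20, 50, 100, 200):
    a = np.log10(n_point_average(x_nu, n))
    a = (a - a.mean()) / a.std()                  # zero mean, unit variance
    hist, _ = np.histogram(a, bins=50, range=(-4, 4), density=True)
    # for i.i.d. data these histograms would drift toward a Gaussian as n grows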
Notice the analogy between the invariance of p(x_ν) under a change in filter bandwidth, reported in the previous section, and under a change in temporal resolution.
An x_ν with a broad bandwidth is essentially an average over the x_ν's with narrow
bandwidth within the same band, thus bandwidth invariance is a manifestation of
STA correlations across frequency bands.
5
Amplitude Power Spectrum
In order to study the temporal amplitude correlations directly, we computed the
STA power spectrum S_ν(ω) = ⟨|x̃_ν(ω)|²⟩ in different bands ν, where x̃_ν(ω) is
the Fourier transform of the log-amplitude log10 x_ν(t) obtained by a 512-point
FFT. As is well-known, the spectrum S_ν(ω) is the Fourier transform of the log-amplitude auto-correlation function c_ν(τ) = ⟨log10 x_ν(t) log10 x_ν(t + τ)⟩. We used
the zero-mean, unit-variance normalization of log10 x_ν, which implies the normalization ∫dω S_ν(ω) = const. of the spectra. Fig. 3 presents S_ν as a function of
the modulation frequency f = ω/2π on a logarithmic scale for four different sound
ensembles. Notice that, as in the case of the STA distribution, the different curves
corresponding to different frequency bands within a given ensemble lie atop one
another, including individual peaks; and whereas spectra in different ensembles are
not identical, we found a simple parametric description valid for all ensembles which
is given by

S_ν(ω) = C / (ω_0² + ω²)^{α/2}     (5)

with parameter values roughly in the range of 1 ≤ α ≤ 2.5 and 10⁻⁴ ≤ ω_0 ≤ 1. This
is a modified power-law form (note that S_ν → C/ω^α at large ω), implying long-range temporal correlations in the amplitude: these correlations decrease slowly (as
a power law in t) on a time scale of 1/ω_0, beyond which they decay exponentially
fast. Larger ω_0 contributes more to the flattening of the spectrum at low frequencies
(see especially the speech spectra) and corresponds to a shorter correlation time.
Again, in some cases a sum of two such forms was necessary, corresponding to a
mixture STA distribution as mentioned above; see, e.g., the environmental sound
spectra (lower right part of Fig. 3 and Fig. 1).
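A sketch of estimating S_ν(ω) and fitting the form (5) in log space; the toy amplitude series and all parameter choices here are illustrative stand-ins, not the paper's data:

import numpy as np
from scipy.optimize import curve_fit

def modulation_spectrum(x_nu, fs, nfft=512):
    a = np.log10(np.maximum(x_nu, 1e-12))
    a = (a - a.mean()) / a.std()                  # normalization used above
    segs = a[: len(a) // nfft * nfft].reshape(-1, nfft)
    spec = np.mean(np.abs(np.fft.rfft(segs, axis=1)) ** 2, axis=0)
    freqs = np.fft.rfftfreq(nfft, d=1.0 / fs)
    return freqs[1:], spec[1:]                    # drop the DC bin

def log_model(f, log_c, alpha, f0):
    # log10 of S(f) = C / (f0^2 + f^2)^(alpha/2), as in eq. (5)
    return log_c - (alpha / 2) * np.log10(f0**2 + f**2)

rng = np.random.default_rng(1)
x_nu = np.exp(np.cumsum(rng.standard_normal(2**15)) * 1e-2)   # toy STA series
freqs, spec = modulation_spectrum(x_nu, fs=344.0)
(log_c, alpha, f0), _ = curve_fit(log_model, freqs, np.log10(spec),
                                  p0=(0.0, 2.0, 1.0), maxfev=20000)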
The form (5) persisted as the filter bandwidth increased. In the limit of an allpass filter
(not shown) we still observed this form, a fact related to the report of (Voss and
Clarke 1975) on 1/f-like power spectra of sound 'loudness' s(t)² found in several
speech and music ensembles.
6
Phase Distribution and Power Spectrum
Whereas the STA is a non-stationary process which is locally stationary and can
thus be studied on the appropriate time scale using our methods, the STP is nonstationary even locally. A more suitable quantity to examine is its rate of change
dφ_ν/dt, called the instantaneous frequency. We studied the statistics of |dφ_ν/dt|
in different ensembles, and found its distribution to be described accurately by the
parametric form (3) with γ = 0, whereas its power spectrum could be well fitted
by the form (5). In addition, those quantities were virtually identical in different
bands within a given ensemble. More details on this work will be provided in a
longer paper.
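A small sketch of the instantaneous-frequency computation, using a synthetic phase in place of a real band (names follow the Hilbert sketch above):

import numpy as np

fs, nu = 22050.0, 800.0
t = np.arange(int(fs)) / fs
phi_nu = 0.3 * np.sin(2 * np.pi * 3.0 * t)         # toy slow phase modulation
total_phase = 2 * np.pi * nu * t + phi_nu          # as produced by eq. (2)
phi_est = total_phase - 2 * np.pi * nu * t         # remove the carrier nu
inst_freq_dev = np.abs(np.gradient(phi_est, 1.0 / fs))   # |d phi_nu / dt|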
7
Implications for Auditory Processing
We have shown that auditory scenes have several robust low-order statistical properties. The STA power spectrum has a modified power-law behavior, which is
manifested in self-similarity and temporal correlations over a few hundred milliseconds. The distribution has an exponential tail and features a finite probability for
arbitrarily soft sounds. Both the phase and amplitude statistics can be described
by simple parametrized functional forms which are valid for very different types of
sounds. These results lead to the conclusion that natural sounds are highly redundant, i.e., they occupy a very small subspace in the space of all possible sounds. It
would therefore be beneficial for the auditory system to adapt its sound representation to these statistics, thus improving the animal discrimination ability. Whether
the auditory system actually follows this design principle is an empirical question
which can be attacked by suitable experiments.
Furthermore, since different frequency bands correspond to different spatial locations on the basal membrane (Pickles 1988), the fact that the distributions and
spectra in different bands within a given ensemble are identical suggests the existence of translation invariance along the cochlear axis, i.e., all the locations in the
cochlea 'see' the same statistics. This is analogous to the translation invariance
found in natural images.
Finally, a recent theory for peripheral visual processing (Dong and Atick 1995) proposes that, in order to maximize information transmission into cortex, the LGN
performs temporal decorrelation of retinal images. Within an analogous auditory
model, the decorrelation time for sound ensembles reported here implies that the
auditory system should process incoming sounds by a few hundred msec-long segments. The ability of cortical neurons to follow in their response modulation rates
near and below 10Hz but usually not higher (Schreiner and Urbas 1988) may reflect
such a process.
Acknowledgements
We thank B. Bonham, K. Miller, S. Nagarajan, and especially W. Bialek for helpful
discussions and suggestions. We also thank F. Theunissen for making his bird song
recordings available to us. Supported by The Office of Naval Research (N00014-94-1-0547). H.A. was supported by a Sloan Foundation grant for the Sloan Center for
Theoretical Neurobiology.
References
J.J. Atick and N. Redlich (1990), Towards a theory of early visual processing. Neural
Comput. 2, 308-320.
J.J. Atick (1992), Could information theory provide an ecological theory of sensory
processing. Network 3, 213-251.
D.W. Dong and J.J. Atick (1995), Temporal decorrelation: a theory of lagged and
non-lagged responses in the lateral geniculate nucleus. Network 6, 159-178.
D.J. Field (1987), Relations between the statistics of natural images and the response properties of cortical cells. J. Opt. Soc. Am. 4, 2379-2394.
J.L. Flanagan (1980), Parametric coding of speech spectra. J. Acoust. Soc. Am.
68, 412-419.
J.O. Pickles (1988), An introduction to the physiology of hearing (2nd Ed.). San
Diego, CA: Academic Press.
D.L. Ruderman and W. Bialek (1994), Statistics of natural images: scaling in the
woods. Phys. Rev. Lett. 73, 814-817.
C.E. Schreiner and J.V. Urbas (1988), Representation of amplitude modulation in the
auditory cortex of the cat. II. Comparison between cortical fields. Hear. Res. 32,
49-63.
R.F. Voss and J. Clarke (1975), 1/f noise in music and speech. Nature 258, 317-318.
288 | 1,263 | Training Algorithms for Hidden Markov Models
Using Entropy Based Distance Functions
Yoram Singer
AT&T Laboratories
600 Mountain Avenue
Murray Hill, NJ 07974
singer@research.att.com
Manfred K. Warmuth
Computer Science Department
University of California
Santa Cruz, CA 95064
manfred@cse.ucsc.edu
Abstract
We present new algorithms for parameter estimation of HMMs. By
adapting a framework used for supervised learning, we construct iterative
algorithms that maximize the likelihood of the observations while also
attempting to stay "close" to the current estimated parameters. We use a
bound on the relative entropy between the two HMMs as a distance measure between them. The result is new iterative training algorithms which
are similar to the EM (Baum-Welch) algorithm for training HMMs. The
proposed algorithms are composed of a step similar to the expectation
step of Baum-Welch and a new update of the parameters which replaces
the maximization (re-estimation) step. The algorithm takes only negligibly more time per iteration and an approximated version uses the same
expectation step as Baum-Welch. We evaluate experimentally the new
algorithms on synthetic and natural speech pronunciation data. For sparse
models, i.e. models with relatively small number of non-zero parameters,
the proposed algorithms require significantly fewer iterations.
1 Preliminaries
We use the numbers from 0 to N to name the states of an HMM. State 0 is a special initial
state and state N is a special final state. Any state sequence, denoted by s, starts with the
initial state but never returns to it and ends in the final state. Observation symbols are also
numbers in {1, ..., M} and observation sequences are denoted by x. A discrete output
hidden Markov model (HMM) is parameterized by two matrices A and B. The first matrix
is of dimension [N, N] and a_{i,j} (0 ≤ i ≤ N-1, 1 ≤ j ≤ N) denotes the probability of
moving from state i to state j. The second matrix is of dimension [N+1, M] and b_{i,k} is the
probability of outputting symbol k at state i. The set of parameters of an HMM is denoted
by θ = (A, B). (The initial state distribution vector is represented by the first row of A.)
An HMM is a probabilistic generator of sequences. It starts in the initial state 0. It then
iteratively does the following until the final state is reached. If i is the current state then a
next state j is chosen according to the transition probabilities out of the current state (row i of
matrix A). After arriving at state j a symbol is output according to the output probabilities
of that state (row j of matrix B). Let P(x, s|θ) denote the probability (likelihood) that an
HMM θ generates the observation sequence x on the path s starting at state 0 and ending
at state N: P(x, s | |s| = |x|+1, s_0 = 0, s_{|s|} = N, θ) ≡ ∏_t a_{s_{t-1},s_t} b_{s_t,x_t}. For the
sake of brevity we omit the conditions on s and x. Throughout the paper we assume that
the HMMs are absorbing, that is from every state there is a path to the final state with a
non-zero probability. Similar parameter estimation algorithms can be derived for ergodic
HMMs. Absorbing HMMs induce a probability over all state-observation sequences,
i.e. Σ_{x,s} P(x, s|θ) = 1. The likelihood of an observation sequence x is obtained by
summing over all possible hidden paths (state sequences), P(x|θ) = Σ_s P(x, s|θ). To
obtain the likelihood for a set X of observations we simply multiply the likelihood values
for the individual sequences. We seek an HMM θ that maximizes the likelihood for a
given set of observations X, or equivalently, maximizes the log-likelihood, LL(X|θ) =
Σ_{x∈X} ln P(x|θ).
To simplify our notation we denote the generic parameter in θ by θ_i, where i ranges
from 1 to the total number of parameters in A and B (there might be less if some are
clamped to zero). We denote the total number of parameters of θ by I and leave the (fixed)
correspondence between the θ_i and the entries of A and B unspecified. The indices are
naturally partitioned into classes corresponding to the rows of the matrices. We denote by
[i] the class of parameters to which θ_i belongs and by θ_[i] the vector of all θ_j s.t. j ∈ [i]. If
j ∈ [i] then both θ_i and θ_j are parameters from the same row of one of the two matrices.
Whenever it is clear from the context, we will use [i] to denote both a class of parameters
and the row number (i.e. state) associated with the class. We now can rewrite P(x, s|θ) as
P(x, s|θ) = ∏_{i=1}^{I} θ_i^{n_i(x,s)}, where n_i(x, s) is the number of times parameter i is used along the path s
with observation sequence x. (Note that this value does not depend on the actual parameters
θ.) We next compute partial derivatives of the likelihood and the log-likelihood using this
notation.
∂P(x, s|θ)/∂θ_i = θ_1^{n_1(x,s)} ... θ_{i-1}^{n_{i-1}(x,s)} n_i(x, s) θ_i^{n_i(x,s)-1} ... θ_I^{n_I(x,s)} = n_i(x, s) P(x, s|θ)/θ_i

∂LL(X|θ)/∂θ_i = Σ_{x∈X} n_i(x|θ)/θ_i

Here n_i(x|θ) ≡ Σ_s n_i(x, s) P(s|x, θ) is the expected number of occurrences of the
transition/output that corresponds to θ_i over all paths that produce x in θ. These values are calculated in the expectation step of the Expectation-Maximization (EM) training algorithm for HMMs [7], also known as the Baum-Welch [2] or the Forward-Backward algorithm. In the next sections we use the additional following expectations,
n_i(θ) ≡ Σ_{x,s} n_i(x, s) P(x, s|θ) and n_[i](θ) ≡ Σ_{j∈[i]} n_j(θ). Note that the summation
here is over all legal x and s of arbitrary length and n_[i](θ) is the expected number of times
the state [i] was visited.
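A toy illustration of this notation, assuming a small absorbing HMM; the tables and indices below are arbitrary:

import numpy as np

A = np.array([[0.0, 0.6, 0.4, 0.0],   # row i: transitions out of state i
              [0.0, 0.5, 0.3, 0.2],
              [0.0, 0.2, 0.5, 0.3],
              [0.0, 0.0, 0.0, 0.0]])  # state N = 3 is absorbing
B = np.array([[0.0, 0.0],             # state 0 outputs nothing
              [0.7, 0.3],
              [0.4, 0.6],
              [0.5, 0.5]])

def path_prob(x, s):
    # P(x, s|theta) = prod_t a_{s_{t-1}, s_t} b_{s_t, x_t}; each factor
    # increments the count n_i(x, s) of exactly one generic parameter theta_i
    p = 1.0
    for t in range(1, len(s)):
        p *= A[s[t - 1], s[t]] * B[s[t], x[t - 1]]
    return p

print(path_prob(x=[0, 1], s=[0, 1, 3]))   # symbols are 0-indexed here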
2
Entropic distance functions for HMMs
Our training algorithms are based on the following framework of Kivinen and Warmuth
for motivating iterative updates [6]. Assume we have already done a number of iterations
and our current parameters are θ. Assume further that X is the set of observations to
be processed in the current iteration. In the batch case this set never changes and in the
on-line case X is typically a single observation. The new parameters θ̃ should stay close
to θ, which incorporates all the knowledge obtained in past iterations, but it should also
maximize the log-likelihood on the current data set X. Thus, instead of maximizing the log-likelihood we maximize U(θ̃) = η LL(X|θ̃) - d(θ̃, θ) (see [6, 5] for further motivation).
Here d measures the distance between the old and new parameters and η > 0 is a trade-off
factor. Maximizing U(θ̃) is usually difficult since both the distance function and the log-likelihood depend on θ̃. As in [6, 5], we approximate the log-likelihood by a first order
Taylor expansion around θ̃ = θ and add Lagrange multipliers for the constraints that the
parameters of each class must sum to one:

U(θ̃) ≈ η (LL(X|θ) + (θ̃ - θ) ∇_θ LL(X|θ)) - d(θ̃, θ) + Σ_[i] λ_[i] Σ_{j∈[i]} θ̃_j .     (3)
A commonly used distance function is the relative entropy. To calculate the relative entropy
between two HMMs we need to sum over all possible hidden state sequences, which leads to
the following definition,

d_RE(θ̃, θ) ≡ Σ_x P(x|θ̃) ln [P(x|θ̃)/P(x|θ)] = Σ_x (Σ_s P(x, s|θ̃)) ln [Σ_s P(x, s|θ̃) / Σ_s P(x, s|θ)] .
However, the above divergence is very difficult to calculate and is not a convex function in
θ̃. To avoid the computational difficulties and the non-convexity of d_RE we upper bound
the relative entropy using the log sum inequality [3]:

d_RE(θ̃, θ) ≤ d̃_RE(θ̃, θ) ≡ Σ_{x,s} P(x, s|θ̃) ln [P(x, s|θ̃)/P(x, s|θ)]
  = Σ_{x,s} P(x, s|θ̃) ln [∏_{i=1}^{I} θ̃_i^{n_i(x,s)} / ∏_{i=1}^{I} θ_i^{n_i(x,s)}]
  = Σ_{x,s} P(x, s|θ̃) Σ_{i=1}^{I} n_i(x, s) ln (θ̃_i/θ_i)
  = Σ_{i=1}^{I} ln (θ̃_i/θ_i) Σ_{x,s} P(x, s|θ̃) n_i(x, s) = Σ_{i=1}^{I} n_i(θ̃) ln (θ̃_i/θ_i) .
Note that for the distance function d̃_RE(θ̃, θ) an HMM is viewed as a joint distribution
between observation sequences and hidden state sequences. We can further simplify the
bound on the relative entropy using the following lemma (proof omitted).

Lemma 1 For any absorbing HMM θ and any parameter θ_i ∈ θ, n_i(θ) = θ_i n_[i](θ).

This gives the following new formula, d̃_RE(θ̃, θ) = Σ_{i=1}^{I} n_[i](θ̃) [θ̃_i ln (θ̃_i/θ_i)], which can
be rewritten as d̃_RE(θ̃, θ) = Σ_[i] n_[i](θ̃) d_RE(θ̃_[i], θ_[i]) = Σ_[i] n_[i](θ̃) Σ_{j∈[i]} θ̃_j ln (θ̃_j/θ_j).
Equation (3) is still difficult to solve since the variables n_[i](θ̃) depend on the new set of
parameters (which are not known). We therefore further approximate d̃_RE(θ̃, θ) by the
distance function d̂_RE(θ̃, θ) = Σ_[i] n_[i](θ) Σ_{j∈[i]} θ̃_j ln (θ̃_j/θ_j).

3
New Parameter Updates
We now would like to use the distance functions discussed in the previous section in U(θ̃). We
first derive our main update using this distance function. This is done by replacing d(θ̃, θ)
in U(θ̃) with d̂_RE(θ̃, θ) and setting the derivatives of the resulting U(θ̃) w.r.t. θ̃_i to 0. This
gives the following set of equations (i ∈ {1, ..., I}),

η Σ_{x∈X} n_i(x|θ) / (|X| θ_i) - n_[i](θ) (ln (θ̃_i/θ_i) + 1) + λ_[i] = 0 .

We now can solve for θ̃_i and replace λ_[i] by a normalization factor which ensures that the
sum of the parameters in [i] is 1:

θ̃_i = θ_i exp( η Σ_{x∈X} n_i(x|θ) / (n_[i](θ) |X| θ_i) ) / Σ_{j∈[i]} θ_j exp( η Σ_{x∈X} n_j(x|θ) / (n_[i](θ) |X| θ_j) ) .     (4)

The above re-estimation rule is the entropic update for HMMs.¹
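A sketch of the entropic update (4) applied to one parameter class [i]; here counts stands for Σ_{x∈X} n_i(x|θ) and n_class for n_[i](θ), both assumed precomputed, and all values are toy:

import numpy as np

def entropic_update(theta, counts, n_class, n_samples, eta):
    # theta_i * exp(eta * sum_x n_i(x|theta) / (n_[i](theta) |X| theta_i)),
    # then normalized within the class, as in eq. (4)
    z = theta * np.exp(eta * counts / (n_class * n_samples * theta))
    return z / z.sum()

theta = np.array([0.5, 0.3, 0.2])       # one row of A or B
counts = np.array([12.0, 2.0, 1.0])
print(entropic_update(theta, counts, n_class=10.0, n_samples=5, eta=1.5))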
We now derive an alternate to the update of (4). The mixture weights n_[i](θ) (which approximate the original mixture weights n_[i](θ̃) in d̃_RE(θ̃, θ)) lead to a state dependent learning
rate of η/n_[i](θ) for the parameters of class [i]. If computation time is limited (see discussion
below) then the expectations n_[i](θ) can be approximated by values that are readily available.
One possible choice is to use the sample based expectations Σ_{j∈[i]} Σ_{x∈X} n_j(x|θ)/|X| as
an approximation for n_[i](θ). These weights are needed for calculating the gradient and are
evaluated in the expectation step of Baum-Welch. Let n_[i](x|θ) ≡ Σ_{j∈[i]} n_j(x|θ); then
this approximation leads to the following distance function

d̂'_RE(θ̃, θ) = Σ_[i] (Σ_{x∈X} n_[i](x|θ) / |X|) Σ_{j∈[i]} θ̃_j ln (θ̃_j/θ_j) ,     (5)
which results in an update which we call the approximated entropic update for HMMs:
θ̃_i = θ_i exp( η Σ_{x∈X} n_i(x|θ) / (Σ_{x∈X} n_[i](x|θ) θ_i) ) / Σ_{j∈[i]} θ_j exp( η Σ_{x∈X} n_j(x|θ) / (Σ_{x∈X} n_[i](x|θ) θ_j) ) .     (6)
Given a current set of parameters θ and a learning rate η we obtain a new set of parameters
θ̃ by iteratively evaluating the right-hand side of the entropic update or the approximated
entropic update. We calculate the expectations n_i(x|θ) as done in the expectation step
of Baum-Welch. The weights n_[i](x|θ) are obtained by averaging n_j(x|θ) for j ∈ [i].
This lets us evaluate the right-hand side of the approximated entropic update. The entropic
update is slightly more involved and requires an additional calculation of n_[i](θ). (Recall
that n_[i](θ) is the expected number of times state [i] is visited, unconditioned on the data.) To
compute these expectations we need to sum over all possible sequences of state-observation
pairs. Since the probabilities of outputting the possible symbols at a given state sum to one,
calculating n_[i](θ) reduces to evaluating the probability of reaching a state for each possible
time and sequence length. For absorbing HMMs n_[i](θ) can be approximated efficiently
using dynamic programming; we compute n_[i](θ) by summing the probabilities of all legal
state sequences s of up to length CN (typically C = 3 proved to be sufficient to obtain very
accurate approximations of n_[i](θ)). Therefore, the time complexity of calculating n_[i](θ)
depends only on the number of states, regardless of the dimension of the output vector M
and the training data X.
¹A subtle improvement is possible over the update (4) by treating the transition probabilities and
output probabilities differently. First the transition probabilities are updated based on (4). Then
the state probabilities n_[i](θ) = n_[i](A) are recomputed based on the new parameters Ã. This is
possible since the state probabilities depend only on the transition probabilities and not on the output
probabilities. Finally the output probabilities are updated with (4) where the n_[i](Ã) are used in place
of the n_[i](θ).
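A corresponding sketch of the approximated entropic update (6), where the class expectation is the sample-based quantity already produced by the E step; values are toy:

import numpy as np

def approx_entropic_update(theta, counts, eta):
    class_count = counts.sum()          # sum_x n_[i](x|theta)
    z = theta * np.exp(eta * counts / (class_count * theta))
    return z / z.sum()                  # as in eq. (6)

theta = np.array([0.5, 0.3, 0.2])
counts = np.array([12.0, 2.0, 1.0])
print(approx_entropic_update(theta, counts, eta=1.5))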
4
The relation to EM and convergence properties
We first show that the EM algorithm for HMMs can be derived using our framework. To
do so, we approximate the relative entropy by the χ² distance (see [3]), d_RE(p̃, p) ≈
d_χ²(p̃, p) ≡ Σ_i (p̃_i - p_i)²/(2 p_i), and use this distance to approximate d̃_RE(θ̃, θ):

d̃_RE(θ̃, θ) ≈ d̃_χ²(θ̃, θ) ≡ Σ_[i] n_[i](θ̃) d_χ²(θ̃_[i], θ_[i]) ≈ Σ_[i] n_[i](θ) d_χ²(θ̃_[i], θ_[i])
  ≈ Σ_[i] (Σ_{x∈X} n_[i](x|θ) / |X|) d_χ²(θ̃_[i], θ_[i]) ,

where d_χ²(θ̃_[i], θ_[i]) ≡ Σ_{j∈[i]} (θ̃_j - θ_j)²/(2 θ_j). By minimizing U(θ̃) with the last version of the χ²
distance function and following the same derivation steps as for the approximated entropic
update we arrive at what we call the approximated X2 update for HMMs:
θ̃_i = (1 - η) θ_i + η Σ_{x∈X} n_i(x|θ) / Σ_{x∈X} n_[i](x|θ) .     (7)
Setting η = 1 results in the update θ̃_i = Σ_{x∈X} n_i(x|θ) / Σ_{x∈X} n_[i](x|θ), which is the
maximization (re-estimation) step of the EM algorithm.
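A sketch of the approximated χ² update (7), showing that η = 1 recovers the EM re-estimation step for one parameter class; values are toy:

import numpy as np

def chi_square_update(theta, counts, eta):
    em_step = counts / counts.sum()     # Baum-Welch re-estimation
    return (1.0 - eta) * theta + eta * em_step   # as in eq. (7)

theta = np.array([0.5, 0.3, 0.2])
counts = np.array([12.0, 2.0, 1.0])
print(np.allclose(chi_square_update(theta, counts, 1.0),
                  counts / counts.sum()))        # True: eta = 1 is EM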
Although omitted from this paper due to the lack of space, it can be shown that for
η ∈ (0, 1] the entropic updates and the χ² update improve the likelihood on each iteration.
Therefore, these updates belong to the family of Generalized EM (GEM) algorithms which
are guaranteed to converge to a local maximum given some additional conditions [4].
Furthermore, using infinitesimal analysis and a second order approximation of the likelihood
function at the (local) maximum, similar to [10], it can be shown that the approximated χ²
update is a contraction mapping and close to the local maximum there exists a learning rate
η > 1 which results in a faster rate of convergence than when using η = 1.
5
Experiments with Artificial and Natural Data
In order to test the actual convergence rate of the algorithms and to compare them to
Baum-Welch we created synthetic data using HMMs. In our experiments we mainly used
sparse models, that is, models with many parameters clamped to zero. Previous work
(e.g., [5, 6]) might suggest that the entropic updates will perform better on sparse models.
(Indeed, when we used dense models to generate the data, the algorithms showed almost
the same performance.) The training algorithms, however, were started from a randomly
chosen dense model. When comparing the algorithms we used the same initial model.
Due to different trajectories in parameter space, each algorithm may converge to a different
(local) maximum. For the clarity of presentation we show here results for cases where all
updates converged to the same maximum, which often occurs when the HMM generating the
data is sparse and there are enough examples (typically tens of observations per non-zero
parameter). We tested both the entropic updates and the χ² updates. Learning rates greater
than one speed up convergence. The two entropic updates converge almost equally fast
on synthetic data generated by an HMM. For natural data the entropic update converges
slightly faster than the approximated version. The χ² update also benefits from learning
rates larger than one. However, the χ² update needs to be used carefully since it does not
necessarily ensure non-negativity of the new parameters for η > 1. This problem is
exaggerated when the data is not generated by an HMM. We therefore used the entropic
updates in our experiments with natural data. In order to have a fair comparison, we did not
tune the learning rate η and set it to 1.5. In Figure 1 we give a comparison of the entropic
update, the approximated entropic update, and Baum-Welch (left figure), using an HMM
to generate the random observation sequences, where N = M = 40 but only 25% (10
parameters on the average for each transition/observation vector) of the parameters of the
HMM are non-zero. The performance of the entropic update and the approximated entropic
update are practically the same and both updates clearly outperform Baum-Welch. One
reason the performance of the two entropic updates is the same is that the observations were
indeed generated by an HMM. In this case, approximating the expectations n_[i](θ) by the
sample based expectations seems reasonable. These results suggest a valuable alternative
to using Baum-Welch with a predetermined sparse, potentially biased, HMM where a large
number of parameters is clamped to zero. Instead, we suggest starting with a full model and
letting one of the entropic updates find the relevant parameters. This approach is demonstrated
on the right part of Figure 1. In this example the data was generated by a sparse HMM with
100 states and 100 possible output symbols. Only 10% of the HMM's parameters were non-zero. Three log-likelihood curves are given in the figure. One is the log-likelihood achieved
by Baum-Welch when only those parameters that are non-zero in the HMM generating the
data are initialized to random non-zero values. The other two are the log-likelihood of the
entropic update and Baum-Welch when all the parameters are initialized randomly. The
curves show that the entropic update compensates for its inferior initialization in less than
10 iterations (see horizontal line in Figure 1) and from this point on it requires only 23
more iterations to converge compared to Baum-Welch which is given prior knowledge of
the non-zero parameters. In contrast, when Baum-Welch is started with a full model then
its convergence is much slower than the entropic update.
[Figure: two panels of log-likelihood versus iteration number. Left panel curves: Entropic Update, Approx. Entropic Update, and EM (Baum-Welch). Right panel curves: Entropic Update with random initialization, EM (Baum-Welch) with random initialization, and EM (Baum-Welch) with sparse initialization.]
Figure 1: Comparison of the entropic updates and Baum-Welch.
We next tested the updates on speech pronunciation data. In natural speech, a word might
be pronounced differently by different speakers. A common practice is to construct a
set of stochastic models in order to capture the variability of the possible
alternative pronunciations of a given word. This problem was studied previously in [9]
using a state merging algorithm for HMMs and in [8] using a subclass of probabilistic
finite automata. The purpose of the experiments discussed here is not to compare the above
algorithms to the entropic updates but rather compare the entropic updates to Baum-Welch.
Nevertheless, the resulting HMM pronunciation models are usually sparse. Typically, only
two or three phonemes have a non-zero output probability at a given state and the average
number of states that in practice can follow a state is about 2. Therefore, the entropic
updates may provide a good alternative to the algorithms presented in [8, 9].
We used the TIMIT (Texas Instruments-MIT) database as in [8, 9]. This database contains
the acoustic waveforms of continuous speech with phone labels from an alphabet of 62
phones which constitute a temporally aligned phonetic transcription to the uttered words.
For the purpose of building pronunciation models, the acoustic data was ignored and we
partitioned the phonetic labels according to the words that appeared in the data. The data
was filtered and partitioned so that words occurring between 20 and 100 times in the dataset
were used for training and evaluation according to the following partition. 75% of the
occurrences of each word were used as training data for the learning algorithm and the
remaining 25% were used for evaluation. We then built for each word three pronunciation
models by training a fully connected HMM whose number of states was set to 1, 1.5 and
1.75 times the longest sample (denoted by N_m). The models were evaluated by calculating
the log-likelihood (averaged over 10 different random parameter initializations) of each
HMM on the phonetic transcription of each word in the test set. In Table 1 we give
the negative log-likelihood achieved on the test data together with the average number of
iterations needed for training. Overall the differences in the log-likelihood are small which
means that the results should be interpreted with some caution. Nevertheless, the entropic
update obtained the highest likelihood on the test data while needing the least number of
iterations. The approximated entropic update and Baum-Welch achieve similar results on
the test data but the latter requires more iterations. Checking the resulting models reveals
one reason why the entropic update achieves higher likelihood values, namely, it does a
better job in setting the irrelevant parameters to zero (and it does it faster).
                     Negative Log-Likelihood           # Iterations
 # States            1.0N_m   1.5N_m   1.75N_m     1.0N_m   1.5N_m   1.75N_m
 Baum-Welch           2425     2388     2448        27.4     36.1     41.1
 Approx. EU           2426     2389     2440        25.5     35.0     37.0
 Entropic Update      2405     2352     2418        23.1     30.9     32.6
Table 1: Comparison of the entropic updates and Baum-Welch on speech pronunciation data.
6
Conclusions and future research
In this paper we have shown how the framework of Kivinen and Warmuth [6] can be used
to derive parameter update algorithms for HMMs. We view an HMM as a joint distribution
between the observation sequences and hidden state sequences and use a bound on relative
entropy as a distance between the new and old parameter settings. If we approximate the
relative entropy by the χ² distance, replace the exact state expectations by a sample based
approximation, and fix the learning rate to one then the framework yields an alternative
derivation of the EM algorithm for HMMs. Since the EM update uses sample based
estimates of the state expectations it is hard to use it in an on-line setting. In contrast, the
on-line versions of our updates can be easily derived using only one observation sequence
at a time. Also, there are alternative gradient descent based methods for estimating the
parameters of HMMs. Such methods usually employ an exponential parameterization
(such as soft-max) of the parameters (see [1]). For the case of learning one set of mixture
coefficients an exponential parameterization led to an algorithm with a slower convergence
rate compared to algorithms derived using entropic distances [5]. However, it is not clear
whether this is still the case for HMMs. Our future goal is to perform a comparative study
of the different updates with emphasis on the on-line versions.
Acknowledgments
We thank Anders Krogh for showing us the simple derivative calculations used in this paper and thank
Fernando Pereira and Yasubumi Sakakibara for interesting discussions.
References
[1] P. Baldi and Y. Chauvin. Smooth on-line learning algorithms for Hidden Markov Models. Neural Computation, 6(2), 1994.
[2] L.E. Baum and T. Petrie. Statistical inference for probabilistic functions of finite state Markov chains. Annals of Mathematical
Statistics, 37, 1966.
[3] T. Cover and J. Thomas. Elements of Information Theory. Wiley, 1991.
[4] A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum-likelihood from incomplete data via the EM algorithm. Journal
of the Royal Statistical Society, B39:1-38, 1977.
[5] D. P. Helmbold, R. E. Schapire, Y. Singer, and M. K. Warmuth. A comparison of new and old algorithms for a mixture
estimation problem. In Proceedings of the Eighth Annual Workshop on Computational Learning Theory, pages 69-78, 1995.
[6] J. Kivinen and M. K. Warmuth. Exponentiated gradient versus gradient descent for linear predictors. Information and
Computation, 1997. To appear.
[7] L. R. Rabiner and B. H. Juang. An introduction to hidden Markov models. IEEE ASSP Magazine, 3(1):4-16, 1986.
[8] D. Ron, Y. Singer, and N. Tishby. On the learnability and usage of acyclic probabilistic finite automata. In Proc. of the
Eighth Annual Workshop on Computational Learning Theory, 1995.
[9] A. Stolcke and S. Omohundro. Hidden Markov model induction by Bayesian model merging. In Advances in Neural
Information Processing Systems, volume 5. Morgan Kaufmann, 1993.
[10] L. Xu and M.I. Jordan. On convergence properties of the EM algorithm for Gaussian mixtures. Neural Computation,
8:129-151, 1996.
289 | 1,264 | Hidden Markov decision trees
Michael I. Jordan*, Zoubin Ghahramani†, and Lawrence K. Saul*
{jordan,zoubin,lksaul}@psyche.mit.edu
*Center for Biological and Computational Learning
Massachusetts Institute of Technology
Cambridge, MA USA 02139
†Department of Computer Science
University of Toronto
Toronto, ON Canada M5S 1A4
Abstract
We study a time series model that can be viewed as a decision
tree with Markov temporal structure. The model is intractable for
exact calculations, thus we utilize variational approximations . We
consider three different distributions for the approximation: one in
which the Markov calculations are performed exactly and the layers
of the decision tree are decoupled, one in which the decision tree
calculations are performed exactly and the time steps of the Markov
chain are decoupled, and one in which a Viterbi-like assumption is
made to pick out a single most likely state sequence. We present
simulation results for artificial data and the Bach chorales.
1
Introduction
Decision trees are regression or classification models that are based on a nested
decomposition of the input space. An input vector x is classified recursively by a
set of "decisions" at the nonterminal nodes of a tree, resulting in the choice of a
terminal node at which an output y is generated. A statistical approach to decision
tree modeling was presented by Jordan and Jacobs (1994), where the decisions were
treated as hidden multinomial random variables and a likelihood was computed by
summing over these hidden variables . This approach, as well as earlier statistical
analyses of decision trees, was restricted to independently, identically distributed
data. The goal of the current paper is to remove this restriction; we describe a
generalization of the decision tree statistical model which is appropriate for time
series.
The basic idea is straightforward-we assume that each decision in the decision tree
is dependent on the decision taken at that node at the previous time step. Thus we
augment the decision tree model to include Markovian dynamics for the decisions.
For simplicity we restrict ourselves to the case in which the decision variable at
a given nonterminal is dependent only on the same decision variable at the same
nonterminal at the previous time step. It is of interest, however, to consider more
complex models in which inter-nonterminal pathways allow for the possibility of
various kinds of synchronization.
Why should the decision tree model provide a useful starting point for time series
modeling? The key feature of decision trees is the nested decomposition. If we
view each nonterminal node as a basis function, with support given by the subset
of possible input vectors x that arrive at the node, then the support of each node
is the union of the support associated with its children. This is reminiscent of
wavelets, although without the strict condition of multiplicative scaling. Moreover,
the regions associated with the decision tree are polygons, which would seem to
provide a useful generalization of wavelet-like decompositions in the case of a highdimensional input space.
The architecture that we describe in the current paper is fully probabilistic. We
view the decisions in the decision tree as multinomial random variables, and we
are concerned with calculating the posterior probabilities of the time sequence of
hidden decisions given a time sequence of input and output vectors. Although
such calculations are tractable for decision trees and for hidden Markov models
separately, the calculation is intractable for our model. Thus we must make use
of approximations. We utilize the partially factorized variational approximations
described by Saul and Jordan (1996), which allow tractable substructures (e.g., the
decision tree and Markov chain substructures) to be handled via exact methods,
within an overall approximation that guarantees a lower bound on the log likelihood.
2
Architectures
2.1
Probabilistic decision trees
The "hierarchical mixture of experts" (HME) model (Jordan & Jacobs, 1994) is a
decision tree in which the decisions are modeled probabilistically, as are the outputs.
The total probability of an output given an input is the sum over all paths in the
tree from the input to the output. The HME model is shown in the graphical
model formalism in Figure 1. Here a node represents a random variable, and the
links represent probabilistic dependencies. A conditional probability distribution is
associated with each node in the graph, where the conditioning variables are the
node's parents.
Let z¹, z², and z³ denote the (multinomial) random variables corresponding to
the first, second and third levels of the decision tree.¹ We associate multinomial
probabilities P(z¹|x, η¹), P(z²|x, z¹, η²), and P(z³|x, z¹, z², η³) with the decision
nodes, where η¹, η², and η³ are parameters (e.g., Jordan and Jacobs utilized softmax transformations of linear functions of x for these probabilities). The leaf probabilities P(y|x, z¹, z², z³, θ) are arbitrary conditional probability models; e.g., linear/Gaussian models for regression problems.
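As a small illustration, the HME likelihood is a sum over decision paths; the sketch below uses two binary levels rather than three, with made-up probability tables:

import numpy as np

def hme_likelihood(p_z1, p_z2, p_y):
    # sum over all paths: P(y|x) = sum_{z1,z2} P(z1|x) P(z2|x,z1) P(y|x,z1,z2)
    total = 0.0
    for z1 in range(2):
        for z2 in range(2):
            total += p_z1[z1] * p_z2[z1, z2] * p_y[z1, z2]
    return total

p_z1 = np.array([0.6, 0.4])
p_z2 = np.array([[0.7, 0.3], [0.2, 0.8]])
p_y = np.array([[0.05, 0.20], [0.10, 0.30]])   # leaf likelihoods at this x
print(hme_likelihood(p_z1, p_z2, p_y))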
The key calculation in the fitting of the HME model to data is the calculation of
the posterior probabilities of the hidden decisions given the clamped values of x
and y. This calculation is a recursion extending upward and downward in the tree,
in which the posterior probability at a given nonterminal is the sum of posterior
probabilities associated with its children. The recursion can be viewed as a special
¹Throughout the paper we restrict ourselves to three levels for simplicity of presentation.
Figure 1: The hierarchical mixture of
experts as a graphical model. The
E step of the learning algorithm for
HME's involves calculating the posterior probabilities of the hidden (unshaded) variables given the observed
(shaded) variables.
Figure 2: An HMM as a graphical
model. The transition matrix appears
on the horizontal links and the output
probability distribution on the vertical
links. The E step of the learning algorithm for HMM's involves calculating
the posterior probabilities of the hidden (unshaded) variables given the observed (shaded) variables.
case of generic algorithms for calculating posterior probabilities on directed graphs
(see, e.g., Shachter, 1990).
2.2 Hidden Markov models
In the graphical model formalism a hidden Markov model (HMM; Rabiner, 1989) is
represented as a chain structure as shown in Figure 2. Each state node is a multinomial random variable z_t. The links between the state nodes are parameterized by
the transition matrix a(z_t | z_{t-1}), assumed homogeneous in time. The links between
the state nodes z_t and output nodes y_t are parameterized by the output probability
distribution b(y_t | z_t), which in the current paper we assume to be Gaussian with
(tied) covariance matrix \Sigma.
As in the HME model, the key calculation in the fitting of the HMM to observed
data is the calculation of the posterior probabilities of the hidden state nodes given
the sequence of output vectors. This calculation-the E step of the Baum-Welch
algorithm-is a recursion which proceeds forward or backward in the chain.
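A minimal sketch of this recursion (assuming, for illustration, a uniform initial state distribution and observation likelihoods precomputed into a table; the per-step renormalization is a standard numerical convenience, not part of the original algorithm statement):

import numpy as np

def hmm_posteriors(A, B):
    # E step for an HMM: P(z_t = i | y_1..T) by forward-backward.
    # A: (K, K) transition matrix, A[i, j] = a(z_t = j | z_{t-1} = i).
    # B: (T, K) observation likelihoods, B[t, i] = b(y_t | z_t = i);
    # for the Gaussian case B[t, i] = N(y_t; mu_i, Sigma).
    T, K = B.shape
    alpha = np.zeros((T, K))
    beta = np.ones((T, K))
    alpha[0] = B[0] / K                   # uniform initial state (assumed)
    alpha[0] /= alpha[0].sum()
    for t in range(1, T):
        alpha[t] = B[t] * (alpha[t - 1] @ A)
        alpha[t] /= alpha[t].sum()        # rescale for numerical stability
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[t + 1] * beta[t + 1])
        beta[t] /= beta[t].sum()
    gamma = alpha * beta
    return gamma / gamma.sum(axis=1, keepdims=True)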
2.3 Hidden Markov decision trees
We now marry the HME and the HMM to produce the hidden Markov decision tree
(HMDT) shown in Figure 3. This architecture can be viewed in one of two ways:
(a) as a time sequence of decision trees in which the decisions in a given decision
tree depend probabilistically on the decisions in the decision tree at the preceding
moment in time; (b) as an HMM in which the state variable at each moment in
time is factorized (cf. Ghahramani & Jordan, 1996) and the factors are coupled
vertically to form a decision tree structure.
Let the state of the Markov process defining the HMDT be given by the values of
hidden multinomial decisions z_t^1, z_t^2, and z_t^3, where the superscripts denote the level
of the decision tree (the vertical dimension) and the subscripts denote the time (the
horizontal dimension). Given our assumption that the state transition matrix has
only intra-level Markovian dependencies, we obtain the following expression for the
Figure 3: The HMDT model is an HME decision tree (in the vertical dimension)
with Markov time dependencies (in the horizontal dimension).
HMDT probability model:
P(\{z_t^1, z_t^2, z_t^3\}, \{y_t\} \mid \{x_t\}) =
    \pi^1(z_1^1 | x_1)\, \pi^2(z_1^2 | x_1, z_1^1)\, \pi^3(z_1^3 | x_1, z_1^1, z_1^2)
    \prod_{t=2}^{T} a^1(z_t^1 | x_t, z_{t-1}^1)\, a^2(z_t^2 | x_t, z_{t-1}^2, z_t^1)\, a^3(z_t^3 | x_t, z_{t-1}^3, z_t^1, z_t^2)
    \prod_{t=1}^{T} b(y_t | x_t, z_t^1, z_t^2, z_t^3)

Summing this probability over the hidden values z_t^1, z_t^2, and z_t^3 yields the HMDT likelihood.
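The sketch below scores a single hidden configuration under this model; the likelihood itself would sum such terms over all K^{3T} configurations, which is exactly the computation the approximations of Section 3 avoid. The dependence of the decision probabilities on x_t is absorbed here into precomputed log-probability tables, an illustrative simplification:

def hmdt_joint_logprob(z1, z2, z3, y_loglik, pi1, pi2, pi3, a1, a2, a3):
    # z1, z2, z3: length-T integer decision sequences for the three levels.
    # The tables (NumPy arrays or nested lists) hold log-probabilities
    # with the x_t dependence pre-absorbed:
    #   pi1[i], pi2[i][j], pi3[i][j][k]   initial decisions at t = 1
    #   a1[i_prev][i]                     a^1(z_t^1 | z_{t-1}^1)
    #   a2[j_prev][i][j]                  a^2(z_t^2 | z_{t-1}^2, z_t^1)
    #   a3[k_prev][i][j][k]               a^3(z_t^3 | z_{t-1}^3, z_t^1, z_t^2)
    #   y_loglik[t][i][j][k]              log b(y_t | z_t^1, z_t^2, z_t^3)
    T = len(z1)
    lp = pi1[z1[0]] + pi2[z1[0]][z2[0]] + pi3[z1[0]][z2[0]][z3[0]]
    for t in range(1, T):
        lp += a1[z1[t - 1]][z1[t]]
        lp += a2[z2[t - 1]][z1[t]][z2[t]]
        lp += a3[z3[t - 1]][z1[t]][z2[t]][z3[t]]
    for t in range(T):
        lp += y_loglik[t][z1[t]][z2[t]][z3[t]]
    return lp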
The HMDT is a 2-D lattice with inhomogeneous field terms (the observed data).
It is well-known that such lattice structures are intractable for exact probabilistic
calculations. Thus, although it is straightforward to write down the EM algorithm
for the HMDT and to write recursions for the calculations of posterior probabilities
in the E step, these calculations are likely to be too time-consuming for practical
use (for T time steps, J{ values per node and M levels, the algorithm scales as
O(J{M+1T)). Thus we turn to methods that allow us to approximate the posterior
probabilities of interest .
3 Algorithms
3.1 Partially factorized variational approximations
Completely factorized approximations to probability distributions on graphs can
often be obtained variationally as mean field theories in physics (Parisi, 1988). For
the HMDT in Figure 3, the completely factorized mean field approximation would
delink all of the nodes, replacing the interactions with constant fields acting at each
of the nodes. This approximation, although useful, neglects to take into account
the existence of efficient algorithms for tractable substructures in the graph.
Saul and Jordan (1996) proposed a refined mean field approximation to allow interactions associated with tractable substructures to be taken into account. The
basic idea is to associate with the intractable distribution P a simplified distribution Q that retains certain of the terms in P and neglects others, replacing them
with parameters \mu_i that we will refer to as "variational parameters." Graphically
the method can be viewed as deleting arcs from the original graph until a forest
of tractable substructures is obtained. Arcs that remain in the simplified graph
Hidden Markov Decision Trees
505
correspond to terms that are retained in Q; arcs that are deleted correspond to
variational parameters.
To obtain the best possible approximation of P we minimize the Kullback-Leibler
divergence KL(Q \| P) with respect to the parameters \mu_i. The result is a coupled
set of equations that are solved iteratively. These equations make reference to the
values of expectations of nodes in the tractable substructures; thus the (efficient)
algorithms that provide such expectations are run as subroutines. Based on the posterior expectations computed under Q, the parameters defining P are adjusted. The
algorithm as a whole is guaranteed to increase a lower bound on the log likelihood.
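For readers unfamiliar with this construction, the bound in question is the standard variational (Jensen) bound, written here in generic notation rather than the paper's:

\log P(y) = \log \sum_z Q(z)\,\frac{P(z, y)}{Q(z)} \;\ge\; \sum_z Q(z) \log P(z, y) - \sum_z Q(z) \log Q(z),

and the gap between the two sides is exactly KL(Q \| P(z \mid y)), so every decrease of the KL divergence tightens the lower bound on the log likelihood.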
3.2 A forest of chains
The HMDT can be viewed as a coupled set of chains, with couplings induced directly
via the decision tree structure and indirectly via the common coupling to the output
vector. If these couplings are removed in the variational approximation, we obtain
a Q distribution whose graph is a forest of chains. There are several ways to
parameterize this graph; in the current paper we investigate a parameterization with
time-varying transition matrices and time-varying fields. Thus the Q distribution
is given by

Q = \frac{1}{Z_Q} \prod_{t=2}^{T} \tilde{a}_t^1(z_t^1 | z_{t-1}^1)\, \tilde{a}_t^2(z_t^2 | z_{t-1}^2)\, \tilde{a}_t^3(z_t^3 | z_{t-1}^3) \prod_{t=1}^{T} \tilde{q}_t^1(z_t^1)\, \tilde{q}_t^2(z_t^2)\, \tilde{q}_t^3(z_t^3)

where \tilde{a}_t^i(z_t^i | z_{t-1}^i) and \tilde{q}_t^i(z_t^i) are potentials that provide the variational parameterization.

3.3 A forest of decision trees
Alternatively we can drop the horizontal couplings in the HMDT and obtain a
variational approximation in which the decision tree structure is handled exactly
and the Markov structure is approximated. The Q distribution in this case is
Q = \prod_{t=1}^{T} \tilde{f}_t^1(z_t^1)\, \tilde{f}_t^2(z_t^2 | z_t^1)\, \tilde{f}_t^3(z_t^3 | z_t^1, z_t^2)
Note that a decision tree is a fully coupled graphical model; thus we can view the
partially factorized approximation in this case as a completely factorized mean field
approximation on "super-nodes" whose configurations include all possible configurations of the decision tree.
3.4 A Viterbi-like approximation
In hidden Markov modeling it is often found that a particular sequence of states
has significantly higher probability than any other sequence. In such cases the
Viterbi algorithm, which calculates only the most probable path, provides a useful
computational alternative.
We can develop a Viterbi-like algorithm by utilizing an approximation Q that assigns
probability one to a single path \{\bar{z}_t^1, \bar{z}_t^2, \bar{z}_t^3\}:

Q(\{z_t^1, z_t^2, z_t^3\}) =
    1  if z_t^i = \bar{z}_t^i for all t, i
    0  otherwise                                   (1)
Figure 4: a) Artificial time series data. b) Learning curves for the HMDT.
Note that the entropy \sum Q \ln Q is zero; moreover the evaluation of the energy \sum Q \ln P
reduces to substituting \bar{z}_t^i for z_t^i in P. Thus the variational approximation is particularly simple in this case. The resulting algorithm involves a subroutine in which
a standard Viterbi algorithm is run on a single chain, with the other (fixed) chains
providing field terms.
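A sketch of such a chainwise sweep (our own illustrative formulation: the inter-level couplings and the output terms are folded into a per-level field table supplied by the caller, and the initial-state term is absorbed into the first field):

import numpy as np

def viterbi(log_a, log_field):
    # Most probable path of one chain. log_a: (K, K) log transition
    # matrix; log_field: (T, K) per-step log scores (output terms plus
    # the influence of the other, fixed chains).
    T, K = log_field.shape
    delta = np.zeros((T, K))
    back = np.zeros((T, K), dtype=int)
    delta[0] = log_field[0]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_a      # (prev state, next state)
        back[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_field[t]
    path = np.zeros(T, dtype=int)
    path[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):
        path[t] = back[t + 1][path[t + 1]]
    return path

def viterbi_like_sweep(paths, log_as, field_fn, n_sweeps=5):
    # Coordinate ascent over the three chains; field_fn(level, others)
    # must return the (T, K) log field for that level given the other
    # (fixed) chains; it is a hypothetical helper, not the authors' API.
    for _ in range(n_sweeps):
        for level in range(3):
            others = [p for i, p in enumerate(paths) if i != level]
            paths[level] = viterbi(log_as[level], field_fn(level, others))
    return paths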
4 Results
We illustrate the HMDT on (1) an artificial time series generated to exhibit spatial
and temporal structure at multiple scales, and (2) a domain which is likely to exhibit
such structure naturally: the melody lines from J.S. Bach's chorales.
The artificial data was generated from a three level binary HMDT with no inputs,
in which the root node determined coarse-scale shifts (±5) in the time series, the
middle node determined medium-scale shifts (±2), and the bottom node determined
fine-scale shifts (±0.5) (Figure 4a). The temporal scales at these three nodes, as
measured by the rate of convergence (second eigenvalue) of the transition matrices,
with 0 (1) signifying immediate (no) convergence, were 0.85, 0.5, and 0.3, respectively.
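A sketch of how such data can be generated (the shift sizes and second eigenvalues are the values quoted above; the symmetric two-state chains and the noise level are our own assumptions):

import numpy as np

rng = np.random.default_rng(0)

def sample_chain(T, lam2):
    # Binary Markov chain whose transition matrix has second eigenvalue
    # lam2: for a symmetric 2-state chain with stay-probability p the
    # eigenvalues are 1 and 2p - 1, so p = (1 + lam2) / 2.
    p_stay = (1.0 + lam2) / 2.0
    z = np.zeros(T, dtype=int)
    for t in range(1, T):
        z[t] = z[t - 1] if rng.random() < p_stay else 1 - z[t - 1]
    return z

T = 200
shifts = [5.0, 2.0, 0.5]      # coarse, medium, fine scales
lam2s = [0.85, 0.5, 0.3]      # temporal scales at the three levels
y = sum(s * (2 * sample_chain(T, l) - 1) for s, l in zip(shifts, lam2s))
y = y + 0.1 * rng.standard_normal(T)   # observation noise (assumed level)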
We implemented forest-of-chains, forest-of-trees and Viterbi-like approximations.
The learning curves for ten runs of the forest-of-chains approximation are shown in
Figure 4b. Three plateau regions are apparent, corresponding to having extracted
the coarse, medium, and fine scale structures of the time series. Five runs captured
all three spatio-temporal scales at their correct level in the hierarchy; three runs
captured the scales but placed them at incorrect nodes in the decision tree; and two
captured only the coarse-scale structure.² Similar results were obtained with the
Viterbi-like approximation. We found that the forest-of-trees approximation was
not sufficiently accurate for these data.
The Bach chorales dataset consists of 30 melody lines with 40 events each.³ Each
discrete event encoded 6 attributes: start time of the event (st), pitch (pitch),
duration (dur), key signature (key), time signature (time), and whether the event
was under a fermata (ferm).
The chorales dataset was modeled with 3-level HMDTs with branching factors (K)
of 2, 3, 4, 5, and 6 (3 runs at each size, summarized in Table 1).
²Note that it is possible to bias the ordering of the time scales by ordering the initial
random values for the nodes of the tree; we did not utilize such a bias in this simulation.
³This dataset was obtained from the UCI Repository of Machine Learning Datasets.
           Percent variance explained            Temporal scale
K    st   pitch  dur  key  time  ferm    level 1  level 2  level 3
2     3     84    6    6    95     0       1.00     1.00     0.51
3    22     38    7   93    99     0       1.00     0.96     0.85
4    55     48   36   96    99     5       1.00     1.00     0.69
5    57     41   41   97    61    99       1.00     0.95     0.75
6    70     40   58   94    10    99       1.00     0.93     0.76
Table 1: Hidden Markov decision tree models of the Bach chorales dataset: mean
percentage of variance explained for each attribute and mean temporal scales at the
different nodes.
Thirteen out of 15
runs resulted in a coarse-to-fine progression of temporal scales from root to leaves
of the tree. A typical run at branching factor 4, for example, dedicated the top
level node to modeling the time and key signatures (attributes that are constant
throughout any single chorale), the middle node to modeling start times, and the
bottom node to modeling pitch or duration.
5 Conclusions
Viewed in the context of the burgeoning literature on adaptive graphical probabilistic models-which includes HMM's, HME's, CVQ's, IOHMM's (Bengio & Frasconi,
1995), and factorial HMM's-the HMDT would appear to be a natural next step.
The HMDT includes as special cases all of these architectures, moreover it arguably
combines their best features: factorized state spaces, conditional densities, representation at multiple levels of resolution and recursive estimation algorithms. Our work
on the HMDT is in its early stages, but the earlier literature provides a reasonably
secure foundation for its development.
References
Bengio, Y., & Frasconi, P. (1995) . An input output HMM architecture. In G.
Tesauro, D. S. Touretzky & T. K. Leen, (Eds.), Advances in Neural Information
Processing Systems 7, MIT Press, Cambridge MA.
Ghahramani, Z., & Jordan, M. I. (1996). Factorial hidden Markov models. In D. S.
Touretzky, M. C. Mozer, & M. E. Hasselmo (Eds.), Advances in Neural Information
Processing Systems 8, MIT Press, Cambridge MA.
Jordan, M. I., & Jacobs, R. A. (1994). Hierarchical mixtures of experts and the
EM algorithm. Neural Computation, 6, 181-214.
Parisi, G. (1988). Statistical Field Theory. Redwood City, CA : Addison-Wesley.
Rabiner, L. (1989). A tutorial on hidden Markov models and selected applications
in speech recognition. Proceedings of the IEEE, 77, 257-285.
Saul, L. K., & Jordan, M. I. (1996). Exploiting tractable substructures in intractable
networks. In D. S. Touretzky, M. C. Mozer, & M. E. Hasselmo (Eds.), Advances in
Neural Information Processing Systems 8, MIT Press, Cambridge MA.
Shachter, R. (1990). An ordered examination of influence diagrams. Networks, 20,
535-563.
A Silicon Model
Amplitude Modulation Detection
in the Auditory Brainstem
André van Schaik, Eric Fragniere, Eric Vittoz
MANTRA Center for Neuromimetic Systems
Swiss Federal Institute of Technology
CH-1015 Lausanne
email: Andre.van_Schaik@di.epfl.ch
Abstract
Detection of the periodicity of amplitude modulation is a major step in
the determination of the pitch of a sound. In this article we will
present a silicon model that uses synchronicity of spiking neurons to
extract the fundamental frequency of a sound. It is based on the
observation that the so called 'Choppers' in the mammalian Cochlear
Nucleus synchronize well for certain rates of amplitude modulation,
depending on the cell's intrinsic chopping frequency. Our silicon
model uses three different circuits, i.e., an artificial cochlea, an Inner
Hair Cell circuit, and a spiking neuron circuit.
1. INTRODUCTION
Over the last few years, we have developed and implemented several analog VLSI
building blocks that allow us to model parts of the auditory pathway [1], [2], [3]. This
paper presents one experiment using these building blocks to create a model for the
detection of the fundamental frequency of a harmonic complex. The estimation of this
fundamental frequency by the model shows some important similarities with psychoacoustic experiments in pitch estimation in humans [4]. A good model of pitch
estimation will give us valuable insights in the way the brain processes sounds.
Furthermore, a practical application to speech recognition can be expected, either by
using the pitch estimate as an element in the acoustic vector fed to the recognizer, or by
normalizing the acoustic vector to the pitch.
Although the model doesn't yield a complete model of pitch estimation, and explains
probably only one of a few different mechanisms the brain uses for pitch estimation, it
can give us a better understanding of the physiological background of psycho-acoustic
results. An electronic model can be especially helpful, when the parameters of the model
can be easily controlled, and when the model will operate in real time.
2. THE MODEL
The model was originally developed by Hewitt and Meddis [4], and was based on the
observation that Chopper cells in the Cochlear Nucleus synchronize when the stimulus
is modulated in amplitude within a particular modulation frequency range [5].
Fig. 1. Diagram of the AM detection model. BMF=Best Modulation Frequency.
The diagram shown in figure 1 shows the elements of the model. The cochlea filters the
incoming sound signal. Since the width of the pass-band of a cochlear band-pass filter is
proportional to its cut-off frequency, the filters will not be able to resolve the individual
harmonics of a high frequency carrier (>3kHz) amplitude modulated at a low rate
(<500Hz). The outputs of the cochlear filters that have their cut-off frequency slightly
above the carrier frequency of the signal will therefore still be modulated in amplitude at
the original modulation frequency. This modulation component will therefore
synchronize a certain group of Chopper cells. The synchronization of this group of
Chopper cells can be detected using a coincidence detecting neuron, and signals the
presence of a particular amplitude modulation frequency. This model is biologically
plausible, because it is known that the choppers synchronize to a particular amplitude
modulation frequency and that they project their output towards the Inferior Colliculus
(amongst others). Furthermore, neurons that can function as coincidence detectors are
shown to be present in the Inferior Colliculus and the rate of firing of these neurons is a
band-pass function of the amplitude modulation rate. It is not known to date however if
the choppers actually project to these coincidence detector neurons.
The actual mechanism that synchronizes the chopper cells will be discussed with the
measurements in section 4. In the next section, we will first present the circuits that
allowed us to build the VLSI implementation of this model.
3. THE CIRCUITS
All of the circuits used in our model have already been presented in more detail
elsewhere, but we will present them briefly for completeness. Our silicon cochlea has
been presented in detail at NIPS'95 [1], and more details about the Inner Hair Cell
circuit and the spiking neuron circuit can be found in [2].
3.1 THE SILICON COCHLEA
The silicon cochlea consists of a cascade of second order low-pass filters. Each filter
section is biased using Compatible Lateral Bipolar Transistors (CLBTs) which control
the cut-off frequency and the quality factor of each section. A single resistive line is
used to bias all CLBTs. Because of the exponential relation between the Base-Emitter
Voltage and the Collector current of the CLBTs, the linear voltage gradient introduced
by the resistive line will yield a filter cascade with an exponentially decreasing cut-off
frequency of the filters. The output voltage of each filter Vout then represents the
displacement of a basilar membrane section. In order to obtain a representation of the
basilar membrane velocity, we take the difference between Vout and the voltage on the
internal node of the second order filter.
We have integrated this silicon cochlea using 104 filter stages, and the output of every
second stage is connected to an output pin.
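A behavioural sketch of such a cascade (digital Butterworth sections standing in for the analog filters; the sample rate, the frequency range, and the fixed quality factor are illustrative assumptions and ignore the chip's tunable Q):

import numpy as np
from scipy.signal import butter, lfilter

fs = 32000.0                         # sample rate (assumed)
n_sections = 104
f_hi, f_lo = 8000.0, 100.0           # assumed cut-off range of the cascade
cutoffs = f_hi * (f_lo / f_hi) ** (np.arange(n_sections) / (n_sections - 1))

def cochlea_cascade(x):
    # Cascade of second-order low-pass sections with exponentially
    # decreasing cut-offs; the tap at each section models the basilar
    # membrane displacement at that place. A velocity-like output can
    # be approximated by differencing a tap in time.
    taps = []
    for fc in cutoffs:
        b, a = butter(2, fc / (fs / 2))   # one 2nd-order low-pass section
        x = lfilter(b, a, x)
        taps.append(x)
    return np.array(taps)                 # (n_sections, len(input))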
3.2 THE INNER HAIR CELL MODEL
The inner hair cell circuit is used to half-wave rectify the basilar membrane velocity
signal and to perform some form of temporal adaptation, as can be seen in figure 2b.
The differential pair at the input is used to convert the input voltage into a current with
a compressive relation between input amplitude and the actual amplitude of the current.
Fig. 2. a) The Inner Hair Cell circuit. b) Measured output current.
We have integrated a small chip containing 4 independent inner hair cells.
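A behavioural sketch of this stage (the tanh compression, the adaptation time constant, and the subtraction scheme are illustrative assumptions; the actual circuit realizes these functions with a differential pair and current-mode adaptation):

import numpy as np

def inner_hair_cell(v, fs=32000.0, tau_adapt=0.010, gain=1.0):
    # Compressive transduction (tanh standing in for the differential
    # pair), half-wave rectification, then adaptation implemented as
    # subtraction of a slow first-order average of the rectified signal.
    i = np.maximum(np.tanh(gain * v), 0.0)
    a = np.exp(-1.0 / (fs * tau_adapt))
    slow = np.zeros_like(i)
    for t in range(1, len(i)):
        slow[t] = a * slow[t - 1] + (1 - a) * i[t]
    return np.maximum(i - 0.5 * slow, 0.0)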
3.3 THE SPIKING NEURON MODEL
The spiking neuron circuit is given in figure 3. The membrane of a biological neuron is
modeled by a capacitance, Cmem, and the membrane leakage current is controlled by the
gate voltage, Vleak, of an NMOS transistor. In the absence of any input (Iex = 0), the
membrane voltage will be drawn to its resting potential (controlled by Vrest) by this
leakage current. Excitatory inputs simply add charge to the membrane capacitance,
whereas inhibitory inputs are simply modeled by a negative Iex. If an excitatory current
larger than the leakage current of the membrane is injected, the membrane potential will
increase from its resting potential. This membrane potential, Vmem, is compared with a
controllable threshold voltage, Vthres, using a basic transconductance amplifier driving a
high impedance load. If Vmem exceeds Vthres, an action potential will be generated.
Fig. 3. The Spiking Neuron circuit.
The generation of the action potential happens in a similar way as in the biological
neuron, where an increased sodium conductance creates the upswing of the spike, and a
delayed increase of the potassium conductance creates the downswing. In the circuit this
is modeled as follows. If Vmem rises above Vthres, the output voltage of the comparator
will rise to the positive power supply. The output of the following inverter will thus go
low, thereby allowing the "sodium current" INa to pull up the membrane potential. At the
same time however, a second inverter will allow the capacitance CK to be charged at a
speed which can be controlled by the current IKup. As soon as the voltage on CK is high
enough to allow conduction of the NMOS to which it is connected, the "potassium
current" IK will be able to discharge the membrane capacitance.
If Vmem now drops below Vthres, the output of the first inverter will become high, cutting
off the current INa. Furthermore, the second inverter will then allow CK to be discharged
by the current IKdown. If IKdown is small, the voltage on CK will decrease only slowly, and,
as long as this voltage stays high enough to allow IK to discharge the membrane, it will
be impossible to stimulate the neuron if Iex is smaller than IK. Therefore IKdown can be
said to control the 'refractory period' of the neuron.
We have integrated a chip, containing a group of 32 neurons, each having the same bias
voltages and currents. The component mismatch and the noise ensure that we actually
have 32 similar, but not completely equal neurons.
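A discrete-time behavioural sketch of this circuit (all component values are illustrative, not the chip's; the sodium-current upswing is collapsed into an instantaneous spike event, and IKup is modeled as an instantaneous charging of CK):

import numpy as np

def spiking_neuron(i_ex, dt=1e-5, c_mem=1e-12, i_leak=1e-10,
                   v_rest=0.0, v_thres=1.0, i_k=5e-9,
                   i_k_down=2e-10, c_k=1e-12, v_k_on=0.5):
    # v integrates i_ex minus the leak; when v crosses v_thres a spike
    # is registered and the voltage on CK jumps high. While that voltage
    # exceeds v_k_on, the "potassium" current i_k discharges the membrane;
    # i_k_down sets how slowly CK discharges, i.e. the refractory period.
    v, v_ck = v_rest, 0.0
    spikes = []
    for t, i in enumerate(i_ex):
        if v > v_thres:
            spikes.append(t)
            v_ck = 1.0
        dv = (i - i_leak - (i_k if v_ck > v_k_on else 0.0)) / c_mem
        v = max(v + dt * dv, v_rest)
        v_ck = max(v_ck - dt * i_k_down / c_k, 0.0)
    return spikes

With a constant 1 nA input this sketch chops at a few hundred Hz, and increasing the input current increases the chopping rate, in qualitative agreement with the measurements below.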
4. TEST RESULTS
Most neuro-physiological data concerning low frequency amplitude modulation of high
frequency carriers exists for carriers at about 5kHz and a modulation depth of about
50%. We therefore used a 5 kHz sinusoid in our tests and a 50% modulation depth at
frequencies below 550Hz.
Fig. 4. PSTH of the chopper chip for 2 different sound intensities.
First step in the elaboration of the model is to test if the group of spiking neurons on a
single chip is capable of performing like a group of similar Choppers. Neurons in the
auditory brainstem are often characterized with a Post Stimulus Time Histogram
(PSTH), which is a histogram of spikes in response to repeated stimulation with a pure
tone of short duration. If the choppers on the chip are really similar, the PSTH of this
group of choppers will be very similar to the PSTH of a single chopper. In figure 4 the
PSTH of the circuit is shown. It is the result of the summed response of the 32 neurons
on chip to 20 repeated stimulations with a 5kHz tone burst. This figure shows that the
response of the Choppers yields a PSTH typical of chopping neurons, and that the
chopping frequency, keeping all other parameters constant, increases with increasing
sound intensity. The chopping rate for an input signal of given intensity can be
controlled by setting the refractory period of the spiking neurons, and can thus be used
to create the different groups of choppers shown in figure 1. The chopping rate of the
choppers in figure 4 is about 300Hz for a 29dB input signal.
Fig. 5. Spike generation for a Chopper cell.
To understand why the Choppers will synchronize for a certain amplitude modulation
frequency, one has to look at the signal envelope, which contains temporal information
on a time scale that can influence the spiking neurons. The 5kHz carrier itself will not
contain any temporal information that influences the spiking neuron in an important
way. Consider the case when the modulation frequency is similar to the chopping
frequency (figure 5). If a Chopper then spikes during the rising flank of the envelope, it
will come out of its refractory period just before the next rising flank of the envelope. If
the driven chopping frequency is a bit too low, the Chopper will come out of its
refractory period a bit later, therefore it receives a higher average stimulation and it
spikes a little higher on the rising flank of the envelope. This in turn increases the
chopping frequency and thus provides a form of negative feedback on the chopping
frequency. This therefore makes spiking on a certain point on the rising flank of the
envelope a stable situation. With the same reasoning one can show that spiking on the
falling flank is therefore an unstable situation. Furthermore, it is not possible to stabilize
a cell driven above its maximum chopping rate, nor is it possible to stabilize a cell that
fires more than once per modulation period. Since a group of similar choppers will
stabilize at about the same point on the rising flank, their spikes will thus coincide when
the modulation frequency allows them to.
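To make this concrete, one point of the modulation transfer function can be estimated by simulating a mismatched population of the neurons sketched in section 3.3 (reusing spiking_neuron from that sketch; the 5 kHz carrier is assumed to have been removed already by rectification and smoothing, and the mismatch level and coincidence window are our own choices):

import numpy as np

rng = np.random.default_rng(1)
dt, dur = 1e-5, 0.5                       # 10 us steps, 0.5 s per trial
t = np.arange(0.0, dur, dt)

def mtf_point(f_mod, n_cells=32, depth=0.5, coinc_frac=0.6):
    # Coincidence-detector output rate at one modulation frequency: the
    # AM envelope drives n_cells choppers whose input gain is jittered
    # to mimic component mismatch; a coincidence is counted whenever at
    # least coinc_frac of the cells fire within the same 1 ms window.
    env = 1e-9 * (1.0 + depth * np.sin(2 * np.pi * f_mod * t))
    trains = np.zeros((n_cells, len(t)), dtype=bool)
    for c in range(n_cells):
        jitter = 1.0 + 0.05 * rng.standard_normal()
        trains[c, spiking_neuron(env * jitter, dt=dt)] = True
    win = int(1e-3 / dt)                  # 1 ms coincidence window
    fired = trains.reshape(n_cells, -1, win).any(axis=2)
    return (fired.sum(axis=0) >= coinc_frac * n_cells).sum() / dur

Sweeping f_mod then traces out a band-pass curve peaked near the cells' intrinsic chopping rate, the behaviour measured in figure 6.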
Fig. 6. AM sensitivity of the coincidence detecting neuron.
Another free parameter of the model is the threshold of the coincidence detecting
neuron. If this parameter is set so that at least 60% of the choppers must spike within
1ms to be considered a coincidence, we obtain the output of figure 6. We can see that
this yields the expected band-pass Modulation Transfer Function (MTF), and that the
best modulation frequency for the 29dB input signal corresponds to the intrinsic
chopping rate of the group of neurons. Figure 6 also shows that the best modulation
frequency (BMF), just as the chopping rate, increases with increasing sound intensity,
but that the maximum number of spikes per second actually decreases. This second
effect is caused by the fact that the stabilizing effect of the positive flank of the signal
envelope only influences the time during which the neuron is being charged, which
becomes a smaller part of the total spiking period at higher intensities. The negative
feedback thus has less influence on the total chopping period and therefore synchronizes
the choppers less.
Fig. 7. AM sensitivity of the coincidence detecting neuron.
When the coincidence threshold is lowered to 50%, we can see in figure 7a that the
maximum number of spikes goes up, because this threshold is more easily reached.
Furthermore, a second pass-band shows up at double the BMF. This is because the
choppers fire only every second amplitude modulation period, and part of the group of
choppers will synchronize during the odd periods, whereas others during the even
periods. The division of the group of choppers will typically be close to, but hardly ever
exactly 50-50, so that either during the odd or during the even modulation period the
50% coincidence threshold is exceeded. The 60% threshold of figure 6 will only rarely
be exceeded, explaining the weak second peak seen around 500Hz in this figure.
Figure 7b shows the MTF for low intensity signals with a 50% coincidence threshold.
At low intensities the effect of an additional non-linearity, the stimulation threshold,
shows up. Whenever the instantaneous value of the envelope is lower than the
stimulation threshold, the spiking neuron will not be stimulated because its input current
will be lower than the cell's leakage current. At these low intensities the activity during
the valleys of the modulation envelope will thus not be enough to stimulate the Choppers
(see figure 5). For stimuli with a lower modulation frequency than the group's chopping
frequency, the Choppers will come out of their refractory period in such a valley. These
choppers therefore will have to wait for the envelope amplitude to increase above a
certain value, before they receive a new stimulation. This waiting period nullifies the
effect of the variation of the refractory period of the Choppers, and thus synchronizes the
Choppers for low modulation frequencies. A second effect of this waiting period is that
in this case the firing rate of the Choppers matches the modulation frequency. When the
modulation frequency becomes higher than the maximum chopping frequency, the
Choppers will fire only every second period, but will still be synchronized, as can be
seen between 300Hz and 500Hz in figure 7b.
5. CONCLUSIONS
In this article we have shown that it is possible to use our building blocks to build a
multi-chip system that models part of the auditory pathway. Furthermore, the fact that
the spiking neuron chip can be easily biased to function as a group of similar Choppers,
combined with the relative simplicity of the spike generation mechanism of a single
neuron on chip, allowed us to gain insight in the process by which chopping neurons in
the mammalian Cochlear Nucleus synchronize to a particular range of amplitude
modulation frequencies.
References
[1] A. van Schaik, E. Fragniere, & E. Vittoz, "Improved silicon cochlea using
compatible lateral bipolar transistors," Advances in Neural Information Processing
Systems 8, MIT Press, Cambridge, 1996.
[2] A. van Schaik, E. Fragniere, & E. Vittoz, "An analogue electronic model of ventral
cochlear nucleus neurons," Proc. Fifth Int. Conf. on Microelectronics for Neural
Networks and Fuzzy Systems, IEEE Computer Society Press, Los Alamitos, 1996,
pp. 52-59.
[3] A. van Schaik and R. Meddis, "The electronic ear: towards a blueprint,"
Neurobiology, NATO ASI series, Plenum Press, New York, 1996.
[4] M.J. Hewitt and R. Meddis, "A computer model of amplitude-modulation sensitivity
of single units in the inferior colliculus," J. Acoust. Soc. Am., 95, 1994.
[5] M.J. Hewitt, R. Meddis, & T.M. Shackleton, "A computer model of a cochlear-nucleus
stellate cell: responses to amplitude-modulated and pure tone stimuli," J.
Acoust. Soc. Am., 91, 1992, pp. 2096-2109.
PART VI
SPEECH, HANDWRITING AND SIGNAL PROCESSING
Learning temporally persistent
hierarchical representations
Suzanna Becker
Department of Psychology
McMaster University
Hamilton, Onto L8S 4K1
becker@mcmaster.ca
Abstract
A biologically motivated model of cortical self-organization is proposed. Context is combined with bottom-up information via a
maximum likelihood cost function. Clusters of one or more units
are modulated by a common contextual gating Signal; they thereby
organize themselves into mutually supportive predictors of abstract
contextual features. The model was tested in its ability to discover
viewpoint-invariant classes on a set of real image sequences of centered, gradually rotating faces. It performed considerably better
than supervised back-propagation at generalizing to novel views
from a small number of training examples.
1 THE ROLE OF CONTEXT
The importance of context effects¹ in perception has been demonstrated in many
domains. For example, letters are recognized more quickly and accurately in the
context of words (see e.g. McClelland & Rumelhart, 1981), words are recognized
more efficiently when preceded by related words (see e.g. Neely, 1991), individual
speech utterances are more intelligible in the context of continuous speech, etc. Further, there is mounting evidence that neuronal responses are modulated by context.
For example, even at the level of the LGN in the thalamus, the primary source of
visual input to the cortex, Murphy & Sillito (1987) have reported cells with "end-stopped" or length-tuned receptive fields which depend on top-down inputs from
the cortex. The end-stopped behavior disappears when the top-down connections
are removed, suggesting that the cortico-thalamic connections are providing contextual modulation to the LGN. Moving a bit higher up the visual hierarchy, von der
Heydt et al. (1984) found cells which respond to "illusory contours", in the absence
of a contoured stimulus within the cells' classical receptive fields. These examples demonstrate that neuronal responses can be modulated by secondary sources
of information in complex ways, provided the information is consistent with their
expected or preferred input.
¹We use the term context rather loosely here to mean any secondary source of input.
It could be from a different sensory modality, a different input channel within the same
modality, a temporal history of the input, or top-down information.
Figure 1: Two sequences of 48 by 48 pixel images digitized with an IndyCam and preprocessed with a Sobel edge filter. Eleven views of each of four to ten faces were used in the
simulations reported here. The alternate (odd) views of two of the faces are shown above.
Why would contextual modulation be such a pervasive phenomenon? One obvious
reason is that if context can influence processing, it can help in disambiguating or
cleaning up a noisy stimulus. A less obvious reason may be that if context can
influence learning, it may lead to more compact representations, and hence a more
powerful processing system. To illustrate, consider the benefits of incorporating
temporal history into an unsupervised classifier. Given a continuous sensory signal
as input, the classifier must try to discover important partitions in its training
data. If it can discover features that are temporally persistent, and thus insensitive
to transformations in the input, it should be able to represent the signal compactly
with a small set offeatures. FUrther, these features are more likely to be associated
with the identity of objects rather than lower-level attributes.
However, most classifiers group patterns together on the basis of spatial overlap.
This may be reasonable if there is very little shift or other form of distortion between
one time step and the next, but is not a reasonable assumption about the sensory
input to the cortex. Pre-cortical stages of sensory processing, certainly in the visual
system (and probably in other modalities), tend to remove low-order correlations in
space and time, e.g. with centre-surround filters. Consider the image sequences of
gradually rotating faces in Figure 1. They have been preprocessed by a simple edge filter, so that successive views of the same face have relatively little pixel overlap. In
contrast, identical views of different faces may have considerable overlap. Thus, a
classifier such as k-means, which groups patterns based on their Euclidean distance,
would not be expected to do well at classifying these patterns. So how are people
(and in fact very young children) able to learn to classify a virtually infinite number
of objects based on relatively brief exposures? It is argued here that the assumption
of temporal persistence is a powerful constraining factor for achieving this, and is
one which may be used to advantage in artificial neural networks as well. Not only
does it lead to the development of higher-order feature analyzers, but it can result in
more compact codes which are important for applications like image compression.
Further, as the simulations reported here show, improved generalization may be
achieved by allowing high-level expectations (e.g. of class labels) to influence the
development of lower-level feature detectors.
2 THE MODEL
Competitive learning (for a review, see Becker & Plumbley, 1996) is considered
by many to be a reasonably strong candidate model of cortical learning. It can
be implemented, in its simplest form, by a Hebbian learning rule in a network
with lateral inhibition. However, a major limitation of competitive learning, and
the majority of unsupervised learning procedures (but see the Discussion section), is
that they treat the input as a set of independent identically distributed (iid) samples.
They fail to take into account context. So they are unable to take advantage of the
temporal continuity in signals. In contrast, real sensory signals may be better viewed
as discretely sampled, continuously varying time-series rather than iid samples.
The model described here extends maximum likelihood competitive learning
(MLCL) (Nowlan, 1990) in two important ways: (i) modulation by context, and
(ii) the incorporation of several "canonical features" of neocortical circuitry. The
result is a powerful framework for modelling cortical self-organization.
MLCL retains the benefits of competitive learning mentioned above. Additionally,
it is more easily extensible because it maximizes a global cost function:
L = \sum_a \log \left[ \sum_i \pi_i \, y_i^{(a)} \right]    (1)

where the \pi_i's are positive weighting coefficients which sum to one, and the y_i's are
the clustering unit activations:

y_i^{(a)} = N(I^{(a)}; w_i, \Sigma_i)    (2)
where I^{(a)} is the input vector for pattern a, and N(\cdot) is the probability of I^{(a)} under
a Gaussian centred on the ith unit's weight vector, w_i, with covariance matrix
\Sigma_i. For simplicity, Nowlan used a single global variance parameter for all input
dimensions, and allowed it to shrink during learning. MLCL actually maximizes
the log likelihood (L) of the data under a mixture of Gaussians model, with mixing
proportions equal to the \pi's. L can be maximized by online gradient ascent² with
learning rate \epsilon:
\Delta w_{ij} = \epsilon \, \frac{\partial L}{\partial w_{ij}} = \epsilon \sum_a \frac{\pi_i \, y_i^{(a)}}{\sum_k \pi_k \, y_k^{(a)}} \left( I_j^{(a)} - w_{ij} \right)    (3)
Thus, we have a Hebbian update rule with normalization of post-synaptic unit
activations (which could be accomplished by shunting inhibition) and weight decay.
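A sketch of one online MLCL step, Eqs. (1)-(3), under Nowlan's single-global-variance simplification (the variable names are ours):

import numpy as np

def mlcl_step(I, W, pi, var, eps=0.05):
    # I: input vector (d,); W: (n, d) unit weight vectors; pi: (n,)
    # fixed mixing proportions; var: single global variance.
    d = W.shape[1]
    sq = ((I - W) ** 2).sum(axis=1)
    y = np.exp(-0.5 * sq / var) / (2 * np.pi * var) ** (d / 2)   # Eq. 2
    r = pi * y
    logL = np.log(r.sum())                  # this pattern's term in Eq. 1
    r = r / r.sum()                         # pi_i y_i / sum_k pi_k y_k
    W += eps * r[:, None] * (I - W)         # Eq. 3
    return W, logL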
2.1 Contextual modulation
To integrate a contextual information source into MLCL, our first extension is to
replace the mixing proportions (\pi_i's) by the outputs of contextual gating units (see
Figure 2). Now the \pi_i's are computed by separate processing units receiving their
own separate stream of input, the "context". The role of the gating signals here
is analogous to that of the gating network in the (supervised) "competing experts"
model (Jacobs et al., 1991).³ For the network shown in Figure 2, the context is
simply a time-delayed version of the outputs of a module (explained in the next subsection). However, more general forms of context are possible (see Discussion). In
the simulations reported here, the context units computed their outputs according
to a softmax function of their weighted summed inputs x_i:

\pi_i^{(a)} = \frac{e^{x_i^{(a)}}}{\sum_j e^{x_j^{(a)}}}    (4)
We refer to the action of the gating units (the \pi_i's) as modulatory because of the
²Nowlan (1990) used a slightly different online weight update rule that more closely
approximates the batch update rule of the EM algorithm (Dempster et al., 1977).
³However, in the competing experts architecture, both the experts and gating network
receive a common source of input. The competing experts model could be thought of as
fitting a mixture model of the training signal.
Figure 2: The architecture used in the simulations reported here. Except where indicated,
the gating units received all their inputs across unit delay lines with fixed weights of 1.0.
multiplicative effect they have on the activities of the clustering units (the y_i's).
This multiplicative interaction is built into the cost function (Equation 1), and
consequently, arises in the learning rule (Equation 3). Thus, clustering units are
encouraged to discover features that agree with the current context signal they
receive. If their context signal is weak or if they fail to capture enough of the
activation relative to the other clustering units, they will do very little learning.
Only if a unit's weight vector is sufficiently close to the current input vector and
it's corresponding gating unit is strongly active will it do substantial learning.
2.2 Modular, hierarchical architecture
Our second modification to MLCL is required to apply it to the architecture shown
in Figure 2, which is motivated by several ubiquitous features of the neocortex: a
laminar structure, and a functional organization into "cortical clusters" of spatially
nearby columns with similar receptive field properties (see e.g. Calvin, 1995). The
cortex, when flattened out, is like a large six-layered sheet. As Calvin (1995, pp.
269) succinctly puts it, " ... the bottom layers are like a subcortical 'out' box, the
middle layer like an 'in' box, and the superficial layers somewhat like an 'interoffice' box connecting the columns and different cortical areas". The middle and
superficial layer cells are analogous to the first-layer clustering units and gating
units respectively. Thus, we propose that the superficial cells may be providing the
contextual modulation. (The bottom layers are mainly involved in motor output
and are not included in the present model.) To induce a functional modularity in
our model analogous to cortical clusters, clustering units within the same module
receive a shared gating signal. The cost function and learning rule are now:
L = \sum_a \log \left[ \sum_i^n \pi_i^{(a)} \, \frac{1}{m} \sum_j^m y_{ij}^{(a)} \right]    (5)

\Delta w_{ijk} = \epsilon \sum_a \frac{\pi_i^{(a)} \, y_{ij}^{(a)}}{\sum_q \sum_r \pi_q^{(a)} \, y_{qr}^{(a)}} \left( I_k^{(a)} - w_{ijk} \right)    (6)
Thus, units in the same module form predictions y_{ij}^{(a)} of the same contextual feature
\pi_i^{(a)}. Fortunately, there is a disincentive to all of them discovering identical weights:
they would then do poorly at modelling the input.
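One step of the context-modulated, modular version, Eqs. (4)-(6), might then be sketched as follows (only the clustering weights are updated; the gating units' input weights are fixed delay lines in the simulations below, so no gating-weight update is shown):

import numpy as np

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

def contextual_mlcl_step(I, ctx, W, V, var, eps=0.05):
    # I: input vector (d,); ctx: context vector (d_ctx,), e.g. the
    # time-delayed gating activities; W: (n_modules, m_units, d)
    # clustering weights; V: (n_modules, d_ctx) fixed gating weights.
    n, m, d = W.shape
    pi = softmax(V @ ctx)                                   # Eq. 4
    sq = ((I - W) ** 2).sum(axis=2)                         # (n, m)
    y = np.exp(-0.5 * sq / var) / (2 * np.pi * var) ** (d / 2)
    r = pi[:, None] * y
    logL = np.log(r.mean(axis=1).sum())    # this pattern's term in Eq. 5
    r = r / r.sum()
    W += eps * r[:, :, None] * (I - W)                      # Eq. 6
    return W, pi, logL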
3 EXPERIMENTS
As a simple test of this model, it was first applied to a set of image sequences of
four centered, gradually rotating faces (see Figure 1), divided into training and test
                        Layer   Training Set   Test Set
no context, 4 faces:      1      59.2 (2.4)    65 (3.5)
context, 4 faces:         1      88.4 (3.9)    74.5 (4.2)
                          2      88.8 (4.0)    72.7 (4.8)
context, 10 faces:        1      96.3 (1.2)    71.0 (3.0)
                          2      91.8 (2.4)    70.2 (4.3)
Table 1: Mean percent (and standard error) correctly classified faces, across 10 runs,
for unsupervised clustering networks trained for 2000 iterations with a learning rate of
0.5, with and without temporal context. Layer 1: clustering units. Layer 2: gating units.
Performance was assessed as follows: Each unit was assigned to predict the face class for
which it most frequently won (was the most active). Then for each pattern, the layer's
activity vector was counted as correct if the winner correctly predicted the face identity.
sets by taking alternating views. It was predicted that the clustering units should
discover "features" such as individual views of specific faces. Further, different views
of the same face should be clustered together within a module because they will be
observed in the same temporal context, while the gating units should discover the
identity of faces, independent of viewpoint.
First, the baseline effect of the temporal context on clustering performance was
assessed by comparing the network shown in Figure 2 to the same network with the
input connections to the gating layer removed. The latter is equivalent to MLCL
with fixed, equal \pi_i's. The results are summarized in Table 1. As predicted, the
temporal context provides incentive for the clustering units to group successive
instances of the same face together, and the gating layer can therefore do very well
at classifying the faces with a much smaller number of units - i.e., independently of
viewpoint. In contrast, the clustering units without the contextual signal are more
likely to group together similar views of different people's faces.
Next, to explore the scaling properties of the model, a network like the one shown
in Figure 2 but with 10 modules was presented with a set of 10 faces, 11 views each.
As before, the odd-numbered views were trained on and the even-numbered views
were tested on. To achieve comparable performance to the smaller network, the
weights on the self-pointing connections on the gating units were increased from 1.0
to 3.0, which increased the time constant of temporal averaging. The model then
had no difficulty scaling up to the larger training set size, as shown in Table 1.
Based on the unexpected success of this model, its classification performance was
then compared against supervised back-propagation networks on the four face sequences. The first supervised network we tried was a simple recurrent network with
essentially the same architecture: one layer of Gaussian units followed by one layer
of recurrent softmax units with fixed delay lines. Over ten runs of each model,
although the unsupervised classifier did worse on the training set (it averaged 88%
while the supervised model always scored 100% correct), it outperformed the supervised model in its generalization ability by a considerable margin (it averaged
73% while the supervised model averaged 45% correct).
Finally, a feedforward back-propagation network with sigmoid units was trained.
The following constraint on the hidden layer activations, h_j(t), was added to the cost
function to encourage temporal smoothness:⁴

hidden state cost = \lambda \sum_j \left( h_j(t) - h_j(t-1) \right)^2
⁴As Geoff Hinton pointed out, the above constraint, if normalized by the variance,
maximizes the mutual information between hidden unit states at adjacent time steps.
Figure 3: Learning curves, averaged over five runs, for the feedforward supervised net with
a temporal smoothness constraint, for each of four levels of the parameter \lambda.
As the results
in Figure 3 show, a feedforward network with no contextual input was thereby able
to perform as well as our unsupervised model when it was constrained to develop
hidden layer representations that clustered temporally adjacent patterns together.
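In a sketch, the penalty and the gradient term it contributes to the back-propagated error are (an illustration of the formula above, not the authors' code):

import numpy as np

def smoothness_cost_and_grad(H, lam):
    # H: (T, n_hidden) hidden activations over a sequence.
    # cost = lam * sum_t sum_j (h_j(t) - h_j(t-1))**2
    diff = H[1:] - H[:-1]
    cost = lam * (diff ** 2).sum()
    grad = np.zeros_like(H)
    grad[1:] += 2.0 * lam * diff
    grad[:-1] -= 2.0 * lam * diff
    return cost, grad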
4 DISCUSSION
The unsupervised model's markedly better ability to generalize stems from its cost
function; it favors hidden layer features which contribute to temporally coherent
predictions at the output (gating) layer. Multiple views of a given object are therefore more likely to be detected by a given clustering unit in the unsupervised model,
leading to considerably improved interpolation of novel views. The poor generalization performance of back-propagation is not just due to overtraining, as the learning
curves in Figure 3 show. Even with early stopping, the network with the lowest
value of \lambda would not have done as well as the unsupervised network. There is simply no reason why supervised back-propagation should cluster temporally adjacent
views together unless it is explicitly forced to do so.
A "contextual input" stream was implemented in the simplest possible way in the
simulations reported here, using fixed delay lines. However, the model we have proposed provides for a completely general way of incorporating arbitrary contextual
information, and could equally well integrate other sources of input. The incoming
weights to the gating units could also be learned. In fact, the gating unit activities
actually represent the probabilities of each clustering unit's Gaussian model fitting
the data, conditioned on the temporal history; hence, the entire model could be
viewed as a Hidden Markov Model (Geoff Hinton, personal communication). However, current techniques for fitting HMMs are intractable if state dependencies span
arbitrarily long time intervals.
The model in its present implementation is not meant to be a realistic account of the
way humans learn to recognize faces. Viewpoint-invariant recognition is achieved,
if at all, in a hierarchical, multi-stage system. One could easily extend our model
to achieve this, by connecting together a sequence of networks like the one shown
in Figure 2, each having progressively larger receptive fields.
A number of other unsupervised learning rules have been proposed based on the assumption of temporally coherent inputs (Foldiak, 1991; Becker, 1993; Stone, 1996).
Phillips et al. (1995) have proposed an alternative model of cortical self-organization
they call coherent Infomax which incorporates contextual modulation. In their
model, the outputs from one processing stream modulate the activity in another
830
s. Becker
stream, while the mutual information between the two streams is maximized.
A wide range of perceptual and cognitive abilities could be modelled by a network that can learn features of its primary input in particular contexts. These include multi-sensor fusion, feature segregation in object recognition using top-down
cues, and semantic disambiguation in natural language understanding. Finally, it
is widely believed that memories are stored rapidly in the hippocampus and related brain structures, and gradually incorporated into the slower-learning cortex
for long-term storage. The model proposed here may be able to explain how such
interactions between disparate information sources are learned.
Acknowledgements
This work evolved out of discussions with Ron Racine and Larry Roberts. Thanks to
Geoff Hinton for contributing several valuable insights, as mentioned in the paper,
and to Ken Seergobin for the face images. Software was developed using the Xerion
neural network simulation package from Hinton's lab, with programming assistance
from Lianxiang Wang. This work was supported by a McDonnell-Pew Cognitive
Neuroscience research grant and a research grant from the Natural Sciences and
Engineering Research Council of Canada.
References
Becker, S. (1993). Learning to categorize objects using temporal coherence. In S. J.
Hanson, J. D. Cowan, & C. L. Giles (Eds.), Advances in Neural Information Processing
Systems 5 (pp. 361-368). San Mateo, CA: Morgan Kaufmann.
Becker, S. & Plumbley, M. (1996). Unsupervised neural network learning procedures for
feature extraction and classification. International Journal of Applied Intelligence, 6(3).
Calvin, W. H. (1995). Cortical columns, modules, and Hebbian cell assemblies. In M.
Arbib (Ed.), The handbook of brain theory and neural networks. Cambridge, MA: MIT
Press.
Dempster, A. P., Laird, N. M., & Rubin, D. B. (1977). Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B, 39:1-38.
Foldiak, P. (1991). Learning invariance from transformation sequences. Neural Computation, 3(2):194-200.
Jacobs, R. A., Jordan, M. I., Nowlan, S. J., & Hinton, G. E. (1991). Adaptive mixtures
of local experts. Neural Computation, 3(1):79-87.
McClelland, J. L. & Rumelhart, D. E. (1981). An interactive activation model of context
effects in letter perception, part I: An account of basic findings. Psychological Review,
88:375-407.
Murphy, C. & Sillito, A. M. (1987). Corticofugal feedback influences the generation of
length tuning in the visual pathway. Nature, 329:727-729.
Neely, J. (1991). Semantic priming effects in visual word recognition: A selective review of
current findings and theories. In D. Besner & G. W. Humphreys (Eds.), Basic processes
in reading: Visual Word Recognition (pp. 264-336). Hillsdale, NJ: Lawrence Erlbaum
Associates.
Nowlan, S. J. (1990). Maximum likelihood competitive learning. In D. S. Touretzky (Ed.),
Neural Information Processing Systems, Vol. 2 (pp. 574-582). San Mateo, CA: Morgan
Kaufmann.
Phillips, W. A., Kay, J., & Smyth, D. (1995). The discovery of structure by multi-stream
networks of local processors with contextual guidance. Network, 6:225-246.
Stone, J. (1996). Learning perceptually salient visual parameters using spatiotemporal
smoothness constraints. Neural Computation, 8:1463-1492.
von der Heydt, R., Peterhans, E., & Baumgartner, G. (1984). Illusory contours and cortical neural responses. Science, 224:1260-1262.
Adaptive Access Control Applied to Ethernet Data
Timothy X Brown
Dept. of Electrical and Computer Engineering
University of Colorado, Boulder, CO 80309-0530
timxb@colorado.edu
Abstract
This paper presents a method that decides which combinations of traffic
can be accepted on a packet data link, so that quality of service (QoS)
constraints can be met. The method uses samples of QoS results at different load conditions to build a neural network decision function. Previous similar approaches to the problem have a significant bias. This
bias is likely to occur in any real system and results in accepting loads
that miss QoS targets by orders of magnitude. Preprocessing the data to
either remove the bias or provide a confidence level, the method was
applied to sources based on difficult-to-analyze ethernet data traces.
With this data, the method produces an accurate access control function
that dramatically outperforms analytic alternatives. Interestingly, the
results depend on throwing away more than 99% of the data.
1 INTRODUCTION
In a communication network in which traffic sources can be dynamically added or
removed, an access controller must decide when to accept or reject a new traffic source
based on whether, if added, acceptable service would be given to all carried sources.
Unlike best-effort services such as the internet, we consider the case where traffic sources
are given quality of service (QoS) guarantees such as maximum delay, delay variation, or
loss rate. The goal of the controller is to accept the maximal number of users while guaranteeing QoS. To accommodate diverse sources such as constant bit rate voice, variable-rate video, and bursty computer data, packet-based protocols are used. We consider QoS in terms of lost packets (i.e. packets discarded due to resource overloads). This is broadly applicable (e.g. packets which violate delay guarantees can be considered lost) although some QoS measures cannot fit this model.
The access control task requires a classification function, analytically or empirically derived, that specifies what conditions will result in QoS not being met. Analytic functions have been successful only on simple traffic models [Gue91], or they are so conservative that they grossly under-utilize the network. This paper describes a neural network
method that adapts an access control function based on historical data on what conditions
packets have and have not been successfully carried. Neural based solutions have been
previously applied to the access control problem [Hir90][Tra92] [Est94], but these
approaches have a distinct bias that under real-world conditions leads to accepting combinations of calls that miss QoS targets by orders of magnitude. Incorporating preprocessing
methods to eliminate this bias is critical and two methods from earlier work will be
described. The combined data preprocessing and neural methods are applied to difficult-to-model ethernet traffic.
2 THE PROBLEM
Since the decision to accept a multilink connection can be decomposed into decisions on
the individual links, we consider only a single link. A link can accept loads from different
source types. The loads consist of packets modeled as discrete events. Arriving packets are
placed in a buffer and serviced in turn. If the buffer is full, excess packets are discarded
and treated as lost. The precise event timing is not critical as the concern is with the number of lost packets relative to the total number of packets received in a large sample of
events, the so-called loss rate. The goal is to only accept load combinations which have a
loss rate below the QoS target denoted by p*.
Load combinations are described by a feature vector, φ, consisting of load types and possibly other information such as time of day. Each feature vector, φ, has an associated loss rate, p(φ), which can not be measured directly. Therefore, the goal is to have a classifier function, C(φ), such that C(φ) >, <, = 0 if p(φ) <, >, = p*.
Since analytic C(φ) are not in general available, we look to statistical classification methods. This requires training samples, a desired output for each sample, and a significance or weight for each sample. Loads can be dynamically added or removed. Training samples are generated at load transitions, with information since the last transition containing the number of packet arrivals, T, the number of lost packets, s, and the feature vector, φ.
A sample (φ_i, s_i, T_i) requires a desired classification, d(φ_i, s_i, T_i) ∈ {+1, −1}, and a weight, w(φ_i, s_i, T_i) ∈ (0, ∞). Given a data set {(φ_i, s_i, T_i)}, a classifier, C, is then chosen that minimizes the weighted sum squared error

E = \sum_i w(\phi_i, s_i, T_i) \big( C(\phi_i) - d(\phi_i, s_i, T_i) \big)^2 .
A classifier with enough degrees of freedom will set C(φ_i) = d(φ_i, s_i, T_i) if all the φ_i are different. With multiple samples at the same φ we see that the error is minimized when

C(\phi) = \frac{\sum_{\{i \,|\, \phi_i = \phi\}} w(\phi_i, s_i, T_i)\, d(\phi_i, s_i, T_i)}{\sum_{\{i \,|\, \phi_i = \phi\}} w(\phi_i, s_i, T_i)} .   (1)

Thus, the optimal C(φ) is the weighted average of the d(φ_i, s_i, T_i) at φ. If the classifier has fewer degrees of freedom (e.g. a low dimension linear classifier), C(φ) will be the average of the d(φ_i, s_i, T_i) in the neighborhood of φ, where the neighborhood is, in general, an unspecified function of the classifier.
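As a concrete reading of (1), a small sketch that evaluates the error-minimizing output at a feature vector as the weighted average of the desired classifications observed there (names and data layout are hypothetical; in practice samples rarely repeat a φ exactly, which is why a neighborhood or a parametric classifier is used instead):

```python
def optimal_output(samples, phi):
    """C(phi) from eq. (1): weighted average of d over samples with feature phi.

    samples : list of (phi_i, d_i, w_i) tuples.
    """
    num = sum(w * d for p, d, w in samples if p == phi)
    den = sum(w for p, d, w in samples if p == phi)
    return num / den if den > 0 else 0.0

samples = [(0.3, +1, 1.0), (0.3, -1, 0.5), (0.7, -1, 1.0)]
print(optimal_output(samples, 0.3))   # (1.0 - 0.5) / 1.5 = 0.333...
```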
A more direct form of averaging would be to choose a specific neighborhood around $ and
average over samples in this neighborhood. This suffers from having to store all the samples in the decision mechanism, and incurs a significant computational burden. More significant is how to decide the size of the neighborhood. If it is fixed, in sparse regions no
samples may be in the neighborhood. In dense regions near decision boundaries, it may
average over too wide a range for accurate estimates. Dynamically setting the neighborhood so that it always contains the k nearest neighbors solves this problem, but does not
account for the size of the samples. We will return to this in Section 4.
3 THE SMALL SAMPLE PROBLEM
Neural networks have previously been applied to the access control problem [Hir90] [Tra92][Est94]. In [Hir90] and [Tra92], d(φ_i, s_i, T_i) = +1 when s_i/T_i < p*, d(φ_i, s_i, T_i) = −1 otherwise, and the weighting is a uniform w(φ_i, s_i, T_i) = 1 for all i. This desired output and
uniform weighting we call the normal method. For a given load combination, φ, assume an idealized system where packets enter and, with probability p(φ) independent of earlier or later packets, each packet is labeled as lost. In a sample of T such Bernoulli trials with s the number of packets lost, let P_B = P{s/T > p*}. Since with the normal method d(φ, s, T) = −1 if s/T > p*, P_B = P{d(φ, s, T) = −1}. From (1), with uniform weighting the decision boundary is where P_B = 0.5. If the samples are small (i.e. T < (ln 2)/p* < 1/p*), then d(φ, s, T) = −1 for all s > 0. In this case P_B = 1 − (1 − p(φ))^T. Solving for p(φ) at P_B = 0.5 using ln(1 − x) ≈ −x, the decision boundary is at p(φ) ≈ (ln 2)/T > p*. So, for small sample sizes, the normal method boundary is biased to greater than p* and can be made orders of magnitude larger as T becomes smaller. For larger T, e.g. Tp* > 10, this bias will be seen to be negligible.
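This bias is easy to reproduce numerically. The sketch below (sample sizes illustrative) locates the loss rate at which P_B crosses 0.5; for small samples the crossing sits near (ln 2)/T, far above p*, while for Tp* ≫ 1 it approaches p*:

```python
import math

def p_boundary(T, p_star, tol=1e-9):
    """Loss rate p at which P_B = P{s > floor(p* T)} equals 0.5.

    For T < 1/p_star, floor(p_star * T) = 0, so P_B = 1 - (1 - p)**T and the
    crossing is at p = 1 - 0.5**(1/T), approximately ln(2)/T.
    """
    k = math.floor(p_star * T)
    def P_B(p):
        # P{s > k} for s ~ Binomial(T, p)
        cdf = sum(math.comb(T, i) * p**i * (1 - p)**(T - i) for i in range(k + 1))
        return 1.0 - cdf
    lo, hi = 0.0, 1.0
    while hi - lo > tol:                 # bisection on the monotone P_B(p)
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if P_B(mid) < 0.5 else (lo, mid)
    return 0.5 * (lo + hi)

p_star = 1e-6
for T in (100, 10_000, 10_000_000):
    print(T, p_boundary(T, p_star), math.log(2) / T)
```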
One obvious solution is to have large samples. This is complicated by three effects. The first is that desired loss rates in data systems are often small, typically in the range 10^{-6} to 10^{-12}. This implies that to be large, samples must be at least 10^{7} to 10^{13} packets. For the latter, even at Gbps rates, short packets, and full loading this translates into samples of several hours of traffic. Even for the first at typical rates, this can translate into minutes of
traffic. The second, related problem is that in dynamic data networks, while individual
connections may last for significant periods, on the aggregate a given combination of loads
may not exist for the requisite period. The third more subtle problem is that in any queueing system even with uncorrelated arrival traffic the buffering introduces memory in the
system. A typical sample with losses may contain 100 losses, but a loss trace would show
that all of the losses occurred in a single short overload interval. Thus the number of independent trials can be several orders of magnitude smaller than indicated by the raw sample
size indicating that the loads must be stable for hours, days, or even years to get samples
that lead to unbiased classification.
An alternative approach used in [Hir95] sets d(φ, s, T) = s/T and models p(φ) directly. The probabilities can vary over orders of magnitude, making accurate estimates difficult. Estimating the less variable log(p(φ)) with d = log(s/T) is complicated by the logarithm being undefined for small samples, where most samples have no losses so that s = 0.
4 METHODS FOR TREATING BIAS AND VARIANCE
We present without proof two preprocessing methods derived and analyzed in [Bro96]. The first eliminates the sample bias by choosing an appropriate d and w that directly solves (1) s.t. C(φ) >, <, = 0 if and only if p(φ) <, >, = p*, i.e. it is an unbiased estimate as to whether the loss rate is above or below p*. This is the weighting method shown in Table 1. The relative weighting of samples with loss rates above and below the critical loss rate is plotted in Figure 1. For large T, as expected, it reduces to the normal method.
The second preprocessing method assigns uniform weighting, but classifies d(φ, s, T) = +1 only if a certain confidence level, L, is met that the sample represents a combination where p(φ) < p*. Such a confidence was derived in [Bro96]:
Table 1: Summary of Methods.

Method    | Sample class: d(φ_i, s_i, T_i) = +1 if | w(φ_i, s_i, T_i) when d = +1 (i.e. w⁺)          | w(φ_i, s_i, T_i) when d = −1 (i.e. w⁻)
Normal    | s_i ≤ ⌊p* T_i⌋                          | 1                                                | 1
Weighting | s_i ≤ ⌊p* T_i⌋                          | Σ_{i>⌊p*T⌋} (T choose i) (p*)^i (1 − p*)^{T−i}   | Σ_{i≤⌊p*T⌋} (T choose i) (p*)^i (1 − p*)^{T−i}
Aggregate | Table 2                                 | 1                                                | 1
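A sketch of the weighting-method weights from Table 1, using binomial tails at the critical rate p* (scipy is used here for the tails; an explicit sum over binomial terms works equally well):

```python
from scipy.stats import binom

def weights(T, p_star):
    """w+ and w- from Table 1 for a sample of T trials.

    k = floor(p_star * T) splits the two classes; each class is weighted by
    the probability that a process running exactly at p* would fall on the
    other side of k, which is what removes the small-sample bias.
    """
    k = int(p_star * T)
    w_plus = binom.sf(k, T, p_star)    # P{s > k} at p = p*
    w_minus = binom.cdf(k, T, p_star)  # P{s <= k} at p = p*
    return w_plus, w_minus

print(weights(T=1000, p_star=1e-3))
```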
P\{p(\phi) > p^* \mid s, T\} = e^{-Tp^*} \sum_{i=0}^{s} \frac{(Tp^*)^i}{i!}   (2)
For small T (e.g. T < 1/p* and L > 1 − 1/e), even if s = 0 (no losses), this level is not met.
But, a neighborhood of samples with similar load combinations may all have no losses, indicating that this sample can be classified as having p(φ) < p*. Choosing a neighborhood requires a metric, m, between feature vectors, φ. In this paper we simply use Euclidean distance. Using the above and solving for T when s = 0, the smallest meaningful neighborhood size is the smallest k such that the aggregate sample is greater than a critical size, T* = −ln(1 − L)/p*. From (2), this guarantees that if no packets in the aggregate sample are lost we can classify it as having p(φ) < p* within our confidence level. For larger samples, or where samples are more plentiful and k can afford to be large, (2) can be used directly. Table 2 summarizes this aggregate method.
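Equation (2) is a short computation; a sketch, with a check that the critical aggregate size T* is exactly where a loss-free sample first meets the confidence level:

```python
import math

def confidence_p_exceeds(s, T, p_star):
    """P{p(phi) > p* | s, T} from eq. (2): e^{-Tp*} * sum_{i=0}^{s} (Tp*)^i / i!.

    Fine numerically for the small loss counts s typical of these samples.
    """
    mu = T * p_star
    return math.exp(-mu) * sum(mu**i / math.factorial(i) for i in range(s + 1))

# With no losses, the level 1 - L is reached only once T >= -ln(1 - L)/p*.
L, p_star = 0.95, 1e-6
T_crit = -math.log(1 - L) / p_star
print(T_crit, confidence_p_exceeds(0, int(T_crit), p_star))  # ~3.0e6, ~0.05
```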
The above preprocessing methods assume that the training samples consist of independent
samples of Bernoulli trials. Because of memory introduced by the buffer and possible correlations in the arrivals, this is decidedly not true. The methods can still be applied, if samples can be subsampled at every Ith trial where I is large enough so that the samples are
pseudo-independent, i.e. the dependency is not significant for our application.
A simple graphical method for determining I is as follows. Observing Figure 1, if the
number of trials is artificially increased, for small samples the weighting method will tend
to under weight the trials with errors, so that its decision boundary will be at erroneously
high loss rates. This is the case with correlated samples. The sample size, T, overstates the
number of independent trials. As the subsample factor is increased, the subsample size
becomes smaller, the trials become increasingly independent, the weighting becomes
more appropriate, and the decision boundary moves closer to the true decision boundary.
At some point, the samples are sufficiently independent so that sparser subsampling does
not change the decision boundary. By plotting the decision boundary of the classifier as a
function of I, the point where the boundary is independent of the subsample factor indicates a suitable choice for I.
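A sketch of the subsampling step itself (data layout hypothetical); one computes (s, T) at increasing I, retrains, and keeps the smallest I at which the boundary stops moving:

```python
def subsampled_counts(loss_flags, I):
    """Keep every Ith packet so the retained trials are pseudo-independent.

    loss_flags : sequence of 0/1 per-packet loss indicators for one sample.
    Returns (s, T) for the subsampled sample.
    """
    kept = loss_flags[::I]
    return sum(kept), len(kept)
```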
In summary, the procedure consists of collecting traffic samples at different combinations of traffic loads that do and do not meet quality of service. These are then subsampled with a factor I determined as above. Then one of the sample preprocessing methods, summarized in Table 1, is applied to the data. These preprocessed samples are then used in any neural network or classification scheme. Analysis in [Bro96] derives the expected bias (shown in Figure 2) of the methods when used with an ideal classifier. The normal method can be arbitrarily biased, the weighting method is unbiased, and the aggregate method chooses a conservative boundary. Simulation experiments in [Bro96] applying it to a well characterized M/M/1 queueing system to determine acceptable loads showed that the weighting method was able to produce unbiased threshold estimates over a range of values, and that the aggregate method produced conservative estimates that were always below the desired threshold, although in terms of traffic load these were only 5% smaller. Even in this simple system, where the input traffic is uncorrelated (but the losses become correlated due to the memory in the queue), the subsample factor was 12, meaning that good results required more than 90% of the data be thrown out.

Table 2: Aggregate Classification Algorithm
1. Given sample (φ_i, s_i, T_i) ∈ {(φ_i, s_i, T_i)}, metric, m, and confidence level, L.
2. Calculate T* = −ln(1 − L)/p*.
3. Find nearest neighbors n_0, n_1, ..., where n_0 = i and m(φ_{n_j}, φ_i) ≤ m(φ_{n_{j+1}}, φ_i) for j ≥ 0.
4. Choose the smallest k s.t. T' = Σ_{j=0}^{k} T_{n_j} ≥ T*. Let s' = Σ_{j=0}^{k} s_{n_j}.
5. Using (2), d(φ_i, s_i, T_i) = +1 if P{p(φ) > p* | s', T'} ≤ 1 − L, and −1 otherwise.
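A sketch of the Table 2 procedure with a Euclidean metric, reusing confidence_p_exceeds from the sketch above (the array layout is hypothetical):

```python
import numpy as np

def aggregate_classify(i, phis, ss, Ts, p_star, L):
    """Steps 1-5 of Table 2 for sample i.

    phis : (m, d) array of feature vectors; ss, Ts : per-sample losses/trials.
    """
    T_star = -np.log(1.0 - L) / p_star                          # step 2
    order = np.argsort(np.linalg.norm(phis - phis[i], axis=1))  # step 3
    T_agg = s_agg = 0
    for j in order:                                             # step 4
        T_agg += Ts[j]
        s_agg += ss[j]
        if T_agg >= T_star:
            break
    # step 5: accept only if the aggregate evidence meets the confidence level
    return +1 if confidence_p_exceeds(s_agg, T_agg, p_star) <= 1.0 - L else -1
```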
Figure 1: Plot of the relative weighting of samples with losses below (w⁻) and above (w⁺) the critical loss rate, as a function of Tp*.

Figure 2: Expected decision boundary normalized by p*, as a function of Tp*; the nominal boundary is p/p* = 1. The aggregate method uses L = 0.95.
5 EXPERIMENTS WITH ETHERNET TRAFFIC DATA
This paper set out to solve the problem of access control for real world data. We consider a system where the call combinations consist of individual computer data users trunked onto a single output link. This is modeled as a discrete-time single-server queueing model where in each time slot one packet can be processed and zero or more packets can arrive from the different users. The server has a buffer of fixed length 1000. To generate a realistic arrival process, we use ethernet data traces. The bandwidth of the link was chosen from 10 to 100 Mbps. With 48 byte packets, the queue packet service rate was the bandwidth divided by 384. All arrival rates are normalized by the service rate.
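A sketch of this queueing model; the per-slot arrival counts would come from the mapped ethernet traces, so the toy arrival pattern below is purely illustrative:

```python
def simulate_link(arrivals, buffer_size=1000):
    """Discrete-time single-server queue: at most one packet served per slot.

    arrivals : iterable of per-slot packet-arrival counts.
    Returns (packets_offered, packets_lost) for estimating the loss rate.
    """
    queue = offered = lost = 0
    for a in arrivals:
        serve = queue > 0                # non-empty at the start of the slot
        offered += a
        queue += a
        if queue > buffer_size:          # overflow packets are counted as lost
            lost += queue - buffer_size
            queue = buffer_size
        if serve:
            queue -= 1
    return offered, lost

offered, lost = simulate_link([0, 3, 0, 1, 2] * 1000, buffer_size=4)
print(lost / offered)
```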
5.1 THE DATA
We used ethernet data described in [LeI93] as the August 89 busy hour containing traffic ranging from busy file-servers/routers to users with just a handful of packets. The detailed data set records every packet's arrival time (to the nearest 100 μsec), size, plus source and destination tags. From this, 108 "data traffic" sources were generated, one for each computer that generated traffic on the ethernet link. To produce uniform size packets, each ethernet packet (which ranged from 64 to 1518 bytes) was split into 2 to 32 48-byte packets (partial packets were padded to 48 bytes). Each ethernet packet arrival time was mapped into a particular time slot in the queueing model. All the packets arriving in a time slot are immediately added to the buffer, any buffer overflows would be discarded (counted as lost), and if the buffer was non-empty at the start of the time slot, one packet is sent. Ethernet contains a collision protocol so that only one of the sources is sending packets at any one time onto a 10 Mbps connection. Decorrelating the sources via random starting offsets produced independent data sources with the potential for overloads. Multiple copies at different offsets produced sufficient loads even for bandwidths greater than 10 Mbps.
The peak data rate with this data is fixed, while the load (the average rate over the one hour trace normalized by the peak rate) ranges over five orders of magnitude. Also troubling, analysis of this data [LeI93] shows that the aggregate traffic exhibits chaotic self-similar properties and suggests that this may be due to the sources' distribution of packet inter-arrival times following an extremely heavy tailed distribution with infinite higher order moments. No tractable closed form solution exists for such data to predict whether a particular load will result in an overload. Thus, we apply adaptive access control.
5.2 EXPERIMENT AND RESULTS
We divided the data into two roughly similar groups of 54 sources each; one for training
and one for testing. To create sample combinations we assign a distribution over the different training sources, choose a source combination from this distribution, and choose a random, uniform (over the period of the trace) starting time for each source. Simulations that
reach the end of a trace wrap around to the beginning of the trace. The sources are
described by a single feature corresponding to the average load of the source over the one
hour data trace. A group of sources is described by the sum of the average loads. The
source distribution was a uniformly chosen 0 to M copies of each of the 54 training samples.
M was dynamically chosen so that the link would be sufficiently loaded to cause losses.
Each sample combination was processed for 3×10^7 time slots, recording the load combination, the number of packets serviced correctly, and the number blocked. The experiment was repeated for a range of bandwidths. The bandwidths and number of samples at each bandwidth are shown in Table 3.
We applied the three methods of Table 1 based on p* = 10^{-6} (L = 95% for the aggregate method) and used the resulting data in a linear classifier. Since the feature is the load and larger loads will always cause more blocking, p(φ) is a one-variable monotonic function. A linear classifier is sufficient for this case and its output is simply a threshold on the load.
To create the pseudo-independent trials necessary for the aggregate and weighting methods, we subsampled every Ith packet. Using the graphical method of Section 4, the resulting I are shown in column 4 of Table 3. A typical subsample factor is 200. The sample sizes ranged from 10^5 to 10^7 trials. But, after subsampling by a factor of 200, even for the largest samples, p*T < 0.05 ≪ 1.
The thresholds found by each method are shown in Table 3. To get loss rate estimates at
these thresholds, the average loss rate of the 20% of source combinations below each
method's threshold is computed. Since accepted loads would be below the threshold this is
a typical loss rate. The normal scheme is clearly flawed with losses 10 times higher than
p*, the weighting scheme's loss rate is apparently unbiased with results around p*, while
the aggregate scheme develops a conservative boundary below p*. To test the boundaries,
we repeated the experiment generating source combination samples using the 54 sources
not used in the training. Table 3 also shows the losses on this test set and indicates that the
training set boundaries produce similar results on the test data.
The boundaries are compared with those of more conventional, model-based techniques.
One proposed technique for detecting overloads appears in [Gue91]. This paper assumes
the sources are based on a Markov On/Off model. Applying the method to this ethernet
data (treating each packet arrival as an On period and calculating necessary parameters
from there), all but the very highest loads in the training sets are classified as acceptable, indicating that the loss rate would be orders of magnitude higher than p*. A conservative technique is to accept calls only as long as the sum of the peak source transmission rates is less than the link bandwidth. For the 10 Mbps link, since this equals the original ethernet
Table 3: Results from Experiments at Different Link Bandwidths.

Bandwidth | Number of Samples | Subsample | Threshold Found & Loss Rate at Threshold on (train/test) Set
(Mbps)    | Train | Test      | Factor    | Normal            | Weighting         | Aggregate
10        | 1569  | 1080      | 230       | 0.232 (1e-5/4e-6) | 0.139 (8e-7/1e-6) | 0.105 (1e-7/8e-8)
17.5      | 2447  | 3724      | 180       | 0.415 (2e-5/3e-5) | 0.268 (5e-7/9e-7) | 0.215 (3e-9/4e-7)
30        | 6696  | 4219      | 230       | 0.508 (7e-6/4e-5) | 0.333 (4e-6/5e-8) | 0.286 (3e-7/2e-8)
100       | 1862  | N.A.      | 180       | 0.688 (1e-5/N.A.) | 0.566 (5e-7/N.A.) | 0.494 (0e-0/N.A.)
data rate, this peak rate method will accept exactly one source. Averaging over all sources,
the average load would be 0.0014 and would not increase with increasing bandwidth. The
neural method takes advantage of better trunking at increasing bandwidths, and carries
two orders of magnitude more traffic.
6 CONCLUSION
Access control depends on a classification function that decides if a given set of load conditions will violate quality of service constraints. In this paper quality of service was in
terms of a maximum packet loss rate, p*. Given that analytic methods are inadequate
when given realistic traffic sources, a neural network classification method based on samples of traffic results at different load conditions is a practical alternative. With previous
neural network approaches, the synthetic nature of the experiments obscured a significant
bias that exists with more realistic data. This bias, due to the small sample sizes relative to
1/p*, is likely to occur in any real system and results in accepting loads with losses that are
orders of magnitude greater than p*.
Preprocessing the data to either remove the bias or provide a confidence level, the neural
network was applied to sources based on difficult-to-analyze ethernet data traces. A group
of sources was characterized by its total load so that the goal was to simply choose a
threshold on how much load the link would accept. The neural network was shown to produce accurate estimates of the correct threshold. Interestingly these good results depend
on creating traffic samples representing independent packet transmissions. This requires
more than 99% of the data to be thrown away indicating that for good performance an
easy-to-implement sparse sampling of the packet fates is sufficient. It also indicates that
unless the total number of packets that is observed is orders of magnitude larger than 1/p*,
the samples are actually small and preprocessing methods such as in this paper must be
applied for accurate loss rate classification.
In comparison to analytic techniques, all of the methods, are more accurate at identifying
overloads. In comparison to the best safe alternative that works even on this ethernet data,
the neural network method was able to carry two orders of magnitude more traffic. The
techniques in this paper apply to a range of network problems from routing, to bandwidth
allocation, network design, as well as access control.
References
[Bro96] Brown, T.X., "Classifying Loss Rates with Small Samples," Submitted to IEEE Trans. on Comm., April 1996.
[Est94] Estrella, A.D., Jurado, A., Sandoval, F., "New Training Pattern Selection Method
for ATM Call Admission Neural Control," Elec. Let., Vol. 30, No.7, pp.
577-579, Mar. 1994.
[Gue91] Guerin, R., Ahmadi, H., Naghshineh, M., "Equivalent Capacity and its Application to Bandwidth Allocation in High-Speed Networks," IEEE JSAC, vol. 9, no.
7, pp. 968-981, 1991.
[Hir90] Hiramatsu, A., "ATM Communications Network Control by Neural Networks,"
IEEE Trans. on Neural Networks, vol. 1, no. 1, pp. 122-130, 1990.
[Hir95] Hiramatsu, A, "Training Techniques for Neural Network Applications in ATM,"
IEEE Comm. Mag., October, pp. 58-67, 1995.
[LeI93] Leland, W.E., Taqqu, M.S., Willinger, W., Wilson, D.V., "On the Self-Similar
Nature of Ethernet Traffic," in Proc. of ACM SIGCOMM 1993, pp. 183-193.
[Tra92] Tran-Gia, P., Gropp, O., "Performance of a Neural Net used as Admission Controller in ATM Systems," Proc. Globecom 92, Orlando, FL, pp. 1303-1309.
A mean field algorithm for Bayes learning
in large feed-forward neural networks
Manfred Opper
Institut für Theoretische Physik
Julius-Maximilians-Universität, Am Hubland
D-97074 Würzburg, Germany
opper@physik.uni-wuerzburg.de
Ole Winther
CONNECT
The Niels Bohr Institute
Blegdamsvej 17
2100 Copenhagen, Denmark
winther@connect.nbi.dk
Abstract
We present an algorithm which is expected to realise Bayes optimal
predictions in large feed-forward networks. It is based on mean field
methods developed within statistical mechanics of disordered systems. We give a derivation for the single layer perceptron and show
that the algorithm also provides a leave-one-out cross-validation
test of the predictions. Simulations show excellent agreement with
theoretical results of statistical mechanics.
1 INTRODUCTION
Bayes methods have become popular as a consistent framework for regularization
and model selection in the field of neural networks (see e.g. [MacKay,1992]). In
the Bayes approach to statistical inference [Berger, 1985] one assumes that the prior
uncertainty about parameters of an unknown data generating mechanism can be
encoded in a probability distribution, the so called prior. Using the prior and
the likelihood of the data given the parameters, the posterior distribution of the
parameters can be derived from Bayes rule. From this posterior, various estimates
for functions of the parameters, like predictions about unseen data, can be calculated.
However, in general, those predictions cannot be realised by specific parameter
values, but only by an ensemble average over parameters according to the posterior
probability.
Hence, exact implementations of Bayes method for neural networks require averages
over network parameters which in general can be performed by time consuming
Monte Carlo procedures. There are however useful approximate approaches for
calculating posterior averages which are based on the assumption of a Gaussian
form of the posterior distribution [MacKay,1992]. Under regularity conditions on
the likelihood, this approximation becomes asymptotically exact when the number
of data is large compared to the number of parameters. This Gaussian ansatz
for the posterior may not be justified when the number of examples is small or
comparable to the number of network weights. A second cause for its failure would
be a situation where discrete classification labels are produced from a probability
distribution which is a nonsmooth function of the parameters. This would include
the case of a network with threshold units learning a noise free binary classification
problem.
In this contribution we present an alternative approximate realization of Bayes
method for neural networks, which is not based on asymptotic posterior normality. The posterior averages are performed using mean field techniques known from
the statistical mechanics of disordered systems. Those are expected to become exact
in the limit of a large number of network parameters under additional assumptions on
the statistics of the input data. Our analysis follows the approach of [Thouless, Anderson & Palmer, 1977] (TAP) as adapted to the simple perceptron by [Mezard, 1989].
The basic set up of the Bayes method is as follows: We have a training set consisting of m input-output pairs D_m = {(s^μ, σ^μ), μ = 1, ..., m}, where the outputs are generated independently from a conditional probability distribution P(σ^μ | w, s^μ). This probability is assumed to describe the output σ^μ to an input s^μ of a neural
network with weights w subject to a suitable noise process. If we assume that the
unknown parameters w are randomly distributed with a prior distribution p(w),
then according to Bayes theorem our knowledge about w after seeing m examples
is expressed through the posterior distribution
p(w \mid D_m) = Z^{-1}\, p(w) \prod_{\mu=1}^{m} P(\sigma^\mu \mid w, s^\mu)   (1)
where Z = \int dw\, p(w) \prod_{\mu=1}^{m} P(\sigma^\mu \mid w, s^\mu) is called the partition function in statistical mechanics and the evidence in Bayesian terminology. Taking the average with respect to the posterior eq. (1), which in the following will be denoted by angle brackets, gives Bayes estimates for various quantities. For example the optimal predictive probability for an output σ to a new input s is given by P^{Bayes}(σ | s) = ⟨P(σ | w, s)⟩.
In section 2 exact equations for the posterior averaged weights (w) are derived for
arbitrary networks. In section 3 we specialize these equations to a perceptron and develop a mean field ansatz in section 4. The resulting system of mean field equations is presented in section 5. In section 6 we consider Bayes optimal predictions
and a leave-one-out estimator for the generalization error. We conclude in section 7
with a discussion of our results.
2 A RESULT FOR POSTERIOR AVERAGES FROM GAUSSIAN PRIORS
In this section we will derive an interesting equation for the posterior mean of the
weights for arbitrary networks when the prior is Gaussian. This average of the
weights can be calculated for the distribution (1) by using the following simple and
well known result for averages over Gaussian distributions.
Let v be a Gaussian random variable with zero mean. Then for any function f(v), we have

\langle v f(v) \rangle_G = \langle v^2 \rangle_G \left\langle \frac{df(v)}{dv} \right\rangle_G .   (2)

Here ⟨···⟩_G denotes the average over the Gaussian distribution of v. The relation is easily proved from an integration by parts.
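Relation (2) is easy to check by Monte Carlo; a small sketch with f(v) = tanh(v) (the choice of f, variance and sample size are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
v = rng.normal(0.0, 1.5, size=1_000_000)   # zero-mean Gaussian, variance 2.25

f = np.tanh(v)
f_prime = 1.0 - np.tanh(v) ** 2

lhs = np.mean(v * f)                        # <v f(v)>
rhs = np.mean(v ** 2) * np.mean(f_prime)    # <v^2> <f'(v)>
print(lhs, rhs)                             # agree up to sampling error
```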
In the following we will specialize to an isotropic Gaussian prior p(w) = (2\pi)^{-N/2}\, e^{-\frac{1}{2} w \cdot w}. In [Opper & Winther, 1996] anisotropic priors are treated as well.
Applying (2) to each component of w and the function \prod_{\mu=1}^{m} P(\sigma^\mu \mid w, s^\mu), we get the following equations
\langle w \rangle = Z^{-1} \int dw\, w\, p(w) \prod_{\mu=1}^{m} P(\sigma^\mu \mid w, s^\mu)
= Z^{-1} \sum_{\mu=1}^{m} \int dw\, p(w) \Big[ \prod_{\nu \neq \mu} P(\sigma^\nu \mid w, s^\nu) \Big] \nabla_w P(\sigma^\mu \mid w, s^\mu)   (3)

Here

\langle \cdots \rangle_\mu = \frac{\int dw\, p(w)\, (\cdots) \prod_{\nu \neq \mu} P(\sigma^\nu \mid w, s^\nu)}{\int dw\, p(w) \prod_{\nu \neq \mu} P(\sigma^\nu \mid w, s^\nu)}

is a reduced average over a posterior where the μ-th example is kept out of the training set, and ∇_w denotes the gradient with respect to w.
3 THE PERCEPTRON
In the following, we will utilize the fact that for neural networks, the probability (1) depends only on the so-called internal fields Δ = (1/√N) w · s. A simple but nontrivial example is the perceptron with N-dimensional input vector s and output σ(w, s) = sign(Δ). We will generalize the noise free model by considering label noise in which the output is flipped, i.e. σΔ < 0, with a probability (1 + e^β)^{−1}. (For simplicity, we will assume that β is known such that no prior on β is needed.)
The conditional probability may thus be written as
P(\sigma^\mu \mid \Delta^\mu) = P(\sigma^\mu \mid w, s^\mu) = \frac{e^{-\beta\, \Theta(-\sigma^\mu \Delta^\mu)}}{1 + e^{-\beta}}   (4)
where Θ(x) = 1 for x > 0 and 0 otherwise. Obviously, this is a nonsmooth function of
the weights w, for which the posterior will not become Gaussian asymptotically.
For this case (3) reads
\langle w \rangle = \frac{1}{\sqrt{N}} \sum_{\mu=1}^{m} \frac{\langle P'(\sigma^\mu \Delta^\mu) \rangle_\mu}{\langle P(\sigma^\mu \Delta^\mu) \rangle_\mu}\, \sigma^\mu s^\mu
= \frac{1}{\sqrt{N}} \sum_{\mu=1}^{m} \frac{\int d\Delta\, f_\mu(\Delta)\, P'(\sigma^\mu \Delta)}{\int d\Delta\, f_\mu(\Delta)\, P(\sigma^\mu \Delta)}\, \sigma^\mu s^\mu   (5)
f_μ(Δ) is the density of (1/√N) w · s^μ, when the weights w are randomly drawn from a posterior where example (s^μ, σ^μ) was kept out of the training set. This result states
that the weights are linear combinations of the input vectors. It gives an example
of the ability of Bayes method to regularize a network model: the effective number
of parameters will never exceed the number of data points.
4 MEAN FIELD APPROXIMATION
So far, no approximations have been made to obtain eqs. (3,5). In general f_μ(Δ) depends on the entire set of data D_m and cannot be calculated easily. Hence, we
look for a useful approximation to these densities.
We split the internal field into its average and fluctuating parts, i.e. we set Δ^μ = ⟨Δ^μ⟩_μ + v^μ, with v^μ = (1/√N)(w − ⟨w⟩_μ) · s^μ. Our mean field approximation is based on the assumption of a central limit theorem for the fluctuating part of the internal field, v^μ, which enters in the reduced average of eq. (5). This means we assume that the non-Gaussian fluctuations of w_i around ⟨w_i⟩_μ, when multiplied by s_i^μ, will sum up to make v^μ a Gaussian random variable. The important point here is that for the reduced average, the w_i are not correlated with the s_i^μ!¹
We expect that this Gaussian approximation is reasonable when N, the number of network weights, is sufficiently large. Following ideas of [Mezard, Parisi & Virasoro, 1987] and [Mezard, 1989], who obtained mean field equations for a variety of disordered systems in statistical mechanics, one can argue that in many cases this assumption may be exactly fulfilled in the 'thermodynamic limit' m, N → ∞ with α = m/N fixed. According to this ansatz, we get

f_\mu(\Delta) \approx \frac{1}{\sqrt{2\pi A^\mu}} \exp\Big( -\frac{(\Delta - \langle \Delta^\mu \rangle_\mu)^2}{2 A^\mu} \Big)

in terms of the second moment of v^μ,

A^\mu := \frac{1}{N} \sum_{i,j} s_i^\mu s_j^\mu \big( \langle w_i w_j \rangle_\mu - \langle w_i \rangle_\mu \langle w_j \rangle_\mu \big).
To evaluate (5) we need to calculate the mean ⟨Δ^μ⟩_μ and the variance A^μ. The first problem is treated easily within the Gaussian approximation:

\langle \Delta^\mu \rangle \equiv \frac{1}{\sqrt{N}} \sum_i s_i^\mu \langle w_i \rangle
= \frac{\langle \Delta^\mu P(\sigma^\mu \Delta^\mu) \rangle_\mu}{\langle P(\sigma^\mu \Delta^\mu) \rangle_\mu}
= \langle \Delta^\mu \rangle_\mu + \frac{\langle v^\mu P(\sigma^\mu \Delta^\mu) \rangle_\mu}{\langle P(\sigma^\mu \Delta^\mu) \rangle_\mu}
= \langle \Delta^\mu \rangle_\mu + A^\mu \sigma^\mu \frac{\langle P'(\sigma^\mu \Delta^\mu) \rangle_\mu}{\langle P(\sigma^\mu \Delta^\mu) \rangle_\mu}   (6)

In the third line (2) has been used again for the Gaussian random variable v^μ.
So far, the calculation of the variance A^μ for general inputs is an open problem. However, we can make a further reasonable ansatz when the distribution of the inputs is known. The following approximation for A^μ is expected to become exact in the thermodynamic limit if the inputs of the training set are drawn independently
¹ Note that the fluctuations of the internal field with respect to the full posterior mean (which depends on the input s^μ) are non-Gaussian, because the different terms in the sum become slightly correlated.
from a distribution where all components s_i are uncorrelated and normalized, i.e. \overline{s_i} = 0 and \overline{s_i s_j} = \delta_{ij}. The bars denote expectation over the distribution of inputs. For the generalisation to a correlated input distribution see [Opper & Winther, 1996]. Our basic mean field assumption is that the fluctuations of the A^μ with the data set can be neglected, so that we can replace them by their averages \overline{A^\mu}. Since the reduced posterior averages are not correlated with the data s_i^μ, we obtain \overline{A^\mu} \approx \frac{1}{N} \sum_i (\langle w_i^2 \rangle_\mu - \langle w_i \rangle_\mu^2). Finally, we replace the reduced average by the expectation over the full posterior, neglecting terms of order 1/N. Using \sum_i \langle w_i^2 \rangle = N, which follows from our choice of the Gaussian prior, we get \overline{A^\mu} \approx A = 1 - \frac{1}{N} \sum_i \langle w_i \rangle^2. This depends only on known quantities.
5 MEAN FIELD EQUATIONS FOR THE PERCEPTRON
(5) and (6) give a self-consistent set of equations for the variable

x^\mu \equiv \frac{\langle P'(\sigma^\mu \Delta^\mu) \rangle_\mu}{\langle P(\sigma^\mu \Delta^\mu) \rangle_\mu} .   (7)

We finally get

x^\mu = \frac{(1 - e^{-\beta})\, e^{-\langle \Delta^\mu \rangle_\mu^2 / 2A} / \sqrt{2\pi A}}{e^{-\beta} + (1 - e^{-\beta})\, \Phi\big( \sigma^\mu \langle \Delta^\mu \rangle_\mu / \sqrt{A} \big)}   (8)

with

\langle \Delta^\mu \rangle_\mu = \frac{1}{\sqrt{N}} \sum_i s_i^\mu \langle w_i \rangle - A\, \sigma^\mu x^\mu, \qquad \Phi(z) = \int_{-\infty}^{z} \frac{dt}{\sqrt{2\pi}}\, e^{-t^2/2} .   (9)
These mean field equations can be solved by iteration. It is useful to start with a small number of data and then to increase the number of data in steps of 1 to 10. Numerical work shows that the algorithm works well even for small system sizes, N ≈ 15.
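To make the iteration concrete, here is a minimal sketch. It implements the self-consistent loop using eqs. (5)-(9) as reconstructed above; the exact form of the x-update, as well as the initialization, damping and sweep count, should be read as assumptions rather than the authors' implementation:

```python
import numpy as np
from math import erf, exp, pi, sqrt

def x_update(m_cav, sig, A, beta):
    # x^mu = <P'(sigma Delta)>_mu / <P(sigma Delta)>_mu with Delta ~ N(m_cav, A),
    # for the label-noise output model of eq. (4).
    Phi = 0.5 * (1.0 + erf(sig * m_cav / sqrt(2.0 * A)))
    gauss = exp(-m_cav ** 2 / (2.0 * A)) / sqrt(2.0 * pi * A)
    return (1.0 - exp(-beta)) * gauss / (exp(-beta) + (1.0 - exp(-beta)) * Phi)

def solve_mean_field(S, sigma, beta, sweeps=100, damp=0.5):
    """Damped fixed-point iteration; S is (m, N) inputs, sigma holds +/-1 labels."""
    m, N = S.shape
    x = np.full(m, 0.1)                             # arbitrary positive start
    for _ in range(sweeps):
        w = (sigma * x) @ S / np.sqrt(N)            # eq. (5): <w> from the x^mu
        A = max(1.0 - np.sum(w ** 2) / N, 1e-6)     # A = 1 - (1/N) sum_i <w_i>^2
        m_cav = S @ w / np.sqrt(N) - A * sigma * x  # cavity means via eq. (6)
        x_new = np.array([x_update(m_cav[mu], sigma[mu], A, beta)
                          for mu in range(m)])
        x = damp * x + (1.0 - damp) * x_new
    return w, m_cav

# Toy run on random patterns.
rng = np.random.default_rng(1)
S = rng.choice([-1.0, 1.0], size=(20, 50))
labels = rng.choice([-1.0, 1.0], size=20)
w, m_cav = solve_mean_field(S, labels, beta=2.0)
```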
6 BAYES PREDICTIONS AND LEAVE-ONE-OUT
After solving the mean field equations we can make optimal Bayesian classifications for new data s by choosing the output label with the largest predictive probability. In case of output noise this reduces to σ^{Bayes}(s) = sign(⟨σ(w, s)⟩). Since the posterior distribution is independent of the new input vector we can apply the Gaussian assumption again to the internal field, Δ, and obtain σ^{Bayes}(s) = σ(⟨w⟩, s), i.e. for the simple perceptron the averaged weights implement the Bayesian prediction. This will not be the case for multi-layer neural networks.
We can also get an estimate for the generalization error which occurs on the prediction of new data. The generalization error for the Bayes prediction is defined by ε^{Bayes} = ⟨Θ(−σ(s)⟨σ(w, s)⟩)⟩_s, where σ(s) is the true output and ⟨...⟩_s denotes the average over the input distribution. To obtain the leave-one-out estimator of ε one
Figure 1: Error vs. α = m/N for the simple perceptron with output noise β = 0.5 and N = 50, averaged over 200 runs. The full lines are the simulation results (the upper curve shows prediction error and the lower curve shows training error). The dashed line is the theoretical result for N → ∞ obtained from statistical mechanics [Opper & Haussler, 1991]. The dotted line with larger error bars is the moving control estimate.
removes the μ-th example from the training set and trains the network using only the remaining m − 1 examples. The μ-th example is used for testing. Repeating this procedure for all μ, an unbiased estimate for the Bayes generalization error with m − 1 training data is obtained as the mean value

\epsilon_{Bayes}^{loo} = \frac{1}{m} \sum_{\mu} \Theta\big( -\sigma^\mu \langle \sigma(w, s^\mu) \rangle_\mu \big),

which is exactly the type of reduced average which is calculated within our approach. Figure 1 shows a result of simulations of our algorithm when the inputs are uncorrelated and the outputs are generated from a teacher perceptron with fixed noise rate β.
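Given the cavity means from the sketch in section 5, the leave-one-out estimate reduces to counting sign disagreements, since under the Gaussian ansatz ⟨σ(w, s^μ)⟩_μ has the sign of the cavity mean (again a sketch under those assumptions):

```python
import numpy as np

def loo_error(m_cav, sigma):
    """Fraction of patterns whose held-out prediction disagrees with the label."""
    return float(np.mean(sigma * m_cav < 0))

print(loo_error(m_cav, labels))   # reuses the toy run above
```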
7 CONCLUSION
In this paper we have presented a mean field algorithm which is expected to implement a Bayesian optimal classification well in the limit of large networks. We have explained the method for the single layer perceptron. An extension to a simple multilayer network, the so-called committee machine with a tree architecture, is discussed in [Opper & Winther, 1996]. The algorithm is based on a Gaussian assumption for the distribution of the internal fields, which seems reasonable for large networks. The main problem so far is the restriction to ideal situations such as a known
distribution of inputs, which is not a realistic assumption for real-world data. However,
this assumption only entered in the calculation of the variance of the Gaussian field.
More theoretical work is necessary to find an approximation to the variance which
is valid in more general cases. A promising approach is a derivation of the mean
field equations directly from an approximation to the free energy −ln Z. Besides a
deeper understanding this would also give us the possibility to use the method with
the so called evidence framework , where the partition function (evidence) can be
used to estimate unknown (hyper-) parameters of the model class [Berger, 1985]. It
will further be important to extend the algorithm to fully connected architectures.
In that case it might be necessary to make further approximations in the mean field
method.
ACKNOWLEDGMENTS
This research is supported by a Heisenberg fellowship of the Deutsche Forschungsgemeinschaft and by the Danish Research Councils for the Natural and Technical
Sciences through the Danish Computational Neural Network Center (CONNECT) .
REFERENCES
Berger, J. O. (1985) Statistical Decision Theory and Bayesian Analysis, Springer-Verlag, New York.
MacKay, D. J. (1992) A practical Bayesian framework for backpropagation networks,
Neural Comp. 4, 448.
Mezard, M., Parisi, G. & Virasoro, M. A. (1987) Spin Glass Theory and Beyond, Lecture Notes in Physics 9, World Scientific.
Mezard, M. (1989) The space of interactions in neural networks: Gardner's calculation with the cavity method, J. Phys. A 22, 2181.
Opper, M. & Haussler, D. (1991) in IVth Annual Workshop on Computational
Learning Theory (COLT91), Morgan Kaufmann.
Opper, M. & Winther, O. (1996) A mean field approach to Bayes learning in feed-forward neural networks, Phys. Rev. Lett. 76, 1964.
Thouless, D. J., Anderson, P. W. & Palmer, R. G. (1977) Solution of 'Solvable model of a spin glass', Phil. Mag. 35, 593.
Analysis of Temporal-Difference Learning
with Function Approximation
John N. Tsitsiklis and Benjamin Van Roy
Laboratory for Information and Decision Systems
Massachusetts Institute of Technology
Cambridge, MA 02139
e-mail: jnt@mit.edu, bvr@mit.edu
Abstract
We present new results about the temporal-difference learning algorithm, as applied to approximating the cost-to-go function of
a Markov chain using linear function approximators. The algorithm we analyze performs on-line updating of a parameter vector
during a single endless trajectory of an aperiodic irreducible finite
state Markov chain. Results include convergence (with probability
1), a characterization of the limit of convergence, and a bound on
the resulting approximation error. In addition to establishing new
and stronger results than those previously available, our analysis
is based on a new line of reasoning that provides new intuition
about the dynamics of temporal-difference learning. Furthermore,
we discuss the implications of two counter-examples with regards
to the significance of on-line updating and linearly parameterized
function approximators.
1 INTRODUCTION
The problem of predicting the expected long-term future cost (or reward) of a
stochastic dynamic system manifests itself in both time-series prediction and control. An example in time-series prediction is that of estimating the net present
value of a corporation, as a discounted sum of its future cash flows, based on the
current state of its operations. In control, the ability to predict long-term future
cost as a function of state enables the ranking of alternative states in order to guide
decision-making. Indeed, such predictions constitute the cost-to-go function that is
central to dynamic programming and optimal control (Bertsekas, 1995).
Temporal-difference learning, originally proposed by Sutton (1988), is a method for
approximating long-term future cost as a function of current state. The algorithm
is recursive, efficient, and simple to implement. Linear combinations of fixed basis
functions are used to approximate the mapping from state to future cost. The
weights of the linear combination are updated upon each observation of a state
transition and the associated cost. The objective is to improve approximations
of long-term future cost as more and more state transitions are observed. The
trajectory of states and costs can be generated either by a physical system or a
simulated model. In either case, we view the system as a Markov chain. Adopting
terminology from dynamic programming, we will refer to the function mapping
states of the Markov chain to expected long-term cost as the cost-to-go function.
In this paper, we introduce a new line of analysis for temporal-difference learning.
In addition to providing new intuition about the dynamics of the algorithm, this
approach leads to a stronger convergence result than previously available, as well
as an interpretation of the limit of convergence and bounds on the resulting approximation error, neither of which have been available in the past. Aside from
the statement of results, we maintain the discussion at an informal level, and make
no attempt to present a complete or rigorous proof. The formal and more general
analysis based on our line of reasoning can found in (Tsitsiklis and Van Roy, 1996),
which also discusses the relationship between our results and other work involving
temporal-difference learning.
The convergence results assume the use of both on-line updating and linearly parameterized function approximators. To clarify the relevance of these requirements,
we discuss the implications of two counter-examples that are presented in (Tsitsiklis
and Van Roy, 1996). These counter-examples demonstrate that temporal-difference
learning can diverge in the presence of either nonlinearly parameterized function
approximators or arbitrary (instead of on-line) sampling distributions.
2 DEFINITION OF TD(λ)
In this section, we define precisely the nature of temporal-difference learning, as applied to approximation of the cost-to-go function for an infinite-horizon discounted
Markov chain. While the method as well as our subsequent results are applicable to
Markov chains with fairly general state spaces, including continuous and unbounded
spaces, we restrict our attention in this paper to the case where the state space is
finite. Discounted Markov chains with more general state spaces are addressed in
(Tsitsiklis and Van Roy, 1996). Application of this line of analysis to the context of
undiscounted absorbing Markov chains can be found in (Bertsekas and Tsitsiklis,
1996) and has also been carried out by Gurvits (personal communication).
We consider an aperiodic irreducible Markov chain with a state space S = {1, ..., n}, a transition probability matrix P whose (i, j)th entry is denoted by p_ij, transition costs g(i, j) associated with each transition from a state i to a state j, and a discount factor α ∈ (0, 1). The sequence of states visited by the Markov chain is denoted by {i_t | t = 0, 1, ...}. The cost-to-go function J* : S → R associated with this Markov chain is defined by

    J*(i) = E[ Σ_{t=0}^∞ α^t g(i_t, i_{t+1}) | i_0 = i ].
Since the number of dimensions is finite, it is convenient to view J* as a vector
instead of a function.
We consider approximations of J* using a function of the form
    J(i, r) = (Φr)(i).
Here, r = (r(1), ..., r(K)) is a parameter vector and Φ is an n × K matrix. We denote the ith row of Φ as a (column) vector φ(i).
Suppose that we observe a sequence of states i_t generated according to the transition probability matrix P and that at time t the parameter vector r has been set to some value r_t. We define the temporal difference d_t corresponding to the transition from i_t to i_{t+1} by

    d_t = g(i_t, i_{t+1}) + αJ(i_{t+1}, r_t) - J(i_t, r_t).

We define a sequence of eligibility vectors z_t (of dimension K) by

    z_t = Σ_{k=0}^t (αλ)^{t-k} φ(i_k).
The TD(λ) updates are then given by

    r_{t+1} = r_t + γ_t d_t z_t,

where r_0 is initialized to some arbitrary vector, γ_t is a sequence of scalar step sizes, and λ is a parameter in [0, 1]. Since temporal-difference learning is actually a continuum of algorithms, parameterized by λ, it is often referred to as TD(λ). Note that the eligibility vectors can be updated recursively according to z_{t+1} = αλ z_t + φ(i_{t+1}), initialized with z_{-1} = 0.
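To make the updates concrete, the following is a minimal simulation sketch of on-line TD(λ) with a linear approximator. The chain P, costs g, features Phi, and the step-size schedule are illustrative assumptions chosen for the sketch, not values taken from the paper.

    # A minimal sketch of on-line TD(lambda) with linear function approximation.
    # P, g, Phi, and the step sizes are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    n, K = 10, 3                              # states and features
    P = rng.random((n, n)); P /= P.sum(axis=1, keepdims=True)
    g = rng.random((n, n))                    # transition costs g(i, j)
    Phi = rng.random((n, K))                  # row i is the feature vector phi(i)
    alpha, lam = 0.9, 0.5                     # discount alpha and parameter lambda

    r = np.zeros(K)                           # parameter vector r_0
    z = np.zeros(K)                           # eligibility vector, z_{-1} = 0
    i = 0
    for t in range(100000):
        j = rng.choice(n, p=P[i])             # observe i_{t+1}
        d = g[i, j] + alpha * Phi[j] @ r - Phi[i] @ r   # temporal difference d_t
        z = alpha * lam * z + Phi[i]          # z_t = alpha*lambda*z_{t-1} + phi(i_t)
        r = r + (1.0 / (1.0 + t / 1000.0)) * d * z      # gamma_t satisfies condition (c) below
        i = j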
3 ANALYSIS OF TD(λ)
Temporal-difference learning originated in the field of reinforcement learning. A
view commonly adopted in the original setting is that the algorithm involves "looking back in time and correcting previous predictions." In this context, the eligibility
vector keeps track of how the parameter vector should be adjusted in order to appropriately modify prior predictions when a temporal-difference is observed. Here,
we take a different view which involves examining the "steady-state" behavior of
the algorithm and arguing that this characterizes the long-term evolution of the
parameter vector. In the remainder of this section, we introduce this view of TD(λ)
and provide an overview of the analysis that it leads to. Our goal in this section is to
convey some intuition about how the algorithm works, and in this spirit, we maintain the discussion at an informal level, omitting technical assumptions and other
details required to formally prove the statements we make. These technicalities are
addressed in (Tsitsiklis and Van Roy, 1996), where formal proofs are presented.
We begin by introducing some notation that will make our discussion here more
concise. Let π(1), ..., π(n) denote the steady-state probabilities for the process i_t. We assume that π(i) > 0 for all i ∈ S. We define an n × n diagonal matrix D with diagonal entries π(1), ..., π(n). We define a weighted norm ‖·‖_D by

    ‖J‖_D = ( Σ_{i∈S} π(i) J²(i) )^{1/2}.

We define a "projection matrix" Π by

    ΠJ = arg min_{Φr} ‖Φr - J‖_D.

It is easy to show that Π = Φ(Φ'DΦ)^{-1}Φ'D.
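These quantities are easy to compute directly for a small chain; the sketch below reuses the assumed P and Phi of the previous sketch and checks that Π is idempotent, as any projection must be.

    # Continuing the sketch above: compute pi, D, and
    # Pi = Phi (Phi' D Phi)^{-1} Phi' D, and check idempotence.
    evals, evecs = np.linalg.eig(P.T)
    pi = np.real(evecs[:, np.argmax(np.real(evals))])
    pi = pi / pi.sum()                        # stationary distribution of P
    D = np.diag(pi)
    Pi = Phi @ np.linalg.inv(Phi.T @ D @ Phi) @ Phi.T @ D
    assert np.allclose(Pi @ Pi, Pi)           # a projection satisfies Pi^2 = Pi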
We define an operator T^(λ) : R^n → R^n, indexed by a parameter λ ∈ [0, 1), by

    (T^(λ)J)(i) = (1 - λ) Σ_{m=0}^∞ λ^m E[ Σ_{t=0}^m α^t g(i_t, i_{t+1}) + α^{m+1} J(i_{m+1}) | i_0 = i ].
For λ = 1 we define (T^(1)J)(i) = J*(i), so that lim_{λ↑1} (T^(λ)J)(i) = (T^(1)J)(i). To interpret this operator in a meaningful manner, note that, for each m, the term

    E[ Σ_{t=0}^m α^t g(i_t, i_{t+1}) + α^{m+1} J(i_{m+1}) | i_0 = i ]
is the expected cost to be incurred over m transitions plus an approximation to the remaining cost to be incurred, based on J. This sum is sometimes called the "m-stage truncated cost-to-go." Intuitively, if J is an approximation to the cost-to-go function, the m-stage truncated cost-to-go can be viewed as an improved approximation. Since T^(λ)J is a weighted average over the m-stage truncated cost-to-go values, T^(λ)J can also be viewed as an improved approximation to J*. A property of T^(λ) that is instrumental in our proof of convergence is that T^(λ) is a contraction with respect to the norm ‖·‖_D. It follows from this fact that the composition ΠT^(λ) is also a contraction with respect to the same norm, and has a fixed point of the form Φr* for some parameter vector r*.
To clarify the fundamental structure of TD(λ), we construct a process X_t = (i_t, i_{t+1}, z_t). It is easy to see that X_t is a Markov process. In particular, z_{t+1} and i_{t+1} are deterministic functions of X_t, and the distribution of i_{t+2} only depends on i_{t+1}. Note that at each time t, the random vector X_t, together with the current parameter vector r_t, provides all necessary information for computing r_{t+1}. By defining a function s with s(r, X) = (g(i, j) + αJ(j, r) - J(i, r))z, where X = (i, j, z), we can rewrite the TD(λ) algorithm as

    r_{t+1} = r_t + γ_t s(r_t, X_t).
For any r, s(r, X_t) has a "steady-state" expectation, which we denote by E_0[s(r, X_t)]. Intuitively, once X_t reaches steady-state, the TD(λ) algorithm, in an "average" sense, behaves like the following deterministic algorithm:

    r̄_{τ+1} = r̄_τ + γ_τ E_0[s(r̄_τ, X_t)].
Under some technical assumptions, a theorem from (Benveniste, et al., 1990) can be used to deduce convergence of TD(λ) from that of the deterministic counterpart. Our study centers on an analysis of this deterministic algorithm; the theorem from (Benveniste, et al., 1990) is then used to formally deduce convergence of the stochastic algorithm.

It turns out that

    E_0[s(r, X_t)] = Φ'D(T^(λ)(Φr) - Φr).
Using the contraction property of T^(λ),

    (r - r*)' E_0[s(r, X_t)]
        = (Φr - Φr*)' D (ΠT^(λ)(Φr) - Φr* + (Φr* - Φr))
        ≤ ‖Φr - Φr*‖_D · ‖ΠT^(λ)(Φr) - Φr*‖_D - ‖Φr* - Φr‖²_D
        ≤ (α - 1) ‖Φr - Φr*‖²_D.
Since α < 1, this inequality shows that the steady-state expectation E_0[s(r, X_t)] generally moves the parameter vector towards r*, the fixed point of ΠT^(λ), where "closeness" is measured in terms of the norm ‖·‖_D. This provides the main line of reasoning behind the proof of convergence provided in (Tsitsiklis and Van Roy, 1996). Some illuminating interpretations of this deterministic algorithm, which are useful in developing an intuitive understanding of temporal-difference learning, are also discussed in (Tsitsiklis and Van Roy, 1996).
4 CONVERGENCE RESULT
We now present our main result concerning temporal-difference learning. A formal
proof is provided in (Tsitsiklis and Van Roy, 1996).
Theorem 1 Let the following conditions hold:
(a) The Markov chain i_t has a unique invariant distribution π that satisfies π'P = π', with π(i) > 0 for all i.
(b) The matrix Φ has full column rank; that is, the "basis functions" {φ_k | k = 1, ..., K} are linearly independent.
(c) The step sizes γ_t are positive, nonincreasing, and predetermined. Furthermore, they satisfy Σ_{t=0}^∞ γ_t = ∞ and Σ_{t=0}^∞ γ_t² < ∞.
We then have:
(a) For any λ ∈ [0, 1], the TD(λ) algorithm, as defined in Section 2, converges with probability 1.
(b) The limit of convergence r* is the unique solution of the equation

    ΠT^(λ)(Φr*) = Φr*.

(c) Furthermore, r* satisfies

    ‖Φr* - J*‖_D ≤ (1 - λα)/(1 - α) · ‖ΠJ* - J*‖_D.
Part (b) of the theorem leads to an interesting interpretation of the limit of convergence. In particular, if we apply the TD(λ) operator to the final approximation Φr*, and then project the resulting function back into the span of the basis functions, we get the same function Φr*. Furthermore, since the composition ΠT^(λ) is a contraction, repeated application of this composition to any function would generate a sequence of functions converging to Φr*.

Part (c) of the theorem establishes that a certain desirable property is satisfied by the limit of convergence. In particular, if there exists a vector r such that Φr = J*, then this vector will be the limit of convergence of TD(λ), for any λ ∈ [0, 1]. On the other hand, if no such parameter vector exists, the distance between the limit of convergence Φr* and J* is bounded by a multiple of the distance between the projection ΠJ* and J*. This latter distance is amplified by a factor of (1 - λα)/(1 - α), which becomes larger as λ becomes smaller.
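The fixed point of part (b) can also be computed in closed form, since T^(λ)J = AJ + b is affine with A = (1-λ)αP(I-λαP)^{-1} and b = (I-λαP)^{-1}ḡ, where ḡ_i = Σ_j p_ij g(i, j). The sketch below (continuing the assumed setup of the earlier sketches) solves for r* and numerically checks the error bound of part (c).

    # Continuing the sketches above: solve Pi T^(lambda)(Phi r*) = Phi r*
    # as a K x K linear system and verify the bound of Theorem 1(c).
    gbar = (P * g).sum(axis=1)                # expected one-step costs
    M = np.linalg.inv(np.eye(n) - lam * alpha * P)
    A, b = (1 - lam) * alpha * P @ M, M @ gbar
    r_star = np.linalg.solve(Phi.T @ D @ (np.eye(n) - A) @ Phi, Phi.T @ D @ b)

    J_star = np.linalg.solve(np.eye(n) - alpha * P, gbar)   # true cost-to-go J*
    wnorm = lambda x: np.sqrt(pi @ x**2)      # the weighted norm ||.||_D
    bound = (1 - lam * alpha) / (1 - alpha) * wnorm(Pi @ J_star - J_star)
    assert wnorm(Phi @ r_star - J_star) <= bound + 1e-9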
5 COUNTER-EXAMPLES
Sutton (1995) has suggested that on-line updating and the use of linear function
approximators are both important factors that make temporal-difference learning
converge properly. These requirements also appear as assumptions in the convergence result of the previous section. To formalize the fact that these assumptions
are relevant, two counter-examples were presented in (Tsitsiklis and Van Roy, 1996).
The first counter-example involves the use of a variant of TD(0) that does not sample states based on trajectories. Instead, the states i_t are sampled independently from a distribution q(·) over S, and successor states j_t are generated by sampling according to Pr[j_t = j | i_t] = p_{i_t j}. Each iteration of the algorithm takes on the form

    r_{t+1} = r_t + γ_t φ(i_t)(g(i_t, j_t) + αφ'(j_t)r_t - φ'(i_t)r_t).
We refer to this algorithm as q-sampled TD(0). Note that this algorithm is closely related to the original TD(λ) algorithm as defined in Section 2. In particular, if i_t is
generated by the Markov chain and j_t = i_{t+1}, we are back to the original algorithm. It is easy to show, using a subset of the arguments required to prove Theorem 1, that this algorithm converges when q(i) = π(i) for all i and the assumptions of Theorem 1 are satisfied. However, results can be very different when q(·) is arbitrary. In particular, the counter-example presented in (Tsitsiklis and Van Roy, 1996) shows that for any sampling distribution q(·) that is different from π(·) there exists a Markov chain with steady-state probabilities π(·) and a linearly parameterized function approximator for which q-sampled TD(0) diverges. A counter-example with similar implications has also been presented by Baird (1995).
A generalization of temporal difference learning is commonly used in conjunction
with nonlinear function approximators. This generalization involves replacing each
vector φ(i_t) that is used to construct the eligibility vector with the vector of derivatives of J(i_t, ·), evaluated at the current parameter vector r_t. A second counter-example in (Tsitsiklis and Van Roy, 1996) shows that there exists a Markov chain and a nonlinearly parameterized function approximator such that both the parameter vector and the approximated cost-to-go function diverge when such a variant of TD(0) is applied. This nonlinear function approximator is "regular" in the sense
that it is infinitely differentiable with respect to the parameter vector. However, it
is still somewhat contrived, and the question of whether such a counter-example exists in the context of more standard function approximators such as neural networks
remains open.
6 CONCLUSION
Theorem 1 establishes convergence with probability 1, characterizes the limit of convergence, and provides error bounds for temporal-difference learning. It is interesting to note that the margins allowed by the error bounds are inversely related to λ. Although this is only a bound, it strongly suggests that higher values of λ are likely to produce more accurate approximations. This is consistent with the examples that have been constructed by Bertsekas (1994).

The sensitivity of the error bound to λ raises the question of whether or not it ever makes sense to set λ to values less than 1. Many reports of experimental results, dating back to Sutton (1988), suggest that setting λ to values less than one can often lead to significant gains in the rate of convergence. A full understanding of how λ influences the rate of convergence is yet to be found, though some insight in the case of look-up table representations is provided by Dayan and Singh (1996). This is an interesting direction for future research.
Acknowledgments
We thank Rich Sutton for originally making us aware of the relevance of on-line
state sampling, and also for pointing out a simplification in the expression for the
error bound of Theorem 1. This research was supported by the NSF under grant DMI-9625489 and the ARO under grant DAAL-03-92-G-0115.
References
Baird, L. C. (1995). "Residual Algorithms: Reinforcement Learning with Function
Approximation," in Prieditis & Russell, eds. Machine Learning: Proceedings of
the Twelfth International Conference, 9-12 July, Morgan Kaufman Publishers, San
Francisco, CA.
Bertsekas, D. P. (1994) "A Counter-Example to Temporal-Difference Learning,"
Neural Computation, vol. 7, pp. 270-279.
Bertsekas, D. P. (1995) Dynamic Programming and Optimal Control, Athena Scientific, Belmont, MA.
Bertsekas, D. P. & Tsitsiklis, J. N. (1996) Neuro-Dynamic Programming, Athena
Scientific, Belmont, MA.
Benveniste, A., Metivier, M., & Priouret, P. (1990) Adaptive Algorithms and Stochastic Approximations, Springer-Verlag, Berlin.
Dayan, P. D. & Singh, S. P. (1996) "Mean Squared Error Curves in Temporal Difference Learning," preprint.
Gurvits, L. (1996) personal communication.
Sutton, R. S., (1988) "Learning to Predict by the Method of Temporal Differences,"
Machine Learning, vol. 3, pp. 9-44.
Sutton, R.S. (1995) "On the Virtues of Linear Learning and Trajectory Distributions," Proceedings of the Workshop on Value Function Approximation, Machine
Learning Conference 1995, Boyan, Moore, and Sutton, Eds., p. 85. Technical
Report CMU-CS-95-206, Carnegie Mellon University, Pittsburgh, PA 15213.
Tsitsiklis, J. N. & Van Roy, B. (1996) "An Analysis of Temporal-Difference Learning
with Function Approximation," to appear in the IEEE Transactions on Automatic
Control.
OPTIMIZATION BY MEAN FIELD ANNEALING
Griff Bilbro
ECE Dept.
NCSU
Raleigh, NC 27695
Reinhold Mann
Eng. Physics and Math. Div.
Oak Ridge Natl. Lab.
Oak Ridge, TN 37831
Thomas K. Miller
ECE Dept.
NCSU
Raleigh, NC 27695
Wesley E. Snyder
ECE Dept.
NCSU
Raleigh, NC 27695
David E. Van den Bout
ECE Dept.
NCSU
Raleigh, NC 27695
Mark White
ECE Dept.
NCSU
Raleigh, NC 27695
ABSTRACT
Nearly optimal solutions to many combinatorial problems can be
found using stochastic simulated annealing. This paper extends
the concept of simulated annealing from its original formulation
as a Markov process to a new formulation based on mean field
theory. Mean field annealing essentially replaces the discrete degrees of freedom in simulated annealing with their average values
as computed by the mean field approximation. The net result is
that equilibrium at a given temperature is achieved 1-2 orders of
magnitude faster than with simulated annealing. A general framework for the mean field annealing algorithm is derived, and its relationship to Hopfield networks is shown. The behavior of MFA is
examined both analytically and experimentally for a generic combinatorial optimization problem: graph bipartitioning. This analysis
indicates the presence of critical temperatures which could be important in improving the performance of neural networks.
STOCHASTIC VERSUS MEAN FIELD
In combinatorial optimization problems, an objective function or Hamiltonian,
H(s), is presented which depends on a vector of interacting spins, s = {s_1, ..., s_N},
in some complex nonlinear way. Stochastic simulated annealing (SSA) (S. Kirkpatrick, C. Gelatt, and M. Vecchi (1983)) finds a global minimum of H by combining gradient descent with a random process. This combination allows, under
certain conditions, choices of s which actually increase H, thus providing SSA with
a mechanism for escaping from local minima. The frequency and severity of these
uphill moves is reduced by slowly decreasing a parameter T (often referred to as
the temperature) such that the system settles into a global optimum.
Two conceptual operations are involved in simulated annealing: a thermostatic operation which schedules decreases in the temperature, and a relaxation operation
which iteratively finds the equilibrium solution at the new temperature using the
final state of the system at the previous temperature as a starting point. In SSA, relaxation occurs by randomly altering components of s with a probability determined
by both T and the change in H caused by each such operation. This corresponds to
probabilistic transitions in a Markov chain. In mean field annealing (MFA), some
aspects of the optimization problem are replaced with their means or averages from
the underlying Markov chain (e.g. s is replaced with its average, ⟨s⟩). As the temperature is decreased, the MFA algorithm updates these averages based on their
values at the previous temperature. Because computation using the means attains
equilibrium faster than using the corresponding Markov chain, MFA relaxes to a
solution at each temperature much faster than does SSA, which leads to an overall
decrease in computational effort.
In this paper, we present the MFA formulation in the context of the familiar Ising
Hamiltonian and discuss its relationship to Hopfield neural networks. Then the
application of MFA to the problem of graph bipartitioning is discussed, where we
have analytically and experimentally investigated the effect of temperature on the
behavior of MFA and observed speedups of 50:1 over SSA.
MFA AND HOPFIELD NETWORKS
Optimization theory, like physics, often concerns itself with systems possessing a
large number of interacting degrees of freedom. Physicists often simplify their problems by using the mean field approximation: a simple analytic approximation of the
behavior of systems of particles or spins in thermal equilibrium. In a corresponding manner, arbitrary functions can be optimized by using an analytic version of
stochastic simulated annealing based on a technique analogous to the mean field
approximation. The derivation of MFA presented here uses the naive mean field
(D. J. Thouless, P.W. Anderson, and R.G. Palmer (1977)) and starts with a simple
Ising Hamiltonian of N spins coupled by a product interaction:
H(s)
= L ~Si + L ""'
L..J'Vi;si s;
,
i
where
;i:'
{ Vi?
= V.?i
s, 'E {O ,'1}
Factoring H(s) shows the interaction between a spin
H(s)
s, and the rest of the system:
= Si . (~ + 2 L Vi;S;) + L h"s" + L
"i:'
;~,
symmetry
. t eger spans.
.
an
L V";s,,s; .
(1)
"i:i ;i:".'
The mean or effective field affecting s_i is the average of its coefficient in (1):

    Φ_i = ⟨h_i + 2 Σ_{j≠i} V_ij s_j⟩ = h_i + 2 Σ_{j≠i} V_ij ⟨s_j⟩ = H|_{⟨s_i⟩=1} - H|_{⟨s_i⟩=0}.   (2)
The last part of (2) shows that, for the Ising case, the mean field can be simply calculated from the difference in the Hamiltonian caused by changing ⟨s_i⟩ from zero to one while holding the other spin averages constant. By taking the Boltzmann-weighted average of the state values, the spin average is found to be

    ⟨s_i⟩ = {1 + exp(Φ_i / T)}^{-1}.   (3)

    1. Initialize spin averages and add noise: ⟨s_i⟩ = 1/2 + δ_i, ∀i.
    2. Perform this relaxation step until a fixed point is found:
       a. Select a spin average ⟨s_i⟩ at random from ⟨s⟩.
       b. Compute the mean field Φ_i = h_i + 2 Σ_{j≠i} V_ij ⟨s_j⟩.
       c. Compute the new spin average ⟨s_i⟩ = {1 + exp(Φ_i / T)}^{-1}.
    3. Decrease T and repeat step 2 until freezing occurs.

    Figure 1. The Mean Field Annealing Algorithm
Equilibrium is established at a given temperature when equations (2) and (3) hold
for each spin. The MFA algorithm (Figure 1) begins at a high temperature where
this fixed point is easy to determine. The fixed point is tracked as T is lowered by
iterating a relaxation step which uses the spin averages to calculate a new mean
field that is then used to update the spin averages. As the temperature is lowered,
the optimum solution is found as the limit of this sequence of fixed-points.
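The loop of Figure 1 is short enough to sketch directly; the couplings V, fields h, noise level, and geometric cooling schedule below are illustrative assumptions, not values from the paper.

    # A minimal sketch of the MFA algorithm of Figure 1 for the Ising
    # Hamiltonian above; V, h, and the cooling schedule are assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    N = 50
    V = rng.normal(size=(N, N)); V = (V + V.T) / 2.0
    np.fill_diagonal(V, 0.0)                  # no self-coupling
    h = rng.normal(size=N)
    s = 0.5 + 0.01 * rng.standard_normal(N)   # step 1: spin averages plus noise

    T = 5.0
    while T > 0.01:                           # step 3: anneal until freezing
        for _ in range(20 * N):               # step 2: relax toward a fixed point
            i = rng.integers(N)
            phi_i = h[i] + 2.0 * V[i] @ s     # mean field of eq. (2); V[i, i] = 0
            s[i] = 1.0 / (1.0 + np.exp(np.clip(phi_i / T, -50, 50)))   # eq. (3)
        T *= 0.9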
The relationship of Hopfield neural networks to MFA becomes apparent if the relaxation step in Figure 1 is recast in a parallel form in which the entire mean field vector partially moves towards its new state,

    Φ_i^new = (1 - γ)Φ_i^old + γ(h_i + 2 Σ_{j≠i} V_ij ⟨s_j⟩),   ∀i,

and then all the spin averages are updated using Φ^new. As γ → 0, these difference equations become nonlinear differential equations,

    dΦ_i/dt = h_i + 2 Σ_{j≠i} V_ij ⟨s_j⟩ - Φ_i,   ∀i,

which are equivalent to the equations of motion for the Hopfield network (J. J. Hopfield and D. W. Tank (1985)),

    C_i du_i/dt = Σ_{j≠i} T_ij v_j - u_i/R_i + I_i,   ∀i,

provided we make C_i = R_i = 1 and use a sigmoidal transfer function

    f(u_i) = 1 / (1 + exp(u_i / T)).
Thus, the evolution of a solution in a Hopfield network is a special case of the
relaxation toward an equilibrium state effected by the MFA algorithm at a fixed
temperature.
THE GRAPH BIPARTITIONING PROBLEM
Formally, a graph consists of a set of N nodes such that nodes n_i and n_j are connected by an edge with weight V_ij (which could be zero). The graph bipartitioning problem involves equally distributing the graph nodes across two bins, b_0 and b_1, while minimizing the combined weight of the edges with endpoints in opposite bins. These two sub-objectives tend to frustrate one another in that the first goal is satisfied when the nodes are equally divided between the bins, but the second goal is met (trivially) by assigning all the nodes to a single bin.
MEAN FIELD FORMULATION
An optimal solution for the bipartitioning problem minimizes the Hamiltonian

    H(s) = Σ_i Σ_{j≠i} (V_ij - η) s_i (1 - s_j).

In the first term, each edge attracts adjacent nodes into the same bin with a force proportional to its weight. Counterbalancing this attraction is η, an amorphous repulsive force between all of the nodes which discourages them from clustering together. The average spin of a node n_i can be determined from its mean field:

    Φ_i = Σ_{j≠i} (V_ij - η) - 2 Σ_{j≠i} (V_ij - η) ⟨s_j⟩.
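Only the mean-field computation changes relative to the generic relaxation sketch above. A hedged specialization follows; the random unweighted graph W and the simple choice of η are assumptions made for the sketch.

    # Bipartitioning specialization of the MFA relaxation; the graph W and
    # the value of eta are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(1)
    N = 40
    W = np.triu((rng.random((N, N)) < 0.1).astype(float), 1)
    W = W + W.T                               # symmetric 0/1 edge weights
    eta = W.sum() / (N * (N - 1))             # repulsion balancing the edge pull
    s = 0.5 + 0.01 * rng.standard_normal(N)

    T = 1.0
    while T > 1e-3:
        for _ in range(20 * N):
            i = rng.integers(N)
            drop = np.arange(N) != i          # sum over j != i
            phi_i = ((W[i, drop] - eta) * (1.0 - 2.0 * s[drop])).sum()
            s[i] = 1.0 / (1.0 + np.exp(np.clip(phi_i / T, -50, 50)))
        T *= 0.9
    bins = (s > 0.5).astype(int)              # final bipartition b0 / b1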
EXPERIMENTAL RESULTS
Table 1 compares the performance of the MFA algorithm of Figure 1 with SSA
in terms of total optimization and computational effort for 100 trials on each of
three example graphs. While the bipartitions found by SSA and MFA are nearly
equivalent, MFA required as little as 2% of the number of iterations needed by SSA.
The effect of the decrease in temperature upon the spin averages is depicted in
Figure 2. At high temperatures the graph bipartition is maximally disordered (i.e. ⟨s_i⟩ ≈ 1/2, ∀i), but as the system is cooled past a critical temperature, T_c, each node
begins to move predominantly into one or the other of the two bins (as evidenced by the drift of the spin averages towards 1 or 0). The changes in the spin averages cause H to decrease rapidly in the vicinity of T_c.

TABLE 1. Comparison of SSA and MFA on Graph Bipartitioning

                                             G1        G2        G3
    Nodes/Edges                            83/115   100/200   100/400
    Solution Value (H_MFA / H_SSA)          0.762     1.078     1.030
    Relaxation Iterations (I_MFA / I_SSA)   0.187     0.063     0.019

[Figure 2: spin averages ⟨s_i⟩ plotted against log10(T), showing the drift away from 1/2 below the critical temperature]
Figure 2. The Effect of Decreasing Temperature on Spin Averages
To analyze the effect of temperature on the spin averages, the behavior of a cluster
C of spins is idealized with the assumptions:
1. The repulsive force which balances the bin contents is negligible within C (η = 0) compared to the attractive forces arising from the graph edges;

2. The attractive force exerted by each edge is replaced with an average attractive force V = Σ_i Σ_j V_ij / E, where E is the number of non-zero weighted edges;

3. On average, each graph node is adjacent to ξ = 2E/N neighboring nodes;

4. The movement of the nodes in a cluster can be uniformly described by some deviation, σ, such that ⟨s⟩ = (1 + σ)/2.
Using this model, a cluster moves according to

    σ = tanh(ξVσ / 2T).   (4)

The solution to (4) is a fixed point with σ = 0 when T is high. This fixed point becomes unstable and the spins diverge from 1/2 when the temperature is lowered to the point where

    ξV / 2T = 1.

Solving shows that T_c = ξV/2, which agrees with our experiments and is within ±20% of those observed in (C. Peterson and J. R. Anderson (1987)).

The point at which the nodes freeze into their respective bins can be found using (4) and assuming a worst-case situation in which a node is attracted by a single edge (i.e. ξ = 1). In this case, the spin deviation will cross an arbitrary threshold, σ_t (usually set ≈ 0.9), when

    T_f = Vσ_t / (ln(1 + σ_t) - ln(1 - σ_t)).
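Both temperatures are easy to check numerically by iterating (4) to its fixed point; the values of V, ξ, and σ_t below are assumed for illustration.

    # Numerical check of the critical and freezing temperatures; V, xi, and
    # sigma_t are illustrative assumptions.
    import numpy as np

    V, xi = 1.0, 4.0
    Tc = xi * V / 2.0
    for T in (1.2 * Tc, 0.8 * Tc):
        sigma = 0.01
        for _ in range(5000):                 # iterate eq. (4) to a fixed point
            sigma = np.tanh(xi * V * sigma / (2.0 * T))
        print(T, sigma)                       # ~0 above Tc, nonzero below

    sigma_t = 0.9                             # freezing threshold with xi = 1
    Tf = V * sigma_t / (np.log(1 + sigma_t) - np.log(1 - sigma_t))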
A cooling schedule is now needed which prescribes how many relaxation iterations, I_a, are required at each temperature to reach equilibrium as the system is annealed from T_c to T_f. Further analysis of (4) shows that I_a ∝ |T_c / (T_c - T)|. Thus, more iterations are required to reach equilibrium around T_c than anywhere else, which agrees with observations made during our experiments. The effect of using fewer iterations at various temperatures was empirically studied using the following procedure:
1. Each spin average was initialized to 1/2 and a small amount of noise was
added to break the symmetry of the problem.
2. An initial temperature T_i was imposed, and the mean field equations were iterated I times for each node.

3. After completing the iterations at T_i, the temperature was quenched to near zero and the mean field equations were again iterated I times to saturate each node at one or zero.
The results of applying this procedure to one of our example graphs with different values of T_i and I are shown in Figure 3. Selecting an initial temperature near T_c and performing sufficient iterations of the mean field equations (I ≥ 40 in this case) gives final bipartitions that are usually near-optimum, while performing an insufficient number of iterations (I = 5 or I = 20) leads to poor solutions. However, even a large number of iterations will not compensate if T_i is set so low that the initial convergence causes the graph to abruptly freeze into a local minimum. The highest
[Figure 3: solution quality vs. log10(T_i) for several iteration counts I, with regions of poor and good solutions marked]
Figure 3. The Effect of Initial Temperature and Iterations on the Solution
quality solutions are found when T_i ≈ T_c and a sufficient number of relaxations are performed, as shown in the traces for I = 40 and I = 90. This seems to perform as well as slow cooling and requires much less effort. Obviously, much of the structure of the optimal solution must be present after equilibrating at T_c. Due to the equivalence we have shown between Hopfield networks and MFA, this fact may be useful in tuning the gains in Hopfield networks to get better performance.
CONCLUSIONS
The concept of mean field annealing (MFA) has been introduced and compared to
stochastic simulated annealing (SSA) which it closely resembles in both derivation
and implementation. In the graph bipartitioning application, we saw the level
of optimization achieved by MFA was comparable to that achieved by SSA, but
1-2 orders of magnitude fewer relaxation iterations were required. This speedup
is achieved because the average values of the discrete degrees of freedom used by
MFA relax to their equilibrium values much faster than the corresponding Markov
chain employed in SSA. We have seen similar results when applying MFA to other
problems including N-way graph partitioning (D. E. Van den Bout and T. K. Miller
III (1988)), restoration of range and luminance images (Griff Bilbro and Wesley
Snyder (1988)), and image halftoning (T. K. Miller III and D. E. Van den Bout
(1989)). As was shown, the MFA algorithm can be formulated as a parallel iterative
procedure, so it should also perform well in parallel processing environments. This
has been verified by successfully porting MFA to a ZIP array processor, a 64-node
NCUBE hypercube computer, and a 10-processor Sequent Balance shared-memory
multiprocessor with near-linear speedups in each case.
In addition to the speed advantages of MFA, the fact that the system state is
represented by continuous variables allows the use of simple analytic techniques
to characterize the system dynamics. The dynamics of the MFA algorithm were
examined for the problem of graph bipartitioning, revealing the existence of a critical temperature, T_c, at which optimization begins to occur. It was also experimentally determined that MFA found better solutions when annealing began near T_c rather
than at some lower temperature. Due to the correspondence shown between MFA
and Hopfield networks, the critical temperature may be of use in setting the neural
gains so that better solutions are found.
Acknowledgements
This work was partially supported by the North Carolina State University Center for
Communications and Signal Processing and Computer Systems Laboratory, and by
the Office of Basic Energy Sciences, and the Office of Technology Support Programs,
U.S. Department of Energy, under contract No. DE-AC05-840R21400 with Martin
Marietta Energy Systems, Inc.
References
Griff Bilbro and Wesley Snyder (1988) Image restoration by mean field annealing. In Advances in Neural Information Processing Systems.
D. E. Van den Bout and T. K. Miller III (1988) Graph partitioning using annealed neural networks. Submitted to IEEE Trans. on Circuits and Systems.
J. J. Hopfield and D. W. Tank (1985) Neural computation of decisions in optimization problems. Biological Cybernetics, 52, 141-152.
T. K. Miller III and D. E. Van den Bout (1989) Image halftoning by mean field
annealing. Submitted to ICNN'89.
S. Kirkpatrick, C. Gelatt, and M. Vecchi (1983) Optimization by simulated annealing. Science, 220(4598), 671-680.
C. Peterson and J. R. Anderson (1987) Neural Networks and NP-complete Optimization Problems: a Performance Study on the Graph Bisection Problem. Technical Report MCC-EI-287-87, MCC.
D. J. Thouless, P. W. Anderson, and R. G. Palmer (1977) Solution of 'solvable model of a spin glass'. Phil. Mag., 35(3), 593-601.
Monotonicity Hints
Joseph Sill
Computation and Neural Systems program
California Institute of Technology
email: joe@cs.caltech.edu
Yaser S. Abu-Mostafa
EE and CS Departments
California Institute of Technology
email: yaser@cs.caltech.edu
Abstract
A hint is any piece of side information about the target function to
be learned. We consider the monotonicity hint, which states that
the function to be learned is monotonic in some or all of the input
variables. The application of monotonicity hints is demonstrated
on two real-world problems: a credit card application task, and a
problem in medical diagnosis. A measure of the monotonicity error
of a candidate function is defined and an objective function for the
enforcement of monotonicity is derived from Bayesian principles.
We report experimental results which show that using monotonicity
hints leads to a statistically significant improvement in performance
on both problems.
1 Introduction
Researchers in pattern recognition, statistics, and machine learning often draw
a contrast between linear models and nonlinear models such as neural networks.
Linear models make very strong assumptions about the function to be modelled,
whereas neural networks are said to make no such assumptions and can in principle
approximate any smooth function given enough hidden units. Between these two
extremes, there exists a frequently neglected middle ground of nonlinear models
which incorporate strong prior information and obey powerful constraints.
A monotonic model is one example which might occupy this middle area. Monotonic
models would be more flexible than linear models but still highly constrained. Many
applications arise in which there is good reason to believe the target function is
monotonic in some or all input variables. In screening credit card applicants, for
instance, one would expect that the probability of default decreases monotonically
with the applicant's salary. It would be very useful, therefore, to be able to constrain
a nonlinear model to obey monotonicity.
The general framework for incorporating prior information into learning is well
established and is known as learning from hints [1]. A hint is any piece of information about the target function beyond the available input-output examples. Hints can improve the performance of learning models by reducing capacity without sacrificing
approximation ability [2]. Invariances in character recognition [3] and symmetries in
financial-market forecasting [4] are some of the hints which have proven beneficial in
real-world learning applications. This paper describes the first practical applications
of monotonicity hints. The method is tested on two noisy real-world problems: a
classification task concerned with credit card applications and a regression problem
in medical diagnosis.
Section 2 derives, from Bayesian principles, an appropriate objective function for simultaneously enforcing monotonicity and fitting the data. Section 3 describes the details and results of the experiments. Section 4 analyzes the results and discusses possible future work.
2 Bayesian Interpretation of Objective Function
Let x be a vector drawn from the input distribution and x' be such that

    ∀j ≠ i,  x'_j = x_j   (1)
    x'_i > x_i   (2)

The statement that f is monotonically increasing in input variable x_i means that for all such x, x' defined as above,

    f(x') ≥ f(x).   (3)
Decreasing monotonicity is defined similarly.
We wish to define a single scalar measure of the degree to which a particular candidate function y obeys monotonicity in a set of input variables.
One such natural measure, the one used in the experiments in Section 3, is defined
in the following way: Let x be an input vector drawn from the input distribution.
Let i be the index of an input variable randomly chosen from a uniform distribution over those variables for which monotonicity holds. Define a perturbation
distribution, e.g., U[0,1], and draw δx_i from this distribution. Define x' such that

    ∀j ≠ i,  x'_j = x_j   (4)
    x'_i = x_i + sgn(i) δx_i   (5)
where sgn(i) = 1 or -1 depending on whether f is monotonically increasing or decreasing in variable i. We will call E_h the monotonicity error of y on the input pair (x, x'):

    E_h = 0                   if y(x') ≥ y(x)
    E_h = (y(x) - y(x'))²     if y(x') < y(x)   (6)
Our measure of y's violation of monotonicity is E[E_h], where the expectation is taken with respect to the random variables x, i and δx_i.
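This expectation can be estimated by Monte Carlo sampling exactly as described above; in the sketch below, the model interface y, the input sampler, and the U[0,1] perturbation are assumptions made for illustration.

    # Sketch of a Monte Carlo estimate of E[E_h]; y, sample_x, and the
    # perturbation distribution are illustrative assumptions.
    import numpy as np

    def monotonicity_error(y, sample_x, mono_idx, sgn, n_pairs=1000, seed=0):
        rng = np.random.default_rng(seed)
        total = 0.0
        for _ in range(n_pairs):
            x = sample_x(rng)
            i = rng.choice(mono_idx)              # variable asserted monotonic
            x2 = x.copy()
            x2[i] += sgn[i] * rng.uniform(0.0, 1.0)   # perturbation delta_x_i
            diff = y(x2) - y(x)                   # should be >= 0 for hint consistency
            total += min(diff, 0.0) ** 2          # E_h of eq. (6)
        return total / n_pairs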
We believe that the best possible approximation to f given the architecture used
is probably approximately monotonic. This belief may be quantified in a prior
distribution over the candidate functions implementable by the architecture:

    P(y) ∝ exp(-λ E[E_h]).   (7)

This distribution represents the a priori probability density, or likelihood, assigned to a candidate function with a given level of monotonicity error. The probability that a function is the best possible approximation to f decreases exponentially with the increase in monotonicity error. λ is a positive constant which indicates how strong our bias is towards monotonic functions.
In addition to obeying prior information, the model should fit the data well. For
classification problems, we take the network output y to represent the probability
of class c = 1 conditioned on the observation of the input vector (the two possible
classes are denoted by 0 and 1). We wish to pick the most probable model given the
data. Equivalently, we may choose to maximize log(P(model|data)). Using Bayes' Theorem,

    log(P(model|data)) ∝ log(P(data|model)) + log(P(model))   (8)

    = Σ_{m=1}^M [c_m log(y_m) + (1 - c_m) log(1 - y_m)] - λ E[E_h].   (9)
For continuous-output regression problems, we interpret y as the conditional mean of the observed output t given the observation of x. If we assume constant-variance gaussian noise, then by the same reasoning as in the classification case, the objective function to be maximized is:

    - Σ_{m=1}^M (y_m - t_m)² - λ E[E_h].   (10)
The Bayesian prior leads to a familiar form of objective function, with the first
term reflecting the desire to fit the data and a second term penalizing deviation
from monotonicity.
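Negated so that they can be minimized, the two objectives take the form sketched below; the variable names are assumptions and hint_err stands for an estimate of E[E_h] such as the one computed above.

    # The hint-penalized objectives of eqs. (9) and (10) written as losses;
    # names are illustrative and hint_err is an estimate of E[E_h].
    import numpy as np

    def classification_loss(y, c, hint_err, lam):
        eps = 1e-12                            # guard the logarithms
        nll = -np.sum(c * np.log(y + eps) + (1 - c) * np.log(1 - y + eps))
        return nll + lam * hint_err            # cf. eq. (9)

    def regression_loss(y, t, hint_err, lam):
        return np.sum((y - t) ** 2) + lam * hint_err   # cf. eq. (10)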
3 Experimental Results
Both databases were obtained via FTP from the machine learning database
repository maintained by UC-Irvine.¹
The credit card task is to predict whether or not an applicant will default. For
each of 690 applicant case histories, the database contains 15 features describing
the applicant plus the class label indicating whether or not a default ultimately
occurred. The meaning of the features is confidential for proprietary reasons. Only
the 6 continuous features were used in the experiments reported here. 24 of the case
histories had at least one feature missing. These examples were omitted, leaving
666 which were used in the experiments. The two classes occur with almost equal
frequency; the split is 55%-45%.
Intuition suggests that the classification should be monotonic in the features. Although the specific meanings of the continuous features are not known, we assume
here that they represent various quantities such as salary, assets, debt, number of
years at current job, etc. Common sense dictates that the higher the salary or the
lower the debt, the less likely a default is, all else being equal. Monotonicity in all
features was therefore asserted.
The motivation in the medical diagnosis problem is to determine the extent to
which various blood tests are sensitive to disorders related to excessive drinking.
Specifically, the task is to predict the number of drinks a particular patient consumes
per day given the results of 5 blood tests. 345 patient histories were collected, each
consisting of the 5 test results and the daily number of drinks. The "number of
drinks" variable was normalized to have variance 1. This normalization makes the
results easier to interpret, since a trivial mean-squared-error performance of 1.0
may be obtained by simply predicting for mean number of drinks for each patient,
irrespective of the blood tests.
The justification for monotonicity in this case is based on the idea that an abnormal
result for each test is indicative of excessive drinking, where abnormal means either
abnormally high or abnormally low.
In all experiments, batch-mode backpropagation with a simple adaptive learning
rate scheme was used.² Several methods were tested. The performance of a linear perceptron was observed for benchmark purposes. For the experiments using
nonlinear methods, a single hidden layer neural network with 6 hidden units and
direct input-output connections was used on the credit data; 3 hidden units and direct input-output connections were used for the liver task . The most basic method
tested was simply to train the network on all the training data and optimize the
objective function as much as possible. Another technique tried was to use a validation set to avoid overfitting. Training for all of the above models was performed
by maximizing only the first term in the objective function, i.e., by maximizing the
log-likelihood of the data (minimizing training error). Finally, training the networks
with the monotonicity constraints was performed, using an approximation to (9)
and (10).

¹They may be obtained as follows: ftp ics.uci.edu, cd pub/machine-learning-databases. The credit data is in the subdirectory /credit-screening, while the liver data is in the subdirectory /liver-disorders.

²If the previous iteration resulted in an increase in likelihood, the learning rate was increased by 3%. If the likelihood decreased, the learning rate was cut in half.
A leave-k-out procedure was used in order to get statistically significant comparisons of the difference in performance. For each method, the data was randomly
partitioned 200 different ways (The split was 550 training, 116 test for the credit
data; 270 training and 75 test for the liver data). The results shown in Table 1 are
averages over the 200 different partitions.
In the early stopping experiments, the training set was further subdivided into a set
(450 for the credit data, 200 for the liver data) used for direct training and a second
validation set (100 for the credit data, 70 for the liver data). The classification
error on the validation set was monitored over the entire course of training, and the
values of the network weights at the point of lowest validation error were chosen as
the final values.
The process of training the networks with the monotonicity hints was divided into
two stages. Since the meanings of the features were inaccessible, the directions of monotonicity were not known a priori. These directions were determined by
training a linear perceptron on the training data for 300 iterations and observing
the resulting weights. A positive weight was taken to imply increasing monotonicity,
while a negative weight meant decreasing monotonicity.
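A sketch of this first stage follows; for brevity, the 300 backpropagation iterations of the linear perceptron are replaced here by an ordinary least-squares fit, which is an assumption of the sketch rather than the paper's procedure.

    # Sketch of direction determination: fit a linear model and read the
    # monotonicity directions off the weight signs (least squares stands in
    # for the paper's 300 perceptron training iterations).
    import numpy as np

    def monotonicity_directions(X, y):
        Xb = np.hstack([X, np.ones((len(X), 1))])    # append a bias column
        w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
        return np.sign(w[:-1]).astype(int)           # +1 increasing, -1 decreasing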
Once the directions of monotonicity were determined, the networks were trained with the monotonicity hints. For the credit problem, an approximation to the theoretical objective function (9) was maximized:

    Σ_{m=1}^M [c_m log(y_m) + (1 - c_m) log(1 - y_m)] - (λ/N) Σ_{n=1}^N E_{h,n}.   (11)

For the liver problem, objective function (10) was approximated by

    - Σ_{m=1}^M (y_m - t_m)² - (λ/N) Σ_{n=1}^N E_{h,n}.   (12)
E_{h,n} represents the network's monotonicity error on a particular pair of input vectors x, x'. Each pair was generated according to the method described in Section 2. The input distribution was modelled as a joint gaussian with a covariance matrix estimated from the training data.

For each input variable, 500 pairs of vectors representing monotonicity in that variable were generated. This yielded a total of N = 3000 hint example pairs for the credit problem and N = 2500 pairs for the liver problem. λ was chosen to be 5000. No optimization of λ was attempted; 5000 was chosen somewhat arbitrarily as simply a high value which would greatly penalize non-monotonicity. Hint generalization, i.e. monotonicity test error, was measured by using 100 pairs of vectors for each variable which were not trained on but whose monotonicity error was calculated. For contrast, monotonicity test error was also monitored for the two-layer
networks trained only on the input-output examples. Figure 1 shows test error and
monotonicity error vs. training time for the credit data for the networks trained
only on the training data (i.e., no hints), averaged over the 200 different data splits.
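The hint-pair generation itself can be sketched as below; fitting the gaussian by sample mean and covariance is an assumption about details the paper leaves open.

    # Sketch of hint-example generation: model the inputs as a joint gaussian
    # fit to the training data and draw perturbed pairs per monotonic variable
    # (500 training / 100 test pairs per variable in the experiments).
    import numpy as np

    def make_hint_pairs(X_train, sgn, pairs_per_var=500, seed=0):
        rng = np.random.default_rng(seed)
        mu, cov = X_train.mean(axis=0), np.cov(X_train, rowvar=False)
        pairs = []
        for i, s_i in enumerate(sgn):              # one block per input variable
            x = rng.multivariate_normal(mu, cov, size=pairs_per_var)
            x2 = x.copy()
            x2[:, i] += s_i * rng.uniform(0.0, 1.0, size=pairs_per_var)
            pairs.append((x, x2))                  # y(x2) should be >= y(x)
        return pairs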
Monotonicity Hints
639
[Figure 1 plot omitted: "Test Error and Monotonicity Error vs. Iteration Number"; curves testcurve.data and hintcurve.data; x-axis iteration number 500-5000, y-axis error 0.05-0.3.]
Figure 1: The violation of monotonicity tracks the overfitting occurring during training.
The monotonicity error is multiplied by a factor of 10 in the figure to make it more
easily visible. The figure indicates a substantial correlation between overfitting and
monotonicity error during the course of training. The curves for the liver data look
similar but are omitted due to space considerations.
Method               training error   test error      hint test error
Linear               22.7% ± 0.1%     23.7% ± 0.2%    -
6-6-1 net            15.2% ± 0.1%     24.6% ± 0.3%    .005115
6-6-1 net, w/val.    18.8% ± 0.2%     23.4% ± 0.3%    -
6-6-1 net, w/hint    18.7% ± 0.1%     21.8% ± 0.2%    .000020

Table 1: Performance of methods on credit problem
The performance of each method is shown in Tables 1 and 2. Without early stopping,
the two-layer network overfits and performs worse than a linear model. Even with
early stopping, the performance of the linear model and the two-layer network are
almost the same; the difference is not statistically significant. This similarity in performance is consistent with the thesis of a monotonic target function. A monotonic
classifier may be thought of as a mildly nonlinear generalization of a linear classifier.
The two-layer network does have the advantage of being able to implement some
of this nonlinearity. However, this advantage is cancelled out (and in other cases
could be outweighed) by the overfitting resulting from excessive and unnecessary
degrees of freedom. When monotonicity hints are introduced, much of this unnecessary freedom is eliminated, although the network is still allowed to implement
monotonic nonlinearities. Accordingly, a modest but clearly statistically significant
improvement on the credit problem (nearly 2%) results from the introduction of
Method               training error   test error    hint test error
Linear               .802 ± .005      .873 ± .013   -
5-3-1 net            .640 ± .003      .920 ± .014   .004967
5-3-1 net, w/val.    .758 ± .008      .871 ± .013   -
5-3-1 net, w/hint    .758 ± .003      .830 ± .013   .000002

Table 2: Performance of methods on liver problem
monotonicity hints. Such an improvement could translate into a substantial increase in profit for a bank. Monotonicity hints also significantly improve test error
on the liver problem; 4% more of the target variance is explained.
4
Conclusion
This paper has shown that monotonicity hints can significantly improve the
performance of a neural network on two noisy real-world tasks. It is worthwhile
to note that the beneficial effect of imposing monotonicity does not necessarily
imply that the target function is entirely monotonic. If there exist some non-monotonicities in the target function, then monotonicity hints may result in some
decrease in the model's ability to implement this function. It may be, though, that
this penalty is outweighed by the improved estimation of model parameters due to
the decrease in model complexity. Therefore, the use of monotonicity hints probably
should be considered in cases where the target function is thought to be at least
roughly monotonic and the training examples are limited in number and noisy.
Future work may include the application of monotonicity hints to other real world
problems and further investigations into techniques for enforcing the hints.
Acknowledgements
The authors thank Eric Bax, Zehra Cataltepe, Malik Magdon-Ismail, and Xubo
Song for many useful discussions.
Online learning from finite training sets:
An analytical case study
Peter Sollich*
Department of Physics
University of Edinburgh
Edinburgh EH9 3JZ, U.K.
P.Sollich@ed.ac.uk
David Barber t
Neural Computing Research Group
Department of Applied Mathematics
Aston University
Birmingham B4 7ET, U.K.
D.Barber@aston.ac.uk
Abstract
We analyse online learning from finite training sets at non-infinitesimal learning rates η. By an extension of statistical mechanics methods, we obtain exact results for the time-dependent
generalization error of a linear network with a large number of
weights N. We find, for example, that for small training sets of
size p ≈ N, larger learning rates can be used without compromising asymptotic generalization performance or convergence speed.
Encouragingly, for optimal settings of η (and, less importantly,
weight decay λ) at given final learning time, the generalization performance of online learning is essentially as good as that of offline
learning.
1
INTRODUCTION
The analysis of online (gradient descent) learning, which is one of the most common
approaches to supervised learning found in the neural networks community, has
recently been the focus of much attention [1]. The characteristic feature of online
learning is that the weights of a network ('student') are updated each time a new
training example is presented, such that the error on this example is reduced. In
offline learning, on the other hand, the total error on all examples in the training
set is accumulated before a gradient descent weight update is made. Online and
* Royal Society Dorothy Hodgkin Research Fellow
t Supported by EPSRC grant GR/J75425: Novel Developments in Learning Theory
for Neural Networks
offline learning are equivalent only in the limiting case where the learning rate
η → 0 (see, e.g., [2]). The main quantity of interest is normally the evolution of
the generalization error: How well does the student approximate the input-output
mapping ('teacher') underlying the training examples after a given number of weight
updates?
Most analytical treatments of online learning assume either that the size of the
training set is infinite, or that the learning rate η is vanishingly small. Both of
these restrictions are undesirable: in practice, most training sets are finite, and non-infinitesimal values of η are needed to ensure that the learning process converges
after a reasonable number of updates. General results have been derived for the
difference between online and offline learning to first order in η, which apply to
training sets of any size (see, e. g., [2]). These results, however, do not directly
address the question of generalization performance. The most explicit analysis of
the time evolution of the generalization error for finite training sets was provided by
Krogh and Hertz [3] for a scenario very similar to the one we consider below. Their
η → 0 (i.e., offline) calculation will serve as a baseline for our work. For finite η,
progress has been made in particular for so-called soft committee machine network
architectures [4, 5], but only for the case of infinite training sets.
Our aim in this paper is to analyse a simple model system in order to assess how the
combination of non-infinitesimal learning rates η and finite training sets (containing
α examples per weight) affects online learning. In particular, we will consider
the dependence of the asymptotic generalization error on η and α, the effect of
finite α on both the critical learning rate and the learning rate yielding optimal
convergence speed, and optimal values of η and weight decay λ. We also compare
the performance of online and offline learning and discuss the extent to which infinite
training set analyses are applicable for finite α.
2
MODEL AND OUTLINE OF CALCULATION
We consider online training of a linear student network with input-output relation

    y = w^T x / √N.

Here x is an N-dimensional vector of real-valued inputs, y the single real output and w the weight vector of the network. ^T denotes the transpose of a vector and the factor 1/√N is introduced for convenience. Whenever a training example (x, y) is presented to the network, its weight vector is updated along the gradient of the squared error on this example, i.e., by Δw ∝ −η ∇_w ½(y − w^T x/√N)², where η is the learning rate. We are interested in online learning from finite training sets, where for each update an example is randomly chosen from a given set {(x^μ, y^μ), μ = 1...p} of p training examples. (The case of cyclical presentation of examples [6] is left for future study.) If example μ is chosen for update n, the weight vector is changed to

    w_{n+1} = w_n + (η/√N) x^μ (y^μ − w_n^T x^μ/√N) − (ηγ/N) w_n.    (1)
Here we have also included a weight decay γ. We will normally parameterize the
strength of the weight decay in terms of λ = γα (where α = p/N is the number
of examples per weight), which plays the same role as the weight decay commonly
used in offline learning [3]. For simplicity, all student weights are assumed to be
initially zero, i.e., w_{n=0} = 0.
The main quantity of interest is the evolution of the generalization error of the student. We assume that the training examples are generated by a linear 'teacher', i.e., y^μ = w_*^T x^μ/√N + ξ^μ, where ξ^μ is zero mean additive noise of variance σ². The teacher weight vector is taken to be normalized to w_*² = N for simplicity, and the input vectors are assumed to be sampled randomly from an isotropic distribution over the hypersphere x² = N. The generalization error, defined as the average of the squared error between student and teacher outputs for random inputs, is then

    ε_g = v_n² / (2N),  where v_n = w_n − w_*.
In order to make the scenario analytically tractable, we focus on the limit N → ∞ of a large number of input components and weights, taken at constant number of examples per weight α = p/N and updates per weight ('learning time') t = n/N. In this limit, the generalization error ε_g(t) becomes self-averaging and can be calculated by averaging both over the random selection of examples from a given training set and over all training sets. Our results can be straightforwardly extended to the case of perceptron teachers with a nonlinear transfer function, as in [7].
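A direct simulation of this setup is a useful check on the analytical results at moderate N. The sketch below assumes the update rule (1) as reconstructed above; in particular the 1/N scaling of the weight-decay term is our reading of the garbled original:

    import numpy as np

    def simulate(N=50, alpha=2.0, eta=0.5, gamma=0.0, sigma2=0.1, t_max=30, seed=0):
        # Online learning of a linear student from p = alpha*N fixed examples;
        # returns eps_g sampled once per unit of learning time t = n/N.
        rng = np.random.default_rng(seed)
        p = int(alpha * N)
        w_star = rng.standard_normal(N)
        w_star *= np.sqrt(N) / np.linalg.norm(w_star)   # teacher with w_*^2 = N
        X = rng.standard_normal((p, N))
        X *= np.sqrt(N) / np.linalg.norm(X, axis=1, keepdims=True)  # inputs on x^2 = N
        y = X @ w_star / np.sqrt(N) + np.sqrt(sigma2) * rng.standard_normal(p)
        w, errs = np.zeros(N), []
        for n in range(int(t_max * N)):
            mu = rng.integers(p)                        # random example per update
            delta = y[mu] - X[mu] @ w / np.sqrt(N)
            w += eta / np.sqrt(N) * delta * X[mu] - eta * gamma / N * w  # rule (1)
            if n % N == 0:
                v = w - w_star
                errs.append(v @ v / (2 * N))            # eps_g = v^2/(2N)
        return np.array(errs)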
The usual statistical mechanical approach to the online learning problem expresses the generalization error in terms of 'order parameters' like R = (1/N) w_n^T w_*, whose (self-averaging) time evolution is determined from appropriately averaged update equations. This method works because for infinite training sets, the average order parameter updates can again be expressed in terms of the order parameters alone. For finite training sets, on the other hand, the updates involve new order parameters such as R₁ = (1/N) w_n^T A w_*, where A is the correlation matrix of the training inputs, A = (1/N) Σ_{μ=1}^{p} x^μ (x^μ)^T. Their time evolution is in turn determined by order parameters involving higher powers of A, yielding an infinite hierarchy of order parameters. We solve this problem by considering instead order parameter (generating) functions [8] such as a generalized form of the generalization error ε(t; h) = (1/2N) v_n^T exp(hA) v_n. This allows powers of A to be obtained by differentiation with respect to h, resulting in a closed system of (partial differential) equations for ε(t; h) and R(t; h) = (1/N) w_n^T exp(hA) w_*.
The resulting equations and details of their solution will be given in a future publication. The final solution is most easily expressed in terms of the Laplace transform of the generalization error

    ε̂_g(z) = (η/α) ∫ dt ε_g(t) e^{−z(η/α)t} = [ε₁(z) + η ε₂(z) + η² ε₃(z)] / [1 − η ε₄(z)].    (2)

The functions ε_i(z) (i = 1...4) can be expressed in closed form in terms of α, σ² and λ (and, of course, z). The Laplace transform (2) yields directly the asymptotic value of the generalization error, ε_∞ = ε_g(t → ∞) = lim_{z→0} z ε̂_g(z), which can be calculated analytically. For finite learning times t, ε_g(t) is obtained by numerical inversion of the Laplace transform.
3
RESULTS AND DISCUSSION
We now discuss the consequences of our main result (2), focusing first on the asymptotic generalization error ε_∞, then the convergence speed for large learning times, and finally the behaviour at small t. For numerical evaluations, we generally take σ² = 0.1, corresponding to a sizable noise-to-signal ratio of √0.1 ≈ 0.32.
[Figure 1: surface plots of ε_∞ against η and λ for α = 0.5, 1 and 2.]
Figure 1: Asymptotic generalization error ε_∞ vs η and λ. α as shown, σ² = 0.1.
The asymptotic generalization error ε_∞ is shown in Fig. 1 as a function of η and λ for α = 0.5, 1, 2. We observe that it is minimal for λ = σ² and η = 0, as expected from corresponding results for offline learning [3]¹. We also read off that for fixed λ, ε_∞ is an increasing function of η: the larger η, the more the weight updates tend to overshoot the minimum of the (total, i.e., offline) training error. This causes a diffusive motion of the weights around their average asymptotic values [2] which increases ε_∞. In the absence of weight decay (λ = 0) and for α < 1, however, ε_∞ is independent of η. In this case the training data can be fitted perfectly; every term in the total sum-of-squares training error is then zero and online learning does not lead to weight diffusion because all individual updates vanish. In general, the relative increase ε_∞(η)/ε_∞(η = 0) − 1 due to nonzero η depends significantly on α. For η = 1 and α = 0.5, for example, this increase is smaller than 6% for all λ (at σ² = 0.1), and for α = 1 it is at most 13%. This means that in cases where training data is limited (p ≈ N), η can be chosen fairly large in order to optimize learning speed, without seriously affecting the asymptotic generalization error. In the large α limit, on the other hand, one finds ε_∞ = (σ²/2)[1/α + η/(2 − η)]. The relative increase over the value at η = 0 therefore grows linearly with α; already for α = 2, increases of around 50% can occur for η = 1.
Fig. 1 also shows that ε_∞ diverges as η approaches a critical learning rate η_c: as η → η_c, the 'overshoot' of the weight update steps becomes so large that the weights eventually diverge. From the Laplace transform (2), one finds that η_c is determined by η_c ε₄(z = 0) = 1; it is a function of α and λ only. As shown in Fig. 2b-d, η_c increases with λ. This is reasonable, as the weight decay reduces the length of the weight vector at each update, counteracting potential weight divergences. In the small and large α limit, one has η_c = 2(1 + λ) and η_c = 2(1 + λ/α), respectively. For constant λ, η_c therefore decreases² with α (Fig. 2b-d).
We now turn to the large t behaviour of the generalization error ε_g(t). For small η, the most slowly decaying contribution (or 'mode') to ε_g(t) varies as exp(−ct), its
¹The optimal value of the unscaled weight decay decreases with α as γ = σ²/α, because for large training sets there is less need to counteract noise in the training data by using a large weight decay.
²Conversely, for constant γ, η_c increases with α from 2(1 + γα) to 2(1 + γ): for large α, the weight decay is applied more often between repeat presentations of a training example that would otherwise cause the weights to diverge.
decay constant c = η[λ + (√α − 1)²]/α scaling linearly with η, the size of the weight updates, as expected (Fig. 2a). For small α, the condition ct ≫ 1 for ε_g(t) to have reached its asymptotic value ε_∞ is η(1 + λ)(t/α) ≫ 1 and scales with t/α, which is the number of times each training example has been used. For large α, on the other hand, the condition becomes ηt ≫ 1: the size of the training set drops out since convergence occurs before repetitions of training examples become significant.
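For orientation, these convergence-time estimates are easy to tabulate; a small sketch using the decay constant c = η[λ + (√α − 1)²]/α quoted above:

    import numpy as np

    def decay_constant(eta, lam, alpha):
        # Small-eta decay constant of the slowest mode; reaching the
        # asymptotic error requires c*t >> 1.
        return eta * (lam + (np.sqrt(alpha) - 1.0) ** 2) / alpha

    for alpha in (0.5, 1.0, 2.0, 10.0):
        c = decay_constant(eta=0.5, lam=0.1, alpha=alpha)
        print(f"alpha={alpha:5.1f}  c={c:.3f}  t ~ {3.0 / c:.1f} for c*t ~ 3")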
For larger η, the picture changes due to a new 'slow mode' (arising from the denominator of (2)). Interestingly, this mode exists only for η above a finite threshold η_min = 2/(α^{1/2} + α^{−1/2} − 1). For finite α, it could therefore not have been predicted from a small η expansion of ε_g(t). Its decay constant c_slow decreases to zero as η → η_c, and crosses that of the normal mode at η_x(α, λ) (Fig. 2a). For η > η_x, the slow mode therefore determines the convergence speed for large t, and fastest convergence is obtained for η = η_x. However, it may still be advantageous to use lower values of η in order to lower the asymptotic generalization error (see below); values of η > η_x would deteriorate both convergence speed and asymptotic performance. Fig. 2b-d shows the dependence of η_min, η_x and η_c on α and λ. For λ not too large, η_x has a maximum at α ≈ 1 (where η_x ≈ η_c), while decaying as η_x = 1 + 2α^{−1/2} ≈ ½η_c for larger α. This is because for α ≈ 1 the (total training) error surface is very anisotropic around its minimum in weight space [9]. The steepest directions determine η_c and convergence along them would be fastest for η = ½η_c (as in the isotropic case). However, the overall convergence speed is determined by the shallow directions, which require maximal η ≈ η_c for fastest convergence.
Consider now the small t behaviour of ε_g(t). Fig. 3 illustrates the dependence of ε_g(t) on η; comparison with simulation results for N = 50 clearly confirms our calculations and demonstrates that finite N effects are not significant even for such fairly small N. For α = 0.7 (Fig. 3a), we see that nonzero η acts as effective update noise, eliminating the minimum in ε_g(t) which corresponds to over-training [3]. ε_∞ is also seen to be essentially independent of η, as predicted for the small value of λ = 10⁻⁴ chosen. For α = 5, Fig. 3b clearly shows the increase of ε_∞ with η. It also illustrates how convergence first speeds up as η is increased from zero and then slows down again as η_c ≈ 2 is approached.
Above, we discussed optimal settings of η and λ for minimal asymptotic generalization error ε_∞. Fig. 4 shows what happens if we minimize ε_g(t) instead for a given final learning time t, corresponding to a fixed amount of computational effort for training the network. As t increases, the optimal η decreases towards zero as required by the tradeoff between asymptotic performance and convergence
[Figure 2: panel (a) sketches the decay constants defining η_min, η_x and η_c; panels (b)-(d) show η_min, η_x and η_c against α (0 to 5) for λ = 0, λ = 0.1 and λ = 1.]
Figure 2: Definitions of η_min, η_x and η_c, and their dependence on α (for λ as shown).
[Figure 3: panels (a) α = 0.7 and (b) α = 5; ε_g plotted against t.]
Figure 3: ε_g vs t for different η. Simulations for N = 50 are shown by symbols (standard errors less than symbol sizes). λ = 10⁻⁴, σ² = 0.1, α as shown. The learning rate η increases from below (at large t) over the range (a) 0.5...1.95, (b) 0.5...1.75.
[Figure 4: panels (a)-(c) plotted against final learning time t = 0 to 50: (a) optimal η, (b) optimal λ, (c) resulting ε_g.]
Figure 4: Optimal η and λ vs given final learning time t, and resulting ε_g. Solid/dashed lines: α = 1 / α = 2; bold/thin lines: online/offline learning. σ² = 0.1. Dotted lines in (a): fits of the form η = (a + b ln t)/t to the optimal η for online learning.
speed. Minimizing ε_g(t) ≈ ε_∞ + const · exp(−ct) ≈ c₁ + ηc₂ + c₃ exp(−c₄ηt) leads to η_opt = (a + b ln t)/t (with some constants a, b, c₁...c₄). Although derived for small η, this functional form (dotted lines in Fig. 4a) also provides a good description down to fairly small t, where η_opt becomes large. The optimal weight decay λ increases³ with t towards the limiting value σ². However, optimizing λ is much less important than choosing the right η: minimizing ε_g(t) for fixed λ yields almost the same generalization error as optimizing both η and λ (we omit detailed results here⁴). It is encouraging to see from Fig. 4c that after as few as t = 10 updates per weight with optimal η, the generalization error is almost indistinguishable from its optimal value for t → ∞ (this also holds if λ is kept fixed). Optimization of the learning rate should therefore be worthwhile in most practical scenarios.
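The functional form η_opt = (a + b ln t)/t can be fitted by ordinary least squares, since η_opt · t is linear in (a, b); a sketch with purely illustrative measurements (not values from the paper):

    import numpy as np

    # Hypothetical optimal learning rates measured in short test runs.
    t = np.array([2.0, 4.0, 6.0, 8.0, 10.0])
    eta_opt = np.array([0.90, 0.55, 0.42, 0.34, 0.29])

    # eta_opt * t = a + b*ln(t): solve the linear least-squares problem.
    A = np.column_stack([np.ones_like(t), np.log(t)])
    (a, b), *_ = np.linalg.lstsq(A, eta_opt * t, rcond=None)

    t_large = 50.0
    print(f"predicted eta_opt({t_large:.0f}) = {(a + b * np.log(t_large)) / t_large:.3f}")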
In Fig. 4c, we also compare the performance of online learning to that of offline learning (calculated from the appropriate discrete time version of [3]), again with
³One might have expected the opposite effect of having larger λ at low t in order to 'contain' potential divergences from the larger optimal learning rates η. However, smaller λ tends to make the asymptotic value ε_∞ less sensitive to large values of η, as we saw above, and we conclude that this effect dominates.
⁴For fixed λ < σ², where ε_g(t) has an over-training minimum (see Fig. 3a), the asymptotic behaviour of η_opt changes to η_opt ∝ 1/t (without the ln t factor), corresponding to a fixed effective learning time ηt required to reach this minimum.
optimized values of η and λ for given t. The performance loss from using online instead of offline learning is seen to be negligible. This may seem surprising given the effective noise on weight updates implied by online learning, in particular for small t. However, comparing the respective optimal learning rates (Fig. 4a), we see that online learning makes up for this deficiency by allowing larger values of η to be used (for large α, for example, η_c(offline) = 2/α ≪ η_c(online) = 2).
Finally, we compare our finite α results with those for the limiting case α → ∞. Good agreement exists for any learning time t if the asymptotic generalization error ε_∞(α < ∞) is dominated by the contribution from the nonzero learning rate η (as is the case for α → ∞). In practice, however, one wants η to be small enough to make only a negligible contribution to ε_∞(α < ∞); in this regime, the α → ∞ results are essentially useless.
4
CONCLUSIONS
The main theoretical contribution of this paper is the extension of the statistical mechanics method of order parameter dynamics to the dynamics of order parameter (generating) functions. The results that we have obtained for a simple linear model system are also of practical relevance. For example, the calculated dependence on η of the asymptotic generalization error ε_∞ and the convergence speed shows that, in general, sizable values of η can be used for training sets of limited size (α ≈ 1), while for larger α it is important to keep learning rates small. We also found a simple functional form for the dependence of the optimal η on a given final learning time t. This could be used, for example, to estimate the optimal η for large t from test runs with only a small number of weight updates. Finally, we found that for optimized η online learning performs essentially as well as offline learning, whether or not the weight decay λ is optimized as well. This is encouraging, since online learning effectively induces noisy weight updates. This allows it to cope better than offline learning with the problem of local (training error) minima in realistic neural networks. Online learning has the further advantage that the critical learning rates are not significantly lowered by input distributions with nonzero mean, whereas for offline learning they are significantly reduced [10]. In the future, we hope to extend our approach to dynamic (t-dependent) optimization of η (although performance improvements over optimal fixed η may be small [6]), and to more complicated network architectures in which the crucial question of local minima can be addressed.
References
[1] See for example: The dynamics of online learning. Workshop at NIPS '95.
[2] T. Heskes and B. Kappen. Phys. Rev. A, 44:2718, 1991.
[3] A. Krogh and J. A. Hertz. J. Phys. A, 25:1135, 1992.
[4] D. Saad and S. Solla. Phys. Rev. E, 52:4225, 1995; also in NIPS-8.
[5] M. Biehl and H. Schwarze. J. Phys. A, 28:643-656, 1995.
[6] Z.-Q. Luo. Neur. Comp., 3:226, 1991; T. Heskes and W. Wiegerinck. IEEE Trans. Neur. Netw., 7:919, 1996.
[7] P. Sollich. J. Phys. A, 28:6125, 1995.
[8] L. L. Bonilla, F. G. Padilla, G. Parisi and F. Ritort. Europhys. Lett., 34:159, 1996; Phys. Rev. B, 54:4170, 1996.
[9] J. A. Hertz, A. Krogh and G. I. Thorbergsson. J. Phys. A, 22:2133, 1989.
[10] T. L. H. Watkin, A. Rau and M. Biehl. Rev. Modern Phys., 65:499, 1993.
Bayesian Model Comparison
by Monte Carlo Chaining
David Barber
Christopher M. Bishop
D.Barber@aston.ac.uk
C.M.Bishop@aston.ac.uk
Neural Computing Research Group
Aston University, Birmingham, B4 7ET, U.K.
http://www.ncrg.aston.ac.uk/
Abstract
The techniques of Bayesian inference have been applied with great
success to many problems in neural computing including evaluation
of regression functions, determination of error bars on predictions,
and the treatment of hyper-parameters. However, the problem of
model comparison is a much more challenging one for which current
techniques have significant limitations. In this paper we show how
an extended form of Markov chain Monte Carlo, called chaining,
is able to provide effective estimates of the relative probabilities of
different models. We present results from the robot arm problem
and compare them with the corresponding results obtained using
the standard Gaussian approximation framework.
1
Bayesian Model Comparison
In a Bayesian treatment of statistical inference, our state of knowledge of the values
of the parameters w in a model M is described in terms of a probability distribution
function. Initially this is chosen to be some prior distribution p(w|M), which can be combined with a likelihood function p(D|w, M) using Bayes' theorem to give a posterior distribution p(w|D, M) in the form

    p(w|D, M) = p(D|w, M) p(w|M) / p(D|M),    (1)
where D is the data set. Predictions of the model are obtained by performing
integrations weighted by the posterior distribution.
The comparison of different models Mi is based on their relative probabilities, which
can be expressed, again using Bayes' theorem, in terms of prior probabilities P(Mi)
to give
    P(M_i|D) / P(M_j|D) = p(D|M_i) P(M_i) / [p(D|M_j) P(M_j)],    (2)
and so requires that we be able to evaluate the model evidence p(D|M_i), which corresponds to the denominator in (1). The relative probabilities of different models
can be used to select the single most probable model, or to form a committee of
models, weighed by their probabilities.
It is convenient to write the numerator of (1) in the form exp{ -E(w)}, where E(w)
is an error function. Normalization of the posterior distribution then requires that
    p(D|M) = ∫ exp{−E(w)} dw.    (3)
Generally, it is straightforward to evaluate E(w) for a given value of w, although
it is extremely difficult to evaluate the corresponding model evidence using (3)
since the posterior distribution is typically very small except in narrow regions
of the high-dimensional parameter space, which are unknown a-priori. Standard
numerical integration techniques are therefore inapplicable.
One approach is based on a local Gaussian approximation around a mode of the
posterior (MacKay, 1992). Unfortunately, this approximation is expected to be
accurate only when the number of data points is large in relation to the number of
parameters in the model. In fact it is for relatively complex models, or problems for
which data is scarce, that Bayesian methods have the most to offer. Indeed, Neal
(1996) has argued that, from a Bayesian perspective, there is no reason to limit
the number of parameters in a model, other than for computational reasons. We
therefore consider an approach to the evaluation of model evidence which overcomes
the limitations of the Gaussian framework. For additional techniques and references
to Bayesian model comparison, see Gilks et al. (1995) and Kass and Raftery (1995).
2
Chaining
Suppose we have a simple model M₀ for which we can evaluate the evidence analytically, and for which we can easily generate a sample w^l (where l = 1, ..., L) from the corresponding distribution p(w|D, M₀). Then the evidence for some other model M can be expressed in the form
    p(D|M) / p(D|M₀) = ∫ exp{−E(w) + E₀(w)} p(w|D, M₀) dw ≈ (1/L) Σ_{l=1}^{L} exp{−E(w^l) + E₀(w^l)}.    (4)
Unfortunately, the Monte Carlo approximation in (4) will be poor if the two error
functions are significantly different, since the exponent is dominated by regions
where E is relatively small, for which there will be few samples unless Eo is also small
in those regions. A simple Monte Carlo approach will therefore yield poor results.
This problem is equivalent to the evaluation of free energies in statistical physics,
which is known to be a challenging problem, and where a number of approaches have been developed (Neal, 1993).
Here we discuss one such approach to this problem based on a chain of K successive models M_i which interpolate between M₀ and M, so that the required evidence can be written as

    p(D|M) / p(D|M₀) = [p(D|M₁)/p(D|M₀)] [p(D|M₂)/p(D|M₁)] ⋯ [p(D|M)/p(D|M_K)].    (5)
Each of the ratios in (5) can be evaluated using (4). The goal is to devise a chain
of models such that each successive pair of models has probability distributions
which are reasonably close, so that each of the ratios in (5) can be evaluated accurately, while keeping the total number of links in the chain fairly small to limit the
computational costs.
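Each link of the chain is just the importance-sampling average (4) over samples from the previous model. A minimal sketch (generic function names; the log-space evaluation is ours, for numerical stability):

    import numpy as np

    def log_evidence_ratio(E_new, E_old, samples):
        # ln[p(D|M_new)/p(D|M_old)] estimated from samples w ~ p(w|D, M_old),
        # as in equation (4), using a log-sum-exp for stability.
        logs = np.array([E_old(w) - E_new(w) for w in samples])
        m = logs.max()
        return m + np.log(np.mean(np.exp(logs - m)))

    def log_evidence_chain(E_list, samplers, n_samples=600):
        # Accumulate ln p(D|M)/p(D|M_0) along the chain, equation (5).
        total = 0.0
        for i in range(1, len(E_list)):
            samples = samplers[i - 1](n_samples)   # draws from model i-1
            total += log_evidence_ratio(E_list[i], E_list[i - 1], samples)
        return total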
We have chosen the technique of hybrid Monte Carlo (Duane et al., 1987; Neal,
1993) to sample from the various distributions, since this has been shown to be
effective for sampling from the complex distributions arising with neural network
models (Neal, 1996). This involves introducing Hamiltonian equations of motion in
which the parameters w are augmented by a set of fictitious 'momentum' variables,
which are then integrated using the leapfrog method. At the end of each trajectory
the new parameter vector is accepted with a probability governed by the Metropolis
criterion, and the momenta are replaced using Gibbs sampling. As a check on our
software implementation of chaining, we have evaluated the evidence for a mixture
of two non-isotropic Gaussian distributions, and obtained a result which was within
10% of the analytical solution.
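For reference, a single hybrid Monte Carlo update has the following shape (a generic textbook sketch, not the authors' implementation; grad_E is the gradient of the error function):

    import numpy as np

    def hmc_step(w, E, grad_E, n_leapfrog=300, step=0.01, rng=None):
        # Gibbs-sample momenta, integrate the Hamiltonian dynamics with the
        # leapfrog method, then accept or reject by the Metropolis criterion.
        rng = rng or np.random.default_rng()
        p = rng.standard_normal(w.shape)
        w_new = w.copy()
        p_new = p - 0.5 * step * grad_E(w_new)        # initial half step
        for _ in range(n_leapfrog):
            w_new += step * p_new
            p_new -= step * grad_E(w_new)
        p_new += 0.5 * step * grad_E(w_new)           # undo half of the last step
        dH = E(w_new) - E(w) + 0.5 * (p_new @ p_new - p @ p)
        return (w_new, True) if np.log(rng.random()) < -dH else (w, False)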
3
Application to Neural Networks
We now consider the application of the chaining method to regression problems
involving neural network models. The network corresponds to a function y(x, w),
and the data set consists of N pairs of input vectors Xn and corresponding targets
tn where n = 1, ... , N. Assuming Gaussian noise on the target data, the likelihood
function takes the form
    p(D|w, M) = (β/2π)^{N/2} exp{ −(β/2) Σ_n ||y(x_n; w) − t_n||² },    (6)

where β is a hyper-parameter representing the inverse of the noise variance. We
consider networks with a single hidden layer of 'tanh' units, and linear output
units. Following Neal (1996) we use a diagonal Gaussian prior in which the weights
are divided into groups Wk, where k = 1, ... ,4 corresponding to input-to-hidden
weights, hidden-unit biases, hidden-to-output weights, and output biases. Each
group is governed by a separate 'precision' hyper-parameter α_k, so that the prior
takes the form
    p(w|{α_k}) = (1/Z_w) exp{ −½ Σ_k α_k w_k^T w_k },    (7)

where Z_w is the normalization coefficient. The hyper-parameters {α_k} and β are themselves each governed by hyper-priors given by Gamma distributions of the form

    p(α) ∝ α^{s/2−1} exp(−αs/2ω),    (8)
in which the mean ω and variance 2ω²/s are chosen to give very broad hyper-priors
in reflection of our limited prior knowledge of the values of the hyper-parameters.
We use the hybrid Monte Carlo algorithm to sample from the joint distribution of
parameters and hyper-parameters. For the evaluation of evidence ratios, however,
we consider only the parameter samples, and perform the integrals over hyperparameters analytically, using the fact that the gamma distribution is conjugate to
the Gaussian.
In order to apply chaining to this problem, we choose the prior as our reference distribution, and then define a set of intermediate distributions based on a parameter λ which governs the effective contribution from the data term, so that

    E(λ, w) = λφ(w) + E₀(w),    (9)

where φ(w) arises from the likelihood term (6) while E₀(w) corresponds to the prior (7). We select a set of 18 values of λ which interpolate between the reference distribution (λ = 0) and the desired model distribution (λ = 1). The evidence for the prior alone is easily evaluated analytically.
4
Gaussian Approximation
As a comparison against the method of chaining, we consider the framework of
MacKay (1992) based on a local Gaussian approximation to the posterior distribution. This approach makes use of the evidence approximation in which the integration over hyper-parameters is approximated by setting them to specific values
which are themselves determined by maximizing their evidence functions.
This leads to a hierarchical treatment as follows. At the lowest level, the maximum ŵ of the posterior distribution over weights is found for fixed values of the hyperparameters by minimizing the error function. Periodically the hyper-parameters are
re-estimated by evidence maximization, where the evidence is obtained analytically
using the Gaussian approximation. This gives the following re-estimation formulae
    α_k = γ_k / (ŵ_k^T ŵ_k),    1/β = (1/(N − γ)) Σ_n ||y(x_n; ŵ) − t_n||²,    (10)

where γ_k = W_k − α_k Tr_k(A⁻¹), W_k is the total number of parameters in group k, A = ∇∇E(ŵ), γ = Σ_k γ_k, and Tr_k(·) denotes the trace over the kth group of parameters. The weights are updated in an inner loop by minimizing the error function using a conjugate gradient optimizer, while the hyper-parameters are periodically re-estimated using (10)¹.
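In outline, the alternating scheme looks as follows (a sketch assuming generic minimize, hessian and residuals routines; groups lists the weight indices belonging to each group k):

    import numpy as np

    def evidence_framework(w, groups, minimize, hessian, residuals, N, n_outer=10):
        # Alternate weight optimization with the re-estimation formulae (10).
        alpha, beta = np.ones(len(groups)), 1.0
        for _ in range(n_outer):
            w = minimize(w, alpha, beta)              # inner conjugate-gradient loop
            A_inv = np.linalg.inv(hessian(w, alpha, beta))
            gamma = np.array([len(g) - alpha[k] * np.trace(A_inv[np.ix_(g, g)])
                              for k, g in enumerate(groups)])
            alpha = gamma / np.array([w[g] @ w[g] for g in groups])  # alpha_k update
            beta = (N - gamma.sum()) / np.sum(residuals(w) ** 2)     # beta update
        return w, alpha, beta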
Once training is complete, the model evidence is evaluated by making a Gaussian
approximation around the converged values of the hyper-parameters, and integrating over this distribution analytically. This gives the model log evidence as
    ln p(D|M) = −E(ŵ) − ½ ln|A| + ½ Σ_k W_k ln α_k + (N/2) ln β + ln h! + 2 ln h + ½ Σ_k ln(2/γ_k) + ½ ln(2/(N − γ)).    (11)
1 Note that we are assuming that the hyper-priors (8) are sufficiently broad that they
have no effect on the location of the evidence maximum and can therefore be neglected.
Here h is the number of hidden units, and the terms ln h! + 2 ln h take account of
the many equivalent modes of the posterior distribution arising from sign-flip and
hidden unit interchange symmetries in the network model. A derivation of these
results can be found in Bishop (1995; pages 434-436).
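Expression (11) translates directly into code; a short sketch (slogdet gives a stable log-determinant, lgamma(h + 1) gives ln h!):

    import numpy as np
    from math import lgamma, log

    def log_model_evidence(E_w, A, alpha, W, beta, N, gamma, h):
        # Gaussian-approximation log evidence of equation (11) for one mode.
        _, logdet_A = np.linalg.slogdet(A)
        return (-E_w - 0.5 * logdet_A
                + 0.5 * sum(W[k] * log(alpha[k]) for k in range(len(W)))
                + 0.5 * N * log(beta)
                + lgamma(h + 1) + 2.0 * log(h)        # symmetry factor ln h! + 2 ln h
                + 0.5 * sum(log(2.0 / g) for g in gamma)
                + 0.5 * log(2.0 / (N - sum(gamma))))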
The result (11) corresponds to a single mode of the distribution. If we initialize
the weight optimization algorithm with different random values we can find distinct
solutions. In order to compute an overall evidence for the particular network model
with a given number of hidden units, we make the assumption that we have found all
of the distinct modes of the posterior distribution precisely once each, and then sum
the evidences to arrive at the total model evidence. This neglects the possibility that
some of the solutions found are related by symmetry transformations (and therefore
already taken into account) or that we have missed important modes. While some
attempt could be made to detect degenerate solutions, it will be difficult to do much
better than the above within the framework of the Gaussian approximation.
5
Results: Robot Arm Problem
As an illustration of the evaluation of model evidence for a larger-scale problem
we consider the modelling of the forward kinematics for a two-link robot arm in a
two-dimensional space, as introduced by MacKay (1992). This problem was chosen
as MacKay reports good results in using the Gaussian approximation framework to
evaluate the evidences, and provides a good opportunity for comparison with the
chaining approach. The task is to learn the mapping (x₁, x₂) → (y₁, y₂) given by

    y₁ = 2.0 cos x₁ + 1.3 cos(x₁ + x₂),    y₂ = 2.0 sin x₁ + 1.3 sin(x₁ + x₂),

where the data set consists of 200 input-output pairs with outputs corrupted by
zero mean Gaussian noise with standard deviation σ = 0.05. We have used the
original training data of MacKay, but generated our own test set of 1000 points
using the same prescription. The evidence is evaluated using both chaining and the
Gaussian approximation, for networks with various numbers of hidden units.
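The data set can be regenerated from this prescription together with the mapping as reconstructed above; a sketch (the input sampling intervals are our assumption, they are not stated in this excerpt):

    import numpy as np

    def robot_arm_data(n=200, seed=0):
        # Two-link robot arm forward kinematics with additive Gaussian noise.
        rng = np.random.default_rng(seed)
        x1 = rng.uniform(-1.9, 1.9, n)    # assumed input ranges
        x2 = rng.uniform(0.3, 3.1, n)
        y1 = 2.0 * np.cos(x1) + 1.3 * np.cos(x1 + x2)
        y2 = 2.0 * np.sin(x1) + 1.3 * np.sin(x1 + x2)
        noise = 0.05 * rng.standard_normal((n, 2))
        return np.stack([x1, x2], axis=1), np.stack([y1, y2], axis=1) + noise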
In the chaining method, the particular form of the gamma priors for the precision
variables are as follows: for the input-to-hidden weights and hidden-unit biases,
ω = 1, s = 0.2; for the hidden-to-output weights, ω = h, s = 0.2; for the output
biases, ω = 0.2, s = 1. The noise level hyper-parameters were ω = 400, s = 0.2.
These settings follow closely those used by Neal (1996) for the same problem. The
hidden-to-output precision scaling was chosen by Neal such that the limit of an
infinite number of hidden units is well defined and corresponds to a Gaussian process
prior. For each evidence ratio in the chain, the first 100 samples from the hybrid
Monte Carlo run, obtained with a trajectory length of 50 leapfrog iterations, are
omitted to give the algorithm a chance to reach the equilibrium distribution. The
next 600 samples are obtained using a trajectory length of 300 and are used to
evaluate the evidence ratio.
In Figure 1 (a) we show the error values of the sampling stage for 24 hidden units,
where we see that the errors are largely uncorrelated, as required for effective Monte
Carlo sampling. In Figure 1 (b), we plot the values of ln{p(D|M_i)/p(D|M_{i−1})} against λ_i, i = 1...18. Note that there is a large change in the evidence ratios at the beginning of the chain, where we sample close to the reference distribution. For this
[Figure 1: (a) error trace over 600 successive samples; (b) evidence-ratio values plotted against λ from 0 to 1.]
Figure 1: (a) error E(λ = 0.6, w) for h = 24, plotted for 600 successive Monte Carlo samples. (b) Values of the ratio ln{p(D|M_i)/p(D|M_{i−1})} for i = 1, ..., 18 for h = 24.
reason, we choose the λ_i to be dense close to λ = 0. We are currently researching
more principled approaches to the partitioning selection. Figure 2 (a) shows the
log model evidence against the number of hidden units. Note that the chaining
approach is computationally expensive: for h=24, a complete chain takes 48 hours
in a Matlab implementation running on a Silicon Graphics Challenge L.
We see that there is no decline in the evidence as the number of hidden units
grows. Correspondingly, in Figure 2 (b), we see that the test error performance
does not degrade as the number of hidden units increases. This indicates that there
is no over-fitting with increasing model complexity, in accordance with Bayesian
expectations.
The corresponding results from the Gaussian approximation approach are shown in
Figure 3. We see that there is a characteristic 'Occam hill' whereby the evidence
shows a peak at around h = 12, with a strong decrease for smaller values of h
and a slower decrease for larger values. The corresponding test set errors similarly
show a minimum at around h = 12, indicating that the Gaussian approximation is
becoming increasingly inaccurate for more complex models.
6
Discussion
We have seen that the use of chaining allows the effective evaluation of model
evidences for neural networks using Monte Carlo techniques. In particular, we find
that there is no peak in the model evidence, or the corresponding test set error,
as the number of hidden units is increased, and so there is no indication of overfitting. This is in accord with the expectation that model complexity should not be
limited by the size of the data set, and is in marked contrast to the conventional
[Figure 2: (a) log evidence and (b) test error plotted against the number of hidden units h.]
Figure 2: (a) Plot of ln p(D|M) for different numbers of hidden units. (b) Test error against the number of hidden units. Here the theoretical minimum value is 1.0. For h = 64 the test error is 1.11.
[Figure 3: (a) model evidence and (b) test set error plotted against the number of hidden units h = 5 to 25.]
Figure 3: (a) Plot of the model evidence for the robot arm problem versus the number
of hidden units, using the Gaussian approximation framework. This clearly shows the
characteristic 'Occam hill' shape. Note that the evidence is computed up to an additive
constant, and so the origin of the vertical axis has no significance. (b) Corresponding plot
of the test set error versus the number of hidden units. Individual points correspond to
particular modes of the posterior weight distribution, while the line shows the mean test
set error for each value of h.
maximum likelihood viewpoint. It is also consistent with the result that, in the
limit of an infinite number of hidden units, the prior over network weights leads to
a well-defined Gaussian prior over functions (Williams, 1997).
An important advantage of being able to make accurate evaluations of the model
evidence is the ability to compare quite distinct kinds of model, for example radial
basis function networks and multi-layer perceptrons. This can be done either by
chaining both models back to a common reference model, or by evaluating normalized model evidences explicitly.
Acknowledgements
We would like to thank Chris Williams and Alastair Bruce for a number of useful
discussions. This work was supported by EPSRC grant GR/J75425: Novel Developments in Learning Theory for Neural Networks.
References
Bishop, C. M. (1995). Neural Networks for Pattern Recognition. Oxford University Press.
Duane, S., A. D. Kennedy, B. J. Pendleton, and D. Roweth (1987). Hybrid Monte Carlo.
Physics Letters B 195 (2), 216-222.
Gilks, W. R., S. Richardson, and D. J. Spiegelhalter (1995). Markov Chain Monte Carlo
in Practice. Chapman and Hall.
Kass, R. E. and A. E. Raftery (1995). Bayes factors. J. Am. Statist. Ass. 90, 773-795.
MacKay, D. J. C. (1992). A practical Bayesian framework for back-propagation networks. Neural Computation 4 (3), 448-472.
Neal, R. M. (1993). Probabilistic inference using Markov chain Monte Carlo methods.
Technical Report CRG-TR-93-1, Department of Computer Science, University of
Toronto, Canada.
Neal, R. M. (1996). Bayesian Learning for Neural Networks. Springer. Lecture Notes in
Statistics 118.
Williams, C. K. I. (1997). Computing with infinite networks. This volume.
Multi-Grid Methods for Reinforcement
Learning in Controlled Diffusion Processes
Stephan Pareigis
stp@numerik.uni-kiel.de
Lehrstuhl Praktische Mathematik
Christian-Albrechts-Universität Kiel
Kiel, Germany
Abstract
Reinforcement learning methods for discrete and semi-Markov decision problems such as Real-Time Dynamic Programming can
be generalized for Controlled Diffusion Processes. The optimal
control problem reduces to a boundary value problem for a fully
nonlinear second-order elliptic differential equation of Hamilton-Jacobi-Bellman (HJB-) type. Numerical analysis provides multigrid methods for this kind of equation. In the case of Learning Control, however, the systems of equations on the various grid-levels are
obtained using observed information (transitions and local cost).
To ensure consistency, special attention needs to be directed toward the type of time and space discretization during the observation. An algorithm for multi-grid observation is proposed. The
multi-grid algorithm is demonstrated on a simple queuing problem.
1 Introduction
Controlled Diffusion Processes (CDP) are the analogy to Markov Decision Problems
in continuous state space and continuous time. A CDP can always be discretized in
state space and time and thus reduced to a Markov Decision Problem. Algorithms
like Q-learning and RTDP as described in [1] can then be applied to produce controls
or optimal value functions for a fixed discretization.
Problems arise when the discretization needs to be refined, or when multi-grid
information needs to be extracted to accelerate the algorithm. The relation of
time to state space discretization parameters is crucial in both cases. Therefore
a mathematical model of the discretized process is introduced, which reflects the
properties of the converged empirical process. In this model, transition probabilities
of the discrete process can be expressed in terms of the transition probabilities of
the continuous process. Recent results in numerical methods for stochastic control
problems in continuous time can be applied to give assumptions that guarantee a
local consistency condition which is needed for convergence. The same assumptions
allow application of multi-grid methods.
In section 2 Controlled Diffusion Processes are introduced. A model for the discretized process is suggested in section 3 and the main theorem is stated. Section 4
presents an algorithm for multi-grid observation according to the results in the preceding section. Section 5 shows an application of multi-grid techniques for observed
processes.
2 Controlled Diffusion Processes
Consider a Controlled Diffusion Process (CDP) ξ(t) in some bounded domain O ⊂ ℝ^n fulfilling the diffusion equation

dξ(t) = b(ξ(t), u(t)) dt + σ(ξ(t)) dw.    (1)
The control u(t) takes values in some finite set U. The immediate reinforcement (cost) for state ξ(t) and control u(t) is

r(t) = r(ξ(t), u(t)).    (2)
The control objective is to find a feedback control law

u(t) = u(ξ(t)),    (3)
that minimizes the total discounted cost

J(x, u) = E_x^u ∫_0^∞ e^{−βt} r(ξ(t), u(t)) dt,    (4)

where E_x^u is the expectation starting in x ∈ O and applying the control law u(·). β > 0 is the discount.
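For intuition, (1) and (4) can be approximated by an Euler-Maruyama scheme with Monte Carlo averaging over sample paths. The following is a minimal sketch; the drift, diffusion, cost, and threshold policy at the bottom are placeholder choices for illustration, not taken from the paper.

```python
import numpy as np

def estimate_J(x0, u, b, sigma, r, beta=0.5, dt=1e-3, T=20.0, n_paths=200, seed=0):
    """Monte Carlo estimate of the discounted cost (4), truncated at horizon T,
    with Euler-Maruyama steps for the diffusion (1)."""
    rng = np.random.default_rng(seed)
    x = np.tile(np.asarray(x0, dtype=float), (n_paths, 1))
    J, discount = np.zeros(n_paths), 1.0
    for _ in range(int(T / dt)):
        a = u(x)                                   # feedback control law (3)
        dw = rng.normal(scale=np.sqrt(dt), size=x.shape)
        J += discount * r(x, a) * dt               # accumulate e^{-beta t} r dt
        x += b(x, a) * dt + sigma(x) * dw          # d xi = b dt + sigma dw
        discount *= np.exp(-beta * dt)
    return J.mean()

# Placeholder 1-D example: the control picks which fixed point the drift targets.
b     = lambda x, a: np.where(a == 0, -x, 1.0 - x)
sigma = lambda x: 0.2 * np.ones_like(x)
r     = lambda x, a: (x ** 2).sum(axis=1)
u     = lambda x: (x[:, :1] > 0.5).astype(float)   # simple threshold policy
print(estimate_J([0.3], u, b, sigma, r))
```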
The transition probabilities of the CDP are given for any initial state x ∈ O and subset A ⊂ O by the stochastic kernels

P_t^u(x, A) := prob{ξ(t) ∈ A | ξ(0) = x, u}.    (5)
It is known that the kernels have the properties

∫ (y − x) P_t^u(x, dy) = t · b(x, u) + o(t),    (6)
∫ (y − x)(y − x)^T P_t^u(x, dy) = t · σ(x)σ(x)^T + o(t).    (7)
For the optimal control it is sufficient to calculate the optimal value function V : O → ℝ,

V(x) := inf_{u(·)} J(x, u).    (8)
Under appropriate smoothness assumptions V is a solution of the Hamilton-Jacobi-Bellman (HJB-) equation

min_{a∈U} { L^a V(x) − βV(x) + r(x, a) } = 0,    x ∈ O.    (9)
Let a(x) = σ(x)σ(x)^T be the diffusion matrix; then L^a, a ∈ U, is defined as the elliptic differential operator

L^a := Σ_{i,j=1}^n a_{ij}(x) ∂_{x_i}∂_{x_j} + Σ_{i=1}^n b_i(x, a) ∂_{x_i}.    (10)

3 A Model for Observed CDP's
Let O_{h_i} be the centers of cells of a cell-centered grid on O with cell sizes h_0, h_1 = h_0/2, h_2 = h_1/2, .... For any x ∈ O_{h_i} we shall denote by A(x) the cell of x. Let Δt > 0 be a parameter for the time discretization.
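As a concrete illustration of this cell structure, the cell A(x) containing a point and the cell centers making up O_{h_i} can be computed by integer division; the unit square as domain and the value of h_0 are assumptions of this sketch.

```python
import numpy as np

def cell_index(x, level, h0=0.25):
    """Index of the cell A(x) on grid level i, cell size h_i = h0 / 2**i,
    for a cell-centered grid on O = [0, 1]^n."""
    h = h0 / 2 ** level
    n_cells = int(round(1.0 / h))
    return tuple(np.minimum((np.asarray(x) / h).astype(int), n_cells - 1))

def cell_center(idx, level, h0=0.25):
    """Center of the cell with the given index; these centers form O_{h_i}."""
    return (np.asarray(idx) + 0.5) * (h0 / 2 ** level)

print(cell_index([0.7, 0.2], level=2))   # -> (11, 3)
print(cell_center((11, 3), level=2))     # -> [0.71875 0.21875]
```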
Figure 1: The picture depicts three cell-centered grid levels and the trajectory of a diffusion process. The approximating value function is represented locally constant on each cell. The triangles on the path denote the position of the diffusion at sample times 0, Δt, 2Δt, 3Δt, .... Transitions between respective cells are then counted in matrices Q_i^a, for each control a and grid i.
By counting the transitions between cells and calculating the empirical probabilities
as defined in (20) we obtain empirical processes on every grid. By the law of large numbers the empirical processes will converge towards observed CDPs as
subsequently defined.
Definition 1 An observed process ξ_{h_i,Δt_i}(t) is a Controlled Markov Chain (i.e. discrete state-space and discrete time) on O_{h_i} with interpolation time Δt_i and the transition probabilities

prob{ξ(Δt_i) ∈ A(y) | ξ(0) ∈ A(x), u} = (1/h_i^n) ∫_{A(x)} P_{Δt_i}^u(z, A(y)) dz,    (11)

where x, y ∈ O_{h_i} and ξ(t) is a solution of (1). Also define the observed reinforcement ρ as    (12)
On every grid O_{h_i} the respective process ξ_{h_i,Δt_i} has its own value function V_{h_i,Δt_i}. By Theorem 10.4.1 in Kushner, Dupuis ([5], 1992) it holds that

V_{h_i,Δt_i}(x) → V(x) for all x ∈ O,    (13)
if the following local consistency conditions hold.
Definition 2 Let Δξ_{h,Δt} = ξ_{h,Δt}(Δt) − ξ_{h,Δt}(0). ξ_{h,Δt} is called locally consistent to a solution ξ(·) of (1), iff

E_x^a Δξ_{h,Δt} = b(x, a) Δt + o(Δt),    (14)
E_x^a [Δξ_{h,Δt} − E_x^a Δξ_{h,Δt}][Δξ_{h,Δt} − E_x^a Δξ_{h,Δt}]^T = a(x) Δt + o(Δt),    (15)
sup_n |Δξ_{h,Δt}(nΔt)| → 0 as h → 0.    (16)
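Conditions (14) and (15) can be checked numerically by sampling one-step increments of a discretized process and comparing their mean and covariance against b(x, a)Δt and a(x)Δt. A minimal sketch; the constant drift and identity diffusion at the bottom are assumed placeholder dynamics.

```python
import numpy as np

def check_local_consistency(step, x, a, b, a_mat, dt, n=100_000, seed=0):
    """Estimate mean and covariance of increments Delta xi_{h,dt} and report
    their deviation from b(x,a)*dt and a(x)*dt, scaled by 1/dt (should be
    small and shrink as dt -> 0 if (14)-(15) hold)."""
    rng = np.random.default_rng(seed)
    x0 = np.tile(np.asarray(x, dtype=float), (n, 1))
    dxi = step(x0, a, dt, rng) - x0
    mean_err = np.linalg.norm(dxi.mean(axis=0) - b(x, a) * dt)
    cov_err = np.linalg.norm(np.cov(dxi.T) - a_mat(x) * dt)
    return mean_err / dt, cov_err / dt

# Placeholder dynamics: constant drift (1, -1), identity diffusion matrix.
b = lambda x, a: np.array([1.0, -1.0])
a_mat = lambda x: np.eye(2)
step = lambda x, a, dt, rng: x + b(x, a) * dt + rng.normal(scale=np.sqrt(dt), size=x.shape)
print(check_local_consistency(step, [0.5, 0.5], 0, b, a_mat, dt=1e-2))
```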
To verify these conditions for the observed CDP, the expectation and variance can be calculated. For the expectation we get

Σ_{y∈O_{h_i}} P_{h_i,Δt_i}(x, y)(y − x) = (1/h_i^n) Σ_{y∈O_{h_i}} ∫_{A(x)} (y − x) P_{Δt_i}^u(z, A(y)) dz.    (17)
Recalling properties (6) and (7) and doing a similar calculation for the variance we
obtain the following theorem.
Theorem 3 For observed CDPs ξ_{h_i,Δt_i} let h_i and Δt_i be such that    (18)

Furthermore, ξ_{h_i,Δt_i} shall be truncated at some radius R, such that R → 0 for h_i → 0 and expectation and variance of the truncated process differ only in the order o(Δt) from expectation and variance of ξ_{h_i,Δt_i}. Then the observed processes ξ_{h_i,Δt_i} truncated at R are locally consistent to the diffusion process ξ(·) and therefore the value functions V_{h_i,Δt_i} converge to the value function V.
4 Identification by Multi-Grid Observation
The condition in Theorem 3 provides information as to how to choose parameters in the algorithm with empirical data. Choose discretization values h_0, Δt_0 for the coarsest grid O_0. Δt_0 should typically be of order ‖b‖_sup/h_0. Then choose for the finer grids

grid    0      1        2        3        4         5
space   h_0    h_0/2    h_0/4    h_0/8    h_0/16    h_0/32
time    Δt_0   Δt_0/2   Δt_0/2   Δt_0/4   Δt_0/4    Δt_0/8    (19)
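The schedule in (19) — h halved on every level, Δt halved only on every other level — can be generated mechanically; a small sketch with assumed starting values:

```python
def grid_schedule(h0, dt0, levels=6):
    """Discretization parameters per (19): h_i = h0 / 2**i, while dt_i is
    halved on every other refinement only."""
    return [(h0 / 2 ** i, dt0 / 2 ** ((i + 1) // 2)) for i in range(levels)]

print(grid_schedule(h0=0.25, dt0=1.0))
# [(0.25, 1.0), (0.125, 0.5), (0.0625, 0.5), (0.03125, 0.25), ...]
```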
The sequences verify the assumption (18). We may now formulate the algorithm
for Multi-Grid Observation of the CDP ξ(·). Note that only observation is being
carried out. The actual calculation of the value function may be done separately
as described in the next section. The choice of the control is assumed to be done
by a separate controller. Let O_k be the finest grid, i.e. Δt_k and h_k the finest discretizations. Let U_l = U^{Δt_l/Δt_k} = U × ... × U, Δt_l/Δt_k times. Q_l^{a_l} is a |O_l| × |O_l| matrix (a_l ∈ U_l), containing the number of transitions between cells in O_l; R_l^{a_l} is a |O_l|-vector containing the empirical cost for every cell in O_l. The immediate cost is given by the system as r_l = ∫_0^{Δt_l} e^{−βt} r(ξ(t), a_l) dt. T denotes current time.
0. Initialize O_l, Q_l^{a_l}, R_l^{a_l} for all a_l ∈ U_l, l = 0, ..., k
1. repeat {
2.   choose a = a(T) ∈ U and apply a constantly on [T, T + Δt_k)
3.   T := T + Δt_k
4.   for l = 0 to k do {
5.     determine cell x_l ∈ O_l with ξ(T − Δt_l) ∈ A(x_l)
6.     determine cell y_l ∈ O_l with ξ(T) ∈ A(y_l)
7.     if ‖x_k − y_k‖ ≥ R (truncation radius) then goto 2. else
8.     a_l := (a(T − Δt_l), a(T + Δt_k − Δt_l), ..., a(T − Δt_k))
9.     receive immediate cost r_l
10.    Q_l^{a_l}(x_l, y_l) := Q_l^{a_l}(x_l, y_l) + 1
11.    R_l^{a_l}(x_l) := (r_l + R_l^{a_l}(x_l) · Σ_{z∈O_l} Q_l^{a_l}(x_l, z)) / (1 + Σ_{z∈O_l} Q_l^{a_l}(x_l, z))
     } (for-do)
   } (repeat)
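A minimal Python sketch of this observation loop follows. The environment interface (reset / step), the dictionary-of-counters layout, and the per-level truncation test are implementation choices of the sketch, not prescribed by the paper; costs inside a coarse window are discounted at the fine rate as an approximation of r_l.

```python
import numpy as np
from collections import defaultdict

def observe(env, controls, k, h0, dt0, R, beta, n_steps, seed=0):
    """Count cell transitions Q[l] and running-average costs Rc[l] on all
    grid levels l = 0..k while an exploring controller picks the actions."""
    rng = np.random.default_rng(seed)
    dts = [dt0 / 2 ** ((l + 1) // 2) for l in range(k + 1)]   # schedule (19)
    hs = [h0 / 2 ** l for l in range(k + 1)]
    m = [int(round(dts[l] / dts[k])) for l in range(k + 1)]   # dt_l / dt_k
    Q = [defaultdict(int) for _ in range(k + 1)]              # (a_l, x_l, y_l) -> count
    N = [defaultdict(int) for _ in range(k + 1)]              # (a_l, x_l) -> visits
    Rc = [defaultdict(float) for _ in range(k + 1)]           # (a_l, x_l) -> avg cost
    hist, x = [], env.reset()
    for _ in range(n_steps):
        a = controls[rng.integers(len(controls))]             # exploring controller
        x_new, r = env.step(a, dts[k])                        # one fine step of dt_k
        hist.append((x, a, r))
        x = x_new
        for l in range(k + 1):
            if len(hist) < m[l]:
                continue                                      # window not yet full
            x_old = hist[-m[l]][0]
            if np.linalg.norm(np.asarray(x) - np.asarray(x_old)) >= R:
                continue                                      # truncation radius (step 7)
            xl = tuple((np.asarray(x_old) / hs[l]).astype(int))
            yl = tuple((np.asarray(x) / hs[l]).astype(int))
            al = tuple(h[1] for h in hist[-m[l]:])            # composite action a_l
            rl = sum(np.exp(-beta * j * dts[k]) * h[2]        # discounted cost r_l
                     for j, h in enumerate(hist[-m[l]:]))
            Rc[l][al, xl] = (rl + Rc[l][al, xl] * N[l][al, xl]) / (1 + N[l][al, xl])
            Q[l][al, xl, yl] += 1
            N[l][al, xl] += 1
    return Q, N, Rc
```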
Before applying a multi-grid algorithm for the calculation of the value function on
the basis of the observations, one should make sure that every box has at least
some data for every control. Especially in the early stages of learning only the two
coarsest grids O_0, O_1 could be used for computation of the optimal value function
and finer grids may be added (possibly locally) as learning evolves.
5 Application of Multi-Grid Techniques
The identification algorithm produces matrices Q_l^{a_l} containing the number of transitions between boxes in O_l. We will calculate from the matrices Q the transition matrices P by the formula

P_l^{a_l}(x, y) = Q_l^{a_l}(x, y) / (Σ_{z∈O_l} Q_l^{a_l}(x, z)),    x, y ∈ O_l.    (20)

Now we define matrices A and right hand sides f as

A_l^{a_l} := (β_l P_l^{a_l} − I) / Δt_l,    f_l^{a_l} := R_l^{a_l} / Δt_l,    a_l ∈ U_l, l = 0, ..., k,    (21)

where β_l = e^{−βΔt_l}.
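Given the counts Q and averaged costs R for one grid level and one composite action, (20) and (21) are a few lines of numpy; dense matrices are used here for clarity although, as noted below, the matrices are sparse.

```python
import numpy as np

def bellman_system(Q, Rcost, beta, dt):
    """Build P, A, f of (20)-(21) for one grid level and one composite action.
    Q: (n, n) array of transition counts; Rcost: (n,) empirical cell costs."""
    Q = np.asarray(Q, dtype=float)
    visits = Q.sum(axis=1, keepdims=True)
    P = np.divide(Q, visits, out=np.zeros_like(Q), where=visits > 0)
    beta_l = np.exp(-beta * dt)
    A = (beta_l * P - np.eye(Q.shape[0])) / dt     # A = (beta_l P - I) / dt
    f = np.asarray(Rcost, dtype=float) / dt        # f = R / dt
    return P, A, f
```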
The discrete Bellman equation takes the following form

min_{a_l∈U_l} { A_l^{a_l} V_l + f_l^{a_l} } = 0.    (22)
The problem is now in a form to which the multi-grid method due to Hoppe, Bloß ([2], 1989) can be applied. For prolongation and restriction we choose bilinear interpolation and full weighted restriction for cell-centered grids. We point out that for any cell x ∈ O_l only those neighboring cells shall be used for prolongation and restriction for which the minimum in (22) is attained for the same control as the minimizing control in x (see [2], 1989 and [3], 1996 for details). On every grid
O_l the defect in equation (22) is calculated and used for a correction on grid O_{l−1}. As a smoother, nonlinear Gauss-Seidel iteration applied to (22) is used.
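A sketch of the smoother and of the defect computation for (22): each cell is re-solved from the row of whichever control currently attains the minimum; dense matrices and a plain list of composite actions per level are assumptions of the sketch.

```python
import numpy as np

def gauss_seidel_sweep(V, A_list, f_list):
    """One nonlinear Gauss-Seidel sweep for min_a { A^a V + f^a } = 0 (eq. 22).
    For each cell x, every control proposes the value solving its row exactly;
    the cell takes the minimal proposal (the diagonal of A is negative)."""
    for x in range(V.shape[0]):
        proposals = []
        for A, f in zip(A_list, f_list):
            off_diag = A[x] @ V - A[x, x] * V[x]
            proposals.append(-(f[x] + off_diag) / A[x, x])
        V[x] = min(proposals)
    return V

def defect(V, A_list, f_list):
    """Residual of (22); its restriction to the coarse grid drives the
    coarse-grid correction in the V-cycle."""
    return np.min(np.stack([A @ V + f for A, f in zip(A_list, f_list)]), axis=0)
```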
Our approach differs from the algorithm in Hoppe, Bloß ([2], 1989) in the special form of the matrices A_l^{a_l} in equation (22). The stars are generally larger than nine-point; in fact the stars grow with decreasing h, although the matrices remain sparse. Also, when working with empirical information the relationship between the matrices A_l^{a_l} on the various grids is based on observation of a process, which implies that coarse grid corrections do not always correct the equation of the finest grid (especially in the early stages of learning). However, using the observed transition matrices A_l^{a_l} on the coarse grids saves the computing time which would otherwise be needed to calculate these matrices by the Galerkin product (see Hackbusch [4], 1985).
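For reference, the transfer operators mentioned above have a compact form; the sketch below shows the one-dimensional analog of the cell-centered operators (the paper uses their bilinear two-dimensional versions), with replicated values at the boundary as an assumption.

```python
import numpy as np

def restrict(v_fine):
    """Full-weighted restriction on a 1-D cell-centered grid: each coarse
    cell averages its two child cells."""
    return 0.5 * (v_fine[0::2] + v_fine[1::2])

def prolong(v_coarse):
    """Linear interpolation on a 1-D cell-centered grid: each fine cell mixes
    its parent (weight 3/4) with the nearest other coarse cell (weight 1/4)."""
    padded = np.concatenate(([v_coarse[0]], v_coarse, [v_coarse[-1]]))
    v_fine = np.empty(2 * v_coarse.shape[0])
    v_fine[0::2] = 0.75 * padded[1:-1] + 0.25 * padded[:-2]   # left children
    v_fine[1::2] = 0.75 * padded[1:-1] + 0.25 * padded[2:]    # right children
    return v_fine
```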
6 Simulation with precomputed transitions
Consider a homogeneous server problem with two servers holding data (x_1, x_2) ∈ [0,1] × [0,1]. Two independent data streams arrive, one at each server. A controller has to decide to which server to route. The modeling equation for the stream shall be

dx = b(x, u) dt + σ(x) dw,    u ∈ {1, 2},    (23)

with

b(x, 1) = (−1, 1)^T,    b(x, 2) = (1, −1)^T,    σ a constant diagonal matrix.    (24)
The boundaries at x_1 = 0 and x_2 = 0 are reflecting. The exceeding data on either server x_1, x_2 > 1 is rejected from the system and penalized with g(x_1, 1) = g(1, x_2) = 10, g = 0 otherwise. The objective of the control policy shall be to minimize

E ∫_0^∞ e^{−βt} (x_1(t) + x_2(t) + g(x_1, x_2)) dt.    (25)
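This model can be wrapped as a small environment compatible with the observation loop sketched in section 4. The drift entries follow the reconstruction in (24) and, like the diffusion strength, are assumptions of the sketch rather than the paper's exact values.

```python
import numpy as np

class ServerEnv:
    """Sketch of the two-server model (23)-(25): reflecting boundaries at 0,
    rejection plus penalty above 1, running cost x1 + x2."""
    def __init__(self, sigma=0.1, penalty=10.0, seed=0):
        self.sigma, self.penalty = sigma, penalty
        self.rng = np.random.default_rng(seed)

    def reset(self):
        self.x = np.array([0.5, 0.5])
        return self.x.copy()

    def step(self, a, dt):
        b = np.array([-1.0, 1.0]) if a == 1 else np.array([1.0, -1.0])
        self.x += b * dt + self.sigma * self.rng.normal(scale=np.sqrt(dt), size=2)
        self.x = np.abs(self.x)                      # reflect at x_i = 0
        over = self.x > 1.0
        cost = (self.x.sum() + self.penalty * over.any()) * dt
        self.x[over] = 1.0                           # exceeding data is rejected
        return self.x.copy(), cost
```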
The plots of the value function show that in case of high load (i.e. x_1, x_2 close to 1) a maximum of cost is assumed. Therefore it is cheaper to overload a server and pay the penalty than to stay close to the diagonal, as is optimal in the low load case.

For simulation we used precomputed (i.e. converged heuristic) transition probabilities to test the multi-grid performance. The discount β was set to 0.7. The multi-grid algorithm reduces the error in each iteration by a factor 0.21, using 5 grid levels and a V-cycle and two smoothing iterations on the coarsest grid. For comparison, the iteration on the finest grid converges with a reduction factor 0.63.
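Such reduction factors can be measured as the geometric mean of successive error-norm ratios; a small sketch:

```python
import numpy as np

def reduction_factor(error_norms):
    """Average per-iteration reduction: geometric mean of ||e_{i+1}|| / ||e_i||."""
    e = np.asarray(error_norms, dtype=float)
    return (e[-1] / e[0]) ** (1.0 / (len(e) - 1))

print(reduction_factor([1.0, 0.21, 0.044, 0.0093]))  # ~0.21, as for the V-cycle
```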
7 Discussion
We have given a condition for sampling controlled diffusion processes such that
the value functions will converge while the discretization tends to zero. Rigorous
numerical methods can now be applied to reinforcement learning algorithms in
continuous time and continuous state space, as is demonstrated with a multi-grid algorithm
for the HJB-equation. Ongoing work is directed towards adaptive grid refinement
algorithms and application to systems that include hysteresis.
Figure 2: Contour plots of the predicted reward in a homogeneous server problem with nonlinear costs are shown on different grid levels. On the coarsest 4 × 4 grid a sampling rate of one second is used with 9-point-star transition matrices. At the finest grid (64 × 64) a sampling rate of 1/4 second is used with observation on 81-point-stars. Inside the egg-shaped area the value function assumes its maximum.
References
[1] A. Barto, S. Bradtke, S. Singh. Learning to Act using Real-Time Dynamic Programming, AI Journal on Computational Theories of Interaction and Agency, 1993.
[2] M. Bloß and R. Hoppe. Numerical Computation of the Value Function of Optimally Controlled Stochastic Switching Processes by Multi-Grid Techniques, Numer. Funct. Anal. Optim. 10(3+4), 275-304, 1989.
[3] S. Pareigis. Lernen der Lösung der Bellman-Gleichung durch Beobachtung von kontinuierlichen Prozessen, PhD Thesis, 1996.
[4] W. Hackbusch. Multi-Grid Methods and Applications, Springer-Verlag, 1985.
[5] H. Kushner and P. Dupuis. Numerical Methods for Stochastic Control Problems in Continuous Time, Springer-Verlag, 1992.